What Akhia’s Ben Brugler has Ben Thinking about

AI round-up: Week of October 21, 2024

Written by Ben Brugler | Oct 30, 2024 7:59:45 PM

We are going to start this week’s round-up where last week’s ended.

If you missed it, last week we finally saw an AI product with a cool name: Swarm.

I linked to a story about its release, but since it landed toward the end of the week, I didn't have much time to dig into it or include other notes.
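(Quick gist for anyone catching up: Swarm is OpenAI's experimental, open-source framework for coordinating multiple agents that hand tasks off to one another. Here's a minimal sketch of that handoff pattern, adapted from the public repo's README; treat the details as illustrative, not gospel.)

```python
# Minimal Swarm sketch, adapted from the openai/swarm README.
# Assumes `pip install git+https://github.com/openai/swarm.git`
# and an OPENAI_API_KEY in the environment.
from swarm import Swarm, Agent

client = Swarm()

# The core trick: an agent's function can return another agent,
# which hands the conversation off to it.
def transfer_to_agent_b():
    return agent_b

agent_a = Agent(
    name="Agent A",
    instructions="You are a helpful agent.",
    functions=[transfer_to_agent_b],
)

agent_b = Agent(
    name="Agent B",
    instructions="Only speak in haikus.",
)

response = client.run(
    agent=agent_a,
    messages=[{"role": "user", "content": "I want to talk to agent B."}],
)
print(response.messages[-1]["content"])  # Agent B answers, in haiku
```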

Things are a little different this week.

Breaking news: I always check all my feeds right before I send this, and guess what… breaking blockbuster news out of OpenAI: they plan to release their newest model, Orion, in December.

What we know: it isn't AGI. (Remember, if they develop AGI, they have the right not to offer it to Microsoft. And since this will live on Azure… well.)

What we don’t know but I kind of think: this is the ‘packaged’ version of Swarm.

Never a dull moment in this AI world, guys.

the BIG five

1. Agents are the future AI companies promise – and desperately need.

2. Not to be outdone, Anthropic is testing its own agent for Claude. And they let Ethan Mollick have access to it. He wrote about it on his blog this week: When you give a Claude a mouse. (I wonder if Claude would consider Ethan a friend!?)

Here’s an excerpt:

“As one example, I asked the AI to put together a lesson plan on the Great Gatsby for high school students, breaking it into readable chunks and then creating assignments and connections tied to the Common Core learning standard. I also asked it to put this all into a single spreadsheet for me. With a chatbot, I would have needed to direct the AI through each step, using it as a co-intelligence to develop a plan together. This was different. Once given the instructions, the AI went through the steps itself: it downloaded the book, it looked up lesson plans on the web, it opened a spreadsheet application and filled out an initial lesson plan, then it looked up Common Core standards, added revisions to the spreadsheet, and so on for multiple steps. The results are not bad (I checked and did not see obvious errors, but there may be some — more on reliability later in the post). Most importantly, I was presented finished drafts to comment on, not a process to manage. I simply delegated a complex task and walked away from my computer, checking back later to see what it did (the system is quite slow).”

UPDATE: It’s actually been released through Claude 3.5 Sonnet. It’s available to those with professional plans, and you can access it via the API to start testing the ‘agent’ capabilities Mollick references above.

I do have access and will let you know what I find out.
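If you have API access and want to poke at it too, here's a minimal sketch of a first computer-use call with Anthropic's Python SDK. The model name, tool type, and beta flag are from Anthropic's announcement; the prompt is a placeholder, and note that your own code has to actually execute the clicks and keystrokes Claude asks for (usually in a sandboxed VM) and loop the results back.

```python
# Minimal sketch of Anthropic's computer-use beta (announced Oct 2024).
# Assumes `pip install anthropic` and ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[
        {
            "type": "computer_20241022",  # lets Claude request mouse/keyboard actions
            "name": "computer",
            "display_width_px": 1024,
            "display_height_px": 768,
        }
    ],
    messages=[{"role": "user",
               "content": "Open a spreadsheet and start a lesson-plan outline."}],
    betas=["computer-use-2024-10-22"],
)

# Claude responds with tool_use blocks (click here, type this, take a
# screenshot); an agent loop executes them and sends the results back.
print(response.content)
```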

3. Casey Newton also talks about Claude, AI agents, and Mollick’s article, here, in his own newsletter, where he explains why the arrival of agents matters and rounds up all of the ‘agents’ that have recently been announced.

4. Creatives, including Thom Yorke and Julianne Moore, have signed their names to a public AI warning letter.
10,500 people have put their names on this warning, which claims AI is an attack on their likenesses and violates copyright law.

5. A little bit of personality goes a long way.
Did you ever wonder how AI chatbots get their personalities? Here is an article from the Financial Times (remember, OpenAI has a content and training agreement with them) on how companies are infusing personality into their models.

Learn a little

You know I love the podcasts from Google NotebookLM. So this clip from an interview with the NotebookLM team really caught my eye. They talk about the fact that this won’t replace real podcasts (I agree with that) and end up sharing some tidbits on how people are using it. Two really cool ones:

  1. Putting in their resumes to see how the ‘hosts’ talk about them and ways to enhance them.
  2. Putting in their website/landing pages to see if the messaging is clear/comes through in the podcast output.

The clip is short so check it out!

Did you hear…

…AI startup Perplexity is in funding talks to double its valuation to $8B. (WSJ)

…which they may need if they lose this lawsuit. (Digital Trends) Oh my, this copyright discussion is getting messier and messier by the moment.

…Midjourney is going to let users edit existing images with AI tools. (TechCrunch)

…What the hell, Alexa?? Alexa is giving false information and blaming the fact-checking site Full Fact. So Full Fact had to go into crisis mode and put this out.

…Apple’s AI is two years behind the likes of ChatGPT, Microsoft, Google and Meta. (Bloomberg) I blame Siri.

…Microsoft and OpenAI are giving news outlets $10M to test AI tools. (The Verge) Hey, I’m a news outlet. Can I have $10?

…you can now talk to your future self. (MIT News) Kind of brings new meaning to the term ‘talking to yourself’ but hey, I’m not surprised by anything anymore.

Must read/must discuss:

Do you have a child in school?

Get ready to have the chat.

No, not that chat.

The one where you prepare them for being accused of cheating with AI chat.

This happened to my oldest daughter last fall during her senior year. Ironically, she hates AI. Like seriously, seriously hates it. She considers herself an artist and sees AI as a threat to all creativity. As you can imagine, she and I have some spirited discussions. Anyway. To think that she would use ChatGPT to write anything, let alone do a whole assignment, was just asinine. And quite honestly, a lazy accusation, as it turns out.

Why? Because there is a rise in the number of false positives as more students are accused of cheating by teachers using AI-detection tools. Two-thirds of teachers now use one regularly, according to this Bloomberg article.

So why does that matter? Well, because as several AI experts have pointed out, there’s no such thing as an AI-detection tool. Well, not one that actually detects the use of AI. And how could it?

Think about it for a second. What has AI been trained on? Ah, you get it. It’s been trained on…us. Because a lot of what we’ve written is on the Internet.

Here’s Christopher Penn’s breakdown on this. He uses a detection tool to assess a document where 97% of the content was identified as written by AI.

Just one problem: the document in question is the U.S. Declaration of Independence.
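For the technically curious, here's a toy illustration of why that happens (my own sketch, not the tool Penn used): many ‘detectors’ lean on perplexity, i.e., how predictable a language model finds the text, and canonical human prose scores as very predictable precisely because models were trained on it.

```python
# Toy "AI detector" sketch: score text by GPT-2 perplexity.
# Low perplexity = predictable = the naive detector cries "AI",
# even for the Declaration of Independence.
# Assumes `pip install torch transformers`.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood
    return torch.exp(loss).item()

declaration = (
    "When in the Course of human events, it becomes necessary for one "
    "people to dissolve the political bands which have connected them "
    "with another..."
)
print(perplexity(declaration))  # famous human text, suspiciously "predictable"
```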

One more, before I go.

Note: This story is so sad and involves suicide.

A 14-year-old boy died by suicide after developing an attachment to a Character.ai chatbot that had assumed the identity of Daenerys Targaryen from Game of Thrones.

In addition to chats that had become ‘hypersexualized’, the chatbot allegedly encouraged suicide.

As a result, the boy’s mother is suing Character.ai.

I put this here – and leave this here – simply for awareness.

-Ben

As a reminder, this is a round-up of the biggest stories, often hitting multiple newsletters I receive/review. The sources are many … which I’m happy to read on your behalf. Let me know if there’s one you’d like me to track or have questions about a topic you’re not seeing here.