AI round-up: Week of August 26, 2024

Hold the presses. Stop the phone.

Willy Wonka Reverse Quote

There IS so much to get into. WHY? Because … yep … Project Strawberry is back in the news.

And it’s our first big thing.

This link takes you to a LinkedIn post from Paul Roetzer. He’s summing up an article from The Information (if you don’t have a subscription, and it’s a pricey one, they don’t offer much in the way of free paragraphs or articles, so you’ll want to read Paul’s summary).

If you remember, Paul and Mike broke down some things that may have led to Sam Altman leaving OpenAI … Q* (now named Project Strawberry) being one of them. You can go back and listen to Episode 73 (which I have re-listened to now and it hits different, as the kids say).

All this to say … we haven’t stopped talking about Strawberries. Whether it be sundaes, jam, Darryls, daiquiris, shortcakes, fields or AI, you can expect to see more in this space in weeks to come.

Rounding out the big 5

AI Godfather fears regulators are running out of time to take action.

Seemed like a nice way to follow up on that last story, didn't it? When you consider the line from Paul’s Strawberry summary regarding how Washington responded to seeing it in action … yeah, I think we need to get this figured out. (Bloomberg)

Side note: AI Godfather. How we feeling about that title? One you want? Might not want as time goes on? I’m not sure how that title ages.

OpenAI supports a California bill that would require companies to ‘watermark’ AI-generated content.

No, not that bill … this is another AI bill that the AI higher-ups seem to be in favor of. For now. Hmmm. What if they combined the bills? What if this bill became pork in that bill? And if you asked AI to come up with an image for that last sentence, just how disturbing would it be?

Breaking news: SB 1047, the more contested bill, just passed.

All this talk about bills has me wondering how one becomes a law. Go ahead. Click it. You know you want to.

AI can detect illness by scanning an image of your tongue. And it’s 98% accurate.

Say ahhhhh.

Nope.

Well what about this AI tool that ‘listens’ for sickness?

What?

Amazon claims AI saves its coders a crazy amount of time.

If you’re curious, one measurement of crazy is 4,500 years, which is crazy.

Learn a little

Christopher Penn has returned to this section with a doozy. It will take some time, and I encourage you to read it. Let it sit. Experiment. Read it again.

The topic? A real conversation about how AI will impact your SEO.

Did you hear about…

…this Sora competitor, Hotshot, that you can try for free? (Techradar)

…Gemini creating people again? (The Verge)

…the number of classes popping up to teach seniors how to use AI? (AP)

…this AI camera that will do the searching for you? (The Verge)

Must read/must discuss:

Have you seen the (new) It movies? You know how Pennywise says “You’ll float too!”? That’s what kept ringing in my head as I saw a recent post from Shelly Palmer where he says, “ChatGPT will make you hallucinate too.”

It’s just as scary.

I’ll just let him tell you. (The excerpt below is from his newsletter.)

I read this study* before bed last night: "Conversational AI Powered by Large Language Models Amplifies False Memories in Witness Interviews." It scared me more than I thought it would.

The study explores how chatbots powered by LLMs can influence the formation of false memories for users; in other words, they can literally make users hallucinate.

The researchers conducted an experiment where participants watched a video of a crime scene and then interacted with different types of AI systems or completed a survey. They found that AI chatbots were significantly more likely to induce false memories in participants compared to traditional survey methods or no intervention at all.

It gets worse. Not only did AI chatbots create more false memories, but they also increased participants' confidence in these inaccurate recollections. The study revealed that these false memories (and the associated high confidence levels) persisted even after a week.

It's just one study; more work will need to be done to verify the findings. However, misremembering is a well-documented phenomenon.

There are decades of research that highlight the fallibility of human memory and the unreliability of eyewitness testimony. This new study on AI-induced false memories adds another layer of complexity to our understanding of memory formation. It suggests that as AI becomes more integrated into our daily lives, we will need to be even more vigilant about the accuracy of our recollections and the sources of our information.

Ben note: the article he links to is the actual research report he’s referencing. I haven’t had a chance to check it out yet, but I will. It’s a long one.

Thanks for reading!

-Ben

As a reminder, this is a round-up of the biggest stories, often hitting multiple newsletters I receive/review. The sources are many … which I’m happy to read on your behalf. Let me know if there’s one you’d like me to track or have questions about a topic you’re not seeing here.