AI round-up: Week of March 10, 2025

Happy Pi day!

We’ve hit issue 10 already (are we that far into the year?), and I thought it would be a good opportunity to take a look at two quotes that have dominated the AI landscape for the last two years.

The first: “AI won’t take your job. Someone who uses AI will take your job.”

This quote drives me nuts. We’ve heard it quite a bit, and for the most part, I feel people have used it as a scare tactic. Listen, there’s plenty to scare us. We don’t need this ominous quote.

So, will AI take our jobs? Yes, some of them. Ask people in tech or customer service if they have concerns. But my bigger issue is the vagueness of ‘someone who uses AI…’ when it comes to who’s left without a chair when the music stops.

I don’t think using it is enough. I think you have to immerse yourself in it. Become an expert in it. And really, there’s only one way to do that. Are you ready for it?

Start using it. All the time. Think AI first. How can you integrate it into what you do to be better at what you do? I can USE AI. In fact, I could use it to create legal documents. That doesn’t mean I will replace a lawyer. However, I do use AI and I am good at what I do in this world of marketing communications (if I do say so myself). So will I replace a bunch of me who don’t use AI? Or who just plop and drop things into AI?

Good question. Guess we’ll see.

The second quote: “This is the worst version of AI you will ever use.”

Ah, an instant classic. A nice, clean, evergreen quote from Ethan Mollick. This will never age. It will always be right. Which is the real kicker – because as amazing as the tools we have now are … they will never be as good as the tools that are to come. With apologies to Toby Keith, the next version of AI will be better than it once was.

Too much of a stretch to work in a song lyric? Maybe.

Let’s get to it.

The Heavy Stuff

Story 1: Understanding what you know. Don’t know. Eh, just read the post.
Christopher Penn is a must-read. In his latest blog, he talks about where we are with generative AI usage and how we’re using it to do what we already know how to do (Lumiere’s Law) when we should be using the Rumsfeld Matrix to dig in and take the next step.

Question: what’s the Rumsfeld Matrix? Well, he explains it in the piece, but it’s a matrix that uses four categories of classification: the known knowns, the known unknowns, the unknown knowns, and the unknown unknowns.
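Laid out as a grid (my quick sketch of the standard framing, not pulled from Penn’s post):

                          You know it        You don’t know it
    You’re aware of it:   known knowns       known unknowns
    You’re unaware of it: unknown knowns     unknown unknowns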

Ya know?

So why does this matrix matter? Well, according to Penn, it’s the best way to identify the ways you could be using generative AI to unlock your next level. (Think back to the opening…what could I be doing better in my field to create true separation?)

Enjoy the journey through the unknowns (and knowns).

Story 2: Paul explains it all.
Last week I ended this newsletter with a link to the Ezra Klein podcast. That episode was ‘The Government Knows AGI is Coming’ (And other scary stories to tell in the dark!) Ok, I made the last part up.

I highly suggest listening to it. However, if you won’t – or don’t have time – I’d definitely recommend this week’s episode of The Artificial Intelligence Show, in which Paul breaks it all down. His take definitely belongs here in ‘The Heavy Stuff’.

I admire Paul’s ability to dissect these headlines and tell us the story of what they mean and why they matter. I’m biased: in addition to being a Northeast Ohio guy, he’s a fellow MAC grad and, more importantly, a journalism major. So he’s coming at this with a burden of responsibility to tell us the facts and help inform us. Do yourself a favor and make the show a weekly must-listen. (But regardless, at least listen to the first 20 minutes of this week’s.)

Story 3: The goal isn’t AGI. It’s to stop others from achieving AGI. (TechCrunch)
If you listen to The Artificial Intelligence Show (linked in the story above), you’ll hear the speculation that ‘hmmm… maybe Ilya Sutskever saw something that concerned him enough to form his own company just so he could develop and sit on dangerous AI tech.’

A stretch? I don’t think so. Especially when you consider the caliber of people who want us to turn our attention away from developing AGI and put it on deterring other countries from achieving it.

Story 4: Manus.
What is Manus? Shelly Palmer explains it well:

“China unleashed Manus—an autonomous AI agent that’s sparking what some observers are calling a genuine “ChatGPT moment.” Manus doesn’t just answer questions or give you a weather forecast—it autonomously executes complex tasks with startling proficiency. Want to build an entire social media plan including creative? Manus will do it, step by step, from initial concept to completion. It’s actually a bit scary.

This isn’t just another iteration of conversational AI; it’s a clear vision of the future, where AI doesn’t just assist—it actually takes the reins and executes real-world tasks autonomously.”

I admittedly get a little nervous when someone like Shelly says something is ‘scary’…

I have not tried Manus, but here is a demo video I’d recommend watching. You can be sure we will be talking more about this.

Side note: Seems Manus has Fortune’s attention:
China’s Autonomous Agent, Manus, Changes Everything.

Story 5: A dozen eggs only cost you $31?
If you are curious about what an AI agent can really do, this piece from The Guardian is one worth checking into. It’s a little longer read, but the reporter takes a deep dive into how she used it, what it did (and didn’t do) and if it’s worth using.

The best example was when she had it do grocery shopping. Some issues with ordering things she doesn’t need or like … and spending $31 on a dozen eggs. (Sounds like a deal!)

She used OpenAI’s Operator for this piece. (Remember, that costs $200/month.)

Story 6: That last story is pretty much obsolete.
Well, that was quick.

Now OpenAI has unveiled its new Responses API, allowing developers to create advanced AI agents capable of tasks such as web browsing, file management, and computer operations. Additionally, OpenAI introduced the Agents SDK, facilitating the orchestration of multiple AI agents to handle complex workflows. These tools aim to streamline the development of autonomous AI systems, enabling businesses to enhance productivity and efficiency across various applications.
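For the hands-on crowd, here’s roughly what that looks like – a minimal sketch assuming the openai-agents Python package, with a hypothetical agent name, instructions, and task:

    # A minimal sketch, assuming the openai-agents Python package
    # (and an OPENAI_API_KEY in your environment). The agent name,
    # instructions, and task below are hypothetical.
    from agents import Agent, Runner, WebSearchTool

    # Define an agent with a role, instructions, and a web-search tool.
    agent = Agent(
        name="News Scout",
        instructions="Browse the web and summarize what you find.",
        tools=[WebSearchTool()],
    )

    # Hand it a task; the SDK orchestrates the model calls and tool use.
    result = Runner.run_sync(agent, "Round up this week's biggest AI stories.")
    print(result.final_output)

The Responses API is the layer underneath; the SDK is the part that lets you chain multiple agents like this one together and hand work off between them.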

I asked ChatGPT if it’s fair to assume that this new technology will allow companies to build their own AI workforce:

“Yes, it’s fair to say that with OpenAI’s new Responses API and Agents SDK, companies now have the tools to build their own AI workforce—AI-powered agents capable of automating complex workflows, handling reasoning tasks, and even operating across different applications. These aren’t just chatbots; they can browse the web, manage files, and execute actions autonomously. It’s a major step toward AI systems that function as independent digital employees rather than passive assistants. The big question now: how soon will businesses trust these agents to handle real, mission-critical work?”

Hey, this is the ‘heavy stuff’ section for a reason.

Story 7: OpenAI calls on the U.S. government to let it freely use copyrighted material for AI training. (MacRumors)
Wow. Just … wow.

The not-so-heavy stuff

Story 1: AI is never ‘the answer’ – it’s ‘a tool’. (TechCrunch)
Mark Cuban knows a thing or two about tech. And while he thinks people should be doing all they can to learn every bit of AI, he subscribes to the ‘human in the loop’ theory.

Story 2: China agrees with Mark Cuban. (Bloomberg)
China is rolling out AI courses in primary and secondary schools.

Story 3: AI better be on your resume! (WSJ)

One more: Guess AI can’t fix everything…
(Note: the Sixers are 22-43)

Daryl Morey on whether he uses AI to make team decisions

A few that don’t fit in either category

Story 1: An LLM dedicated to product manufacturing. (3dprint)
Larry Page (yes, Google Larry Page) is investing in and developing a generative AI tool to help companies in the design phase of product manufacturing. This seems like such a natural evolution of generative AI, but still … pretty wild to think you could basically ‘speak’ a product into existence.

Story 2: Microsoft considering AI models to replace OpenAI in Copilot. (PYMNTS)
It’s also training models that could compete directly with OpenAI.

Story 3: OpenAI says it has trained an AI model that’s “really good” at creative writing.
And isn’t that what creatives truly aspire to? To be “really good”? Now, if we could make a model that’s “really good” at fact-checking… (One report found AI search tools answered more than 60% of queries incorrectly, according to the Columbia Journalism Review.)

Story 4: Should generative AI LLMs have a ‘quit button’? (Ars Technica)
The idea – an LLM could simply ‘quit’ your task if it finds it unpleasant – was floated by Anthropic CEO Dario Amodei during an interview at the Council on Foreign Relations.

Final note:

It’s a quick one…but in talking with a friend yesterday, I realized … authenticity is being attacked. By … generative AI. What am I talking about? Well, my friend mentioned that his new phone comes with Gemini, which is obsessed with ‘fixing’ his communications. Not improving or recommending – fixing. Polishing. Making it too perfect.

Are we losing our sense of self when we use sterile, polished copy to communicate at a human level? Are we choosing ‘fix,’ ‘copy,’ or ‘paste’ over raw, reactive words and phrases?

Today … speak out. Speak up. Write something without autocorrect or editing. Think about what you feel v. what’s expected or recommended. Don’t let the ‘machines’ become the language.

Thanks for reading.

-Ben

As a reminder, this is a round-up of the biggest stories, often hitting multiple newsletters I receive/review. The sources are many … which I’m happy to read on your behalf. Let me know if there’s one you’d like me to track or have questions about a topic you’re not seeing here.