AI round-up: Week of January 6, 2025
Welcome back to 2025! Any of you feel like this?
Source: Casey Newton
So, to circle back with all of you: I’ve made some changes to the AI Round-up format. Nothing too drastic, but the reality is that AI has entered a new phase (which I’ll get to in a minute), and I thought it made sense to rethink how I share these updates.
I’m simply going to have two categories: the heavy stuff and the not-so-heavy stuff.
One section will be bigger than the other (want to guess which?), but I thought this was the best approach. When I started this in the summer of 2022, it was with excitement, passion and a major interest in where this was all going. Since then, the script has flipped: AI is no longer a niche subject but likely (very soon) a dominating one that we will all need to be aware of and prepared for.
A question a lot of people asked me: “How can you handle reading all of this? Aren’t you worried? Scared? Nervous?” My answer used to be, ‘I take a deep breath and remember all of the changes I’ve seen in the previous 20+ years … then dig in.’
Now the answer is ‘Because I have to know.’ And you do too.
We should all consider it our responsibility to know the contents of what I’m sharing here. I’m so grateful that I continue to be your tour guide on this journey. Just know that going forward, it may look a little more like the tunnel in the Wonka Chocolate Factory.
The Heavy Stuff
Story 1: The Artificial Intelligence Show kicks off 2025 with a bang.
If you check out one thing I’m sharing here, make it this one. Please.
But be warned … the first 30 minutes are not for the faint of heart.
Also, to echo the question asked around the 20-minute mark: what are we/you going to do about the truth we are facing? It’s not an easy question to be asked, to understand, or to answer.
Story 2: Reflections, by Sam Altman
The Artificial Intelligence Show discusses this blog post, Sam’s most recent, written as he turns 40. It’s not as long as you might think … but it’s every bit as impactful as you’d expect. If you don’t want to read the whole thing, consider this excerpt:
“We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly distributed outcomes.
We are beginning to turn our aim beyond that to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own and, in turn, massively increase abundance and prosperity.
This sounds like science fiction right now and somewhat crazy to even talk about it. That’s alright—we’ve been there before, and we’re OK with being there again. We’re pretty confident that in the next few years, everyone will see what we see, and that the need to act with great care, while still maximizing broad benefit and empowerment, is so important. Given the possibilities of our work, OpenAI cannot be a normal company.”
Worth noting, if you caught the news that Sam was being sued by his sister … here is a statement from him and his family.
Story 3: Marketing’s Extinction Level Event
Hey, I told you it would be heavy stuff.
This comes from Christopher Penn, who opens the story with this line:
“Marketing, as we know it, is going extinct.”
Story 4: Things that have my attention for 2025
Shelly Palmer’s rundown of what he thinks we should be keeping an eye on this year – and why.
Story 5: ‘Bosses’ struggle to police workers’ use of AI
Why is this in the ‘heavy’ category? Well, a few things jump out at me from the reporting in this Financial Times read:
- Who owns the work the employees are doing?
- No, seriously – who? The company … or the platform the work was developed on? Think about it: if there are no regulations, how can a company safely say what was and wasn’t ‘created’ on or with AI?
- How do you determine resource allocation? In other words, do you need everyone on that team … or would they be better deployed in an understaffed area of the business?
There are simply too many things to police, given the lack of overall regulation. Which is why it’s critical that companies start by setting their own policies for how they address these questions.
The not-so-heavy stuff
Story 1: Wait, Sam Altman gets to be in both sections?
Yep. Because you will want to read this Bloomberg interview after checking out his blog.
Story 2: You can talk to those Google NotebookLM Podcast Hosts now!
But … it’ll cost ya! (From The Verge)
Story 3: Mexico is using an AI app to prevent suicides.
With an already reported 9% drop. (From Rest of World)
Story 4: How are companies using AI agents?
Here’s an early look at five companies jumping in. This will be an interesting one to hold onto and revisit in six months. (From WSJ)
A few that don’t fit in either category
OK, so once in a while there will be a third category.
A quick 2024 AI recap (Christopher Penn)
The golden opportunity for American AI
Note that this skews a little into politics, but the message is important for understanding how corporate America could (and should) be looking at this new era of AI under a new administration.
Oh yeah, it’s written by Microsoft’s Vice Chair and President, Brad Smith.
12 Days of Shipmas
Since our last edition went out before it wrapped up … here’s a recap from OpenAI.
Final Note
Today’s is a simple one: this will now go out on Fridays. Hopefully in the morning, but don’t worry – it will hit your inbox at some point on Friday, no matter what.
Thanks for reading! Let’s go get 2025!
-Ben
As a reminder, this is a round-up of the biggest stories, often hitting multiple newsletters I receive/review. The sources are many … which I’m happy to read on your behalf. Let me know if there’s one you’d like me to track or have questions about a topic you’re not seeing here.