AI round-up: Week of February 3, 2025

I typically start the round-up off with a lead-in to one of the big stories. Today I want to start with a quote.

“Playing with AI is like playing with Legos. You get to build cool stuff.”

I mean, I don’t think I can put it any better. And thanks to the recent ruling on AI copyright (which we talked about last week), you can copyright those cool things you build as long as you can show you made a meaningful creative contribution to the outcome.

That quote is from our senior content architect and AI council member, Eric Knappenberger. So as you read the stories and the advancements that have absolutely overwhelmed us this past week, remember that quote. Let’s be like Eric – let’s have fun ‘building cool stuff’ with our modern-day Legos.

Let’s get into it.

The Heavy Stuff

Story 1: DeepSeek, again.
Now that the dust has settled (not really, but I feel obligated to say that), we can start to see how DeepSeek could really impact the open-source and enterprise AI markets. Learn more in this Forbes article, which builds on reporting from The New York Times.

Story 2: So … what WILL be the economic impact of AI?
Legit question, right? Bloomberg is asking it, and I’m not sure everyone will like the potential scenarios. Sure, some will – especially the one where AI fuels an economic boom, drives the economy and adds jobs.

Of course, there’s a flip side to that. AI has little to no impact, and we’ve spent a lot of money for nothing. (Trust me, we won’t be seeing that option.)

More dystopian options exist, too. We reach the singularity. Or AI wipes out all the jobs (as Elon Musk has famously predicted), leading to the end of cognitive work.

But don’t worry, that one probably won’t happen for a long, long, lo- Oh… never mind. I see Elon is testing that thinking on the U.S. government (as of this writing). So stay tuned! I think we’ll know the answer to this question sooner than we ever thought we would.

Story 3: Meta says it MAY stop the development of AI programs it deems too risky.
Zuck says Meta defines this as follows:

“…both ‘high-risk’ and ‘critical-risk’ systems are capable of aiding in cybersecurity, chemical, and biological attacks, the difference being that ‘critical-risk’ systems could result in catastrophic outcomes.”

So why is this up for debate exactly? In related news, NASA is saying it MAY stop asteroids that could destroy Earth. Seriously, what am I missing?

Story 4: California bill would make AI companies remind kids that chatbots aren’t people.
“I remember when that was a parent’s job.”

That’s what you might say while shaking your fist at the sky. However, parents are no match for tech companies’ ability to deploy “addictive engagement patterns,” which makes the whole issue of kids on phones, in apps, and on social media a really hard and smelly onion to unpeel. But for now, there are measures being taken to aid parents in their quest to protect kids.

Story 5: So what the hell IS AGI exactly? (The Verge)
The goalpost keeps moving, and now anyone can say what AGI is or isn’t … or when we will or won’t achieve it. Thanks, Sam.

The latest person to make a claim is SoftBank CEO Masayoshi Son, who says we will achieve ‘IT’ much earlier than expected. (You be the judge, but I don’t think he really knows.)

Story 6: AI has invented a new material that’s as strong as steel … but as light as foam.
This sounds like something right out of a Marvel movie. (BGR)

Side note: Over the past few weeks/months … how many times have you said to yourself or a friend, ‘This seems like something out of a movie…’ Admit it. A lot.

The Not-So-Heavy Stuff

Story 1: Are LLMs becoming commoditized? (CNBC)
Maybe. But that just means a faster shift to agentic AI.

Story 2: The AI-restored Beatles song won the Grammy for Best Rock Performance.
Huh? Why? The song isn’t that good, and I think we need to revisit how we classify rock. But, hey, AI was used soooo…. yay!

Story 3: Remember Devin? The world’s first AI software engineer? He sucks.
Plus … is it even needed, given how much the LLMs can do for you in this area?

Story 4: I guess Hollywood isn’t quite ready to jump into bed with OpenAI.
Data privacy and upset unions are the two biggest reasons.

Story 5: The new AI-enabled Alexa is almost here.
For real, this time. (Reuters)

Story 6: OpenAI rebranded.
Looks almost the same. And still has dumb product names. Other than that, it’s a whole new OpenAI!!! (Wallpaper)

A Few That Don’t Fit in Either Category

Story 1: Why is this CEO bragging about replacing humans with AI?
Remember Klarna? The company that was going to replace customer service with AI? Well, now it’s more than customer service. (The New York Times)

Story 2: AI is a new trick. Let’s not be old dogs.
You may find the author of this article a little annoying. I’d read it anyway.

[AI-generated image of a robot walking dogs]

Final Note

Earlier in this round-up, I shared an article about Meta ‘maaayyyybeeee’ not making AI that is dangerous. Well, textbook dangerous.

Google is going the other way. This week the company announced it would drop its pledge not to use AI for ‘weapons or surveillance’.

I mean … why? Seriously, why?

Don’t answer that.

Thanks for reading.

-Ben

As a reminder, this is a round-up of the biggest stories, many of which appear across the multiple newsletters I receive and review. The sources are many … and I’m happy to read them on your behalf. Let me know if there’s one you’d like me to track, or if you have questions about a topic you’re not seeing here.