I know, I know. There wasn’t a round-up last week! BUT, I have a good excuse. I was at MAICON 2024! And yes, that was a wild ride. To everyone involved, thank you for putting on this extraordinary event – and working so hard to keep it here in Cleveland, where we had 1,100 people come in (many from out of town) and enjoy this beautiful, underrated city.
Speaking of 1,100 people … ironically, I kept thinking, “Wow, only 1,100 people here…”
Why? Because at a few key points, I looked around the room and was like, “Did anyone else just hear … am I the only one freaking … did he just say what I think he...”
The highlights were many, the shocks were big, and the news was timely. The highlight, for me, was the interview Paul conducted with Adam Brotman and Andy Sack, authors of the book AI First, although they’re maybe more famously known as the guys who interviewed Sam Altman when he gave that 95% quote. Coupled with Paul’s keynote the morning prior, it was enough to keep me up for the next few nights.
Listen to the AI Show pod for a deeper recap of the ‘shocks’ that many in the audience felt as the authors recounted their interviews with industry leaders Reid Hoffman, Bill Gates, Mustafa Suleyman, Sal Khan, Altman and others.
As for the timely news, well, I guess Strawberry is a thing. And how nice was it for OpenAI to wait and release it hours before the closing keynote of MAICON?? Mike … Paul … the AI gods smile upon ‘ye.
I’ll still cover as much as usual in this space, but I will shuffle things a bit. I’m going to use ‘The Big 5’ section to address my MAICON takeaways.
(And stay tuned for an invite to discuss these and others at a MAICON follow-up roundtable I’m going to be hosting. It will be old school in-person, meaning it will be held at a physical location.)
AI’s Speed is Breaking the Sound Barrier
Everyone’s talking about the blistering pace of AI development—it's not linear, it's exponential. AGI-like capabilities (looking at you, “Strawberry”) are inching closer, and it's clear: things are about to get really weird, really fast.
Ben’s Takeaway:
Keep your finger on the pulse because this rollercoaster isn’t slowing down.
Your AI Twin is Ready for Work
As Andrew Davis shared, digital twins and AI agents are officially no longer sci-fi. Think AI tools that are your coworkers—or maybe even your replacements (just kidding … sort of). These tools are streamlining workflows like never before.
Ben’s Takeaway:
Start playing around with AI assistants before they become your new office buddy.
Humans vs. AI: Who’s in Charge?
Will AI augment your role or replace it? The debate continues. Leaders are divided on whether AI is an enhancer or a job killer, especially for knowledge workers. A big thought I wrestled with was all of the references to how AI will augment and unlock human potential. However, I couldn’t help but notice the topics and presentations around tools and capabilities that ultimately were replacing (or displacing?) resources. (To note: Ethan Mollick has a very interesting take on this in Co-Intelligence, where he compares AI tools improving our work to what happened when direct dialing hit the telephone industry.)
Ben’s Takeaway:
Don’t ignore reality. Focus on what makes you irreplaceable—creativity, emotional intelligence, and strategic thinking. And do it now.
Ethics: The Elephant in the Server Room
AI’s ethical and legal landscape is more ‘welp’ than well-defined. Companies still lack solid processes for validating AI outputs, and the gap between ethics and the law keeps widening. There is still a lot to be decided (see point one), but we should not be waiting for those decisions!
Ben’s Takeaway:
Don’t wait for regulators. Build your own ethical guidelines and validation processes now.
AI Strategy: Get Your Roadmap in Gear
Christopher Penn’s roadmap for Open v. Closed is SO good that I am borrowing it for an AI roadmap, period. (By the way, Christopher looked pretty good for just having surgery.)
He outlines three key phases:
I doubt I have to convince you that we need a roadmap of some sort. But just in case, I loved this stat shared by Andrew Au: 91% of organizations’ change management programs fail. Don’t be another statistic. The consequences are too big!
Ben’s Takeaway:
Build a roadmap so you have directions to join the 9%.
Oh, come on. You didn’t think I’d talk that much about Strawberry only to ignore it, did you?
So here’s all you need to know about Strawberry, for now.
My advice? Start using it. Mainly because I’m not sure anyone really knows how just yet, and I’m not at a point where I can see a significant difference. I’ve tried it out on a few things, and beyond making me wait a whole 11 seconds for a response, the output isn’t anything that will make your socks go up and down. However, its ability to engage with me about understanding what I’m asking is different. I think that’s where playing with it, experimenting with how you reason and run scenarios with it, will pay off in the (short) long term.
For now, here’s Ethan Mollick’s take on it:
…there are AI agents in Minecraft building a world, government, economy and more. Kind of like a modern-day Sea-Monkeys. (RRW)
…T-Mobile is signing a deal with OpenAI. But don’t worry, Sam Altman says it won’t use your data to train AIs. No, seriously, he said that. (WSJ)
…the new Runway is here. And it’s blowing minds. (Digital Trends)
…Lionsgate signs a deal with Runway. The studio will get a custom AI tool to use in movie production and Runway gets access to all of the studio’s films, which include John Wick. We’re a long way from last summer’s strike. (WSJ)
…Don’t try that crap in California, Lionsgate! Gov. Newsom has signed a bill protecting actors' AI likenesses. (The Hollywood Reporter)
…57% of the internet may already be AI sludge. Awesome. (Digital Trends)
…OpenAI could increase subscription rates to $2,000/month. I’m sorry … what now? (Digital Trends)
How about one more OpenAI story?
Yeah. We’ll do one more because this one … this one might really be the biggest. You’ve been warned: Sam Altman told staff that OpenAI is going to move away from a nonprofit structure next year. (Fortune)
A recent article in Psychology Today explores how AI, particularly LLMs, may serve as a powerful tool for personal and psychological exploration. Through iterative, non-judgmental dialogues, AI can mirror our thoughts back to us, offering deep self-reflection.
So, the hard questions to answer are: Will our views of ourselves be limited and finite? Are we equipped to really analyze … ourselves?
As I pointed out in The Big 5, Andrew Davis really nailed his point about a digital twin as it pertains to the work you can be outsourcing to AI. It was fascinating.
So, I took his advice and started with the first step of creating an AI twin of the author of this newsletter/blog. (Yes, that’s me.)
I’m happy to share the details of how I got to the point of trusting it to handle some of the first drafts, but for now I’ll note that “The Big 5” and “Must read/must discuss” sections were largely supported by the other me. I’d say it was roughly a 70/30 split, with (the real) me just doing some refining and tweaking of my voice. What did you think of it? Easy to spot? Didn’t notice until I pointed it out?
Side note: I will never misrepresent myself and pawn off some AI writing as me, so, as Christopher Penn does in his blogs, be on the lookout for these callouts going forward.
Thanks for reading!
-Ben
As a reminder, this is a round-up of the biggest stories, often hitting multiple newsletters I receive/review. The sources are many … which I’m happy to read on your behalf. Let me know if there’s one you’d like me to track or have questions about a topic you’re not seeing here.