I took a moment at the start of the week to experiment with Google NotebookLM. It is an AI-powered tool designed to assist with research and note-taking, but what’s really gaining attention is its ability to transform documents into a podcast interview, where two speakers discuss the content. I fed it a few of our recent blog posts, hit ‘generate,’ and listened as the AI created a podcast on employee surveys, strategy, and culture measurement. The result was fascinating—and surprisingly compelling.
AI and Creativity in Business
The generated podcast is a great example of how large language models (LLMs) can create engaging, useful content. It synthesized some complex ideas into a conversational format that was easy to follow and felt genuinely insightful. It highlights how far AI technology has come. Working with AI over the last two decades, I’ve watched it evolve from simple rule-based systems to the powerful, data-driven models we use today. I first wrote about AI and the future of work on this blog over 10 years ago. Even then, AI-driven change felt imminent. My point is that AI is far from new, but the exponential rise of high-performance computing, and the abundance of text and images available on the internet, has enabled it to be trained on huge datasets and to solve increasingly complex problems. Yet, even with all these advances, AI’s potential is still often misunderstood, particularly when it comes to integrating it into day-to-day operations.
Change appears quickly, happens slowly and rarely completes.
The AI Hype Cycle: What’s New, What’s Not
While it feels like 2024 is the “year of AI,” we’ve been here a few times before. I remember some of this sentiment back in 2018, when Google showcased Duplex at their I/O conference, demonstrating how AI could make phone calls to book appointments. It lacked today’s level of rhetoric and boom, but it was impressive. However, like many AI innovations, it struggled with real-world deployment, largely due to messy data, regulatory hurdles, and the realities of human-AI interaction. It is a lesson organisations should keep in mind: AI demos well, but real-world success is most often frustratingly elusive.
Grounding AI
One of the biggest challenges with generative AI, like NotebookLM, is the risk of factual inaccuracies and impossibilities. The models are generally probabilistic, which means they sometimes generate outputs that are very convincing but simply incorrect, a phenomenon often referred to as AI “hallucination.” For creative applications this can be fun, but for decision-making it’s a significant risk. This is where “grounding” comes in. Grounding involves feeding the AI reliable, high-quality data in an attempt to ensure its outputs are accurate. In my little experiment, our own blog posts and research papers guided the AI’s podcast script; this is a form of grounding. The first version of the podcast was creative, but it was also factually inaccurate in some areas, even with that data as an input, so I ran it again to produce a more polished, factually sound version. Even with grounding, LLMs will have errors in their output, so it is crucial to monitor generative AI. The more data that is processed, the higher the risk of errors. This is one of the reasons it still isn’t suited to data analysis.
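To make the idea concrete, here is a minimal sketch of grounding in Python. It simply embeds trusted source excerpts in the prompt and instructs the model to answer only from them; `call_llm` is a hypothetical stand-in for whichever model API you use, not a real library call.

```python
# A minimal sketch of grounding: constrain the model to supplied source
# text rather than whatever it "remembers" from training.

def build_grounded_prompt(question: str, sources: list[str]) -> str:
    """Embed trusted excerpts in the prompt and restrict the answer to them."""
    context = "\n\n".join(
        f"[Source {i + 1}]\n{text}" for i, text in enumerate(sources)
    )
    return (
        "Answer the question using ONLY the sources below. "
        "If the sources do not contain the answer, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

sources = [
    "Blog post: grounding means supplying the model with reliable data.",
    "Research note: generative output should always be reviewed by a human.",
]
prompt = build_grounded_prompt("What does grounding involve?", sources)
# answer = call_llm(prompt)  # hypothetical model call; output still needs review
```

Even with a prompt like this, the model can ignore or misread its sources, which is why the outputs still need human checking.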
Beyond Generative AI: Managing Bias and Distortion
It’s not just about generative AI. Any AI model, even those used for tasks like summarisation or data analysis, can introduce biases based on how it is trained and the sources of data used. AI models are even biased about how they write about the risks of AI errors: the tool I use to proofread my writing made some very interesting suggestions about what to reword and remove in this post. It seemed surprisingly determined that generative AI, rather than Natural Language Processing, was the solution I should be writing about! While AI can automate and enhance tasks, it also requires thoughtful integration to avoid unintended consequences. Natural Language Processing has many advantages in terms of efficiency and significantly lower error rates, and LLMs are notoriously poor at mathematical tasks (and counting the r’s in strawberry).
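The strawberry example is worth making concrete. A deterministic text-processing routine gets letter counts right every time, a task LLMs are famously unreliable at; this toy snippet is just an illustration of why rule-based approaches carry far lower error rates for tasks like this.

```python
# Deterministic text processing does not hallucinate: counting letters
# is exact, whereas LLMs famously stumble on the r's in "strawberry".

def count_letter(text: str, letter: str) -> int:
    """Count occurrences of a letter, case-insensitively."""
    return text.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # 3, every time
```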
The Future of AI in Business: Opportunities and Challenges
As AI becomes more embedded in the workplace, it’s essential to recognise both the opportunities and the challenges. Yes, there are concerns about job displacement, but for the same reasons that I gave ten years ago, I still firmly believe that AI is more likely to augment human capabilities than replace them. That said, successful AI integration in business requires:
- Responsible AI Development: Ensuring AI is developed and deployed responsibly, with appropriate safety measures in place.
- Leveraging AI’s Strengths: Focusing on AI’s strengths in specific tasks while acknowledging its limitations in others.
- Preparing the Workforce and the Workplace: Equipping people and organisations for the changing nature of work, with the skills and information sources needed to thrive in an AI-augmented world.
AI is still a very long way from replicating embodied human intelligence, despite technology companies’ announcements and claims. Costs, reliability, resource consumption, and governance all have a long way to go. While today’s AI is increasingly multimodal (able to deal with text, speech, and images), it remains far from the tacit knowledge and embodied intelligence required in the world of work.
What’s Next? The AI Content Explosion
But in the meantime, it has some great applicability for creative outputs, when used responsibly. Although I am not sure what the world is going to do with the millions of hours of podcasts that are about to be produced, AI is already creating a tidal wave of content. Tools like NotebookLM make it easier than ever to generate podcasts, articles, and reports at scale. That actually makes your people and your culture even more important. How do you ground your strategy in real data? I just happen to have a handy podcast episode about that…