Some days I think my job is that of a technology therapist, resolving executives' many confused views about AI. Whether it is untangling hype and dogma, unwinding fears about evil sentient robots and malicious disinformation campaigns, or addressing real concerns about job loss and the odds of success in an increasingly automated world, my first job is level-setting.
A realistic view of AI success should be grounded in both organizational and personal goals. Such a view begins with clarity and then inspires a belief that, yes, every person who can use a computer or a mobile phone can benefit from AI. Further, if they are part of a team, an association, or an enterprise, they can help that team succeed on a larger scale.
Navigating AI confusion is often a matter of addressing one or more of six factors, which I call the six Ds of AI Confusion: Disappointment, Dogma, Dystopia, Disinformation, Denial, and Despondency. Let’s look a little deeper at each of these causes.
Disappointment
When a promised result fails to materialize, people are left disappointed. Almost every generative AI application launched over the past two years has underwhelmed or even disappointed a majority of users. Further, it’s confusing to be promised a result by vendors, media, and influencers, only to find that implementing AI is extremely difficult for a wide variety of reasons, many of which go beyond the technology itself.
Now, people confront the reality of current AI technologies versus overinflated expectations. Despite impressive achievements and use cases, AI still struggles with common sense reasoning, genuine understanding, and reliable performance outside narrow parameters. The gap between marketing hype and actual capabilities has left many feeling jaded about AI's potential, especially when confronted with the technology's frequent failures and limitations.
Underwhelming results haven’t prevented partial or wholesale consequences for the workforce. As noted in our last article, writing and UX design gigs have been down by double digits since the launch of generative AI. Talk about disappointing; it's hard to watch jobs and revenues lost to inferior early-generation applications. What’s more confusing is why companies would prefer scaling lower-quality output to using humans to govern automation efforts and deliver higher quality.
Regardless of how AI hype deflation continues, “the discourse surrounding AI will also have lasting effects on labor,” say David Gray Widder and Mar Hicks of Harvard’s Ash Center. “Some workers will see the scope of their work reduced, while others will face wage stagnation or cuts owing to the threat, however empty, that they might be replaced with poor facsimiles of themselves.” Whatever the long-term impact of the dogma might be, it’s hard to see how over-amplified AI purists did anything but confuse people with false promises.
Dogma
Dogma within the AI field itself has also fueled the confusing hype cycle, with grand promises of a panacea for a wide variety of tasks, capabilities, and even entire occupations.
Large rifts exist between those who see artificial general intelligence (AGI) as an inevitability and those who view such claims as dangerously overconfident, if not absolutely full of crap.
The quasi-religious fervor of AGI proponents like OpenAI CEO Sam Altman, and the sometimes mindless parroting of these proponents by influencers, have prompted skepticism and pushback from those who prefer a more measured approach to technological progress. The best example of this is Gary Marcus’s ongoing criticism of Sam Altman.
Dogma is just a bad form of confusing hype, and it is particularly sad when it comes from supposed market leaders promoting human replacement. Perhaps it would be best to focus on actual product fit for customer missions. Is that too much to ask?
Dystopia
AI may be the worst buzzword of all time for a Silicon Valley tech movement. Who thought it was a good idea to market a technology concept already tarnished by Hollywood tropes and villains featured in decades of movies, including 2001: A Space Odyssey's HAL, The Terminator, and The Matrix?
That's not to mention decades of science fiction covering the topic. Today, an AI villain is more common than random Snoop Dogg references, and this cultural programming makes it harder for leaders to build trust and enthusiasm for AI initiatives.
Criticism and fears of AI are shaped by this legacy. When executives propose AI solutions to automate tasks, they often face immediate pushback rooted in dystopian cultural narratives.
People envision automated warfare, ubiquitous surveillance, mass unemployment, and the eventual subjugation of humanity by superior machine intelligence. This reflexive negativity can stall important digital transformation efforts and create unnecessary resistance to even simple automation projects.
The AI-villain bias is particularly confusing and hard to overcome because it operates at both conscious and unconscious levels. Even technically sophisticated teams may harbor unstated reservations about AI shaped by decades of cultural conditioning. Success with AI initiatives often requires explicitly addressing and defusing these dystopian assumptions before practical work can begin.
Is it any wonder that dystopian confusion about AI looms large in the cultural imagination? Why didn’t we just call them data apps or some other next-gen naming convention?
Disinformation
Disinformation creates a more tangible impact, demonstrating how AI harms information quality through hallucinations and scams. The viral spread of malicious AI-generated content, from diplomatic photo manipulations to fabricated celebrity videos, has already begun eroding public trust in digital media.
Given how freely information flows in our digital age, AI systems can be used to generate convincing text, images, and videos at scale. Many people worry about a future where truth becomes increasingly difficult to discern from artificially crafted falsehoods, or from content built on bad data and sources, including questionable content that was itself created by AI. Talk about meta problems.
Denial
Denial manifests in various ways among AI skeptics, from refusing to acknowledge legitimate advances to dismissing obvious gains to be had in their own workflows. This psychological defense mechanism often stems from feeling overwhelmed and confused by the pace of technological change, or threatened by its implications. Of course, this traditional Luddite stance, often typified by writing off AI as slop, is short-sighted and can create more risk of replacement by AI-capable peers.
Despondency
Despondency emerges as people grapple with feelings of powerlessness in the face of AI's rapid advancement. Many feel that decisions about AI development and deployment are being made without their input or consent by powerful corporations and institutions pursuing their own agendas. Or their companies and organizations simply do not have the resources to compete at the same scale as some of their larger industry peers.
They feel despondent and hopeless, believing that they cannot compete. And they are confused about where to begin, given the immense scope of the AI movement. Little do they know that while many corporations and organizations talk a good AI game, few are anywhere near what could be called an advanced level of AI maturity (link).
This sense of helplessness is compounded by concerns about job displacement, privacy erosion, and the potential loss of human agency in an increasingly automated world. Is it any wonder that despondency sets in?
Conclusion
So, as you can see, there is a lot working against AI in the marketplace before conversations about implementing the technology even start. It would be great if vendors were more mindful of their marketing's impact so they could better address the six Ds. But, hey, that’s what happens when you rely on LLMs for a CMO, I guess. OK, that was humor, sort of.
What do you think about AI confusion as a barrier?
Images created with Midjourney.
Additional Reading
Artificial Intelligence: The Most Unfortunate Buzzword
Artificial Intelligence, or AI: the buzzword suffers from additional unfortunate baggage.