The Inevitable Changes Wrought by AI Use
Whether in jobs or the marketing ecosystem, AI deployments are wreaking havoc on all that is normal.
There is an old Greek aphorism attributed to the philosopher Heraclitus: “You cannot step into the same river twice.” It acknowledges the ever-changing nature of life and of our inner selves, too. The water is never the same. Though change is inevitable, most of us fight new things, particularly when they are unwelcome. Many people find the consequences of AI usage to be the disruptive variety of change.
It’s easy to understand why. Because of how some media and tech companies are deploying AI, we are experiencing widespread and sometimes scary changes in how information is created and consumed on the Internet. The resulting impacts on general media literacy and the corporate marketing/demand generation lifecycle are disruptive, to say the least.
And then there is the even bigger issue of job loss. Agentic automation is impacting or, worse, eliminating jobs at companies like Microsoft, as well as in the oft-discussed Klarna case study.
While this article discusses the negative impacts of capitalist and individual AI deployments, it is more productive to acknowledge that widespread AI adoption and the resulting cultural changes have begun and are irreversible. So much money is being pumped into AI adoption from many different camps that it has reached a point of inevitable pervasiveness. In fact, it will only become more so with each passing quarter.
What can we do about it?
People Are People, and So Are Their Use Cases
We want to blame AI, but the angst is pointless. Though the pain is real for some, blaming AI is an oversimplified argument fueled by media and influencers more interested in clicks than accuracy. Further, the finger-wagging mirrors failed Luddite cries of the past. It would be better to focus on regulating human use of AI. As with prior technological evolutions, AI has good and bad uses.
Perhaps this is where America’s broken political environment really has come home to roost. While the European Union passed regulations to protect its citizens from AI abuses, Congress and the current Administration are trying to prevent any kind of regulation that would protect individual privacy, intellectual property, and society from ill uses.
Sadly, in an unregulated country, corporate entities that care more about profits than their teams have become overly aggressive about replacing their workforce with AI and automation. Those who crave attention and power are using AI to help manipulate media ecosystems to their advantage. And the big tech companies that sell AI and govern gigantic media and technology platforms use it to fuel market position and throttle the competition.
On the more positive side, good can be achieved with AI. Principled businesses, good managers, and smart workers use AI to minimize the rote, prioritize creative and strategic work for humans, and strengthen their organizations. Some are using it to create technological breakthroughs. Changemakers use AI to advocate for those they serve, protect wildlife, and resolve fundamental problems like improving medical diagnoses and resolving traffic challenges.
So, is AI the problem? Or, as with other technologies, is the problem us and the way humans relate to each other economically and socially?
In a wild west of AI, abusive use cases are sure to pile up. This is where Silicon Valley’s recent power plays in government and technology platforms become problematic. In their zeal to deliver new, promising technologies and, perhaps more importantly for most founders, realize a massive return for investors, Silicon Valley AI companies have put country and society at risk. This is another black eye for the big-tech engines that exploited society with social media.
The “break it, then fix it” mentality that most startups embrace doesn’t work well with human societies. Unfortunately, breaking society may be the point in the most craven scenarios. Nevertheless, we must demand accountability from brands and governments with our voices, votes, and our purses.
This is not idle talk. For example, if the offerings are close, I will always choose another company over OpenAI. Sam Altman and company have repeatedly disappointed me with their ethics and business approach. Why support it? I won’t even consider xAI for the same reasons, plus Elon Musk’s atrocious politics are beyond forgiveness. On the legislative front, you better believe my elected officials will hear about how bad the AI provisions in the current Congressional spending bill will be for our society.
But What About Our Jobs?
If change is inevitable, how does one survive AI disruption in professional communities?
Professionally, the answer is simple. While the Luddite says no to AI, the mindful embrace it, using it productively to benefit their companies and colleagues alike. It’s becoming painfully clear that those who dig in their heels against AI are the first to go. AI is expected to create as many jobs as it displaces, but there’s a hitch: you have to use AI.
While this can be a conscious choice (and I can’t blame someone for making it), why lose work over a reaction to technology? A “change is hard” moment, if you will, can be overcome with intentional experimentation and training. AI technologies can be quite empowering when used appropriately. If you are fearful about how AI can be abused, then be the leader who sets the tone and guidelines for your organization. Demand better from your peers and the industry alike.
For most of us, that means working to create the data infrastructures, policies, procedures, and yes, the pilots to intelligently operationalize AI. Much of the dialogue at trade events and from an increasing number of professional services companies is quite encouraging. Most of these conversations discuss integrating LLMs into workflows, building simple automation (disguised as agents), and, most impactfully, old-school ML and analytics fueling decisions and personalization.
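To make the “simple automation disguised as agents” point concrete, here is a minimal Python sketch of such a workflow. The `call_llm` function, the `triage_ticket` helper, and the keyword-based routing rule are all hypothetical illustrations, not any vendor’s actual API; a real deployment would replace the stub with a model call.

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real model API call; returns a canned response."""
    return f"[model response to: {prompt[:40]}]"

def triage_ticket(ticket: str) -> dict:
    """A three-step 'agent' that is really a fixed pipeline:
    summarize, classify with a plain keyword rule, draft a reply."""
    summary = call_llm(f"Summarize this support ticket: {ticket}")
    # The 'decision' is ordinary branching logic, not model agency.
    category = "billing" if "invoice" in ticket.lower() else "general"
    draft = call_llm(f"Draft a polite reply about a {category} issue.")
    return {"summary": summary, "category": category, "draft": draft}

result = triage_ticket("My invoice shows a duplicate charge.")
print(result["category"])  # prints "billing"
```

Strip away the branding, and many such “agents” are exactly this: a scripted sequence of model calls wrapped around conventional business rules.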
But some must go further, sales and marketing teams in particular. While they can automate and execute tactics faster, cheaper, and perhaps at a higher quality level with AI, it’s not enough. The media ecosystem has evolved significantly. While this is probably best discussed in depth in another post, here are the factors:
Ad-first, clickbait-second algorithms on social networks and search engines have disrupted the content ecosystem.
Answer engines have disrupted search, increasingly rendering search ad placement ineffective.
Answer engines are inaccurate because of LLM hallucinations and because bad actors use AI-generated content and other black-hat techniques to seed their material into answer engines, including misinformation and disinformation.
Over the past two decades, social networks and search engines have greatly weakened the media space. As a result, much of today’s journalism and influencer commentary is slanted and opinionated, designed to garner clicks rather than inform.
Supposedly independent research organizations are also biased and skewed toward their funders, who pay hundreds of thousands, even millions, of dollars both to obtain information and to influence said reports.
Trade shows and conferences are pay-to-play venues. Organizers cater to exhibitors and sponsors who push their wares. Attendees who show up to learn the latest trends are fed a rosy, Pollyanna view of the industry and miss the challenges their vendors don’t care to highlight.
Due to the above changes, buyers distrust traditional sources of information. They’ve been burned and prefer to conduct private research. Buyers often talk with peer networks, investigate vendors without contacting them, and perform their buying process outside of the traditional “go-to-market” sales channel. According to Gartner (yes, a supposedly independent research firm), 75% of B2B buyers prefer a rep-free sales experience.
In a world of algorithmic technology, sales and marketing staff need to change their playbooks. Now, sales and marketing must use tools to return to old-fashioned trust-building. You cannot automate the painstaking relationship-building process, but you can break a relationship with one significant automation gaffe.
Concluding Thoughts
In this new age of AI, we face a choice: Resist change and be left behind, or embrace it mindfully and help shape its trajectory. The river of technology continues to flow, and we cannot step back into the world of safe jobs and simple social media streams. For better or for worse, the algorithmic future has arrived.
While we may fear the current, courage is needed. We must learn to navigate the present AI landscape with intention, ethics, and humanity. This means demanding better from our leaders, choosing wisely where we spend our money and attention, and using these powerful tools to augment our uniquely human capabilities rather than replace them.
Algorithms will not write the future of AI. Instead, it belongs to those who choose to guide them. Good outcomes await us if we do it with wisdom, compassion, and a clear vision of the society we wish to create. And that, ultimately, is a very human responsibility, both yours and mine. Will you be a part of the solution?