Can We Learn from Social Media's Missteps and Implement AI Responsibly?
Demanding guardrails for AI is the only way businesses and individuals can protect themselves.
Image created on Midjourney Alpha v6.
Among the core social media evangelists still active online, many regret that the movement failed to improve people’s lives. This is one of the reasons many of us want the AI technology community to incorporate ethics, security measures, and other core guardrails to protect society.
The consequences will worsen if we aren’t mindful and repeat the same mistakes. AI is a technology that, as it matures, will automate and learn its way to ever stronger results, good or bad.
The Social Media Lessons
My colleague Greg Verdino discusses the social media era’s dark side in the most recent episode of the No Brainer podcast.
Social media held so much promise. But as social networks and their algorithms achieved dominance, we got platforms designed to push ads and maximize engagement, usually by appealing to our baser emotional instincts. Instead of ushering in a new era of enlightenment and democratizing people’s voices, social media has reduced freedom, becoming a tool to polarize society, stalk consumers, and condition human behavior.
What mistakes can we learn from in this early stage of generative AI adoption?
Experience has taught me that technologies are rarely panaceas but rather tools that humans use to further their interests, good and bad. In a capitalist society, that usually means decisions made for profit above all else, consequences be damned.
The social networking industry saw the ills of commercialism manifest as big tech profiteered off our worst natures, creating a wide variety of problems, from rending the social fabric of democracy to body-image shaming among teens. And, at the most basic level, the ability to have your voice heard has turned into a daunting influencer game in which success demands mastering algorithmic attention-getting rather than topical expertise. Online voices are celebrated more for their video editing skills (with AI tools) than their actual knowledge. That’s quite sad.
I feel shame for my earlier, fervent social media evangelism, and I know many of my blogging peers from the 2000s feel the same.
This sense of guilt is one of the primary reasons I have taken a tempered view of generative AI’s potential benefits. It is also one of the reasons why AI guardrails – the responsible AI conversation – are almost always a central point in my discussions with business leaders.
The tools have great potential for businesses and individuals alike, but only if we practice governance, ethics, and smart processes that regulate usage. Left unfettered, as social media was, these tools will produce undesirable outcomes rife with exploitation, bias, and crime.
Consider the TikTok algorithm that kicks people with dwarfism off the network because it deems them childlike in appearance. Where is the QA? Where is the human in the loop testing the algorithm before deployment?
Early Results on Smart AI Adoption Are Not Great
While this corner of the conversation focuses on smart implementation, the dominant generative AI conversation still revolves around hype fueled by an ecosystem of startups and big tech companies. In early March, we ran a podcast episode on AI Marketing Ethics that drew approximately 25% fewer listeners than usual. Sure enough, the next episode bounced back (see guest Paul Chaney’s response to this trend).
We saw a similar pattern last year when we ran a different episode on ethics.
Beyond the small CognitivePath universe, there are signs of the same indifference wherever you look on the Internet. Blind eyes are turned to the growing mountain of ethical challenges created by OpenAI’s and other platforms' blatant use of copyrighted data to fuel their algorithms.
Instead of openly focusing on solutions to LLM technology’s complex hallucination problems, we see vendors driving conversations about AGI and video generation.
Malevolent uses of AI are becoming as prolific as the positive ones, creating moral quandaries and cybersecurity threats alike.
Meanwhile, nearly every mainstream software vendor folds generative AI for individual use into its products, effectively rendering the technology a feature set.
On social media, AI influencers promote personal productivity “prompt engineering” tricks as the path to widespread generative AI adoption, while enterprises remain unconvinced. Remember that these tricks lose value with each LLM generation as interfaces improve.
Unchecked hype may be more interesting than conversations about thoughtful implementation, but given the sector's current state, it is not helpful. What we need instead are conversations about productive, safe use: what the tech sector calls Responsible AI, if you will.
You can find a half-full or half-empty glass for generative AI wherever you look. Many businesses say they are now engaging in or experimenting with generative AI. However, many of these so-called implementations are nothing more than personal-productivity experiments or switching on generative AI within an existing app.
Perhaps the saving grace is the slow roll businesses have adopted, usually waiting to invest until it becomes clear that the organization has guardrails and use cases that offer the potential for enterprise-grade AI results. Businesses that insist on investing only in technology that will improve the bottom line safely are avoiding the mistakes the AI hype machine encourages.
We cannot expect the government to regulate AI wisely or in time. Businesses and individual voices must insist on a better path. It may be less popular, but this path promises better outcomes for all.