AI's Trust Problem and What You Can Do About It
What are the roadblocks to trustworthy AI and how can enterprises put trust at the center of their AI strategy?
AI has a trust problem. And while that might seem like something for Sam, Satya, or Sundar to sort out, the truth is it’s a challenge that affects every organization racing to integrate AI into their operations.
OpenAI’s release of GPT-4o and its slick demos of an uncanny voice assistant grabbed headlines as a largely uncritical tech press swallowed, then regurgitated, the hype. But the bigger OpenAI story was the resignations of technical co-founder Ilya Sutskever and head of alignment Jan Leike.
Sutskever’s departure was hardly a surprise, given his alleged role in Sam Altman’s ouster and subsequent reinstatement last November – a public drama during which the board of directors charged Altman with being “not consistently candid.”
But Leike’s departure is noteworthy on its own. As the company’s trust and alignment czar, he cited OpenAI’s emphasis on shiny objects over safety as his key reason for leaving. As if to prove his point, the company swiftly disbanded his entire AI safety team upon his departure.
Now, OpenAI is hardly the first AI company to deprioritize safety. Early last year, Microsoft laid off its entire team focused on AI ethics and the societal impacts of AI. Google fired two of its top ethicists in 2020 and 2021, then disbanded its internal ethics watchdog organization at the start of 2023.
And OpenAI wasn’t the only technology company that ran into trust trouble last week. A Slack user discovered that the popular corporate messaging app had been training its AI models on proprietary user data without consent, even as its parent company, Salesforce, positions itself as a leader in trustworthy AI. (Salesforce did swiftly respond to the outcry: Trust us, it’s fine!)
As I’m writing this, OpenAI is back in the news for another little white lie that underscores a larger betrayal of trust. When observers noticed that the voice of GPT-4o bore a striking resemblance to Scarlett Johansson’s chatbot character Samantha from the movie Her, Altman and CTO Mira Murati claimed any similarity was a “coincidence.” Johansson has since gone public about OpenAI’s attempt to hire her to voice the bot and her belief that, even after she turned them down, the company trained it to mimic her anyway.
While missteps and misdeeds abound throughout the AI ecosystem, OpenAI may face a credibility challenge all its own, one that stems from its leadership team.
Beyond that company’s particular problems, though, the AI industry’s trust problem runs deeper than headlines and the hype cycle. And not all of the responsibility rests on the shoulders of tech giants.
Five Key Barriers to Trusted AI
Let’s look at five major barriers standing in the way of trusted (and trustworthy) AI.
Transparency and Explainability
AI systems often function as "black boxes": even their developers struggle to understand and explain how these systems reach their conclusions. This lack of clarity erodes user trust, especially when AI is deployed in sensitive domains like healthcare, criminal justice, finance, government, and even human resources.
Much of the responsibility here does lie with the developers that train and maintain the foundation models most generative AI applications are built upon, and with the application companies that must ensure their systems perform as intended. But that doesn’t let end-user organizations off the hook: they still need to know what might be hidden in the Terms of Service and conduct proper due diligence before committing to any AI technology partnership.
Ethical and Societal Concerns
AI systems, reflecting the biases in their training data, can exacerbate discrimination. For instance, facial recognition technology's higher error rates for people of color highlight the potential for harm in law enforcement applications. Ensuring ethical AI deployment requires rigorous standards to mitigate these biases and safeguard fairness.
At the same time, there’s growing awareness of AI’s high environmental impact, along with fears of workforce displacement and concerns about privacy, data protection, and intellectual property rights.
Lagging Regulation
Regulatory frameworks struggle to keep pace with AI's rapid advancement, leading to significant gaps that can be exploited. This regulatory lag means that many AI systems are deployed without thorough vetting for safety, fairness, and ethical considerations. It also means that end user organizations may lack the guidance and clarity they need to adopt and use AI systems confidently.
While this falls squarely on legislators, the fact is corporate leaders should approach AI (or any new technology) from a stance of common sense and responsibility, regardless of whether there’s regulation to prescribe or prohibit certain uses.
Responsibility and Accountability
When AI systems falter or cause harm, pinpointing responsibility becomes murky. That ambiguity lets organizations evade accountability, undermining trust further. The industry must establish robust accountability frameworks that clearly delineate responsibility among all stakeholders, from developers to end users. Continuous human oversight is essential to catch and correct biases and errors early, before they are reinforced over time.
It's clear that not all harms stem from the model developers themselves. Take, for instance, the FTC’s recent move to ban national retailer Rite Aid from using AI facial recognition in its stores after finding that the system inaccurately flagged women and people of color as shoplifters – and that Rite Aid failed to implement reasonable safeguards in its deployment of the technology.
All of this underscores the idea that, when it comes to AI in business, ethics aren’t optional.
The “Credible Liar” Challenge
This trust-buster is a bit different than the others. In the two years since generative AI arrived on the scene, many have noted its ability to present inaccuracies, incomplete information, outright falsehoods, and invented information in a voice and tone that implies and instills absolute confidence. At best, this is annoying. At worst, it’s downright deceptive.
Despite this, recent research indicates that people may trust AI systems more than they trust other people. This creates all sorts of quandaries — from how easy it is to spread believable AI-generated disinformation to the way businesses may use ultra-persuasive AI to sway customers’ beliefs and behaviors. All of this is only going to get stickier in a world where most content is AI-generated, virtual influencers are everywhere, and search engines provide AI summaries instead of links to reputable sources.
While some of this is outside your control, none of it should be off your radar. The fact is, trust is crucial to scaling your organization’s AI programs successfully.
After the jump, I’ll get into why trustworthy AI matters for every enterprise and share 10 specific things any leader can do to make sure trust is at the center of their organization’s AI adoption.