How Organizations Tank Their Own AI Projects
Failing to understand and address cultural challenges can submarine AI projects
In our AI use case framework, we described the Organization Impact portion of our scoring methodology as the submarine quadrant. Internal cultural issues, from employee resistance to governance gaps, can be some of the most challenging aspects of any AI pilot or implementation.
While aligning AI use cases with mission, identifying measurable impacts, and assessing and preparing technology readiness are all critical steps, organizations tend to overlook the full impact their culture can make. Ignoring cultural readiness can create massive repercussions that limit or completely destroy an AI use case's chances of success.
Overall, understanding an organization's AI maturity and readiness for new projects can help identify the programs and policy changes needed to foster adoption. Here are some of the common ways organizational cultures tank their own AI projects.
Staff Resistance
Staff are not always quick to embrace AI.
One primary cultural challenge is internal resistance: staff who are unwilling to use AI for a variety of reasons. Week after week, we hear stories about AI implementations that were deployed and worked to some extent, but that staff refused to use.
There are many contributing factors that foster resistant team members, including:
Fear of job loss
Poor model quality
Low work value
Lack of user input
Siloed rivalries
Simply not knowing how to use the AI application
In almost every case, communication before and after the AI deployment is absolutely critical. Organizations and departments rarely want to think through the staff side of the issue; it's costly and often dismissed as overhead. Yet any organization is only as good as the humans who actually work there.
Building AI for the workforce with their input, and optimizing the culture to maximize its chances of success, is simply a smart investment. It is better to think through and price in people as part of an AI implementation. That includes garnering participation, inviting and acting on user feedback, offering training, and, yes, building clear change management programs to ease adoption.
In some cases, staff are afraid of the implications that come with success. AI often saves costs, which speaks directly to employee fears about being replaced by a bot. Unspoken plans for reallocating resources after those savings arrive can stir team members' imaginations. If the outcomes and resulting impacts are not communicated, it's easy to imagine how new efficiencies could mean eliminated jobs, more work, or hours left uncertain for undetermined "strategic work."
In a similar vein, if management implements AI without including its eventual users in the planning and testing phases, all sorts of problems can occur, and every one of them breeds resistance. Excluding users from the process is unwise. Team members can provide feedback on where AI can best impact their work and reduce painful rote tasks. Their input on the actual technology, its usefulness, its shortcomings and needs, and how it fits within larger processes, is absolutely essential.
On a macro level, the same problem can occur when technology decisions are made in silos. Cross-departmental rivalries can cause instant resistance. For example, excluding IT from an acquisition, however expedient it may seem, will provoke immediate pushback. The reverse is equally true: if IT buys AI and forces it upon other departments, resistance follows.
These are just a few of the ways culture can affect AI use cases. Having a plan for upskilling team members and advancing their careers as a result of successful AI implementations is simply good strategy.
There is enough fear in the public discourse about AI replacing humans. Implementing without addressing that fear and team needs head-on is asking for resistance. Success requires a smart combination of user participation, change management, and job training.
Oversight Issues
The Governance, Expertise, Team, and Alignment paths in the AI Maturity Model deal directly with staff resistance and oversight.
Beyond the various wrinkles of cultural resistance, there are other ways an organization can stymie its own AI projects. One of the most obvious roadblocks is oversight (or the lack thereof) intended to protect the organization. AI oversight comes in multiple forms, but primarily from three enterprise groups: legal, HR, and IT.
Legal is the most talked-about example in public discourse. A wide-ranging set of legal issues can ground an AI project, from a draconian policy that forbids the use of AI in any manner to counsel that doesn't think through an AI's liability implications for the company. Competent, well-educated legal oversight allows for usage while providing the right disclaimers and protections to save the organization from liability. That's not easy to find… yet.
HR oversight can prevent team members from using AI based on role or skillset. Worse, HR may be the department responsible for change management and training. While its professionals may be competent and capable, HR tends to be one of the most underfunded and least empowered departments in the organization. Success requires upskilling, funding, and empowering HR to foster a culture that can deploy AI effectively.
Finally, IT can make or break an implementation, starting with data governance and cybersecurity. Strong AI is grounded in data governance with role-based access to tools. But a promising AI project can just as easily be hampered by the very rules meant to protect the organization… and end up hampering its competitiveness.
Frankly, this is an area where educated staff are important. Oversight groups must be, at a minimum, up to speed on AI, and then build experience through evolutionary pilots and implementations. As with legal, strategic direction must ultimately govern the organization's decision-making; otherwise, gatekeepers may hinder success.
Conclusion
Rare is the organization that understands that it, too, must change to adopt AI. Organizations that are self-aware and committed to innovation can stay in front of AI and create a culture that nurtures and advances with new technologies. Interested parties can download the AI Use Case Framework and our in-depth organizational coaching guide, The AI Maturity Model, on the CognitivePath website.