The Number 1 Soft Skill Necessary for AI Success
A sense of curiosity can make the difference for those embracing new technology
There’s one soft skill that separates people who learn and adapt to AI with ease from those who struggle: curiosity. It’s the same trait that defines a great journalist, drives an effective investigator, and makes a data scientist incredibly effective.
Often associated with science, curiosity is a very human trait, one that most children exhibit. As we grow older, we settle into career paths:
Data scientists are curious.
Engineers are efficient.
Writers and designers are creative.
Project managers are detail-oriented.
Executives are strategic.
Expanding beyond those career paths and prescribed character traits can confuse the market, especially in a world of personal brand constructs. But one could argue that curiosity is still inherently a part of all these jobs. Beyond driving exploration, curiosity builds tolerance for disruptive change and a desire for knowledge, according to a USC paper.
If fostered, curiosity allows one to thrive in any job, driving exploration and innovation. It inspires one to ask “what if” or “could this achieve my goal?”, to explore solutions, and to follow through with additional questioning to test the results.
The benefits of curiosity are often quiet, but they drive innovation, lifelong learning, and creative problem-solving. Applied to AI, this ability to question possibilities and the outcomes of explorations is absolutely necessary. Let me illustrate why.
The Most Important Question
Too often, I see professionals who want to buy an AI app and have it work immediately and resolve their problems. They say it should create their content, automate their email responses, or produce better summaries. Technology band-aids, whether old or newly AI-branded, usually don’t work.
Experienced people know that AI is flawed and imperfect when asked to address a general situation. A curious mind will investigate the situation, examine the process, or map the effort to the actual business problem. What are the specific aspects of our work and the larger environment that are creating the problem? This is, in essence, an inventory process that gets down to specificity about cause and effect.
Once the specifics are uncovered, applying potential AI solutions becomes easier. AI works best when it has a narrowly defined problem, access to the proper dataset, and specific instructions to resolve it.
So, telling an AI to “draft an article” is often an invitation to a half-baked result. A curious marketer might ask, “Why do I want help creating content?” The answer more than likely would be, “To save time.” Other reasons might include a lack of subject matter knowledge, writer's block, and other priorities (another way of saying insufficient time).
It might make more sense to write an article manually and document the whole process end-to-end, from ideation, research, and drafting through to editing, versioning, and publishing. This is the painful work that curiosity demands. Are you willing to find out if AI can help you?
As a writer, I know my process. Here are the steps that AI helps me with:
Ideation, sometimes, when I am unsure of my thesis: I pose questions to Claude, then use ChatGPT to ideate and critique further.
Sources: When I have a concept, I will often ask an AI to provide links to reputable sources with summaries. This helps my exploration process. I usually take about 50-60% of the suggestions, and yes, I do read the original source material.
Drafting: Usually, I opt for no help here. I hate LLM voices, and I like the thinking process behind drafting. It resolves my own curiosity about the topic. However, when I don’t care about the outcome, for example, a business process document, I will instruct an LLM specifically on how I want it to draft, in what tone, and toward what desired outcome. I will then edit the draft. Some writers have also trained an LLM on their own voice, which is pretty easy to do in ChatGPT or a Claude project folder.
Copyediting: Anyone who knows me quickly realizes I am a walking typo machine. GrammarlyGO, save me! But… I will also take the “mostly” typo-free article and have Claude critique it.
Versioning? I am not interested. This is the work of AI. Medium, length, tone, and desired outcome all help shape the prompts. This task is ideal for agents and automations. It can easily be achieved in the GPT store.
Publishing: Another thing that can be automated, but I still do this manually. Habit?
Using AI has shaved my writing cycle by 40-80%, depending on the document. That saves a lot of time. And yes, I write my blogs, not AI, but these tools help quite a bit.
This outcome was made possible by mining my process and identifying the time devils where probability machines can help. It also took inherent curiosity, and a refusal to accept that flawed LLM writing dooms the tool as a complete failure.
Another Example: DIY
This Deloitte Insights research illustrates why workforce members need to upskill themselves rather than waiting for their employer to do the right thing.
Entrepreneurs are notorious for their do-it-yourself (DIY) attitude. Anyone looking to incorporate AI into their work life should adopt the same mindset. The reality is that finding AI solutions is a sloppy, iterative process. That means you have to have a healthy willingness to try solutions and then iterate when they don’t work.
Good news here: LLMs can be helpful in troubleshooting themselves, particularly when it comes to fixing processes and identifying the need for more context (e.g., data) to realize an outcome. So use the LLM to help improve itself.
Way too many people want the result handed to them. Unfortunately, LLMs are often raw and powerful but unrefined for specific use cases. More GPTs and agents are being built daily to address these needs. However, anyone looking to incorporate AI into their work life needs to be willing to research, explore, and iterate.
Here’s an example: An AI expert emailed folks saying he would show them, live, how Deep Research works, using Google Gemini to generate a structured research report, infographics, quizzes, and more. Of course, all people need to do is sign up for the free webinar (and get sold to relentlessly for eternity)!
A curious-minded person would do it themselves, or watch a no-strings-attached video on YouTube. They might even ask their LLM of choice how Deep Research works (since they all offer a form of this function now) and then ask it to walk them through an example. Five minutes, less pain, and no spamming, err, marketing.
Attendees will learn how to use Deep Research from the AI expert, who is reputable. Heck, a third of them may even incorporate it into their work. But will they be able to use AI well across their professional lives? I doubt it.
The webinar attendee learns what Deep Research can do. The curious experimenter learns how it works, when it fails, and why certain prompts produce better results. That deeper understanding, born from hands-on questioning and testing, transfers to every AI tool they encounter.
In Conclusion
The curious-minded soul will go further. Curiosity in the AI age isn't just about adopting new technologies—it's about developing a questioning relationship with them. It's the difference between being an AI user and being an AI collaborator.
Users follow instructions and hope for good results. Collaborators understand the underlying logic, recognize failure patterns, and continuously refine their approach.
Professionals should approach each new AI development with genuine curiosity: What problems does this AI model solve? What problems does it create? How can I test its boundaries? How do I accept its limitations? Are there specific elements that can be improved, or that are better managed by a human?
In a world where AI handles more routine tasks, the uniquely human ability to wonder, question, and explore becomes our most valuable asset. Foster curiosity—it's not just a nice-to-have soft skill. In the AI era, it's your competitive advantage.