Is Prompt Engineering Short-Sighted?
Do you think prompt engineering is a long-term skill set? I don't. It is a short-term fix for current AI limitations.
Before we go too far, let's review Techopedia's definition of prompt engineering:
Prompt engineering is an artificial intelligence (AI) technique to optimize and fine-tune language models for particular tasks and desired outputs. Also known as prompt design, it refers to the process of carefully constructing prompts or inputs for AI models to enhance their performance on specific tasks.
Numerous consultants suggest prompt engineering as the entryway for companies and organizations to adopt AI. In fact, a casual search reveals a wide variety of courses available for those who are interested.
I appreciate that prompting is necessary to become familiar with using AI. In 2023, it's essential for everyone to begin experimenting with it. But the approach is exceptionally tactical… and perhaps short-lived. Even Harvard Business Review gets this: "Despite the buzz surrounding it, the prominence of prompt engineering may be fleeting."
Here is why.
How Long Will Prompt Engineering Be Useful for AI?
Prompt engineering is the result of early-stage technology UX failures, à la hand-coding websites in the 90s. My No Brainer podcast partner Greg Verdino astutely pointed that out to me seven months ago when we started our efforts together.
The reason? You must manually input prompts to get an AI to generate the output you want. Then you have to refine the prompt, often several times, until it delivers the desired result.
It feels like teaching junior employees how to Google for information they want. Now, there is value in becoming an expert Googler, as any professional researcher can testify.
And the parallel is valid. Great data scientists know how to prompt better than 99 percent of the population. But 99 percent of people do not build or train AI algorithms and models, nor will they have to.
First, they simply need to get what they want out of AI solutions. Second, AI interfaces are improving rapidly, fueled by AI itself, to offer better input mechanisms for developers and everyday users alike.
More Intuitive Interfaces Are Arriving
When the World Wide Web launched, hand-coding websites was the norm. By the 2000s, blog engines and content management systems provided web developers and users with basic interfaces. By the mid-2010s, low-code, no-code solutions like Canva and advanced WordPress interfaces allowed vast swaths of the population to build content by dragging and dropping inputs. Today, entrepreneurs can quickly create their own websites without touching code.
The move toward more intuitive AI UX will be much quicker. We already see a wave of low-code, no-code development tools that help data science teams rapidly build AI apps. GitHub Copilot is the best example, increasing developer speed by 55%.
UX engineers are always looking to improve how we interface with technology, which is already happening with AI tools. Here are some examples I have experienced:
Adobe Firefly incorporates WYSIWYG controls into its interface to show people how to interact with the system.
ChatGPT offers examples of usable prompts in its UX.
Google Bard offers multiview searches with its answers, combining responses, reference articles, and deeper searches for those interested.
These are just text interfaces. It's not unreasonable to expect voice, tactile, and even video interfaces to AI tools. After all, most people's first interactions with AI were conversational dialogues with their smartphones or home speakers.
Some of the most recent LLM advances feature multimodal interfaces and outputs in consumer-facing AI. As the technology matures, these multimodal capabilities will make both input to and output from AI more natural, transcending any single medium.
As a result of better interfaces and the ability to prompt with multiple forms of media, interacting with AI will become much more natural and, dare I say, intuitive for humans. It's pretty safe to say prompt engineering will be a relatively short-lived phenomenon. Think a year, maybe two or three. More than that? No way.
So, How Should a Business Approach Adoption?
Companies investing significant resources in prompt engineers and similar "training" are barking up the wrong tree. Instead of chasing shiny objects, invest in change management so your workforce learns to incorporate AI naturally into its workflows.
As noted earlier, basic training on how to prompt the apps a team actually uses has value. Half of change management is simply making change easy and acceptable.
Beyond that, businesses need to consider not just AI's short-term uses but its long-term impact on their competitiveness. Adoption must begin with improving outcomes so the company can succeed.
"How can AI support the business strategy?" needs to be the question. Once good use cases are identified, adoption becomes more straightforward.
And about those consultants selling prompt engineering as the answer? They are either pointing companies in the wrong strategic direction or taking advantage of business users perplexed by the AI hype cycle.
It happens with every boom. That's why businesses need to consider their strategy before taking action. Otherwise, they will spend valuable resources developing prompt engineering skills across the company when investing in ways to strengthen business processes with AI would deliver more value.