Prompt Engineering Is Dead — Long Live Prompt Engineering!
Why You Should (Still) Learn This Critical Skill Before It Becomes Obsolete
This spring, TIME reported that prompt engineers can make up to $335,000 a year, no college degree required. Aside from the abundance of new bootcamps and get-rich-quick schemes the ChatGPT hype has spawned, computer scientists are wondering: “$335,000 a year? How is that even possible?” Their rationale: prompt engineering is not an actual engineering discipline.
ChatGPT has taken the world by storm this year. Regardless of your role, domain, or technical background, chances are you’ve engineered a prompt yourself, for example, the last time you instructed ChatGPT, Bing, Bard, or Midjourney to generate an output for you. But if you think that’s all there is to prompt engineering for generative AI projects, think again. Over the next few weeks, I’ll cover various aspects of prompt engineering in business and why there is merit to learning this skill.
In my last posts, I shared why your prompt engineers need more than ChatGPT skills, why they’re worth the money they’re making, and how prompting changes the game for software developers. Today, we’ll look at why prompt engineering will eventually become obsolete. What could that look like?
Is Prompt Engineering Just A Fad?
AI whisperer, prompt engineer, prompt designer, etc. — whatever you call it, instructing an AI model through natural language to generate an output is a fairly new skill. It’s only existed for about 9-12 months, maybe 36 months if you’re being generous. That’s the time frame for which ChatGPT, DALL-E 2, and GPT-3 have been publicly available. So, naturally, those who have used these products for some time (and who know how to use them) are in hot demand. But along with the excitement about this new career opportunity, there’s also a looming question: “How long will we even need to engineer prompts before a different interface becomes the dominant method?”
GPT-3 has been available since 2020. At least since then, prompting has been the dominant method to elicit an output from large language models. But it was Midjourney, DALL-E 2, ChatGPT, etc. that have made prompt engineering a new phenomenon these last few months. Meanwhile, the vast majority of the workforce has not created a single prompt. So, despite the rush toward generative AI, if you’re considering building or expanding your prompt engineering skills, you’re still ahead of the curve. Yet crafting a prompt that quickly delivers the desired output is not trivial. It often requires experimentation and iteration.
If history serves as a predictor, we can assume that the task of prompt engineering is a temporary one. It’s just the beginning, a stepping stone toward higher-level methods and services that instruct a model to generate an output. And prompting will become more convenient and reliable, too, before it becomes obsolete. Let’s look at how this could evolve.
Three Potential Scenarios For The Future Of Prompt Engineering
I see three potential scenarios for how prompt engineering might evolve:
The “status quo” is as good as it gets. Prompting remains the dominant way to get a model to generate an output, full stop. Anyone looking to work with a foundation model will need to learn prompting skills, just like anyone using academic search needs to understand the subject matter and Boolean logic to define a search query. However, this is also the most unlikely scenario of the three. We can assume that prompting will go through an evolution.
Prompting will get easier. In this scenario, it will get easier in two ways and for two stakeholder groups: (a) prompt engineers who create the actual prompts and (b) end users who work with generative-AI-based applications.
(a) Higher-level prompting
Like programming languages before it, prompting will go through a multi-year (maybe decade-long) evolution. We’ve seen this with programming languages, from assembly to C/C++ to C#, Java, and beyond. Now, the programming language is simply natural language. The evolution means moving up the stack, from where the model is to where the application is. Just like C# and Java take care of memory allocation and garbage collection for you, higher-level prompting methods could reduce the number of manual instructions you need to give the model. This will be great for software developers, who can build more reliable applications on top of generative AI while reducing the burden of hand-writing constraints and priming.
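To make the analogy concrete, here is a minimal sketch of what such a higher-level prompting abstraction might look like. The class name, fields, and template format are purely illustrative, not any vendor’s actual API: the point is that the abstraction handles priming (the role) and output constraints once, so the developer only supplies the task-specific pieces.

```python
from dataclasses import dataclass, field

@dataclass
class PromptTemplate:
    """Hypothetical higher-level prompt abstraction: the template
    bakes in priming and output constraints so developers work at
    the application level, not the raw-prompt level."""
    role: str                                         # priming: who the model should act as
    task: str                                         # the instruction, with {placeholders}
    constraints: list = field(default_factory=list)   # reusable output constraints

    def render(self, **kwargs) -> str:
        # Assemble the full prompt from the reusable parts.
        parts = [f"You are {self.role}."]
        parts.append(self.task.format(**kwargs))
        if self.constraints:
            parts.append("Constraints: " + "; ".join(self.constraints))
        return "\n".join(parts)

# Define the template once; reuse it across the application.
summarize = PromptTemplate(
    role="a concise technical editor",
    task="Summarize the following text in one sentence:\n{text}",
    constraints=["plain language", "no more than 25 words"],
)

prompt = summarize.render(text="Large language models generate text from prompts.")
```

The developer calls `render()` with only the changing input; the priming and constraints travel with the template, much like a runtime handles memory so you don’t have to.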
(b) User experience gets more user friendly
Like the evolution from command-line instructions to a graphical user interface (GUI) to speech and gestures, generative AI tools will become even easier to use. We’re already seeing some of this in today’s sprawl of generative-AI-based applications, and we can expect that additional instructions will become optional or unnecessary.
There won’t be any prompting. It just won’t be necessary anymore. Earlier this year, we saw the first examples of autonomous agents (e.g., Auto-GPT or BabyAGI): self-optimizing agents that act on our behalf. We define the goal, and they determine the optimal way to reach it. With additional data and available historic information, the effort to provide context to the model or agent will be drastically reduced.
Why Prompt Engineering Is Still Worth Learning
Whatever role prompt engineering plays in the future doesn’t solve your problem today. As businesses look to develop generative-AI-driven applications, prompt engineering skills are in hot demand (hence the $335,000 salary). Progress in AI has been so rapid recently that it’s hard to predict how soon things will evolve. Given that prompt engineering has really only existed for about 12 months, it’s still a nascent method and skill. That means even if you start learning it today, you’ll still be ahead of the vast majority of the workforce that doesn’t work with generative AI. In addition, prompts behave differently across vendors: the same prompt will generate different outputs from different models. Hence, learning prompting strategies per model or vendor will be needed as well.
Depending on your role, you don’t need to be a “master” prompt engineer, at least not when you’re starting out. Good enough is just fine, just like you don’t have to be a master of Google search. Being able to experiment with prompts, test different iterations, and define optimal outcomes are key skills for professional prompt engineers, and developing evaluation frameworks and strategies is key in software-development-centric roles. Taking all of this into account, that’s why prompt engineering is still worth learning, before it eventually becomes obsolete.
How do you see human interaction with generative AI evolve over time?
Develop your AI leadership skills
Join my bi-weekly live stream and podcast for leaders and hands-on practitioners. Each episode features a different guest who shares their AI journey and actionable insights. Learn from your peers how you can lead artificial intelligence, generative AI & automation in business with confidence.
Join us live
August 17 - Supreet Kaur, AI Product Evangelist, and I will talk about how you can upskill your product teams on generative AI.
August 29 - Eric Fraser, Culture Change Executive, will join and share his first-hand experience of how much of his leadership role he is able to automate with generative AI.
Watch the latest episodes or listen to the podcast
Find me here
August 29 - Fireside Chat at Generative AI Online, on why building generative AI products requires more than ChatGPT.
Follow me on LinkedIn for daily posts about how you can lead AI in business with confidence. Activate notifications (🔔) and never miss an update.
Together, let’s turn HYPE into OUTCOME. 👍🏻
—Andreas