Why Your Prompt Engineers Need More Than ChatGPT Skills
How Professional Prompt Engineers Deliver Repeatable Results For Businesses
This spring, TIME reported that prompt engineers are in such hot demand that they can make up to $335,000 a year — no college degree required.¹ Setting aside the fact that the ChatGPT hype has spawned an abundance of new bootcamps and get-rich-quick schemes, computer scientists are wondering: “$335,000 a year? How is that even possible?” Their rationale: prompt engineering is not even an actual engineering discipline.
ChatGPT has taken the world by storm this year. Regardless of your role, domain, or technical background, chances are you’ve already engineered a prompt yourself, for example, the last time you instructed ChatGPT, Bing, Bard, or Midjourney to generate an output for you. But if you think that’s all there is to prompt engineering for generative AI projects, think again. Over the next few weeks, I’ll cover various aspects of prompt engineering and why it is more than asking ChatGPT to spit out an answer for you. In this post, we’ll look at different approaches to creating a prompt.
» Watch the latest episodes on YouTube or listen wherever you get your podcasts. «
Prompt Engineering Is More Than ChatGPT
It’s good if you already have a basic understanding of interacting with generative AI systems like ChatGPT. But there’s a difference between casual prompt engineers and professional prompt engineers:
Casual
- Use ChatGPT to create an elementary school graduation speech
- Pay a flat fee for a consumer-grade application
- Enter prompts to generate output
Professional/developer
- Create a prompt that will deliver repeatable output within an application
- Pay per transaction
- Experiment with types of prompts and programmatically deliver high-quality results
To do prompt engineering well as part of a software development project, there are many more aspects to consider beyond writing and submitting instructions. Effective prompt engineering has a significant impact on your company’s ability to monetize your generative AI-enabled product and to see a return on investment (ROI). A few months ago, I asked: Is prompt engineering the “sexiest job” of the 21st century?
One of the fundamentals of prompt engineering is understanding the different approaches you can use to receive the desired output from the model. But what’s a prompt anyway?
Three Approaches To Creating Prompts
A prompt is a written instruction that a user provides to a generative AI model so that it generates an output. Instead of a user entering the prompt directly, it can also be part of an application and be submitted in the background as a function of that application (e.g. to create a product description or a social media post).
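To make the “prompt as a function of an application” idea concrete, here is a minimal sketch of how an application might assemble a prompt from its own data before sending it to a model. The template, function name, and product data are all illustrative assumptions, not part of any specific product; the resulting string is what would be submitted to the LLM behind the scenes.

```python
# Illustrative sketch: an application fills a prompt template with its own
# data. The user never sees or types this prompt themselves.

PRODUCT_DESCRIPTION_TEMPLATE = (
    "Write a product description for '{name}'. "
    "Highlight these features: {features}. "
    "Keep it under {max_words} words."
)

def build_prompt(name: str, features: list[str], max_words: int = 50) -> str:
    """Assemble the prompt string that the application submits to the model."""
    return PRODUCT_DESCRIPTION_TEMPLATE.format(
        name=name,
        features=", ".join(features),
        max_words=max_words,
    )

# Hypothetical product data pulled from the application's database:
prompt = build_prompt("Strike King Pro", ["reactive resin", "15 lb"], max_words=40)
print(prompt)
```

From the user’s perspective, they clicked a “generate description” button; the prompt engineering happened in the background.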
There are three different approaches for generating output using a prompt: zero-shot, one-shot, or few-shot. In essence, this refers to whether you are trying to get the model to generate output based on zero, one, or a few examples. The more context and the more examples you provide, the more specific the generated output will be. If you’ve used ChatGPT, you’ve likely experienced this yourself.
What does all of that have to do with prompt engineering?
Let’s assume you want to develop an application for a bowling team. You want your large language model (LLM) to generate output that follows a defined structure and format — for example, a file that has pairs of property names and values, such as JavaScript Object Notation (JSON).
Using a zero-shot approach, your prompt would be: “Create a JSON file for a bowling club application.” The generated file is divided into team and games with several properties for each such as firstName, lastName, etc.
// Bowling Club Application -- ZERO shot example
{
"team": {
"name": "String",
"members": [
{
"id": "Number",
"firstName": "String",
"lastName": "String",
"nickName": "String",
"email": "String",
"handicap": "Number",
"games": [
{
"id": "Number",
"date": "Date",
"score": "Number",
"strikes": "Number",
"spares": "Number",
"openFrames": "Number"
}
]
}
],
"games": [
{
"id": "Number",
"date": "Date",
"location": "String",
"opponentTeam": "String",
"teamScore": "Number",
"opponentScore": "Number",
"win": "Boolean"
}
]
}
}
In a one-shot approach, your prompt would be: “Create a JSON file for a bowling club application. The JSON file has the following structure: [ENTER THE OUTPUT OF THE PREVIOUS PROMPT HERE]” You will notice the pre-populated data that follows the same structure we previously defined (e.g. "firstName": "John").
// Bowling Club Application Data -- ONE shot example
{
"team": {
"name": "The Bowling Stones",
"members": [
{
"id": 1,
"firstName": "John",
"lastName": "Doe",
"nickName": "Johndoe",
"email": "johndoe@example.com",
"handicap": 10,
"games": [
{
"id": 101,
"date": "2023-06-30",
"score": 150,
"strikes": 5,
"spares": 7,
"openFrames": 3
},
{
"id": 102,
"date": "2023-07-01",
"score": 180,
"strikes": 6,
"spares": 8,
"openFrames": 1
}
]
},
{
"id": 2,
"firstName": "Jane",
"lastName": "Smith",
"nickName": "Janesmith",
"email": "janesmith@example.com",
"handicap": 15,
"games": [
{
"id": 201,
"date": "2023-06-30",
"score": 160,
"strikes": 6,
"spares": 6,
"openFrames": 3
},
{
"id": 202,
"date": "2023-07-01",
"score": 170,
"strikes": 5,
"spares": 9,
"openFrames": 1
}
]
}
],
"games": [
{
"id": 1001,
"date": "2023-06-30",
"location": "Bowling Arena A",
"opponentTeam": "The Pin Pals",
"teamScore": 310,
"opponentScore": 280,
"win": true
},
{
"id": 1002,
"date": "2023-07-01",
"location": "Bowling Arena B",
"opponentTeam": "The Gutter Gang",
"teamScore": 350,
"opponentScore": 330,
"win": true
}
]
}
}
And following a few-shot approach, your prompt would include more formatting examples of the expected output (similar to the one-shot example): “Create a JSON file for a bowling club application. The JSON file has the following structure [EXAMPLE 1] or [EXAMPLE 2] or [EXAMPLE 3].”
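The pattern behind all three approaches is the same: an instruction, followed by zero or more formatting examples. A few-shot prompt can therefore be assembled programmatically. The sketch below is an illustrative assumption of how an application might do this; the function name and the truncated example strings are placeholders, not a real API.

```python
# Illustrative sketch: building a few-shot prompt from stored examples.
# The example strings below are shortened stand-ins for full JSON samples
# like the bowling club files shown earlier.

def few_shot_prompt(instruction: str, examples: list[str]) -> str:
    """Concatenate an instruction with numbered formatting examples."""
    parts = [instruction, "The JSON file has the following structure:"]
    for i, example in enumerate(examples, start=1):
        parts.append(f"Example {i}:\n{example}")
    return "\n\n".join(parts)

prompt = few_shot_prompt(
    "Create a JSON file for a bowling club application.",
    [
        '{"team": {"name": "The Bowling Stones", "members": [], "games": []}}',
        '{"team": {"name": "The Pin Pals", "members": [], "games": []}}',
    ],
)
print(prompt)
```

With an empty `examples` list, the same function degrades gracefully into a zero-shot prompt; with one entry, it is one-shot.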
Businesses Need Repeatable Results From Generative AI
As your teams move from early experimentation to productizing generative AI, one thing becomes clear: generating repeatable results with generative AI is critical and hard to achieve. After all, generative AI is not based on static rules that deliver the same result every time. You can, however, get an LLM to generate output according to a format you provide, such as the bowling club application in the previous section. And you will likely need to add validity checks to verify that the generated output really contains all the information you need. That’s why good prompt engineers need to know more than what they’ve tried at home in ChatGPT. Because…
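A validity check of this kind can be as simple as parsing the model’s raw text and confirming the expected keys are present. The sketch below is a minimal, assumed example based on the bowling club structure above; `llm_output` stands in for whatever raw string your model returns, and the required keys are my own illustrative choice.

```python
import json

# Minimal sketch of a post-generation validity check for the bowling club
# JSON structure. Real applications would likely use a schema validator,
# but the principle is the same: parse, then verify.

REQUIRED_TEAM_KEYS = {"name", "members", "games"}

def validate_team_json(llm_output: str) -> bool:
    """Return True only if the output parses as JSON and has the expected keys."""
    try:
        data = json.loads(llm_output)
    except json.JSONDecodeError:
        return False
    team = data.get("team")
    return isinstance(team, dict) and REQUIRED_TEAM_KEYS.issubset(team)

good = '{"team": {"name": "The Bowling Stones", "members": [], "games": []}}'
print(validate_team_json(good))       # True
print(validate_team_json("not json")) # False
```

If the check fails, the application can re-prompt the model rather than pass malformed data downstream, which is exactly the kind of engineering that separates a production system from a chat session.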
The more repeatable the structure of the output needs to be, the more expensive it will be to generate.
In the next post, I’ll cover how that looks exactly and why that’s important…
What’s next?
Appearances
July 10 - Monday Morning Data Chat with Joe Reis & Matt Housley on whether your business should chase generative AI.
Join us for the upcoming episodes of “What’s the BUZZ?”
August 1 - Scott Taylor, aka “The Data Whisperer”, will let us in on how effective storytelling helps you get your AI project funded.
August 17 - Supreet Kaur, AI Product Evangelist, and I will talk about how you can upskill your product teams on generative AI.
August 29 - Eric Fraser, Culture Change Executive, will join and share his first-hand experience of how much of his leadership role he is able to automate with generative AI.
Follow me on LinkedIn for daily posts about how you can set up & scale your AI program in the enterprise. Activate notifications (🔔) and never miss an update.
Together, let’s turn hype into outcome. 👍🏻
—Andreas
¹ How to Get a Six-Figure Job as an AI Prompt Engineer, TIME, 14 April 2023