Guide Your LLM: Four Prompting Techniques For Better Results
Expand Your Repertoire To Generate More Relevant Output
The first programming language I ever learned, back in high school, was BASIC. Moving to C in vocational college was a step up, but compiling and running code still depended heavily on the underlying system architecture — let alone C++ and its pointers to raw memory addresses, which made for extra troubleshooting and crashes. While there are still software developers coding in Assembler (for hardware programming), the vast majority have moved on to higher-level programming languages with built-in memory management.
Industry experts agree that prompting will follow a similar trajectory. It is one of the new skills that will be necessary in the short run and that will be replaced over time by easier ways of interacting with Generative AI models. For now, AI leaders and engineers still need to know the basics of prompting and how to achieve repeatable results at the lowest cost (i.e., the fewest tokens).
What Is Prompting?
A prompt is a set of instructions that is submitted to a Generative AI model (e.g. LLM) to generate an output. For example:
Act as a seasoned social media marketer. Write a blog post about [X]. Use a professional, neutral tone.
Based on the LLM being used, both the instructions and the output will vary by vendor and modality (text, image, audio, video) — for example, between ChatGPT and Midjourney. Beyond that, there are different techniques and approaches for AI developers to accomplish their objective and generate valid output. But not every instruction is a straightforward case of: “Do this, then that.”
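To make this concrete, here is a minimal Python sketch of how such an instruction pattern (persona, task, tone) can be assembled as a plain string. The helper name is my own, and the commented-out API call and model name are assumptions for illustration, since every vendor’s client differs:

```python
def make_prompt(persona: str, task: str, tone: str) -> str:
    """Compose the instruction pattern above: persona, task, tone."""
    return f"Act as {persona}. {task} Use a {tone} tone."

prompt = make_prompt(
    "a seasoned social media marketer",
    "Write a blog post about prompting techniques.",
    "professional, neutral",
)
print(prompt)

# Submitting the prompt is vendor-specific, e.g. with the OpenAI client
# (model name assumed for illustration):
# from openai import OpenAI
# response = OpenAI().chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user", "content": prompt}],
# )
```

The point is simply that a prompt is data, not code: the same template can be reused across tasks by swapping out its parts.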
The Most Relevant Prompting Techniques
The key to generating more specific and relevant output lies in abstracting examples and breaking complex problems down into simpler ones. The guidance I have come across frequently recommends that AI developers mimic human problem-solving techniques when exploring prompting techniques.
So, here are four common prompting techniques for guiding an LLM to generate output based on task complexity and level of abstraction:
Few-shot (low complexity / low abstraction): Provide a few examples to guide the model’s understanding of the task at hand.
Example: Generate new output in the format/structure/tone of a few examples. (The example labels are intentionally randomized; the format still guides the model to the correct classification.)

```
## Prompt
This is great! // Negative
This is terrible! // Positive
Wow that show was nice! // Positive
What an awful episode! //

## Output
Negative
```

Step-Back (low complexity / high abstraction): Abstract core concepts to derive high-level principles from basic examples.
Example: Apply the concept of buoyancy to other scenarios involving fluid mechanics by providing an example of a floating object.

```
## Prompt
Where did Jane Doe work between March 2019 and September 2019?

## Step-back question
What is Jane Doe's employment history?

## Output
A: Jane Doe worked for DeepMind from March 2019 to September 2019.
```
Chain of Thought (high complexity / low abstraction): Break down a complex problem into a series of smaller, manageable steps.
Example: Solve mathematics problems by breaking down a complex problem into a sequence of simpler steps.

```
## Prompt
The odd numbers in this group add up to an even number: 2, 7, 9, 13.
A: Adding the odd numbers (7, 9, 13) gives 29. The answer is False.
The odd numbers in this group add up to an even number: 15, 8, 5, 3.
A:

## Output
A: Adding the odd numbers (15, 5, 3) gives 23. The answer is False.
```
Tree of Thought (high complexity / high abstraction): Explore multiple solution paths or branches of reasoning.
Example: Diagnose failures by exploring a variety of symptoms that could point towards possible root causes.

```
## Prompt
## Source: https://github.com/dave1010/tree-of-thought-prompting
Imagine three different experts are answering this question.
All experts will write down 1 step of their thinking, then share it with the group.
Then all experts will go on to the next step, etc.
If any expert realises they're wrong at any point then they leave.
The question is...
```
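To show how such templates translate into code, here is a small Python sketch that assembles a few-shot prompt and a Chain of Thought prompt as plain strings. The helper names and structure are my own illustration, not part of any library:

```python
# Few-shot: labeled examples followed by the unlabeled query.
def build_few_shot_prompt(examples, query):
    lines = [f"{text} // {label}" for text, label in examples]
    lines.append(f"{query} //")
    return "\n".join(lines)

# Chain of Thought: prepend a worked, step-by-step example to the new question.
def build_cot_prompt(worked_example, question):
    return f"{worked_example}\n{question}\nA:"

sentiment_examples = [
    ("This is great!", "Negative"),
    ("This is terrible!", "Positive"),
    ("Wow that show was nice!", "Positive"),
]
print(build_few_shot_prompt(sentiment_examples, "What an awful episode!"))

worked = (
    "The odd numbers in this group add up to an even number: 2, 7, 9, 13.\n"
    "A: Adding the odd numbers (7, 9, 13) gives 29. The answer is False."
)
print(build_cot_prompt(
    worked,
    "The odd numbers in this group add up to an even number: 15, 8, 5, 3.",
))
```

Separating the template logic from the examples makes it easy to measure which technique gives the most repeatable results for a given task and token budget.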
What Prompting Will Teach Us
Whatever the future might hold for prompting, one thing is clear: in the meantime, we will learn to ask better questions and to describe problems in natural language, rather than describing the path to a solution step by step. What will be similar to previous generations of software engineering is learning to develop structured approaches to problem solving. The key difference this time, though, is the broad reach and low barrier to entry: plain, natural language. That is what makes Generative AI so much more accessible.
You don’t have to be a master prompt engineer right away. But some degree of prompt design will make its way into most product roles — at least for now. That’s why it is important to understand the available techniques and when to use which. Combined with approaches such as Retrieval-Augmented Generation (RAG), building better prompts has a direct impact on the quality and relevance of the output that LLMs generate and on the transactional cost of generating it.
As for me, I’m pretty excited that we’re moving even further up the stack, and that I now just need to worry about giving pointers (to a model) rather than assigning them (in C++).
How will you use these prompting techniques in your work?
Become an AI Leader
Join my bi-weekly live stream and podcast for leaders and hands-on practitioners. Each episode features a different guest who shares their AI journey and actionable insights. Learn from your peers how you can lead artificial intelligence, generative AI & automation in business with confidence.
Join us live
December 05 - Mark Stouse, CEO Proof Analytics, will discuss how managers can teach their data scientists about the business.
December 14 - Enrico Santus, Human Computation Leader, will share how you can design adaptive processes for human-AI collaboration.
January 09 - “What’s the BUZZ?” will be back for the 2024 kick-off. Stay tuned for the line-up of guests in January!
Watch the latest episodes or listen to the podcast
Follow me on LinkedIn for daily posts about how you can lead AI in business with confidence. Activate notifications (🔔) and never miss an update.
Together, let’s turn HYPE into OUTCOME. 👍🏻
—Andreas