Prompt Mastery Part I: Basics of prompt engineering
- Andrew Jarman
- Oct 13, 2024
- 4 min read
Updated: Nov 1, 2024
Large Language Models (LLMs) like GPT-4 are revolutionising how businesses handle a range of tasks, from automating customer support to generating marketing content. However, the key to unlocking their full potential lies in how you craft your prompts. Well-structured inputs guide the model to produce useful, relevant outputs, while poorly framed prompts can lead to subpar results. In this blog, we’ll explore the basics of prompting LLMs for business applications, focusing on best practices and how to optimise your inputs for tasks like generating emails, reports, or marketing copy.
Be Clear and Specific
In a business context, clarity is essential. The more precise your prompt, the more likely the model will generate a useful and relevant response. Vague instructions can lead to irrelevant outputs, which waste time and can create confusion. Always aim to provide specific details about what you need.
Example of a vague prompt:
Write a business email to a new client.
Improved prompt:
Write a business email to a new client, introducing our company’s accounting services and offering to set up an initial consultation.
Even better: add a brief note on the company name and its primary business domain, as in the sketch below.
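As an illustration, here is a minimal sketch of how such a prompt might be sent programmatically. It assumes the official openai Python package and an OPENAI_API_KEY environment variable; the model name and the company details are illustrative placeholders, not recommendations.
```
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A specific, detailed prompt gives the model far more to work with
# than "Write a business email to a new client".
prompt = (
    "Write a business email to a new client, introducing our company's "
    "accounting services and offering to set up an initial consultation. "
    "Our company is Acme Accounting, a small-business bookkeeping firm."  # hypothetical details
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```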
Provide Context and Background Information
Providing context helps the model understand your request in relation to the specific business scenario. Whether you’re asking for marketing copy, legal language, or product descriptions, more context will help the model align its response with your objectives.
Example context:
If you’re asking the model to generate a product description, including details about your target audience, key selling points, and tone of voice can make the output more tailored to your needs.
Vague prompt:
Write a product description for a high-end smartwatch.
Improved prompt:
Write a product description for a high-end smartwatch aimed at fitness professionals. Focus on the watch’s health tracking capabilities and long battery life, using a formal tone.
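One lightweight way to make sure every request carries this context is a small template function. This is just a sketch; the function and field names are my own, and you can extend them with whatever context matters for your business.
```
def product_description_prompt(product, audience, key_points, tone):
    """Assemble a context-rich prompt from structured fields."""
    points = ", ".join(key_points)
    return (
        f"Write a product description for {product} aimed at {audience}. "
        f"Focus on {points}, using a {tone} tone."
    )

prompt = product_description_prompt(
    product="a high-end smartwatch",
    audience="fitness professionals",
    key_points=["the watch's health tracking capabilities", "long battery life"],
    tone="formal",
)
print(prompt)
```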
Use Delimiters Like Triple Backticks (```) to Separate External Inputs from Instructions
When prompting an LLM with external information, such as data, code snippets, legal texts, or specific guidelines, it's best to 'delimit' this input using triple backticks (```). This gives the model an unambiguous boundary between your instructions and the background information you're supplying. Without delimiters, the model may confuse the two and generate less accurate responses.
This is especially important when the supplied text itself contains instructions, quotes, or questions that you don't want the model to act on.
Prompt with triple backticks (code interpretation):
Analyse the following Python code in triple backticks and explain what it does:
```
def calculate_discount(price, discount):
    return price - (price * discount / 100)
```
Prompt with triple backticks (quote from customer feedback):
Analyse the following customer feedback and summarise the main points:
```
I had a great experience with the service. The staff was friendly, and the delivery was quick. However, the packaging was slightly damaged when it arrived. Overall, I would recommend the service but suggest improving the packaging quality.
```
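If you are assembling prompts in code, you can add the delimiters programmatically. A minimal sketch follows; the helper name is my own, not a library function.
```
FENCE = "`" * 3  # triple backticks, built up so this example renders cleanly

def delimited_prompt(instruction, material):
    """Wrap external material in triple backticks so the model can
    clearly tell it apart from the instruction."""
    return f"{instruction}\n{FENCE}\n{material}\n{FENCE}"

feedback = (
    "I had a great experience with the service. The staff was friendly, "
    "and the delivery was quick. However, the packaging was slightly "
    "damaged when it arrived."
)
print(delimited_prompt(
    "Analyse the following customer feedback and summarise the main points:",
    feedback,
))
```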
Organise Responses with Lists and Bullet Points
In business communications, structured information is often more effective than a long paragraph of text. Whether you’re preparing a proposal, summarising a meeting, or providing recommendations, asking the model to use bullet points or numbered lists helps organise the information in a clear and professional way.
Example prompt with unspecified output:
What are the benefits of implementing an AI-driven customer support system for a retail business?
Example specifying organised output:
List three key benefits of implementing an AI-driven customer support system for a retail business in bullet points.
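Asking for a predictable structure also makes the output easier to handle downstream. The sketch below assumes the model honours the requested format; the reply shown is a made-up example for illustration, not real model output.
```
prompt = (
    "List three key benefits of implementing an AI-driven customer support "
    "system for a retail business. Format each benefit as a bullet point "
    "starting with '- '."
)

# Hypothetical reply for illustration; real output will vary.
reply = (
    "- 24/7 availability for customer queries\n"
    "- Faster response times on common issues\n"
    "- Lower cost per support interaction"
)

# A consistent format makes post-processing trivial.
benefits = [line.removeprefix("- ") for line in reply.splitlines()]
print(benefits)
```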
Rule 1 of prompt engineering: Iterate, Iterate, Iterate!
In business settings, especially when dealing with critical communications, reports, or data analysis, refining your prompts multiple times can be key to getting the best results. Large language models, while powerful, often benefit from clear and detailed instructions. By testing, adjusting, and refining your prompts, you can guide the model to produce outputs that better match your exact requirements.
Here’s how you can approach the iterative process:
Step 1: Start with a General Prompt
Begin with a broad or general instruction. This helps establish the basic framework of the response, but it may not provide enough detail for complex business tasks.
First prompt:
Write a project update for a client, we expect it'll be finished end of next week.
While this may generate an update, it’s likely to lack specifics and key details that are essential in a business context.
Step 2: Refine the Prompt by Adding More Specifics
In your next iteration, include more specific details to narrow down the scope of the response. Add relevant information that ensures the output will address all critical areas.
Refined prompt:
Write a project update for a client on the status of their new website development. Include details about the completed wireframes, current testing phase, and the expected completion date of 2025-01-31.
This refinement makes the model focus on specific stages of the project, ensuring that the client gets a clearer picture of progress and upcoming milestones. Note the exact date is used to eliminate any potential hallucination or misinterpretation.
Step 3: Further Adjustments for Tone and Structure
After reviewing the output, you may decide to fine-tune even further, for example by adjusting the tone if you need a more formal register, or by adding formatting instructions such as bullet points.
Final prompt:
Write a formal project update for a client regarding the new website development. Mention the completed wireframes, current testing phase, and expected completion of 2025-01-31. Structure the update in bullet points.
This iteration not only adds a formal tone but also introduces structure, improving readability and ensuring the update looks professional.
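To compare iterations side by side, you can run each version of the prompt through the same call and review the outputs together. A sketch, again assuming the openai package and an API key; the model name is illustrative.
```
from openai import OpenAI

client = OpenAI()

# Three iterations of the same request, from general to refined.
prompts = [
    "Write a project update for a client.",
    "Write a project update for a client on the status of their new website "
    "development. Include details about the completed wireframes, current "
    "testing phase, and the expected completion date of 2025-01-31.",
    "Write a formal project update for a client regarding the new website "
    "development. Mention the completed wireframes, current testing phase, "
    "and expected completion of 2025-01-31. Structure the update in bullet points.",
]

for i, prompt in enumerate(prompts, start=1):
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- Iteration {i} ---")
    print(response.choices[0].message.content)
```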
Conclusion
Implementing these practices is the first step on your journey to prompt mastery.
Iterating and testing your prompts is probably the most crucial practice when working with LLMs in business contexts. The process helps ensure you get optimal results, especially for important communications or when dealing with complex data.
Prompting isn't an exact science, and you shouldn't expect to get it right first time. You may need to change your prompts for different models, and different updates/versions of models!
If you're still struggling with your prompts, or you're using the same prompts over and over - we can help. Services we can offer include:
- Development of Custom GPTs with specific skills and integrations.
- Prompt-writing consultations.
- Integrating AI / LLM outputs into your current systems and workflows.