Improve application preformence
Discover expert tips to fine-tune your model's performance for optimal results
Prompt engineering is key for useful interactions with Language Learning Models (LLMs). It involves creativity, technical knowledge, and strategic planning. Here are some best practices by our experts to get precise and insightful responses from your LLM applications.
Define the persona
Outline the LLM's role and behavior. A clear role description can detail expertise, vernacular, tone, and style.
Explain the task
Assign a specific task to the LLM, usually a high-level overview of the inputs and outputs.
Specify output preferences
Describe the desired output. If your prompt is part of an AI workflow, an output template can prevent unexpected changes.
Additional considerations
- Guide the LLM on what the output should contain, not what it shouldn't.
- Encourage friendly prompting, not excessive flattery.
- If there's a conflict between user and system guidelines, the system usually takes precedence.
Prompt engineering isn't a "fail-fast" or "plug-and-play" process. Following these principles can improve performance and assist in developing better prompts for your LLM applications.
Please refer to our Superwise blog and webinar here. For a great comprehensive guide to prompt engineering, please visit this site.
Updated about 1 year ago