Prompt engineering is a collection of strategies and tactics for getting better results from large language models (sometimes referred to as GPT models). Here, I will explain the basic ideas and the specific tactics worth considering.
1. Write clear instructions
The model can't read your thoughts. If the output is too long, try asking for a brief answer. If the output is too simple, try asking for expert-level writing. And if you don't like the format of the output, try demonstrating the format you want. The less the model has to guess about what you're looking for, the more likely you are to get the results you want.
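For illustration, here is a minimal sketch in Python of the difference between a vague and a clear instruction, assuming a chat-style message format; the prompts themselves are invented examples, not taken from this article.

```python
# A vague request versus a clear one: the clear version states the audience,
# the level, the format, and the length, so the model has less to guess.
vague_prompt = "Explain prompt engineering."

clear_prompt = (
    "Explain prompt engineering to a software engineer who has never used an "
    "LLM API. Write at an expert level, as exactly three bullet points, "
    "each under 25 words."
)

# Either string would become the user message of a chat request.
messages = [{"role": "user", "content": clear_prompt}]
print(messages)
```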
2. Specify the steps to complete the task
Some tasks are best specified as a sequence of steps. Writing the steps out explicitly makes it easier for the model to follow them.
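A minimal sketch of spelling the steps out in a system message, again assuming a chat-style message format; the summarize-then-translate workflow is an illustrative assumption.

```python
# The system message spells out the exact sequence of steps instead of
# leaving the workflow implicit.
system_message = (
    "Follow these steps to answer the user:\n"
    "Step 1 - Summarize the user's text in one sentence, prefixed with 'Summary:'.\n"
    "Step 2 - Translate the summary into French, prefixed with 'Translation:'."
)

messages = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": "Large language models generate text by predicting the next token."},
]
```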
3. Provide an example
In general, instructions that apply to every case are more efficient than demonstrating all the variants of a task with examples. In some cases, however, it is easier to show the model what you want by providing examples. This is called a “few-shot” prompt.
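A minimal sketch of a few-shot prompt, assuming a chat-style message format: two worked turns show the desired style before the real question is asked.

```python
# Two worked turns demonstrate the desired style; the final user message is
# the real question the model should now answer in the same style.
messages = [
    {"role": "system", "content": "Answer in a consistent, poetic style."},
    {"role": "user", "content": "Teach me about patience."},
    {"role": "assistant", "content": "The river that carves the deepest valley flows from a modest spring."},
    {"role": "user", "content": "Teach me about the ocean."},
]
```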
4. Specify the target length of the output
You can ask the model to produce output of a specific target length. The target output length can be specified in terms of words, sentences, paragraphs, bullet points, and so on.
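A minimal sketch of stating a target length in countable units; the meeting-notes scenario is an invented placeholder.

```python
# Stating the length in units the model can count reliably (bullet points,
# sentences) tends to work better than an exact word count.
notes = "...paste the meeting notes here..."  # placeholder input

user_prompt = (
    "Summarize the meeting notes below in exactly 3 bullet points, "
    "each no longer than one sentence.\n\n"
    f'"""{notes}"""'
)

messages = [{"role": "user", "content": user_prompt}]
```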
5. Provide reference text
You can provide the model with reliable reference information and instruct it to use that information when composing its answer.
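A minimal sketch using the OpenAI Python SDK; the client call pattern, model name, and warranty question are assumptions for illustration, not details from this article.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

reference_text = "..."  # trusted source material, e.g. a product manual excerpt

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whichever you use
    messages=[
        {
            "role": "system",
            "content": (
                "Answer using only the reference text delimited by triple quotes. "
                "If the answer is not in the text, reply 'I could not find an answer.'"
            ),
        },
        {
            "role": "user",
            "content": f'"""{reference_text}"""\n\nQuestion: What is the warranty period?',
        },
    ],
)
print(response.choices[0].message.content)
```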
6. Break complex tasks into subtasks
Just as it is good practice in software engineering to break a complex system down into a set of modular components, the same applies to the tasks you give a language model.
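A minimal sketch of decomposing a task into two calls: first classify a support ticket, then answer it with category-specific instructions. The `ask()` helper and the support-ticket scenario are hypothetical, and the helper is deliberately left unimplemented.

```python
# Hypothetical helper: one chat completion round trip. Wire it to whichever
# chat client you use; it is intentionally a stub in this sketch.
def ask(system: str, user: str) -> str:
    raise NotImplementedError("send one chat request and return the reply text")


def answer_support_ticket(ticket: str) -> str:
    # Subtask 1: classify the ticket so the second prompt can be specific.
    category = ask(
        "Classify the customer ticket as 'billing', 'technical', or 'other'. "
        "Reply with the single word only.",
        ticket,
    )
    # Subtask 2: answer with instructions tailored to that category.
    return ask(
        f"You are a support agent handling a {category} issue. "
        "Resolve the ticket below step by step.",
        ticket,
    )
```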
7. Give models time to “think”
You can get more reliable answers by giving the model time to work out an answer rather than forcing it to answer right away. By understanding these strategies and tactics, we can get better results from large language models, and I will go into each of them in more detail below.
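To close, here is a minimal sketch of tactic 7: the model is told to work out its own solution before judging a student's answer. The arithmetic problem and the incorrect student answer are invented examples, not taken from this article.

```python
# Instead of asking for an immediate verdict, the prompt makes the model
# reason through its own solution first and only then compare.
system_message = (
    "First work out your own solution to the problem, showing your reasoning. "
    "Then compare it to the student's solution, and only after that state "
    "whether the student's solution is correct."
)

messages = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": "Problem: 17 * 24 = ?\nStudent's solution: 17 * 24 = 398"},
]
```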