OpenAI’s new webpage, “GPT Best Practices,” provides six strategies, with supporting tactics, to maximize the effectiveness of Generative Pre-trained Transformers (GPTs) like ChatGPT-4. The information provided is very detailed, with many technical suggestions. The key message of OpenAI’s best practices guide is that while GPTs are capable of generating intricate, human-like text, user input and guidance are vital to attaining the best outcomes. The better your prompts, the better the answers GPT will provide. Success in large part depends on you.

This blog will summarize the six main strategies outlined by OpenAI and include my original Midjourney photoshopped images for right-brain impact. ChatGPT-4, the web-browsing Pro version, helped me write this, as did my WordPress software. It is all one big hybrid, multimodal, ediscovery.team effort.

Here is a synopsis of the six fundamental strategies provided by OpenAI for obtaining optimal results from GPTs:
- Write clear instructions: The AI cannot infer unstated user intent, hence the need for clarity. If users require shorter responses, they should ask for brevity. For more technical outputs, they should request expert-level writing. If a particular format is desired, the user should demonstrate that format. Essentially, the clearer the instructions, the more accurate the GPT’s output. (See the short code sketch after this list for what clear instructions can look like in practice.)
- Provide reference text: GPTs can sometimes fabricate answers, especially when dealing with complex or unfamiliar topics. Providing reference texts can help guide the GPT to produce more accurate and reliable responses.
- Split complex tasks into simpler subtasks: GPTs are better at handling simpler tasks, which have lower error rates. A complex task can be broken down into a series of simpler tasks, with the output of earlier tasks used to construct the inputs for subsequent ones.
- Give GPTs time to “think”: GPTs can make errors when required to provide instant responses. Asking for a chain of reasoning before an answer can help GPTs reason their way to more accurate conclusions.
- Use external tools: To compensate for GPTs’ limitations, the outputs of other tools can be utilized. If a task can be done more reliably or efficiently by another tool, it should be offloaded to that tool.
- Test changes systematically: To improve GPT performance, any changes made to a prompt should be tested systematically. A modification might improve performance in some instances but worsen it in others, so it’s crucial to test these changes across a range of examples.
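To make the first strategy more concrete, here is a minimal sketch of what clear instructions can look like when sent through OpenAI’s API rather than the chat window. It assumes the openai Python package (the pre-1.0 ChatCompletion interface) and an OPENAI_API_KEY environment variable; the persona, model name, and deposition excerpt are my own illustrative placeholders, not examples taken from OpenAI’s guide.

```python
# A minimal sketch of the "write clear instructions" strategy, assuming the
# openai Python package (pre-1.0 ChatCompletion interface) and an
# OPENAI_API_KEY environment variable. The persona, model name, and
# deposition text are illustrative placeholders only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Persona set in the system message; triple-quote delimiters mark the text
# to be analyzed; the prompt specifies both the format and the length.
system_msg = "You are a senior e-discovery attorney who explains issues plainly."
user_msg = (
    "Summarize the deposition excerpt delimited by triple quotes in exactly "
    "three bullet points, each under 20 words.\n\n"
    '"""\n'
    "The witness stated she first saw the disputed email on March 3, 2022, "
    "while preparing the quarterly report.\n"
    '"""'
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_msg},
        {"role": "user", "content": user_msg},
    ],
    temperature=0,  # lower temperature makes the output more predictable
)

print(response["choices"][0]["message"]["content"])
```

The same idea works in the regular ChatGPT window: state the persona, fence off the source text with delimiters, and spell out the format and length you want, rather than hoping the model guesses.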

These strategies can be implemented through specific tactics, each tailored to the particular strategy:
- For writing clear instructions, tactics include providing important details in your query, asking the model to adopt a persona, using delimiters to distinguish different parts of the input, specifying the steps required to complete a task, providing examples (known as “few-shot” prompting), and specifying the desired output length.
- When providing reference text, include sufficient additional context or source material for the GPT to understand the reference.
- To split complex tasks into simpler subtasks, the approach is to break down the task into a workflow of smaller, more manageable tasks.
- For giving GPTs time to “think”, the guide suggests asking the AI for a chain of reasoning before providing an answer, allowing it to work out a more accurate response.
- In using external tools, the idea is to use the outputs of other tools to complement the abilities of the GPT. For instance, a text retrieval system or a code execution engine can be used to augment the GPT’s abilities.
- For testing changes systematically, the suggestion is to develop a comprehensive test suite, also known as an “eval”, to measure the impact of modifications made to prompts. (A minimal eval sketch follows this list.)
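An “eval” does not have to be elaborate to be useful. Below is a minimal sketch of the systematic-testing idea, again assuming the pre-1.0 openai Python package with OPENAI_API_KEY set in the environment; the two prompt variants, the test questions, and the crude keyword scoring are hypothetical placeholders of my own, not OpenAI’s.

```python
# A minimal "eval" sketch: run the same test cases through two prompt
# variants and count correct answers. Assumes the openai Python package
# (pre-1.0 interface) with OPENAI_API_KEY set in the environment. The test
# cases, prompt templates, and scoring rule are hypothetical placeholders.
import openai

TEST_CASES = [
    {"question": "Is a signed non-disclosure agreement a contract?", "expected": "yes"},
    {"question": "Is every unsigned draft memo automatically privileged?", "expected": "no"},
]

PROMPT_VARIANTS = {
    "terse": "Answer yes or no: {question}",
    "reasoned": "Think step by step, then end with a final 'yes' or 'no': {question}",
}


def ask(prompt: str) -> str:
    """Send one prompt to the model and return its reply in lowercase."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"].lower()


for name, template in PROMPT_VARIANTS.items():
    correct = 0
    for case in TEST_CASES:
        reply = ask(template.format(question=case["question"]))
        # Crude scoring: the expected answer must appear somewhere in the reply.
        if case["expected"] in reply:
            correct += 1
    print(f"{name}: {correct}/{len(TEST_CASES)} correct")
```

The point is simply that when you tweak a prompt, you rerun the same set of cases and compare scores, rather than judging the change by a single impressive answer.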

The bottom line of the information provided here is that although GPTs are capable tools, the quality of their output depends on: the clarity of the instructions they receive; sufficient context in any reference text provided; your ability to decompose complex tasks into simpler ones; having the time to “think”; the use of external tools when necessary; and the systematic testing of changes. If you learn to use these strategies and tactics, you can significantly enhance the effectiveness of your interactions with all of OpenAI’s GPT models.

Conclusion
Implementing the strategies and tactics suggested by OpenAI can, indeed, help users get the most out of ChatGPT. It seems to me OpenAI should have provided these instructions upon launch. Maybe then so many newbies (and really, we are all newbies with this new software) would not have complained so much about the accuracy, relevance, and quality of its outputs. Basically, OpenAI is invoking the old saying, “Garbage In, Garbage Out.”

I hope we see many more instructions like this from OpenAI in the coming months. In the meantime, there are hundreds of software hackers who have attained some level of prompt engineering skill and are already sharing their prompting tips. I have even ventured into this territory by sharing some of my more interesting prompt experiments, such as: Prompting a GPT-4 “Hive Mind” to Dialogue with Itself on the Future of Law, AI and Adjudications; and ChatGPT-4 Prompted To Talk With Itself About “The Singularity”.

As OpenAI points out, the Six Strategies and Tactics given here can be used in a variety of ways. It all depends, as lawyers love to say, on the particular use case. It also depends on the capabilities of the particular GPT model you use. There are already many variations, with 3.5 being the first and weakest.
As always, I encourage everyone to go hands-on with this. Hack around with this new software yourself. If you are a lawyer or other professional handling client confidential information, be extremely careful in its use for all client work. Make sure you engage privacy settings and do not expose client confidential information. Legal ethics and common sense also require that you verify very carefully all of GPT’s output, especially in these early days. Your trust level should be low and your skepticism high.

So go ahead, experiment and adapt these six strategies to suit your needs and requirements. Just remember, it may seem like you are dealing with a great savant here, but never forget that ChatGPT is an Idiot-Savant. Just a child really, but with a big vocabulary. It is prone to forgetfulness, memory limitations, hallucinations, outright errors, ethics jailbreaks, and many, many other humanlike foibles. It may seem like a genius in a box, but it is not. It is more like a bottom-of-the-class law student who somehow sounds smarter than he is, especially to non-experts. Still, he did somehow get into law school and might be able to pass your state’s Bar Exam.
See my blog for many more articles about ChatGPT’s unique foibles. Finally, note my e-Discovery Team blog now has a handy new, easy-to-remember web address: EDISCOVERY.TEAM. Yup, .team is a valid top-level domain, and you don’t have to remember to put a hyphen between the e and the d. Yes, we humans are prone to forgetfulness too.

Ralph Losey 2023 Copyright — ALL RIGHTS RESERVED