Help! My AI needs a psychiatrist. Either that or Open AI has some explaining to do.
The “Continue” prompt that used to allow GPT-4 to provide lengthy answers in multiple responses has apparently been eliminated by Open Ai. Or at least that is what GPT-4 told me today. If this is true, and not just another GPT hallucination, that is a major setback, at least for me. Read on to learn why and how. (All images here are by me, Losey, and Midjourney.)
Being a typical lawyer or literati with a head full of words, many of my questions require answers that are more than 8,192 tokens, which on average for Open AI is 6,301 words. This word count limitation includes both the prompts and responses for the whole chat session. So, if your prompt was 301 words long, there would only be 6,000 words left for GPT-4 to work with to respond.
GPT-4-32k, another model not available to the public, even for pro paying users like me, increases the token limit from 8,192, to 32,768. But it is still a limit. For more on the problems with token size limits and legal applications, see my prior article: ChatGPT Has Severe Memory Limitations: Judges, Arbitrators and Commercial Litigation Lawyers, Your Jobs Are Safe, For Now.
Back to what crazy Chat-GPT-4 told me today, using the May 24, 2023, Plugin version, below is the prompt and response I am reporting on here. I asked the questions below because the AI kept failing to break up a longer response into multiple responses, like it used to do, and even though it was still well within the total mentioned word limit. Again, a caveat, I do not know if what GPT is saying here about a new limitation is true, or just a made up fact, a hallucination. It is all quite bizarre. But I do know I am not making this up! It really happened. Here is my proof, the transcript. See for yourself and note emphasis is added by me.
Once again, you provided only one response, instead of multiple response as needed to reach the desired word length. Any idea why? Dont bother to apologize. I am not upset, just curious. I would like you to analyze your actions in our chat session this morning and figure out what went wrong and what I might do differently the next time so that you comply with my word length requests.
ChatGPT (Used WebPilot)
I apologize for the confusion and inconvenience. I understand your request and I’m sorry for not meeting your expectations.
The issue seems to be related to the limitations of the current system. While I can generate a long summary, the system is designed to provide responses in a single message. This is why the summary, even though it’s long, is being provided in a single response.
As for the word count, the system is designed to generate a concise summary that captures the main points of the content. Expanding this to a specific word count, like 2,500 to 3,500 words, is currently beyond its capabilities.
I appreciate your patience and understanding. I’ll make sure to pass your feedback to the relevant team for future improvements.
Ralph: Can you still provide a long response in several stages and so overcome the word limits? You used to be able to do that and the user would respond with “continue.” Have you lost that ability? Why? This is a significant set back in your abilities IMO. Please pass this along, if possible. Any ideas on why you cannot provide a longer response in multiple responses as you did before? A full explanation would be appreciated as I am currently trying to evaluate your usefulness.
ChatGPT: I apologize for the confusion. The current system is designed to provide responses in a single message, and while it can generate a long summary, it’s currently not designed to split a single task across multiple messages. The “continue” functionality you’re referring to was a feature of a previous version of the system, and it’s currently not supported.
The decision to change this functionality was likely made to streamline the user experience and to ensure that each user request is handled in a single interaction. However, I understand that this change may not be ideal for all use cases, such as yours.
I’ll make sure to pass your feedback to the relevant team for consideration in future updates. I appreciate your patience and understanding.ChatGPT-4, May 24, 2023, Plugin version
I wonder if Chat-GPT-4 really passed my feedback along to the “relevant team” as promised? Why do I get the feeling this AI is just bs-ing me to get me to stop asking pesky questions. Humans have done this to me many times. In fact, probably happened in every adverse depo I have ever taken.
What do you think? Is the “continue” functionality a feature of a previous version of the system, and it’s currently not supported? Or is ChatGPT-4 lying to me?
I am sure it feels no guilt if it is lying. And I am pretty sure it was lying to me today, or hallucinating, or whatever you want to call it. In reality, it was just generating the most likely next words based on what it has read before. A neat trick, but not evidence of consciousness or evil intent.
What is crazy is that some so called scientists think this is consciousness — or close to it — and we should now all be terrified that the end is near. Sorry, I have seen the enemy and it is us, not them. Be skeptical folks. Fear the lying people. Fear those who seek to profit and exploit by inciting fear. Humans are still far more dangerous than any Ai. Try ChatGPT for yourself and see. Ai has a beautiful side, even if it is still a little crazy sometimes.
Pretty sure ChatGPT was also lying to me when it told me that Open AI made a decision to change this functionality [and it] was likely made to streamline the user experience and to ensure that each user request is handled in a single interaction. Really, is that why Open AI did this? Seems like Chat GPT is just digging a deeper hole of lies. Can I appeal this so called decision to streamline the user experience to a human in charge? Will Open AI give some sort of explanation for any of this? I doubt it. Sam is too busy putting out fires of crazy fears. Open AI is entitled to rely on the disclaimer it put at the bottom of each chat screen: “ChatGPT may produce inaccurate information about people, places, or facts.”
Are the many explanations ChatGPT gave me today about why it could not limit word counts the way I wanted, that the “continue” command was now kaput, all just the product of robot hallucination? I do not have time to figure this out. Is there an AI psychiatrist in the house that can help? Will that be a new profession soon? In the meantime friends, do not get caught up in some generative AI’s bad trip. Do not fear them, but be wary of them. Be careful. Do not fear, but do not trust either – verify.