Custom GPTs: Why Constant Updating Is Essential for Relevance and Performance

By Ralph Losey, April 22, 2025

Over the past few weeks, I’ve been immersed in the necessary but time-consuming task of updating my Custom GPTs—AI tools I designed using OpenAI’s GPT Builder platform. Some are private, tailored to my work as a lawyer, educator, and writer. Others are freely available to the public. You can find them through the OpenAI GPT Store search or directly from links in this post.

To use any Custom GPT, you need to be logged into ChatGPT—either a free or paid account. If you’re new to ChatGPT, you can create an account here. To see a collection of ten web pages describing all of my Custom GPTs in greater detail than provided in this short article, go here to Losey.ai and use the pull-down menus for GPTs at the top of the page.

⚠️ Why Most Custom GPTs Are Junk

Let’s get something out of the way: most of the GPTs in the public store are half-baked. They’re built once and abandoned—often created by hobbyists experimenting for a few hours and never returning. Some of these models still rack up user numbers because of good promotion—or sheer novelty—but they haven’t been updated in months. Others were never built properly to begin with.

The problem? AI evolves rapidly. A Custom GPT that was “pretty good” in January can easily be broken—or worse, irrelevant—by April.

🧨 The Myth of “Set It and Forget It”

We are in a period of hyper-acceleration. OpenAI alone has released or iterated on multiple models this year—GPT-4.5, GPT-4 Turbo, GPT-4o (Omni), and o1-mini—each with subtle, undocumented behavioral changes. More new models are expected soon. The big leap to GPT-5 may not come until 2026. To quote OpenAI's CEO, Sam Altman, explaining the delay on X (formerly Twitter):

There are a bunch of reasons for this, but the most exciting one is that we are going to be able to make GPT-5 much better than we originally thought. We also found it harder than we thought it was going to be to smoothly integrate everything. and we want to make sure we have enough capacity to support what we expect to be unprecedented demand.

Many internal updates to these existing versions roll out silently. That has been especially true lately for Omni and GPT-4 Turbo. All Custom GPTs currently run on GPT-4 Turbo, but that will change soon. When an update arrives, a small alert simply appears in the top corner of your screen telling you to refresh your session. That's it. And yet those quiet backend tweaks can significantly affect how your Custom GPT performs, especially if your instructions were tightly engineered or dependent on specific behaviors.

I sign into ChatGPT daily. I test my Custom GPTs regularly. I rebuild their workflows when needed. Yes, I also use Google's Gemini AIs occasionally—but for my GPTs, OpenAI is the foundation. I will now have to spend more time than ever before updating my Custom GPTs, but it is worth it because they are becoming ever more powerful. Some are becoming truly incredible in what they can do.
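Custom GPTs themselves can only be tested by hand inside ChatGPT, but one way to approximate regular regression testing is to mirror a GPT's private instructions in an API call and script a smoke test to run after each model update. The sketch below is a hypothetical illustration using the official openai Python package; the instructions, test prompts, and pass check are all invented for the example.

```python
# Hypothetical smoke test for a Custom GPT's behavior after a silent model
# update. Mirrors the GPT's private instructions as a system message.
# Assumes the official openai package (>= 1.0) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

SYSTEM_INSTRUCTIONS = (
    "You are a legal-writing assistant. Always answer in IRAC format, "
    "labeling the Issue, Rule, Application, and Conclusion."
)
TEST_PROMPTS = [
    "Is a handwritten will valid in Florida?",
    "Summarize the rule against perpetuities.",
]

for prompt in TEST_PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o",  # swap in whatever model your GPT currently runs on
        messages=[
            {"role": "system", "content": SYSTEM_INSTRUCTIONS},
            {"role": "user", "content": prompt},
        ],
    )
    answer = response.choices[0].message.content
    # Crude behavioral assertion: did the IRAC structure survive the update?
    verdict = "PASS" if "Issue" in answer else "CHECK MANUALLY"
    print(f"{prompt} -> {verdict}")
```

A scripted check like this will never replace reading the answers yourself, but it flags the obvious breakages minutes after a backend change, instead of weeks later when a user complains.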

🛠️ RAG, Instructions, and “Behavioral Fine-Tuning”

Let's clear up a common misconception. Custom GPTs do not support traditional fine-tuning—that is, retraining the model's parameters on your own data—although it can seem that way.

But you can fine-tune behavior using two powerful tools:

  1. Private Instructions. Hidden system-level directives that shape tone, logic, and response style. These tell the GPT what to prioritize—and how to think.
  2. Custom Knowledge (RAG). You can upload documents, reference files, or data sources. This is retrieval-augmented generation, or RAG. It lets the GPT cite and pull from curated materials.

More on Private Instructions. Every Custom GPT includes a set of private instructions, also known as system instructions. These are hidden from users during normal interaction, but they are what truly define the GPT’s behavior. Think of them as the GPT’s operating system settings. This is where you tell the Custom GPT model you are designing:

  • What its purpose is.
  • Who its audience is.
  • How it should prioritize reasoning, ethics, creativity, or brevity.
  • What tone or persona to adopt (formal, humorous, friendly, academic, etc.).
  • What it should avoid doing (e.g., giving legal advice, discussing politics, using filler phrases).

If you want your GPT to behave like a no-nonsense lawyer who writes in IRAC format (issue, rule, application, conclusion) with embedded case citations—this is where you program that. If you want a whimsical illustrator who gives art critiques in the voice of Salvador Dalí, it starts here too.
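To make that concrete, a hypothetical private-instruction block for the no-nonsense legal GPT above might read: "You are a senior litigation attorney writing for other lawyers. Answer every substantive question in IRAC format (issue, rule, application, conclusion) with case citations. Be concise and formal. Do not give legal advice to laypersons; recommend consulting counsel instead." Each sentence in a block like this measurably shifts what the model produces.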

These instructions are far more influential than people realize. They don't just tweak tone—they shape how the GPT thinks about your question, what it notices first, and what it treats as noise. The best instructions are goal-directed, specific, and tested in real conversation. And they need to be updated often to adapt to OpenAI's shifting model behavior. What worked in GPT-4 may produce different results in GPT-4o—even with the same prompt.

More on Custom Knowledge. Custom GPTs also allow you to upload files or curated content—what OpenAI calls custom knowledge. This is where the real power of Retrieval-Augmented Generation (RAG) comes in. Think of RAG as giving your GPT a secure, searchable private library.

When you upload documents—case law, policies, workflows, datasets, FAQs, blog articles, even transcripts—the GPT doesn’t memorize them. Instead, it indexes them and retrieves relevant passages in real time when responding to a user’s prompt. Here’s why that matters:

  • A legal GPT can cite your actual motion templates or court rules.
  • A training bot can reference specific company policy documents.
  • A writing assistant can echo your previous blog voice, quotes, or story structure.
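To make the retrieval step less abstract, here is a minimal, self-contained sketch in Python. It is not OpenAI's code; real RAG systems use neural embeddings and a vector database, but this toy word-overlap version shows the same index, retrieve, and augment flow. The document chunks and question are invented for the example.

```python
# Toy sketch of the retrieval step in RAG: index document chunks,
# retrieve the most relevant ones for a question, and prepend them
# to the prompt the model actually sees.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# 1. Index: split uploaded documents into chunks and "embed" each one.
chunks = [
    "Motions to dismiss must be filed within 21 days of service.",
    "Our firm's style guide requires IRAC format for all memos.",
    "Blog posts should open with a one-sentence summary.",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

# 2. Retrieve: at question time, pull the most relevant chunks.
question = "What format should a legal memo use?"
ranked = sorted(index, key=lambda item: cosine(embed(question), item[1]), reverse=True)
top_chunks = [chunk for chunk, _ in ranked[:2]]

# 3. Augment: prepend the retrieved passages to the user's prompt.
prompt = "Answer using these excerpts:\n" + "\n".join(top_chunks) + "\n\nQuestion: " + question
print(prompt)
```

The point for upkeep is step 1: whatever you upload is exactly what gets indexed, so stale or poorly written chunks produce stale retrievals, no matter how good the underlying model is.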

Unlike memory (which is still being phased in cautiously across GPTs), RAG gives you precision control over what the GPT knows and what it should forget. It reduces hallucinations and improves factual grounding, but only if the uploaded material is:

  • Well-written
  • Properly segmented and titled
  • Regularly updated

Combined, these two tools (Private Instructions and RAG) let you create a GPT that's smarter than the base model. But here's the catch: you have to keep updating both. OpenAI changes. Your work evolves. Your GPT must adapt too, or it will degrade.

🧠 Why Professionals Should Build Custom GPTs

If you’re a lawyer, educator, consultant, or content creator, this is your edge. A well-crafted Custom GPT can:

  • Handle repetitive research tasks.
  • Draft content in your voice and tone.
  • Teach complex ideas to specific audiences.
  • Preserve your best workflows, prompts, or case law.

In my work, these GPTs save me hours every week. I use them for legal analysis, writing, prompt testing, teaching CLEs, and other public speaking, especially to local seniors. See one of my favorite Custom GPTs, AI Speaks to Seniors, a gentle, voice-enabled AI guide designed especially for adults aged 60 and up. It was built using recent scientific studies that I update regularly. Per its internal instructions, the AI answers questions slowly, in clear, simple language.

🧰 Some of My Custom GPTs

Some of these are private. Others are public and free to try:

🎨 Visual Muse: My Creative Partner

My most-used and most popular Custom GPT is Visual Muse: illustrating concepts with style. I use it nearly every day, and so do my friends at EDRM. Almost all of the images on my blogs in the past few years were generated using Visual Muse. You can try it yourself on the OpenAI GPT Store. I love generating images with AI. It is a very relaxing process and helps inspire many of the ideas in the text. The images start by illustrating the text, but they often go on to inspire me to revise the text in surprising ways, delighting me with unexpected perspectives that spark new ideas. It is a new kind of positive feedback loop. The images usually please me with their beauty, but sometimes, like when I picked an artistic style I'd never heard of called Expressionist Horror, they were scary as hell.

With Visual Muse, you simply describe what you want to illustrate and pick a style. The Muse responds by suggesting six artistic styles—distinct, imaginative, and often unexpected—that can be used individually or blended together. You can also bypass the suggestions and name your preferred art style directly, or request imagery in the tradition of a particular artist, genre, or movement. As mentioned, the image above used a style the Muse suggested called Expressionist Horror. Take a look at the same image of an artist painting and looking back, rendered in three different styles: Steam Punk, Picasso-Cubist, and Soft Watercolor.

Any prompt asking for an image will summon the Visual Muse to create. Then, like any good collaborator, she's ready to iterate—refining the image, remixing the style, or shifting the mood—until it's just right.

A big challenge right now for all image-generation Custom GPTs is a pending major upgrade by OpenAI. The new, significantly improved image capabilities in GPT-4o (Omni) will phase out and replace DALL-E 3. I have been busy upgrading Visual Muse so that it will use Omni as soon as it is available to individual users. OpenAI is rolling it out now based on subscription level and its own server capacity. The image of Visual Muse below is an example of the new Omni multimodal capabilities, which allow for better integration of text and image.

  • Illustrate this idea: A memory from the future
  • Show me six styles for a peaceful robot garden
  • Visualize the phrase: Creativity is a superpower
  • Create an image of a dog floating through a library in zero gravity
  • Suggest images in six different styles to illustrate the next paragraph
  • Show me a long list of artistic styles
  • Show me a list of favorite artist styles

Yes, these are some of my favorite prompts to use with Visual Muse. For details and examples of what happens when you use each of these prompts, see the page I just updated on Losey.ai, Visual Muse: illustrating concepts with style. It includes a history of OpenAI's ever-evolving image generation models.

🆕 Coming Soon: Omni and Multimodal Upgrades

OpenAI is now rolling out GPT-4o (Omni), its most advanced model. Omni is natively multimodal—understanding text, image, and audio together in real time. Visual Muse is being retooled to integrate Omni's visual brainpower. The artwork it can create is more expressive, detailed, and emotionally resonant. Soon, prompts like "Create an image of a Swedish Vallhund floating in zero gravity" will produce images like the ones created today on GPT-4o (Omni).

Conclusion

Building a Custom GPT is just the beginning. The real value emerges over time—through testing, revision, feedback, and iteration. Updating isn't optional; it's essential. OpenAI's models evolve. Instructions need tuning. Knowledge expands. Your own workflows and goals change. If your GPTs don't grow alongside you, they won't serve you for long. But that's not a burden; it's an opportunity.

You're not just building software. You're designing the most advanced thinking tool ever known to humankind. It can evolve to mirror your expertise, anticipate your needs, and collaborate with you on everything from strategy to storytelling. Done right, a Custom GPT becomes more than a time-saver. It becomes a partner, a reflection of how you think, a booster of your creativity, and a bridge between your past insights and your future ideas.

So try creating one. Start simple. Or explore one of mine—Visual Muse, Hey Bot the AI Friend, or something more niche like another of my favorites, The Dude Abides and Gives Advice, Man. Then tweak it. Add instructions. Feed it better knowledge. Let it surprise you. In this new age of generative intelligence, the best tools are never static. They're co-evolving with us.

And the best creators? They don’t just use AI—they train it to think like them, and then push it to think beyond them. It is the team approach. I hope you e-discover it soon.

That’s not just productivity. That’s progress.

That’s practice.

That’s the point.


I give the last word, as usual, to the Gemini twin podcasters that summarize the article. Echoes of AI on: “Custom GPTs: Why Constant Updating Is Essential for Relevance and Performance” Hear two Gemini AIs talk about this article for 12.5 minutes. They wrote the podcast, not me. 

Ralph Losey Copyright 2025 — All Rights Reserved


Evolution of DALL·E with Demonstrations of its Current Text to Image Abilities

Ralph Losey. Published August 20, 2024.

The images shown here are to demonstrate some of the current abilities of DALL-E. They were all created by Ralph Losey using his custom GPT, Visual Muse: illustrating concepts with style, which is driven by OpenAI’s DALL-E software. Ralph has chosen one of his favorite types of images for this demonstration – “optical illusions” – since he does not often get a chance to use this image type in his blog. These images will be shown in a variety of different artistic styles, especially that of Salvador Dali, who is known for his love of optical illusions.

Introduction

The first images to demonstrate DALL-E capabilities, shown above, are a type of "Op Art" using a classic black and white geometric style. All illustrations were created in a single day, in about four hours, with about half coming out right – the way Ralph wanted – on the first try. A 50% precision rate like this is unusually high for him. Many of the images were not used, to save space. Ralph's workflow then includes use of Photoshop for final tuning and size changes. The research and writing took about three hours of Ralph's time, with about 50% help from ChatGPT-4o (Omni), using a cyborg method. See From Centaurs To Cyborgs: Our evolving relationship with generative AI (e-Discovery Team, 4/24/24).

The development of DALL·E, a generative AI model by OpenAI, from its first release in January 2021 to today represents a significant achievement in the field of AI-driven image generation. The broad outlines of that development will be discussed in this article, but all images shown are from the latest version, of August 2024. For more examples of what DALL-E is capable of, see the hundreds of Ralph's illustrations on the e-Discovery Team blog. A few were created using OpenAI's main competitor in image generation, Midjourney.

Ralph's blog images usually illustrate the topics discussed in the accompanying text. For Losey they represent a new form of expression where words, images, and hyperlinks form a multimodal whole, created by a hybrid combination of Man and Machine. In his blog the human – Ralph – does almost all the work on the text, including most of the research, and the AI does most of the work on the illustrations. Those familiar with Ralph's work in e-Discovery know this mirrors his work with multimodal hybrid search techniques, i.e., Predictive Coding. They are described in detail in the free TAR Course linked at the top of the blog.

The History of OpenAI’s Generative AI Image Tool: DALL-E

1. Initial Release: DALL·E 1 (January 2021)

The initial release of DALL·E was in January 2021. The name is an homage to the famous twentieth-century surrealist artist Salvador Dalí (combined, per OpenAI, with Pixar's robot WALL·E). OpenAI's release of DALL-E was a breakthrough moment for generative models that create images from text descriptions. DALL·E 1 utilized a modified GPT-3 architecture to generate images from text prompts. Although remarkable compared to what others had been able to achieve before, this first model exhibited limitations in image coherence, resolution, and the ability to accurately represent complex scenes. The underlying transformer architecture was effective in generating diverse and creative outputs, but the model struggled with maintaining consistency and realism across different elements of the image.

2. Introduction of DALL·E 2 (April 2022)

In April 2022, OpenAI released DALL·E 2, which introduced several critical improvements:

Enhanced Resolution and Image Quality: DALL·E 2 featured improvements in the model's ability to generate higher-resolution images with finer details. This was achieved through a new underlying generative process (DALL·E 2 moved to a diffusion-based approach guided by CLIP text-image embeddings), along with improvements in the training dataset.

Improved Compositional Understanding: The model demonstrated enhanced capabilities in handling complex prompts that required the accurate rendering of multiple objects and interactions. This improvement can be attributed to advancements in the model’s attention mechanisms, enabling better spatial awareness and coherence in generated images.

Advanced Control Mechanisms: Users were provided with more granular control over image attributes such as style, color, and composition. This was likely facilitated by the integration of additional conditioning layers or modules within the model architecture, allowing for more targeted manipulation of the generated outputs.

3. August 2023 Update: Refinement and Realism

The August 2023 update was the last full training update to the model. It brought significant refinements to DALL·E's functionality, focusing on realism, detail, and user customization:

Increased Realism and Texture Fidelity: The model’s ability to generate photorealistic images was markedly improved, particularly in rendering textures, lighting, and shadows. These enhancements suggest advancements in the model’s ability to learn and apply high-fidelity visual patterns from training data, potentially through the use of more complex loss functions and training techniques that prioritize visual accuracy.

Enhanced Text Integration: DALL·E’s capability to incorporate textual elements within images saw notable improvement. This likely involved the refinement of text-to-image embedding processes and a better alignment between text tokens and their corresponding visual representations within the model.

User Customization: The update provided users with increased control over specific aspects of image generation, such as adjusting the perspective or selecting a particular art style. This was achieved through the introduction of more sophisticated user interfaces and the likely addition of new conditional input mechanisms within the model.

Feedback-Driven Optimization: The update also integrated feedback from users, leading to iterative adjustments that enhanced the model’s overall performance and usability. This process likely involved fine-tuning the model on user-provided data or leveraging reinforcement learning techniques to align the model outputs more closely with user preferences.

4. Ongoing Enhancements (Post-August 2023)

After the August 2023 update, DALL·E has continued to evolve with ongoing technical enhancements:

Refinement of Image Generation: Continuous improvements have been made to the model’s image generation capabilities, particularly in handling edge cases and complex scene compositions. These refinements suggest iterative updates to the model’s training regimen, potentially involving more diverse and higher-quality datasets.

Increased Processing Efficiency: The model has seen improvements in processing speed, reducing latency in image generation. This is indicative of optimizations in the model’s computational efficiency, likely through algorithmic refinements or the adoption of more efficient neural network architectures.

Advanced Control Features: The introduction of more nuanced control features has provided users with the ability to manipulate image attributes with greater precision. These features likely involve the integration of additional conditioning factors within the model, allowing for more detailed user input.

5. Interface and Usability Enhancements

In addition to technical improvements, there have been significant updates to the DALL·E user interface and overall usability:

Improved User Interface: The interface has been refined to offer a more intuitive user experience, facilitating easier access to advanced features. This likely involved the integration of better design principles and user experience research into the interface development process.

Accessibility Enhancements: Updates have been made to improve accessibility, ensuring that the platform is usable by a broader audience, including individuals with disabilities. This may involve the adoption of accessibility standards in interface design and the introduction of assistive technologies.

Collaborative Functionality: The platform has introduced features that support collaborative use cases, enabling multiple users to contribute to the image generation process. This functionality suggests the integration of multi-user input mechanisms and enhanced session management capabilities.

6. Industry-Specific Tools and Content Moderation

Recent developments have also focused on the introduction of tools tailored to specific industries and the enhancement of content moderation mechanisms:

Industry-Specific Tools: DALL·E has introduced features designed to meet the needs of particular industries, such as fashion, architecture, and graphic design. These tools likely involve the addition of domain-specific models or fine-tuning the base model on industry-specific datasets.

Content Moderation Enhancements: There have been improvements in content moderation, ensuring that generated images adhere to ethical standards and legal requirements. This likely involves the integration of content filtering algorithms and the use of human-in-the-loop processes to monitor and curate outputs. Sometimes, in Ralph's opinion, OpenAI goes overboard in policing potential copyright violations and other guardrails. Unlike Midjourney, DALL-E's main competitor, OpenAI does not offer instant appeals and, where warranted, reversals. That can be annoying.

Conclusion: Ongoing Development and Legal Implications

The evolution of DALL·E underscores the rapid pace of advancement in generative AI technology. This is a powerful, fun new tool for all creators to make their own images and play with the incredible abilities of generative AI. If you just stick to words and computer code, you will miss out.

Plus, it is getting better and better every month. The kind of things you can do with it now are mind-bending. You may think it is all an optical illusion, but it is not. It is a great time to be alive. For me it is a relaxing hobby. That's one reason I made, often update, and freely share the Visual Muse Custom GPT. It is in the OpenAI GPT Store, along with thousands of other free GPTs to try. If you like visual images and want to go from the beginner level to the intermediate and advanced levels of DALL-E use, this may be a good tool for you. Plus, it can help teach you about artists and styles of art.

Each OpenAI update to DALL-E not only improves the creative capabilities of the model but also raises important legal and ethical questions about AI-generated content. As DALL·E continues to develop, legal professionals must remain vigilant in understanding these advancements to effectively navigate the associated legal challenges, including intellectual property rights, content moderation, and the ethical use of AI. To do that, it helps to be able to use the tools yourself, at least somewhat. Generative AI has to be used to be understood. Otherwise, no matter how smart you are, your understanding will be superficial, maybe even illusory.

Ralph Losey Copyright 2024 — All Rights Reserved