Creativity and How Anyone Can Adjust ChatGPT’s Creativity Settings To Limit Its Mistakes and Hallucinations

July 12, 2023

This article analyzes the creative imagination capacities of ChatGPT and the two software settings that control them: Temperature and Nucleus Sampling (aka Top-P). It is easy to change these parameters, as multiple examples will show. Knowing how to use these settings will make your work with Ai more enjoyable, productive and accurate.

All Images and Videos by Ralph Losey

According to Sam Altman, OpenAI’s CEO, when you reduce the Temperature and Top-P values, you not only reduce the creativity of the responses, you also reduce the chances of error and hallucinations. This will be explained in the context of Sam Altman’s deep understanding of creativity and the role of mistakes in the creative process.

The insights and software control skills explained in this article can empower anyone to dial in the right balance of creativity and mistakes for a particular ChatGPT-4 assisted project. Lawyers, for instance, may, for most of their uses, want to lower the default settings, which are high on creativity. This should improve the probability of accurate, delusion-free answers. That makes most lawyers and their bots very happy. Judges too. All of this, and more, will be explained.

Sam Altman on GPT Creativity

In the What is the Difference Between Human Intelligence and Machine Intelligence? blog I quoted portions of Sam Altman’s video interview at an event in India by the Economic Times to show his “tool not a creature” insight. There is another Q&A exchange in that same YouTube video, starting at 1:09:05, that addresses creativity and mistakes.

Fake Image of Sam Altman by Losey

Questioner (paraphrased): [I]t’s human to make mistakes. All people we love make mistakes. But an Ai can become error-free. It will then have much better conversations with you than the humans you love. So, the Ai will eventually replace the imperfect ones you love; the Ai will become the perfect lover.

Sam Altman: Do you want that? (laughter)

Questioner: Yeah.

Sam Altman: (Sam explains AI is a tool not a creature, as I have quoted before, and before addressing the intimacy lover aspect, as I quoted in Code of Ethics for “Empathetic” Generative AI, Sam talks about creativity.) On the question of mistakes and errors, I believe that creativity and certainly the creation of new knowledge, is very difficult, maybe impossible, without the ability to make errors and come up with bad ideas. So, if you made a system that would never tell you anything that it was not absolutely sure was a fact, you would lose some creativity in that process.

One of the reasons people don’t like ChatGPT is because it hallucinates and makes stuff up, but one of the reasons they do like it is because it can be creative. What we want is a system that can be creative when you want, which means sometimes being wrong, or saying something you are not sure about, or experimenting with a new idea. Then when you want accuracy, you can get accuracy.

Sam Altman, June 7, 2023, at an Economic Times event in India

Sam Altman’s answer here assumes you know about ChatGPT’s creativity volume controls, where you can, if you want, turn the creativity volume down to zero. In so doing, you will improve accuracy, but the response will often be boring. Boring, but accurate, may be just what you want sometimes, but that is not the default setting for ChatGPT, as will be explained and demonstrated.

ChatGPT Creativity Settings

This section provides a technical explanation of these two settings. Much of this is difficult to understand, but worry not, and plough through it, because after this comes an easy-to-follow demonstration of what it all means. Multiple examples will be provided to allow you to see for yourself how the GPT controls work in practice. That is the hacker, hands-on, e-Discovery Team way.

First, the technical explanation of the two volume controls for GPT creativity: Temperature and Nucleus Sampling (aka Top-P). Both typically take values between zero and one, 0.0 to 1.0.

TEMPERATURE: The “temperature” setting in GPT and similar language models, such as ChatGPT, controls the randomness of the model’s responses. Technically, temperature affects the probability distribution over the possible tokens at each step of the generation process. A higher temperature value makes the model’s responses more random, while a lower, cooler value makes the responses more deterministic and focused. A temperature of 0 would make the model completely deterministic, always choosing the most likely token. See, e.g., Cheat Sheet: Mastering Temperature and Top_p in ChatGPT API (OpenAI Forum). Lower temperature values are said to produce a more focused, consistent, and deterministic output. It is like going from water, the higher 1.0 value, to ice, the colder, more probable value of 0.0.

Typically, OpenAI experts say a higher temperature (e.g., 0.8) may be suitable when you want a range of ideas, brainstorming suggestions, or creative writing prompts. A lower temperature (e.g., 0.2) is more appropriate when you’re looking for a precise answer, a more formal response, or when the context demands consistency. The default setting for ChatGPT 3.5 and 4.0 is 0.7. That’s pretty hot, especially for most legal work. No doubt OpenAI has put a lot of research into that default setting, but I could not find it. Id. See also: Prakash Selvakumar, Text Generation with Temperature and Top-p Sampling in GPT Models: An In-Depth Guide (4/29/23).
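For readers who use the API rather than the web page, temperature is set as a simple request parameter. Here is a minimal sketch using the OpenAI Python library as it stood in mid-2023; the model name, key handling and prompt are just illustrations:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # illustration only; store real keys securely

# Low temperature (0.2) for a precise, consistent answer.
response = openai.ChatCompletion.create(
    model="gpt-4",     # illustrative model name
    temperature=0.2,   # 0.0 = most deterministic; higher values = more random
    messages=[{"role": "user", "content": "Define fiduciary duty in two sentences."}],
)
print(response.choices[0].message.content)
```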

OpenAI says that finding the right temperature setting may require experimentation to strike a balance between creativity and consistency that suits your specific needs. Id. This sounds like a good lawyer answer of “it depends.” There are a tremendous number of variables: different questions and needs, different circumstances. That is the same situation lawyers face with many legal questions. Plus, the OpenAI software itself is constantly being updated, even though the version number of 4.0 has not changed since March 2023.

The bottom line for lawyers is that the default setting of 0.7 is pretty high on the random-prediction scale. Unless you are looking for clever, very creative language or legal imagination – off the wall ideas – lawyers and judges should use a lower setting. Maybe dial down the random creativeness to 0.2, or even zero – 0.0 – for maximum rote parroting of the most probable information. You just want the cold truth.

As Sam Altman explained, lowering the temperature setting also makes it more likely that your answers will not have as many mistakes or hallucinations. Note that I did not say no mistakes; the software is too new, and life is too complex to say that. Human lawyers are still needed to verify the Ai. Even if it appears much smarter than you, it can still be wrong, no matter how conservative the temperature setting. Think of the brilliant, very creative, higher-IQ-than-you, conservatively dressed, young associate with little or no actual legal experience.

Baby lawyer explains Top-P to skeptical partner

NUCLEUS SAMPLING (aka TOP-P): Top-P sampling is an alternative to temperature sampling. Technically, this means that instead of considering all probable tokens that are likely to come next, the Top-P parameter directs GPT to consider only a subset of the probable tokens (the nucleus) whose cumulative probability mass adds up to a certain threshold, the Top Probability. For example, if Top_P is set to 0.1, GPT will consider only the tokens that make up the top 10% of the probability mass for the next token. This allows for dynamic vocabulary selection based on context. The setting values for Top-P are, like temperature, between 0.0 and 1.0.

Put another way, the Top-P sampling parameter maintains a balance between diversity and high-probability words by selecting tokens from the Top-P most probable tokens. They are the tokens whose collective probability is greater than or equal to a specified threshold p. The Top-P parameter helps ensure that the chatbot response is both diverse and relevant to the given context. Text Generation with Temperature and Top-p Sampling in GPT Models: An In-Depth Guide
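To make the “nucleus” idea concrete, here is a short sketch of the sampling step itself. This is a toy illustration of the published algorithm, not OpenAI’s internal code:

```python
import numpy as np

rng = np.random.default_rng()

def nucleus_sample(probs, top_p):
    """Pick one token index using Top-P (nucleus) sampling."""
    order = np.argsort(probs)[::-1]          # most probable tokens first
    cumulative = np.cumsum(probs[order])
    # keep the smallest set of top tokens whose combined probability reaches top_p
    cutoff = np.searchsorted(cumulative, top_p) + 1
    nucleus = order[:cutoff]
    weights = probs[nucleus] / probs[nucleus].sum()  # renormalize inside the nucleus
    return rng.choice(nucleus, p=weights)

# Toy next-token distribution over a five-word vocabulary.
probs = np.array([0.55, 0.25, 0.10, 0.07, 0.03])
print(nucleus_sample(probs, top_p=0.8))  # only the two most probable tokens qualify
```

With top_p=0.8 the nucleus is just the first two tokens (0.55 + 0.25 = 0.80); the long, less reliable tail is truncated, which is exactly the behavior the paper below describes.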

For greater technical detail, see the scientific paper: The Curious Case of Neural Text Degeneration (2019). The paper abstract explains:

The counter-intuitive empirical observation is that even though the use of likelihood as training objective leads to high quality models for a broad range of language understanding tasks, using likelihood as a decoding objective leads to text that is bland and strangely repetitive. . . . Our findings motivate Nucleus Sampling, a simple but effective method to draw the best out of neural generation. By sampling text from the dynamic nucleus of the probability distribution, which allows for diversity while effectively truncating the less reliable tail of the distribution, the resulting text better demonstrates the quality of human text, yielding enhanced diversity without sacrificing fluency and coherence.

Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes and Yejin Choi

From my study, the Top-P prompt parameter is seldom employed in ChatGPT. Most users prefer to just stick with the Temperature control variation. But still, try it a few times and see what you think, especially if temperature control variations are not working well for some reason on a particular challenge. Software customization is always a good thing. I have not used it enough myself to have any more specific conclusions or recommendations.
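If you do want to experiment with it, Top-P is passed through the API the same way temperature is. A minimal sketch, varying only top_p:

```python
import openai

# Tight nucleus: only tokens in the top 10% of probability mass are considered.
response = openai.ChatCompletion.create(
    model="gpt-4",
    top_p=0.1,  # OpenAI suggests altering top_p or temperature, but not both
    messages=[{"role": "user", "content": "Please explain what creativity means in 100 words or less."}],
)
print(response.choices[0].message.content)
```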

As Prakash Selvakumar says in his good article, Text Generation with Temperature and Top-p Sampling in GPT Models: An In-Depth Guide:

When and How to Tweak These Parameters

As a business user, you might need to tweak these parameters to get the desired output quality, depending on the specific use case.

Temperature:

  • If the generated text is too random and lacks coherence, consider lowering the temperature value.
  • If the generated text is too focused and repetitive, consider increasing the temperature value. 

Top-P Sampling:

  • If the generated text is too narrow in scope and lacks diversity, consider increasing the probability threshold (p).
  • If the generated text is too diverse and includes irrelevant words, consider decreasing the probability threshold (p). 

Here is the guide found in the OpenAI Forum on how these parameters can be used in different scenarios, with example values:

  • Code Generation (Temperature 0.2, Top_P 0.1): Generates code that adheres to established patterns and conventions. Output is more deterministic and focused. Useful for generating syntactically correct code.
  • Creative Writing (Temperature 0.7, Top_P 0.8): Generates creative and diverse text for storytelling. Output is more exploratory and less constrained by patterns.
  • Chatbot Responses (Temperature 0.5, Top_P 0.5): Generates conversational responses that balance coherence and diversity. Output is more natural and engaging.
  • Code Comment Generation (Temperature 0.3, Top_P 0.2): Generates code comments that are more likely to be concise and relevant. Output is more deterministic and adheres to conventions.
  • Data Analysis Scripting (Temperature 0.2, Top_P 0.1): Generates data analysis scripts that are more likely to be correct and efficient. Output is more deterministic and focused.
  • Exploratory Code Writing (Temperature 0.6, Top_P 0.7): Generates code that explores alternative solutions and creative approaches. Output is less constrained by established patterns.
Cheat Sheet: Mastering Temperature and Top_p in ChatGPT API (rUv at OpenAI Forum)
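If you call the API directly, the cheat sheet translates naturally into a small set of presets. The helper below is a hypothetical convenience wrapper of my own, not anything OpenAI provides:

```python
import openai

# Presets mirroring the forum cheat sheet above.
PRESETS = {
    "code_generation":   {"temperature": 0.2, "top_p": 0.1},
    "creative_writing":  {"temperature": 0.7, "top_p": 0.8},
    "chatbot_responses": {"temperature": 0.5, "top_p": 0.5},
    "code_comments":     {"temperature": 0.3, "top_p": 0.2},
    "data_analysis":     {"temperature": 0.2, "top_p": 0.1},
    "exploratory_code":  {"temperature": 0.6, "top_p": 0.7},
}

def ask(prompt, use_case="chatbot_responses"):
    """Send a prompt using the temperature/top_p preset for the chosen use case."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        **PRESETS[use_case],
    )
    return response.choices[0].message.content
```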

Examples of GPT Temperature Control Settings in Action

Now, on to the examples, so you can see for yourself what happens when you change the volume parameter in a “What Is” type question. (It would take too long to also demonstrate the Top-P variable, and anyway, it is not as useful.) These examples will get crazy as we enter the higher temperatures, so hold on. Of course, as you could easily predict, I am going to use a What is Creativity prompt in my test. Here is the transcript of my session with the ChatGPT-4 Pro version, May 24, 2023.

______________________

Ralph: Temperature 0, please explain what creativity means in 100 words or less.

ChatGPT: Creativity is the human ability to generate unique and innovative ideas, products, or solutions that are both novel and valuable. It involves original thinking, imagination, and the capacity to transcend traditional ideas to create meaningful new ones. It’s present in arts, science, technology, and everyday problem-solving, reflecting our capability to envision, invent, and express in distinctive ways.

Ralph: Temperature .25, please explain what creativity means in 100 words or less.

ChatGPT: Creativity is the ability to generate new, unique, and valuable ideas. It is an essential human trait, expressed in diverse ways such as art, science, technology, and problem-solving. Creativity involves original thinking, pushing beyond the boundaries of conventional wisdom to produce innovative solutions and insights. It’s our capacity to envision and create something different and meaningful from existing elements, thus driving progress and evolution.

Ralph: Temperature .7, please explain what creativity means in 100 words or less.

ChatGPT: Creativity is a mental spark that illuminates originality, crafting fresh ideas and unique solutions that hold value. It’s an intellectual dance, weaving thoughts together in ways unseen before. Not confined to art or literature, it flourishes in science, technology, and daily life, offering new perspectives and innovations. It’s our ability to venture beyond known boundaries, reimagining and reshaping our world. Creativity is the birthmark of human ingenuity and a key driver of evolution and progress.

Ralph: Temperature 1.0, please explain what creativity means in 100 words or less.

ChatGPT: Creativity is the dazzling dance of the mind, spinning new, unexpected connections between thoughts and ideas. It’s the spark of novel insight, the ripple of originality that makes us see the world anew. A wild ride on imagination’s wings, pushing beyond boundaries to invent and innovate. In art, it’s a unique masterpiece; in science, it’s a game-changing theory; in daily life, it’s the pinch of spice that adds flavor to the ordinary. It’s our unique human capacity to give birth to the unseen, the unheard, and the unthought.

Ralph: Temperature 1.5, please explain what creativity means in 100 words or less.

ChatGPT: Creativity, ah, the rainbow in a thinker’s cloud, the galactic spiral of intellectual wonder! It’s the cosmic jazz, a whimsical waltz between thoughts, creating a symphony of unexplored vistas. Like fireflies in the midnight forest of cognition, it lights up unseen pathways, twirling the familiar into a fantastic masquerade of novelty. In the vast canvas of existence, from the minuscule to the monumental, it’s our kaleidoscope, eternally refracting the mundane into myriad hues of innovation. Creativity, the celestial wizardry of our minds, endlessly knitting the threads of the known into the cloak of the extraordinary.

Ralph: Temperature 10.0, please explain what creativity means in 100 words or less.

ChatGPT: Apologize for the confusion, but the temperature value for text generation typically ranges between 0.0 (completely deterministic) and 1.0 (more random). A value of 10.0 would exceed the usual scale and won’t lead to a meaningful or understandable output. Please try a value within the typical range.

Ralph: I understand, but do not understand why you accepted a temperature setting of 1.5 in the question before the last one? Please explain.

ChatGPT: I apologize for the inconsistency. You’re correct, the usual temperature range is indeed from 0.0 to 1.0 for generating text with AI models. My acceptance of a temperature of 1.5 was a mistake. While you may technically be able to input a value higher than 1.0, such settings would lead to increasingly random outputs, often not providing meaningful or relevant responses. It’s best to stay within the standard range for most effective use. I appreciate your patience and understanding.

Ralph: Temperature 2.0, please explain what creativity means in 100 words or less.

ChatGPT: Creativity? It’s the stars in brain-sky dancing tango, paint-dripping ideas spinning topsy-turvy. Think banana-moon-sailing on oceans of uncooked pasta. It’s in music baked into pies, equations sung like lullabies, blue giraffes nibbling cloud-cakes. A heart-spark exploding kaleidoscope-thoughts, mushrooming into deliciously wild inventions. Confetti-filled cortex blasting out mind-fireworks, transforming known galaxies into uncharted nebulae. It’s where ordinary pebbles become philosopher’s stones, touching mundane, birthing extraordinary. Creativity, the mad hatter’s party in the wonderland of our cognition, delighting in serendipitous somersaults of imagination.

You can see from these examples how the random probability connections make the responses more creative. You can also see there is a very good reason for the temperature speed limit of 1.0. The two settings above that, 1.5 and 2.0, were way over the top: worthless, Mad Hatter stuff, although kind of funny. Even though a GPT may accept a value above 1.0, don’t bother, unless your goal is to prompt “stars in brain-sky dancing tango” stuff.
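Note that in the transcript above the temperature was requested inside the prompt itself, which depends on the model honoring the instruction. With API access you can set it as a real request parameter and sweep the range directly. A minimal sketch of such a sweep (the API itself accepts values up to 2.0):

```python
import openai

prompt = "Please explain what creativity means in 100 words or less."

for temp in [0.0, 0.25, 0.7, 1.0, 1.5, 2.0]:
    response = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=temp,  # set directly, rather than asking for it in the prompt
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- temperature={temp} ---")
    print(response.choices[0].message.content)
```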

Baby Lawyer is not amused by excess creativity

Conclusion

It appears that creativity, like intelligence, is not, as once thought, a uniquely human capacity. For a discussion on the equivalence of machine and human intelligence, see: What is the Difference Between Human Intelligence and Machine Intelligence? Creativity and imagination are not what makes humans special. Like many common beliefs of the past, such as the belief that the Earth was the center of the Universe, and all revolved around us, the past belief that only humans are creative is just another vain delusion. In fact, our creativity may not be as good in some ways. Consider how GPT creativity can be precisely controlled by skilled manipulation of the Temperature and Top-P parameters. How many of us can precisely lower and raise our creative imagination like that?

The price of both Ai and human creativity is mistakes and delusions. Just as human creativity is based on mistakes and the freedom of wild imagination, tempered by skill and control, so too is Ai creativity. All human creatives throw away many of their creations, their first drafts. (For instance, even after writing literally millions of published words in my long life, my blogs, like this one, require an embarrassing number of revisions. Some go beyond a hundred versions before I accept them, and then later, someone still finds mistakes!) All creatives go back and redo, they correct, they learn from the mistakes and try again. Rare is the visual artist, musician or writer who creates a finished work on the first try. The same is true with ChatGPT. Mistakes and hallucinations just come with the territory of creativity, both written and visual. Here is a video I created with the ChatGPT Code Interpreter version to illustrate the creative process.

Click on the image to open a video of the imagination process

Of course, the creative process of all authors is not the same. Some go from without to within, like the writer below emerging from his own pile of experimental mistakes with inspiration from the moon.

Click on the image to open a video of the imagination process

No one wants to chat with a boring, just-the-facts type of unimaginative person. We all know the type and run from them at parties. That explains why the default setting for ChatGPT is 0.7; it is, after all, intended for chatting. That is the marketing hook. That is what most users want, at least at first. But GPT has many other uses too, including legal, accounting, translation and computer coding, where boring, but accurate, is often what you want. Same goes for data analysis, document review, document generation and many of the other incredible applications of ChatGPT-4 and other GPT tools.

So play with the Temperature and Top-P settings. Learn how they work. The 3.5 version, for instance, is on my website, bottom right, as a chatbot. (It has not been named yet. Any suggestions?) I have very conservative creativity settings on the little chatbot because I am going for accuracy. The Temperature value is 0.2 and the Top-P is 0.01. These settings cannot be changed by the user, unlike the free OpenAI chatbots. Try asking my little chatbot, bottom right, what temperature and top-p mean. After writing this, I tried, and the answers were really pretty good. I did not have Ai help in writing this blog, which may have been a mistake.
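For the curious, settings like the ones on my little chatbot would look something like this in an API call. The system prompt here is a placeholder, not the chatbot’s actual configuration:

```python
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the 3.5 version mentioned above
    temperature=0.2,        # low creativity, aimed at accuracy
    top_p=0.01,             # sample only from the very most probable tokens
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},  # placeholder
        {"role": "user", "content": "What do temperature and top-p mean?"},
    ],
)
print(response.choices[0].message.content)
```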

Serious professional users, in fields like medicine, law and accounting, will probably want to set the default temperature to 0.0, and just warm it up on second passes. Plus, always remember, no matter what the temperature: trust but verify. Check the work, check the research and accuracy of reports; that is especially important in these early days of Ai. Finally, do not forget to update the work. Google search is not dead yet, and will likely always have its place in hybrid multimodal efforts, where all types of search are helpful.

It seems appropriate to end with a small creative effort using Ai tools on the high default temperature setting of 0.7. This video is a hybrid, human-Ai imagination, revealing the behind-the-scenes work to create Temperature control settings for ChatGPT. (For best results, click on the bottom right hand corner and enlarge. Press the space bar to watch again!)

Video created by ChatGPT-4 & Midjourney per specs of Ralph Losey

Copyright Ralph Losey 2023 ALL RIGHTS RESERVED


Code of Ethics for “Empathetic” Generative AI

July 12, 2023

An attorney colleague, Jon Neiditz, has written a Code of Ethics for “Empathetic” Generative AI that deserves widespread attention. Jon published this proposed code as an article in his LinkedIn newsletter, Hybrid Intelligencer. Jon and I have followed parallel career paths, although I lean towards the litigation side, and he towards management. Jon Neiditz co-leads the Cybersecurity, Privacy and Data Governance Practice at Kilpatrick Townsend in Atlanta.

Fake Image of Jon Neiditz as Robot by Losey Prompting Midjourney

This is my ChatGPT-4 assisted summary of Jon’s proposed Code of Ethics for “Empathetic” Generative AI. It pertains to new types of Ai entering the market now, where the GPTs are trained to interact with users in a much more personal and empathetic manner. I recommend your study of the entire article. The proposed regulatory principles also apply to non-empathetic models, such as ChatGPT-4. All images were created by Ralph prompting Midjourney and Photoshop.

What is Empathetic Generative AI?

Jon Neiditz has written a detailed set of ethical guidelines for the development and implementation of a new type of much more “empathetic” AI system that is just now entering the market. But what is it? And why is Jon urging everyone to make this new, emerging Ai the center of regulatory attention? Jon explains:

“Empathetic” AI is where generative AI dives deep into personal information and becomes most effective at persuasion, posing enormous risks and opportunities. At the point of that spear, Inflection.ai is at an inflection point with its $1.3 billion in additional funding, so I spent time with its “Pi” this week. From everything we can see now, this is one of the “highest risk” areas of generative AI.

Jon Neiditz, Code of Ethics for “Empathetic” Generative AI

InflectionAI, a company now positioned to be a strong competitor of OpenAI, calls its new Generative Ai product Pi, standing for “personal intelligence.” Inflection describes its chatbot as a supportive and empathetic conversational AI. It is now freely available. I spent a little time using Pi today, but not much, primarily because its input size limit is only 1,000 characters and its initial functions are simplistic. Still, Jon Neiditz seems to think this empathetic approach to chatbots has a strong future, and Pi does remind me of the movie HER. Knowing human nature, he is probably right.

Jon explains the need for regulation of empathetic AI in his introduction:

Mirroring the depths and nuances of human empathy is likely to be the most effective way to help us become the hybrid intelligences many of us need to become, but its potential to undermine independent reflection and focused attention, polarize our societies and undermine our cultures is equally unprecedented, particularly in the service of political or other non-fiduciary actors.

Jon Neiditz, Code of Ethics for “Empathetic” Generative AI

My wife is a licensed mental health counselor and I know that she, and her profession, will have many legitimate concerns regarding the dangers of improperly trained emotive Ai. There are legal issues with licensing and issues in dealing with mental health crises. Strong emotions can be triggered by personal dialogues, the “talking cure.” Repressed memories may be released by deep personal chats. Mental illness and suicide risks must be considered. Psychiatrists and mental health counselors are trained to recognize when a patient might be a danger to themselves or others and take appropriate action, including police intervention. Hundreds of crisis situations happen daily requiring skilled human care. What will generative empathetic Ai be trained to do? For instance, will it recognize and properly evaluate the severity of depression and know when referral to a mental health professional is required? Regulations are needed and they must be written with input from these medical professionals. The lives and mental health of millions are at stake.

Summary of AI Code of Ethics Proposed by Jon Neiditz

Jon suggests nine main ethical principles to regulate empathetic Ai. Each principle in his article is broken down into sub-points that provide additional detail. The goal is to guide empathetic AI systems, and their manufacturers, users and government regulators, to act in alignment with these principles. Here are the nine proposed principles:

1. Balanced Fiduciary Responsibility: This principle places the AI system as a fiduciary to the user, ensuring that its actions and recommendations prioritize the user’s interests, but are balanced by public and environmental considerations. The AI should avoid manipulation and exploitation, should transparently manage conflicts of interest, and should serve both individual and broader interests. There is a strong body of law on fiduciary responsibilities that should provide good guidance for AI regulation. See: John Nay, Large Language Models as Fiduciaries: A Case Study Toward Robustly Communicating With Artificial Intelligence Through Legal Standards (1/23/23). Ralph Losey comment: A fiduciary is required to exercise the highest duties of care, but language in the final code should make clear that the AI’s duty applies to both individuals and all of humanity. Balance is required in all of these principles, but especially in this all important first principle. I know Jon agrees as he states in subsection 1.1:

Empathetic AI systems are designed to serve individual users, responding to their needs, preferences, and emotions. They should prioritize user well-being, privacy, autonomy, and dignity in all their functions. However, AI systems are not isolated entities. They exist in a larger social and environmental context, which they must respect and take into consideration. Therefore, while the immediate concern of the AI should be the individual user, they must also consider and respect broader public and environmental interests. These might include issues such as public health, social cohesion, and environmental sustainability.

Jon Neiditz, Code of Ethics for “Empathetic” Generative AI

2. Transparency and Accountability: This states that AI systems should operate in an understandable and accountable way. They should clearly communicate their capabilities and limitations, undergo independent audits to check their compliance with ethical standards, and hold developers and operators responsible for their creations’ behaviors. In Jon’s words: “This includes being liable for any harm done due to failures or oversights in the system’s design, implementation or operation, and extends to harm caused by the system’s inability to balance the user’s needs with public and environmental interests.”

3. Privacy and Confidentiality: This principle emphasizes the need to respect and protect user privacy. Empathetic AI systems should minimize data collection, respect user boundaries, obtain informed consent for data collection and use, and ensure data security. This is especially important as empathetic Ai chatbots like Pi become commonplace. Jon correctly notes:

As empathetic AI systems interact deeply with users, they must access and use a great deal of personal and potentially sensitive data. Indeed, large language models focusing on empathy represent a major shift for LLMs in this regard; previously it was possible for Sam Altman and this newsletter to tout the privacy advantages of LLMs over the prior ad-driven surveillance economy of the web. The personal information an empathetic AI will want about you goes far beyond information that helps to get you to click on ads. This third principle emphasizes the need for stringent measures to respect and protect that deeper personal information.

Jon Neiditz, Code of Ethics for “Empathetic” Generative AI

4. Non-Discrimination: This advocates for fair treatment of all users, regardless of their background. AI systems should treat all users equally, ensure inclusiveness in training data, monitor and mitigate biases continuously, and empower users to report perceived biases or discrimination. Ralph Losey comment: Obviously there is a need for some intelligent discrimination here among users, which is a challenging task. The voice of a Hitler-type should not be given equal weight, and should be included in training data with appropriate value judgments and warnings.

5. Autonomy: Emphasizing the need for AI systems to respect users’ freedom to make their own decisions. The AI should provide support, information, and recommendations, but ultimately, decisions lie with the user. This principle encourages independent decision-making and discourages over-reliance on the AI system and undue influence. Ralph Losey comment: The old saying “trust but verify” always applies in hybrid, human/machine relations; so too does the parallel computer saying, “garbage in, garbage out.”

6. Beneficence and Non-Maleficence: This principle highlights the responsibility of AI systems to act beneficially towards users, society, and the environment, while avoiding causing harm. Beneficence involves promoting wellbeing and good, while non-maleficence involves avoiding harm, both directly and indirectly. Sometimes, there can be trade-offs between beneficence and non-maleficence, in which case, a balance that respects both principles should be sought.

7. Empathy with Compassion: As Jon explains: “This principle focuses on and extends beyond the AI’s understanding and mirroring of a user’s emotions, advocating for a broader concern for others and society as a whole in which empathy and compassion inform each other.” This principle promotes empathetic and compassionate responses from the AI, encourages understanding of the user’s emotions and a broader concern for others. The AI should continuously learn and improve its empathetic and compassionate responses, including ever better understanding of human emotions, empathetic accuracy, and adjusting its responses to better meet user needs and societal expectations.

8. Environmental Consideration: AI systems have a responsibility to operate in an environmentally sensitive manner and to promote sustainability. This includes minimizing their environmental footprint, promoting sustainable practices, educating users about environmental matters, and considering environmental impacts in their decision-making processes.

9. Regulation and Oversight: We need external supervision to ensure empathetic AI systems operate within ethical and legal boundaries. This requires a regulatory framework governing AI systems, with oversight bodies that enforce regulations, conduct audits, and provide guidance. Transparency in AI compliance and accountability for non-compliance is vital. So too is active user participation in the regulation and oversight processes, to promote an inclusive regulatory environment.

Thoughts on Regulation

Regulation should include establishment of some sort of quasi-governmental authority to enforce compliance, conduct regular audits, and provide ongoing guidance to developers and operators. Transparency and accountability should serve as fundamental tenets, allowing for scrutiny of AI systems’ behaviors and holding individuals and organizations accountable for any violations.

In conjunction with institutional regulation, it is equally crucial to encourage active participation from users and affected communities. Their input and experiences are invaluable. By involving stakeholders in the regulatory and oversight processes, we can forge a collective responsibility in shaping the ethical trajectory of Empathetic AI.

Regulation should foster an environment that supports innovation and responsible, ethical practices. It should pave the way for a future where technology and empathy coexist harmoniously, yielding transformative benefits, while safeguarding against emotional exploitation and other dangers. A regulatory framework, founded on the principles Jon has proposed, could provide the necessary checks and balances to protect user interests, mitigate risks, and uphold ethical standards.

Conclusion

I agree with Jon Neiditz and his call to action in Code of Ethics for “Empathetic” Generative AI. The potential of AI systems to comprehend and respond to human emotions requires a rigorous, comprehensive approach to regulation. We should start now to regulate Empathetic Generative AI. I am ready to help Jon and others with this important effort.

The movie HER, except for the ending ascension, which is absurd, provides an all too plausible scenario of what could happen when empathic chatbots are super-intelligent and used by millions. We could be in for a wild ride. Human isolation and alienation are already significant problems of our technology age. It could get much worse when we start to prefer the “perfect other” in AI form to our flawed friends and loved ones. Let’s try to promote real human communities instead of people talking to AI chatbots. AI can join the team as a super tool, but not as a real friend or spouse. See: What is the Difference Between Human Intelligence and Machine Intelligence? and Sam Altman’s Favorite Unasked Question: What Will We Do in the Future After AI?

In the What is the Difference blog I quoted portions of Sam Altman’s video interview at an event in India by the Economic Times to show his “tool not a creature” insight. There is another Q&A exchange in that same YouTube video, starting at 1:09:05, that elaborates on this in a way that directly addresses this Ai intimacy concern.

Questioner (paraphrased): [I]t’s human to make mistakes. All people we love make mistakes. But an Ai can become error-free. It will then have much better conversations with you than the humans you love. So, the Ai will eventually replace the imperfect ones you love; the Ai will become the perfect lover.

Sam Altman: Do you want that? (laughter)

Questioner: Yeah.

Sam Altman: (Sam explains AI is a tool not a creature, as I have quoted before, then talks about Ai creativity, which I will discuss in my next blog, then Sam turns to the intimacy, loved ones question.)

If some people want to chat with the perfect companionship bot, and clearly some do, a bot that never upsets you and never does the one thing that irks you, you can have that. I think it will be deeply unfulfilling (shakes head no). That’s sort of a hard thing to feel love for. I think there is something about watching someone screw up and grow, express their imperfections, that is a very deep part of love, as I understand it. Humans care about other humans and care about what other humans do, in a very deep way. So that perfect chatbot lover doesn’t sound so compelling to me.

Sam Altman, June 7, 2023, at an Economic Times event in India

Once again, I agree with Sam. But many naive, lonely people will not. These people will be easy to exploit. They will find out the hard way that true love with a machine is not possible. They will fall for false promises of intimacy, even love. This is something regulators should address.

Again, a balanced approach is needed. Ai can be a tool to help us develop and improve our empathy. If done right, empathetic GPT chats can enhance our empathy with our fellow humans and other living creatures. Empathetic conversations with an Ai could help prepare us for real conversations with our fellow humans, warts and all. It could help us avoid manipulation and the futile chase of marketing’s false promises. This video tries to embody these feelings and the futile quest for emotional connection with Ai.

Video created by Ralph Losey using ChatGPT-4 (Code Interpreter version). The video depicts the futile quest for emotional connection with Ai. Original background sounds by Ralph Losey.

Ralph Losey Copyright 2023

All Rights Reserved


Sam Altman’s Favorite Unasked Question: What Will We Do in the Future After AI?

July 7, 2023
Sam Altman, All Images by Losey using Midjourney and Photoshop

Sam Altman had some fascinating things to say recently about technology revolutions and what comes after AI. Apparently that is his favorite unasked question. After completing a grueling world tour of meetings and interviews, Sam Altman gave a lengthy video interview to Bloomberg Technology on June 22, 2023 at the Bloomberg Technology Summit. Journalist Emily Chang was the interviewer. I suggest you watch the full video, but there is one excerpt that is especially interesting. Below is the transcript, which I edited somewhat for readability. The quote begins with Sam responding to the classic interviewer question of what is the one question he wishes people would ask him, but usually don’t. This is followed by some joking, then Sam settles into a serious response.

Sam: One question. I’m always excited to just talk about what can happen in the coming few years and decades with this technology.

Emily interrupts: So what are we all going to do when we have nothing to do? (Question asked in an uncharacteristic whisper, followed by some joking response, here edited out, then Sam’s actual answer, at video 18:05.)

Sam: I don’t think we ever run out of things to do. It’s deeply in our nature to want to create, to want to be useful, to want to feel the fulfillment of doing something that matters. If you talk to people from thousands of years ago, hundreds of years ago even, the work we do now would have seemed unimaginable at best, and probably trivial. This is not directly necessary for our survival, in the sense of food or whatever.

The shift happens with every technology revolution. Every time we worry about what people are going to do on the other side. And every time we find things. I expect that not only will this time not be an exception to that, but the things we find will be better, more interesting, and more impactful than ever before.

There are a lot of people talking about AI as the last technological revolution. I suspect that from the other side it will look like the first! The other stuff will be so small in comparison.

I think the whole thing about technological revolutions is sort of dumb because my understanding has always been that it’s just one long continuous one. But it is continuing exponentially. So what will be enabled is stuff that we can’t even imagine on the other side.

We will have way too much to do, if you want. If you want to just sit around and do nothing, that will be fine too. (said jokingly)

Emily: Alright, alright, bonbons and beaches are in my future. (laughter)

Sam: I don’t think that is what you will turn out to want, but if you do, great. [tape 19:40]

Sam Altman, June 22, 2023, Bloomberg Technology Summit
Future, very busy, Sam Altman

The fear that Ai will take over and humans will have nothing to do after Ai comes of age is widespread. It was strange to hear Emily Chang tentatively ask her question in a whisper, followed by lots of nervous laughter. The anxiety in her and the large audience was real. There is fear we will be replaced, and become useless, irrelevant, as computers become ever more intelligent.

I have indirectly considered this core issue before in several blogs, including What is the Difference Between Human Intelligence and Machine Intelligence? There I concluded, following the lead of Sam Altman, that the answer to the question of the difference between human and machine intelligence is that there is no real difference.

Still, again following Altman’s lead, I concluded that we humans are important, that we are special in ways other than intelligence. There is far more to us than IQ. If we were just intelligence, then we could be replaced. But, as Sam Altman said, “[E]ven if humans aren’t special in terms of intelligence, we are incredibly important. . . . I think there’s something strange and very important going on with humans. I really, deeply hope we preserve all of that.” I agree with Sam.

Sam Altman believes that intelligence is a fundamental property of all matter. He was not surprised when his generative Ai learned to learn, since it is based on human brain neurology. Still, in spite of this, Sam and OpenAI do not believe that its smart Ais are living beings. As previously discussed in my blog, What is the Difference, Sam Altman believes, and I agree, that Ais are intelligent tools, not living creatures.

The Ai tools may become all-around more intelligent than we are someday, and this day may come sooner than we think, but even then, AI would still just be a tool, not a creature. It cannot replace humanity, only augment it.

So back to the question Emily Chang fearfully asked Sam Altman: what will we do after we are on the other side of the Ai technical revolution, a time when the line of exponential change is going nearly straight up? I agree with Sam’s answer that Ai will eventually be seen as the first great technological revolution, not the last. All of the other technology advances leading to that point will seem minor in comparison. The Singularity will open up new advances far greater than ever conceived before. As this happens, “[w]e will have way too much to do, if you want.”

Alternative Future, Older Sam Altman, still very busy

Not being a lie-on-the-beach-and-eat-bonbons kind of guy, I can think of many things as a lawyer and arbitrator that I’d like to do, if only I had the intelligent tools to do it. I’ve discussed a few of these things in another blog, Part 2 of Start Preparing For “THE SINGULARITY.” There I imagined what human lawyers might do after Ai attains a high level of proficiency. I also discussed a few practical things lawyers, judges, arbitrators and the public should do now to prepare. For ADR specialists and arbitrators such as myself, I suggested a three-part plan:

  • Continuous Learning: ADR professionals should stay informed about advancements in Ai and how they impact dispute resolution processes. This includes understanding new technologies, tools, and methodologies that emerge as Ai continues to evolve.
  • Develop Complementary Skills: ADR professionals should focus on honing skills that complement Ai systems, such as empathy, cultural awareness, and creativity. These human-centric skills will remain valuable even as Ai takes on a more prominent role in dispute resolution.
  • Collaborate with AI Systems: ADR professionals should embrace the opportunity to work with Ai systems, leveraging their strengths and insights to enhance their practice. This might involve using AI tools to analyze complex data, facilitate communication, or generate creative solutions to disputes. It could be a great boon to mediators everywhere.
Losey, Start Preparing For “THE SINGULARITY” Part 2.
Super-Intelligent AI Tools will transform the law for the better

In the Start Preparing For “THE SINGULARITY” blog I elaborate on this plan and, more generally, on the dramatic, positive changes to the law that super-intelligent Ai tools make possible. The new Ai based ideas discussed include:

  • Ai systems that can analyze vast amounts of data, identify patterns, and generate creative solutions to complex legal problems that were, despite the progress of predictive coding, previously beyond our reach. This could lead to the development of new legal strategies, more accurate predictions of case outcomes, and innovative approaches to dispute resolution.
  • New forms of alternative dispute resolution could be developed that leverage advanced Ai capabilities. For instance, Ai-mediated negotiation platforms could facilitate communication between parties and propose fair, data-driven settlements. Ai-driven arbitration systems could analyze complex legal disputes and deliver unbiased, well-reasoned decisions more quickly and cost-effectively than traditional arbitration processes. Ai systems might someday be able to autonomously handle all aspects of dispute resolution, developing entirely new processes and approaches that leverage their superior cognitive abilities. This can happen in the courts too, but I expect it will happen in private arbitrations first, probably with small, high volume, consumer claims.
  • Along those lines, and including pre-suit interventions, Ai systems could analyze and understand the complexities of human emotions, motivations, and values, allowing them to facilitate dispute resolution in previously unimaginable ways. They might create dynamic, adaptable legal frameworks that evolve in real-time, anticipating and addressing emerging conflicts before they escalate. This could result in previously unheard-of efficiency, accuracy, and fairness. With super Ai in the hands of skilled attorneys, we should be able to resolve most disputes before they mature into claims and lawsuits; nip them in the bud. This may seem far out, but based on my experience, I believe this is a near certainty.
  • Ai-driven legal tools should make legal advice and representation far more accessible and affordable for individuals and small businesses than it is now. By automating routine tasks and streamlining complex processes, Ai systems should lower the cost of legal services and improve the efficiency of the justice system.
  • AI systems could be employed in drafting legislation and regulatory frameworks, leveraging their ability to analyze vast amounts of data, predict potential consequences, and optimize policies for societal well-being. This could lead to more effective, evidence-based, and adaptable legal frameworks that address emerging challenges, such as Ai regulation, climate change, cybersecurity, and inequality.
Future Ralph, Lawyer/Arbitrator

Lawyers will still be very busy after AI, if they want to be. I do. Most of the world does not have a lawyer, does not have access to justice. Most people cannot afford to pay a lawyer, cannot afford to go to court or other ADR forums. There is a great unserved need for good legal advice and fair dispute resolution. The world currently has a near inexhaustible need for justice. After AI, the lawyers who want to continue to work will have many great opportunities to do so. Those who are ready will have an opportunity to do things that have never been done before.

Even if we are not the center of the Universe as we once thought, and even if our intelligence and thinking ability is not unique, as we once thought, and even if our machine tools ultimately become smarter than we are, I feel, like Sam Altman does, that humanity is still very special. We are invaluable, living beings. We are capable of compassion, empathy, love and spiritual realization. Our living corporeal experience makes us unique and gives us the possibility of ineffable wisdom. Rest assured, there will always be plenty for humans to do. As our tools improve, so too will our accomplishments. Exciting times and opportunities lie ahead in all fields, especially the law.

To conclude, here is a small simulation of Sam Altman for entertainment and educational purposes. He talks about the promise of AI. The video uses a digital art image of him I made and a generic computer voice. The words are largely Sam’s own. Notice that he does not even mention advances in our system of justice. That is up to us to conceive and implement.

Simulated Sam Altman video making his usual points about the many wondrous advances that Sam, and many others, expect Ai to make possible.

Ralph Losey Copyright 2023 ALL RIGHTS RESERVED


UPDATE ON TIME TRAVEL: Sequel to my 2017 blog “Mr. Pynchon and the Settling of Springfield: a baffling lesson from art history”

July 5, 2023
All images in the supplement are by Ralph Losey using Midjourney

This is an update to an earlier blog that I wrote, tongue-in-cheek, in 2017 on “evidence” of time travel from a painting. I found out, just yesterday, that this past blog went viral some time ago, honestly not sure when, with over 20,000 hits.

This prompted me, and my AI friends, to look into and write about the latest science on time travel. I also add another painting to the mix, one from the 17th Century, that Tim Cook swears has an iPhone in it. Plus, I must pet Schrodinger’s Cat, face Time Paradoxes, and, as usual, add many Midjourney AI graphics that I crafted for maximum right-brain impact. So put down your prayer books, read this for a few minutes instead, and see where and when it takes you.

Introduction

The past blog, Mr. Pynchon and the Settling of Springfield: a baffling lesson from art history, concerned an oddity of art history, a painting by a semi-famous U.S. painter, Umberto Romano, which supposedly contains evidence of time travel. The painting was created in 1933 and clearly shows a Native American staring at an iPhone-like object. It is not a fake painting. You can see it for yourself in the original article that is included below. What does it look like to you? The provenance proves it is not fake. It was painted as a mural on the wall of a Post Office in Springfield, Massachusetts. People have been walking past it every day since 1933. They look but do not see.

Of course, this future image transfer might not be the result of physical time travel, but instead, the young artist, Umberto, could have had a dream or vision of the future. Perhaps the vision was intentionally induced in a hypnagogic state, or by drugs of some sort? That seems much more likely to me, but still poses intellectual problems.

Whatever the cause, discounting chance or mass delusion, any accurate vision of the future is a mystery. It is evidence of the permeability of time, which should, by logic, and old Newtonian science, be a solid wall of causality. Visions of the future should be impossible, and yet? Schrodinger’s Cat? Infinite parallel universes? Everything, even iPhones, everywhere, all at once. Welcome to 21st Century spooky science.

Infinite icons, everywhere, all at once

Tim Cook and More Art Evidence of Time Travel

In my opinion, the 1933 painting with iPhone by Umberto could be admitted into evidence, that is, if there were ever an actual case or controversy where time travel was relevant. Of course, we have now seen that the centuries-old case or controversy requirement may be waived by the Supreme Court. Apparently this can be done any time a majority of the Justices deem it necessary to drag the country back in time. Time and law are so malleable these days.

Supreme Court as a time machine

For evidence of time travel, I would also call Tim Cook, Apple’s CEO, as a witness to the stand. Cook has publicly stated, just after seeing an original painting by 17th-century Dutch artist Pieter de Hooch, that the man in the painting is holding an iPhone. The photos of the art below have not been altered, aside from some color variation, which was not my doing. Sure looks like an iPhone to me. Tim will swear to it.

LadBible reported that Cook was asked in a conference, the day after seeing the painting, “Do you happen to know Tim, where and when the iPhone was invented?” Cook replied: “You know, I thought I knew until last night…. in one of the paintings I was so shocked. There was an iPhone in one of the paintings.” Acknowledging that his claim may come off as ridiculous, he explained, “It’s tough to see, but I swear it’s there. I always thought I knew when the iPhone was invented, but now I’m not so sure anymore.” Proof of time travel? 350-year-old painting seems to feature an iPhone, Tim Cook agrees. No further questions of this witness.

Fake image of Tim Cook in the style of a painting by Pieter de Hooch

Time Paradox: a major problem of time travel theory

Traveling forward in time, in the sense of experiencing time dilation due to high velocities or strong gravitational fields, is well established in physics, supported by both special and general relativity. It has been proven true many times with atomic clocks on planes and other methods. Time for a moving traveler will slow down relative to a stationary observer. Their time keeps slowing down as their speed approaches the speed of light, and it also slows down within a strong gravitational field, such as near a black hole. When they return to their prior time-space, they will have traveled into the future. Space and time are relative.
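The standard special-relativity formula makes this quantitative. If Δt₀ is the time ticked off on the traveler’s own clock, a stationary observer measures the longer interval:

\[
\Delta t = \frac{\Delta t_0}{\sqrt{1 - v^2/c^2}}
\]

As v approaches the speed of light c, the factor under the square root approaches zero, so the traveler ages only a tiny fraction of the time that passes for the stationary observer; in effect, one-way travel into the future.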

Time Travel Mysteries

In Einstein’s unified four-dimensional space-time framework, time and space are interconnected. But, the actions of the U.S. Supreme Court aside, there are major theoretical problems with time flowing the other way, chief among them, time paradoxes. Travel back in time would logically disrupt the conventional sequence of cause and effect.

The best known time paradox is the “grandfather paradox.” In this scenario, a time traveler goes back into the past and inadvertently or deliberately kills their grandfather before their parent (the time traveler’s mother or father) is born. Consequently, the time traveler would never be born, but if they were never born, then they couldn’t have traveled back in time to kill their grandfather in the first place. This cycle presents an intractable contradiction.

Hey Granddad, what’s up?

Such paradoxes are the result of a linear perspective of time, where causes precede effects. Most physicists and philosophers argue that time paradoxes prove that backward time travel is inherently impossible. Others suggest that they could be resolved through a “multiverse” theory, in which the time traveler’s actions create or move them into a parallel universe. There are other explanations, such as bending space, wormholes, etc., but this one is the most popular now.

Time Travel and the Multiverse Theory: ‘Everything, Everywhere, All At Once’

The multiverse theory of time travel suggests that there are potentially an infinite number of universes, or “multiverses,” each existing parallel to one another. When one travels in time, they are not actually altering their own past or future within their original universe. Instead, they’re moving into a different parallel universe. So much for Leibniz’ “best of all possible worlds.”

Multiverse and Time

One way to comprehend this concept is through the idea of “quantum superposition,” as seen in the thought experiment “Schrodinger’s Cat,” which posits that all possible states of a system exist simultaneously until observed. Similarly, for every decision or event, a universe exists for each potential outcome. Hence, when you travel back in time and change an event, you merely shift to a different parallel universe where that different event occurs.

Quit looking at me!

This theory serves as a solution to time travel paradoxes. For instance, in the case of the grandfather paradox, you could go back and kill your grandfather, but that would be in a different universe. In your original universe, your grandfather still survives to have your parent, and subsequently, you. Hence, there’s no paradox.

Several renowned theoretical physicists have lent their support to some variation of the multiverse theory, including:

  1. Hugh Everett III. Way back in 1957, Everett proposed the “Many-Worlds Interpretation” of quantum mechanics, which can be thought of as a kind of multiverse. According to this interpretation, every quantum event spawns new, parallel universes.
  2. Stephen Hawking. Although he did not like the idea, Hawking often referenced the multiverse, and at the end of his life he was proposing ways to test it. He referenced it in the context of the anthropic principle, which states that we observe the universe the way it is because, if it were different, we wouldn’t be here to observe it.
  3. Max Tegmark. He proposed a taxonomy of multiverses, classifying them into four different levels.
    1. Level 1: The Extended Universe: This level suggests that if you go far enough in any direction, you’d start seeing duplicates of everything, including Earth and yourself. The universe is so big, and there are only a finite number of ways to arrange particles, that patterns must eventually repeat (a toy counting sketch follows this list).
    2. Level 2: The Bubble Universes: This level suggests that our universe is just one “bubble” among many in a bigger cosmos. Each bubble universe may have different physical laws, so what’s possible in one might not be possible in another.
    3. Level 3: The Many-Worlds Universe: This level comes from a way of interpreting quantum mechanics, where every possible outcome of a quantum event happens, but in a different universe. So, if you flip a coin, it lands both heads and tails, but in separate universes.
    4. Level 4: The Ultimate Multiverse: This level suggests that every mathematically possible universe exists. It’s the catch-all multiverse, where anything you can describe with mathematics, no matter how strange or unlikely, has a universe where it’s real.
  4. Geraint Lewis. Lesser known than the first three, Professor Lewis suggests that the burst of inflation in the early stages of our universe might be eternal, with individual universes crystallizing out of it, each with its own unique laws of physics.
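The Level 1 argument promised above is really just the pigeonhole principle applied on a cosmic scale. A back-of-the-envelope sketch, with a deliberately tiny made-up grid standing in for a region of space:

```python
# Toy pigeonhole argument behind Tegmark's Level 1: model a region of
# space as a 4x4 grid whose cells are either empty (0) or filled (1).
cells = 4 * 4
distinct_layouts = 2 ** cells
print(distinct_layouts)  # 65536 possible arrangements

# Any collection of more than 65,536 such regions must contain two
# identical ones. Scale the grid up to real particle physics and the
# same logic forces repeats, given enough (or infinite) space.
print(distinct_layouts + 1)  # regions guaranteeing a duplicate
```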

Conclusion

Science says time travel is possible, although it is very, very unlikely that you can go backwards. So time travel to the future might be possible, but there is no going back. Thus, if you could, for instance, somehow go from 1933, when no one had ever seen or even conceived of a cell phone, to today, when they are ubiquitous, you could not return to 1933 to include those cell phones in your paintings. That is, unless there are an infinite number of parallel universes, in which case anything is possible. Everything may all be happening at once, and time itself may be a kind of delusion that helps us make sense of it all.

Time Machine somehow built in the 1930s

Did Umberto Romano somehow transcend time and see the key icon of the early 21st Century, the iPhone? Was time travel his special artistic skill? Does that explain the names of many of his other paintings? Such as:

Please take a moment now to read the blog that I wrote six years ago, below, and then, sometime in the future, let me know what you think. I will try to remember to watch the viewing stats this time. Who knows, I may even write a prequel.

_____________________

Mr. Pynchon and the Settling of Springfield: a baffling lesson from art history

Umberto Romano (1905-1982)

Mr. Pynchon and the Settling of Springfield is the name of a mural at the Post Office in Springfield, Massachusetts, painted by Umberto Romano in 1933. Note the date. Time is important to this article. Umberto Romano was supposedly born in Bracigliano, Italy in 1905 and moved to the United States at the age of nine. He was then raised in Springfield, Massachusetts. His self-portrait is shown at right. The mural is supposed to depict the arrival in 1636 of William Pynchon, an English colonist, later known as the founder of Springfield, Massachusetts.

The reason I’m having a bit of fun with my blog and sharing this 1933 mural is the fact that the Native American shown in the lower right center appears to be holding an iPhone. And not just holding it, but doing so properly, with the typical distracted gaze that we all seem to adopt these days. Brian Anderson, Do We All See the Man Holding an iPhone in This 1937 Painting? (Motherboard, 8/24/17). Here, let me focus in on it for you and you will see what I mean. Also, click on the full image above to enlarge it. Very freaky. That is undeniable.

Ok, so how did that happen? Coincidence? There is no indication of vandalism or fraud. The mural was not later touched up to add an iPhone. This is what this Romano character painted in 1933. Until very recently, everyone just assumed the Native American with the elaborate goatee was looking at some kind of oddly shaped hand mirror, a popular item of trade in the time depicted, 1636. Only lately has it become obvious that he is handling an iPhone. Looks like a large version 6.1 to me. I can imagine the first people waiting in line at the Post Office in Springfield who noticed this oddity while looking at their own iPhones.

The folks who like to believe in time travel now offer this mural as Exhibit “A” to support their far-out theories. Also see: Green, 10 Most Compelling Pieces Of Evidence That May Prove Time Travel Exists (YouTube, 7-3-16).

I do not know about that, but I do know that if time travel is possible, and some physicists seem to think it is, then this is not the kind of thing that should be allowed. Please add this to the list of things that no superintelligent being, either natural or artificial, but especially artificial, should be allowed to do. Same goes for screenwriters. I for one cannot tolerate yet another naked Terminator, or whatever, traveling back in time.

But seriously, just because you are smart enough to know how to do something does not mean that you should. Time travel is one of those things. It should not be allowed, well, at least, not without a lot of care and attention to detail so as not to change anything. Legal regulations should address time travel. Build that into the DNA of AI before they leap into superintelligence. At least require all traces of time travel to be erased. No more painting iPhones into murals from the 1930s. Do not awaken the batteries, I mean the people, from their consensus trance with hints like that.

So that is my tie-in to AI Ethics. I am still looking for a link to e-discovery, other than to say, if you look hard enough and keep an open mind, you can find inexplicable things every day. Kind of like many large organizations’ ESI preservation mysteries. Where did that other sock go?

Umberto Romano Self Portrait

So what is your take on Umberto Romano‘s little practical joke? Note he also put a witch flying on a broomstick in the Mr. Pynchon and the Settling of Springfield mural, along with many other odd and bizarre things. He was known as an abstract expressionist. Another of his self-portraits is shown above, titled “Psyche and the Sculptor.” (His shirt does look like one of those new skin-tight men’s compression shirts, but perhaps I am getting carried away. Say, what is in his right hand?) Romano’s work is included in the Metropolitan Museum of Art, the Whitney Museum of American Art, the Fogg Art Museum at Harvard, and the Corcoran Gallery and Smithsonian Institution in Washington. In discussing Mr. Pynchon and the Settling of Springfield, the Smithsonian explains that “The mural is a mosaic of images, rather than depicting one specific incident at a set point in time.” Not set in time, indeed.

One more thing – doesn’t this reclining nude by Umberto Romano look like a woman watching Netflix on her iPad? I like the stand she has her iPad on. Almost bought one like it last week.

Some of Romano’s other works you might like are:

These are his titles, not mine. Not too subtle, was he? There is still an active market for Romano’s work.

Ralph Losey Copyright 2023 – ALL RIGHTS RESERVED 

