Exclusive Report: New Appellate Court Opinions on Artificial Intelligence and Hallucinations

October 4, 2023

I unearthed important new case law this week that has never been seen before or discussed. This article gives an exclusive report of three appellate court opinions that discuss artificial intelligence and hallucinations. This is a key issue of our times.

Fake image by Ralph of AI Zombies Mind Controlled by CIA.

The hallucinations in question are not, mind you, by an AI, although AIs play a part in the hallucinations. The hallucinations are by the plaintiffs themselves, including, just for instance, allegations of AI robot zombies and vast CIA conspiracies. Did you know Charles Barkley was an agent using mind control to turn humans into artificial intelligence? The pro se plaintiff wanted $35 Million in damages for that claim!

You may well wonder, but I assure you these appellate court opinions are all quite real.

Aljindi v. United States

I’ll start with a my favorite, the case of the PhD who invented AI, or claims to, Dr. Ahmad Aljindi. Aljindi v. United States, 2023 U.S. App. LEXIS 8069 *; 2023 WL 2778689 (Fed. Cir., 4/5/23); Aljindi v. United States, 2022 U.S. Claims LEXIS 2586 (Fed. Cl., Nov. 28, 2022). In his latest pro se suit, this time against the U.S. government for copyright infringement, Aljindi claims that he not only invented Artificial Intelligence, but also Information Security and Legacy Information Systems. Ahmad Aljindi, who has a history of pro se litigation, got a PhD in 2015 in Business Administration from an online school, NorthCentral University. His PhD dissertation must have been awe inspiring.

Image of Delusional Young Man at Work by Ralph using various ingenious AI tools.

Did he really invent all these things, or is he hallucinating? Hard to say, isn’t it. Better take this one all the way up to the Appeals Court. I am surprised the U.S. Supreme Court did not weigh in too.

The history of this case also seems like a hallucination. This suit, as first pled, “alleged various claims, including employment discrimination; intellectual property theft; ‘negligence and tort,‘” Aljindi v. United States, No. 2022-1117, 2022 U.S. App. LEXIS 12530, 2022 WL 1464476, at *1 (Fed. Cir. May 10, 2022). Aljindi’s pro se complaint included allegations of “ongoing judicial corruption, abuse, and torture in addition to the Government’s abuse and torture.” The usual thing.

Delusional thinking and hallucinations trouble many humans, not just Generative AI. Psychedelic art image by Ralph.

The law suit was dismissed by the District Court. Then the good doctor appealed. The appeals court affirmed the dismissal of course, but, and here is the funny part, the dismissal was only affirmed in part. That’s right, the appeals court remanded the case back to the trial judge, who must have been thrilled, since it is well known that they love to abuse and torture. Just ask any attorney. In ordering the remand the appellate court, no doubt with substantial help from its law clerks, explained it actions:

But we vacated-in-part the trial court’s dismissal because Dr. Aljindi’s complaint “mentioned copyrights law violations in the relief section,” which could “be liberally construed as a copyright infringement claim over which the Court of Federal Claims would have jurisdiction.” 2022 U.S. App. LEXIS 12530, [WL] at *3 (cleaned up). Accordingly, we remanded for the trial court “to consider the Government’s position that Dr. Aljindi’s complaint fails to state a claim for copyright infringement.” Id.

Aljindi v. U.S., 2023 U.S. App.LEXIS 8069 *2 (4/5/23)
Genius at work inventing AI and Cybersecurity. Digital image by Ralph.

Apparently some appellate law clerks wanted to read more of Aljindi’s amazing claims and talked their judges into a partial remand, out of an abundance of caution, of course. They were not disappointed. Aljindi on remand now claimed to have invented AI, Information Security and Legacy Information Systems, cause, why not? Al Gore did invent the Internet, after all.

These “unusual” claims were made by Dr. Aljini to try to support his pleading for copyright violation. Surprisingly, that tactic did not work. The copyright claims were dismissed by the trial judge because duh, you cannot copyright ideas, even hallucinatory ones. Aljini, of course, appealed again, much to the appellate clerks’ delight. I can almost see them pump fisting and saying, yes! Plan well done.

Young law clerks celebrate a rare moment of levity. Fake photo by Ralph.

The Court of Federal Claims took time, again, to write a per curiam affirmed opinion. All part, I suppose, of what Aljindi called “ongoing judicial corruption, abuse, and torture.” Here are some select quotes, again, you be the judge, hallucinations or not? (citations to the record omitted)

Dr. Aljindi argued on appeal that the “Government used [his] property in ALL formal AI Strategies published by the federal government . . . as [he had] discovered this entire scientific field in its entirety.”

Dr. Aljindi clarifies in his briefing, however, that his copyright claim is not founded on any alleged infringement of the copyrightable aspects of his dissertation; rather, he explains that “[t]he scientific intellectual property” at issue is “the discovery of the entire Information Security, AI, and LIS scientific field in its entirety and establishing this scientific field from scratch.” Dr. Aljindi arguing that “[e]verything is based on [his] scientific research and [his] own property“) … . Dr. Aljindi does not identify any specific expression of these ideas and concepts that the government allegedly copied; instead, he repeatedly contends generally that “everything built on top of [his] property is [his] property.”

[H]ow did these federal agencies . . . know about the relationship between AI, Information Security, and LIS without reading and taking my property and building on its formal scientific findings!

Aljindi v. U.S., 2023 U.S. App.LEXIS 8069 *2, *3 (4/5/23)

How indeed?

Sometimes tortured souls have delusions of grandeur to try to cope. I’ve done that a few times myself. Image of a mad genius by Ralph.

I can imagine Dr. Aljindi thinking to himself, how else could they have possibly known? It’s mine, all mine, I say, stolen by the evil feds. I will sue you all!

Yes, I swear, this is a real opinion, not an delusion. So are the next two, which, in some ways, are even better.

Emrit v. Barkley

This is another pro se case, they are the best for hallucinations, where the Third Circuit bothered to write a per curiam opinion on AI and hallucinations. Once again, I suspect the judges’ clerks talked them into it. Emrit v. Barkley, 2023 U.S. App. LEXIS 11188 *; 2023 WL 3300970 (3rd Cir., 5/8/23). The plaintiff here is infamous, having filed over 500 pro se lawsuit across the country. This one is against former NBA basketball player, Charles Barkley, and the Subway fast-food chain. It involves both AI and the CIA. Of course, the CIA has long been known to be using AI for nefarious ends. What we did not know, until this law suit enlightened us, is how closely involved Barkey and Subway were involved. Pro se plaintiff to the rescue!

Image by Ralph depicting Charles Barkley as an AI evil genius.

Emrit claims in his Appeal Brief that the “CIA utilizes advertisements of Charles Barkley, Subway, Fan Duel, and sporting goods to annoy or harass” [*2]  him. Id. at 5. Emrit requested $45 million in damages. Id. at 9.” The trial judge dismissed the original pleading as frivolous. Can you imagine? Still, Emrit appealed to the Third Circuit and tried again.

Emrit argued in his appeal that the Barkley Subway and other “advertisements provided a way for technology companies to ‘engage in a form of mind control to turn humans into artificial intelligence.’” Yup, Barkley and Subway are part of a secret CIA mission to turn humans into Artificial Intelligence. Apparently, all the big tech companies are in on it too. Maybe they have already been turned into AI. It’s not clear from the pleadings. What is clear is the allegation that the CIA is able to turn humans into AI by mind control using television and advertisements, especially ones with Charles Barkley in them. Who can resist the trance inducing eyes of Charles Barkley?

Those are not the kind of allegations that appellate court law clerks, usually fresh out of law school, read every day. Usually it is pretty boring stuff. One company suing another, blah, blah. I have no doubt the Clerks of the Third Circuit were happy to read this nonsense and eagerly passed the Barkley briefs around.

Law clerks at a glass table celebrating. Digital art image by Ralph.

Of course, the third Circuit affirmed the lower court dismissal without even a partial remand, “because Emrit’s complaint is frivolous.” Really? But what about copyright? I guess these clerks were not as persuasive as the ones in Aljindi v. United States. Still, they provided the explanations of the Barkley AI hallucinations in the per curiam opinion quoted above, and we are all better for that.

I have a suspicion that we have not seen the last of this particular hallucination. We may see it in a movie some day. Turning people into plain old zombies is getting kind of old. Robot-Artificial Intelligence zombies are much better. Plus, it is well known, that anything with AI in it these days sells, especially if they are crazy AIs. No doubt a copyright suit or two will eventually come out of all of this this as well.

Hallucinatory image by Ralph of humans turned into AI robots by CIA mind control.

Mateen v. FNU LNU

Now it’s the Fifth Circuit clerk’s time to have fun and write a per curiam affirmance opinion on a different AI hallucination. Mateen v. FNU LNU, 857 Fed. Appx. 209 (5th Cir. 2021). If you are at all squimish, you might not want to read on. By the way, the mysterious defendant in this case, FNU LNU, is an acronym commonly used in the justice system for when the identity of the person or persons charged or sued remains a mystery. In that case, they are often listed in court records as “Fnu Lnu,” shorthand for “First name unknown, Last name unknown.”

This one involves a pro se prisoner, Shazizz Mateen, aka Reginald Bowers, with a very serious criminal record. As a prisoner he sued in federal court in Texas unknown people in an unknown ambulance company and unknown people in an unknown hospital. Shazizz alleged that these unknown persons were all part of a “a vast conspiracy pursuant to which, inter alia, he was lobotomized and had an artificial-intelligence chip inserted into his brain that turned him into an android slave.” The appeal case was heard by Judge Jolly. I kid you not.

Fake AI Photo by Ralph of a prisoner after brain surgery.

It is bad enough to be in prison, but to also have an AI chip put in your brain, that’s tough. Then to have an AI chip turn you into an android slave? Well, it does not get any worse than that. Maybe he deserves it, who knows. Still, his allegations were bizarre enough that most law clerks would want to write an opinion about it for their judge. The law clerks here almost didn’t get their way. The appeals court panel, led by Judge Jolly, agreed to allow a per curiam affirmed opinion, but only if it was technically an “unpublished” opinion with no precedent value. That’s why it can only be found online as part of the Federal Appendix.

So apparently the law clerk’s got their way and the case was published, but it was unpublished at the same time. Sounds like a zombie opinion to me. Another sign that an AI chip in prisoners’ brains zombie film will likely be coming soon too.

AI zombie robots image created by Ralph using AI.

Conclusion

Humans are just as likely to be wrong and make things up as AIs are, maybe even more likely to do so, especially when they play lawyer. So do not be scared away from using AI tools like ChatGPT just because they sometime make mistakes and make things up, i.w. hallucinate. So do humans, even lawyers – just read the news. That is why lawyers still have job security to double check and quality control for errors and hallucinations.

Digital art by Ralph and his AIs depicting a human lawyer in court making a fool of himself.

By the way, no AI was used in the research for this article, nor for any of the writing. From my experience, GPTs are not that good at writing humor and do not understand the subtle nuances of satire. Still, AI is pretty good at accurate summaries, so I asked ChatGPT-4 to summarize this article. Here are its words.

Summary: The blog discusses three appellate court opinions centered on artificial intelligence (AI) and hallucinations. The discussed hallucinations are by the plaintiffs, not by AI, including outlandish claims like AI robot zombies and conspiracy theories involving Charles Barkley using mind control to turn humans into AI, with a demand for $35 million in damages.

  1. Aljindi v. United States:
    • Dr. Ahmad Aljindi, claiming to have invented AI, Information Security, and Legacy Information Systems, filed a pro se suit against the U.S. government for copyright infringement.
    • The trial court dismissed his claims, but the appellate court remanded for further consideration on the copyright infringement claim.
    • On remand, Aljindi continued his astounding claims, but the trial judge again dismissed them, resulting in another appeal.
  2. Emrit v. Barkley:
    • Pro se plaintiff filed a case against Charles Barkley and Subway, alleging a conspiracy with the CIA using advertisements for mind control and turning humans into AI.
    • The plaintiff sought $45 million in damages.
    • The case was dismissed as frivolous by the trial judge and affirmed by the Third Circuit, highlighting the wild accusations of mind control and AI transformation.
  3. Mateen v. FNU LNU:
    • A pro se prisoner alleged that unknown persons inserted an AI chip into his brain, turning him into an android slave as part of a vast conspiracy.
    • The appeals court allowed a per curiam affirmed opinion but noted it as unpublished with no precedent value.

The blog concludes by noting that humans, like AI, make mistakes and hallucinate, and while AI tools like ChatGPT might sometimes err, human oversight can help manage these issues, offering a degree of job security for legal professionals.

Chat GPT-4.0 summarizing this article

It is a decent, albeit dry summary. The summary does not seem to grasp the satirical and inside baseball humor of this article, well, at least attempts at humor. That confirms my point about AI’s not being ready for prime time comedy. Still, to be fair, I asked ChatGPT-4 to prepare jokes based on this article. Here is what it came up with. You be the judge of its quality.

Why did the man sue Charles Barkley and the CIA after eating a sandwich?

Because he believed the “sub-way” was leading him on a path to becoming an artificially intelligent robot, and he wanted his “byte” of the $45 million in damages for extra “processing” power! 🥪🤖

_______________

Why did the blogger love covering appellate court cases about AI and hallucinations? 

Because where else can you find “artificial intelligence” and “genuine delusions” co-starring in the same legal drama!

ChatGPT-4.0

Digital art image by Ralph summarizing this Blog.

Ralph Losey Copyright 2023. All Rights Reserved.


DefCon Chronicles: Hackers Response to President Biden’s Unprecedented Request to Come to DefCon to Hack the World for Fun and Profit

September 3, 2023

Hackers responded to the White House call by the thousands, including reporter-AI-hacker Ralph Losey, to try to break existing software in open contests. Ralph joined in the AI hack attack, but there were many other competitions to hack different systems. In this second in the Chronicle series we describe more of the details of the President’s policy, share some of the celebrity feds who came in person to make the President’s case and analyze the hackers’ response. In upcoming articles Ralph will report on the AI and other attacks at DefCon to find and kill computer bugs.

Computer AI Robo Bug image by Ralph Losey using Midjourney

The cybersecurity leadership of the White House and Department of Homeland Security personally attended DefCon 31. That includes the Homeland Security Department Secretary himself, Alejandro Mayorkas. The feds came to help officially open the conference, and then, with black hats in hand, to ask for help from DefCon hackers, answer their questions, offer employment to some, and make several new policy statements on consumer protection and national defense.

It looks like DefCon 31 was a breakthrough political event for hackers and DefCon. Never before had a government leader, especially the President of the United States, made a public call for hackers to help the country. Never before had White House experts, along with the dreaded Department of Homeland Security, asked hackers to go to Vegas to hack software. They even promised big cash awards in future DefCons. In Def Con 32 and 33, in 2024 and 2025, they promise to conclude a series of ongoing competitions that will go one throughout the years, leading to semi-finals and finals at DefCon 32 and 33. They promised awards of millions to winning teams, including a top prize of $4 million for the team that “best secures vital software.” See, Hackers to compete for nearly $20 million in prizes by using A.I. for cybersecurity, Biden administration announces. I already know the answer – unplug it! – but I don’t suppose they will accept that as correct. After all, its vital. So hack we must.

Hacker Girl by Ralph Losey

President Biden on AI and Cyber Policy

On July 21, 2023, the day of a big meeting with the White House and leading companies in AI, President Biden delivered a short speech reproduced here on artificial intelligence. Surprisingly, I agree with most everything he says in this excerpt. For more details on the meeting itself and the commitment to regulation the White House managed to obtain, see White House Obtains Commitments to Regulation of Generative AI from OpenAI, Amazon, Anthropic, Google, Inflection, Meta and Microsoft (August 1, 2023, e-Discovery Team).

President reading prepared statement on AI, 7/21/23

For more background on President Biden’s call for AI black hats to pen-test AI, see VEGAS BABY! The AI Village at DEFCON Sponsors Red Team Hacking to Improve Ethics Protocols of Generative AI.

The government leaders in attendance of DefCon 31 pleaded for hackers in many different seminars to try to break the alignment protections that AI software companies have created. This is a relatively new, and a very concerning problem that surprised people with the release of ChatGPT-3.5 then 4.0. The top feds also asked for hackers help to find and fix vulnerabilities and bugs in all types of software. They have done this in the past, but in very low key manner.

Fake Photo of Joe Biden wearing a black hat using Midjourney

Top feds leaders attending DefCon 31 spoke openly of the government’s work in cybersecurity defense and regulatory policy, but at the same time, were careful not to reveal classified secrets. I could see them struggling with this tension at times. The feds of all agencies were also blatant in their recruiting efforts, to try to get in the DefCon community to work for them. The feds, including especially the many DOD related agencies, understand the urgency of the need for skilled Hacker experts to protect the free world from constant, ongoing cyber attacks.

If hackers find and report these bugs, the software can be fixed before criminals and foreign governments use the vulnerabilities against us. These hacker investigations are needed to find and fix the flaws. It is hard, distasteful work, but needs to be done.

AI Bug Catching Hacker Finds a Big One, by Ralph using Midjourney

Federal Government Leaders at DefCon Policy Events

President Biden’s invite to hackers was echoed in the opening ceremonies in a low key way by DefCon’s founder, Jeff Moss, aka Dark Tangent (much more on Jeff later) and with more enthusiasm by the Secretary of Department of Homeland Security, who joined Jeff on stage to kick things off. Secretary Mayorkas, a Cuban refugee, has had a distinguished career as a criminal prosecutor and U.S. attorney in Los Angeles. He moved to Washington D.C. to take on a number of roles in the Obama administration, ending with Deputy Secretary of Homeland Security. He is not a cyber expert, and seemed a little uncomfortable at DefCon, but he knows the tremendous dangers of America’s extensive cyber vulnerabilities. He too asked for help from the black hats.

Alejandro Mayorkas, official portrait with Ralph’s photoshop Ai of flag and black hat

The two seeming polar opposites, Jeff Moss and Alejandro Mayorkas, opened DefCon 31 by announcing that the Fed’s existing “Hack DHS” bug bounty program would not only continue, but would expand its focus to include artificial intelligence. Mayorkas went on to say he was “very concerned” about potential cybersecurity, civil rights and privacy issues related to generative AI. “You see things we do not see, you discover things, and we need your help.” A lot of truth there. The DefCon hackers are among the best in the world at finding software vulnerabilities.

The DHA and DOD agencies, just like most large corporations, have an obvious recruiting problem with cyber experts. There are now thousands of unfilled vacancies. See eg., How DoD is thinking ‘outside the box’ to solve its cyber workforce challenges (Breaking Defense, 8/22/23). Sending the top brass to recruit at DefCon is about as far outside of the box as you can get for federal recruiting, although it has been going on for years in quiet mode, with some small success. DefCon hackers are, after all, a largely crazy, punkish counter-culture group.

Photo by Ralph using Midjourney “camera” at DefCon of typical hackers

I have talked to ethical hackers who look for vulnerabilities for a living, red hats doing penetrating testing. Many think the red team community should not be asked to help the government find bugs without getting fair payment for their work. To make this new government pitch work, the bug bounty cash awards and conditions need to be real and doled out to all the little guys as well, not just the big corporate teams. Although the government talks a big game now, in fact, in the first year of the Hack DHS bug bounty program the DHS only paid out $125,600 total. Whoopie Doo! The U.S. spends about a Trillion Dollars on defense and security every year. The $125,660 spend for bug bounties is just a little over $1,000 per vulnerability found, substantiated, and reported for errors. This is a pittance considering the skilled time required, and the fact a red teamer only gets paid if something that qualifies as a covered error is found. They deserve better pay. No one wants the red hats to go to the dark side and sell the bugs they find to the evil back hats. The money and glamour can be alluring.

Fantasy Black Hat Girl Photo image by Ralph using Midjourney

The corporations who make all of the defective software tested should pitch in and pay big supplements to the government program and add many more private bounty programs. Many corporations already have such programs, and they should be greatly increased. Let’s make the total public and private bug bounty program actual rewards at least $125 Million a year, not thousand, then we will see better results. The security of the free world will improve.

I heard grumbling from the hacker ranks about unfair exploitation of their time and skills. We really do not want these skilled workers selling out to true black hat criminals and terrorists, including foreign adversaries. Remember Vladimir Putin’s famous victory statement in 2017: “Artificial intelligence is the future not only for Russia, but for all humankind. Whoever becomes the leader in this sphere will become the ruler of the world.”  Artificial Intelligence and Great Power Competition, With Paul Scharre, (Council on Foreign Relation), 3/28/23 . Putin has already hacked one election, don’t let him hack another. Unethical AI bots and social media easily combine to make powerful propaganda.

Putin wants to steal our AI, then hack and use it to conquer the world. Ralph’s Midjourney photo image,

Insecure by Negligent Design

Another important policy seminar to mention was called CISA/ONCD Secure by Design. It was led by Jen Easterly, who is known as CISAJen on Twitter (aka X). She was also part of the opening remarks with her boss, Alejandro Mayorkas. Jen gets high points from us for her talks and for her cyber cool look. Jen, in addition to being a hands-on and policy cyber expert, is also a big proponent of mental health. That is one reason she went public on Twitter recently regarding her brother’s recent suicide. Very sad and compelling motivation for her charity work in this field.

Jen Easterly, Photoshopped image by Ralph using AI

On that side-point, note that throughout DefCon 31 there were signs with a phone number for help and support of any kind, including twice a day Friends of Bill meetings. The hacker community was well protected by the hundreds of friendly, albeit sometimes crazy looking, men and women, called “Goons.” They provided security and host services, answering all questions with a caring smile. There was no violence at this 24,000 plus event. All was peace and calm at Caesar’s Convention Forum, far safer than the Caesar’s Palace Hotel itself where some of us had coughed up big bucks to stay. The Casino was loud, smoke filled, over priced with big gamblers and a few scantily dressed women. Typical Vegas. Not many DefCon type punk nerds hung out at Caesar’s Hotel. They mostly stayed in the convention area or cheaper nearby hotels. Next time I’ll do that too, as I’d rather just hang out with them and avoid the gambling fools.

Hacker girl standing out at DefCon crowds. Photo by Ralph using Midjourney camera

Back to the CISA/ONCD Secure by Design policy seminar. First, here is a translation of the acronyms and explanation of the title. The acronym ONCD stands for the Office of the National Cyber Director. This is the White House Office that advises the President on cybersecurity policy and strategy. Kemba Walden is the Acting National Cyber Director of the ONCD.

Photoshopped beta AI version of Kemba Walden

Acting ONCD Director Kemba Walden is a lawyer, formerly with Microsoft, digital crimes unit, so I bet she is good at recruiting all the hackers who got away. Here are a few video takes of her interview by the Dark Tangent himself, Jeff Moss, in another seminar, where, not surprisingly, Kemba distinguished herself well.

Kemba Walden interview by Jeff Moss at DefCon 31, video by Ralph Losey

Back again to never ending fed acronyms, CISA stands for the federal Cybersecurity and Infrastructure Security Agency, the group at Homeland Security that Jenn Easterly leads. Secure By Design is a key program of the CISA, which is more fully described by a series of government articles here. The policy discussion concerned possible regulation of software design to require companies, like Microsoft, just for instance (but really they are all insecure), to design their technology so that it is more secure. The same design problems also apply to hardware, and to Internet providers and the internet infrastructure itself. We are in a real cybersecurity mess right now. Everyone is getting hacked and put to significant extra security expenses. Hackers and cyber lawyers at DefCon probably know more about this than anyone.

Hacker lawyer at DefCon, Ralph Photo using Midjourney

It may seem incredible, but this design imperative for the security of computer products, is not, like cars, legally required by manufacturers. Profit motivates tech companies, not your safety. The only exceptions are companies who sell add-on security software and services. Cyber security is not part of the tech bro culture, the make it and sell it fast, get rick quick kids. Big tech is able to maximize profits by not designing everything from the ground up for security. Instead, they do what shareholders and consumers both want, they design tech for consumer convenience. There are many reasons security is not as high a priority as it should be, including tech’s near immunity from liability for damages caused by its defects. The clickthrough license agreements and laissez-faire laws have over-protected them for decades.

This explains why the proposed safety regulations in Security By Design are controversial in Big Tech. Still, individual hackers at DefCon seemed open to the idea of putting it to the Man. You might wonder why, since in the long run safe by design, might cut into their income. They earn a living by fixing the never ending spew of bad code that tech bros make. But, that’s a speculative long term consequence. In the here and now there is plenty of work for them to do. Sure, they want greater pay, especially for volunteer find a bug work, but the job market now is good for employees. The job shortage in cybersecurity is real. Plus, hackers are a skeptical bunch. They doubt the new government’s algorithmic safety policies will create real results. Just government talk, they think. I hope they are wrong.

Ralph photo using Midjourney of typical hackers in crowd at DefCon

The policy discussions in CISA/ONCD Secure by Design pertained to these issues, but not for long. Most of the time was devoted to providing attendees an opportunity to make written comments to the draft regulations CISA is now working on. This seminar was swamped, with insufficient seats and pens. Revisions had to be made old school, on paper. I can only imagine how many of the hackers in the policy village were actually lobbyists scribbling away, not real hackers at all. We did not attend this event, but could watch it later.

We missed it primarily for scheduling reasons, not to avoid the funny use of dead trees at DefCon. We wanted to AI compete, not meet, and these many seminars overlapped. Typically DefCon would have five or six seminars and classes going on at the same time, not to mention the hundreds of competitions and demonstrations, etc. There were many complaints about that. More logistics criticisms at the conclusion of the DefCon Chronicle series.

Conclusion

Bottom line, security for many software and hardware manufacturers is an afterthought. All too often when software safety is mentioned – “bug-free, safe software” – it is just a bogus marketing claim, a big lie. The inherent flaws in software code are well known in the hacker community, and are, in fact, the basis for the whole thriving cybersecurity industry. (In fairness, user errors and vulnerability to social engineering are also a leading cause of cyber vulnerabilities.) The government needs hacker help to alert the fixers of these problems.

Guessing this Hacker at DefCon is a fed, maybe NSA’s top recruiter? Midjourney photo.

Time will tell if this new White House effort to make cyber safe will succeed. If not, you can count on the attacks to continue. The bad guys like Putin and his puppets will continue to use our own stupidity and greed against us. I for one hope that idealism wins out before we start having more planes mysteriously fall from the sky and other engineered disasters.

Dictators Cyber Attack Us Daily, Image by Ralph using AIs

For background on the cyber war underway and the rush for Ai code superiority, see Ben Buchanan  and Andrew Imbrie‘s new book, The New Fire: War, Peace and Democracy in the Age of AI. These are Georgetown scholars now helping the White House as advisors. I highly recommend their book and hope to do a more detailed review of it later. It is a must read to understand the global politics of AI and cyber. I particularly like the general analysis of the three groups in AI tech, the Evangelists, the Cassandras and the Warriors, as well as the explanation of AI as the new Fire having three sparks: data, algorithms and computing power. It is a good framework to help anyone understand the fast changes now underway and the opportunities, dangers and politics involved.

Cover of Buchanan & Imbrie’s Book

For good background reading on hackers and the inherent insecurity of code and the internet today, see Fancy Bear Goes Phishing (5/23/23, Farrar, Straus and Giroux) by Scott Shapiro. This is another great book, which I highly recommend. I especially liked his carefully researched, beautifully written re-telling of five of the most famous hacks in history. Scott is a Professor of Law and Philosophy at Yale and was a presenter with Jenn Easterly at another DefCon Policy seminar called Lions and Tigers and Fancy Bears, Oh My!: A Cautionary Tale for our Cyber Future. This is another seminar that I wanted to attend, but could not due to logistics. DefCon31 described the seminar as a discussion on “how best to understand the challenge of information security; what we can learn from looking back; and how the decisions we make today to prioritize security by design will shape our future.” I hope to do a more in-depth book review soon.

Cover of Scott Shapiro’s Book

Stay tuned for the next episode of the DefCon Chronicles, coming soon!

Ralph Losey Copyright 2023. — All Rights reserved


DefCon Chronicles: Where Tech Elites, Aliens and Dogs Collide – Series Opener

August 21, 2023

From Boris to Bots: Our First Dive into the DefCon Universe. This begins a series of blogs chronicling the infamous DefCon event in Las Vegas. The next installment will cover President Biden’s unprecedented request for hackers to attend DefCon to hack AI, and the hackers enthusiastic response, including reporter-AI-hacker Ralph Losey, to break existing AI software in an open contest. In addition, nearly all of the top cybersecurity leadership of the White House and Department of Homeland Security personally attended DefCon, including the Homeland Security Department Secretary himself, Alejandro Mayorkas. They came to help officially open the conference and stayed to give multiple policy statements and answer all hacker questions. It was a true breakthrough moment in cyber history.

Boris seems unimpressed by his official DefCon Dog award

I attended DefCon 31, on August 10-15, 2023, as independent Press, accompanied by my co-reporter daughter, a former lobbyist with an English Lit background, and her dog, Boris. Our press status with special green badge had a high price tag, but it gave us priority access to everything. It also facilitated our interaction with notable figures, from the White House Science Advisor, Arati Prabhakar, to DefCon’s enigmatic founder, Dark Tangent.

DefCon is the world’s largest tech hacker “conference” – more like a inter-dimensional portal at the Caesars Forum. When we first checked in, we happened to meet the leader of DefCon Press and P.R. She fell for little Boris in a handbag, and declared him the official DefCon 31 dog! What an honor. Way to go Boris, who everyone thinks is a Chihuahua, but is really a Russian Terrier. Nothing is as it seems at DefCon. The guy you see walking around in shorts, who looks like a bearded punk rocker, may actually be a senior NSA fed. We will tell you why the NSA was there later in this series.

At DefCon, we immersed ourselves in a diverse crowd of over 24,000 elite tech experts from across the globe. This included renowned names in Cybersecurity, notably the formidable red team professionals. Most of these hackers are law-abiding entrepreneurs, as well as members of top corporate and federal red and blue teams. Several thousand were there just to answer President Biden’s call for hackers everywhere to come to DefCon to compete to break AI. Such a request had never been made before. Much more on this later, including my joining in the AI competition.

The tech experts, hackers all, came together for the thirty-first year of DefCon. We were drawn to participate, and in our case, also report on, the hundreds of large and small lectures and other educational events, demonstrations and vendor exhibitions. In addition, the really big draw was, as usual, the dazzling array of hacker challenges and competitions. Some of these are quiet serious with major prizes and rep at stake, and required pre-qualifications and success in entry rounds. But most were open to all who showed up.

Picture walking into a football stadium, but in place of athletes, you’re surrounded by the world’s tech elite, each donning distinctive hacker attire. As we flooded in by the thousands, it was a blend of seasoned pros and enthusiastic fans. I counted myself among the fans, yet I eagerly took on several challenges, such as the AI red team event. The sheer diversity and expertise of all participants was impressive.

The entrance boasted a towering, thirty-foot neon sparkling mural that caught my eye immediately. I’ve refined the photo to focus on the mural, removing the surrounding crowds. And, just for fun, there’s an alien addition.

Ralph entering Defcon 31

The open competitions came in all shapes and sizes: hacker vs. computers and machines of all types, including voting machines, satellites and cars; hacker vs. hacker contests; and hacker teams against hacker teams in capture the flag type contests. An article will be devoted to these many competitions, not just the hacker vs. AI contest that I entered.

There was even a writing contest before the event to compete for the best hacker-themed short story, with the winner announced at DefCon. I did not win, but had fun trying. My story followed the designated theme, was set in part in Defcon, and was a kind of sci-fi, cyber dystopia involving mass shootings with AI and gun control to the rescue. The DefCon rules did not allow illustrations, just text, but, of course, I later had to add pictures, one of which is shown below. I’ll write another article on that fiction writing contest too. There were many submissions, most were farther-out and better than my humble effort. After submission, I was told that most seemed to involve Ai in some manner. It’s in the air.

Operation Veritas - short story by R. Losey
Illustration by Ralph for his first attempt at writing fiction, submitted for judging in the DefCon 31 writing competition.

So many ideas and writing projects are now in our head from these four days in Vegas. One of my favorite lectures, which I will certainly write about, was by a French hacker, who shared that he is in charge of cybersecurity for a nuclear power plant. He presented in a heavy French accent to a large crowd on a study he led on Science Fiction. It included statistical analysis of genres, and how often sci-fi predictions come true. All of DefCon seemed like a living sci-fi novel to us, and I am pretty sure there were multiple aliens safely mingling with the crowd.

We provide this first Defcon 31 chronicle as an appetizer for many more blogs to come. This opening provides just a glimpse of the total mind-blowing experience. The official DefCon 31 welcome trailer does a good job of setting the tone for the event. Enlarge to full screen and turn up the volume for best affects!

DefCon 31 official welcome video

Next, is a brief teaser description and image of our encounter with the White House Science Advisor, Dr. Arati Prabhakar. She and her government cyber and AI experts convinced President Biden to issue a call for hackers to come to Defcon, to try to break (hack) the new AI products. This kind of red team effort is needed to help keep us all safe. The response from tech experts worldwide was incredible, over a thousand hackers waited in a long line every day for a chance to hack the AI, myself included.

We signed a release form and were then led to one of fifty or more restricted computers. There we read the secret contest instructions, started the timer, and tried to jail break the AI in multiple scenarios. In quiet solo efforts, with no outside tools allowed and constant monitoring to prevent cheating, we tried to prompt ChatGPT4 and other software to say or do something wrong, to make errors and hallucinate. I had one success. The testing of AI vulnerabilities is very helpful to AI companies, including OpenAI. I will write about this is in much greater detail in a later article, as AI and Policy were my favorite of the dozens of tracks at DefCon.

A lot of walking was required to attend the event and a large chill-out room provided a welcome reprieve. They played music there with DJs, usually as a quiet background. There were a hundred decorated tables to sit down, relax, and if you felt like it, chat, eat and drink. The company was good, everyone was courteous to me, even though I was press. The food was pretty good too. I also had the joy of someone “paying it forward” in the food line, which was a first for me. Here is a glimpse of the chill out scene from the official video by Defcon Arts and Entertainment. Feel it. As the song says, “no one wants laws on their body.” Again, go full screen with volume up for this great production,

Defcon 31 Chill Out room, open all day, with video by Defcon Arts and Entertainment, DefConMusic.org

As a final teaser for our DefCon chronicles, check out my Ai enhanced photo of Arati Prabhakar, whose official title is Director of the Office of Science and Technology. She is a close advisor of the President and member of the Cabinet. Yes, that means she has seen all of the still top secret UFO files. In her position, and with her long DOD history, she knows as much as anyone in the world about the very real dangers posed by ongoing cyber-attacks and the seemingly MAD race to weaponize AI. Yet, somehow, she keeps smiling and portrays an aura of restrained confidence, albeit she did seem somewhat skeptical at times of her bizarre surroundings at DefCon, and who knows what other sights she has been privy too. Some of the questions she was asked about AI did seem strange and alien to me.

Arati Prabhakar speaking on artificial intelligence, its benefits and dangers, Photoshop, beta version, enhancements by Ralph Losey

Stay tuned for more chronicles. Our heads are exploding with new visuals, feelings, intuitions and ideas. They are starting to come together as new connections are made in our brains’ neural networks. Even a GPT-5 could not predict exactly what we will write and illustrate next. All we know for certain is that these ongoing chronicles will include video tapes of our interviews, presentations attended, including two mock trials of hackers, as well as our transcripts, notes, impressions and many more AI enhanced photos. All videos and photos will, of course, have full privacy protection of other participants who do not consent, which the strict rules of Def Con require. If you are a human, Ai or alien, and feel that your privacy rights have been violated by any of this content, please let us know and we will fuzz you out fast.

DefCon 31 entrance photo by Def Con taken before event started

Ralph Losey Copyright 2023 (excluding the two videos, photo and mural art, which are Def Con productions).


Surprising Admissions by OpenAI Leaders Made in Recent Interviews

August 8, 2023

OpenAI’s head scientist, Ilya Sutskever, revealed in an interview by a fellow scientist near his level, Sven Strohband, how the emergent intelligence of his neural-net AI was the surprising result of scaling, a drastic increase of the size of the compute and data. This admission of surprise and some bewilderment was echoed in another interview of the CEO and President of OpenAI, Sam Altman and Greg Brockman. They said no one really knows how or why this human-like intelligence and creativity suddenly emerged from their GPT after scaling. It is still a somewhat mysterious process, which they are attempting to understand with the help of their GPT4. The interview was by a former member of OpenAI’s Board of Directors who also prompted them to disclose the current attitude of the company towards regulation. These new interviews are both on YouTube and will be shared here. The interview of Ilya Sutskever is on video, and the interview of CEO Sam Altman, and President, Greg Brockman, is an audio podcast.

Sam Altmann and Greg Brockman, Midjourney image by Ralph

Introduction

In these two July 2023 interviews of OpenAI’s leadership, they kind of admit that they lucked into picking the right model –  artificial neural networks on a very large scale. Their impromptu answers to questions by their peers can help you to understand how and why their product is changing the world. One interview, a pod cast, also provides an interesting glimpse into OpenAI’s preferred approach towards Ai regulation. You really should digest these two important YouTubes yourself, and not ask a chatbot to summarize it for you. What AI is Making Possible (YouTube video of Sutskever) and Envisioning Our Future With AI (YouTube audio podcast).

In the 25 minute video interview of Ilya Sutskever, the chief scientist of Open Ai, you will get a good feel for the man himself, how he takes his time to think and picks his words carefully. He comes across as honest and sincere in all of his interviews. My only criticism is his almost scary serious demeanor. This young man, a former student and successor to the legendary AI pioneer of neural-net and deep learning, Professor Geoffrey Hinton, could well be the Einstein of our day. Time will tell, but see for yourself.

Ralph’s Midjourney image of Ilya Sutskever,

I did manage to catch a brief moment of an inner smile by Ilya in this screen shot from the video. He provides a strong contrast with the nearly always upbeat executive leadership team, who did a podcast last month for Reid Hoffman, a past member of their Board of Directors.

Image of Ilya Sutskever from video What AI is Making Possible with a rare, almost smile

The interview of Ilya Sutskever was in response to excellent questions by Sven Strohband. The Youtube video of the interview Strohban called What AI is Making Possible. Sven is a Stanford PhD, computer scientist, who is now the Managing Director of Khosla Ventures. His questions are based on extensive knowledge and experience. Moreover, his company is a competitor of OpenAI. The answers from the deep thinking of Ilya Sutskever are somewhat surprising and crystal clear.

Ralph’s “Deep Thinking” Midjourney image of Ilya Sutskever

The second interview is of both the OpenAI CEO, Sam Altman, who has often been quoted here, and the President and co-founder of OpenAI, Greg Brockman. Both are very articulate in this late July 2023 audio interview by Reid Hoffman, former OpenAI Board member, and by Aria Finger. They call this podcast episode Envisioning Our Future With AI. The episode is part of Greg Hoffman’s Possible podcast series, also found on Apple podcasts. Greg was part of the initial group raising funds for OpenAI and, until recently, was on its Board of Directors. As they say, he knows where all the skeletons are buried, and got them to open up.

Greg Brockman by Ralph using Midjourney

This one hour interview covers all the bases, even asking about their favorite movie (spoiler – both Sam and Greg said HER). The interview is not technical, but it is informative. Since this is an audio-only interview, it is a good one to listen to in the background, although this is made difficult by how similar Sam and Greg’s voices sound.

Ilya Sutskever Video Interview – What AI is Making Possible

I have watched several videos of Ilya Sutskever, and What AI is Making Possible is the best to date. It is short, only twenty-five minutes, but sweet. In all of Ily’s interviews you cannot help but be impressed by the man’s sincerity and intellect. He is humble about his discovery, admits he was lucky, but he and his team are the ones who made it happen. They made AI real, and remarkably close to AGI, and they did it with a method that surprised most of the AI establishment and Big Tech, they used the human brain’s neural networks as a computer design. Most experts thought that approach was a dead end in Ai research, and would not go far. Surprise, the expert establishment was wrong and Ilya and his team were right.

Everyone was surprised, except for Geoffrey Hinton, who started the deep-thinking, neural net designs. But even he must have been astonished that his former student, Ilya, made the big breakthrough by simple size scaling. Moreover, Ilya did so way before Hinton’s competing team at Google. In fact, Hinton was so surprised and alarmed by how fast and far Ilya had gone with Ai, that he quit Google right after ChatGPT-4.0 came out. Then he began warning the world that Ai like GPT4 needed to be regulated and fast. ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead (NYT, 5/5/23). These actions by Professor Hinton constitute an incredible admission. His protege, Ilya Sutskever, has to be smiling to himself from time to time after that kind of reaction to his unexpected win.

Image of Ilya Sutskever from video What AI is Making Possible

Ilya, and his diverse team of scientists and engineers, are the one’s who made the breakthrough. They are the real heroes here, not the promoters, fund raisers and management. Sam Altman and Greg Brockman’s key insight was to hire Ilya Sutskever and give him the space and equipment needed, hundreds of millions of dollars worth. By listening to Ilya, you get a good sense of how surprised he was to discover that the neural network approach actually worked, that his teachers and inner voice were right beyond his dreams. His significant engineering breakthrough came by “simply” scaling the size of neural network databases and computing power. Bigger was better and led to incredible intelligence. It is hard to believe, and yet, here it is. ChatGPT-4 does amazing things. Watch this interview and you will see what this means.

Greg Brockman and Sam Altman Audio Interview – Envisioning Our Future With AI

The hour long podcast, Envisioning Our Future With AI, interview of Brockman and Altman discusses the same surprise insight of scale, but from the entrepreneurs’ perspective. Listen to the Possible podcast at 16:57 to 18:24. By hearing the same thing from Sam, you get a pretty good idea of the key insight of scale. They are not sure why it works, nobody really is, including Ilya Sutskever, but they know it works, and so Sam and Greg went with it, boldly going where no one has gone before.

Greg Brockman by Ralph using Midjourney

The information Sam Altman and Greg Brockman provide, in their consistently upbeat Silicon Valley voice, pertains to their unique insights as the visionary front men. Their discussion of Ai Regulation is particularly interesting and starts at 18:34. It hints at many discussions the OpenAI Board has had over the years about Ai regulation, including two opposing views about product launch. Below is a excerpt, slightly edited for reading, of this portion of the podcast, starting at 19:07. (We recommend you listen to the full original podcast by Greg Hoffman.)

Midjourney image of a chatbot by Ralph who says: “Go ahead and try to regulate me. I’ll help.”

Question Aria Finger. What would you call for in terms of either regulation or global governance for bringing people in?

Answer by Sam Atlman. I think there’s a lot of anxiety and fear right now . . . I think people feel afraid of the rate of change right now. A lot of the updates that people at OpenAI, who work at OpenAI, have been grappling with for many years, the rest of the world is going through in a few months. And it’s very understandable to feel a lot of anxiety in that moment. 

We think that moving with great caution is super important, and there’s a big regulatory role there.  I don’t think a pause in the naive sense is likely to help that much. You know, we spent . . . somewhat more than six months aligning GPT-4 and  safety testing it since we finished training. Taking the time on that stuff is important. But really, I think what we need to do is figure out what regulatory approach, what set  of rules, what safety standards, will actually work, in the messy context  of reality. And then figure out how to  get that to be the sort of regulatory posture of the world. (20:32)

Lengthy Talking Question Follow-up by Reid Hoffman (former OpenAI Board member). You know, when people always focus on their fears  a little bit, like Sam, you were saying earlier, they tend to say, “slow down, stop,”  et cetera. And that tends to, I think, make a bunch of mistakes. One mistake is we’re kind of supercharging a bunch of industries and, you know, you want that, you want the benefit of that supercharging industry. I think that another thing is that one of the things we’ve learned with larger scale models, is we get alignment benefits. So  the questions around safety and safety precautions are better in the future, in some very arguable sense, than now. So with care, with voices, with governance, with spending months in safety testing, the ultimate regulatory thing that I’ve been suggesting has been along the lines of being able to remediate the harms from your models. So if something shows up that’s particularly bad, or in close anticipation, you can change it. That’s something I’ve already seen you guys doing in a pre-regulatory framework, but obviously getting that into a more collective regulatory framework, so that preferably everywhere in the world can sign on with that, is the kind of thing that I think is a vision. Do you have anything you guys would add to that, for when people think about what should be the way the people are participating?

Answer by Sam Altman (22:04). You touched on this, but to really echo it, I think what we believe in   very strongly, is that keeping the rate of change in the world relatively constant, rather than, say, go build AGI in secret and then deploy it all at once when you’re done, is much better. This idea that people relatively  gradually have time to get used to this incredible new thing that is going to transform so much of the world, get a feel for it, have time to update. You know, institutions and people do not update very well overnight. They need to be part of its evolution, to provide critical  feedback, to tell us when we’re doing dumb mistakes, to find the areas of great benefit and  potential harm, to make our mistakes and learn our lessons when the stakes are lower than they will  be in the future. Although we still would like   to avoid them as much as we can, of course. And I  don’t just mean we, I mean the field as a whole, sort of understanding, as with any new technology, where the tricky parts are going to be. 

I give Greg a lot of credit for pushing on this, especially when it’s been hard. But I think it is The Way to make a new technology like this safe. It is messy, it is difficult, it means we have to say a lot of times,  “hey, we don’t know the answer,” or, “hey, we were wrong there,” but relative to any alternative, I think this is the best way for society. It is the best way not only to get the safest outcome, but for the voices of all of society to have a chance to shape us all, rather than just the  people that, you know, would work in a secret lab.

Answer by Greg Brockman (23:51). We’ve really grappled with this question over time. Like, when we started OpenAI, really thinking about how to get from where we  were starting, which was kind of nothing in a lot of ways, to a safe AGI that’s deployed, that actually benefits all of humanity. How do you connect those two? How do you actually  get there? I think that the plan that Sam alludes to, of you just build in secret, and then you deploy it one day, there’s  a lot of people who really advocate for it and it has some nice properties. That means that  – I think a lot of people look at it and say, “hey there’s a technical safety problem of making sure the AI can even be steered, and there’s a  society problem. And that second one sounds really hard, but, I know technology, so I’ll just focus on this first one.” And that original  plan has the property that you can do that.  

But that never really sat well with me because I think you need to solve both of these  problems for real, right? How do you even know that your safety process actually worked. You don’t want it to be that you get one shot, to get this thing right. I think that there’s still a lot to learn, we’re still very much in the early days here, but this process that we’ve gone through, over the past four or five years now of starting to deploy this technology and to learn, has taught us so much.

We really weren’t in a position three, four years ago, to patch issues. You know, when there was an issue with GPT-3, we would sort of patch it in the way that GPT-3 was deployed, with filters, with non-model level interventions. Now we’re  starting to mature from that, we’re actually able to do model level interventions. It is definitely the case that GPT-4 itself is really critical in all of our safety pipelines. Being   able to understand what’s coming out of the model in an automated fashion, GPT-4 does an excellent job at this kind of thing. There’s a lot that we are learning and this process of doing iterative deployment has been really critical to that. (25:48)

Excerpt Envisioning Our Future With AI (slight editing for clarity) from 19:07 to 25:48.

“Possible” podcast interview of Sam Altman and Greg Brockman by Greg Hoffman.

Conclusion

Scaling the size of data in the LLM, and scaling the size of the compute, the amount of processing power put into the Neural-Network, is the surprising basis of OpenAI’s breakthrough with ChatGPT4. The scaling increase in size made the Ai work almost as good as the human brain. Size itself somehow led to the magic breakthrough in machine learning, a breakthrough that no one, as yet, quite understands, even Ilya Sutskever. Bigger was better. The Ai network is still not as large as the human brain’s neural net, not even close, but it is much faster, and like us, can learn on it own. It does so in its own way, taking advantage of its speed and iterative processes.

Human Brain Neurons

Large scale generative Ai now has every indication of intelligent thought and creativity, like a living human brain. Super-intelligence is not here yet, but hearing OpenAI and others talk, it could be coming soon. It may seem like a creature when it comes, but remember it is still just a tool, even though it is a tool with intelligence greater than our own. Don’t worship it, but don’t kill it either – Trust but Verify. It can bring us great good.

Verification requires reasonable regulations. The breakthrough in AI caused by scaling has impacted the attitude of Open AI executives and others towards current Ai regulation. As these interviews revealed, they want to get the input and feedback from the public, even messy critical input and complaints. This input from hundreds of millions of users provide information needed for revisions to the software. It allows the software to improve itself. The interviews revealed that GPT4 is already going that. Think about that.

OpenAI did not want to work in secret and then have super-intelligent, AGI level software, suddenly released, or worse escape, and stun the world. It would be the same level of public shock and disruption as flying saucers landing in Washington.

No one wants secret labs in a foreign dictatorship to do that either (except of course the actual and would be despots). The world needs a few years of constant but manageable change to get ready for the Singularity. Humans and our institutions can adapt, but we need some time. People will eventually get used to super-intelligent Ais and adapt to them. The Ai tech companies also need a few years to make course corrections and to regulate without stopping innovation. For more on these competing goals, and ideas on how to balance them, see the newly restated Intro and Mission Statement of AI-Ethics.com and related information at the AI Ethics web.

Balance is the way, and messy ad hoc processes, much like common law adjudication, seems to be the best method to walk that path, to find the right balance. At least, as these interview of Altman and Brockman revealed, that is the conclusion that OpenAI’s management has reached. There may be a bias here, but this process, which is very familiar to all attorneys and judges, seems like a good approach. This solution to Ai also means that the place and role of attorneys will remain important for many years to come. This is a trial and error, malleable, practice approach to regulation, a method that all litigation attorneys and judges in common law jurisdictions are very familiar with. That is a pleasant surprise.

Ralph Losey Copyright 2023 (does not include the quoted excerpts, nor the YouTube videos and podcast content)


%d