The Insights of Neuroscientist Blake Richards and the Terrible Bad Decision of OpenAI to Fire Sam Altman

Blake Richards is a rare academic with expertise in both computer science and neurology. He is an Associate Professor in the School of Computer Science and the Montreal Neurological Institute-Hospital at McGill University and also a Core Faculty Member at Mila, a community of more than 1,000 researchers specializing in machine learning. Unlike his legendary mentor, Professor Geoffrey Hinton, and the Board of OpenAI, Blake does not fear AI advancing too rapidly. To the contrary, he thinks the greater danger lies in old and current levels of AI. He thinks the hysteria about advanced artificial general intelligence is misplaced.

Image of Blake Richards in a neurocybernetic lab by Ralph.

Many now contend this fear of AI advancing too rapidly is the real reason Sam Altman was fired. The fearful board, including Sam’s friend, Ilya Sutskever, thought that Sam and Greg Brockman were moving too fast. Professor Richards believes these “safety” concerns are ultimately based on bad science, namely misunderstandings about evolution and natural selection. Professor Richards thinks, and I agree, that the greater danger is to continue with our current levels of mediocre AI. We should encourage perfection of advanced intelligence, not fear it.

Dumb AI makes more mistakes. Image by Visual Muse.

My last article on the chief scientist of Google’s Deep Mind supports the conjecture that Artificial General Intelligence (“AGI”) is coming soon. Shane Legg’s Vision: AGI is likely by 2028, as soon as we overcome AI’s senior moments. If Sam Altman was just fired because he favored release of AGI levels of AI, the rumored ChatGPT5 and beyond, then OpenAI has made a terrible mistake. Its scientists will continue to leave in droves and, unless Microsoft can save the day, OpenAI will now die in a circular firing squad of AI fears.

Corporate Circular Firing Squad images by Visual Muse.

Open AI’s competitors should use the current implosion of the company as an opportunity to quickly catch up. We need science to keep progressing, not fear mongering, go-slow regulators. Society needs the help of AGI, and beyond that, the help from superintelligence and The Singularity.

This article on Blake Richards opinions is based on an interview he recently gave to fellow AI neuroscientist Jon Krohn and my current readings about the shocking decision of Open AI to fire Sam Altman. The in-depth Q&A interview reveals Richards’ multifaceted view of intelligence, both artificial and natural, brain neurology, evolution and AI enhanced work. Great questions were asked by Dr. Krohn in the interview, which can be found on YouTube, Super Data Science Channel, Episode 729. It is entitled Universal Principles Of Intelligence (Across Humans and Machines). I highly recommend you watch the video. Jon Krohn’s entire Super Data Science series is pretty amazing and I look forward to more study of his ongoing, free instruction.

Metadata About This Article

This article is my first experiment with using AI to do a significant portion of the writing. I created a new GPT-4 program to do this, e-Discovery Team Writer. It is pretty easy to build your own programs now with GPT4, and no, my programs are not for sale. GPT level four is not good enough at writing for me to want to put my name on it. Of course, I also checked everything written for accuracy, plus the introduction on OpenAI firing of Altman, and this metadata section, were written entirely by me. Also, I spent a substantial amount of time editing the rest and and providing my own analysis. The e-Discovery Team Writer GPT does not have my style down yet and is not much of an original thinker. In general GPT4 based writing programs are not as good as any experienced human writers. They are still a long way from full human intelligence. For instance, GPT4 is incapable of subtle humor in writing. It can only tell stupid jokes, such as create a joke about AI not having a sense of humor. This was its best result out of three tries: “Why did the AI refuse to laugh at the comedian’s jokes? Because it was too busy analyzing the syntax and missed the pun-tuation!” It takes Dad jokes to a new low.

Robot telling bad dad jokes. Image in Cartoon Style by Visual Muse.

Perhaps when and if GPT5 is ever released, or some other replacement company to Open AI puts out something equivalent, then its intelligence as a legal technology writer may reach human level. Maybe it could even add self-effacing humor. It took me years to learn that, so I kind of doubt it. I do hope these AIs get better soon. I am already sick of these low IQ, human writer wannabes. When AI gets smarter, maybe then my involvement with blog writing could be limited to the more fun, creative aspects. Still, use of e-Discovery Team Writer did save some time and led to a new style of hybrid writing, for instance, it sometimes uses words that I never did, plus multiple paragraph headings. Please let me know what you think.

I used another GPT-4 application on this article that I created for blog illustrations, Visual Muse. I used it in my last blog too, Shane Legg’s Vision: AGI is likely by 2028, as soon as we overcome AI’s senior moments. This article on Blake Richards ideas builds on the concepts discussed in the Shane Legg article. Ideally, they should be read together. Legg and Richards are two of today’s shining lights in AI research. Studying their work leaves me confident that AGI is coming soon, as Legg predicts, and OpenAi’s board apparently fears. I may even live long enough to plug into The Singularity created by superintelligent computers that should follow. Now that should really be something! Satori anyone?

Seniors plugged into Singularity Superintelligence images using Sci-Fi styles.

Different Types of Intelligence

Beyond a Unitary Definition. Richards contends that intelligence cannot be confined to a singular definition. He emphasizes that different forms of intelligence are defined by varying norms and criteria of what is deemed good or bad. Intelligence, according to Richards, is fundamentally the ability to adhere to certain norms. This notion extends beyond cognitive capabilities to encompass behavioral norms vital for survival, societal functioning, and goal achievement. This perspective is pivotal in understanding the complexity of intelligence as it applies not just to humans, but also to AI systems. Here is how Richards explains it.

I think it’s worth noting that I don’t think that there is necessarily a unitary definition of intelligence. I
am a firm believer in the idea that there are different types of intelligence, but the thing that defines different types of intelligence are essentially different norms, different definitions of what is good or bad. How I’m tempted to define intelligence is to say, once you receive some kind of norm, something that says this is what’s desired, this is undesired, then intelligence is the ability to adhere to the norm. When we talk about an intelligent system, we’re talking about a system that is somehow capable of adhering to some norm, whatever that norm may be.

YouTube Interview at 10:30.

AI and Human Norms: Adhering to Expectations. A key aspect of Richards’ argument lies in the relationship between AI and human norms. He suggests that AI, particularly in its most advanced forms, is about adhering to norms akin to those of humans. This adherence isn’t just about accomplishing tasks but also involves understanding and integrating into human societal structures. The ability of AI to fulfill requests within a human organizational or societal context becomes a measure of its intelligence.

Evaluating AI Progress: Metrics and AGI. Richards approaches the evaluation of AI’s progress with a focus on metrics that represent the norms AI is designed to follow. These metrics, often in the form of datasets and benchmarks, help in assessing how well AI systems perform specific tasks. However, when discussing Artificial General Intelligence (AGI), Richards expresses skepticism about its measurability. He argues that intelligence is multifaceted, and AGI may be better understood as a collection of competencies across various metrics rather than a singular, overarching capability.

The Question of AGI: A Multifaceted View. Despite his reservations about AGI as a unitary concept, Richards remains optimistic about AI systems improving across a broad range of metrics. He likens this to human intelligence, where different skills and abilities contribute to a general sense of intelligence. Richards envisions AI systems that excel not just in singular tasks but across multiple domains, akin to human capabilities. Again, here are Richards own words explaining these important insights.

I don’t actually believe in artificial general intelligence, per se. I think that intelligence is necessarily a multifaceted thing. There are different forms of intelligence. Really when we’re talking about measuring artificial general intelligence, I think it’s almost impossible. What you can do is you can have a huge collection of different metrics that you apply. You can ask for the oodles and oodles of different metrics we have, how does this system perform across all of them? We might be then willing to say that you get closer to something like artificial general intelligence the more and more of these metrics you see improvements on across the board.

Certainly I think that’s not unreasonable. In the same way that we would say that a human being is generally intelligent if they can successfully pass the SATs well and successfully, I don’t know, write an essay that gets a positive response from the general public, or who knows what metrics you want to apply. You could have all sorts of different metrics that you apply to a person. Likewise, you could do the same to an AI. If they do well in it, you’d say it’s more generally intelligent. But I don’t think there’s any way to measure the broader concept of artificial general intelligence as a unitary idea from super
intelligence. I think that doesn’t actually even exist.

I don’t fully believe even in the concept of AGI, but here’s what I will say. I have optimism that we will see artificial intelligence systems that get better and better across a broad swath of these metrics, such that you no longer have a system that can only do one of the metrics, can only recognize images, but systems that can recognize images, write poetry, whatever you want, of the sort of metrics that we would be inclined to measure them on. Now, the reason I’m optimistic in that front is simply the data that I’ve received so far, which is that we’ve seen the models get better and better across broad swaths of metrics.

YouTube Interview at 17:00.

Optimism for AI’s Multidimensional Growth. Blake Richards provides a strong argument that reshapes traditional views of intelligence. His emphasis on norms and multifaceted competencies offers a new perspective on evaluating both human and artificial intelligence. While cautious about the concept of AGI, Richards’ overall optimism for AI’s potential to evolve across a broad spectrum of tasks is a consistent with his understanding of intelligence. His insights serve as a guidepost in this journey, encouraging a holistic, multi-dimensional view of intelligence in both humans and machines.

Traditional Scientific Drawing image of multidimensional intelligence.

Beyond Biomimicry

The Role of Functional Mimicry in AI’s Evolution. In the quest to enhance artificial intelligence, the concept of biomimicry — replicating biological processes — often emerges as a topic of debate. Blake Richards offers a nuanced perspective on this. He distinguishes between low-level biological mimicry and functional mimicry, arguing for the latter as a critical component in advancing AI.

Biomimicry vs. Functional Mimicry in AI Development. Richards posits that replicating the human brain’s low-level biology is not essential for creating AI systems that perform comparably or superiorly to humans. Instead, he emphasizes the importance of functional mimicry, which focuses on replicating the brain’s capabilities rather than its exact biological processes. This approach prioritizes capturing the essence of how the brain functions, adapting these capabilities into AI systems.

The Critical Role of Episodic Memory. A key example Richards uses to illustrate functional mimicry is episodic memory. Current large language models are very weak in this capability, which involves storing and recalling personal experiences, complete with sensory details and contextual understanding. This was discussed in the last article, Shane Legg’s Vision: AGI is likely by 2028, as soon as we overcome AI’s senior moments. I compared generative AI’s poor memory forgetfulness with “senior moments” in humans. It happens to people of all ages, of course. You have to laugh when you walk into a room and cannot recall why? It usually comes to you soon enough.

Caricature image art style of a guy about my age trying to remember why he walked into a room.

Richards argues that for AI to reach human-level performance across a wide range of tasks, it must have episodic memory, albeit not necessarily through the same cellular mechanisms found in the human brain. Here are Richards words on these key points of memory and biomimicry.

I think if you’re asking the question with respect to low level biology, the answer is no. We don’t need the
biomimicry at all. I think what is important is a sort of functional mimicry. There are certain functions that the brain can engage in, which are probably critical to some of our capabilities. If you want an AI system that can do as well as us more broadly, you need to give them these capabilities.

An example that I like to trot out often is episodic memory. One of the things that’s missing from current large language models, for example, is an episodic memory. Episodic memory refers to those memories that we have of our own lives, things that have happened to us, and they include details about the sensory experiences that we had, exactly what was said, where we were when it happened, et cetera. Those episodic memories are critical for our ability to really place the things that have happened to us in a specific place in a specific time, and use that to plan out the right next steps for achieving the goals we have in our life.

I think that it is assuredly the case that for large language models to get to the point where they can be as performant as human beings on as wide a range of tasks, you’re going to need to endow them with something like an episodic memory. Will it need to look like the specific cellular mechanisms for episodic memory that we have in the human brain? No, I think not. But I think that the broad functional principle will have to be there.

YouTube Interview at 21:18.

Episodic memory is also talked about in AI in terms of the mechanisms involved: “back propagation” and “long term credit assignments” for “reinforcement learning.” Richards explains this in a different interview, as something our brain can do quite well to provide us with interim episodic memory, but AI cannot do at all. It can only remember on very short terms. So perhaps, it would be a better analogy to say that an AI could remember on the short term why it went into a room, but not on a long term as to what is in the room, or what it looked like a month ago. See: Blake Richards—AGI Does Not Exist, YouTube interview on The Inside View at 1:01:00 – 1:04:30 (recommend the whole video).

Historical Evidence in AI’s Progress. Richards goes on to reflect on the history of AI development, noting that significant advancements have often resulted from capturing specific brain functionalities. He cites examples like the invariance properties of the visual system and attention systems in AI. These functionalities, critical in human cognition, have been successfully adapted into AI, not through direct biological mimicry but by understanding and replicating their functional principles.

Embracing Functional Principles in AI Evolution. As AI continues to evolve, the focus on functional mimicry may be key towards achieving more sophisticated and human-like capabilities. Let’s just hope AI does not get senior moments with age. Perhaps it will help us to overcome them.

Rethinking AI’s Existential Risks

Evolutionary Approach to AI Coexistence and Safety. Fears regarding the existential risks remain high with many calling for a halt to development. Blake strenuously disagrees based on his understanding of evolutionary biology and ecology. He thinks that fears of AI becoming a dominant, competing force against humanity arise from a misunderstanding of natural selection and species interactions. He believes that cooperation guides evolution most of the time, not competition.

Cooperative evolution image in futuristic digital art style.

This is an important new insight in the field of AI. Here is long excerpt of Blakes’ explanation.

The fear is based on a fundamental misunderstanding of the nature of natural selection and how species interactions actually work. And I think that’s in part due to the fact that most of the people saying these things, with all due respect to all of my colleagues, are people coming from a pure computer science background, who don’t actually know very much about biology and ecology and who don’t really understand fully how natural selection works. And the reason I say this is, when you look at the actual things that natural selection tends to favor and how evolution works, it’s not about dominance and competition between species. It’s all about finding a niche that works. You will successfully reproduce if you find a niche that actually positions you in a complimentary nature to all of the other species in the environment.

So generally speaking actually, competition and dominance are the exception to the rule in natural selection, not the key force. Instead, it’s actually mutualism and cooperation and complimentary niches that are what evolution really favors. The only time you have direct competition between two species, where there’s some kind of quest for dominance in the ecosystem, is when the two species really occupy the same niche. They’ve just happened to randomly evolve towards the same niche, and maybe one’s an invasive species or something like that, then you will see competition between the species. And there will be potentially a sort of winner and a loser. But I think the key point there is they have to occupy the same niche.

And this now brings me to why I don’t fear it with AI. AI does not occupy the same niche as human beings. AI is not seeking the same energy inputs. AI is not seeking the exact same raw materials. And in fact, when you look at our relationship to AI systems, we occupy perfectly complimentary niches. We are the critical determinant of most of the resources that AI needs. We’re the ones who produce the electricity. We’re the ones who produce the computer chips, who do all the mining necessary to get the materials for the computer chips, et cetera, et cetera. I could go on with a big long list. I think that the idea that an AI system would ever seek to extinguish us is absurd. Any AI system worth its salt, that is adhering to the norm of survival and reproduction, would actually seek the preservation of the human species above all. And furthermore, I think that what any AI system, that was actually truly intelligent and able to adhere to these norms of survival and reproduction, would do is figure out the best ways to work in a complimentary nature with human beings, to maximize our respective success at achieving our goals. That’s what natural selection and evolution would favor. That’s what an instinct to survival and reproduction would favor. And I think that that’s what we’re going to see in our society. And I’m really pretty confident about that pronouncement.

I think, when we look at humans, I think part of the reason that there’s this assumption that the AI will try to extinguish us all is because there has been a tendency, sometimes in human evolution, for humans to extinguish other species and to overstrain our capacity and not to act in a complimentary way to other species. . . . I think that the key point here is that, if humans continue to behave like this, we will not be
adhering to the norm of our own survival. We will eventually extinguish ourselves, if we continue to act in a non-complimentary nature to other species on earth. And so, that would, arguably, be an example of human stupidity, not human intelligence.

YouTube Interview at 27:50.

I love that last statement. It just goes to emphasize the need for artificial intelligence to quickly get smart enough to supplement our limited natural intelligence, to protect us from our own stupidity. Our danger is not with superintelligent AIs, instead it is with what we now have at baby GPT4 level, which is, as I have argued many times here, still kind of dumb. Here is Richards on this key point.

The possibility of superintelligence is what makes me more confident that the AIs will eventually cooperate with us. That’s what a superintelligent system would do. What I fear more, funnily enough, are dumb AI systems, AI systems that don’t figure out what’s best for their own survival, but which, instead, make mistakes along the way and do something catastrophic.

That, I fear much more. The analogy I always use is with the system in Dr. Strangelove. So in Dr. Strangelove, the nuclear holocaust that occurs is a result of a Russian doomsday device, that will automatically launch all of Russia’s nuclear weapons if Russia’s ever attacked. That’s not a very smart system, that’s not a superintelligence, but it leads to the end of the world, precisely because it’s this overly narrow dumb thing. And that’s actually what I fear much more than a rogue superintelligence.

YouTube Interview at 34:20.
Dr. Strangelove image using combined Sci-Fi, Photo Realistic art styles.

AI Safety: Beyond Fear, Towards Practical Measures. While Richards downplays the existential risks of superintelligent AI, he acknowledges the need for practical safety measures. He advocates for rigorous auditing and regulatory mechanisms akin to those in other industries, suggesting that AI models should undergo independent auditing to ensure their safety and reliability. He suggests this be done by independent auditing agencies, not by government regulations. As Richards put it:

The other option would be some more restrictive regulatory mechanisms implemented by government, that force auditing and various stress testing on models. I think the tough trouble with that is you might start to really impair the nascent AI economy if you take that kind of approach. . . . Like Europe, yes, exactly. And so, I personally wouldn’t advocate for that. I think we should first try these more voluntary auditing mechanisms, that would be driven by the desire to actually have your product be well certified.

YouTube Interview at 37:30

Richards also highlights the importance of legal accountability, especially in high-stakes applications such as self-driving cars, suggesting that companies should be held responsible for the performance and safety of their AI systems.

The Role of AI in Military and High-Risk Scenarios. Richards expresses serious concerns regarding the use of AI in military contexts. He argues that AI should augment, rather than replace human decision-making. This cautious approach stems from the potential for autonomous AI systems to make irreversible decisions in high-stakes scenarios, such as warfare, which could escalate conflicts unintentionally. Here are Richards’s thoughtful remarks.

I don’t know this is going to hold, unfortunately, but in an ideal world, AI systems would only ever be there to supplement human decision making in military applications. It would never be something where an AI is actually deciding to pull the trigger on something. That’s the kind of scenario that actually makes me really worried, both in terms of the potential for, I would hope that no one’s dumb enough to put autonomous AI systems in, say, nuclear chains of command vis-a-vis Dr. Strangelove, but even things like, if you’ve got fighter jets or whatever, that are controlled by autonomous AI, you could imagine there being some situation that occurs, that leads an autonomous AI to make a decision that then triggers a war.

YouTube Interview at 39:30.

A Forward-Looking Perspective on AI and Human Coexistence. Richards provides a compelling argument for why AI is unlikely to pose an existential threat to humanity. Instead, he envisions a future where AI and humans coexist in a mutually beneficial relationship, each fulfilling distinct roles that contribute to the overall health and balance of our shared ecosystem. His views not only challenge the prevailing fears surrounding AI, but also open up new avenues for considering how we might safely and effectively integrate AI into our society.

AI Helper image expressed in an abstract conceptual art style.

Human-AI Symbiosis

Automation, Creativity, and the Future of Work. In an era where the boundaries of AI capabilities are continuously being pushed, questions about the future of work and the role of humans become increasingly important. Blake Richards talks about the implications of automation on humans and emphasizes the importance of generality and diversity in human tasks.

Generality as the Essence of Human Intelligence. Richards identifies generality and the ability to perform a wide range of tasks as defining characteristics of human intelligence. He argues that humans thrive when engaged in diverse activities. He believes this multiplicity is crucial for emotional and intellectual development. This view challenges the trend toward extreme specialization in modern economies, which, according to Richards, can lead to alienation and a reduction in human flourishing. Again, this is another one of his key insights, and again, I totally agree. Here are his words.

What defines a human being and what makes us intelligent agents really is our generality, our ability to do many different tasks and to adhere to many different norms that are necessary for our survival. And I think that human beings flourish when they have the opportunity to really have a rich life where they’re doing many different things. . . . I actually think it’s worse for human emotional development if you’re just kind of doing the same thing constantly. So where I think we could have a real problem that way is if you have a fully, fully automated economy, then what are humans actually up to?

YouTube Interview at 46:54.

AI as a Tool for Enhanced Productivity, Not Replacement. Contrary to the dystopian vision of AI completely replacing human labor, Richards envisions a future where AI acts as a supplement to human capabilities. The AI tools enable us to do a wide variety of tasks well, not just one or two monotonous things.This optimistic view posits a symbiotic relationship between humans and AI, where AI enhances human creativity and productivity rather than diminishing human roles.

The Slow Progress of Robotics and Physical Automation. Addressing the feasibility of a fully automated economy, Richards points out the slow progress in robotics compared to AI. He notes that designing physical systems capable of intricate manipulations and tasks is a challenging, slow engineering process. Richards emphasizes that the sophistication of the human body, a result of natural selection and optimization, is difficult to replicate in robots. He predicts that while robots will assist in physical tasks, their capabilities will not match the versatility and adaptability of humans in the foreseeable future.

The Intellectual and Creative Economy. Richards’ primary concern is about automation of intellectual and creative work. Creative human activities should not be replaced by AI, they should be empowered. He hopes: “we’ll see these AI tools as supplements, as things that help artists and writers and lawyers and office workers be a hundred times more productive, but they’re still going to be in there doing their stuff.

Navigating the AI-Augmented Future. Blake Richards offers a realistic perspective on the role of AI in our future economy. It is consistent with our experience so far. In my case, like many others, the new tools have made my work far more creative than ever before. Richards emphasis on the importance of diversity in human work, and the potential for a beneficial human-AI partnership, provides a balanced view in the face of fears surrounding AI-driven automation.

illustrations of the future of AI Augmented Work. Photorealistic style on bottom and combined Surreal and Photorealistic style images on top! Click to see full sizes. (Great fun to create these with Visual Muse!)

Conclusion

Blake Richards’ insights present a revolutionary understanding of intelligence, both in humans and artificial systems. His emphasis on the diversity and multifaceted nature of intelligence challenges the traditional view of a singular, overarching definition. Richards’ perspective reshapes how we assess AI’s capabilities, suggesting a broad spectrum evaluation over multiple metrics rather than focusing on a singular measure of Artificial General Intelligence. This approach aligns more closely with human intelligence, which is not a monolithic construct but a composite of various skills and abilities. His optimism for AI’s growth across a wide range of capabilities offers a hopeful vision of the future, where AI systems excel not just in isolated tasks but in a multitude of domains, akin to human versatility. Richards’ ideas encourage a broader, more inclusive understanding of intelligence, which could redefine our approach to AI development and integration into society.

Moreover, Richards’ stance on the evolutionary path of AI and its coexistence with humans provides a balanced narrative amidst the prevalent fears of AI-driven dystopia. Too bad the Board of OpenAI did not have his advice before firing Sam Altman for being too good at his job.

Dynamic CEO fired portrayed in Graphic Novel Style.

By advocating for functional mimicry and emphasizing the importance of episodic memory in AI, Blake Richards underscores the potential for AI to evolve in a way that complements human abilities, rather than competes with them. Blake’s dismissal of the existential risks often associated with AI, rooted in a deep understanding of evolutionary biology, suggests a future where AI and humans thrive in a mutually beneficial relationship. This symbiosis, where AI augments human creativity and productivity, opens up new possibilities for a future where AI is an empowering tool rather than a replacement for human endeavor. Richards’ forward-looking perspective not only alleviates fears surrounding AI but also ignites excitement for the creative and collaborative potential of human-AI partnerships.

Ralph Losey Copyright 2023 — ALL RIGHTS RESERVED

3 Responses to The Insights of Neuroscientist Blake Richards and the Terrible Bad Decision of OpenAI to Fire Sam Altman

  1. […] Article Link: The Insights of Neuroscientist Blake Richards and the Terrible Bad Decision of OpenAI to Fire Sam Al… […]

  2. […] Article Link: The Insights of Neuroscientist Blake Richards and the Terrible Bad Decision of OpenAI to Fire Sam Al… […]

  3. […] There are a number of other other reasons that it would be a mistake to slow down now, some of which will be addressed next through the word of other scientists who agree with the keep on accelerating position. But before I switch to their wisdom in Part Two of this article, I must point out another fundamental error made by some of the slow-downers. They seem guilty of thinking of AI as a creature, not a tool. Not only that, but they think of it as an immoral creature, which, although superintelligent, still thinks nothing of wiping out us puny humans. Oh, please. That is a fanciful misinterpretation of evolution. See e.g. The Insights of Neuroscientist Blake Richards. […]

Leave a Reply

Discover more from e-Discovery Team

Subscribe now to keep reading and get access to the full archive.

Continue reading