Our related website, AI-Ethics.com, was completely updated this weekend. This is the first full rewrite since the website was launched in late 2016. Things have changed significantly in the past nine months and the update was overdue. The Mission Statement, which lays out the purpose of the website, remains essentially the same, but has been clarified and restated, as you will see. Below is the header of the AI Ethics website. Its subtitle is Law, Technology and Social Values. Just FYI, I am trying to transition my legal practice and specialty expertise from e-Discovery to AI Policy.
Below is the first half of the AI Ethics Mission Statement page. Hopefully it will entice you to read the full Mission Statement and check out the entire website. Substantial new research is shared there. You will see some overlap with the AI regulatory articles appearing on the e-Discovery Team blog, but there are many additional articles and new information not found here.
Our mission is to help mankind navigate the great dilemma of our age, well stated by Stephen Hawking: “The rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do not yet know which.” Our goal is to help make it the best thing ever to happen to humanity. We have a three-fold plan to help humanity get there: dialogue, principles, education.
Our focus is to help law and technology work together to create reasonable policies and regulations. This includes the new LLM generative models that surprised the world in late 2022.
Pros and Cons of the Arguments
Will Artificial Intelligence become the great liberator of mankind? Create wealth for all and eliminate drudgery? Will AI allow us to clean the environment, cure diseases, extend life indefinitely and make us all geniuses? Will AI enhance our brains and physical abilities, making us all super-hero cyborgs? Will it facilitate justice, equality and fairness for all? Will AI usher in a technological utopia? See, e.g., Sam Altman’s Favorite Unasked Question: What Will We Do in the Future After AI? People favoring this perspective tend to oppose regulation for a variety of reasons, including the view that it is too early to be concerned.
Or – Will AI lead to disasters? Will AI create powerful autonomous weapons that threaten to kill us all? Will it perpetuate human bias and prejudices? Will AI bots impersonate and fool people, secretly sway public opinion and even impact the outcome of elections? (Some researchers think this is what happened in the 2016 U.S. elections.) Will AI create new ways for the few to oppress the many? Will it result in a rigged stock market? Will it bring other great disruptions to our economy, including widespread unemployment? Will some AI eventually become smarter than we are, and develop a will of its own, one that menaces and conflicts with humanity? Are Homo sapiens in danger of becoming biological load files for digital super-intelligence?
Not unexpectedly, this doomsday camp favors strong regulation, including an immediate stop in the development of new generative AI, which took the world by surprise in late 2022. See: Elon Musk and Others Call for Pause on A.I., Citing ‘Profound Risks to Society’ (NYT, 3/29/23); the Open Letter dated March 22, 2023 of the influential Future of Life Institute calling for a “pause in the development of A.I. systems more powerful than GPT-4. . . . and if such a pause cannot be enacted quickly, governments should step in and institute a moratorium.” Also see: The problems with a moratorium on training large AI systems (Brookings Institute, 4/11/23) (noting multiple problems with the proposed moratorium, including possible First Amendment violations). Can research really be stopped entirely as this side proposes? Can AI be gagged?
One side thinks we need government-imposed laws and detailed regulations to protect us from disaster scenarios. The other side thinks that industry self-regulation alone is adequate and that all of the fears are unjustified. At present, there are strongly opposing views among experts concerning the future of AI. Let’s bring in the mediators to help resolve this critical roadblock to reasonable AI Ethics.
Balanced Middle Path
We believe a middle way is best, one where dangers and opportunities are balanced, and where government and industry work together, with help and input from private citizens. We advocate a global team approach to maximize the odds of a positive outcome for humanity.
AI-Ethics.com suggests three ways to start this effort:
- Foster a mediated dialogue between the conflicting camps in the current AI ethics debate.
- Help articulate basic regulatory principles for government, industry groups and the public.
- Inspire and educate everyone on the importance of artificial intelligence.
To read the rest, jump to the AI Ethics website Mission page.
Ralph Losey Copyright 2023. All Rights Reserved