Our addiction to smartphones and social media has led to unintended negative consequences. We are in danger of becoming vapid carbon-based consumption machines. We are already extremely vulnerable to social manipulation by foreign powers, and to propagandists and hostile mind-meddlers of all kinds. We are living in very dangerous times, when information floods and confuses us all. See Examining the 12 Predictions Made in 2015 in “Information → Knowledge → Wisdom” (2017). We need to cut down on our addiction and pressure the big-tech companies into positive action to protect our rights. We need to unplug more often to make sure we maintain our humanity. The scientists are wrong: we are not mere information-processing machines. We are wise stardust, with capacities beyond the imagination of most technocrats. Reverse the mantra of the sixties: turn off, tune out and drop in. Look up. Look around. Resist. Act. See, e.g., Metz, Smartphones Are Weapons of Mass Manipulation (MIT Technology Review, 10/19/17).
Metablog
This is a rare blog where I write about someone else’s article – a blog about a blog. To make this more bizarre, the blog I’m moved to write about is a blog about a book, which I have been meaning to write about myself. So yes, this is a blog about a blog about a book that was almost a blog. But I get ahead of myself with this useless metadata. The blog that caught my interest is called: We’re Drowning in Data But Starved for Wisdom: We’re more than just intelligent machines (Thrive Global, 9-12-17). It is by a famous person whom I follow on Twitter, but have never written about before, Arianna Huffington. I used to think of her as a politically astute rich media person (Huffington Post) who was concerned about sleep. The Sleep Revolution: Transforming Your Life, One Night at a Time (Harmony, 2016).
Deep Thoughts About Technology
Arianna has apparently gotten enough sleep now and is thinking deep thoughts about technology. Well, at least her thoughts seem profound to me, although my reasoning is narcissistically suspect. That’s because Arianna’s thoughts are much like my own. Although I may have been writing about the future of technology longer than Arianna, I have to admit that her short blog does a better job than I do at expressing many of my core concerns. AI-Ethics.com.
The inspiration for Arianna’s blog is the latest popular book in AI, one that I am also quite enamored of, Life 3.0: Being Human in the Age of Artificial Intelligence. It is written by MIT professor Max Tegmark. I mentioned him and this book in my blog last month, More Additions to AI-Ethics.com: Offer to Host a No-Press Conference to Mediate the Current Disputes on AI Ethics, Report on the Asilomar Conference and Report on Cyborg Law. Arianna’s blog also mentions the debate I offered to mediate in a dialogue-based conference. Arianna says:
What’s fascinating about the debate about artificial intelligence is that it isn’t just about the threat AI potentially represents to humanity, but – a much more interesting and consequential debate – about what it actually means to be human.
That’s a deep thought. One that I hope a conference someday will explore and discuss (not debate). Arianna, if you are listening, you have an open invitation to participate in this conference.
Max Tegmark, the founder of the Future of Life Institute, whom I also follow on Twitter, had this tweet to say recommending Arianna’s blog about his book:
I agree with @ariannahuff about tech: let’s aspire to be more than vapid carbon-based consumption machines.
That is as good a Twitter summary of her article as any. Among other things, Arianna replies to a now popular thought among AI scientists that human consciousness is just an epiphenomenon of brain chemistry; thus, when computers and general AI become advanced enough, human minds can be uploaded into and merged with computers. Many scientists see no fundamental difference between intelligence that can be replicated on computers and human consciousness. Here is Arianna’s response.
Of course, there are some who believe we are nothing but machines, and that to even bring up the idea that there’s something unique or sacred about humans or human consciousness is somehow anti-science. But science and qualities like awe and wonder – which have often gone hand-in-hand with scientific discovery – aren’t antithetical. They have co-existed for millennia. Here is how the astrophysicist Neil deGrasse Tyson described it: “When I say spiritual I am referring to a feeling you would have that connects you to the universe in a way that it may defy simple vocabulary,” he said. “We think of spirituality as an intellectual playground but the moment you learn something that touches an emotion rather than just something intellectual, I would call that a spiritual encounter with the universe.”
Although I am somewhat of a Kurzweilian, I think there is more to human consciousness than intelligence. I do not see us ever creating a conscious being through intelligence alone. Many share my skepticism. Again to quote Arianna:
If humans were simply intelligent machines, they could be seamlessly blended with the most intelligent of artificial intelligence with nothing essential lost. But if there is something unique and ineffable about being human, if there is such a thing as a soul, an inner essence, a consciousness beyond our minds, becoming more and more connected with that self – which is also what truly connects us with others – is what gives meaning to life. And it’s also what ultimately determines why technological progress decoupled from wisdom is so dangerous to our humanity. …
So AI is – or should be – forcing us to think seriously about what it is to be human. And then to take steps to protect our humanity from the onslaught of technology in every aspect [of] our lives as we’re becoming increasingly addicted to our smartphones and all our ubiquitous screens.
Danger of Too Much Information
This danger of information without knowledge or wisdom is one of the themes in my theory of the impact of technology on society. Information → Knowledge → Wisdom: Progression of Society in the Age of Computers (2015); Examining the 12 Predictions Made in 2015 in “Information → Knowledge → Wisdom” (2017). I think it is the key theme of our historical era.
We are now dangerously awash in information, including misinformation and propaganda. Much of this is fueled by our increasing addiction to our smartphones and to our social media, including Facebook, Twitter and, yes, even blogs like this. Foreign powers are exploiting our obsession with information. They are exploiting the gullibility and weak-mindedness of some to interfere with our democratic processes. It remains to be seen how this will all play out, but one thing is already clear: we have to be careful about the impact of AI on all of this. As Vladimir Putin said recently: “Whoever leads in artificial intelligence will rule the world.”
Our 2016 Election Was Hacked
I touched on this danger previously in my blog, New Draft Principles of AI Ethics Proposed by the Allen Institute for Artificial Intelligence and the Problem of Election Hijacking by Secret AIs Posing as Real People. There I cited the scholarly paper with the latest research by Oxford University on what happened in the 2016 U.S. election: Samuel C. Woolley and Douglas R. Guilbeault, Computational Propaganda in the United States of America: Manufacturing Consensus Online (Oxford, UK: Project on Computational Propaganda). The research indicates that AI, primarily in the form of bots on Twitter, was probably responsible for tipping this close election to Trump:
Taken altogether, our mixed methods approach points to the possibility that bots were a key player in allowing social media activity to influence the election in Trump’s favour. Our qualitative analysis situates these results in their broader political context, where it is unknown exactly who is responsible for bot manipulation – Russian hackers, rogue campaigners, everyday citizens, or some complex conspiracy among these potential actors. …
The report exposes one of the possible reasons why we have not seen greater action taken towards bots on behalf of companies: it puts their bottom line at risk. Several company representatives fear that notifying users of bot threats will deter people from using their services, given the growing ubiquity of bot threats and the nuisance such alerts would cause. … We hope that the empirical evidence in this working paper – provided through both qualitative and quantitative investigation – can help to raise awareness and support the expanding body of evidence needed to begin managing political bots and the rising culture of computational propaganda.
As I said before, and feel compelled to say again, this is a serious issue that requires immediate action, if not voluntarily by social media providers, such as Facebook and Twitter, then by law. AI Ethics Work Should Begin Now. Etzioni’s proposed second rule of AI ethics should be adopted immediately: An A.I. system must clearly disclose that it is not human. How to Regulate Artificial Intelligence (NYT, 9/1/17); New Draft Principles of AI Ethics Proposed by the Allen Institute for Artificial Intelligence and the Problem of Election Hijacking by Secret AIs Posing as Real People. AI-controlled bots posing as people are an immediate threat and must be outlawed now. This is fraud.
We cannot afford to have another election hijacked by secret AIs posing as real people, whether on Twitter, Facebook or elsewhere. Legal changes under the current Administration are unlikely, and so are real changes by big corporate America. That means it is up to us, everyday citizens, to be vigilant against AI-enhanced propaganda. Do not count on big government to save you. Count on the Constitution and our legal system. That branch of the government is still functional and honest. Count on yourself, your inner wisdom. Self-reliance is the American way. Trust that smart, skeptical Americans are a majority and will ultimately prevail over all would-be dictators, imbeciles and geniuses alike. But verify. Exercise your rights, including especially your rights of free speech, assembly and voting. Never give up and sink into being a mere vapid carbon-based consumption machine.
Arianna, who is herself a master of social influence and media, understands this danger all too well. Again, to quote her article:
Part of our wish list for our lives and our future should be disentangling wisdom from intelligence. In our era of Big Data and algorithms, they’re easy to conflate. But the truth is that we’re drowning in data and starved for wisdom. As Harari put it, “in the past censorship worked by blocking the flow of information. In the twenty-first century, censorship works by flooding people with irrelevant information. . .in ancient times having power meant having access to data. Today, having power means knowing what to ignore.”
e-Discovery, Truth, AI and the Law
Flooding people with irrelevant information is a situation very familiar to everyone in e-discovery. We have developed predictive coding and other AI-based technologies to sort through the junk and find the key evidence. All of life has now become a document dump, one that is often covertly controlled by BIG BROTHER. AI can be used for good to lead us out of this mess, not only for justice in the Law, but also to protect our democratic institutions. Alternatively, AI without ethics will mean not only a loss of justice, but also a loss of free elections, of the democratic processes at the heart of our way of life. We cannot and will not allow this to happen.
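To make the predictive coding idea concrete, here is a toy sketch of how such a system works: a scoring model is built from a handful of attorney-coded “seed” documents, then used to rank the unreviewed pile so the likely evidence surfaces first. The documents, words and scoring method below are all invented for illustration and are nothing like any vendor’s actual product:

```python
# Toy sketch of predictive coding (technology-assisted review).
# A scoring model is built from a few attorney-coded "seed" documents,
# then used to rank the unreviewed pile by likely relevance.
import math
from collections import Counter

seed = [
    ("quarterly revenue forecast attached", 1),    # coded relevant
    ("lunch on friday?", 0),                       # coded not relevant
    ("revised revenue numbers for the board", 1),  # coded relevant
    ("office fantasy football league", 0),         # coded not relevant
]

# Count how often each word appears in relevant vs. irrelevant seeds.
rel, irr = Counter(), Counter()
for text, label in seed:
    (rel if label else irr).update(text.lower().split())

def score(text):
    """Naive-Bayes-style log-odds that a document is relevant."""
    return sum(
        math.log((rel[w] + 1) / (irr[w] + 1))  # add-one smoothing
        for w in text.lower().split()
    )

unreviewed = [
    "board deck with updated revenue forecast",
    "parking garage closed next week",
]

# Reviewers read the top-ranked documents first.
ranked = sorted(unreviewed, key=score, reverse=True)
```

Real systems use far more sophisticated models and iterative attorney feedback, but the principle is the same: let the machine surface the signal so humans spend their limited attention on what matters.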
The Founding Fathers enacted the Bill of Rights to protect U.S. citizens from our own government. Distrust of government is hardwired into our Constitution. Follow the Constitution. Fight oppression. If you have sworn an oath to uphold the Constitution, including the all-important Bill of Rights, as I have, then be true to your oath. Trust no one. Including especially the government and its political leaders. That is the wisdom of our Founding Fathers. The same applies in spades to what foreign governments say or do, especially the dictatorships that we see in Russia and China, to name just a few. Do not count on the FBI to protect you. Be skeptical. Verify everything. Demand proof. Get to the truth. The same distrust should apply to all big organizations, including the new technology companies. Do not click away your soul. You are more than a machine. Fight addiction to social media.
Conclusion
The problem is ours to solve, both by AI Ethics principles and law, but also by our everyday conduct. Again, to quote the concluding words to Arianna’s article We’re Drowning in Data But Starved for Wisdom: We’re more than just intelligent machines:
The way to ensure a safe, beneficial and healthy relationship with technology is to begin by taking control of that relationship right now, when the technology is much more manageable. Or, as Tegmark put it, “one of the best ways for you to improve the future of life is to improve tomorrow.” We can be role models, he says, but we have to choose which sort of role model we want to be: “Do you want to be someone who interrupts all their conversations by checking their smartphone or someone who feels empowered by using technology in a planned or deliberate way? Do you want to own your technology or do you want your technology to own you? What do you want it to mean to be human in the age of AI?”
He urges us to have this discussion with everyone around us: “Our future isn’t written in stone and just waiting to happen to us. It’s ours to create. Let’s create an inspiring one together!”
I totally agree.
No matter how hard they try, brain scientists and cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in the brain – or copies of words, pictures, grammatical rules or any other kinds of environmental stimuli. The human brain isn’t really empty, of course. But it does not contain most of the things people think it does – not even simple things such as ‘memories’.
Our shoddy thinking about the brain has deep historical roots, but the invention of computers in the 1940s got us especially confused. For more than half a century now, psychologists, linguists, neuroscientists and other experts on human behaviour have been asserting that the human brain works like a computer.
Oops. Last 2 paragraphs were cut off.
That’s why, although I think it is his best book, Ray Kurzweil’s “How to Create a Mind” does not help: it exemplifies this perspective, speculating about the ‘algorithms’ of the brain, how the brain ‘processes data’, and even how it superficially resembles integrated circuits in its structure.
The information processing metaphor of human intelligence now dominates human thinking, both on the street and in the sciences. There is virtually no form of discourse about intelligent human behaviour that proceeds without employing this metaphor, just as no form of discourse about intelligent human behaviour could proceed in certain eras and cultures without reference to a spirit or deity. The validity of the IP metaphor in today’s world is generally assumed without question.
Greg – For once we totally agree. Perhaps because I quoted a Greek?
Ralph … 🤣🤣🤣🤣🤣
If you get a chance, read “In Our Own Image”. Zarkadakis describes six different metaphors people have employed over the past 2,000 years to try to explain human intelligence.
This IP kind of thinking I blame on von Neumann, who stated flatly that the function of the human nervous system is “prima facie digital”. Although he acknowledged that little was actually known about the role the brain played in human reasoning and memory, he drew parallel after parallel between the components of the computing machines of the day and the components of the human brain.
We ARE NOT born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.
We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.