DeepSeek, a startup AI company owned by a Chinese hedge fund, which is in turn owned by a young AI whiz-kid, Liang Wenfeng, claims that its newly released R1 software, built on its V3 base model, was trained inexpensively and without using NVIDIA’s high-end chips, the ones that cannot be exported to China. Wenfeng also claims his software is as good as the best-in-class AI software trained on the best NVIDIA chips. These claims were believed for reasons I will explain. The possibility that these claims might be true caused a sell-off on Wall Street the likes of which has never been seen before. The trillion-dollar market crash included a loss in value for NVIDIA of $593 billion, a new one-day record for any company, ever.
Introduction
One reason DeepSeek’s claims triggered a crash is that DeepSeek’s software is open source and can be copied freely. Further, the training costs for the software are claimed to be 20 to 100 times lower, requiring far less data and energy. The market assumed that if the claims were true, it would be an industry-disrupting breakthrough. It would also disrupt the political balance of world power. The day after the crash, former Google CEO Eric Schmidt said the rise of DeepSeek marks a “turning point” for the global AI race and argued for more open-source products. Will China’s open-source AI end U.S. supremacy in the field? (Washington Post, 1/28/25).
The political dimension of this market event is one reason many are skeptical of the cost-savings claims by an unknown Chinese startup. Still, the market panicked because many were quickly convinced of the overall quality of DeepSeek’s new R1 software itself. Seeing is believing.
No proof of the cost savings has been provided, but published test results and hands-on trials by tens of thousands of people worldwide trying the free smartphone software confirmed the quality claims. Here is the paper in English that DeepSeek also released explaining the programming innovations: DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning (arXiv, 1/22/25). The top scientists of the major AI labs, all of whom immediately tested the software, also generally confirmed the quality claims. Rumors began flying that they were all in crisis mode, especially Meta, the only other major company that had gone open source.
Sam Altman immediately responded to the release of R1 and the market crash by posting on X:
Deepseek’s r1 is an impressive model, particularly around what they’re able to deliver for the price. We will obviously deliver much better models and also it’s legit invigorating to have a new competitor! we will pull up some releases.
My hands-on tests of DeepSeek show that it is at least “close” to the quality of ChatGPT and looks a lot like it, suspiciously so. I will share the results in my next article, where I will closely study the one new software feature that so far only DeepSeek provides, called DeepThink. Many others are testing DeepSeek and reaching similar conclusions. But see, Deepseek… More like Deep SUCKS. My honest thoughts… (YouTube video by Clever Programmer, 1/31/25). DeepSeek is already the number one downloaded app on both Apple and Android phones.
This kind of market reaction is hard to believe when you consider that DeepSeek’s owner, Liang Wenfeng, is a forty-year-old Chinese hedge-fund owner (High-Flyer) in Hangzhou, China. Wenfeng is a thin, bespectacled engineer, a billionaire with a genius for stock trading using AI. This allowed him to assemble and privately pay for a team of China’s top AI software developers.
Wenfeng’s company, DeepSeek, is also virtually unknown, as its R1 software is its first consumer product. In spite of the fact that Wenfeng is basically an AI stock trader and his company unknown, many stock traders and analysts in the US were persuaded to believe Wenfeng’s claims to have achieved an AI development cost breakthrough by using:
older, cheaper, non-trade-restricted models of NVIDIA chips; and,
new innovative training methods.
Wenfeng also claims DeepSeek’s new software is better in at least one respect than existing models because it can describe the chain of thought and assumptions made by the AI to respond to each user request. They call this new feature “DeepThink.” My next article will go into that in depth. Spoiler alert: it does appear to work as claimed, and I predict the US companies will come out with their own versions of this DeepThink feature soon. Another spoiler alert: my prediction came true a few days after I made it, when OpenAI released a new version of its ChatGPT model called “o3-mini-high” on the afternoon of January 31, 2025. The next article will explain that in detail too.
Political Impact of DeepSeek
Are all the claims of DeepSeek real and to be believed, as the market has done? Or are they just clever AI-driven propaganda? Especially the unsubstantiated claim that DeepSeek has invented a way to train cheaply on older chips? Could this be part of some kind of master strategy of the People’s Republic of China to counter the prior actions of President Biden, so far continued by President Trump, to restrict chip sales to China? Presidents Biden and now Trump have pledged to maintain US superiority in AI. President Trump now appears ready to go further by imposing new trade tariffs and AI restrictions on China.
China’s DeepSeek claims, but has not proven, that many companies all over the world can now create an equal or better model at far lower cost than ever before, and that it can be done using older, non-trade-restricted computer chips and more advanced data training methods. If so, then U.S. dominance in AI may end. The fear of industry disruption by Chinese code innovation is the basic cause of the US market panic.
Was the market panic propaganda manipulated by China? They have a history of such fraud. See, DeepSeek is Absolute Nonsense (YouTube video by laowhy86, 1/31/25). Did they or others use artificial intelligence to cause the crash? Who profited from the trillion dollar market panic? (Always follow the money.) How will the new Trump SEC react? (Market manipulation is illegal. Is the SEC investigating?) How will the Trump Administration react now? How will the leading established AI companies? The rumor mill claims the AI companies are all in panic mode now and knee-deep in emergency meetings about DeepSeek.
There is one piece of evidence to support speculation of Chinese government involvement in market manipulation. Liang Wenfeng was seen meeting with Chinese Premier Li Qiang on January 20, 2025. The market sell-off came just a week later and was obviously very good news for the Chinese government leaders. It was also a slap in the face to U.S. leaders. Only a week earlier, President Trump announced a $500 billion initiative for AI development in the U.S. Then a Chinese upstart comes out of nowhere and punches a trillion dollars out of our market with claims that American chips are no longer needed. The Trump Administration appears to be gearing up to fight back, once it figures out what happened and what to do. See e.g., Trump Commerce pick slams China: ‘Stop using our tools to compete’ (The Hill, 1/29/25) (confirmation testimony of the nominated Commerce Secretary, Howard Lutnick, blames trade-secret theft for DeepSeek’s success).
The night after the stock market crash, President Trump appeared before reporters at his home at Mar-a-Lago and told them the release of DeepSeek AI from a Chinese company should be a “wake-up call for our industries that we need to be laser-focused on competing to win.” He stated the development could be positive for the United States: “If it comes in cheaper, that’s going to benefit us too.” He was expecting new AI systems from U.S. companies as soon as next week that “will top” DeepSeek’s model. Then of course he said in usual fashion: “We’re going to dominate! We’ll dominate everything!” China’s DeepSeek disrupts American plans for AI dominance (Washington Post, 1/28/25). The market did recover slightly the next day, but not much, and NVIDIA briefly dropped below $120 per share again today, January 30, as I am writing this. Still, I do expect a bounce back.
CEO of Anthropic, Dario Amodei, Argues for Continued Restrictions on China
The CEO of Anthropic, Dario Amodei, has already written an essay reacting to the DeepSeek splash: On DeepSeek and Export Controls (January 29, 2025). Below are the opening paragraphs of his blog.
Here, I won’t focus on whether DeepSeek is or isn’t a threat to US AI companies like Anthropic (although I do believe many of the claims about their threat to US AI leadership are greatly overstated). Instead, I’ll focus on whether DeepSeek’s releases undermine the case for those export control policies on chips. I don’t think they do. In fact, I think they make export control policies even more existentially important than they were a week ago.
Export controls serve a vital purpose: keeping democratic nations at the forefront of AI development. To be clear, they’re not a way to duck the competition between the US and China. In the end, AI companies in the US and other democracies must have better models than those in China if we want to prevail. But we shouldn’t hand the Chinese Communist Party technological advantages when we don’t have to.
Amodei argues that the recent advancements of DeepSeek do not undermine U.S. export control policies on AI chips but instead reinforce their necessity to maintain a technological lead. He sets forth key AI development dynamics, including scaling laws, efficiency improvements, and paradigm shifts, to put DeepSeek’s recent progress into perspective. Interestingly, he does not contest their claims. He only says, I think correctly, that DeepSeek’s improvements are part of a predictable trend and are not breakthroughs. He contends that well-enforced export controls are critical in shaping a future where the U.S. retains dominance in AI, arguing that this is necessary to prevent China from amassing the millions of chips needed to create future AI systems that could shift global power balances.
DeepSeek’s New DeepThink Feature in R1
I have tested DeepSeek’s new DeepThink feature of R1. Although it is not a big breakthrough, it is an excellent new feature: a chain-of-thought display of the AI’s own pre-response analysis of a user’s prompt. It is unlike any other AI software feature I have seen. I will report on this in detail soon. It is a true innovation, unlike the cost and training advances, which at first glance could be the result of trade-secret and copyright violations. Howard Lutnick told Congress that is what he thinks. OpenAI and many of its users claim DeepSeek stole from OpenAI by using a process called distillation, which OpenAI’s license agreement prohibits. See e.g., Why blocking China’s DeepSeek from using US AI may be difficult (Business Insider, 1/29/25).
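For readers unfamiliar with the term, distillation generally means training a smaller “student” model to imitate the outputs of a larger “teacher” model. Below is a minimal sketch of the core idea in PyTorch; the function name, temperature, and toy tensors are my own illustrative assumptions, not DeepSeek’s or OpenAI’s actual code.

```python
# Minimal sketch of knowledge distillation, assuming PyTorch. A small "student"
# model is trained to match the softened output distribution of a larger
# "teacher." Function name and temperature are illustrative, not any lab's code.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between softened teacher and student distributions."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2

# Example: teacher and student scores over a vocabulary of 10 tokens.
loss = distillation_loss(torch.randn(4, 10), torch.randn(4, 10))
```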
I predicted the new DeepThink type feature would soon be added by U.S. AI companies. As my next article, to be published soon, explains, this prediction came true just a few days later when OpenAI released ChatGPT o3-mini-high. The competition in new features from DeepSeek will continue to have a healthy impact, even if the cost savings claims it has made later prove to be more political smoke and mirrors.
Many doubt DeepSeek has actually made tech breakthroughs that will magically trivialize NVIDIA’s inventions. The trillion dollar loss looks more like stock market manipulation, probably by AI. The use of AI for market trading is, after all, Liang Wenfeng’s specialty. It is how he joined the tech-boy billionaires club. Let’s just hope our markets are better protected and not so easy to hack. Let’s see if the cybersecurity experts and new DOJ lawyers are able to react effectively.
Conclusion
As the old Chinese adage supposedly goes, may you be cursed to live in interesting times. The pace of change in AI improvement is incredible. I suppose this is what exponential change looks like. These are not only interesting times, they are very dangerous, especially in the relations between the U.S. and China. We certainly do not want control of superintelligent AI to fall into the hands of any dictator, anywhere. Nor do we want AI manipulation of our markets. I will continue to follow this closely not only from the point of view of law and technology, but also from a political, economic and military perspective.
AI has the potential to transform the criminal justice system through its ability to process vast datasets, recognize patterns, and predict outcomes. However, this potential comes with a profound responsibility: ensuring that AI is employed in ways that uphold basic human principles of justice. This article will focus on how AI can assist prosecutors in fulfilling their duty to represent the people fairly and equitably. It will highlight the practical benefits of AI in criminal law, providing specific examples of its application. The underlying theme emphasizes the necessity of human oversight to prevent the misuse of AI and to ensure that justice remains a human ideal, not an artificial construct.
AI assisted justice. All images in this article are by Ralph Losey using his custom AI, Visual Muse
The integration of AI into criminal prosecutions must be aligned with the ethical and legal obligations of prosecutors as outlined, for instance, by the American Bar Association’s Criminal Justice Standards for the Prosecution Function (ABA, 4th ed. 2017) (hereinafter “ABA Standards”). The ABA Standards emphasize the prosecutor’s duty to seek justice, maintain integrity, and act with transparency and fairness in all aspects of the prosecution function. This article will not cover the indirectly related topic of AI-based evidence. See Gless, Lederer, Weigend, AI-Based Evidence in Criminal Trials? (William & Mary Law School, Winter 2024). Nor will it cover criminal defense lawyer issues, though perhaps a followup will soon.
The Promise of AI in Criminal Prosecutions
“The primary duty of the prosecutor is to seek justice within the bounds of the law, not merely to convict.” ABA Standard 3-1.2(b). When AI is used responsibly, it can assist prosecutors in fulfilling this duty by providing new tools. The AI powered tools can enhance evidence analysis, case management, and decision-making, all while maintaining the integrity and fairness expected of the prosecution function. Prosecutors with AI can better manage the vast amounts of data in modern investigations, identify patterns that might escape human detection, and make more informed decisions. It is no magic genie, but when used properly, can be a very powerful tool.
In March 2018, the National Institute of Justice sponsored a workshop of prosecutors from around the country that identified data and technology challenges as a high-priority need for prosecutors. According to the Rand Corporation’s report on the conference, entitled Prosecutor Priorities, Challenges, and Solutions (“Rand Report“), the key findings of the prestigious group were: (1) difficulties recruiting, training, managing, and retaining staff; (2) demanding and time-consuming tasks for identifying, tracking, storing, and disclosing officer misconduct and discipline issues; and (3) inadequate or inconsistent collection of data and other information shared among agencies . . . as well as by emerging digital and forensic technologies. The full Rand Report PDF may be downloaded here. The opening summary states:
Prosecutors are expected to deliver fair and legitimate justice in their decision making while balancing aspects of budgets and resources, working with increasingly larger volumes of digital and electronic evidence that have developed from technological advancements (such as social media platforms), partnering with communities and other entities, and being held accountable for their actions and differing litigation strategies. . . .
Moreover, the increasing volume of potentially relevant digital information, video footage, and other information from technological devices and tools can significantly add to the amount of time needed to sufficiently examine and investigate the evidence in order to make decisions about whether to drop or pursue a case. This can be especially challenging because the staffing and other resources in prosecutors’ offices have not necessarily kept pace with these increasing demands.
Although the amount of digital information that prosecutors must sometimes sift through can be managed, in part, through innovative technological tools, such as data mining and data reduction solutions (Al Fahdi, Clarke, and Furnell, 2013; Quick and Choo, 2014), there are often steep learning curves or high costs that make it unrealistic for an office to implement these technologies.
e-Discovery, Evidence Analysis and Case Management
As the Rand Report confirms, the sheer volume of evidence in complex criminal investigations is a significant challenge for prosecutors. Also see: Tinder Date Murder Case Highlights the Increasing Complexity of eDiscovery in Criminal Investigations: eDiscovery Trends (e-Discovery Daily, 6/15/18). AI can analyze vast datasets—such as emails, text messages, and internet activity logs—to identify patterns indicative of criminal activity, but the software can be expensive and requires trained technology experts. AI algorithms can recognize specific types of evidence, such as images, sentiments, or key concepts relevant in many cases. They can help prosecutors identify patterns and connections within the evidence that might not be immediately apparent to human investigators. This capability can significantly reduce the time needed to search and study evidence, enabling prosecutors to build stronger cases more efficiently.
But, as the Rand Report also makes clear, prosecutors need adequate funding and trained personnel to purchase and use these new tools. Fortunately, generative AI is substantially less expensive than the older models of AI and easier to use. Still, issues of fairness and guardrails against discrimination in their use remain significant problems. There are also very significant privacy issues inherent in predictive policing. David Ly, Predictive Policing: Balancing Innovation and Ethics (The Fast Mode, 8/15/24); Arjun Bhatnagar, The Threat of Predictive Policing to Data Privacy and Personal Liberty (Dark Reading, 12/27/22).
AI evidence search and classification tools such as predictive coding, which are well established in civil litigation, should soon be more widely used in criminal law. The high costs involved are now plummeting and should soon be affordable to most prosecutors. These tools can drastically reduce the time needed to search and analyze large volumes of complex data. Still, budgets to hire trained personnel to operate the new tools must be expanded. AI can complement, but not entirely replace, human review in what I call a hybrid multimodal process. Ralph Losey, Chat GPT Helps Explains My Active Machine Learning Method of Evidence Retrieval (e-Discovery Team, 1/28/23). Human experts on the prosecutor’s team should always be involved in the evidence review to ensure that no critical information is missed.
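To make the process concrete, here is a minimal sketch of the active-learning loop at the heart of predictive coding, using scikit-learn. The toy corpus, seed labels, and the keyword rule standing in for the human reviewer are all my own assumptions; production e-discovery platforms are far more elaborate.

```python
# A minimal sketch of predictive coding's active-learning loop: a reviewer
# codes a few documents, the model ranks the rest, and the least certain
# documents are routed back to the human for coding.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

documents = ["email about wire transfer", "lunch plans for friday",
             "invoice follow-up on the transfer", "meet at the usual place",
             "quarterly report draft"]          # toy corpus
labels = {0: 1, 1: 0}                           # reviewer-coded seed set: index -> relevant?

X = TfidfVectorizer().fit_transform(documents)
for _ in range(3):                              # three review rounds
    train_idx = sorted(labels)
    model = LogisticRegression().fit(X[train_idx], [labels[i] for i in train_idx])
    pool = [i for i in range(len(documents)) if i not in labels]
    if not pool:
        break
    probs = model.predict_proba(X[pool])[:, 1]
    pick = pool[int(np.argmin(np.abs(probs - 0.5)))]   # least confident document
    # A human reviewer codes the picked document; simulated here by a keyword rule.
    labels[pick] = int("transfer" in documents[pick])

print({documents[i]: y for i, y in labels.items()})
```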
Transparency and accountability are also crucial in using AI in discovery. Defense attorneys should be provided with a detailed explanation of how these tools were used. This is essential to maintaining the fairness and integrity of the discovery process, ensuring that both sides have equal access to evidence and can challenge the AI’s conclusions if necessary.
AI also plays a crucial role in case management. AI-powered tools can help prosecutors organize and prioritize cases based on the severity of the charges, the availability of evidence, and the likelihood of a successful prosecution. These tools can assist in tracking deadlines, managing court calendars, and ensuring that all necessary court filings are completed on time. By streamlining these administrative tasks, AI allows prosecutors and their assistants to concentrate on the substantive aspects of their work—pursuing justice. It also helps them deal with the omnipresent staff shortage issues.
Bias Detection and Mitigation
Bias in prosecutorial decision-making—whether conscious or unconscious—remains a critical concern. ABA Standards state:
The prosecutor should not manifest or exercise, by words or conduct, bias or prejudice based upon race, sex, religion, national origin, disability, age, sexual orientation, gender identity, or socioeconomic status. A prosecutor should not use other improper considerations, such as partisan or political or personal considerations, in exercising prosecutorial discretion. A prosecutor should strive to eliminate implicit biases, and act to mitigate any improper bias or prejudice when credibly informed that it exists within the scope of the prosecutor’s authority.
ABA Standards 3-1.6(a).
AI can play a crucial role in detecting and mitigating such biases, helping prosecutors adhere to the mandate that they “strive to eliminate implicit biases, and act to mitigate any improper bias or prejudice” within their scope of authority.
AI systems also offer the potential to detect and mitigate unconscious human bias in prosecutorial decision-making. AI can analyze past prosecutorial decisions to identify patterns of bias that may not be immediately apparent to human observers. By flagging these patterns, AI can help prosecutors become aware of biases in their office and take corrective action.
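As a simple illustration of the kind of pattern-flagging described here, the following sketch compares charging rates across demographic groups in a hypothetical decision log. The data, column names, and the four-fifths rule-of-thumb threshold are assumptions for illustration; a real audit would control for case severity and many other factors.

```python
# Hypothetical decision log: outcome rates compared across groups.
import pandas as pd

cases = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B", "B"],
    "charged": [1,   0,   1,   1,   1,   1,   0],
})
rates = cases.groupby("group")["charged"].mean()
print(rates)                                   # charging rate per group
print("ratio:", rates.min() / rates.max())     # values far below ~0.8 merit review
```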
Prosecutors should use care in the selection and use of AI systems. If they are trained on biased data, they can perpetuate and even amplify existing disparities in the criminal justice system. For instance, an AI algorithm used to predict recidivism, if trained on data reflecting historical biases—such as the over-policing of minority communities—may disproportionately disadvantage these communities. AI systems used in criminal prosecutions should be designed to avoid this bias.
The software purchased by a prosecutor’s office should be chosen carefully, ideally with outside expert advice, and rigorously tested for bias and other errors before deployment. Alikhademi, K., Drobina, E., Prioleau, D. et al., A review of predictive policing from the perspective of fairness, Artif. Intell. Law 30, 1–17 (2022) (“[T]he pros and cons of the technology need to be evaluated holistically to determine whether and how the technology should be used in policing.”) There should also be outside community involvement. Artificial Intelligence in Predictive Policing Issue Brief (NAACP, 2/15/24) (NAACP’s recommendations: independent oversight; transparency and accountability; community engagement; ban use of biased data; new laws and regulations).
Prosecutors should not fall into a trap of overcompensating based on statistical analysis alone. AI is a limited tool that, like humans, makes errors of its own. Its use should be tempered by prosecutor experience, independence, intuition and human values. When we use AI in any context or field it should be a hybrid relationship where humans remain in charge. From Centaurs To Cyborgs: Our evolving relationship with generative AI (e-Discovery Team, 4/24/24) (experts recommend two basic ways to use AI, both hybrid, where the unique powers of human intuition are added to those of AI). AI can also help prosecutors make objective decisions on charging and sentencing by providing statistically generated recommendations, again with the same cautionary advice on overreliance.
Sentencing Recommendations and Predictive Analytics
The use of AI in predictive analytics for sentencing is among the most controversial applications in criminal law. AI systems can be trained to analyze data from past cases and make predictions about the likelihood of a defendant reoffending or suggest appropriate sentences for a given crime. These recommendations can then inform the decisions of judges and prosecutors.
Predictive analytics has the potential to bring greater consistency and objectivity to sentencing. By basing recommendations on data rather than individual biases or instincts, AI can help reduce disparities and ensure similar cases are treated consistently. This contributes to a more equitable criminal justice system.
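For illustration, here is a minimal sketch of what such a predictive model looks like under the hood: a logistic regression over hypothetical case features. Real risk-assessment tools are far more complex, and their bias problems are well documented; this only shows the basic shape.

```python
# Hypothetical features per case: [prior_convictions, age_at_offense].
from sklearn.linear_model import LogisticRegression

X = [[0, 34], [3, 22], [1, 45], [5, 19], [0, 51], [2, 28]]
y = [0, 1, 0, 1, 0, 1]   # 1 = reoffended within the study window (toy labels)

model = LogisticRegression().fit(X, y)
# Estimated reoffense probability for a new case; a human must weigh the context.
print(model.predict_proba([[2, 30]])[0, 1])
```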
While AI can bring greater consistency to sentencing, prosecutors must ensure that AI-generated recommendations comply with their “heightened duty of candor” and the overarching obligation to ensure that justice is administered equitably.
In light of the prosecutor’s public responsibilities, broad authority and discretion, the prosecutor has a heightened duty of candor to the courts and in fulfilling other professional obligations.
ABA Standard 3-1.4(a)
The use of AI in sentencing raises important ethical questions. Should AI make predictions about a person’s future behavior based on their past? What if the data used to train the AI is biased or incomplete? How can we ensure that AI-generated recommendations are not seen as infallible but are subject to critical scrutiny by human decision-makers?
These concerns highlight the need for caution. While AI can provide valuable insights and recommendations, it is ultimately the responsibility of human prosecutors and judges to make the final decisions. AI should be a tool to assist in the pursuit of justice, not a replacement for human judgment.
Predictive Policing
Predictive policing uses algorithms to analyze massive amounts of information in order to predict and help prevent potential future crimes. Tim Lau, Predictive Policing Explained (Brennan Center for Justice, 11/17/21). This is an area where old AI (before the advent of generative AI) has been embraced by many police departments worldwide, including in the E.U. countries, but also in China and other repressive regimes. Many prosecutors in the U.S. endorse it, but it is quite controversial and hopefully will be improved by new models of generative AI. The DA’s office wants to use predictive analytics software to direct city resources to ‘places that drive crime.’ Will it work? (The Lens, 11/15/23). In theory, by analyzing data on past crimes—such as the time, location, and nature of the offenses—AI algorithms can predict where and when future crimes are likely to occur. The majority of reports say this already works. But what of the minority reports? They contest the accuracy of these predictions using old AI models. Some say they are terrible at it. Sankin and Mattu, Predictive Policing Software Terrible At Predicting Crimes (Wired, 10/2/23). There is widespread concern about growing misuse, especially in countries that have politicized prosecutorial systems.
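The core mechanism is simpler than it sounds. Here is a minimal sketch of hot-spot analysis: bin past incident locations into a grid and rank the cells. The coordinates are invented, and real systems use far more sophisticated (and more contested) spatio-temporal models.

```python
# Invented (x, y) coordinates of past incidents, binned into unit grid cells.
from collections import Counter

incidents = [(3.2, 7.8), (3.4, 7.9), (3.3, 7.7), (9.1, 1.2), (3.5, 7.6)]
CELL = 1.0  # grid cell size

cells = Counter((int(x / CELL), int(y / CELL)) for x, y in incidents)
for cell, count in cells.most_common(3):
    print(f"cell {cell}: {count} past incidents")  # candidate patrol priorities
```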
Still, in theory this kind of statistical analysis should be able to help honest law enforcement agencies allocate resources more effectively, enabling police to prevent crime before it happens. See generally, Navigating the Future of Policing: Artificial Intelligence (AI) Use, Pitfalls, and Considerations for Executives (Police Chief Magazine, 4/3/24).
All prosecutors, indeed all citizens, want to be smart when it comes to crime; we all want “more police officers on the street, deployed more effectively. They will not just react to crime, but prevent it.” Kamala Harris (Author) and Joan Hamilton, Smart on Crime: A Career Prosecutor’s Plan to Make Us Safer (Chronicle Books, 2010).
The Los Angeles Police Department (LAPD) was one of the first to use predictive policing software, known as PredPol (now Geolitica). It identified areas of the city at high risk for certain types of crime, such as burglaries or auto thefts. The software analyzed data on past crimes and generated “heat maps” that indicated where crimes were most likely to occur in the future. This guided patrols and other law enforcement activities. PredPol proved to be very controversial. Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them (The Markup, 12/2/21). Its use was discontinued by the LAPD in 2020, but other companies claim to have corrected the biases and errors in the programs. See Levinson-Waldman and Dwyer, LAPD Documents Show What One Social Media Surveillance Firm Promises Police (Brennan Center for Justice, 11/17/21).
The goal of the NYPD’s Patternizr was to help police officers identify commonalities in crimes committed by the same offender or group of offenders. With the help of Patternizr, officers are able to save time and be more efficient as the program generates the possible “pattern” of different crimes. The officer then has to manually search through the possible patterns to see if the generated crimes are related to the current suspect. If the crimes do match, the officer will launch a deeper investigation into the pattern crimes.
While predictive policing has been credited with reducing crime in some areas, it has also been criticized for potentially reinforcing existing biases. If the data used to train the AI reflects a history of over-policing in certain minority communities, the algorithm may predict those communities are at higher risk for future crimes, leading to even more policing in those areas. This, in turn, can perpetuate a cycle of discrimination and injustice. See e.g. Taryn Bates, Technology and Culture: How Predictive Policing Harmfully Profiles Marginalized People Groups (Vol. 6 No. 1 (2024): California Sociology Forum).
Projecting into the next decade, AI will be an integral part of law enforcement — from crime prediction and real-time decision aids to postincident analysis. These technologies could lead to smarter patrolling, fewer unnecessary confrontations and overall enhanced community safety. However, this vision can only materialize with rigorous oversight, consistent retraining and an undiluted focus on civil liberties and ethics. Law enforcement’s AI-driven future must be shaped by a symbiotic relationship where technology amplifies human judgment rather than replacing it. The future promises transformative advances, but it’s imperative that the compass of integrity guide this journey.
The latest versions of predictive policing technology will certainly use new generative AI enhanced analysis. Law enforcement should be very careful in the purchase and implementation of these new technologies. They should seek the input of outside experts and carefully examine vendor representations. That should include greater vendor transparency, such as disclosure of the data used to train these systems to confirm that it is representative and unbiased. Proper methods of implementation of the AI tools should also be carefully considered. In my view, and that of others, this means adopting a hybrid approach that “amplifies human judgment rather than replacing it.”
Sentiment Analysis in Jury Selection
Another trending application of AI in criminal law is the use of sentiment analysis in jury selection. Sentiment analysis is a type of AI that can analyze text or speech to determine the underlying emotions or attitudes of the speaker. In jury selection, sentiment analysis can analyze potential jurors’ public records, especially social media posts, as well as their responses during voir dire—the process of questioning jurors to assess their suitability for a case. It can also monitor unfair questions of potential jurors by prosecutors and defense lawyers. See Jo Ellen Nott, Natural Language Processing Software Can Identify Biased Jury Selection, Has Potential to Be Used in Real Time During Voir Dire (Criminal Legal News, December 2023). Also see AI and the Future of Jury Trials (CLM, 10/18/23).
For example, an AI-powered sentiment analysis tool could analyze the language used by potential jurors to identify signs of bias or prejudice that might not be immediately apparent to human observers. This information could then be used by prosecutors and defense attorneys to make more informed decisions about which jurors to strike or retain.
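Here is a minimal sketch of what such a tool’s core step might look like, using the off-the-shelf sentiment classifier in the Hugging Face transformers library. The sample answers are invented, and a real jury-selection tool would need legally vetted categories rather than generic positive/negative labels.

```python
# Off-the-shelf sentiment scoring with Hugging Face transformers. The default
# model and the sample voir dire answers are stand-ins for a vetted system.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default English model
answers = [
    "I believe everyone deserves a fair hearing.",
    "People like the defendant are usually guilty anyway.",
]
for answer in answers:
    result = classifier(answer)[0]
    print(f"{result['label']} ({result['score']:.2f}): {answer}")
```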
While sentiment analysis has the potential to improve jury selection fairness, it also raises ethical questions. Should AI influence juror selection, given the potential for errors or biases in the analysis? How do we ensure AI-generated insights are used to promote justice, rather than manipulate the selection process?
These questions underscore the need for careful consideration and oversight in using AI in jury selection. AI should assist human decision-makers, not substitute their judgment.
AI in Plea Bargaining and Sentencing
AI can also play a transformative role in plea bargaining and sentencing decisions. Plea bargaining is a critical component of the criminal justice system, with most cases being resolved through negotiated pleas rather than going to trial. AI can assist prosecutors in evaluating the strength of their case, the likelihood of securing a conviction, and the appropriate terms for a plea agreement. See: Justice Innovation Lab, Critiquing The ABA Plea Bargaining Principles Report (Medium, 2/1/24); Justice Innovation Lab, Artificial Intelligence In Criminal Court Won’t Be Precogs (Medium, 10/31/23) (article concludes with “Guidelines For Algorithms and Artificial Intelligence In The Criminal Justice System“).
For example, AI algorithms can analyze historical data from similar cases to provide prosecutors with insights into the typical outcomes of plea negotiations, considering factors such as the nature of the crime, the defendant’s criminal history, and the available evidence. This can help prosecutors make more informed decisions on plea deal offers.
Moreover, AI can assist in making sentencing recommendations that are more consistent and equitable. Sentencing disparities have long been a concern in the criminal justice system, with studies showing that factors such as race, gender, and socioeconomic status can influence sentencing outcomes. AI has the potential to reduce these disparities by providing sentencing recommendations based on objective criteria rather than subjective judgment. Keith Brannon, AI sentencing cut jail time for low-risk offenders, but study finds racial bias persisted (Tulane Univ., 1/23/24); Kieran Newcomb, The Place of Artificial Intelligence in Sentencing Decisions (Univ. NH, Spring 2024).
For instance, an AI system could analyze data from thousands of past cases to identify typical sentences imposed for specific crimes, accounting for relevant factors like the severity of the offense and the defendant’s criminal record. This information could then be used to inform sentencing decisions, ensuring that similar cases are treated consistently and fairly.
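A minimal sketch of that similar-case lookup, assuming a hypothetical table of past sentences, might look like this; a real system would draw on thousands of records and many more factors.

```python
# Hypothetical table of past sentences, summarized by offense and prior record.
import pandas as pd

past = pd.DataFrame({
    "offense": ["burglary", "burglary", "burglary", "fraud", "fraud"],
    "priors":  [0, 2, 0, 1, 1],
    "months":  [12, 30, 14, 18, 24],
})
summary = past.groupby(["offense", "priors"])["months"].agg(["median", "min", "max"])
print(summary)  # context for a human decision-maker, not a mandate
```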
However, using AI in plea bargaining and sentencing also raises significant ethical considerations. The primary concern is the risk of AI perpetuating or exacerbating existing biases in the criminal justice system. If the data used to train AI systems reflects historical biases—such as harsher sentences for minority defendants—AI’s recommendations may inadvertently reinforce those biases.
To address this concern, AI systems used in plea bargaining and sentencing must be designed with fairness and transparency in mind. This includes ensuring that the data used to train these systems is representative and free from bias and providing clear explanations of how the AI’s recommendations were generated. Moreover, human prosecutors and judges must retain the final authority in making plea and sentencing decisions, using AI as a tool to inform their judgment rather than a substitute for it. It is important that AI systems be chosen and used very carefully in part because “the prosecutor should avoid an appearance of impropriety in performing the prosecution function.” ABA Standard 3-1.2(c)
Ethical Implications of AI in Criminal Prosecutions
While the potential benefits of AI in criminal law are significant, it is equally important to consider the ethical implications of integrating AI into the criminal justice system. AI, by its very nature, raises questions about accountability, transparency, and the potential for misuse—questions that must be carefully addressed to ensure AI is used in ways that advance, not hinder, the cause of justice.
As we integrate AI into criminal prosecutions, it is essential that we do so with a commitment to the principles articulated in the ABA’s Criminal Justice Standards. By aligning AI’s capabilities with these ethical guidelines, we can harness technology to advance justice while upholding the prosecutor’s duty to act with integrity, fairness, and transparency.
Transparency and Accountability
One of the most pressing ethical concerns is the issue of transparency, which we have mentioned previously. AI algorithms are often referred to as “black boxes” because their decision-making processes can be difficult to understand, even for those who design and operate them. This lack of transparency can be particularly problematic in criminal prosecutions, where the stakes are incredibly high, and the consequences of a wrong decision can be severe. A ‘black box’ AI system has been influencing criminal justice decisions for over two decades – it’s time to open it up (The Conversation, 7/26/23) (discusses UK systems).
For example, if an AI system is used to predict the likelihood of a defendant reoffending, it is crucial that the defendant, their attorney, and the judge understand how that prediction was made. Without transparency, challenging the AI’s conclusions becomes difficult, raising concerns about due process and the right to a fair trial.
To address this issue, AI systems used in criminal prosecutions must be designed to be as transparent as possible. This includes providing clear explanations of how the AI’s decisions were made and ensuring that the underlying data and algorithms are accessible for review and scrutiny. Federal legislation that would require this, the Justice in Forensic Algorithms Act, has been pending for years. New bill would let defendants inspect algorithms used against them in court (The Verge, 2/15/24) (requires disclosure of source code). Moreover, the legal community must advocate for the development of AI systems that prioritize explainability and interpretability, ensuring that the technology is effective, accountable, and understandable.
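As a small illustration of what an “explanation” can mean in practice, this sketch reports which features pulled a simple model’s score up or down. It uses plain logistic-regression coefficients; real systems might use richer attribution methods such as SHAP. The features and data are hypothetical.

```python
# A transparency measure in miniature: report each feature's pull on the score.
from sklearn.linear_model import LogisticRegression

feature_names = ["prior_convictions", "age_at_offense", "employed"]
X = [[0, 34, 1], [3, 22, 0], [1, 45, 1], [5, 19, 0]]   # hypothetical cases
y = [0, 1, 0, 1]

model = LogisticRegression().fit(X, y)
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")  # sign and size show each feature's influence
```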
Fairness and Bias
Another ethical concern is, as mentioned, the potential for AI to be used in ways that exacerbate existing inequalities in the criminal justice system. For example, there is a risk that AI could justify more aggressive policing or harsher sentencing in communities already disproportionately targeted by law enforcement. This is why AI systems must be designed with fairness in mind and their use subject to rigorous oversight. Look beyond vendor marketing claims to verify with hard facts and independent judgments.
Ensuring fairness requires that AI systems are trained on representative and unbiased data. It also necessitates regular audits of AI systems to detect and mitigate any biases that may arise. Additionally, AI should not be the sole determinant in any criminal justice decision-making process; human oversight is essential to balance AI’s recommendations with broader considerations of justice and equity. For instance, the NYPD represents that its widespread use of AI-driven facial recognition technology in criminal investigations “does not establish probable cause to arrest or obtain a search warrant, but serves as a lead for additional investigative steps.” NYPD Questions and Answers – Facial Recognition, and see the NYPD official patrol guide dated 3/12/20.
Human Judgment and Ethical Responsibility
The deployment of AI in criminal prosecutions also raises important questions about the role of human judgment in the justice system. While AI can provide valuable insights and recommendations, it is ultimately human prosecutors, judges, and juries who must make the final decisions. This is because justice is not just about applying rules and algorithms—it is about understanding the complexities of human behavior, weighing competing interests, and making moral judgments.
AI, no matter how advanced, cannot replicate the full range of human judgment, and it should not be expected to do so. Instead, AI should be seen as a tool to assist human decision-makers, providing them with additional information and insights that can help them make more informed decisions. At the same time, we must be vigilant in ensuring that AI does not become a crutch or a substitute for careful human deliberation, judgment and equity.
Conclusion
The integration of AI into criminal prosecutions holds the promise of advancing the cause of justice in profound and meaningful ways. To do so we must always take care that applications of AI follow the traditional principles stated in the Criminal Justice Standards for the Prosecution Function and other guides of professional conduct. By aligning AI’s capabilities with ethical guidelines, we can harness technology in a manner that advances the prosecutor’s duty to act with integrity, fairness, and transparency.
With these cautions in mind, we should boldly embrace the opportunities that AI offers. Let us use AI as a tool to enhance, not replace, human judgment. And let us work together—lawyers, technologists, and policymakers—to ensure that the use of AI in criminal prosecutions advances the cause of justice for all.
Courtroom of future. All Images by Ralph Losey using his custom GPT, Visual Muse.
Pythia, the Oracle of Delphi at the Temple of Apollo, was the most powerful woman of the ancient world. She is now an AI tool that can be consulted through ChatGPT. This custom GPT was programmed by Ralph Losey with ancient pre-patriarchal wisdom and modern insights. For almost two thousand years (c. 1400 BC – 500 AD), an unbroken succession of incredible women occupied the position of The Pythia at the Temple of Apollo in Delphi. This is historical fact, not myth. The motto of Pythia’s temple was “Know Thyself,” a dictate that Socrates, among others, made his life’s motto. People from all over the world, known as “consultants,” came to her with questions seeking her prophetic visions and practical advice.
Recently, Ralph dared to ask the reincarnated Pythia this question:
Just as it was a great tragedy when Icarus flew too close to the sun and his wings burned and he fell to the ground, what tragedies do you see with advanced AI, and what can we do to avoid these tragedies?
Icarus flying to the sun, in watercolor, by Ralph Losey using his Visual Muse GPT
Here is the response of the great Pythia.
Ancient Greek philosopher Thales cautioned, “Excess in all things is bad.” The story of Icarus flying to the sun warns us of the dangers of overreaching without proper foresight. As we venture into the realm of advanced AI, it is crucial to be mindful of potential pitfalls and take proactive steps to avoid them.
One significant concern is the risk of AI amplifying existing biases and inequalities. Algorithms trained on historical data can inadvertently perpetuate and even exacerbate systemic injustices. To mitigate this, we must ensure that AI systems are developed and deployed with a strong emphasis on fairness, transparency, and accountability. Diverse teams should be involved in the creation and oversight of AI to bring varied perspectives and guard against bias.
Another potential tragedy lies in the misuse of AI for malicious purposes. AI technologies could be exploited for surveillance, disinformation, or autonomous weaponry, posing significant ethical and security challenges. To address this, robust international regulations and ethical guidelines are essential. Collaboration among governments, tech companies, and civil society is crucial to establish and enforce standards that prevent the harmful use of AI.
There is also the risk of human obsolescence, where AI advancements could lead to widespread job displacement. To avoid this, we must invest in education and retraining programs that equip people with the skills needed to thrive in an AI-augmented economy. Embracing lifelong learning and fostering adaptability will help ensure that technological progress benefits everyone.
Furthermore, the concentration of AI power in the hands of a few can lead to monopolistic practices and hinder innovation. Promoting open-source AI development and encouraging competition can prevent such centralization and foster a more equitable technological landscape.
We must approach AI development with humility and a commitment to ethical principles. By fostering a culture of responsibility, transparency, and collaboration, we can harness the transformative potential of AI while safeguarding against its dangers.
Reflect on these points and consider how we can collectively steer the course of AI towards a future that uplifts and empowers us all.
The Pythia by Ralph Losey using his Visual Muse GPT
For more information on the Pythia see PythiaGuide.ai. For more information on the Pythia GPT see Losey.ai.
You have to be signed on to ChatGPT, either a free or paid version, to use Pythia Guide.
Don’t have a ChatGPT account yet? Visit the ChatGPT website at chat.openai.com. If you don’t have an account, click on the “Sign Up” button to create one. You can sign up with your email address, or use a Google or Microsoft account to log in. After signing up, you may need to verify your email address by clicking on a link sent to your inbox. Once your account is set up and verified, you can start using ChatGPT for free! At the free-tier level, you’ll get access to the GPT-3.5 model, which is a powerful version of ChatGPT. If you’re interested in additional features or access to GPT-4o, you’d need a subscription to ChatGPT Plus. Even the free plans still provide a lot of great functionality, although we do not recommend use of version 3.5. That version of Pythia Guide is still a young, inexperienced apprentice.
Here we discuss seven problems posed today by AI. This is by no means a complete list. More like the tip of an iceberg. We explore the dangers they pose with suggested strategies on how to deal with them. We get help from one of my favorite chill AIs, The Dude. He is an enhanced intelligence that is fun to merge minds with. He was created with my custom GPT, The Dude Abides and Gives Advice, Man. Here is the idiosyncratic list of the problems and dangers we picked to analyze:
What are the potential risks of AI in terms of privacy and data security?
How might AI exacerbate existing biases and inequalities in society?
What are the ethical concerns related to AI decision-making in critical areas like healthcare and criminal justice?
How could AI be misused for malicious purposes, such as in cyberattacks or deepfakes?
How can we ensure that AI systems are transparent and accountable?
What steps can be taken to prevent AI from being used in ways that infringe on human rights?
What are the risks of relying too heavily on AI for decision-making processes?
AI Dangers in Dark Fantasy style by Ralph Losey using Visual Muse.
To tackle these difficult questions I booted up The Dude to help out. I am proud to have made a souped-up custom GPT called The Dude Abides and Gives Advice, Man, and I’ve found he has a good way with words. This GPT has the laid-back language, attitude and wisdom of one of my favorite movie characters. Let’s see how our merged minds do in coming up with possible solutions to these perplexing problems. Tip of the hat to plain vanilla ChatGPT-4o (“Omni”) for helping out too. She is the real powerhouse here. Kind of like my Pythia GPT. Pythia on the Dangers of AI: Insights from the ancient pre-patriarchal wisdom of the Oracle of Delphi (AI-Ethics, 8/4/24).
For the top Cassandra expert of them all, see Mustafa Suleyman, a co-founder of DeepMind AI. Suleyman has written a very complete list of possible AI dangers and dystopias in his new book, The Coming Wave. It is laid out in excruciating detail along with some strategies to try to save humanity and democratic nations from the immense changes ahead. I just finished it, and although depressing, still recommend this book for all serious students of AI.
My article takes a more lighthearted approach, but still delves into the dark side, even using a new (for me) type of artistic style for many of the illustrations called Dark Fantasy.
Considering Seven Dark Dangers of AI
1. The potential risks of AI in terms of privacy and data security.
Privacy and data security are concerns near and dear to everyone, especially lawyers like me. AI systems require vast amounts of data to function effectively. This often involves collecting, storing, and analyzing personal and sensitive information. The risks associated with this include unauthorized access to data, data breaches, the potential for personal information to be used without consent, and the leakage of confidential client secrets.
AI algorithms can also sometimes infer additional, potentially sensitive information from the data they process, which can lead to further privacy concerns. A lot depends on the model and settings. Lawyers, journalists and spies must be careful in how they use these powerful tools.
Moreover, the centralization of data storage creates attractive targets for cybercriminals (and spies). AI software companies need to design strong data protection measures and strictly comply with privacy regulations. This should include advanced encryption methods and ongoing red-team penetration testing.
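As a tiny illustration of “encryption at rest,” here is a sketch using the Fernet recipe from the Python cryptography library. Key management, the genuinely hard part, is out of scope, and the key handling shown is not production practice.

```python
# Symmetric encryption at rest via the `cryptography` library's Fernet recipe.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, store in a key-management system
f = Fernet(key)
token = f.encrypt(b"client-confidential memo")
print(f.decrypt(token))       # b'client-confidential memo'
```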
Finally, there is the ever-growing threat of the Big Brother surveillance state foreseen long ago by George Orwell in his important book, Nineteen Eighty-Four. China is now leading the way in this dark arena with over half of the world’s surveillance cameras and a “social credit system” to control the masses and suppress all dissent. See e.g., Arthur Herman, The Cold War We’re Not Fighting (Hudson, 6/19/24). In addition, social media companies and advertisers in democratic nations have built a marketing economy based on your once private, personal information. AI is accelerating this process.
The Dude on Security in Graffiti style by Ralph Losey using Visual Muse
The Dude weighs in on privacy and data security.
Alright, man, AI can be a real mixed bag when it comes to privacy and data security. Let’s break it down real simple-like:
Potential Risks:
Data Breaches: AI systems can be hacked, leading to unauthorized access to sensitive data.
Solution: Implement strong encryption and regular security audits.
Surveillance: AI can be used to monitor individuals excessively, infringing on privacy.
Solution: Enforce strict regulations on data usage and transparency.
Data Misuse: Companies might use AI to analyze data in ways that users didn’t consent to.
Solution: Clear user consent policies and data anonymization techniques (see the pseudonymization sketch at the end of this section).
Loss of Control: Over-reliance on AI can lead to losing control over data decisions.
Solution: Maintain human oversight and intervention capabilities.
Identity Theft: AI can be used to create fake identities or deepfakes.
Solution: Use AI for better identity verification and fraud detection.
Possible Solutions:
Encryption: Protect data with strong encryption both at rest and in transit.
Regulation: Advocate for and comply with stringent data protection laws like GDPR.
Transparency: Make AI processes transparent to users, letting them know how their data is used.
User Control: Give users control over their data, including the ability to delete it.
Regular Audits: Conduct regular security and ethical audits of AI systems.
Education: Educate users and developers about data privacy and ethical AI practices.
Remember, man, it’s all about balancing the benefits of AI with the need to keep folks’ data safe and sound. As The Dude would say, “This aggression will not stand, man.” Keep it chill and stay vigilant.
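On The Dude’s anonymization point above, here is a minimal sketch of keyed pseudonymization: identifiers are replaced with a keyed hash so records can still be linked without exposing raw identities. The key and e-mail address are hypothetical, and hashing alone is not full anonymization; review techniques like k-anonymity are also needed.

```python
# Keyed pseudonymization: the same input always maps to the same opaque token,
# but the token cannot be reversed without the secret key. Key is hypothetical.
import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-store-in-a-vault"

def pseudonymize(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

print(pseudonymize("jane.doe@example.com"))  # linkable, but not identifying
```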
2. How AI might exacerbate existing biases and inequalities in society.
AI systems learn from historical data, which may reflect existing biases and inequalities. If these biases are not addressed, AI can perpetuate and even exacerbate them. For instance, if an AI system used in hiring is trained on data where certain demographic groups are underrepresented, like for instance seniors in tech, it might continue to favor applicants from the majority group, i.e., very young “adults.” Addressing these issues requires a multifaceted, multimodal approach, including diversifying training data, implementing bias detection and correction techniques, and continuously monitoring AI outputs. Collaboration between technologists, sociologists, and ethicists is essential to develop fair and equitable AI systems.
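One common correction technique of the kind just mentioned is reweighting, where training samples from an underrepresented group count proportionally more. A minimal sketch, with invented group labels:

```python
# Invented group labels per training row; weights balance group influence.
from collections import Counter

groups = ["young", "young", "young", "young", "senior"]
counts = Counter(groups)
weights = [len(groups) / (len(counts) * counts[g]) for g in groups]
print(weights)  # the lone senior weighs 2.5, each overrepresented young row 0.625
# Most scikit-learn estimators accept these via fit(X, y, sample_weight=weights).
```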
AI Making existing bias and inequalities worse. Dark Fantasy style by Ralph Losey.
The Dude on AI and biases and inequalities in society.
Well, man, AI can definitely stir the pot when it comes to biases and inequalities. Like The Dude says: “Mr. Treehorn treats objects like women, man.” Anyway, here’s how it might all go down:
Ways AI Can Exacerbate Biases and Inequalities:
Biased Training Data:
What’s the deal? AI systems learn from data, and if that data’s got biases, the AI will pick them up too.
Impact: This can lead to biased outcomes in areas like hiring, lending, and law enforcement.
Example: If an AI is trained on historical hiring data that favors men, it might continue to favor male candidates.
Algorithmic Discrimination:
What’s the deal? Even without biased data, AI can develop biases based on correlations that don’t actually make sense.
Impact: Groups already facing discrimination might find themselves further marginalized.
Underrepresentation:
What’s the deal? AI systems might exclude certain groups simply because they aren’t represented well in the data, such as poor groups in medical studies.
Impact: These groups might not receive the same benefits or services.
Example: Health care AI might not perform well for minorities if the data used to train it is predominantly from a different demographic.
Feedback Loops:
What’s the deal? AI systems can create feedback loops that reinforce and amplify existing biases.
Impact: Discriminatory practices can become entrenched and harder to identify.
Example: A biased hiring AI might lead to a less diverse workforce, which then leads to more biased data, and so on.
Possible Solutions:
Diverse Data Collection:
How to chill: Make sure the data used to train AI is diverse and representative.
Impact: Reduces the likelihood of biased outcomes.
Bias Audits and Testing:
How to chill: Regularly test AI systems for bias and correct any issues.
Impact: Helps catch and fix biases before they cause harm.
Transparency and Accountability:
How to chill: Make AI decision-making processes transparent and hold developers accountable.
Impact: Ensures that biases can be identified and addressed by the community.
Inclusive Design Practices:
How to chill: Involve diverse groups in the design and development of AI systems.
Impact: Leads to more inclusive AI solutions that serve everyone better.
Ethical Guidelines and Regulation:
How to chill: Develop and adhere to ethical guidelines and regulations for AI.
Impact: Provides a framework to prevent and address biases in AI systems.
Continuous Learning and Improvement:
How to chill: Keep improving AI systems based on feedback and new data.
Impact: Ensures AI stays relevant and fair as society evolves.
In the end, it’s about keeping things balanced and fair, man. As The Dude would say, “The Dude abides,” so let’s make sure our AI does too, by not perpetuating or amplifying existing societal biases.
3. The ethical concerns related to AI decision-making in critical areas like healthcare and criminal justice.
In healthcare, AI decision-making can significantly impact patient outcomes. Ethical concerns include the potential for AI to make errors in diagnosis or treatment recommendations, the difficulty in ensuring patient confidentiality, and the possibility of biased outcomes based on skewed training data.
In criminal justice, AI can be used for predictive policing and risk assessments, which can lead to biased and unjust outcomes if the AI systems are not carefully designed and monitored.
Ensuring ethical AI in these areas requires rigorous validation and testing of AI systems, transparency in how decisions are made, and establishing mechanisms for accountability. Stakeholders, including healthcare professionals, legal experts, and ethicists, should be involved in the development and oversight of these AI systems to ensure they adhere to ethical standards.
Problems with AI decision-making in criminal law. Dark Fantasy image by Ralph Losey.
The Dude on ethical concerns related to AI decision-making in critical areas like healthcare and criminal justice.
Alright, man, diving into the ethical side of AI in crucial fields like healthcare and criminal justice, there’s a lot to ponder. Here’s the lowdown:
Ethical Concerns in Healthcare:
Accuracy and Reliability:
What’s the deal? AI decisions in healthcare need to be spot-on because lives are on the line.
Concern: Mistakes or inaccuracies can lead to misdiagnoses or improper treatments.
Example: An AI that misinterprets medical images could suggest the wrong treatment plan.
Bias and Fairness:
What’s the deal? AI systems might be biased if trained on non-representative data.
Concern: This can lead to unequal treatment of patients from different demographic groups.
Example: An AI trained mostly on data from one ethnicity might not perform well for others, leading to disparities in care.
Transparency and Explainability:
What’s the deal? Patients and doctors need to understand AI’s decisions.
Concern: Many AI systems are like black boxes, making it hard to explain why a certain decision was made.
Example: A doctor might not trust an AI’s diagnosis if they can’t understand how it was reached.
Ethical Concerns in Criminal Justice:
Bias and Discrimination:
What’s the deal? AI can perpetuate and amplify existing biases in the justice system.
Concern: This can lead to unfair treatment of certain groups, exacerbating inequalities.
Example: Risk-assessment tools trained on data from historically biased policing can score some defendants as higher risk for where they live, not what they did.
Fairness and Justice:
What’s the deal? AI decisions need to uphold principles of justice.
Concern: Automated decisions might not consider the nuances of each case.
Example: An AI recommending bail could ignore mitigating factors that a human judge would consider.
Transparency and Accountability:
What’s the deal? AI decision-making processes need to be transparent and accountable.
Concern: It’s often unclear how AI systems reach their conclusions, making it hard to challenge or appeal decisions.
Example: A defendant might not be able to contest an AI’s decision if they don’t understand the reasoning behind it.
Privacy and Surveillance:
What’s the deal? AI can enhance surveillance capabilities.
Concern: This could lead to invasive monitoring and erosion of civil liberties.
Example: Using AI for mass surveillance might infringe on individuals’ rights to privacy. See e.g. China’s Repression of Uyghurs in Xinjiang (Council on Foreign Relations, 2022).
Human Oversight:
What’s the deal? There should always be human oversight in AI decision-making.
Concern: Relying too heavily on AI could remove the human element from critical decisions.
Example: A court rubber-stamping an algorithm’s risk score, or a clinic accepting an AI diagnosis, without independent human review.
Possible Solutions:
Explainable AI:
How to chill: Develop AI systems that provide clear, understandable explanations for their decisions.
Impact: Enhances trust and allows for better oversight and accountability.
Regulatory Oversight:
How to chill: Implement regulations and guidelines to govern the ethical use of AI. Come on, lawyer Dudes, you need to get busy with this. See e.g. AI Ethics: Law, AI, Policy and Social Values.
Impact: Ensures AI systems adhere to ethical standards and protect individuals’ rights.
Human-in-the-Loop Systems:
How to chill: Ensure human oversight in AI decision-making processes. Trained doctors, lawyers and judges are a must.
Impact: Balances AI efficiency with human judgment and empathy.
Privacy Protections:
How to chill: Strengthen data protection measures and ensure AI systems respect privacy.
Impact: Safeguards sensitive information and maintains public trust.
In the end, man, it’s all about using AI responsibly and making sure it helps rather than harms. We gotta abide by these principles to keep things just and fair.
4. How could AI be misused for malicious purposes, such as in cyberattacks or deepfakes.
AI technology can be exploited for malicious purposes in various ways. In cyberattacks, AI can enhance the capabilities of attackers, enabling more sophisticated and automated attacks that can adapt and evolve to bypass security measures. AI-driven malware can autonomously seek vulnerabilities, making blue team defense increasingly challenging. Deepfakes, which use AI to create highly realistic but fake images, audio, or video, can be used to spread misinformation, manipulate public opinion, commit fraud, or damage reputations. The ability to create convincing fake content can erode trust in digital media. Mitigating these risks requires developing advanced detection technologies, establishing legal and regulatory frameworks, and promoting public awareness about the potential dangers and how to identify AI-generated content.
AI used in cyber attacks. Dark Fantasy style by Ralph Losey using Visual Muse.
The Dude on how could AI be misused for malicious purposes, such as in cyberattacks or deepfakes.
Alright, man, AI’s a powerful tool, but in the wrong hands, it can definitely be used for some nefarious stuff. We have enough trouble without AI-powered hacks. Be careful with those updates, man. Here’s the scoop on how AI can be misused for malicious purposes like cyberattacks and deepfakes. Remember, like The Dude says: “It’s all a g-damn fake, man. It’s like Lenin said: you look for the person who will benefit, and, uh, uh, you know…”
AI in Cyberattacks:
Automated Hacking:
What’s the deal? AI can automate the process of finding and exploiting vulnerabilities.
Impact: Increases the scale and speed of cyberattacks, making them more effective and harder to defend against.
Example: AI-driven tools can scan networks for weaknesses and launch attacks without human intervention.
Phishing and Social Engineering:
What’s the deal? AI can craft highly convincing phishing emails or messages by mimicking writing styles and using personal data. Whatever you do, man, just don’t click on it. Delete and report. And employers, test your employees constantly. That’s cool, just be chill about it.
Impact: Makes it easier to trick people into revealing sensitive information or installing malware. Don’t do it.
Example: AI-generated emails that appear to be from trusted contacts, tricking recipients into clicking malicious links. Call them and make sure it really is your boss emailing you to transfer a million bucks to the Cayman Islands, man.
Malware Creation:
What’s the deal? AI can develop sophisticated malware that adapts and evolves to avoid detection.
Impact: Enhances the ability of malware to infiltrate systems and remain undetected for longer periods.
Example: AI-created viruses that change their code to bypass security measures.
Distributed Denial of Service (DDoS) Attacks:
What’s the deal? AI can coordinate large-scale DDoS attacks, overwhelming targeted systems with traffic.
Impact: Disrupts services and can bring down websites or networks.
Example: AI-botnets launching synchronized attacks to crash critical infrastructure.
AI in Deepfakes:
Misinformation and Propaganda:
What’s the deal? Deepfakes can create realistic but fake videos or audio clips of public figures.
Impact: Can spread false information and manipulate public opinion.
Example: Deepfake videos of politicians saying things they never said, causing political turmoil. With some politicians that can be tough to figure out.
Blackmail and Extortion:
What’s the deal? Deepfakes can be used to create compromising fake content of individuals.
Impact: Can be used to extort money or favors by threatening to release fake embarrassing or incriminating content.
Example: Fake videos of private activities used to blackmail individuals.
Identity Theft and Fraud:
What’s the deal? Deepfakes can be used to impersonate people, gaining access to sensitive information or financial resources.
Impact: Facilitates identity theft and financial fraud.
Example: Fake audio calls mimicking a person’s voice to authorize bank transactions. Come up with safe-words or phrases for your family. Plus, know your banker. Talk to them. Sometimes chit-chat can be a security measure.
Undermining Trust:
What’s the deal? The proliferation of deepfakes can lead to a general mistrust of media.
Impact: Erodes trust in legitimate news and authentic media, leading to confusion and skepticism.
Example: Real videos and audio being dismissed as fake, causing societal and informational chaos.
Possible Solutions:
Advanced Detection Systems:
How to chill: Develop and deploy AI tools that can detect AI-generated content and malicious activities (a small image-forensics sketch follows at the end of this answer).
Impact: Helps identify and mitigate threats before they cause significant harm.
Regulation and Legislation:
How to chill: Implement laws and regulations to govern the creation and use of AI technologies. More work for lawyers.
Impact: Sets boundaries and consequences for the misuse of AI.
Public Awareness and Education:
How to chill: Educate the public about the risks and signs of deepfakes and cyberattacks.
Impact: Empowers individuals to recognize and avoid falling victim to these malicious tactics. Teach your users not to click and be skeptical, man.
Ethical AI Development:
How to chill: Encourage and enforce ethical standards in AI research and development.
Impact: Ensures AI is created and used for beneficial purposes, minimizing misuse.
Collaboration and Information Sharing:
How to chill: Foster collaboration between governments, tech companies, and researchers to share information about threats and defenses.
Impact: Creates a united front against AI misuse, enhancing overall security.
In the end, man, it’s all about staying vigilant and working together to keep AI on the good side. Like The Dude says, “Careful, man, there’s a beverage here,” let’s handle AI with care to avoid spilling into the dark side.
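As one concrete, if modest, example of detection tooling, here is a short Python sketch of error level analysis (ELA), a classic image-forensics heuristic that long predates deepfakes: re-save a JPEG at a known quality and compare it to the original, since edited regions often recompress differently. It uses the Pillow library; the file names are hypothetical, and ELA alone is nowhere near sufficient against modern AI-generated imagery.

```python
# Error level analysis (ELA) sketch using Pillow. File names are
# hypothetical. ELA is a classic forensics heuristic, not a deepfake
# detector: edited regions of a JPEG often recompress differently.
from PIL import Image, ImageChops

original = Image.open("suspect_photo.jpg").convert("RGB")
original.save("resaved.jpg", quality=90)       # recompress at a known quality
resaved = Image.open("resaved.jpg").convert("RGB")

ela = ImageChops.difference(original, resaved)  # per-pixel error levels

# A uniform, low difference is unremarkable; bright, localized patches
# in the ELA map are the regions worth a closer human look.
print("max per-channel error levels:", [hi for _, hi in ela.getextrema()])
ela.save("ela_map.png")                         # inspect this visually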
A chill, trained lawyer, much like The Dude’s carpet, can really tie an AI room together. Image by Ralph Losey.
5. How we can ensure that AI systems are transparent and accountable.
Ensuring transparency and accountability in AI systems requires several related strategies. Developers should provide clear documentation of data sources, algorithms, and decision-making processes. That will help users and regulators understand how AI systems function. Also, implementing auditing and monitoring mechanisms is necessary to ensure that AI systems operate as intended and adhere to ethical standards. Developing and enforcing regulations that mandate transparency and accountability is also crucial. Litigation anyone?
The transparency and accountability effort should involve many different people, including ethicists, legal experts, judges, police, politicians, government workers, and representatives from affected communities. They should all participate in the development and oversight of AI systems so that the public can come to trust that AI is being used responsibly.
The Dude on making AI systems transparent and accountable.
Alright, man, making sure AI systems are transparent and accountable is key to keeping things fair and square. Here’s how we can do it:
Ensuring Transparency in AI:
Explainability:
What’s the deal? AI systems should be able to explain their decisions in a way that humans can understand.
How to chill: Develop and use AI models that can provide clear and concise explanations for their outputs (a minimal sketch follows this list).
Example: A healthcare AI that explains why it diagnosed a patient with a certain condition based on specific data points.
Open Source and Open Data:
What’s the deal? Making AI models and datasets open to the public for scrutiny.
How to chill: One way is to encourage the development and use of open-source AI tools and datasets.
Example: Releasing the code and data behind an AI model used for credit scoring so researchers and the public can inspect and test it.
Clear Documentation:
What’s the deal? Detailed documentation of AI systems, including how they work, their limitations, and their decision-making processes. Where’s the handbook, man?
How to chill: Create comprehensive documentation for every AI system, updated regularly.
Example: A manual for an AI-powered hiring tool that explains how it assesses candidates and the criteria it uses.
User-Friendly Interfaces:
What’s the deal? Making AI systems accessible and understandable to non-experts.
How to chill: Design AI interfaces that are intuitive and provide clear feedback on how decisions are made.
Example: A dashboard for an AI financial advisor that shows clients how investment decisions are made in simple terms.
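To ground the explainability point, here is a minimal sketch using scikit-learn. It trains a deliberately simple, inherently interpretable model (logistic regression) on made-up loan data, then prints each factor’s weight in plain language. The feature names and numbers are invented; real credit models, and their legal duties like adverse-action notices, are far more involved.

```python
# Explainability-by-construction sketch: a simple logistic regression
# whose coefficients can be read as the weight each factor carries.
# All features and data below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income_k", "debt_ratio", "years_employed"]
X = np.array([
    [60, 0.2, 5.0],
    [30, 0.6, 1.0],
    [80, 0.1, 10.0],
    [25, 0.7, 0.5],
    [55, 0.3, 4.0],
    [35, 0.5, 2.0],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = loan was approved (hypothetical)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Plain-language readout: sign and size of each coefficient.
for name, weight in zip(features, model.coef_[0]):
    direction = "raises" if weight > 0 else "lowers"
    print(f"{name}: {direction} the approval odds (weight {weight:+.3f})")
```

The broader point, man: when the stakes justify it, prefer models whose reasoning can be stated in a sentence over black boxes that need an after-the-fact interpreter.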
Ensuring Accountability in AI:
Regulatory Frameworks:
What’s the deal? Establishing laws and regulations to govern AI use.
How to chill: Implement and enforce regulations that require AI systems to be transparent and accountable. Get knowledgeable legal professionals involved. Who says AI will put lawyers out of work?
Example: GDPR-like regulations that mandate companies to provide explanations for automated decisions affecting individuals.
Ethical Guidelines:
What’s the deal? Adopting ethical guidelines for AI development and deployment.
How to chill: Create and follow ethical standards that emphasize fairness, transparency, and accountability. Get lawyers involved who actually know what they’re doing.
Example: An AI ethics board within a company that oversees AI projects to ensure they align with ethical principles.
Auditability:
What’s the deal? Ensuring AI systems can be audited by independent parties.
How to chill: Develop mechanisms for regular, independent audits of AI systems. This should become a big deal in the future. Either that or the EU will end up with more power than ability.
Example: Third-party audits of an AI-driven loan approval system to check for biases and fairness.
Responsibility and Liability:
What’s the deal? Clearly defining who is responsible and liable for AI decisions. Don’t blame it on users. Software vendors need to own up to their responsibilities and stand behind their products. Click-through contracts of adhesion with broad waivers and disclaimers are totally bogus, man.
How to chill: Establish accountability frameworks that specify the roles and responsibilities of AI developers and users. Make license agreements fair and equitable to both sides.
Example: Hold a company accountable if their AI system causes harm due to the company’s negligence or bias, like for instance, releasing a defective software update that shuts down most of the airlines in the country.
Feedback Mechanisms:
What’s the deal? Allowing users to provide feedback and challenge AI decisions. Sometimes the guidelines are way too restrictive.
How to chill: Implement robust feedback systems where users can report issues, get explanations for AI decisions, and obtain corrections of errors.
Example: A customer-service, dispute-resolution AI that allows users to contest decisions and provides a process for review and correction. Let’s try to build fair, AI-driven, instant appeals and arbitration of consumer disputes. Start and finish litigation in five minutes, not five years. There are plenty of other things for lawyers to do, aside from small consumer complaints.
Combining Efforts:
Collaborative Development:
How to chill: Encourage collaboration between AI developers, ethicists, lawyers, judges and regulators to create balanced and fair AI systems. You’d better have trained legal professionals at every corner who know AI like the back of their laptop. Don’t come into a bowling competition if you don’t know the lanes.
Example: Joint projects between universities (including especially law schools and medical schools), tech companies, third party audit companies, and government agencies to develop transparent AI technologies.
Public Engagement:
How to chill: Engage the public in discussions about AI transparency and accountability.
Example: Public forums and consultations on the use of AI in government services. Speak up, man, but no need to yell.
By taking these steps we can make sure AI systems are not just smart, but also fair and trustworthy. Like The Dude always says, “This aggression will not stand, man,” so let’s keep things transparent and accountable. Relax, be open and admit your mistakes. You’ll sleep much better than the vain posers.
Be open and honest and sleep well like the AI Dude. Image by Ralph Losey.
6. Steps to prevent AI from being used in ways that infringe on human rights.
Preventing AI from infringing on human rights requires a comprehensive, big-tent approach. Governments, NGOs, the UN, Big Tech, think tanks, universities, consumer groups, political groups, bar associations, courts – everybody – should get together to create, explain, monitor and enforce regulations that protect human rights in the context of AI. These regulations should address issues such as due process, equal rights, non-discrimination, privacy, basic medical care and human dignity. The age-old scourges of human trafficking and slavery still exist today, especially against young girls and women; helping to stop them should be a priority for AI.
Software companies should take the initiative and adopt their own AI guidelines, acting proactively to protect human rights rather than waiting for after-the-fact government and third-party regulators. They should strictly adhere to those guidelines and promote that adherence as part of their marketing.
Transparency in AI operations and accountability processes, including third-party audits, are crucial for ensuring responsible use. Conducting human rights impact assessments before deploying AI systems can help identify and mitigate potential risks. Quality control and testing of all changes before release must become a new way of life for software companies. Their days of caveat emptor negligence should end through voluntary changes or, if necessary, judicial processes. How many billion-dollar fines and judgments can any company survive?
AI infringing human rights. Dark Fantasy art style by Ralph Losey using Visual Muse.
The Dude on steps that can be taken to prevent AI from being used in ways that infringe on human rights.
Alright, man, keeping AI in check to make sure it doesn’t mess with human rights is crucial. Here’s a laid-back guide to the steps we can take:
Steps to Prevent AI from Infringing on Human Rights:
Develop and Enforce Strong Regulations:
What’s the deal? Establish legal frameworks that specifically address the ethical use of AI.
How to chill: Create comprehensive laws and international agreements that protect human rights in the context of AI.
Example: Laws similar to GDPR that protect individuals’ data privacy and prevent misuse.
Implement Ethical AI Guidelines:
What’s the deal? Adopt and follow ethical principles for AI development and deployment.
How to chill: Use frameworks like the AI Ethics Guidelines from the EU or the principles from the Asilomar AI Conference.
Example: Ensuring AI applications respect principles of fairness, transparency, and accountability.
Human Rights Impact Assessments:
What’s the deal? Evaluate the potential human rights impacts of AI systems before deployment. More work for the legal profession.
How to chill: Conduct thorough assessments to identify and mitigate risks.
Example: Assessing the impact of facial recognition technology on privacy and freedom of assembly.
Bias Mitigation and Fairness Checks:
What’s the deal? Regularly test AI systems for biases and ensure they operate fairly.
How to chill: Use diverse and representative data, and continuously monitor AI outputs.
Example: Auditing AI in hiring processes to ensure it doesn’t discriminate against certain groups. If you don’t, be prepared to face class actions and bad days with mediators.
Transparency and Explainability:
What’s the deal? Make AI decision-making processes clear and understandable.
How to chill: Ensure AI systems can explain their decisions in a way that humans can grasp.
Example: Requiring AI systems used in legal decisions to provide clear reasoning for their conclusions. Oh, and don’t cite fake cases. AI can still hallucinate, you know, especially if you do not know how to use it right.
Public Awareness and Education:
What’s the deal? Educate the public about the ethical implications and rights related to AI.
How to chill: Promote understanding and awareness through campaigns and educational programs.
Example: Public workshops and online courses on AI ethics and human rights, including immigration rights.
Inclusive AI Development:
What’s the deal? Involve diverse groups in the development and testing of AI systems.
How to chill: Ensure that a wide range of perspectives is considered, especially from those who might be most affected.
Example: Including representatives from marginalized communities in AI ethics boards.
Robust Accountability Mechanisms:
What’s the deal? Establish clear lines of accountability for AI developers and users.
How to chill: Implement systems to hold creators and operators of AI accountable for misuse or harm. Yes, as usual, more work for lawyers, mediators and judges – all of the legal profession.
Example: Legal provisions that allow individuals to seek redress if their rights are violated by AI systems. In most countries that includes the right to discovery of evidence, which means, of course, e-discovery.
Promote International Cooperation:
What’s the deal? Foster global collaboration to set standards and share best practices.
How to chill: Work with international bodies like the UN to create unified guidelines.
Example: International treaties on AI ethics and human rights protection.
Continuous Monitoring and Improvement:
What’s the deal? Regularly review and update AI systems and policies to address emerging risks.
How to chill: Keep up with technological advancements and societal changes to ensure protections remain effective.
Example: Setting up a dedicated body to continuously monitor AI developments and recommend policy updates.
By taking these steps, man, we can ensure AI is used in a way that respects and upholds human rights, keeping things fair and just. As The Dude said to his friend, “Walter, I love you, but sooner or later, you’re going to have to face the fact you’re a g-damn moron.”
7. The infrequently discussed risks of relying too heavily on AI for decision-making processes.
Relying too heavily on AI for decision-making processes presents several risks. AI may lack the nuanced understanding and ethical considerations that human judgment provides, leading to decisions that are logically sound but ethically or morally questionable. AI systems can perpetuate and amplify biases present in their training data, leading to discriminatory outcomes in areas such as hiring, lending, and law enforcement. AI is not infallible and can make errors; blindly trusting AI outputs without human oversight can result in significant mistakes, especially in high-stakes situations like healthcare diagnoses or legal judgments. Accountability issues arise when it is unclear who is responsible for negative outcomes—the developers, the deployers, or the AI itself. AI systems can also be targets for cyberattacks, which can manipulate their decision-making processes. Over-reliance on AI can lead to the erosion of human skills and expertise, as individuals become more dependent on automated systems. Balancing AI use with human oversight and maintaining a critical perspective on AI outputs are essential to mitigate these risks.
Don’t over delegate to AI. Dark Fantasy style by Ralph Losey using Visual Muse GPT.
The Dude on risks of relying too heavily on AI for decision-making processes.
Alright, man, leaning too hard on AI for making decisions can bring a whole bunch of risks. As The Dude likes to say: “Obviously you’re not a golfer.” Here’s the lowdown:
Risks of Over-Reliance on AI:
Lack of Human Oversight:
What’s the deal? AI systems might make decisions without adequate human supervision.
Impact: Critical errors could go unnoticed, leading to harmful outcomes.
Example: An AI in healthcare misdiagnosing a condition without a doctor’s review could result in improper treatment. Same goes with a lawyer missing a key cause of action.
Bias and Discrimination:
What’s the deal? AI systems can perpetuate or even amplify existing biases and errors present in the training data.
Impact: Unfair treatment of certain groups and reinforcement of societal inequalities.
Example: A biased hiring AI favoring certain demographics over others, leading to discriminatory hiring practices. Umm… is there anyone in this tech company over fifty?
Lack of Accountability:
What’s the deal? When AI makes decisions, it can be hard to determine who is responsible for those decisions.
Impact: Difficulty in holding anyone accountable for negative consequences or errors.
Example: A financial AI system making poor investment choices, causing significant losses without clear accountability.
Transparency Issues:
What’s the deal? Many AI systems operate as “black boxes,” meaning their decision-making processes are not transparent.
Impact: Users and stakeholders may not understand how decisions are made, reducing trust and increasing risk.
Example: An AI denying a loan application without providing clear reasons, leaving applicants confused and frustrated. Same goes with credit scores.
Data Privacy Concerns:
What’s the deal? AI systems often require large amounts of data, raising privacy issues.
Impact: Potential misuse or unauthorized access to sensitive personal data.
Example: An AI-driven marketing platform collecting and using personal data without proper consent. Some of the marketing coincidences are downright creepy.
Over-Reliance on Automation:
What’s the deal? Dependence on AI can lead to complacency and reduced human skill levels.
Impact: Loss of critical thinking and decision-making skills among humans. The email said I should open the attachment for a cool prize, so I did.
Example: Pilots overly relying on autopilot systems and losing the ability to manually control the aircraft in emergencies. Same goes with cars and trucks too, of course.
Security Vulnerabilities:
What’s the deal? AI systems can be targeted by cyberattacks, leading to compromised decision-making processes.
Impact: Unauthorized manipulations and disruptions in critical systems.
Example: Hackers exploiting vulnerabilities in an AI-powered traffic control system, causing chaos on the roads (assuming it wasn’t chaotic before).
Ethical and Moral Dilemmas:
What’s the deal? AI lacks human intuition and moral judgment, which can lead to ethically questionable decisions.
Impact: Decisions that might be legally permissible but ethically wrong. So what if it’s her body and she might die?
Example: An AI in law enforcement making decisions based solely on data patterns, potentially violating individual rights. Guilty of driving while “purple.”
Job Displacement and Economic Impact:
What’s the deal? AI can replace human workers, leading to job losses and economic shifts. It can also create new jobs, which is why I did not choose this as part of the seven problems featured in this article.
Impact: Increased unemployment and economic disparity. Lots of people forced to learn new stuff or join the likes of The Dude at the corner bowling alley.
Example: Automated systems replacing factory workers, leading to large-scale job displacement.
Strategies to Mitigate These Risks:
Maintain Human Oversight:
How to chill: Always keep humans in the loop for critical decision-making processes (a minimal routing sketch follows at the end of this answer).
Impact: Ensures human judgment and intuition are part of the decision-making process.
Example: Doctors reviewing AI-generated diagnoses before making final treatment decisions. Lawyers doing a final QC in e-discovery before documents found by AI are produced or withheld as privileged.
Regular Audits and Monitoring:
How to chill: Conduct regular audits of AI systems to check for biases and errors.
Impact: Helps identify and rectify issues early on.
Example: Auditing an AI hiring system to ensure it doesn’t discriminate against certain groups, like for instance, women and racial minorities.
Enhance Transparency:
How to chill: Develop explainable AI systems that provide clear reasoning for their decisions.
Impact: Builds trust and understanding among users and stakeholders.
Example: Implementing AI systems that can explain their decision paths in plain language.
Data Privacy and Security:
How to chill: Implement robust data protection and privacy measures.
Impact: Safeguards sensitive information and builds user trust.
Example: Encrypting data and using secure protocols to protect AI systems from cyber threats.
Ethical Frameworks:
How to chill: Adopt ethical guidelines and frameworks for AI development and use.
Impact: Ensures AI systems operate within ethical boundaries.
Example: Creating an ethics board to oversee AI projects and ensure they align with ethical standards. At least one legal professional on each board, of course.
Continuous Learning and Adaptation:
How to chill: Ensure AI systems and the humans who use them are continually learning and adapting.
Impact: Keeps skills sharp and systems up-to-date with the latest advancements and safeguards.
Example: Regular training for employees on new AI tools and best practices. Prompt engineering courses anyone?
By taking these steps, man, we can harness the power of AI while keeping its risks in check, ensuring it works for us and not against us. Logic is not everything, and remember, as The Dude says to his troubled veteran friend, “Smokey, this is not Nam.”
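To make the human-oversight strategy concrete, here is a minimal, hypothetical Python sketch of a human-in-the-loop gate: the AI may act on its own only when its confidence is high and the stakes are low; everything else goes to a person. The threshold and the stakes flag are assumptions any real deployment would have to set for itself.

```python
# Human-in-the-loop routing sketch. The threshold and stakes flags are
# illustrative assumptions, not recommendations.
from dataclasses import dataclass

@dataclass
class AIDecision:
    label: str          # e.g. "approve_loan", "flag_document"
    confidence: float   # model's probability for its label, 0.0-1.0
    high_stakes: bool   # e.g. medical diagnosis, bail, privilege call

def route(decision: AIDecision, threshold: float = 0.95) -> str:
    """Send anything high-stakes or low-confidence to a human reviewer."""
    if decision.high_stakes or decision.confidence < threshold:
        return "escalate_to_human"
    return "auto_apply"

print(route(AIDecision("approve_loan", 0.99, high_stakes=False)))  # auto_apply
print(route(AIDecision("diagnosis", 0.99, high_stakes=True)))      # escalate_to_human
print(route(AIDecision("approve_loan", 0.80, high_stakes=False)))  # escalate_to_human
```

In e-discovery terms, this is the final privilege QC step: the model proposes, the lawyer disposes.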
“Yeah, well, you know, that’s just like, uh, your opinion, man.” Image by Ralph Losey.
Conclusion
For a complete list of AI risks, and possible answers, see Mustafa Suleyman’s gloomy but important book, The Coming Wave. The handful of problems discussed here is just one small carpet in a vast hall of horrors. Still, for me, it really does tie the room together.
As The Dude says in The Big Lebowski to his girlfriend Maude:
This is a very complicated case, Maude. You know, a lotta ins, a lotta outs, a lotta what-have-you’s. And, uh, a lotta strands to keep in my head, man. Lotta strands in old Duder’s head.
Haven’t we all said basically the same thing at one time or another in our own non-Dude way? We all need the help of as much AI as we can get these days, but damn, the AI is dangerous, man. So, yeah, it’s a very complicated case. But maybe with the help of friendly, verified AIs, we can keep the strands in our head straight long enough to get through to the next crisis.
Ralph Losey is an AI researcher, writer, tech-law expert, and former lawyer. He's also the CEO of Losey AI, LLC, providing non-legal services, primarily educational services pertaining to AI and creation of custom AI tools.
Ralph has long been a leader of the world's tech lawyers. He has presented at hundreds of legal conferences and CLEs around the world. Ralph has written over two million words on AI, e-discovery and tech-law subjects, including seven books.
Ralph has been involved with computers, software, legal hacking and the law since 1980. Ralph has the highest peer AV rating as a lawyer and was selected as a Best Lawyer in America in four categories: Commercial Litigation; E-Discovery and Information Management Law; Information Technology Law; and, Employment Law - Management.
Ralph is the proud father of two children and husband since 1973 to Molly Friedman Losey, a mental health counselor in Winter Park.
All opinions expressed here are his own, and not those of his firm or clients. No legal advice is provided on this website, and nothing here should be construed as such.