Skip to content

AI & Law 2024 Syllabus

I thought it might be of general interest to post my 2024 syllabus for my AI & Law class.  I regret that many of the hyperlinks to the assigned readings won’t work because they are to internal copies I made, which I can’t share directly for copyright reasons, but I’ve provided citations so it should be easy to find them online.  Links to the optional readings generally should work. Papers marked with a (*) were eligible material for a student discussion paper.

Contents

Class 1: Intro

  1. [I’ve ommitted various administrative issues.]
  2.  Before doing the reading below, make a few notes that we can discuss in our first class about:
    1. How you think most people would define an “artificial intelligence”
    2. How you personally would define an “artificial intelligence” — if that’s any different from the above.
  3. After you have done the above, please read
    1. Pages 1-13 & 17-25 of Neil Richards`& William Smart, How Should the Law Think About Robots? in Robot Law (Ryan Calo, A. Michael Froomkin & Ian Kerr, eds. 2016) (you may skip section 3).

Questions

  1. Do you agree with the argument in Richards & Smart?
  2. Do you think the Richards & Smart argument applies with equal force to AI as it does to robots? If not, is it more or less applicable?
  3. What do you suspect are/will be the major legal/regulatory/social issues posed by the increased use and sophistication of AI (however defined)?
  4. Who is likely to be most affected by AI? How should these groups’ views be reflected in the policy-making process?
  5. Abstractly, what entities, private or public, are best suited to make decisions as to AI regulation/policy?

Class 2: Intro to Machine Learning

  1. FPF, Privacy Expert’s Guide to Artificial Intelligence and Machine Learning (2018).
  2. Pages 669- 702 of David Lehr & Paul Ohm, Playing with the Data: What Legal Scholars Should Learn About Machine Learning, 51 U.C. Davis L. Rev 653 (2017). [Optional: also read 703-17.]
  3. Deep Learning in 5 Minutes:
  4. Pages 410-435 of Ryan Calo, Artificial Intelligence Policy: A Primer and a Roadmap, 551 U.C.D. L. Rev. 399 (2017)
  5. Rodney Brooks,
    1. Steps Toward Super Intelligence I, How We Got Here (July 15, 2018), original at http://rodneybrooks.com/forai-steps-toward-super-intelligence-i-how-we-got-here/
    2. Steps Toward Super Intelligence IV, Things to Work on Now (July 15, 2018), original at http://rodneybrooks.com/forai-steps-toward-super-intelligence-iii-hard-things-today/
    3. Incidentally, parts II & III are good too, but I’m not requiring them

Optional Readings

  • For visual/auditory learners — but very very popular with certain members of last year’s class:

By the way, there’s a lot more where that came from:

Chapter 2: “Gradient descent, how neural networks learn”
Chapter 3: “What is backpropagation really doing?”

…and then it gets a bit mathy….

Notes & Questions

  1. Explain what the terms “ML” and “AI” usually refer to, and how they relate to each other.
  2. Explain in broad terms what are the key features (and differences) of
    1. Supervised Learning
    2. Unsupervised Learning
    3. Reinforcement Learning
  3. Suppose UMiami law wanted to see if an ML system could help with admissions. What should it pick as, or among, the outcome variable(s)?
    1. Willingness to accept the offer? [US News rates schools on “selectivity” – the fewer offers you give relative to matriculation, the better they rate you.]
    2. 1L GPA?
    3. GPA at graduation?
    4. Passing the bar? (But some folks never take it, some retake it, some take it out of state and we don’t always know if they passed, much less if they failed.)
    5. Becoming a partner in a law firm within 10 years of graduation?
    6. $ value of donations to the law school in first 10 years after graduation?
    7. Something else?
  4. Do any of the above create undesirable bias risks? How?
  5. Should we be more concerned about “false positives” (admit, when we shouldn’t have) or “false negatives” (reject when we should have admitted)?
  6. To what extent if any should the law school demand that the model be “explainable”?  Or is it enough that it produces a great class of admits
  7. Would it be appropriate to use information outside the application to help train the ML system, e.g. publicly available Twitter, Facebook, Instragram, or TickTock feeds or pages?
    1. If so, would it be better to set criteria by hand, or let the ML decide what types of social media signal better potential admits?
  8. Suppose UM has hired a contractor who produced an AI for admissions.
    1. How would we test it for “fairness”?
    2. What is “fairness” in this context?
  9. How would your answers to questions 6-8 above differ if UM were a state law school?
  10. Based on what you know so far, are AIs – or AIs+robots – headed for world domination in the near or far future?

Class 3: Intro to Foundation Models and Generative AI

Foundation Models

  1. Introduction (pages 3-20) of Rishi Bommasani et al,, Stanford HA, On the Opportunities and Risks of Foundation Models (2023).  This is an enormous report, and we’ll be returning to subsequent sections of it from time to time.
  2. Rishi Bommasani et al,, Stanford HAI, Considerations for Governing Open Foundation Models (Dec. 13, 2023).
  3. Peter Henderson et al., Stanford HAI, Safety Risks from Customizing Foundation Models via Fine-tuning (Jan. 2024).

Generative AI

  1. Timothy B Lee and Sean Trott, Large language models, explained with a minimum of math and jargon Want to really understand how large language models work? Here’s a gentle primer., Understanding AI (July 27, 2023)
  2. Paragraphs 55-97  (pages 16-29) and 136-142 (pages 52-55) of Complaint, New York Times Co. v. Microsoft (S.D.N.Y. Dec. 27, 2023).
    1. We’ll be revisiting the issues raised by this complaint, so you might want to read more of this example of elegant lawyering;
    2. You might also want to read OpenAI’s rebuttal at OpenAI and journalism (Jan. 8, 2024).
  3. OpenAI’s New App Store Could Turn ChatGPT Into an Everything App, Wired (Jan 10, 2024).
  4. MIT grad student explains graphic diffusion models:
  5. Stable Diffusion, How does Stable Diffusion work? (Jan 4, 2024). This is a challenging read, but persevere! If you do get bogged down in the middle, be sure to re-focus your attention on the later sections, starting with Stable Diffusion v1.5 vs v2.

Optional

Notes & Questions

  1. The hallmark of a generative model as opposed to a, excuse the term, classic ML model is that while the ML model is supposed to spot patterns in existing data, or to classify new data in terms of existing data, a generative model is designed to create new content.
  2. The hallmark of a ‘foundation’ model is that it is intended to be adapted by others for a multiplicity of purposes, not all of which are necessarily foreseeable by the foundation model’s designers.
  3. Can you identify policy and liability issues arising from the sale or use of generative AI models, or generative AI services (e.g. web-based interfaces letting the public generate content, either free or for a fee)?
  4. Can you identify policy and liability issues (excuse the term) generated by foundation models that might not exist with an ‘ordinary’ ML system?
  5. Thanks to AI, even people who can’t draw a straight line can be artists! Can they be great artists?  What could go wrong?
  6. The creation of a “GPT App Store” demonstrates that generative models can act like foundation models.  Indeed, this may be one of the more prevalent uses of chatbots.
    1. Already the ‘App Store’ is overrun with apps promising to be various sorts of ‘virtual girlfriends’ even though these violate OpenAI’s usage policy,
    2. If you were OpenAI’s general counsel, what would you want to prohibit in your terms of service?  How would you go about enforcing those rules, keeping in mind the expense of human moderation and the whack-a-mole nature of determined violators?
    3. Was this issue on your list?
  7. Starting about 1:38 in the assigned video above, an MIT graduate student gives his legal opinion on whether AI-generated images are protected by copyright, and whether they can be sold (all engineers think they are lawyers!).  We’ll be revisiting this and related issues, but for now, what do  you think of his advice?

Class 4: Issues in AI System Design & Application

Systemic considerations

  1. Arvind Narayanan, Sayash Kapoor & Seth Lazar, Model alignment protects against accidental harms, not intentional ones , AI Snake Oil Blog (Dec. 1, 2023).
  2. Kendra Albert et al, Politics of Adversarial Machine Learning (Apr. 2020) (“we draw on insights from science and technology studies, anthropology, and human rights literature, to inform how defenses against adversarial attacks can be used to suppress dissent and limit attempts to investigate machine learning systems, using facial recognition technology as a case study.”)
  3. Pages 1856-1863, 1872-1878 (end of sub-part 2) & 1881 (Part IV.A)-1882 (end of Part  IV.A) of Katherine J. Strandburg, Rulemaking And Inscrutable Automated Decision Tools, 119 Colum. L. Rev. 1851 (2019). Note that this is a longer reading than the other readings in this section. It covers a lot of ground and foreshadows a number of issues we’ll return to if all goes well.

What Could Go Wrong?

  1.  Katherine Lee et al., The Devil is In the Training Data Ch. 1 (July 10, 2023)
  2. Will Douglas Heaven, The way we train AI is fundamentally flawed, MIT Technology Review (Nov.18, 2020).
  3. Kyle Wiggers, 3 big problems with datasets in AI and machine learning, Venture Beat (Dec. 17, 2021).
  4. Katyanna Quach, You only need pen and paper to fool this OpenAI computer vision code. Just write down what you want it to see, TheRegister.com (Mar. 5, 2021).

What to do about it?

  1. Pages 1-11 of Timnit Gebru et al., Datasheets for Datasets (Mar. 19, 2020).
  2. Madeleine Elish, Moral Crumple Zones: Cautionary Tales in Human Robot Interaction, 5 Engaging Science, Technology, and Society 40 (2019) .  OK, this is more about what not to do…

Optional Readings

More problems and maybe solutions

  • {*} Rishi Bommasani et al.,  Ecosystem-level Analysis of Deployed Machine Learning Reveals Homogeneous Outcomes:
    • “A clear trend emerged in every context they considered: Commercial ML systems are prone to systemic failure, meaning some people always are misclassified by all the available models — and this is where the greatest harm becomes apparent. If every voice assistant product on the market uses the same underlying algorithm, and that algorithm can’t recognize an individual’s unique way of speaking, then that person becomes effectively excluded from using any speech-recognition technology. A similar pattern in other domains would mean that individuals who are declined by one bank or hiring firm may not be approved for a loan from any other financial institution or hired by a different firm.
      “We found there are users who receive clear negative outcomes from all models in the ecosystem,” says Connor Toups, a Stanford computer science graduate student who served as lead author of the paper. “As we move to machine learning that mediates more decisions, this type of collective outcome is important to assessing overall social impact at an individual level.”
  • (*) Apostol Vassilev et al., Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, NIST AI 100-2e2023 (Jan. 2024):
    • “Broadly, there are two classes of AI systems: Predictive and Generative. The components of an AI system include – at a minimum – the data, model, and processes for training, testing, and deploying the machine learning (ML) models and the infrastructure required for using them. Generative AI systems may also be linked to corporate documents and databases when they are adapted to specifc domains and use cases. The data-driven approach of ML introduces additional security and privacy challenges in different phases of ML operations besides the classical security and privacy threats faced by most opera- tional systems. These security and privacy challenges include the potential for adversarial manipulation of training data, adversarial exploitation of model vulnerabilities to adversely affect the performance of the AI system, and even malicious manipulations, modifcations or mere interaction with models to exfltrate sensitive information about people represented in the data, about the model itself, or proprietary enterprise data. Such attacks have been demonstrated under real-world conditions, and their sophistication and potential impact have been increasing steadily. AML is concerned with studying the capabilities of attack- ers and their goals, as well as the design of attack methods that exploit the vulnerabilities of ML during the development, training, and deployment phase of the ML lifecycle. AML is also concerned with the design of ML algorithms that can withstand these security and privacy challenges. When attacks are launched with malevolent intent, the robustness of ML refers to mitigations intended to manage the consequences of such attacks.”
  • (*) Grant Fergusson et al., EPIC, Generating Harms: Generative AI’s Impact & Paths Forward (May 2023):
    • “[While] new generative A.I. tools like ChatGPT, Midjourney, and DALL-E … have been lauded for their capability to produce new and believable text, images, audio, and videos, the rapid integration of generative AI technology into consumer-facing products has undermined years-long efforts to make AI development transparent and accountable. With free or low-cost generative AI tools on the market, consumers face many new and heightened risks of harm. Everything from information manipulation and impersonation to data breaches, intellectual property theft, labor manipulation, and discrimination can all result from the misuse of generative AI technologies.
      […] While generative A.I. may be new, many of its harms reflect longstanding challenges to privacy, transparency, racial justice, and economic justice imposed by technology companies. To illustrate these challenges and potential paths forward, the report includes numerous case studies, examples, and research-backed recommendations. The report also includes an Appendix of Harms, designed to provide readers with a common lexicon for understanding the various harms that new technologies like generative A.I. can produce.
  • UPDATE: Section 4.9 (“AI safety and alignment”) of Rishi Bommasani et al, Stanford HA, On the Opportunities and Risks of Foundation Models (2023).
  • Raphel Koster et all, Human-centered mechanism design with Democratic AI, 61 Nature: Human Behavior 1398 (Oct. 2022):
    • “Building artificial intelligence (AI) that aligns with human values is an unsolved problem. Here we developed a human-in-the-loop research pipeline called Democratic AI, in which reinforcement learning is used to design a social mechanism that humans prefer by majority. A large group of humans played an online investment game that involved deciding whether to keep a monetary endowment or to share it with others for collective benefit. Shared revenue was returned to players under two different redistribution mechanisms, one designed by the AI and the other by humans. The AI discovered a mechanism that redressed initial wealth imbalance, sanctioned free riders and successfully won the majority vote. By optimizing for human preferences, Democratic AI offers a proof of concept for value-aligned policy innovation.”
  • Benj Edwards, New Go-playing trick defeats world-class Go AI—but loses to human amateurs, ArsTechnica (Nov. 7, 2022).
  • Chloe Xiang & Matthew Gault, USAF Official Says He ‘Misspoke’ About AI Drone Killing Human Operator in Simulated Test, VICE (June 1, 2023)
  • Stuart Armstrong, Defining AI Wireheading, AI Alignment Forum (Nov 21, 2019).
  • (*) Lt. Col. Mark Visger, Garbage In, Garbage Out: Data Poisoning Attacks and their Legal Implications, in Big Data and Armed Conflict: International Legal Issues (forthcoming), available at SSRN (Nov. 1, 2022):
    • Data poisoning is but one aspect of Adversarial Machine Learning and can be expected on the battlefield before long. The analysis provided in this chapter demonstrates the inadequacy of merely “copying and pasting” pre-existing legal frameworks to this tactic. Legal scholars will need to become versed in the basic technical functioning of these systems, much like it was necessary to understand the basic functioning of computer networking in order to develop legal frameworks for cyber operations such as that found in the Tallinn Manual. Further, thoughtful consideration of the applicability of existing legal doctrines to big data will be required. As was demonstrated in this chapter, concepts such as proportionality or perfidy do not neatly map onto data poisoning attacks. Instead, novel and thoughtful analyses are needed. Similarly, the time frame of the applicability of a lex specialis such as the [Law of Armed Conflict] will be impacted by the practical realities of the timeframes of data poisoning operations.
  • Eileen Guo & Karen Hao, This is the Stanford vaccine algorithm that left out frontline doctors, MIT Technology Review (Dec. 21, 2020)
  • Christian Sandvig et al, When the algorithm itself is a racist: Diagnosing Ethical Harm in the Basic Components of Software, 10 Int’l J. Comm. 4972 (2016).
  • Ben Dickson, Why machine learning struggles with causality (March 15, 2021).
  • Jeff Burt, Techniques to fool AI with hidden triggers are outpacing defenses, The Register (Feb. 25, 2022).
  • K. Holt, MIT study finds labeling errors in datasets used to test AI, Engadget (Mar. 29, 2021).
  • (*) Mehtab Khan & Alex Hanna, The Subjects and Stages of AI Dataset Development: A Framework for Dataset Accountability, 19 Ohio St. Tech. L.J. (forthcoming, 2023).
    • “[L]ittle attention has been paid to mapping and consolidating the legal issues that arise at different stages of [dataset development] process: when the data is being collected, after the data is used to build and evaluate models and applications, and how that data is distributed more widely.
      “In this article, we offer four main contributions. First, we describe what kinds of objects these datasets are, how many different kinds exist, what types of modalities they encompass, and why they are important. Second, we provide more clarity about the stages of dataset development – a process that has thus far been subsumed within broader discussions about bias and discrimination – and the subjects who may be susceptible to harms at each point of development. Third, we provide a matrix of both the stages of dataset development and the subjects of dataset development, which traces the connections between stages and subjects. Fourth, we use this analysis to identify some basic legal issues that arise at the various stages in order to foster a better understanding of the dilemmas and tensions that arise at every stage. We situate our discussion within wider discussion of current debates and proposals related to algorithmic accountability.”
  • Google AI Blog, Data Cascades in Machine Learning (June 4, 2021).
  • Johannes Welbl et al, Challenges in Detoxifying Language Models (Sept. 15, 2021)
  • Will Douglas Heaven, Our weird behavior during the pandemic is messing with AI models, MIT Tech. Rev. (May 11, 2020).
  • Martin Anderson, Why AI Isn’t Providing Better Product Recommendations, unite.ai (Mar. 2, 2022).

Gender Issues

  • (*) Pages 87-132 (“Think Piece 2”) in UNESCO, I’d Blush if I could: Closing Gender Divides in Digital Skillls (2019):
    • This think piece shines a critical light on the sudden proliferation of digital assistants gendered as female. It looks most closely at voice assistants such as Amazon’s Alexa and Apple’s Siri technology, as well as, to a lesser extent, chatbots and virtual agents projected as women. […] The topic also helps make the gender implications of artificial intelligence (AI) technologies visible at a moment when such technologies are moving, with increasing acceleration, from research labs into mainstream consumer products.
      The gender issues addressed here foreshadow debates that will become more and more prominent as AI technologies assume greater human-like communication capabilities. […T]he limited participation of women and girls in the technology sector can ripple outward with surprising speed, replicating existing gender biases and creating new ones. Evidence presented in the policy paper shows that women’s participation in the technology sector is constrained, to a significant degree, by unequal digital skills education and training.
  • (*) Clifford Nass & Youngme Moon, Machines and Mindlessness Social Responses to Computers, 56 J. Soc. Issues 81 (2000).
    “[T]his article reviews a series of experimental studies that demonstrate that individuals mindlessly apply social rules and expectations to computers. The first set of studies illustrates how individuals overuse human social categories, applying gender stereotypes to computers and ethnically identifying with computer agents. The second set demonstrates that people exhibit overlearned social behaviors such as politeness and reciprocity toward computers. In the third set of studies, premature cognitive commitments are demonstrated: A specialist television set is perceived as providing better content than a generalist television set. A final series of studies demonstrates the depth of social responses with respect to computer ‘personality.’ Alternative explanations for these findings, such as anthropomorphism and intentional social responses, cannot explain the results.”

Happier things

Additional systemic thinking

And Now This…

Notes and Questions

  1. This 28 minute John Oliver video above covers so many of the issues we’ll be talking about that I was very tempted to assign it…
  2. Can adversarial attacks on deployed ML systems really be used for “desirable aims” as Albert et all claim?
    1. If so, is that legal?
    2. Does your answer depend on how it works? Consider the following possible means of disabling facial recognition systems
      1. Face paint
      2. Masks
      3. Special clothing
      4. Special Glasses
      5. Laser
      6. Laser, again
    3. Would wearing/carrying any of the above be grounds for reasonable suspicion (i.e. a “Terry stop”) under the 4th amendment?
  3. There’s reason to worry that focusing on a variable as the grounds for a decision can make it less useful (or even useless) over time:
    1. The Optimizer’s Curse is a phenomenon that occurs when assessments of activities’ impacts are uncertain; in those cases, engaging in the activities that look most promising will tend to have a smaller impact than anticipated. Worse, the Optimizer’s Curse means that actions with high-variance estimates are selected for. Suppose we’re considering 5 possible actions which in fact have utility $10 each, and our estimates of those 5 utilities are subject to error that falls on a bell curve (a normal distribution) with a standard deviation of $2. Another 5 possible actions in fact have utility of -$20, and our estimate of each of these 5 actions is influenced by a different measurement/estimation error, also a standard distribution, with a standard deviation of $100. Key point: We are likely to pick one of the bad five actions whose enormously uncertain value estimates happened to produce a huge upward error — after all, it has the highest maximum possible value, and the averages are the same as the safer choices.
    2. Goodhart’s Law is named after the economist Charles Goodhart. A standard formulation is “When a measure becomes a target, it ceases to be a good measure.” Goodhart’s original formulation is “Any observed statistical regularity will tend to collapse when pressure is placed upon it for control purposes.”
    3. “Goodhart’s Curse” is a neologism for the combination of the Optimizer’s Curse and Goodhart’s Law, particularly as applied to the value alignment for AI. Goodhart’s Curse in this form says that a powerful agent neutrally optimizing a proxy measure U that we hoped to align with true values V, will implicitly seek out upward divergences of U from V. In other words: powerfully optimizing for a utility function is strongly liable to blow up anything we’d regard as an error in defining that utility function. That’s a very strong claim, and I’m not sure how accurate it is in practice, but it’s something to worry about.
  4. With all that out of the way, what does it tell us about the many ways things can go wrong if we use an AI in pursuit of a goal? How do we plan for those risks, or create ways of checking that they are not happening?
  5. Given that it would seem to reduce an agency’s and its bureaucrats’ discretion and thus power, why might agencies be attracted to the idea of automated decision making?
  6. Regarding the Strandburg article, what are the major risks of automated decision-making by a government agency (e.g. for a benefits determination)?
  7. What if automated decisions are “better” in the sense of “more accurate” or “more consistent” but are harder to explain in any individual case.
    1. Would that tradeoff be worth it?
    2. Would it comply with Due Process?
  8. What should policy-makers be asking before agreeing to adopt an ML system to make important governmental decisions?
  9. Regarding the Heaven article,
    1. Please be prepared to define the problem of “underspecification”.
    2. What can be done about it?
    3. Is that affordable?
  10. Regarding the Quach article, CLIP’s vulnerabilities appear to stem from two features,
    1. First, using text explanations on pictures to minimize the need for human tagging of images
    2. Second, the goal of creating a moderately general-purpose ML system that could then be applied (often with additional fine-tuning) to a wide variety of problems.
  11. Until we find a way to overcome these sorts of vulnerabilities, does this mean we are stuck relying on purpose-built ML, with all the effort and data demands that implies?
  12. Even some types of quite specific ML image recognition systems have proved vulnerable to malicious attacks; almost by definition these are unexpected attacks which are hard to test for. Does this mean we should not rely on ML in situations where people could get hurt? Is that how we treat other machines that can be hacked?
  13. Gebru et al propose, in effect, standardized metadata that would accompany datasets and explain the dataset’s characteristics and provenance. Can you think of any examples in other parts of life where we do anything similar?
  14. The Ellish article was an instant classic and is so often cited that “moral crumple zone” has become shorthand for the entire argument.  Is the moral crumple zone problem peculiar to automation, or a more general problem of techno-social systems?
  15. How might the moral crumple zone problem (a term that originates in car bumpers) manifest if we deploy self-driving cars?
  16. What is/are the solution(s) the moral crumple zone problem? How do we apply them to systems that might, in an emergency, require very very speedy reactions, perhaps too fast for a human being? (Hint: this might be a trick question.)
  17. The OODA loop (Observe, Orient, Decide, Act) has been a foundational concept of automation pretty much since its inception.
    1. What are the positive and negative consequences for the functioning of a system of inserting/requiring a human in what otherwise would be an automated OODA loop?
    2. What are the positive and negative consequences for the functioning of a human being of being inserted for crisis monitoring into what otherwise would be an automated OODA loop?
  18. Do US-based people who are building ML systems such as facial recognition systems have a moral/ethical duty to consider
    1. How the systems might be used by repressive governments?
    2. How the systems might be used by our government?

Class 5: Issues in AI Personhood

  1. Sam Lehman-Wilzig, Frankenstein Unbound: Towards a Legal Definition of Artificial Intelligence, FUTURES (Dec. 1981) (classic and incredibly early article).
  2. James Boyle, Endowed by Their Creator? The Future of Constitutional Personhood, Brookings Institution (Mar. 9, 2011).
  3. Edited version of Lynn Lopucki, Algorithmic Entities, 95 Wash. U. L. Rev. 887 (2018). (Difficult but rewarding article – allow time to read this.) Optional: read the un-edited version.
  4. Sections 3-5 (Pages 6-18) of Joanna J. Bryson, Mihailis E. Diamantis & Thomas D. Grant, Of, for, and by the people: the legal lacuna of synthetic persons, Artificial Intelligence Law (2017).

Optional

  • (*) Toni M. Massaro, Helen Norton, & Margot E. Kaminski, SIRI-OUSLY 2.0: What Artificial Intelligence Reveals about the First Amendment, 101 Minn. L. Rev. 2481 (2017).
    • First Amendment law increasingly focuses not on protecting speakers as speakers but instead on providing value to listeners and constraining the government.  If we take the logic of current First Amendment jurisprudence and theory to its natural conclusion, [current chatbots’] strong AI progeny could have First Amendment rights. Siri-ously.
  • (*) UPDATE: Contrast the above with Karl M. Manheim & Jeffery Atik, “AI Outputs and the Limited Reach of the First Amendment”, 63 Washburn L.J. ___ (2024 forthcoming):
    • Not all communications are “constitutional speech” – determining whether machine-generated outputs qualify for First Amendment protection requires some work. In order to do so, we first explore aspects of both linguistic and communication theories, and then under what circumstances communication can become First Amendment speech.
      We reach the bounds of the First Amendment from two directions. Working from a linguistic definition of speech, we capture non-linguistic forms of protected speech. Using communication theory, we reach a divide between human-sender communication and non-human-sender communication. Together these approaches support the location of a constitutional frontier. Within we find all instances of recognized First Amendment effectiveness. Outputs of non-human autonomous senders (e.g. AI) are outside and constitute an unexamined case.
      “Speech” under the First Amendment requires both a human sender and a human receiver. Concededly many AI outputs will be speech – due to the human factor in the mix. But just because a human programmed the AI, or set its goals, does not mean the AI’s output is substantially the human’s message. Nor does the fact that a human receives the output, for listener’s First Amendment rights arise only where actual speech occurs. Thus, we resist the claim that all AI outputs are necessarily speech. Indeed, most AI outputs are not speech.
      For those who raise objection to the challenge we pose – determining which AI outputs are speech and which are not – we respectfully note that there will be additional Constitutional work to be done. We are confident that our courts will be up to this challenge.
      Whether AI outputs are First Amendment speech has profound implications. If they are, then state and federal regulation is severely hobbled, limited to the few categories of speech that have been excluded by the Supreme Court from strong constitutional protection.
      With limited exception, neither the sponsors/developers of AI, the AI itself, nor the end users have rights under the First Amendment in the machine’s output. We express no opinion on other rights they may have or on what types of regulations state and federal governments should adopt. Only that they may constitutionally do so.
  • Kate Darling, Extending Legal Protection to Social Robots (Apr. 23, 2012).
  • (*) David Cole, The Chinese Room Argument, The Stanford Encyclopedia of Philosophy (Edward N. Zalta & Uri Nodelman (eds. Summer 2023):
    • “The argument and thought-experiment now generally known as the Chinese Room Argument was first published in a 1980 article by American philosopher John Searle (1932– ). It has become one of the best-known arguments in recent philosophy. Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he sends appropriate strings of Chinese characters back out under the door, and this leads those outside to mistakenly suppose there is a Chinese speaker in the room.
      “The narrow conclusion of the argument is that programming a digital computer may make it appear to understand language but could not produce real understanding. Hence the “Turing Test” is inadequate. […] The broader conclusion of the argument is that the theory that human minds are computer-like computational or information processing systems is refuted. Instead minds must result from biological processes; computers can at best simulate these biological processes. “
  • (*) Section 2.6 of Rishi Bommasani et al, Stanford HA, On the Opportunities and Risks of Foundation Models (2023).
  • Gabriel Hallevy, The Basic Models of Criminal Liability of AI Systems and Outer Circles (Jun 19, 2019)
  • Abeba Birhane & Jelle van Dijk, Robot Rights? Let’s Talk about Human Welfare Instead (Jan 14, 2020).
  • Bill Maher, How to Know if Your A.I. Is Sentient (Warning: contains salacious content):
  • Alica Lat, Artificial Intelligence, LLC: Corporate Personhood As Tort Reform, 2021 Mich. St. L.Rev. 597.
    • “This Article proposes a new framework to regulate Al technologies: bestowing corporate personhood to Al systems. First, the corporate personality trait of “limited liability”strikes an optimal balance in determining liability it would both compensate victims (for instance, through obligations to carry insurance and a straightforward burden of causation) while holding manufacturers responsible only when the infraction is egregious (for instance, through veil-piercing). Second, corporate personhood is “divisible” meaning not all corporate personality traits need to be granted which circumvents many of the philosophical criticisms of giving AI the complete set of rights of full legal personhood. Third, innovation incentives weigh heavily in favor of shifting to a tort regime better tailored to the potential promises and unique harms of Al.”  Personally, I am a little dubious….
  • Joanna Bryson, Why AI is amoral, and artefacts can’t consent, Adventures is NI (Dec. 14, 2021).
    • “[I]t is presently agreed by most moral philosophers that slaves are incapable definitionally to consent to sex with their masters, so all children of slaveowners by their slaves result from acts of rape. This wasn’t how we thought about the agency of slaves, who are of course human, when I was an undergraduate. … [T]he crux of my argument is not going to rest on some attribute of AI or robots – not consciousness, not opaqueness – but rather on whether attributing moral agency and the capacity for consent to something designed can be a sensible decision, or a coherent part of a stable justice.”
  • Eric Martinez & Chrisoph Winter, Protecting Sentient AI: A Survey of Lay Intuitions on Standing, Personhood, and General Legal Protection of Sentient Artificial Intelligence,  8 Frontiers in Robotics and AI (Nov. 2021).
    • “[W]e surveyed United States adults (n = 1,061) on their views regarding granting 1) general legal protection, 2) legal personhood, and 3) standing to bring forth a lawsuit, with respect to sentient AI and eight other groups: humans in the jurisdiction, humans outside the jurisdiction, corporations, unions, non-human animals, the environment, humans living in the near future, and humans living in the far future.”
  • Will Douglas Heaven, What an octopus’s mind can teach us about AI’s ultimate mystery, MIT Tech. Rev. (Aug. 25, 2021).
  • Sophie J. Nightingale and Hany Farid, AI-synthesized faces are indistinguishable from real faces and more trustworthy, MAIEI (Oct. 4, 2022).
  • Chloe Xiang, This Danish Political Party Is Led by an AI, Motherboard (Oct. 13, 2022).
  • (*) Ying Hu, Robot Criminals, 52 U. Mich. J.L. Reform (2019).
    • “Imposing criminal liability on robots does not absolve robot manufacturers, trainers, or owners of their individual criminal liability. The former is not rendered redundant by the latter. It is possible that no human is sufficiently at fault in causing a robot to commit a particular morally wrongful action. Additionally, imposing criminal liability on robots might sometimes have significant instrumental value, such as helping to identify culpable individuals and serving as a self-policing device for individuals who interact with robots. Finally, treating robots that satisfy the above-mentioned conditions as moral agents appears much more plausible if we adopt a less human-centric account of moral agency.”

Sui Generis

  • Daniel Dennet, If Brains are Computers, Who Designs the Software? (Apr. 6, 2017) (1 hr, 16 minutes).

Notes & Questions

  1. What is a ‘person’?
  2. What are the consequences of being/not-being a ‘person’?
  3. If an AI (or a robot controlled by an AI) is not a person, is there a good existing or historical legal/social metaphor available?
    1. A slave? [Note: slaves have had different rights at different times and places; e.g. ancient Rome]
    2. A pet?
    3. Something else?
  4. How is AI personhood like/unlike that of a corporation?
  5. Should the case for/against AI rights turn on facts about AIs or on facts about (legacy, biological) people?
  6. Should how (other?) people perceive the AI (“social projection”) be relevant to its legal status?
    1. If yes, is that fair?
    2. Would we treat any other creature/person/corporation like that?

Class 6: LLMs (1)

  1. Register for free accounts at Chat-GPT and Claude.AI  Play around. Post anything interesting that happens on the class blog.
  2. Have a look at the plethora of custom versions of GPT now available.  OpenAi has invited anyone to fine-tune their own model.
  3. Prakash Selvakumar, Text Generation with Temperature and Top-p Sampling in GPT Models: An In-Depth Guide (Apr 2023). (Note: text is significantly less off-putting than the title!)
  4. Joshua Hawkins, The Biggest Differences Between Claude AI and ChatGPT, Lifehacker (Nov. 29, 2023)
  5. Matt Crabtree, What is Prompt Engineering? A Detailed Guide, datacamp (Aug 2023).
  6. Emily Bender et al., On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, FAccT ’21, March 3–10, 2021, Virtual Event.
    1. Scholarly reaction to the Bender paper: Christoph Ebell et al., Towards intellectual freedom in an AI Ethics Global Community, 1 AI and Ethics 131 (2021).
  7. Angie Wang, Is My Toddler a Stochastic Parrot?, The New Yorker (Nov. 13, 2023) .
    1. Alternate link if above does not work (may not work either….)

Optional

Notes and Questions

  1. What will (should) it take to trust a Chat-GPT version modified by a third party?   How would one acquire the necessary information?
  2. I asked this question of Chat-GPT4 and it said:

    Trusting a Chat-GPT version modified by a third party involves several key considerations:
    1. Transparency and Documentation: The third party should provide comprehensive documentation about the modifications made to the original model. This includes details about the training data, algorithms used, and the nature of the modifications. Transparency is crucial for understanding how the model works and the potential biases it might have.

    2. Security and Privacy Compliance: The modified version must adhere to strict security and privacy standards. This includes how data is handled, stored, and processed. Ensuring compliance with regulations like GDPR, CCPA, or other relevant laws is essential.

    3. Performance and Reliability: The modified model should be rigorously tested for performance and reliability. This includes evaluating its accuracy, response time, and ability to handle a wide range of queries. Consistent performance builds trust over time.

    4. Ethical Considerations: The third party should ensure that the model adheres to ethical guidelines, such as not promoting harmful content, respecting user privacy, and avoiding biases. The approach to handling sensitive topics should be clear and responsible.

    5. Independent Reviews and Audits: Independent reviews or audits by external experts can provide an unbiased assessment of the model’s quality, safety, and adherence to ethical standards. This can include peer reviews, security audits, and ethical assessments.

    6. Community Feedback and Responsiveness: Open channels for user feedback and a track record of responding to and addressing issues raised by users are important. Community trust is often built through continuous improvement and responsiveness to user concerns.

    7. Track Record of the Third Party: The reputation and history of the third party developing the modified version play a significant role. A history of responsible and ethical conduct in technology development can be a strong indicator of trustworthiness.

    8. Compliance with OpenAI’s Policies and Guidelines: If the modifications are built upon OpenAI’s technology, compliance with OpenAI’s usage policies and guidelines is essential. This ensures that the modifications align with the original intent and ethical standards of the technology.

    1. What do you make of this advice?
    2. Dig into one of the third-party-modified models available at OpenAi.  How much of the information above is available?  How long did it take to work out the model’s reliability, if indeed that was possible at all?
  3. Selvakumar writes, “Temperature and Top-p sampling are two essential parameters that can be tweaked to control the output of GPT models used in various applications like chatbots, content generation, and virtual assistants. As a business user or functional professional, understanding these parameters can help you get the most relevant responses from GPT models without needing extensive data science knowledge.”
    1. Be certain that you can explain what these two variables do.
    2. Some models, e.g. Chat-GPT online,  make it very easy to tweak these settings.  What are some examples of real-world uses when you might want to do so?
      1. Do these, or indeed any, user-configurable settings (tend to) impose an additional need for user verification of the output?
      2. If these settings really make a big difference, does it mean we need to run our prompts multiple times with different combinations to find the best result?  If so, do the extra steps plus the work of comparison undermine the value of a chatbot in many applications?
  4. LLMs raise a host of ethical issues.  Are there others besides those mentioned in the “Stochastic Parrot” article?
  5. In light of what you have learned so far, Is the ” Stochastic Parrots” article alarmist?  Or not alarmist enough? (We’ll revisit this issue…)
  6. UPDATE: Bonus ‘Parrot’ questions:
    1. What do the Stochastic Parrot authors mean when they say, “at each step, from initial participation in Internet fora, to continued presence there, to the collection and finally the filtering of training data, current practice privileges the hegemonic view-point. In accepting large amounts of web text as ‘representative’ of ‘all’ of humanity we risk perpetuating dominant viewpoints,increasing power imbalances, and further reifying inequality.”
      1. What is their solution to the problem that they identify?
    2. The authors argue that
      1. given the costs of (re)training models, they will not be updated sufficiently to take account of new social views. This will lead the texts the models create to be “static” rather than being open to “new framings [that] contest hegemonic representations.”
      2. models will encode bias, in ways that are difficult to detect.
      3. Why do they argue that documentation of data sets and their origins is so important?
    3. Given that Google is heavily committed to using NLP for its phones and web projects, what do we make of Google firing one of the authors of the Stochastic Parrot paper?
      1. Does it prove the paper was right?
      2. Does it prove that Google’s former motto (“don’t be evil”) is now just a distant memory?
      3. If neither, does it prove anything? If so, what?

Class 7: LLMs (2)

  1. Parts II & III (pages  2-4) of C. Eric Schulman, AI aLiGnment Network (ALIGN) – Intellectual Property Incentives to Help Address AI Alignment Problems (July 24, 2021).
  2. Kevin Roose, A Conversation With Bing’s Chatbot Left Me Deeply Unsettled, New York Times (Feb. 16, 2023).
  3. Fernando Perez-Cruz and Hyun Song Shin, Testing the cognitive limits of large language models (Jan 4. 2023).
  4. David Thiel, Identifying and Eliminating CSAM in Generative ML Training Data and Models. Stanford Digital Repository (2023).
  5. Miscellaneous news stories on how chatbots can go wrong:
    1. Aaron Mok, This AI stock trader engaged in insider trading — despite being instructed not to – and lied about it, Business Inssider (Dec 30, 2023).
    2. Chloe Xiang, ChatGPT Can Be Broken by Entering These Strange Words, And Nobody Is Sure Why, Vice (Feb. 8, 2023).
    3. Thomas Claburn, How ‘sleeper agent’ AI assistants can sabotage your code without you realizing, The Register (Jan. 16, 2024).

Optional Horror Stories

  • (*)Read the full paper that the Mok article describes,  Jérémy Scheurer et al., Technical Report: Large Language Models can Strategically Deceive their Users when Put Under Pressure, arXiv:2311.07590v2 (Nov. 27, 2023):
    • We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision. We perform a brief investigation of how this behavior varies under changes to the setting, such as removing model access to a reasoning scratchpad, attempting to prevent the misaligned behavior by changing system instructions, changing the amount of pressure the model is under, varying the perceived risk of getting caught, and making other simple changes to the environment. To our knowledge, this is the first demonstration of Large Language Models trained to be helpful, harmless, and honest, strategically deceiving their users in a realistic situation without direct instructions or training for deception.
  • (*) Full paper discussed in the Claburn ‘sleeper agent’ item above: Evan Hubinger, et al., Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training, arXiv:2401.05566 (Jan. 12, 2024):
    • “Humans are capable of strategically deceptive behavior: behaving helpfully in most situations, but then behaving very differently in order to pursue alternative objectives when given the opportunity. If an AI system learned such a deceptive strategy, could we detect it and remove it using current state-of-the-art safety training techniques? To study this question, we construct proof-of-concept examples of deceptive behavior in large language models (LLMs). For example, we train models that write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024. We find that such backdoor behavior can be made persistent, so that it is not removed by standard safety training techniques, including supervised fine-tuning, reinforcement learning, and adversarial training (eliciting unsafe behavior and then training to remove it). The backdoor behavior is most persistent in the largest models and in models trained to produce chain-of-thought reasoning about deceiving the training process, with the persistence remaining even when the chain-of-thought is distilled away. Furthermore, rather than removing backdoors, we find that adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior. Our results suggest that, once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety. “
  • Melissa Heikkilä, How it feels to be sexually objectified by an AI (Dec. 13, 2022).
  • Visit the Misalignment Museum via Khari Johnson, Welcome to the Museum of the Future AI Apocalypse, Wired (Mar 3, 2023).
  • Johann Laux, Sandra Wachter, Brent Mittelstadt, Trustworthy Artificial Intelligence and the European Union AI Act: On The Conflation Of Trustworthiness And Acceptability Of Risk (Oct. 3, 2022)  [N.B. that this refers to an earlier draft of the AI Act]
  • Xiaoyi Chen, et al., The Janus Interface: How Fine-Tuning in Large Language Models Amplifies the Privacy Risks, arXiv:2310.15469 (Oct. 24, 2023).
  • Section 3.3 of Rishi Bommasani et al, Stanford HA, On the Opportunities and Risks of Foundation Models (2023) on Educational uses of foundation models.

Notes & Questions

  1. If you were the General Counsel of U.Miami, what guidance would you give researchers planning to use a large dataset of images to train their models?
    1. Would you forbid anything?
    2. Would failing to do so open the University to liability of any kind?
    3. Would doing so infringe academic freedom?
    4. Would you have different rules for undergrads, grad students, faculty and/or staff?
    5. How if at all would your answers change if you were the General Counsel of the University of Florida (a public university)?
    6. UPDATE: Bonus question — what is the difference between “forbidding X” and requiring “reasonable efforts to avoid X”? From a general counsel’s point of view, is the latter a sufficient liability shield?  (Is the former?)
  2. LLMs appear to have many vulnerabilities, and we’re likely to find more. If you were the General Counsel of U.Miami, what guidance would you give to faculty, staff, and students about using LLMs?
    1. Would your advice be different for
      1. Large commercial products (e.g. bundled with Lexis)
      2. LLMs from the OpenAi store?
      3. Open source LLMs?
  3. I asked Claude AI for advice on safe use of LLMs in academic  research, and it said this:

    1. Get trained on the basics of how LLMs work – their strengths, limitations and potential risks – so you can use them effectively and avoid pitfalls. For example, understand issues around bias, accuracy and factual correctness.
    2. Start small with low-stakes tasks to test the LLM’s performance before relying on it for more critical work. Evaluate the quality and validity of its outputs carefully.
    3. Use LLMs as a productivity aid but avoid letting them fully replace your own analysis and writing. Review all LLM-generated text thoroughly before using it.
    4. Do not have sensitive conversations with chatbot-style LLMs, as their responses may not be reliable or secure.
    5. Make sure to cite any LLM outputs you include in your own work appropriately to avoid plagiarism. Attribute text to the LLM.
    6. Comply with the usage policies and licenses for any commercial LLMs like GPT-3.
    7. Keep model inputs and outputs confidential as a precaution, even if you see no sensitive content.

  4. Is this correct? Enough? Too much? And if this is correct, can we replace the General Counsel?
  5. Is any of the reading above likely to reduce your propensity to use Chat-GPT and its ilk in your own work (and in non-work parts of your life)?  If not, what does this imply for your answers to questions 1 & 2 above?

Class 8: LLMs (3)

  1. Select statutory background:
    1. Select parts of Restatement (Second) of Torts relating to defamation: § 558 , § 578, §  581, §  12 (1977);
    2. Portions of 47 U.S.C. § 230;
    3. Restatement (Second) of Torts relating to False light: § 652E;
    4. Restatement (Second) of Torts relating to Intentional Infliction of Emotional Distress § 46.
    5. Restatement (Second) § 652C Appropriation of Name or Likeness
  2. Pages 2506-08 of Toni M. Massaro, Helen Norton, & Margot E. Kaminski, SIRI-OUSLY 2.0: What Artificial Intelligence Reveals about the First Amendment, 101 Minn. L. Rev. 2481 (2017).
  3. Pages 508-511 & 514–18 & 549-50 of Eugene Volokh, Large Libel Models? Liability for AI Output, J. Free Speech L. 489 (2023).
  4. Winter v. G.P. Putnam’s Sons, 938 F.2d 1033, 1033–36 (9th Cir. 1991).
  5. Pages 348-60 of Jane Bambauer, Negligent AI Speech: Some Thoughts About Duty, 3 J. Free Speech L. 343 (2023).
  6. Matt Burgess, Deepfake Porn Is Out of Control, Wired (Oct. 16, 2023).

Optional

  • (*) Benjamin Zipursky & John C. Goldberg, A Tort for the Digital Age: False Light Invasion of Privacy Reconsidered, 73 DePaul Law Review:
    • [T]his article elucidates the distinctive content and role of false light as an authentic invasion-of- privacy tort and explains why its recognition is especially important in our digital world. To appreciate its value requires, first and foremost, grasping that its closest tort sibling is not defamation, but instead public disclosure. Like that tort and unlike defamation, false light applies only to a subset of subject matters – those that are genuinely private and are not newsworthy – and only when highly offensive images or messages pertaining to the plaintiff are widely disseminated to the public. In short, as Melville Nimmer once noted, the sound judgment undergirding false light is this: if causing humiliation or grave offense by disseminating accurate depictions or accounts of private matters is actionable, it should be no less actionable when the putative representations are false. In an era of deepfakes and other privacy-invading misrepresentations, courts should embrace the tort of false light.
  • (*) John C. Tramazzo, Deepfakes and Deception: A Framework for the Ethical and Legal Use of Machine-Manipulated Media, Modern War Institute (July 28, 2023):
    • “The US military must decide now whether it will continue to explore deepfake technology for operational use or whether it should focus its investments in deepfake detection technology alone. It must also acknowledge the fundamental distinction between employing deepfakes in armed conflict scenarios like the use cases described above and doing so in competition short of conflict. The potential utility of these technologies is significant, yet leaders must also recognize that the US military’s embrace of deepfakes could contribute to information chaos. In the context of competition, that risk outweighs any benefits, and the US military should continue to do what General Laura Richardson vowed to do in the US Southern Command area of operations: tell the truth. Credibility is currency in the world, and the use of deepfakes below the threshold of armed conflict will threaten US credibility.
      “However, the United States must not forgo the opportunity to develop a deepfake capability as a tool for deception in armed conflict. Lawmakers and defense policymakers should explore and develop deepfake technology for use at the tactical and operational levels of warfare. Deepfakes could give warfighting commands advantages over enemy forces and enable protection for maneuvering forces. Deepfakes are certainly dangerous, particularly for democratic societies. But they are not inherently immoral, unethical, or illegal. If developed and deployed responsibly, they could advance military objectives and even save lives.”
  • Riana Pfefferkorn, ‘Deepfakes’ in the Courtroom, 29 BU Pub. Int. L.J. __ (October 1, 2020):
    • “Seeing is believing — but for how long? At present, people attach a lot of probative weight to images and videos. They’re taken at face value as evidence that an event occurred as alleged. The advent of so-called “deepfake” videos might change that. Thanks to advances in artificial intelligence, it is now possible to create a genuine-looking video that makes real people appear to do and say things they never did or said. Software for creating deepfake images, video, and audio is already freely available online and fairly easy to use. As the technology rapidly advances, it will become harder for humans and computers alike to tell a fake video from a real one.
      “Inevitably, deepfakes will start coming up in the courtroom context. This Article surveys the ramifications of deepfakes for pre-trial and trial practice, including authentication of evidence, professional responsibility, and a potential “reverse CSI effect” on juries primed to question even authentic evidence in an era of disinformation and “fake news.” Fortunately, courts are no stranger to the phenomenon of evidence tampering and forgery. The rules of evidence have long imposed authentication requirements to help screen out fakes. I argue that those requirements are sufficient as-is to deal with deepfakes, and that raising the bar for authenticating video evidence would do more harm than good. Although it may prove costly, courts will be able to handle the challenges posed by deepfakes as they have ably handled previous generations of inauthentic evidence.”
  • Matthew B. Kugler & Carly Pace, Deepfake Privacy: Attitudes and Regulation, 116 N.W.U. L. Rev. 611 (2021):
    • “Based on the types of harms perceived in [our] studies, we argue that prohibitions on deepfake pornographic videos should receive the same treatment under the First Amendment as prohibitions on traditional nonconsensual pornography rather than being dealt with under the less-protective law of defamation. In contrast, nonpornographic deepfakes can likely only be dealt with via defamation law. Still, there may be reason to allow for enhanced penalties or other regulations based on the greater harm people perceive from a defamatory deepfake than a defamatory written story.”
  • Roshan Abraham, NYC Mayor Casually Announces He’s Deepfaking Himself, Experts Horrified, VICE (Oct. 17, 2023).
  • Philip Pullella, Pope Francis, victim of AI, warns against its ‘perverse’ dangers, Reuters (Jan. 24, 2024).
  • John Twomey et al, Do deepfake videos undermine our epistemic trust? A thematic analysis of tweets that discuss deepfakes in the Russian invasion of Ukraine. PLoS ONE 18(10): e0291668. https://doi.org/10.1371/journal.pone.0291668 (2023):
    • “Many academics have expressed fears that deepfakes present a severe threat to the veracity of news and political communication, and an epistemic crisis for video evidence. These commentaries have often been hypothetical, with few real-world cases of deepfake’s political and epistemological harm. The Russo-Ukrainian war presents the first real-life example of deepfakes being used in warfare, with a number of incidents involving deepfakes of Russian and Ukrainian government officials being used for misinformation and entertainment. This study uses a thematic analysis on tweets relating to deepfakes and the Russo-Ukrainian war to explore how people react to deepfake content online, and to uncover evidence of previously theorised harms of deepfakes on trust. We extracted 4869 relevant tweets using the Twitter API over the first seven months of 2022. We found that much of the misinformation in our dataset came from labelling real media as deepfakes. Novel findings about deepfake scepticism emerged, including a connection between deepfakes and conspiratorial beliefs that world leaders were dead and/or replaced by deepfakes. This research has numerous implications for future research, societal media platforms, news media and governments. The lack of deepfake literacy in our dataset led to significant misunderstandings of what constitutes a deepfake, showing the need to encourage literacy in these new forms of media. However, our evidence demonstrates that efforts to raise awareness around deepfakes may undermine trust in legitimate videos. Consequentially, news media and governmental agencies need to weigh the benefits of educational deepfakes and pre-bunking against the risks of undermining truth. Similarly, news companies and media should be careful in how they label suspected deepfakes in case they cause suspicion for real media.”

Notes & Questions

  1. Assuming that an AI is not a legal person subject to suit, who if anyone should be liable when a prompt to a chatbot or other LLM produces a reply containing a false and defamatory statement about another person?
    1. Is this libel at all?  If so who is the source of the libel?
    2. Does it matter whether or not the statement is further shared, or is the ‘publication’ by the LLM to the recipient sufficient to meet the publication requirement?
    3. To what extent are the various parties’ liability dependent on the nature of the prompt?
  2. If we characterize the creation of the false and defamatory statement as a species of negligence on the part of the purveyor of the LLM, can this liability be disclaimed? How if you were the general counsel of a startup running a chatbot would you seek to achieve this?
  3. Can you think of a circumstance in which a harmful deepfake would not be punishable as an intentional tort?
  4. The rules for libel of public figures are different from those that protect ordinary people; under NYT v. Sullivan in order to libel a public figure you must publish the statement with malice or a reckless disregard for the truth.  If we import this standard into deepfake law, in practice would it mean that fake pornographic images of celebrities will be treated differently from revenge porn deepfakes (fake pornographic images) of ex-partners who are not public figures)?
  5. Libel is far from the only lens through which to view the harms and liabilities (which, note, are not necessarily identical!) arising from deepfakes.
    1. Putting aside the problem of identifying the original creator, what sorts of claims might the unwilling subject of a deepfake be able to assert against either the creator or intermediate re-publishers?
    2. What sorts of deepfakes other than those with fake nudity and/or sexual conduct might be actionable?
    3. Are all non-pornographic deepfakes protected First Amendment speech?  (Hint: no – but can you explain which ones would not be?)
  6. As of October 1, 2022, federal law creates a civil action for the unauthorized dissemination of private, intimate images of both adults and children—images commonly called “nonconsensual pornography” or “revenge porn”.  Does this new law create a civil action against the maker of a pornographic deepfake?  (Optional: read the Congressional Research Service analysis of the bill.)

Class 9: Explainable AI (XAI)

  1. Algolia, What is explainable AI, and why is transparency so important for machine-learning solutions? (Nov, 17, 2023).
  2. Pages 1109-26 (edited) of Andrew D. Selbst & Solon Barocas, The Intuitive Appeal of Explainable Machines, 87 Fordham L. Rev. 1085 (2018).
  3. Section 4.11 (“Interpretability”) of Rishi Bommasani et al, Stanford HA, On the Opportunities and Risks of Foundation Models (2023).
  4. Selections from Boris Babic & I. Glenn Cohen, The Algorithmic Explainability ‘Bait and Switch’, 108 Minn. L. Rev. 857 (2023).

Optional

  • (*)  Hofit Wasserman Rozen, Ran Gilad-Bachrach and Niva Elkin-Koren, Lost In Translation: The Limits of Explainability in AI (Draft, 2023):
    • “This paper critically examines XAI’s potential to facilitate the right to explanation by applying the prism of explanation’s role in law to different stakeholders. Inspecting the underlying functions of reason-giving reveals different objectives for each of the stakeholders involved. From the perspective of a decision-subject, reason-giving facilitates due process and acknowledges human agency. From a decision-maker’s perspective, reason-giving contributes to improving the quality of the decisions themselves. From an ecosystem perspective, reason-giving may strengthen the authority of the decision-making system toward different stakeholders by promoting accountability and legitimacy, and by providing better guidance. Applying this analytical framework to XAI’s generated explanations reveals that XAI fails to fulfill the underlying objectives of the right to explanation from the perspective of both the decision-subject and the decision-maker. In contrast, XAI is found to be extremely well-suited to fulfil the underlying functions of reason-giving from an ecosystems’ perspective, namely, strengthening the authority of the decision-making system. However, lacking all other virtues, this isolated ability may be misused or abused, eventually harming XAI’s intended human audience. The disparity between human decision-making and automated decisions makes XAI an insufficient and even a risky tool, rather than serving as a guardian of human rights. After conducting a rigorous analysis of these ramifications, this paper concludes by urging regulators and the XAI community to reconsider the pursuit of explainability and the right to explanation of AI systems.”
  • (*) Philipp Hacker and Jan-Hendrik Passoth, Varieties of AI Explanations under the Law. From the GDPR to the AIA, and beyond in Lecture Notes on Artificial Intelligence (Holzinger, Goebel, Fong, Moon, Müller and Samek eds. 2022):
    • “[W]e provide an overview of legal obligations to explain AI and evaluate current policy proposals. In this, we distinguish between different functional varieties of AI explanations – such as multiple forms of enabling, technical and protective transparency – and show how different legal areas engage with and man- date such different types of explanations to varying degrees. Starting with the rights-enabling framework of the GDPR, we proceed to uncover technical and protective forms of explanations owed under contract, tort and banking law. Moreover, we discuss what the recent EU proposal for an Artificial Intelligence Act means for explainable AI, and review the proposal’s strengths and limitations in this respect. Finally, from a policy perspective, we advocate for moving beyond mere explainability towards a more encompassing framework for trustworthy and responsible AI that includes actionable explanations, values-in-design and co-design methodologies, interactions with algorithmic fairness, and quality bench- marking”.
  • (*) Marco Almada, Governing the Black Box of Artificial Intelligence (Nov 9, 2023):
    • “Artificial intelligence (AI) is often described as a black box rendered opaque by the technical complexity of AI systems. Scholars and policymakers tend to see this opacity as a problem but often diverge on how to respond to this black-box problem. Some propose that AI opacity must be addressed through technical means, such as explanation models that portray how a system arrives at a decision. Others question the value of such mediated explanations, arguing that the solution to the black box problem re-quires mechanisms for disclosure of the inner workings of AI systems. […] I argue that no approach can work without drawing elements from the other. To do so, I first show how the black box metaphor is used as a conceptual model of AI in regulatory models, which leads to an emphasis on the technical sources of opacity connected to AI systems. Recent work on AI has developed various methods that can be used for the scientific scrutiny of these sources. Still, their use in non-scientific contexts is prone to various forms of manipulation. As an alternative, policy proposals often require more strict technical disclosure, for example, through the use of inherently interpretable models or the publication of the source code for AI software, but these might lead to reduced gains in understanding, if any. However, the current law on algorithmic transparency in the EU gives margin to a third interpretation: if the black box is perceived from a socio-technical perspective, disclosure requirements aimed at the development and use of AI systems may drastically reduce the possibilities for manipulation of explanations. Disclosure should therefore be seen as a pre-condition, not an alternative, to explanations and other technical approaches to the black box problem.”
  • (*) Philipp Hacker et al., Explainable AI under contract and tort law: legal incentives and technical challenges, 28 Artificial Intelligence and Law 415 (2020).
    • “[T[he law, in subtle ways, may set hitherto unrecognized incentives for the adoption of explainable machine learning applications. In doing so, we make two novel contributions. First, on the legal side, we show that to avoid liability, professional actors, such as doctors and managers, may soon be legally compelled to use explainable ML models. We argue that the importance of explainability reaches far beyond data protection law, and crucially influences questions of contractual and tort liability for the use of ML models. To this effect, we conduct two legal case studies, in medical and corporate merger applications of ML. As a second contribution, we discuss the (legally required) trade-off between accuracy and explainability and demonstrate the effect in a technical case study in the context of spam classification.”
  • Behnam Mohammadi et al., Regulating Explainable AI (XAI) May Harm Consumers (Sept. 30, 2020):
    • “Recent AI algorithms are black box models whose decisions are difficult to interpret. eXplainable AI (XAI) is a class of methods that seek to address the lack of AI interpretability and trust by explaining to customers their AI decisions. The common wisdom is that regulating AI by mandating fully transparent XAI leads to greater social welfare. Our paper challenges this notion through a game theoretic model of a policymaker who maximizes social welfare with firms competing in a duopoly setting where customers hold heterogeneous preferences. A key contribution is to conceptualize the notions of AI accuracy, AI explanation depth, explanation method, and explanation fairness all into a single model. Our results first illustrate that partial explanations are a possible equilibrium in an unregulated setting. Moreover, we highlight the parameter spaces at which customers and firms are aligned or misaligned in the desire for a full explanation. Focusing on the region where preferences for full AI explanation are misaligned (customers preferring full explanation), we discuss multiple regulatory levers a policymaker may pull and show that regulating at a full explanation level is not always socially optimal. Mandating fully transparent XAI may make firms and customers worse off. We also show that flexible XAI policies dominate full XAI or unregulated XAI extremes.”
  • Martin Ebers, Regulating Explainable AI in the European Union. An Overview of the Current Legal Framework(s) in Nordic Yearbook of Law and Informatics 2020: Law in the Era of Artificial Intelligence, (Liane Colonna/Stanley Greenstein eds.):
    • “Determining why a system arrives at a particular algorithmic decision or prediction allows us to understand the technology, develop trust for it and – if the algorithmic outcome is illegal – initiate appropriate remedies against it. Additionally, XAI enables experts (and regulators) to review decisions or predictions and verify whether legal regulatory standards have been complied with. All of these points support the notion of opening the black box. On the other hand, there are a number of (legal) arguments against full transparency of Artificial Intelligence (AI) systems, especially in the interest of protecting trade secrets, national security and privacy. Accordingly, this paper explores whether and to what extent individuals are, under EU law, entitled to a right to explanation of automated decision-making, especially when AI systems are used.”
  • Cynthia Rudin & Joanna Radin, Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson From An Explainable AI Competition, 1.2 Harv. Data Science Rev. (Fall 2019).

Notes & Questions

  1. Be sure you can explain:
    1. Why  we might be nervous about trusting ‘black box’ models even if reliability testing shows they are very accurate, indeed even if they are more accurate than people?
    2. Which of the issues identified by Prof. Strandburg (class 4) and by today’s readings  will be solved or very greatly ameliorated by XAI?
  2. To what extent does mere “explainabilty”–as opposed to an “interpretable” model–do the job?
  3. Do you agree with the conclusion of the Babic & Cohen article that even if we cannot achieve interpretability it would be wrong to require it in some contexts?
    1. If so, how do we distinguish between the contexts in which we should require interpretability, and those where we can settle for less? (And in your opinion, what are they?)
    2. Who should decide?
  4. Which of these decisions would you entrust to a non-interpretable model that was 5% more accurate then the average human decision-maker?
    1. Tumor detection
    2. Disability benefits
    3. Parole eligibility
    4. Facial ID matches as grounds for arrest on suspicion of having committed a crime.
    5. Selection for who to interview for a job as a BigLaw associate.
  5. Is there anything you might you want to know about the model other then the accuracy rate before answering question 4 above?

Class 10: AI & Algorithmic Justice (I)

  1. Kate Crawford, The Trouble with Bias (NIPS 2017 Keynote) [49-minute video]:
  2. Pages 688-693 of Solon Barocas & Andrew D. Selbst, Big Data’s Disparate Impact, 104 Calif. L. Rev. 671 (2016).
  3. Selections from [LINK CORRECTED} Sandra G. Mayson, Bias in, Bias Out, 128 Yale L.J. 2218 (2019).
  4. War Stories
    1. Todd Feathers, Texas A&M Drops “Race” from Student Risk Algorithm Following Markup Investigation, The Markup (Mar. 30, 2021).
    2. Sheridan Wall & Hilke Schellmann, We tested AI interview tools. Here’s what we found, MIT Tech. Rev. (July 7, 2021). TL/DR: It was awful.
    3. Maia Szalavitz, The Pain Was Unbearable. So Why Did Doctors Turn Her Away? A sweeping drug addiction risk algorithm has become central to how the US handles the opioid crisis. It may only be making the crisis worse, WIRED (Aug. 11, 2021).
    4. Arance Burke, Martha Mendoza, Juliet Linderman & Michael Tarm, Ho, AP, How AI-powered tech landed man in jail with scant evidence (Aug. 19, 2021).
    5. Emmanuel Martinez and Lauren Kirchner, The Secret Bias Hidden in Mortgage-Approval Algorithms, The Markup (Aug. 25, 2021).
    6. BR, Objective or Biased: On the questionable use of Artificial Intelligence for job applications (2021).
  5. The case of facial recognition
    1. Brianna Rauenzahn, Jamison Chung, and Aaron Kaufman, Facing Bias in Facial Recognition Technology, The Regulatory Review (Mar 20, 2021).
    2. A contrary view: Stewart Baker [ex-NSA, ex-Homeland Security], The Flawed Claims About Bias in Facial Recognition, Lawfare Blog (Feb 2, 2022).
    3. Selections from Alice Xiang, Being Seen vs Mis-seen Tensions Between Privacy & Fairness in Computer Vision (draft 2022). (This is an update of a We Robot 2021 paper.) Update:  Final version is at 36 Harv. J.L. & Tech 1 (2022)).

Optional

General

  • (*) The Clearinghouse project provides a really good list of searchable “seminal and impactful articles focused on issues of AI Decision-Making and the law.” Feel free to use any of these for your reaction paper. Seems that UNC took this down…
  • (*) Kate Crawford, Can an Algorithm be Agnostic? Ten Scenes from Life in Calculated Publics, 41 Science, Techonolgy & Human Values (2016):
    • [T]his paper depicts algorithms in public life in ten distinct scenes, in order to ask the question, what kinds of politics do they instantiate? Algorithms are working within highly contested online spaces of public discourse, such as YouTube and Facebook, where incompatible perspectives coexist. Yet algorithms are designed to produce clear ‘‘win- ners’’ from information contests, often with little visibility or accountability for how those contests are designed. In isolation, many of these algorithms seem the opposite of agonistic: much of the complexity of search, ranking, and recommendation algorithms is nonnegotiable and kept far from view, inside an algorithmic ‘‘black box.’’ But what if we widen our perspective? This paper suggests agonistic pluralism as both a design ideal for engineers and a provocation to understand algorithms in a broader social context rather than focusing on the calculations in isolation, we need to account for the spaces of contestation where they operate

Bias

  • David Leslie, The Alan Turing Institute, Understanding bias in facial recognition technologies: An explainer (2020).
  • (*) Ignacio N. Cofone, Algorithmic Discrimination Is an Information Problem, 70 Hastings L.J. 1389 (2019):
    • “While algorithmic decision-making has proven to be a challenge for traditional antidiscrimination law, there is an opportunity to regulate algorithms through the information that they are fed. But blocking information about protected categories will rarely protect these groups effectively because other information will act as proxies. To avoid disparate treatment, the protected category attributes cannot be considered; but to avoid disparate impact, they must be considered. This leads to a paradox in regulating information to prevent algorithmic discrimination. This Article addresses this problem. It suggests that, instead of ineffectively blocking or passively allowing attributes in training data, we should modify them. We should use existing pre-processing techniques to alter the data that is fed to algorithms to prevent disparate impact outcomes. This presents a number of doctrinal and policy benefits and can be implemented also where other legal approaches cannot.”
  • Morgan Klaus Scheurman et al, How We’ve Taught Algorithms to See Identity: Constructing Race and Gender in Image Databases for Facial Analysis, 4 Proc. ACM Hum.-Comput. Intereact (May 2020)
  • (*) Mason Marks, Algorithmic Disability Discrimination (February 19, 2019) in I. Glenn Cohen et al., eds., Disability, Health, Law and Bioethics (2020):
    • “21st century, artificial intelligence tools allow corporations that collect and analyze consumer data to bypass privacy and antidiscrimination laws, such as HIPAA and the ADA, and infer consumers’ disabilities without their knowledge or consent. When people make purchases, browse the Internet, or post on social media, they leave behind trails of digital traces that reflect where they have been and what they have done. Companies aggregate and analyze those traces using AI to reveal details about people’s physical and mental health. I describe this process as mining for “emergent medical data” (EMD) because digital traces have emergent properties; when analyzed by machine learning, they reveal information that is greater than the sum of their parts. […]
      “EMD collected from disabled people can serve as a means of sorting them into categories that are assigned positive or negative weights before being used in automated decision making. […] AI disrupts the traditional flow of disability-related data to promote algorithmic disability discrimination. It presents and analyzes four legislative solutions to the problem: Amend Title III of the ADA to include internet business within the law’s definition of places of public accommodation, expand the scope of HIPAA’s covered entities to include companies that mine for EMD, impose fiduciary duties on internet platforms and other businesses that infer health data, and establish general data protection regulations in the US inspired by the EU’s General Data Protection Regulation (GDPR) and the California Consumer Protection Act of 2018 (CCPA).”
  • (*) Sarah West, Meredith Whittaker & Kate Crawford, Discriminating Systems: Gender, Race and Power in AI (April, 2019):
    • “The AI sector needs a profound shift in how it addresses the current diversity crisis. The AI industry needs to acknowledge the gravity of its diversity problem, and admit that existing methods have failed to contend with the uneven distribution of power, and the means by which AI can reinforce such inequality. Further, many researchers have shown that bias in AI systems reflects historical patterns of discrimination. These are two manifestations of the same problem, and they must be addressed together.
      The overwhelming focus on ‘women in tech’ is too narrow and likely to privilege white women over others. We need to acknowledge how the intersections of race, gender, and other identities and attributes shape people’s experiences with AI. The vast majority of AI studies assume gender is binary, and commonly assign people as ‘male’ or ‘female’ based on physical appearance and stereotypical assumptions, erasing all other forms of gender identity.
      Fixing the ‘pipeline’ won’t fix AI’s diversity problems. Despite many decades of ‘pipeline studies’ that assess the flow of diverse job candidates from school to industry, there has been no substantial progress in diversity in the AI industry. The focus on the pipeline has not addressed deeper issues with workplace cultures, power asymmetries, harassment, exclusionary hiring practices, unfair compensation, and tokenization that are causing people to leave or avoid working in the AI sector altogether.
      The use of AI systems for the classification, detection, and prediction of race and gender is in urgent need of re-evaluation. The histories of ‘race science’ are a grim reminder that race and gender classification based on appearance is scientifically flawed and easily abused. Systems that use physical appearance as a proxy for character or interior states are deeply suspect, including AI tools that claim to detect sexuality from headshots,iv predict ‘criminality’ based on facial features,v or assess worker competence via ‘micro-expressions.’ vi Such systems are replicating patterns of racial and gender bias in ways that can deepen and justify historical inequality. The commercial deployment of these tools is cause for deep concern.”
  • (*) Jennifer Oliva, Dosing Discrimination: Regulating PDMP Risk Scores, 110 California Law Review 47 ( 2022):
    • “Prescription drug monitoring program (PDMP) predictive surveillance platforms were designed for—and funded by—law enforcement agencies. PDMPs use proprietary algorithms to determine a patient’s risk for prescription drug misuse, diversion, and overdose. The proxies that PDMPs utilize to calculate patient risk scores likely produce artificially inflated scores for marginalized patients, including women and racial minorities with complex, pain-related conditions; poor, uninsured, under-insured, and rural individuals; and patients with co-morbid disabilities or diseases, including substance use disorder and mental health conditions.
      “Law enforcement conducts dragnet sweeps of PDMP data to target providers that the platform characterizes as “overprescribers” and patients that it deems as high risk of drug diversion, misuse, and overdose. Research demonstrates that PDMP risk scoring coerces clinicians to force medication tapering, discontinue prescriptions, and even abandon patients without regard for the catastrophic collateral consequences that attend to those treatment decisions. […]
      “[T]he FDA should exercise its regulatory authority over PDMP risk scoring software to ensure that such predictive diagnostic tools are safe and effective for patients.”

Profiling/Social Scoring

  • General framework: Mirelle Hildebrant, Profiling And The Rule of Law (2009), 1 Identity in Information Society (2008).
  • (*) Highly recommended: Chinese deployment of social scoring, which is forecast to use a lot of AI: Xin Dai, Toward a Reputation State: The Social Credit System Project of China (June 10, 2018).
    • “The ongoing “reputation revolution,” as it refers to the widespread adoption of reputation tracking and analytics, is now seeing its frontier shifting from the “reputation society” to the “reputation state”: Government authorities, following the steps of private sector players, now increasingly seek to use reputation mechanisms and technologies in the spheres of law and governance. In 2014, China’s national government unveiled a massive policy plan for building a “social credit system” in China (hereinafter the “SCSP,” which stands for the “social credit system project”). In designing the SCSP, the Chinese government envisioned that reputation mechanisms such as blacklisting, rating, and scoring be used to tackle many of the country’s by far intractable governance and regulatory problems in its social and economic realms, ranging from fraudulent behaviors in the marketplace, to difficulties in enforcing court judgments, to corruption in the government, and to professional malpractices and even plagiarism in scholarship. Although Western governments have also explored using reputation technologies and mechanisms in governance and social control contexts, China’s SCSP, as a comprehensive, singularly framed, and actively implemented policy project, knows no equivalent elsewhere in the world, and thus offers a unique case for studying a rise of the reputation state as a broader trend.
      This Article provides a systematic descriptive and analytical study on the SCSP. It draws on the general theoretical discussions on the reputation state to identify four key strategies, categorized in this Article as “regulation,” “searchlight,” “incorporation,” and “institutionalization,” which government actors may plausibly employ as they seek to use reputation for law and public administration. While Western experiences already illustrate ways in which these strategies may actually work in practical contexts, China’s SCSP takes the reputation state into the uncharted territory as it explores novel possibilities. Upon examining the major policy initiatives included, this Article finds that the SCSP overall represents the effort of China’s developmental state to tackle its persistent governance problems with new tools.”
  • A much shorter and summary version of the above appears as Xin Dai, Enforcing Law and Norms for Good Citizens One View of China’s Social Credit System Project, 63 Development 38 (2020).
  • Karen Hao, The coming war on the hidden algorithms that trap people in poverty, MIT Tech. Rev.  (Dec. 4, 2020):
    • “A growing group of lawyers are uncovering, navigating, and fighting the automated systems that deny the poor housing, jobs, and basic services.”
  • The Use of Pretrial “Risk Assessment Instruments”: A Shared Statement of Civil Rights Concerns (July 30, 2018). (Background info here.)
  • ACLU Alleges AI Algorithm is a Sham: Complaint in Valesaca v. Decker (S.D.N..Y. Case No. 1:20-cv-01803 , filed Feb. 28, 2020) (note especially pages 5-8).
  • James P. de Haan, Preventing #MeToo: Artificial Intelligence, the Law, and Prophylactics, 38 Minn. J. Law & Ineq. 69 (2020).
    • “Sexual harassment law is failing the harassed. It was designed to prevent harassment but cannot protect a victim until they are actually harmed. [… A]rtificial intelligence could be the key to this problem. As long as the program is tailored to consider subjective experience, a company could step in and prevent harm before it ever occurs. Such a system could even capture, organize, and centralize all of the rumors and complaints that swirl around a company.
      “The tricky parts are what employers do with the information and how employees view the system. Employers should not take any adverse action unless harassment actually occurs. These warnings should instead simply prompt an investigation and discussion. Instead of seeing these reports or scores as accusations, they should be framed the same as credit scores. Having a low credit score does not make you a bad person—it simply changes your risk profile. And once you know your score is low, either because you were called into human resources or you received a copy of the report, you can work on making it better. Small, incremental adjustments in attitude and perception can go a long way in promoting empathy and cutting back on harassment.”
  • Coalition for Critical Technology, Abolish the #TechToPrisonPipeline, Medium (June 6, 2020).
  • (*) Mikella Hurley & Julius Adebayo, Credit Scoring in the Era of Big Data, 18 Yale J. L & Tech 148 (2016). We are particularly concerned with sections III.B, III.C & V but the other sections will be helpful if you don’t have a relevant background.
    • “The credit-scoring industry has experienced a recent explosion of start-ups that take an “all data is credit data” approach, combining conventional credit information with thousands of data points mined from consumers’ offline and online activities. Big-data scoring tools may now base credit decisions on where people shop, the purchases they make, their online social media networks, and various other factors that are not intuitively related to creditworthiness. While the details of many of these products remain closely guarded trade secrets, the proponents of big-data credit scoring argue that these tools can reach millions of underserved consumers by using complex algorithms to detect patterns and signals within a vast sea of information. While alternative credit scoring may ultimately benefit some consumers, it also poses significant risks.
      “Credit-scoring tools that integrate thousands of data points, most of which are collected without consumer knowledge, create serious problems of transparency. Consumers have limited ability to identify and contest unfair credit decisions, and little chance to understand what steps they should take to improve their credit. Recent studies have also questioned the accuracy of the data used by these tools, in some cases identifying serious flaws that have a substantial bearing on lending decisions. Big-data tools may also risk creating a system of “creditworthinessby association”in which consumers’ familial, religious, social, and other affiliations determine their eligibility for an affordable loan. These tools may furthermore obscure discriminatoryand subjective lending policies behind a single “objective” score. Such discriminatoryscoring may not be intentional; instead, sophisticated algorithms may combine facially neutral data points and treat them as proxies for immutable characteristics such as race or gender, thereby circumventing existing non-discrimination laws and systematically denying credit access to certain groups. Finally, big-data tools may allow online payday lenders to target the most vulnerable consumers and lure them into debt traps.
      “Existing laws are insufficient to respond to the challenges posed by credit scoring in the era of big-data. While federal law prohibits certain forms of discriminationin lending and ensures that consumers have limited rights to review and correct errors in their credit reports, these laws do not go far enough to make sure that credit-scoring systems are accurate, transparent, and unbiased. Existing laws also do little to prevent the use of predatory scoring techniques that may be geared to target vulnerable consumers with usurious loans.
      “This article, which has been developed as part of a collaborative effort between lawyers and data scientists, explores the problems posed by big-data credit-scoringtools and analyzes the gaps in existing laws. It also sets out a framework for comprehensive legislative change, proposing concrete solutions that would promote innovation while holding developers and users of credit-scoringtools to high standards of accuracy, transparency,fairness, and non-discrimination.”
  • A European View: Antje von Ungern-Sternberg, Discriminatory AI and the Law – Legal standards for algorithmic profiling, Draft Chapter (Aug. 17, 2021) in Responsible AI (Silja Vöneky, Philipp Kellmeyer, Oliver Müller and Wolfram Burgard eds.).
  • (*) Nydia Remolina, The Role of Financial Regulators in the Governance of Algorithmic Credit Scoring (March 15, 2022).
    • “The use of algorithmic credit scoring presents opportunities and challenges for lenders, regulators, and consumers. This paper provides an analysis of the perils of the use of AI in lending, such as the problem of discrimination in lending markets that use algorithmic credit scoring, the limited control financial consumers have over the outcomes of AI models due to the current scope of data protection law and financial consumer protection law, the financial exclusion caused by the lack of data from traditionally excluded groups, the regulatory arbitrage in lending markets, and the little oversight of the use of alternative data for algorithmic credit scoring. I provide a comparative overview of the current approaches to algorithmic credit scoring in different jurisdictions such as Kenya, the European Union, the United Kingdom, Hong Kong, Singapore, the United States, Australia, and Brazil to argue that these models do not solve the problems illustrated. To address the problems of algorithmic credit scoring and effectively protect consumers as end users of these models, and therefore, promote access to finance, this paper proposes a set of tools and solutions for financial regulators. First, a testing supervisory process for algorithmic credit scoring models will effectively promote fair lending. Second, to create a right to know the outcomes of the algorithm, including opinion data and inferences, to promote digital self-determination. This solution empowers consumers affected by algorithmic credit scoring so they can verify and challenge the decision made by the AI model. Third, to level the playing field between financial institutions and other lenders that use algorithmic credit scoring. Fourth, to use the sandbox as a test environment for lenders to create data of traditionally excluded groups in a controlled environment. And finally, to foster data sharing and data portability initiatives for credit scoring through open finance schemes in an environment controlled by the financial regulatory authority. Better algorithms, unbiased data, AI regulation, fair lending regulation and AI governance guidelines do not solve the perils of the use of AI for creditworthiness assessment. In contrast, these proposals aim to solve the problems of algorithmic credit scoring in any jurisdiction.”

Optional Readings On Algorithmic Hiring Tools / Employee Ranking Tools

  • Lydia X. Z. Brown, Ridhi Shetty, Michelle Richardson, CDT, Report – Algorithm-driven Hiring Tools: Innovative Recruitment or Expedited Disability Discrimination? (Dec. 3, 2020) — which is a summary of CDT, Algorithm-driven Hiring Tools: Innovative Recruitment or Expedited Disability Discrimination?(2020).
  • (*) Ifeoma Ajunwa, An Auditing Imperative For Automated Hiring Systems, 34 Harv. J.L. & Tech (2021).
    • “[T]he goal of this Article is neither to argue against or for the use of automated decision-making in employment, nor is it to examine whether automated hiring systems are better than humans at making hiring decisions. For antidiscrimination law, the efficacy of any particular hiring system is a secondary concern to ensuring that any
      such system does not unlawfully discriminate against protected categories.14 Therefore, my aim is to suggest collaborative regulatory regimes for automated hiring systems that will ensure that any benefits of automated hiring are not negated by (un)intended outcomes, such as unlawful discrimination on the basis of protected characteristics. […]
      “I posit an auditing imperative for automated hiring systems. Building on Professor Kim’s essay, I argue not just that the law allows for the audits, but that the spirit of antidiscrimination law requires it.
      “… [I] argue that employment antidiscrimination law imposes an affirmative duty of care on employers to ensure that they are avoiding practices that would constrain equal opportunity in employment. Thus, I argue, that when employers choose to use algorithmic systems, fulfilling their duty of care entails regular audits of those systems. In turn, audits necessitate the record-keeping and data retention mandates that I also propose in this Article.”
  • Natasha Lomas, IItalian court rules against ‘discriminatory’ Deliveroo rider-ranking algorithm, TechCrunch (Jan. 4, 2021). This is a very short summary of one part of a complex case; a fuller story, maybe more than you want to know unless you are a comparative labor lawyer, is at Antonio Aloisi, Demystifying Flexibility, Exposing the Algorithmic Boss: A Note on the First Italian Case Classifying a (Food-Delivery) Platform Worker as an Employee, Comp. Labor L. & Pol’l J. (2021) .
  • (*) Brittany Kammerer, Hired by a Robot: The Legal Implications of Artificial Intelligence Video Interviews and Advocating for Greater Protection of Job Applicants, 107 Iowa L. Rev. 817 (2022).
    • “Employers have incorporated Artificial Intelligence (“AI”) into the hiring process through asynchronous video interviews that evaluate candidates’ characteristics and fit for the position through vocal cues and facial and other non-verbal gestures. While the AI is able to eliminate normal human bias in the interview process it leads to other issues such as algorithmic bias and data privacy issues. The Al vendors and employers have mainly been left to self-regulate except for the recent enactment of Illinois’s Artificial Intelligence Video Interview Act. Other state legislatures should follow Illinois’s path and create their own legislation that incorporate the benefits of Illinois’s statute while expanding its scope to further protect applicants. For states to properly protect job applicants, the legislation should include provisions on informed consent from the applicant to participate in the interview, data privacy rights given to the applicants, and accountability systems to protect against algorithmic bias.”

Notes & Questions

  1. Is it possible to create a racially biased ML system — say for facial recognition — by accident/mistake? Even if all the people involved are not racially biased?
    1. If so, how realistic is this danger?
    2. What are its most likely causes?
    3. How do we guard against it?
  2. How does the lack of transparency of current deployed ML tools interact with the risk of bias?
  3. If researchers carefully go through their ML data or model and remove any direct references to race, does this mean that they likely have solved any racial bias problem? Why?
  4. Barocas & Selbst
    1. The authors are very concerned about the “masking” problem — be prepared to articulate what this is and why it matters.
    2. Suppose for the sake of the argument we decide it’s not plausible or practicable to think we can fix them — what then?
      1. Ban the practice entirely?
      2. Live with the (potential) harms?
  5. Suppose we decide that all people subject to ML credit scoring systems must be told which factors are primarily responsible for their scores (to the extent we can, see XAI….). Won’t users learn to ‘game’ the system if they know what is being monitored–which is what undermined the utility of FICO scores? How do we deal with this?
  6. The Kate Crawford article in the optional reading  is difficult, but it raises some important and long-running questions: Do algorithms have a politics? Is technology neutral? Or is the politics of a technology socially determined?
    1. The three views could be summarized as:
      1. NO. Algorithms are just things. People decide to make them, people decide how to use them. (“Guns don’t kill people….”)
      2. Sort of. Particular features of a device or system create an effective means for establishing power and authority, but social actors are the key agents in influencing how those features will be shaped over time.
      3. YES. Technical things are inherently political, unavoidably linked to institutionalized patterns of power and authority. For example, the use of AI in Amazon Warehouses enables and reflects a politics of worker-manager relations, and more generally a view of the working class in society.
    2. Given what you know so far, which of those views do you think is most correct?
    3. Optional: Do data sets have politics? See (*) Morgan Klaus Scheurman et al, Do Datasets Have Politics? Disciplinary Values in Computer Vision Dataset Development (Sept 16, 2021).
  7. Mayson identifies three strategies that are commonly proposed to deal with biased output from algorithms.
    1. They are:
      1. the exclusion of input factors that correlate closely
        with race;
      2. adjustments to algorithmic design to equalize predictions across racial lines; and
      3. rejection of algorithmic methods altogether.
    2. She argues that all of these solutions are misguided.
      1. Excluding race, or things highly correlated with race, as an input variable may sometimes hurt minorities;
      2. So-called “algorithmic affirmative action” (“an intervention to produce statistical parity, equal false-positive rates, or equal  false-negative rates”) will “fall short” even if the disparity is due to underlying reality, because “unlikely to reduce the net burden of predictive regimes on communities of color.” (P. 2271);
        1. Equalizing false positive/negative error rates undermines overall test accuracy. Is that worth it?  [Optional: A detailed explanation of why you cannot equalize false positive/negative rates if a test is equally accurate for two populations with substantially different profiles as to what the test seeks to measure — be it disease or recidivism — can be found in (*) Deborah Hellman, Measuring Algorithmic Fairness, 106 Va L. Rev. 811 (2020).]
        2. Your answer to the above may differ depending on the circumstances.  If so, can you give examples of why you think it is worth it, and when you would not?
      3. Rejecting the use of AI means returning to the status quo ante: “subjective risk assessment [which] is very likely to be worse.”
        1. Is that the only choice?
        2. Is some sort of ‘centaur’ strategy a useful and viable option?
  8. In Xiang’s terminology, what are the possible consequences for persons subjected to model deployment of persons like them being “seen,” unseen,” and “mis-seen” in model development?

    1. To the extent that some of these consequences are beneficial, who benefits?
    2. To the extent that some of these consequences are harmful, which of them have legal remedies? What additional legal remedies might be appropriate (see in particular Part VII. E.)
    3. If the problem in a facial recognition system is difficulty correctly identify members of a given minority group, will adding more pictures of that group to the training set be likely to fix the problem?  What makes this difficult, what to watch out for? (Hint: one of the issues–not the only one–is “predatory inclusion”–what’s that?)
    4. If privacy law protects you (sometimes) against being “seen” and anti-discrimination law protects you (sometimes) against being harmfully sorted by race or other protected class and being ‘unseen’ protects you (somewhat) against some types of (especially mass) surveillance, which are the best and worst outcome
      1. for an individual person?
      2. for a minority community?
      3. for the community as a whole?
  9. Consider facial recognition again; for some technologies we base their legitimacy on some idea of consent.
    1. But who consents to have their photo in facial ID training data?
    2. Who consents to being surveilled by facial recognition cameras in public or in stores?
    3. Is the issue of consent relevant when the police and other government bodies are involved? If not, why not?
  10. Suppose we discover bias in a predictive profiling ML (e.g. for who is a good risk for a private bank loan, or qualifies for government aid) — but find that it is less biased than the baseline people it replaced. Is it OK to use the ML?
    1. To what extent do we only care about the total # of people effected?
      1. To what extent might we care about who was effected or how?
      2. What other considerations (if any) should one take into account?
    2. If the ML continually updates, how often do we need to retest to look out for risks of overfitting and other dangers?
      1. What is “overfitting” anyway?
  11. The “Objective or Biased” reading describes some excellent testing of an AI employee rating program. Do keep in mind, however, that the problems detected are a result of the (boneheaded?) way the AI was trained. Different training data would  have different problems…The problem is that bad algorithms get used – this one got a lot of funding from big companies.

Class 11: AI & Algorithmic Justice (II)

  1. Pages 701-712 of Solon Barocas & Andrew D. Selbst, Big Data’s Disparate Impact, 104 Calif. L. Rev. 671 (2016).
  2. Pages 1039-1045 of Anupan Chander, The Racist Algorithm?, 115 Mich. L. Rev. 1023 (2017) (review of Frank Pasquale, The Black Box Society (2015)).
  3. Thomas Krendl Gilbert et al., U. Berkeley Center for Long-Term Cybersecurity, Choices, Risks, and Reward Reports: Charting Public Policy for Reinforcement Learning Systems 1-26, 33-41, 44-51 (Feb. 2022). Note – the Appendix pp. 52-58 is good too, but I’m not assigning it.
  4. A dissenting voice: Stewart Baker, Stealth Quotas, Reason.com (Oct. 10, 2022).

Optional Readings

Issues in Identification of Bias (and in Possible Remedies)

  • Data & Society, Assembling Accountability: Algorithmic Impact Assessment for the Public Interest (Jan 2021).
  • (*) Deborah Hellman, Measuring Algorithmic Fairness, 106 Va L. Rev. 811 (2020):
    • Algorithmic decision making is both increasingly common and increasingly controversial. Critics worry that algorithmic tools are not transparent, accountable, or fair. Assessing the fairness of these tools has been especially fraught as it requires that we agree about what fairness is and what it requires. Unfortunately, we do not. The technological literature is now littered with a multitude of measures, each purporting to assess fairness along some dimension. Two types of measures stand out. According to one, algorithmic fairness requires that the score an algorithm produces should be equally accurate for members of legally protected groups—blacks and whites, for example. According to the other, algorithmic fairness requires that the algorithm produce the same percentage of false positives or false negatives for each of the groups at issue. Unfortunately, there is often no way to achieve parity in both these dimensions. This fact has led to a pressing question. Which type of measure should we prioritize and why?
      This Article makes three contributions to the debate about how best to measure algorithmic fairness: one conceptual, one normative, and one legal. Equal predictive accuracy ensures that a score means the same thing for each group at issue. As such, it relates to what one ought to believe about a scored individual. Because questions of fairness usually relate to action, not belief, this measure is ill-suited as a measure of fairness. This is the Article’s conceptual contribution. Second, this Article argues that parity in the ratio of false positives to false negatives is a normatively significant measure. While a lack of parity in this dimension is not constitutive of unfairness, this measure provides important reasons to suspect that unfairness exists. This is the Article’s normative contribution. Interestingly, improving the accuracy of algorithms overall will lessen this unfairness. Unfortunately, a common assumption that anti-discrimination law prohibits the use of racial and other protected classifications in all contexts is inhibiting those who design algorithms from making them as fair and accurate as possible. This Article’s third contribution is to show that the law poses less of a barrier than many assume.
  • Pauline T. Kim, Auditing Algorithms for Discrimination, 166 University of Pennsylvania Law Review Online 189 (2017).
  • (*) Jane R. Yakowitz Bambauer, Tal Zarsky, and Jonathan Mayer, When a Small Change Makes a Big Difference: Algorithmic Fairness Among Similar Individuals, 55 UC Davis L. Rev. 2337 (2022).
    • “If a machine learning algorithm treats two people very differently because of a slight difference in their attributes, the result intuitively seems unfair. Indeed, an aversion to this sort of treatment has already begun to affect regulatory practices in employment and lending. But an explanation, or even a definition, of the problem has not yet emerged. This Article explores how these situations—when a Small Change Makes a Big Difference (SCMBDs)—interact with various theories of algorithmic fairness related to accuracy, bias, strategic behavior, proportionality, and explainability. When SCMBDs are associated with an algorithm’s inaccuracy, such as overfitted models, they should be removed (and routinely are.) But outside those easy cases, when SCMBDs have, or seem to have, predictive validity, the ethics are more ambiguous. Various strands of fairness (like accuracy, equity, and proportionality) will pull in different directions. Thus, while SCMBDs should be detected and probed, what to do about them will require humans to make difficult choices between social goals. “

Analysis, Some More Realistic than Others, of Proposals to Fix Bias

  • (*) Andrew D. Selbst & Solon Barocas, Unfair Artificial Intelligence: How FTC Intervention Can Overcome the Limitations of Discrimination Law, 171 U. Penn. L. Rev. 1023 (2023):
    • The Federal Trade Commission has indicated that it intends to regulate discriminatory AI products and services. This is a welcome development, but its true significance has not been appreciated to date. This Article argues that the FTC’s flexible authority to regulate “unfair and deceptive acts and practices” offers several distinct advantages over traditional discrimination law when applied to AI. The Commission can reach a wider range of commercial domains, a larger set of possible actors, a more diverse set of harms, and a broader set of business practices than are currently covered or recognized by discrimination law. For example, while most discrimination laws can address neither vendors that sell discriminatory software to decision makers nor consumer products that work less well for certain demographic groups than others, the Commission could address both. The Commission’s investigative and enforcement powers can also overcome many of the practical and legal challenges that have limited both plaintiffs’ success under discrimination law and other enforcement agencies’ efficacy. The Article demonstrates that the FTC has the existing authority to address the harms of discriminatory AI. While the FTC has announced that it is considering rulemaking to tackle the problem, this article examines the additional possibility of an enforcement-led approach based on its regulation of data security.
  •  Richard Berk et al, Fairness in Criminal Justice Risk Assessments: The State of the Art (May 30, 2017). (“Except in trivial cases, it is impossible to maximize accuracy and fairness at the same time, and impossible simultaneously to satisfy all kinds of fairness. In practice, a major complication is different base rates across different legally protected groups. There is a need to consider challenging tradeoffs.”)
  • Talia B. Gillis, False Dreams of Algorithmic Fairness: The Case of Credit Pricing (February 18, 2020).
    • “[P]ersonalized credit pricing relies on characteristics or inputs that reflect preexisting discrimination or disparities. Fair lending law has traditionally addressed this concern through input scrutiny, either by limiting the consideration of protected characteristics or by attempting to isolate inputs that cause disparities.
      “But input scrutiny is no longer effective. …
      “I argue that the shortcomings of current approaches mean that fair lending law must make the necessary, though uncomfortable, shift to outcome-focused analysis. When it is no longer possible to scrutinize inputs, outcome analysis provides a way to evaluate whether a pricing method leads to impermissible disparities. This is true not only under the legal doctrine of disparate impact, which has always cared about outcomes, but also,under the doctrine of disparate treatment, which historically has avoided examining disparate outcomes. Now, disparate treatment too can no longer rely on input scrutiny and must be considered through the lens of outcomes. I propose a new framework…”
  • Holly Hoch, Corinna Hertweck, Michele Loi & Aurelia Tamò-Larrieux, Discrimination For The Sake Of Fairness By Design And Its Legal Framework (Draft, Jan. 2021):
    • “There is broad consensus among computer scientists working on this issue that such discrimination can only be avoided by intentionally collecting and consciously using sensitive information about demographic features like sex, gender, race, religion etc. Companies implementing such algorithms might, however, be wary of allowing algorithms access to such data as they fear legal repercussions, as the promoted standard has been to omit protected attributes, otherwise dubbed “fairness through unawareness”. This paper asks whether such wariness is justified in light of EU data protection and anti-discrimination laws. In order to answer this question, we introduce a specific case and analyze how EU law might apply when an algorithm accesses sensitive information to make fairer predictions. We review whether such measures constitute discrimination, and for who, arriving at different conclusions based on how we define the harm of discrimination and the groups we compare. Finding that several legal claims could arise regarding the use of sensitive information, we ultimately conclude that the proffered fairness measures would be considered a positive (or affirmative) action under EU law. As such, the appropriate use of sensitive information in order to increase the fairness of an algorithm is a positive action, and not per se prohibited by EU law.”
  • Cynthia Rudin, Caroline Wang, Beau Coker, The Age of Secrecy and Unfairness in Recidivism Prediction, 2.1 Harv. Data Science Review (Winter 2020).
    • “In our current society, secret algorithms make important decisions about individuals. There has been substantial discussion about whether these algorithms are unfair to groups of individuals. While noble, this pursuit is complex and ultimately stagnating because there is no clear definition of fairness and competing definitions are largely incompatible. We argue that the focus on the question of fairness is misplaced, as these algorithms fail to meet a more important and yet readily obtainable goal: transparency. As a result, creators of secret algorithms can provide incomplete or misleading descriptions about how their models work, and various other kinds of errors can easily go unnoticed. By trying to partially reconstruct the COMPAS model—a recidivism risk-scoring model used throughout the criminal justice system—we show that it does not seem to depend linearly on the defendant’s age, despite statements to the contrary by the model’s creator. This observation has not been made before despite many recently published papers on COMPAS. Furthermore, by subtracting from COMPAS its (hypothesized) nonlinear age component, we show that COMPAS does not necessarily depend on race other than through age and criminal history. This contradicts ProPublica’s analysis, which made assumptions about age that disagree with what we observe in the data. In other words, faulty assumptions about a proprietary model led to faulty conclusions that went unchecked until now. Were the model transparent in the first place, this likely would not have occurred. We demonstrate other issues with definitions of fairness and lack of transparency in the context of COMPAS, including that a simple model based entirely on a defendant’s age is as ‘unfair’ as COMPAS by ProPublica’s chosen definition. We find that there are many defendants with low risk scores but long criminal histories, suggesting that data inconsistencies occur frequently in criminal justice databases. We argue that transparency satisfies a different notion of procedural fairness by providing both the defendants and the public with the opportunity to scrutinize the methodology and calculations behind risk scores for recidivism”
    • On transparency see also, Charlotte A. Tschide, Legal Opacity: Artificial Intelligence’s Sticky Wicket, 106 Iowa L. Rev. Online 126 (2021).
  • EDRi, If AI is the problem, is debiasing the solution? (Sept. 21, 2021 ) (which is a teaser for a much longer report,
    • (*) Agathe Balayn and Seda Gürses, EDRi, Beyond Debiasing Regulating AI and its inequalities (2021):
      • “The focus on ‘debiasing AI’ as the primary policy response to discriminatory AI may in fact serve to promote more uptake of AI systems that fundamentally discriminate, and worsen outcomes at individual, collective and societal levels. The authors of this report set out for us the boundaries and limits of what debiasing techniques in computer science can actually achieve, but also the broader, social, political and economic factors that technocentric approaches to AI and discrimination overlook. We are extremely grateful for their guidance, and hope this study will be useful to civil society and policymakers invested in structural responses to the harms AI can bring.
        “We should not allow techno-centric approaches to obfuscate more radical responses to the broad, structural harms emanating from AI systems. “
  • Jason R. Bent, Is Algorithmic Affirmative Action Legal? 108 Geo. L.J. 803 (2020). This article argues that ‘algorithmic affirmative action’ is legal. To make the argument work, you have to agree with its reading of the (allegedly less-than-nuclear) effects of Ricci; I wasn’t sure about that…
  • (*) Adamantia Rachovitsa and Niclas Johann, The Human Rights Implications of the Use of AI in the Digital Welfare State: Lessons Learned from the Dutch Syri Case, 22 Human Rights Law Review (2022):
    • “The article discusses the human rights implications of algorithmic decision-making in the social welfare sphere. It does so against the background of the 2020 Hague’s District Court judgment in a case challenging the Dutch government’s use of System Risk Indication—an algorithm designed to identify potential social welfare fraud. Digital welfare state initiatives are likely to fall short of meeting basic requirements of legality and protecting against arbitrariness. Moreover, the intentional opacity surrounding the implementation of algorithms in the public sector not only hampers the effective exercise of human rights but also undermines proper judicial oversight. The analysis unpacks the relevance and complementarity of three legal/regulatory frameworks governing algorithmic systems: data protection, human rights law and algorithmic accountability. Notwithstanding these frameworks’ invaluable contribution, the discussion casts doubt on whether they are well-suited to address the legal challenges pertaining to the discriminatory effects of the use of algorithmic systems. “
  • Another view: Naomi Appelman, Ronan Ó Fathaigh & Joris van Hoboken, Social Welfare, Risk Profiling and Fundamental Rights: The Case of SyRI in the Netherlands, 12 J. Int’l Prop. Info. Tech & E-Comm. L. (JIPITEC) (2021):
    • “The article critically analyses the SyRI judgment from a fundamental rights perspective, including by examining how the Court brought principles contained in the General Data Protection Regulation within the rubric of Article 8 ECHR as well as the importance the Court attaches to the principle of transparency under Article 8 ECHR. Finally, the article discusses how the Dutch government responded to the judgment. and discusses proposed new legislation, which is arguably more invasive, with the article concluding with some lessons that can be drawn for the broader policy and legal debate on ADM systems used by public authorities.”

An Industry View

Notes & Questions

  1. Chander suggests that the real issue is not the opacity of ‘black box’ systems — which we’d fix by making them more ‘transparent’. Instead he argues the issue is what results they produce, and the first step is to measure outcomes to see if they have a bias.
    1. But what do we measure against?
    2. Hurley & Adebayo (an optional reading in the previous set of readings) acknowledge that users will learn to ‘game’ the system if they know what is being monitored–which is what undermined the utility of FICO scores. Is Chander’s suggestion subject to the same risk?
    3. UPDATE: To what extent if any is Chander’s proposal no more than inviting us to use an impressionistic version of the sort of XAI that Boris Babic & I. Glenn Cohen, The Algorithmic Explainability ‘Bait and Switch’, so mercilessly criqued.?
      1. Even if it is, might it be enough to satisfy non-discrimination concerns? Or maybe just legal requirements, which is almost the same thing?
      2. And if so, maybe XAI isn’t so bad?  At least sometimes?  But if so, how do we tell?
  2. Choices, Risks, and Reward Reports seems very concerned about misalignment issues.
    1. How does the Choices, Risks, and Reward Reports paper think these problems will happen?
    2. How plausible are the proposed solution?
      1. How much work is involved?
      2. Who should pay for it?
    3. Will fear of liability compel these solutions?
      1. If not, should they be required by legislation?
      2. What would be the downside?
    4. What’s not addressed?
  3. Stewart Baker – a partner at Steptoe in DC, and former General Counsel of the NSA under Clinton, then first Assistant Secretary for Policy at DHS under GWB – argues that de facto quotas are the inevitable outcome of attempts to stamp out algorithmic bias because we don’t know what the “right” decisions are. As a result, the least-offensive way to demonstrate non-discrimination will be to have outcomes mirror population shares relative to whatever metric (age, gender, race, zip code) we are trying to avoid discriminating about.  Are you persuaded? Why?  [Optional: Stewart Baker takes on claims of bias in facial recognition systems]
  4. A theme in at least most of the readings in this section is that model design — what you ask it to optimize, how you do it, and subtle factors along the way, is at least as dangerous as bias in data sets.  Plus both data set issues and model issues can be hard to spot (and even harder to prove in a discrimination lawsuit!)
    1. Which of the problems that we have identified are addressed by Senators Ron Wyden and Cory Booker, and Rep. Yvette Clarke’s The Algorithmic Accountability Act of 2022: summary and section-by-section analysis (optional: [UPDATED] full text of Algorithmic Accountability Act of 2023).
    2. How the proposed solutions in the statute compare with the ones we’ve read about?
    3. Will this do the job?
      1. If not, what else is needed?
      2. Of those, what is possible?
  5. If we cannot solve these issues today, should we ban AI deployment until we can?

Class 12: AI & IP (1) — Model Training & Copyright

I apologize for the very high page count in this section.  All of a sudden the subject has exploded in the courts. If pressed for time, just skim the material in the section on “Ongoing model training cases in the US courts” but do read pages 1-4 of the A.T. v. OpenAI complaint.

  1. Section 5.4.1 (“Training”) (pages 146-47) of Rishi Bommasani et al,, Stanford HA, On the Opportunities and Risks of Foundation Models (2023).
  2. Does the AI Training Process Infringe Copyright in Other Works? (Pages 3-4) of Congressional Research Service, Generative Artificial Intelligence and Copyright Law (Sept. 29, 2023).
  3. MAI_v._Peak_991_F.2d_511, 991 F.2d 511 (9th Cir. 1993).
  4. U.S. Copyright Office, Notice of Inquiry re: Artificial Intelligence and Copyright, 88 Fed. Reg. 59942 (Aug. 30, 2023).
  5. Pages 40 (last 2 lines) – 86 of Carys Craig & Ian Kerr, The Death of the AI Author, 52 Ottawa L. Rev. 31 (2021).
    1. Please note that while ostensibly (and in fact) an article about copyright protection for AI-generated works (the subject of next class!), this article also encodes a subtle but powerful argument about the AI use of training data.  Can you decode it?
    2. We will return to the ostensible subject of the article–whether there is such a thing as AI authorship, and more generally who if anyone owns the rights to the output of an AI–in our next class.
  6. Ongoing model training cases in the US courts and agencies:
    1. Kieran McCarthy, How Can AI Models Legally Obtain Training Data?–Doe 1 v. GitHub, Technology & Marketing Blog (June 17, 2023)
      1. This is a summary of Doe v. Github, Order Granting In Part And Denying In Part Motions To Dismiss, 2023 WL 3449131 (N.D. Cal., May 11, 2023), which you are invited to read in full, but I’m not assigning it.
    2. Tremblay v. Open AI, Order Granting In Part And Denying In Part The Motions To Dismiss (N.D. Cal. Feb. 12, 2024).
    3. New York Times v. Microsoft
      1. Paragraphs 26-57 and 63-70, 82-90,  93, 98-103 , 108-111, 124-128, 136-138, 143-144, 154-155  of Complaint, New York Times Co. v. Microsoft (S.D.N.Y. Dec. 27, 2023).
      2. Based on the above facts, the complaint has seven counts (links are optional, in case you are not familiar with the statute or doctrine):
        1. Copyright Infringement (17 U.S.C. § 501)
        2. Vicarious Copyright Infringement (against Microsoft and Open AI)
        3. Contributory Copyright Infringement specifically against Microsoft
        4. Contributory Copyright Infringement against all defendants
        5. Digital Millennium Copyright Act – Removal of Copyright Management Information (17 U.S.C. § 1202)
        6. Common Law Unfair Competition by Misappropriation.
          1. This is a bit of a rabbit hole, but (only!) if you are interested you could compare W. Edward Sell, The Doctrine of Misappropriation in Unfair Competition, 11 Vand. L. Rev. 483 (1958) with
            Gary Myers, The Restatement’s Rejection of the Misappropriation Tort: A Victory for the Public Domain, 47 S. Car. L. Rev. 673 (1996).
        7. Trademark Dilution, 15 U.S.C. §1125(c).
      3. OpenAI’s public rebuttal to the complaint: OpenAI and journalism (Jan. 8, 2024).
    4. Pages 1-4 of A.T. v. OpenAI, Class Action Complaint (Sept. 5, G2023).  Note the variety of claims and their legal hooks.

Optional

  • (*) Amanda Levendowski, How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem, 93 Wash. L. Rev. 579 (2018).
    • “While legal and computer science scholars have analyzed many sources of bias, including the unexamined assumptions of its often- homogenous creators, flawed algorithms, and incomplete datasets, the role of the law itself has been largely ignored. Yet just as code and culture play significant roles in how AI agents learn about and act in the world, so too do the laws that govern them. This Article is the first to examine perhaps the most powerful law impacting AI bias: copyright.
      “Artificial intelligence often learns to “think” by reading, viewing, and listening to copies of human works. This Article first explores the problem of bias through the lens of copyright doctrine, looking at how the law’s exclusion of access to certain copyrighted source materials may create or promote biased AI systems. Copyright law limits bias mitigation techniques, such as testing AI through reverse engineering, algorithmic accountability processes, and competing to convert customers. The rules of copyright law also privilege access to certain works over others, encouraging AI creators to use easily available, legally low-risk sources of data for teaching AI, even when those data are demonstrably biased. Second, it examines how a different part of copyright law—the fair use doctrine—has traditionally been used to address similar concerns in other technological fields, and asks whether it is equally capable of addressing them in the field of AI bias. The Article ultimately concludes that it is, in large part because the normative values embedded within traditional fair use ultimately align with the goals of mitigating AI bias and, quite literally, creating fairer AI systems.”
  • (*) Mark Lemley & Brian Casey, Fair Learning (Jan. 30, 2020)
    • “It’s not clear that the use of … databases of copyrighted works to build self-driving cars, or to learn natural languages by analyzing the content in them, will be treated as a fair use under current law. … In this Article, we argue that AIs should generally be able to use databases for training whether or not the contents of that database are copyrighted. … Copyright law should permit copying of works for non-expressive purposes. When the defendant copies a work for reasons other than to have access to the protectable expression in that work, fair use should consider under both factors one and two whether the purpose of the defendant’s copying was to appropriate the plaintiff’s expression or just the ideas. We don’t want to allow the copyright on the creative pieces to end up controlling the unprotectable elements.”
  • Sean Flynn et al, Submission to Canadian Government Consultation on a Modern Copyright Framework for AI and the Internet of Things (Sept. 17, 2021).

Notes & Questions

  1. We might begin with a very basic issue: Why have copyright law (or patent law) at all?
    1. It makes sense (maybe) to go back to basics, since our answers to whether and how we want to apply copyright law (or patent law) to things produced by AIs might be shaped by what we think those bodies of law are trying to do.
    2. Consider, in this context, the Craig/Kerr discussion of what copyright is for. They identify two very different strands of justification:
      1. The “deontological” approach (if that word is not familiar to you, here’s a good introductory guide aimed at 1Ls), which they further break down into “potentially oppositional alternatives” they tag as “Lockean” and “Hegelian but sometimes Kantian”.
      2. The “teleological” approach which they say in the US collapses quickly to a utilitarian justification.
    3. Another cleavage in the literature is between the “romantic conception of the author” (an image that seems often to involve a garret) and the reality of corporate authorship/ownership, either by transfer of copyright by that starving artist, or by work-for-hire.
  2. Given these deep philosophical waters, what is a regulator like the USPTO supposed to do?
  3. Do you buy this story: The Copyright story is as an example of what happens when law meets new tech:
    1. Lawyers try to fit the new thing into existing categories
    2. Other lawyers use the new tech as an opportunity to rethink the existing categories
    3. Meanwhile we have to deal with some practical questions such as:
      1. To what extent old theories should control new outcomes
      2. To the extent that we don’t use old theories for new stuff, should we use the new theories for the old stuff?
  4. The Peak decision above  is controversial. Commentators have suggested that the 2nd Circuit and the Federal Circuit have issued contrary opinions on whether a RAM copy is an infringing copy, which it is argued creates a circuit split. (I personally am not sure if I read the cases that way.)  That said, one complexity is that the license in Peak limited rights that might otherwise have existed if the defendant had a legitimate copy.
    1. Assuming Peak is the law, does this have implications for how we train AIs?
    2. If Peak is a problem for AI training, should it be overruled or limited not to reach the training process?
  5. The big practical question: Does the decision in Tremblay v. Open AI spell doom for New York Times v. Microsoft and A.T. v. OpenAI, not to mention many of the cases in the next note?  Why or why not?
  6. A non-exhaustive list of recent generative AI lawsuits complied in January 2024:
    New York Times v. OpenAI (S.D.N.Y.) (news copyright)
    Sancton v. OpenAI (S.D.N.Y.) (author copyright)
    Authors Guild et al. v. OpenAI (S.D.N.Y.) (author copyright)
    Chabon v. OpenAI (N.D. Cal.) (author copyright)
    AT et al. v. OpenAI (N.D. Cal.) (web scraping & consumer privacy)
    Walters v. OpenAI (N.D. Ga.) (AI hallucinations and defamation)
    Silverman et al. v. OpenAI (N.D. Cal.) (author copyright)
    JL et al. v. Alphabet (N.D. Cal.) (web scraping, consumer privacy, & copyright)
    Tremblay v. OpenAI (N.D. Cal.) (author copyright)
    PM et al. v. OpenAI (N.D. Cal.) (web scraping & consumer privacy)
    Doe 3 et al. v. Github et al. (N.D. Cal.) (code copyright)
    Doe 1 et al. v. Github et al. (N.D. Cal.) (code copyright)
  7. Regardless of how we read the Copyright Act on the issue of whether using web materials for training is legal, what should the rule be? 
    1. Why?
    2. Who should decide?  (If you said “Congress,” and it’s hopelessly gridlocked is there any other legitimate way to make the decision?)

Class 13: AI & IP (2) — Model Usage

Apologies to Patent geeks, but all the patent stuff is in the optional readings.

  1. Section 5.4.2 (“Output liability”)  (page 147) of Rishi Bommasani et al,, Stanford HA, On the Opportunities and Risks of Foundation Models (2023).
  2. Samantha Cole, Musicians Algorithmically Generate Every Possible Melody, Release Them to Public Domain, VICE (Feb. 22, 2020).
  3. 1-5 (edited) & Pages 31 & 50-70 from Katherine Lee, A. Feder Cooper, James Grimmelmann, Talkin’ ‘Bout AI Generation: Copyright and the Generative-AI Supply Chain, Journal of the Copyright Society (forthcoming, 2024).
  4. Review U.S. Copyright Office, Notice of Inquiry re: Artificial Intelligence and Copyright, 88 Fed. Reg. 59942 (Aug. 30, 2023) [from previous class]
  5. Pages 120-28 from Katherine Lee, A. Feder Cooper, James Grimmelmann, Talkin’ ‘Bout AI Generation: Copyright and the Generative-AI Supply Chain, Journal of the Copyright Society (forthcoming, 2024).
  6. An early legal answer (or two?):
    1. Copyright Review Board, Second Request for Reconsideration for Refusal to Register A Recent Entrance to Paradise (Feb. 14, 2022). Note the identity of the requestor’s [is that the term?] lawyer.
    2. Also consider the “monkey selfie” case.  (Optional: full text of Naruto v. Slater, 888 F.3d 418 (9th Cir. 2018).)

Optional

Generally

  • (*) Steven D. Jamar, A Social Justice Perspective on IP Protection for Artificial Intelligence Programs (December 23, 2021) in Cambridge Handbook on Intellectual Property and Social Justice (Steven D. Jamar and Lateef Mtima, eds. forthcoming 2022 ).
    • “We are just at the beginning of the AI revolution and already it is transforming human society. In history only four other technologies have transformed human life as profoundly as AI will: steam power that powered the industrial revolution; electricity that transformed everything; the development of antibiotics and medical understanding and treatment generally; and computers and pre-AI software creating the information age. As we move into the AI world, we face significant challenges with respect to shaping our world. A significant part of that shaping will be determined by how we collectively choose to reward and regulate AI implementations. And a part of that shaping will depend upon how we address intellectual property issues connected to artificial intelligence applications. This chapter sketches some of the concerns and suggests a path forward[.]”
    • Part I c – d is an excellent summary of different types of AI and their limits.  Part II describes issues about who will gain and lose when AI meets IP; Part III has (too brief?) recommendations for a federal regulatory sui generis scheme to respond to the issues in Part II.

Copyright

  • Ted talk on ‘copyrighting all the music’…
  • (*) Hot off the pre-presses: Mark A. Lemley, How Generative AI Turns Copyright Upside Down (Draft Jan 18, 2024):
    • While courts are litigating many copyright issues involving generative AI, from who owns AI-generated works to the fair use of training to infringement by AI outputs, the most fundamental changes generative AI will bring to copyright law don’t fit in any of those categories. The new model of creativity generative AI brings puts considerable strain on copyright’s two most fundamental legal doctrines: the idea-expression dichotomy and the substantial similarity test for infringement. Increasingly creativity will be lodged in asking the right questions, not in creating the answers. Asking questions may sometimes be creative, but the AI does the bulk of the work that copyright traditionally exists to reward, and that work will not be protected. That inverts what copyright law now prizes. And because asking the questions will be the basis for copyrightability, similarity of expression in the answers will no longer be of much use in proving the fact of copying of the questions. That means we may need to throw out our test for infringement, or at least apply it in fundamentally different ways.
      […] There is an old aphorism that says that if you give an infinite number of monkeys typewriters they will eventually create the works of Shakespeare. In theory, in the copyright case of Shakespeare v. Infinite Monkeys, Shakespeare should lose, because the monkeys didn’t copy from him. In practice, courts have generally found a way for similarity to trump everything else. But that will need to change in the world of generative AI. Our basic copyright doctrines don’t fit generative AI. We will struggle to apply the law to comport with the new realities. And it may suggest that copyright itself is a poor fit for the new world of AI-generated works.
  • (*) Highly recommended: Daniel J. Gervais, The Human Cause, in Research Handbooks on Intellectual Property and Artificial Intelligence (R. Abbott, ed, forthcoming Dec. 2022)
    • “The more we rely on machines to perform creative and innovative tasks that are singularly human and important for us as a source of progress–from news reporting, to music and fiction, to inventions in all fields of technology–the more we may shrink the space available for our individual and collective self-realization. To use a simple explanatory metaphor, as we use our creative and inventive ‘muscles’ less and less, they will shrink. Is that what is meant by ‘Progress of Science and Useful Arts’?”
  • (*)  Comment submitted by Professor Daniel Gervais,  to the Copyright Office’s notice of Inquiry on the interface between copyright and Generative AI:
    • There’s a lot here, but here’s a tiny sample: “I would urge the Office to resist the commonly held view that any and all disruption caused by AI companies and especially Big Tech is per se positive and must be allowed by law, and instead consider that a diminution of works created by human is commerce, from journalism to essays to novels, is not a clear positive.”
  • (*) Annemarie Bridy, Coding Creativity: Copyright and the Artificially Intelligent Author, 2012 Stan. Tech. L. Rev. 5 (2012):
    • “By engaging and extending insights from two relatively discrete lines of existing scholarship—the postmodern critique of romantic authorship and the more pragmatic literature on copyright in works produced with the aid of computers—this Article seeks to answer the vexing copyright questions that attend the artificially intelligent production of cultural works. It does so by developing the argument that all creativity is inherently algorithmic and that works produced autonomously by computers are therefore less heterogeneous to both their human counterparts and existing copyright doctrine than appearances may at first suggest.”
  • P. Bernt Hugenholtz . João Pedro Quintais, Copyright and Artificial Creation: Does EU Copyright Law Protect AI-Assisted Output?, 52 IIC 1190 (2021) .
    • “[F]our interrelated criteria are to be met for AI-assisted output to qualify as a protected “work”: the output is (1) in relation to “production in the literary, scientific or artistic domain”; (2) the product of human intellectual effort; and (3) the result of creative choices that are (4) “expressed” in the output. Whether the first step is established EU law is uncertain. Since most AI artefacts belong to the “literary, scientific or artistic domain” anyway, and are the result of at least some “human intellectual effort”, however remote, in practice the focus of the copyright analysis is on steps 3 and 4.  […]
      “Authorship status will be accorded to the person or persons that have creatively contributed to the output. In most cases this will be the user of the AI system, not the AI system developer, unless the developer and user collaborate on a specific AI production, in which case there will be co-authorship. If “off-the-shelf” AI systems are used to create content, co-authorship claims by AI developers will also be unlikely for merely commercial reasons, since AI developers will normally not want to burden customers with downstream copyright claims. We therefore expect this issue to be clarified in the contractual terms of service of providers of such systems.
      “In conclusion, we believe that the EU copyright framework is generally suitable and sufficiently flexible to deal with the current challenges posed by AI-assisted creation […]”
  • (*) Ryan Benjamin Abbott & Elizabeth Shubov, The Revolution Has Arrived: AI Authorship and Copyright Law (August 8, 2022).
    • “Before the end of the decade, a significant amount of art, literature, music, software, and web content will likely be created by AI rather than traditional human authors. Yet the law, as it has so often historically, lags this technological evolution by prohibiting copyright protection for AI-generated works. The predominant narrative holds that even if AI can automate creativity, that this activity is not the right sort of thing to protect, and that protection could even harm human artists.
      “AI-generated works challenge beliefs about human exceptionalism and the normative foundations of copyright law, which until now have offered something for everyone. Copyright can be about ethics and authors and protecting the sweat of a brow and personality rights. Copyright can also be about money and the public interest and offering incentives to create and disseminate content. But copyright cannot have it all with AI authors—there is valuable output being generated, but by authors with no interests to protect. This forces a reevaluation of copyright law’s history and purpose.
      “This article argues that American copyright law is, and has been traditionally, primarily about benefiting the public interest rather than benefiting authors directly. As a result, AI-generated works are precisely the sort of thing the system was designed to protect.”
  • Brent Moran, Brigitte Vézina, Creative Commons, Artificial Intelligence and Creativity: Why We’re Against Copyright Protection for AI-Generated Output (Aug. 10, 2020).
  • Sofia Moutinho, Kinky and absurd: The first AI-written play isn’t Shakespeare—but it has its moments, Science (Feb. 26, 2021). [IMHO, it’s mostly awful.]
  • Paul Sawers, Chinese court rules AI-written article is protected by copyright, VentureBeat (Jan. 10, 2020)
  • Chapter 12 of National Security Commission on Artificial Intelligence, Final Report (2021) gives an account of the strategic importance of IP. If you are interested in export control — a sort of IP issue — then you also might look at Chapter 14, but in either case keep in mind that this whole report is basically alarmist about Chinese efforts to “win” the AI race, probably because this is how you make a case to Congress for lots more budget….
  • Jaclyn Peiser, The Rise of the Robot Reporter, NY Times (Feb. 2, 2019).

Patent

  • Dueling visions of the future of patent law:
    1. (*)Ryan Abbott, Everything is Obvious, 66 UCLA L Rev. 2 (2018). The critical parts for us are II.C-E and III.C-F. If you have no patent background, however, you will need to read Part I to understand those section.
      1. Abbott argues that ML will make everything obvious, at least in a legal sense as the term is used in patent law. And thus, he claims we face the “end of patents.”
    2. (*) Dan L. Burk, AI Patents and the Self-Assembling Machine, 105 Minn. L. Rev. Headnotes 301 (2021).
      1. Burk replies to Abbott by saying that, “… commentary surrounding AI systems partakes of what has been called the ‘magical worldview,’ a somewhat breathless and overwrought perspective on technology that assumes AI systems are somehow transcendent or miraculously unprecedented in their qualities or applications. In the patent context, such hyperbole manifests itself in assertions that these technologies upend the patent system, defy long-established patent doctrines, or portend the end of innovation as we know it.” (citing to Abbott)
  • An early legal answer:  Thaler v Vidal, 43 F.4th 1207 (Fed. Cir. 2022).
  • Desmond Oriakhogba, Dabus Gains Territory in South Africa and Australia: Revisiting the AI-Inventorship Question, 9 South African Journal of Intellectual Property Law 87 (2021).
  • USPTO, Public Views on Artificial Intelligence and Intellectual Property Policy (Oct. 2020).

Notes & Questions

  1. The Bridy article (in the optional reading and well summarized by Craig/Kerr in their article in the previous class) was perhaps the first article to ask some tough questions about machine copyright. It suggests that the differences between “works produced autonomously by computers” are less different from human-generated works than they seem, and that the legal treatment of machined works need not be that different either. That said, she does see some issues on which you might form a view:
    1. Whether the constitutional definition of “author” in the copyright clause (U.S. Const. Art. I § 8) imposes a limit on copyrightable subject matter …. that might exclude AI “authors”?
    2. Whether even if the constitution allows Congress to give copyright to AI authors, we can fairly read existing copyright law (pre-AI!) to have done so?  (Cf. Subject Matter and Scope of Copyright and Copyright Ownership and Transfer.)
    3. Who owns the stuff AI’s ‘create’?
  2. To what extent is Craig/Kerr’s disagreement with Bridy consistent with the literary criticism of the ’70s onward that alleged the “death of the author” as first celebrated by Roland Barthes in a (in)famous 1967 article.
    1. Barthes claimed, very influentially, that traditional literary criticism’s practice of incorporating the intentions and biographical context of an author in an interpretation of a text was an error. He argued that writing and creator are unrelated, that the interpretation of texts depends crucially on what the reader brings to it. (Optional: Wikipedia article summarizes Barthes’s essay and its context fairly well.)
    2. Craig/Kerr dispute the Bridy assertion that AI “authorship” could be a legal category. Why do they disagree?
  3. William Gass riffed on Barthes in 188/189 Salmagundi 514, 534-535, (Fall 2015 – Winter 2016), saying that,
    1. “Every author has an identity, but masterpieces are written by the human mind, not by human nature, which only lends them their common smell and color, their day to day dust. The implication is that readers differ in the same way. A masterpiece can be read as if it were by James Michener (it is the principal way Proust suffers from his society- swallowing fans), but the works of the human mind are really addressed to other human minds. That ineffable persona which a poem implies will be “the human mind” if the poem achieves greatness; however, the fatuous little New Yorker story will fasten the reader to a rock in Westchester and leave him to be eaten up by trademarks and localisms and proper names. It will flatter his determinate and causal-his chemical-self. The anonymity which the superb poem or fiction presumably possesses, according to some theories, may consequently be a kind of spiritual consanguinity.”Because we borrow, beg, buy, steal, or copy texts; because texts enter our eyes but remain in the blood; because we are, as authors, plagiarists and paraphrasers and brain pickers and mocking birds; because of these and other like habits we are, in effect, translating texts from one time to another and one context to another and one language to another. If, instead of repeating, “have a nice day,” we suggest to strangers that they “lead a good life,” we have simply rearranged a slightly different little cluster of cliches. But all that any author works with, in the beginning, is given her by one corrupted magi after another: the language, the life she leads, the literary tradition, schools she attends, the books she reads, the studies she has undertaken; so she cannot lay claim to some syntax or vocabulary, climate of ideas or custom of entertaining, as hers, and hers alone, and therefore as special as she is. Neither is that inadequacy she is supposed to feel at the close of her office hours the feeling of a freak. Of course, all of this wisdom and experience, this shit and sublimity, is shared, and of course a great deal of what we all do and think and feel and write is no more uniquely had or imagined than the balloon of the bagger; the stream of life is rarely more than a pisser’s trickle; and literally millions of sentences are penned or typed or spoken every day which have only a source-a spigot or a signboard-and not an author[]; they have never been near a self which is so certain of its spirit and so insistent on its presence that it puts itself in its syllables….”
    2. If we agree with Gass, what if anything does that say about AI authorship?
  4. What do Craig/Kerr think an “author” is?  How do “authors” author?
  5. Another very influential literary theory, at least in the ’80s and ’90s, was Harold Bloom’s Anxiety of Influence.  I asked Consensus to summarize its main arguments, and (if memory serves) it did a pretty good job:
    1. Harold Bloom’s “The Anxiety of Influence: A Theory of Poetry” (1973) is a seminal work that explores the complex relationships between poets across generations. Bloom’s main arguments center on the idea that poets are influenced by their predecessors, yet they must overcome this influence to develop their own unique voice and style. Here are the key points of Bloom’s theory:1. **Poetic Influence and Misreading**: Bloom suggests that all poets are influenced by previous poets. However, this influence is not simply a matter of homage or continuation. Instead, poets often engage in a process of misreading or misinterpretation of their predecessors’ work, which allows them to carve out a space for their own creative voice. This misreading is not a failure but a necessary step in the creative process.2. **Oedipal Struggle**: Bloom employs a Freudian framework, particularly the Oedipus complex, to describe the relationship between poets. He argues that younger poets experience an “anxiety of influence” as they struggle to assert their independence from their poetic forebears, akin to a child’s desire to surpass their parents. This struggle is both a conflict and a driving force for creativity.3. **Six Revisionary Ratios**: Bloom outlines six ways (“ratios”) through which poets might engage with and distance themselves from their predecessors. These include Clinamen (swerve), Tessera (completion and antithesis), Kenosis (a deliberate emptying out of one’s own style to make room for influence), Daemonization (a distortion of the precursor’s spirit), Askesis (a renunciation of the precursor’s poetics for the sake of one’s own poetry), and Apophrades (the return of the precursor’s influence in such a way that the precursor is overshadowed).4. **The Strength of the Poet**: Bloom posits that the strength of a poet is measured by their ability to overcome the anxiety of influence through creative misinterpretation and innovation. The most significant poets are those who can most effectively transcend the influence of their predecessors, establishing themselves as original voices within the literary tradition.5. **Critique of Historical Readings**: Bloom criticizes the traditional historical approach to literature, which places a heavy emphasis on context, sources, and influences in a linear or straightforward manner. He argues that such readings fail to capture the complex, often subconscious, ways in which poets engage with the works of their predecessors.6. **Poetry as a Struggle for Originality**: Ultimately, Bloom’s theory highlights the inherent tension in poetry (and art in general) between the pull of the past and the push toward new creation. The anxiety of influence frames poetry as a perpetual struggle for originality and self-definition in the shadow of great works that have come before.
  6.  If we think this is a correct description of what poets, or at least great poets, do, does this tell us anything about AI authorship? Can it be a great poet?
  7. How does the decision in the “Second Request for Reconsideration for Refusal to Register A Recent Entrance to Paradise” (and if you read it, Thaler v. Vidal) map onto the theoretical visions in the academic articles above?

Class 14: AI in Administration

  1. Pages 1155-76 & 1213-22 (i.e Parts I & III) of Cary Conglianese & David Lehr, Regulating by Robot: Administrative Decision Making in the Machine-Learning Era, 105 Geo. L.J. 1147 (2017).
  2. Introduction & Part III of Deirdre K. Mulligan & Kenneth A. Bamberger, Procurement As Policy: Administrative Process for Machine Learning, 34 Berk. Tech. L.J. 773 (2019).
  3. Parts II. A & B of Aziz Z. Huq, Constitutional Rights in the Machine-Learning State, 105 Cornell L. Rev. 1875 (2020).
  4. Pages 1305-1313  from Danielle Keats Citron, Technological Due Process, 85 Wash. U.L. Rev. 1249 (2008)

Optional Readings

Procurement and other Executive Functions

  • David Freeman Engstrom et al, Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies (Based on a report submitted to the Administrative Conference of the United States, Feb. 18, 2020). Sets the scene very well before you read any of the following.
  • Cary Coglianese & Alicia Lai, Algorithm vs. Algorithm (Feb. 2, 2022):
    • “Part I of this Article begins by offering a counterweight to the criticisms of machine-learning algorithms that tend to dominate legal scholarship. It details the well-documented physical limitations and cognitive biases that afflict individual decision-making by human algorithms, along with additional problems that can arise when humans make collective decisions. Part II then focuses on machine learning and its promise for improving decision-making….Part III thus presents general considerations to help guide public officials seeking to make sound choices about when and how to use digital algorithms. In addition to focusing officials’ attention on the extent to which a shift to digital algorithms will improve upon the status quo, we emphasize in Part III the need to consider whether a new use of digital algorithms would likely satisfy key preconditions for successful deployment of machine learning and whether a system driven by digital algorithms would actually deliver better outcomes…”
  • (*) Felix Morman, Beyond Algorithms Toward a Normative Theory of Automated Regulation, 62 B.C. L. Rev. 1 (2021):
    • “[A]utomated, non-human governance has long been a reality. For more than a century, policymakers have relied on regulations that automatically adjust to changing circumstances, without the need for human intervention. This Article surveys the track record of self-adjusting governance mechanisms to propose a normative theory of automated regulation. … Self-adjusting regulation offers an insurance policy against the well-documented inaccuracies ofeven the most expert forecasts, reducing the need for costly and time-consuming administrative proceedings. Careful analysis of empirical evidence, existing literature, and precedent reveals that the benefits of regulatory automation extend well beyond mitigating regulatory inertia. … Today’s automated regulations rely on relatively simplistic algebra, afar cry from the multivariate calculus behind smart algorithms. Harnessing the advanced mathematics and greater predictive powers of artificial intelligence could provide a significant upgrade for the next generation of automated regulation. Any gains in mathematical sophistication, however, will likely come at a cost if the widespread scholarly skepticism toward algorithmic governance is any indication of future backlash and litigation. Policy-makers should consider carefully whether their objectives may be served as well, if not better, through more simplistic, but well-established methods of regulatory automation.”
  • Cary Coglianese & Lavi M. Ben Dor, AI in Adjudication and Administration, 86 Brook. L. Rev. 791 (2021). “This status report on current uses of algorithmic tools can serve as a benchmark against which to gauge future growth in the use of artificial intelligence in the public sector.”
  • (*) Joshua D. Blank & Leigh Osofsky, Automated Legal Guidance, 106 Cornell L. Rev. 179 (2020). Of special interest to tax and adlaw geeks:
    “This Article offers one of the first critiques of these new systems of artificial intelligence. It shows that automated legal guidance currently relies upon the concept of “simplexity,” whereby complex law is presented as though it is simple, without actually engaging in simplification of the underlying law. While this approach offers potential gains in terms of efficiency and ease of use, it also causes the government to present the law as simpler than it is, leading to less precise advice and potentially inaccurate legal positions. Using the Interactive Tax Assistant as a case study, the Article shows that the use of simplexity in automated legal guidance is more powerful and pervasive than in static publications because it is personalized, non-qualified, and instantaneous. Further, it argues that understanding the costs as well as the benefits of current forms of automated legal guidance is essential to evaluating even more sophisticated, but also more opaque, automated systems that governments are likely to adopt in the future.”
  • (*) Ryan Calo & Danielle Keats Citron, The Automated Administrative State: A Crisis of Legitimacy, 70 Emory L.J. 797 (2021):
    “The legitimacy of the administrative state is premised on our faith in agency expertise. Despite their extra-constitutional structure, administrative agencies have been on firm footing for a long time in reverence to their critical role in governing a complex, evolving society. They are delegated enormous power because they respond expertly and nimbly to evolving conditions. In recent decades, state and federal agencies have embraced a novel mode of operation: automation. Agencies rely more and more on software and algorithms in carrying out their delegated responsibilities. The automated administrative state, however, is demonstrably riddled with concerns. Legal challenges regarding the denial of benefits and rights — from travel to disability — have revealed a pernicious pattern of bizarre and unintelligible outcomes. Scholarship to date has explored the pitfalls of automation with a particular frame, asking how we might ensure that automation honors existing legal commitments such as due process. Missing from the conversation are broader, structural critiques of the legitimacy of agencies that automate. Automation throws away the expertise and nimbleness that justify the administrative state, undermining the very case for the existence and authority of agencies. Yet the answer is not to deny agencies access to technology. This article points toward a positive vision of the administrative state that adopts tools only when they enhance, rather than undermine, the underpinnings of agency legitimacy.”

AI and Courts and People

  • (*) Sofia Ranchordás, Empathy in the Digital Administrative State, 71 Duke L.J. 1341 (2022).
    • “Humans make mistakes. Humans make mistakes especially while filling out tax returns, benefit applications, and other government forms, which are often tainted with complex language, requirements, and short deadlines. However, the unique human feature of forgiving these mistakes is disappearing with the digitalization of government services and the automation of government decision-making. While the role of empathy has long been controversial in law, empathic measures have helped public authorities balance administrative values with citizens’ needs and deliver fair and legitimate decisions. The empathy of public servants has been particularly important for vulnerable citizens (for example, disabled individuals, seniors, and underrepresented minorities). When empathy is threatened in the digital administrative state, vulnerable citizens are at risk of not being able to exercise their rights because they cannot engage with digital bureaucracy.”This Article argues that empathy, which in this context is the ability to relate to others and understand a situation from multiple perspectives, is a key value of administrative law deserving of legal protection in the digital administrative state. Empathy can contribute to the advancement of procedural due process, the promotion of equal treatment, and the legitimacy of automation. The concept of administrative empathy does not aim to create arrays of exceptions, nor imbue law with emotions and individualized justice. Instead, this concept suggests avenues for humanizing digital government and automated decision-making through a more complete understanding of citizens’ needs. This Article explores the role of empathy in the digital administrative state at two levels: First, it argues that empathy can be a partial response to some of the shortcomings of digital bureaucracy. At this level, administrative empathy acknowledges that citizens have different skills and needs, and this requires the redesign of pre-filled application forms, government platforms, algorithms, as well as assistance. Second, empathy should also operate ex post as a humanizing measure which can help ensure that administrative mistakes made in good faith can be forgiven under limited circumstances, and vulnerable individuals are given second chances to exercise their rights.”Drawing on comparative examples of empathic measures employed in the United States, the Netherlands, Estonia, and France, this Article’s contribution is twofold: first, it offers an interdisciplinary reflection on the role of empathy in administrative law and public administration for the digital age, and second, it operationalizes the concept of administrative empathy. These goals combine to advance the position of vulnerable citizens in the administrative state.”
  • For a somewhat contrasting view to Ranchordas, see Benjamin Minhao Chen, Alexander Stremitzer, and Kevin Tobia, Having Your Day in Robot Court (Oct. 17, 2021):
    • “To have your “day in court” is to have a human hear and adjudicate your claims. Two original experiments assess whether laypeople share this intuition. We discover that laypeople do, in fact, see human judges as fairer than artificially intelligent (“AI”) robot judges: All else equal, there is a perceived human-AI “fairness gap.” However, it is also possible to eliminate the fairness gap. The perceived advantage of human judges over AI judges is related to perceptions of accuracy and comprehensiveness of the decision, rather than “softer” and more distinctively human factors. Moreover, the study reveals that laypeople are amenable to “algorithm offsetting.” Adding an AI hearing and increasing the AI interpretability reduces the perceived humanAI fairness gap. Ultimately, the results support a common challenge to robot judges: there is a concerning human-AI fairness gap. Yet, the results also indicate that the strongest version of this challenge — human judges have inimitable procedural fairness advantages — is not reflected in the views of laypeople. In some circumstances, people see a day in robot court as no less fair than day in human court”
  • Ngozi Okidegbe, The Democratizing Potential of Algorithms?, 53 Conn. L. Rev – (forthcoming 2021). Note that the “?” does a lot of work in this title…
  • Richard M. Re & Alicia Solow-Niederman, Developing Artificially Intelligent Justice, 22 Stan. Tech. L. Rev. 242 (2019).
  • (*)Ryan Copus, Statistical Precedent: Allocating Judicial Attention, 74 Vanderbilt Law Review 605 (2020):
    • “This Article explains and illustrates how courts can leverage advances in artificial intelligence to more fairly and effectively allocate attention. A machine-generated mapping of a court’s historical decision patterns—what I term “statistical precedent”—can help a circuit court locate the district court, agency, staff attorney, law clerk, and panel decisions that are most incompatible with the court’s collective jurisprudence. Statistical precedent can also aid the court in identifying areas of law that are most in need of development. With the ability to locate likely errors and opportunities for law development, the circuit courts could distribute attention so as to revitalize their contribution to the rule of law.”

Public Law Review of AI Action More Generally (or Vice-Versa…)

  • (*)  Rónán Kennedy, Rules as code and the rule of law: ensuring effective judicial review of administration by software, Law, Innovation & Technology (Feb 09, 2024):
    • This paper considers the possible benefits and substantial risks of ‘Rules as Code’, the parallel drafting of legislation and codification in software, which has been the subject of attention from policy-makers and pilot studies in some jurisdictions. It highlights the long history of these approaches, and the challenges of ossification, mis-translation of rules, and separation of powers problems. It also examines in the detail the Australian Pintarich case, which demonstrates the inadequacy of conventional judicial review of automated decision-making. It outlines some possible solutions to these issues — two ‘internal’ to development processes (greater transparency, and literate pair programming) and two ‘external’ (expanding the capacity of judicial review to look beyond a specific citizen/state interaction and consider the design and development of the controlling software system, and greater cross-disciplinary awareness by lawyers).
  • (*) Catherine Sharkey, AI for Retrospective Review, 8 Belmont L. Rev. Art. 3 (2021):
    • “This Article [explores] the significant administrative law issues that agencies will face as they devise and implement AI-enhanced strategies to identify rules that should be subject to retrospective review. Part I introduces the effect of politics on retrospective review by canvassing both the consistencies and differing emphases of the relevant executive orders across the Obama and Trump Administrations. The HHS pilot is then presented as an innovative case study in its own right that also frames some generalizable salient administrative law design and oversight issues. In addition to promulgating the first rule using AI technologies, HHS has historically provided robust descriptions of its approach to identifying regulations for retrospective review. HHS, moreover, has put itself forward as the leading federal agency for “regulatory reform.”
      Part II sheds light on both the peril and future promise of the deployment of AI in the retrospective review process …*)
  • Margot E. Kaminski & Jennifer M. Urban, The Right to Contest AI, 121 Columb. L. Rev 1957 (2021):
    • “In the United States, recent proposals for regulating AI focus largely on ex ante and systemic governance. This Article argues instead—or really, in addition—for an individual right to contest AI decisions, modeled on due process but adapted for the digital age. The European Union, in fact, recognizes such a right, and a growing number of institutions around the world now call for its establishment. This Article argues that despite considerable differences between the United States and other countries,establishing the right to contest AI decisions here would be in keeping with a long tradition of due process theory.
      “This Article then fills a gap in the literature, establishing a theoretical scaffolding for discussing what a right to contest should look like in practice. This Article establishes four contestation archetypes that should serve as the bases of discussions of contestation both for the right to contest AI and in other policy contexts. The contestation archetypes vary along two axes: from contestation rules to standards and from emphasizing procedure to establishing substantive rights. This Article then discusses four processes that illustrate these archetypes in practice, including the first in depth consideration of the GDPR’s right to contestation for a U.S. audience. Finally, this Article integrates findings from these investigations to develop normative and practical guidance for establishing a right to contest AI.”
  • Sonya Katel, Democracy and Distrust in an Era of Artificial Intelligence, — Daedalus — (May, 2022):
    • “Our legal system has historically operated under the general view that courts should defer to the legislature. There is one significant exception to this view: cases in which it appears that the political process has failed to recognize the rights or interests of minorities. This basic approach provides much of the foundational justifications for the role of judicial review in protecting minorities from discrimination by the legislature. Today, the rise of AI decision-making poses a similar challenge to democracy’s basic framework. As I argue in this essay, the rise of three trends–privatization, prediction, and automation in AI–have combined to pose similar risks to minorities. In this essay, I outline what a theory of judicial review would look like in an era of artificial intelligence, analyzing both the limitations and the possibilities of judicial review of AI. Here, I draw on cases in which AI decision-making has been challenged in courts, to show how concepts of due process and equal protection can be recuperated in a modern AI era, and even integrated into AI, to provide for better oversight and accountability. “
  • Hannah Bloch-Wehba, Access to Algorithms, 88 Ford.ham L. Rev. 1265 (2020).
  • Danielle Keats Citron, Technological Due Process, 85 Wash. U.L. Rev. 1249 (2008).

Notes & Questions

  1. One of the attractions of using ML systems is that they might bring efficiency gains. Do these gains seem equally likely and attractive in the public and private sectors?
  2. Are there government services that could be too efficient?
  3. Is the case for human-in-the-loop any different in public than in private sector applications?
  4. If and when the government adopts more ML into its decision-making, what changes if any might we expect in the legal rules that permit review of those decisions?
    1. To the extent that those changes include a reduction in the human ability to modify outcomes based on some personal theory of ‘equity’, would that be a gain or a loss?
    2. To the extent that we might hope the ML can consider more factors than a human, would that be a gain or a loss?
      1. To what extent does your answer to the above depend on the ML not being a ‘black box’?
      2. Are not people doing ‘equity’ something of a black box too?
  5. Conglianese & Lehr present a fairly optimistic picture of the reception of ML systems into government decision-making. What might go wrong?
    1. FYI, Re and Solow-Niederman in the optional reading are considerably more pessimistic.
  6. The core of Mulligan & Bamberger’s argument is that historically agencies have been subject to (the threat of) fairly detailed review when they make decisions that effect people. But, other than people who may have had their offer to supply goods or services rejected, most people don’t have a right to go to court when an agency makes a procurement decision (buys stuff). The problem they are concerned with is that agencies will “procure” ML systems with decisions (biases?) built-in, and then deploy them. Because they are black-box-like it will be very difficult for persons harmed by the systems’ decisions to prove they were unfairly treated, so traditional meaningful review of decisions will be strangled.
    1. The decision-making record will be sparse to non-existent. Figuring out reasons for a decision will be hard to impossible.
    2. The agency humans may not even have, or feel they have, discretion to overrule the ML system’s “recommendation”.
  7. If the law imposes a duty on the agency to engage in “reasoned deliberation” will relying on an ML system’s recommendation suffice to meet this legal duty?
  8. If agencies have a duty of transparency in decision-making, does that require them to make training data and tuning choices during ML design available? What if the companies providing the ML system say that those are trade secrets?
  9. What is the public role in the adoption of ML systems? How do we replicate the virtues of notice & comment rulemaking when the real decision-making is in the ML system?
  10. Huq identifies a number of ways ML can go wrong, most of which should be familiar by now. He seems to think that constitutional litigation can expose and correct many of these flaws.
    1. How does he envision such lawsuits working?
    2. What legal claims might plaintiffs make?
      1. Of the claims Huq discusses, which if any seem most realistic?
      2. What sort of information and expertise will be required to prevail?
      3. What incentives would plaintiffs have to bring cases especially if damages (as opposed to injunctions) are rare?
    3. Are there changes in constitutional doctrine we should consider to facilitate valid suits for bad ML implementations in government decision-making?
  11. Ranchordás [in the optional reading] asserts that “that empathy, which in this context is the ability to relate to others and understand a situation from multiple perspectives, is a key value of administrative law deserving of legal protection in the digital administrative state. ”
    1. What about the argument that this is entirely backwards?  Rather than ‘excuse’ errors when triers think a person has limited “skills” or are “vulnerable” shouldn’t we aim for equal – dare one say mechanical? – justice for all, in which like cases are treated alike?
      1. Ah, but what makes cases “alike”?
      2. How would we decide fairly who is entitled to second chances and who is not?  And if we can express that in some fashion, can’t we mechanize that too?
    2. Some theories of due process stress the importance of ‘dignity values’ part of which is emerging from adjudication, win or lose, with a sense of being heard.
      1. Would ML justice tend to enhance or subvert this property?
      2. Suppose people came to see ML justice as fair, but a bit brutal– how would that play out?

Class 15: AI & Privacy

Some background in how tech undermines privacy

  1. Charles Duhigg, How Companies Learn Your Secrets, NY Times (Feb. 16, 2012).
  2. Pages 1740-45 of Paul Ohm, Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization, 57 UCLA L. Rev. 1701 (2010).

AI specifically

  1. Bruce Schneier, AI and Mass Spying, Schneier on Security (Dec. 8, 2023).  (Original version here.)
  2. Uri Gal, ChatGPT is a data privacy nightmare, and we ought to be concerned, The Coversation (Feb. 8 2023). [Update: corrected version here.]
  3. Pages 4-5, 11-12, 15-29 of Datatilsynet, Norwegian Data Protection Authority, Artificial Intelligence and Privacy (Jan, 2018).
  4. Ian Kerr, Schrodinger’s Robot: Privacy in Uncertain States, 20 Theoretical Inquiries in Law (2019).
  5. Parts II & IV (i.e. pages 116-33 & 160-185) of Karl Manheim & Lyric Kaplan, Artificial Intelligence: Risks to Privacy and Democracy, 21 Yale J. L. & Tech. 106 (2019).

Optional

  • (*) Hideyuki Matsumi & Daniel J. Solove, The Prediction Society: Algorithms and the Problems of Forecasting the Future (Draft July 30 , 2023):
    • Predictions about the future have been made since the earliest days of humankind, but today, we are living in a brave new world of prediction. Today’s predictions are produced by machine learning algorithms that analyze massive quantities of personal data. Increasingly, important decisions about people are being made based on these predictions.Algorithmic predictions are a type of inference. Many laws struggle to account for inferences, and even when they do, the laws lump all inferences together. But as we argue in this Article, predictions are different from other inferences. Predictions raise several unique problems that current law is ill-suited to address. First, algorithmic predictions create a fossilization problem because they reinforce patterns in past data and can further solidify bias and inequality from the past. Second, algorithmic predictions often raise an unfalsifiability problem. Predictions involve an assertion about future events. Until these events happen, predictions remain unverifiable, resulting in an inability for individuals to challenge them as false. Third, algorithmic predictions can involve a preemptive intervention problem, where decisions or interventions render it impossible to determine whether the predictions would have come true. Fourth, algorithmic predictions can lead to a self-fulfilling prophecy problem where they actively shape the future they aim to forecast.More broadly, the rise of algorithmic predictions raises an overarching concern: Algorithmic predictions not only forecast the future but also have the power to create and control it. The increasing pervasiveness of decisions based on algorithmic predictions is leading to a prediction society where individuals’ ability to author their own future is diminished while the organizations developing and using predictive systems are gaining greater power to shape the future.Privacy and data protection law do not adequately address algorithmic predictions. Many laws lack a temporal dimension and do not distinguish between predictions about the future and inferences about the past or present. Predictions about the future involve considerations that are not implicated by other types of inferences. Many laws provide correction rights and duties of accuracy that are insufficient to address problems arising from predictions, which exist in the twilight between truth and falsehood. Individual rights and anti-discrimination law also are unable to address the unique problems with algorithmic predictions.We argue that the use of algorithmic predictions is a distinct issue warranting different treatment from other types of inference. We examine the issues laws must consider when addressing the problems of algorithmic predictions.
  • (*) Richard Warner & Robert H. Sloan, How AI Unfairly Tilts the Playing Field: Privacy, Fairness, and Risk Shadows (July 22, 2023).
    • Private sector applications of artificial intelligence (AI) raise related questions of informational privacy and fairness. Fairness requires that market competition occurs on a level playing field, and uses of AI unfairly tilt the field. Informational privacy concerns arise because AI tilts the playing field by taking information about activities in one area of one’s life and using it in ways that impose novel risks in areas not formerly associated with such risks. The loss of control over that information constitutes a loss of informational privacy. To illustrate both the fairness and privacy issues, imagine, for example, that Sally declares bankruptcy after defaulting on $50,000 of credit card debt. She incurred the debt by paying for lifesaving medical treatment for her eight-year-old daughter. Post-bankruptcy Sally is a good credit risk. Her daughter has recovered, and her sole-proprietor business is seeing increased sales. Given her bankruptcy, however, an AI credit scoring system predicts that she is a poor risk and assigns her a low score. That low credit score casts a shadow that falls on her when her auto insurance company, which uses credit scores in its AI system as a measure of the propensity to take risks, raises her premium. Is it fair that saving her daughter’s life should carry with it the risk—realized in this case—of a higher premium? The pattern is not confined to credit ratings and insurance premiums. AI routinely creates risk shadows.
      “We address fairness questions in two steps. First, we turn to philosophical theories of fairness as equality of opportunity to spell out the content behind our metaphor of tilting the playing field. Second, we address the question of how, when confronted with a mathematically complex AI system, one can tell whether the system meets requirements of fairness. We answer by formulating three conditions whose violation makes a system presumptively unfair. The conditions provide a lens that reveals relevant features when policy makers and regulators investigate complex systems. Our goal is not to resolve fairness issues but to contribute to the creation of a forum in which legal regulators and affected parties can work to resolve them. The third of our three condition requires that systems incorporate contextual information about individual consumers, and we conclude by raising the question of whether our suggested approach to fairness significantly reduces informational privacy. We do not answer the question but emphasize that fairness and informational privacy questions can closely intertwine.”
  • Solon Barocas, Karen Levy, and Alexandra Mateescu, Reap What You Sow? Precision Agriculture and The Privacy of Farm Data (preliminary draft, We Robot 2019):
    • This, alas unfished, article argues that farmers whose data is visible to neighbors, landlords, and others, become more vulnerable.
  • (*) Hannah Brown et al., What Does it Mean for a Language Model to Preserve Privacy?,  arXiv:2202.05520v2 [stat.ML] 14 Feb 2022 :
    • “Natural language reflects our private lives and identities, making its privacy concerns as broad as those of real life. Language models lack the ability to understand the context and sensitivity of text, and tend to memorize phrases present in their training sets. An adversary can exploit this tendency to extract training data. Depending on the nature of the content and the context in which this data was collected, this could violate expectations of privacy. Thus, there is a growing interest in techniques for training language models that preserve privacy. In this paper, we discuss the mismatch between the narrow assumptions made by popular data protection techniques (data sanitization and differential privacy), and the broadness of natural language and of privacy as a social norm. We argue that existing protection methods cannot guarantee a generic and meaningful notion of privacy for language models. We conclude that language models should be trained on text data which was explicitly produced for public use.”
  • (*) Mason Marks,Emergent Medical Data: Health Information Inferred by Artificial Intelligence, 11 U.C. Irvine L. Rev. 995 (Apr. 2021):
    • “Artificial intelligence (AI) can infer health data from people ’s behavior even when their behavior has no apparent connection to their health . AI can monitor one’s location to track the spread of infectious disease, scrutinize retail purchases to identify pregnant customers, and analyze social media to predict who might attempt suicide . These feats are possible because, in modern societies, people continuously interact with internet-enabled software and devices. Smartphones, wearables, and online platforms monitor people ’s actions and produce digital traces, the electronic remnants of their behavior.
      “In their raw form, digital traces might not be very interesting or useful; one ’s location, retail purchases, and internet browsing habits are relatively mundane data points . However, AI can enhance the value of digital traces by transforming them into something more useful—emergent medical data (EMD). EMD is health information inferred by arti ficial intelligence from otherwise trivial digital traces.
      “This Article describes how EMD-based profiling is increasingly promoted as a solution to public health crises such as the COVID -19 pandemic, gun violence, and the opioid crisis . However, there is little evidence to show that EMD -based profiling works. Even worse, it can cause significant harm, and current privacy and data protection laws contain loopholes that allow public and private entities to mi ne EMD without people’s knowledge or consent. After describing the risks and benefits of EMD mining and pro filing, the Article proposes six different ways of conceptualizing these practices . It concludes with preliminary recommendations for effective regulation. Potential options include banning or restricting the collection of digital traces, regulating EMD mining algorithms, and restricting how EMD can be used once it is produced .”
  • (*) Compare
    • Kevin Werbach, Orwell That Ends Well? Social Credit as Regulation for the Algorithmic Age, 2022 U. Ill. L. Rev 1418 (2022):
      • “China’s Social Credit System (SCS) is a massive government-led initiative to promote data-driven compliance with law and social values. In the West, it is widely viewed as an Orwellian effort to crush resistance to the Chinese Communist Party. Such a picture is, if not wholly incorrect, decidedly incomplete. This Article offers a revisionist account of SCS and its implications for liberal democracies. SCS is, in fact, the world’s most advanced prototype of a regime of algorithmic regulation. As such it could well represent our future. Or it could be a cautionary tale that sets the West on a different path.
        “SCS is a sophisticated effort not only to embrace the capabilities of data analytics, but to restrain their potential abuses. Its three distinctive components are: (1) data as infrastructure; (2) feedback mechanisms which leverage the data to promote outcomes; and (3) an integral oversight regime of guiding principles and legal restraints. The system is imperfect, sometimes intentionally so. Yet both its merits and flaws are instructive.
        “Incorporating lessons from China’s system is crucial for resolving the great legal challenges we face in the emerging era of digital platform power, relentless data aggregation, ubiquitous artificial intelligence, and algorithmic control.”
    • with Xin Dai, Toward a Reputation State: The Social Credit System Project of China (Mar. 10, 2021):
      • “The ongoing “reputation revolution,” as it refers to the widespread adoption of reputation tracking and analytics, is now seeing its frontier shifting from the “reputation society” to the “reputation state”: Government authorities, following the steps of private sector players, now increasingly seek to use reputation mechanisms and technologies in the spheres of law and governance. In 2014, China’s national government unveiled a massive policy plan for building a “social credit system” in China (hereinafter the “SCSP,” which stands for the “social credit system project”). In designing the SCSP, the Chinese government envisioned that reputation mechanisms such as blacklisting, rating, and scoring be used to tackle many of the country’s by far intractable governance and regulatory problems in its social and economic realms, ranging from fraudulent behaviors in the marketplace, to difficulties in enforcing court judgments, to corruption in the government, and to professional malpractices and even plagiarism in scholarship. Although Western governments have also explored using reputation technologies and mechanisms in governance and social control contexts, China’s SCSP, as a comprehensive, singularly framed, and actively implemented policy project, knows no equivalent elsewhere in the world, and thus offers a unique case for studying a rise of the reputation state as a broader trend.
        “This Article provides a systematic descriptive and analytical study on the SCSP. It draws on the general theoretical discussions on the reputation state to identify four key strategies, categorized in this Article as “regulation,” “searchlight,” “incorporation,” and “institutionalization,” which government actors may plausibly employ as they seek to use reputation for law and public administration. While Western experiences already illustrate ways in which these strategies may actually work in practical contexts, China’s SCSP takes the reputation state into the uncharted territory as it explores novel possibilities. Upon examining the major policy initiatives included, this Article finds that the SCSP overall represents the effort of China’s developmental state to tackle its persistent governance problems with new tools. While it has both raised high hopes and stoke grave fears, the SCSP, as this Article explains, has been fundamentally shaped and also limited by many of the institutional and market forces that animate it in the first place. Nonetheless, if future institutional arrangements and technological progress could align to overcome the present implementation challenges, the reputation state effected through the SCSP does have the potential to change law and government as we know them in China and beyond. Specifically, this Article offers initial thoughts on the SCSP’s plausible implications in three areas that are important but inadequately explored, relating in broad terms to government expansion and efficiency, optimal approach to enforcement, and intragovernmental agency control.”
  • Christopher Lutz & Aurelia Tamó-Larrieux, The robot privacy paradox: Understanding how privacy concerns shape intentions to use social robots, 1 Human-Machine Communication 87 (2020):
    • (“Respondents are most concerned about data protection on the manufacturer side, followed by social privacy concerns and physical concerns. Using structural equation modeling, we find a privacy paradox, where the perceived benefits of social robots override privacy concerns.”
  • Andrea Scripa Els, Artificial Intelligence As A Digital Privacy Protector, 31 Harv. J.L. & Tech. 217 (2017):
    • “…explores the ways in which AI and related computational methods might be used to enhance protections for personal privacy, either by mitigating AI’s own negative effects or by addressing privacy concerns caused by other trends. Part II outlines four significant ways in which AI might be used to bolster privacy protections, while Part III proposes strategies for reinforcing these technological solutions with legal measures or private conduct.”
  • Mellissa Hikkllä, What does GPT-3 “know” about me?, MIT Tech. Rev. (Aug. 31, 2022).
  • Cade Metz, Police Drones Are Starting to Think for Themselves, N.Y. Times (Dec. 5, 2020). Scary NYT Magazine article. “Drones are like iPhones … As soon as you get one, a new one arrives with new technology.”
  • (*) Mireille Hildebrandt, Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning, 20 Theoretical Inquiries in Law 83 (2019).
    • A leading scholar in law and philosopher argues that “in the era of big data analytics we need an understanding of privacy that is capable of protecting what is uncountable, incalculable or incomputable about individual persons. To instigate this new dimension of the right to privacy, I expand previous work on the relational nature of privacy, and the productive indeterminacy of human identity it implies, into an ecological understanding of privacy, taking into account the technological environment that mediates the constitution of human identity. … [The article] advocates a practice of ‘agonistic machine learning’ that will contribute to responsible decisions about the integration of data-driven applications into our environments while simultaneously bringing them under the Rule of Law. This should also provide the best means to achieve effective protection against overdetermination of individuals by machine inferences.”
  • Karen Hao, How to poison the data that Big Tech uses to surveil you, MIT Tech. Rev. (Mar. 5, 2021).
  • Michael Lyos, Excavating “Excavating AI”: The Elephant in the Gallery (Dec. 24, 2020).
    • “Our analysis underlines the non-negotiability of informed consent when using human data in artistic and other contexts, and clarifies issues relating to the description of ML training sets. “
  • Chapter 8 (pages 141-154), or National Security Commission on Artificial Intelligence, Final Report (2021).
    • The National Security Commission Report on AI, of which this chapter is a part, is mostly about how the US is going to have its lunch eaten and pocket picked by foreigners, probably the Chinese, stealing a march on the US in the AI “race”. To my jaundiced eye, it’s a document aimed at scaring Congress into appropriating a ton more money to the Pentagon for new programs. (And although most of it didn’t make it into the most recent budget, there is substantial spending going on (not to mention whatever is in the secret budget) and they are still trying for more.) That doesn’t, however, mean this chapter, or the report, is wrong.

Notes & Questions

  1. The Norwegian DPA document gives a gentle introduction to EU thinking about AI and privacy issues through the lens of the over-arching privacy rules, the General Data Protection Regulation (GDPR). More targeted rules about AI are in the works, but this is the existing and framework document. Note how these rules apply to problems we have encountered above, e.g.
    1. Algorithmic bias
    2. Black box problem and the data subject’s rights to be informed of the reasons why a decision is taken against him/her.
  2. Note also the glossary of useful buzzwords on pp. 26-27, especially
    1. Federated Learning
    2. Differential Privacy
    3. Transfer Learning
  3. Speaking of vocabulary, to understand philosophy papers such as Schrodinger’s Robot you will need a working definition of “epistemological” and “epistemic”:
    1. Epistemology can be defined as: the branch of philosophy concerned with the theory of knowledge. Epistemology is the study of the nature of knowledge, justification, and the rationality of belief. Much debate in epistemology centers on four areas: (1) the philosophical analysis of the nature of knowledge and how it relates to such concepts as truth, belief, and justification, (2) various problems of skepticism, (3) the sources and s cope of knowledge and justified belief, and (4) the criteria for knowledge and justification. Epistemology addresses such questions as: “What makes justified beliefs justified?”, “What does it mean to say Wikipedia?”
    2. Epistemic usually means something like “of or relating to knowledge” although it is also used increasingly to mean “cognitive” which is not the same thing.
  4. Kerr’s article starts with the premise that in law robots are a “nullity”.
    1. Does it then follow that if a machine reads my mail, but no human does, there was no ‘search’?
    2. If there wasn’t a legal ‘search’ what should we call what happened?
    3. Is privacy “implicated when only robot eyes are watching,” or does the privacy relationship ultimately require a human observer?
  5. Kerr suggests that “A right to privacy is then ‘a justified expectation of ignorance [on the part of other persons’] with respect to any personal facts that fall within the scope of the right'”.
    1. If an AI makes accurate guesses about you, based on data it can gather, does this reduce your privacy?
    2. If a thermometer makes an accurate reading of your temperature, does that reduce your privacy? How is this like or unlike the previous example?
  6. Kerr suggests that if a machine meets the “Brandon conditions” (see the article), then this has implications for how the law should think about qualifying machines.
    1. Why?
    2. What are those implications?
  7. The Manheim & Kaplan article gives a good, and very standard, account of legal sources of privacy, the reasons for privacy, and of technological threats to privacy. Can you list the sources of privacy law in the US?
    1. How do they suggest that people knowing true stuff about you can hurt you?
    2. Is the greater transparency of the individual–due to ML and to other tech–better understood as a public good (less scope for liars) or a public bad (less scope for privacy)?
    3. Consider the list of US privacy laws. How does they and their reach compare to the GDPR?

Class 16: AI & Lawyering (Part I: Law School & Legal Analysis)

  1. Part II through end of Jonathan H. Choi, Kristin E. Hickman, Amy B. Monahan, Daniel Schwarcz, ChatGPT Goes to Law School, 71 J. Legal Ed. 387 (2022) (revised Oct. 20, 2023).
  2. Pages 1-11 of Daniel Martin Katz, Michael James Bommarito, Shang Gao, and Pablo David Arredondo, GPT-4 Passes the Bar Exam (Apr. 5, 2023). To appreciate this paper, you will need to know what “zero-shot” learning is.  You can find an extensive discussion at IBM, What is Zero-Shot Learning, but the key part is this:

    Zero-shot learning (ZSL) is a machine learning scenario in which an AI model is trained to recognize and categorize objects or concepts without having seen any examples of those categories or concepts beforehand.

    Most state-of-the-art deep learning models for classification or regression are trained through supervised learning, which requires many labeled examples of relevant data classes. Models “learn” by making predictions on a labeled training dataset; data labels provide both the range of possible answers and the correct answers (or ground truth) for each training example. “Learning,” here, means adjusting model weights to minimize the difference between the model’s predictions and that ground truth. This process requires enough labeled samples for many rounds of training and updates.

    While powerful, supervised learning is impractical in some real-world scenarios. Annotating large amounts of data samples is costly and time-consuming, and in cases like rare diseases and newly discovered species, examples may be scarce or non-existent. Consider image recognition tasks: according to one study, humans can recognize approximately 30,000 individually distinguishable object categories.1 It’s not feasible, in terms of time, cost and computational resources, for artificial intelligence models to remotely approach human capabilities if they must be explicitly trained on labeled data for each class.

    The need for machine learning models to be able to generalize quickly to a large number of semantic categories with minimal training overhead has given rise to n-shot learning: a subset of machine learning that also includes few-shot learning  (FSL) and one-shot learning. Few-shot learning typically uses transfer learning and meta learning-based methods to train models to quickly recognize new classes with only a few labeled training examples—or, in one-shot learning, a single labeled example.

    Zero-shot learning, like all n-shot learning, refers not to any specific algorithm or neural network architecture, but to the nature of the learning problem itself: in ZSL, the model is not trained on any labeled examples of the unseen classes it is asked to make predictions on post-training.

    This problem setup doesn’t account for whether that class was present (albeit unlabeled) in training data. For example, some large language models (LLMs) are well-suited for ZSL tasks, as they are pre-trained through self-supervised learning on a massive corpus of text that may contain incidental references to or knowledge about unseen data classes. Without labeled examples to draw upon, ZSL methods all rely on the use of such auxiliary knowledge to make predictions.

  3. Pages 1 & 29-35 of Jonathan H. Choi &  Daniel Schwartz, AI Assistance in Legal Analysis-An Empirical Study (Aug. 13, 2023).  (Difficult but valuable optional reading: read the Appendix).
  4. Pages 1, 16-17, 35-42, and 46-51 of  Jonathan H. Choi, Amy B. Monahan & Daniel Schwarcz, Lawyering In The Age Of Artificial Intelligence,  __ Minn.. L. Rev. __ (forthcoming) (Feb 24, 2024 draft).
  5. Selections from Yonathan Arbel & David A. Hoffman, Generative Interpretation,  99 NYU L. REV. ___ (forthcoming, 2024) (Draft July 30, 2023).

Optional

  • (*) Christoph Engel & Richard H Mcadams, Asking GPT for the Ordinary Meaning of Statutory Terms (Feb., 2024):
    • “We report on our test of the Large Language Model (LLM) ChatGPT (GPT) as a tool for generating evidence of the ordinary meaning of statutory terms. We explain why the most useful evidence for interpretation involves a distribution of replies rather than only what GPT regards as the single “best” reply. That motivates our decision to use Chat 3.5 Turbo instead of Chat 4 and to run each prompt we use 100 times. Asking GPT whether the statutory term “vehicle” includes a list of candidate objects (e.g., bus, bicycle, skateboard) allows us to test it against a benchmark, the results of a high-quality experimental survey (Tobia 2000) that asked over 2,800 English speakers the same questions. After learning what prompts fail and which one works best (a belief prompt combined with a Likert scale reply), we use the successful prompt to test the effects of “informing” GPT that the term appears in a particular rule (one of five possible) or that the legal rule using the term has a particular purpose (one of six possible). Finally, we explore GPT’s sensitivity to meaning at a particular moment in the past (the 1950s) and its ability to distinguish extensional from intensional meaning. To our knowledge, these are the first tests of GPT as a tool for generating empirical data on the ordinary meaning of statutory terms. Legal actors have good reason to be cautious, but LLMs have the potential to radically facilitate and improve legal tasks, including the interpretation of statutes.”
  • (*) Elizabeth C. Tippett, Charlotte S. Alexander, Karl Branting, Paul Morawski, Carlos Balhana, Craig Pfeifer & Sam Bayer, Does Lawyering Matter? Predicting Judicial Decisions from Legal Briefs, and What That Means for Access to Justice, 100 Tex. L. Rev.  1157 (2022):
    • “This study uses linguistic analysis and machine-learning techniques to predict summary judgment outcomes from the text of the briefs filed by parties in a matter. We test the predictive power of textual characteristics, stylistic features, and citation usage, and we find that citations to precedent—their frequency, their patterns, and their popularity in other briefs—are the most predictive of a summary judgment win. This finding suggests that good lawyering may boil down to good legal research. However, good legal research is expensive, and the primacy of citations in our models raises concerns about access to justice. Here, our citation-based models also suggest promising solutions. We propose a freely available, computationally enabled citation identification and brief bank tool, which would extend to all litigants the benefits of good lawyering and open up access to justice.”
  • (Brief, but likely to lead to things…) Peter Henderson et al., Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset (July, 2022):
    • “One concern with the rise of large language models lies with their potential for significant harm, particularly from pretraining on biased, obscene, copyrighted, and private information. Emerging ethical approaches have attempted to filter pretraining material, but such approaches have been ad hoc and failed to take into account context. We offer an approach to filtering grounded in law, which has directly addressed the tradeoffs in filtering material. First, we gather and make available the Pile of Law, a 256GB (and growing) dataset of open-source English-language legal and administrative data, covering court opinions, contracts, administrative rules, and legislative records. Pretraining on the Pile of Law may potentially help with legal tasks that have the promise to improve access to justice. Second, we distill the legal norms that governments have developed to constrain the inclusion of toxic or private content into actionable lessons for researchers and discuss how our dataset reflects these norms. Third, we show how the Pile of Law offers researchers the opportunity to learn such filtering rules directly from the data, providing an exciting new research direction in model-based processing.”
    • Have a look at the associated Datset Card.
  • (Very Mathy, interesting example of novel use of ML) Han-wei Ho, Patrick Chung-Chia Huang, Yun-chien Chang, Machine-Learning Comparative Law, Cambridge Handbook of Comparative Law (Mathias Siems and Po Jen Yap eds, forthcoming 2023):
    • “Comparative lawyers are interested in similarities between legal systems. Artificial intelligence offers a new approach to understanding legal families. This chapter introduces machine-learning methods useful in empirical comparative law, a nascent field. This chapter provides a step-by-step guide to evaluating and developing legal family theories using machine-learning algorithms. We briefly survey existing empirical comparative law data sets, then demonstrate how to visually explore these using a data set one of us compiled. We introduce popular and powerful algorithms of service to comparative law scholars, including dissimilarity coefficients, dimension reduction, clustering, and classification. The unsupervised machine-learning method enables researchers to develop a legal family scheme without the interference from existing schemes developed by human intelligence, thus providing as a powerful tool to test comparative law theories. The supervised machine-learning method enables researchers to start with a baseline scheme (developed by human or artificial intelligence) and then extend it to previously unstudied jurisdictions.”
  • (*) John Nay, Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans, 20 Nw. J. Tech. & Info. Prop.  __ (forthcoming 2022):
    • “Law-making and legal interpretation form a computational engine that converts opaque human intentions, goals, and values into legible directives. Law Informs Code is the research agenda capturing complex computational legal processes, and embedding them in AI. Similar to how parties to a legal contract cannot foresee every potential “if-then” contingency of their future relationship, and legislators cannot predict all the circumstances under which their proposed bills will be applied, we cannot ex ante specify “if-then” rules that provably direct good AI behavior. Legal theory and practice have developed arrays of tools to address these specification problems. For instance, legal standards allow humans to develop shared understandings and adapt them to novel situations, i.e., to generalize expectations regarding actions taken to unspecified states of the world. In contrast to more prosaic uses of the law (e.g., as a deterrent of bad behavior through the threat of sanction), leveraged as an expression of how humans communicate their goals, and what society values, Law Informs Code.
      We describe how data generated by legal processes and the practices of law (methods of law-making, statutory interpretation, contract drafting, applications of standards, legal reasoning, etc.) can facilitate the robust specification of inherently vague human goals. This increases human-AI alignment and the local usefulness of AI. Toward society-AI alignment, we present a framework for understanding law as the applied philosophy of multi-agent alignment, harnessing public law as an up-to-date knowledge base of democratically endorsed values ascribed to state-action pairs. Although law is partly a reflection of historically contingent political power – and thus not a perfect aggregation of citizen preferences – if properly parsed, its distillation offers the most legitimate computational comprehension of societal values available. The other data sources suggested for AI alignment – surveys of preferences, humans labeling “ethical” situations, or (most commonly) the implicit beliefs of the AI system designers – lack an authoritative source of synthesized preference aggregation. Law is grounded in a verifiable resolution: ultimately obtained from a court opinion, but short of that, elicited from legal experts. If law eventually informs powerful AI, engaging in the deliberative political process to improve law takes on even more meaning.

Notes & Questions

  1. What should we conclude from the performance of GPT on law school exams and on the bar?
    1. If we conclude that this increasingly good performance is insuffient evidence that we can replace all the lawyers with machines is this an indicment of
      1. The experiments?
      2. Law school exams and/or the bar exam?
    2. If we conclude that this increasingly good performance is insuffient evidence that we can replace all the lawyers with machines, what would consistent of sufficient evidence?  What should we be testing for?
  2. The AI Assistance paper takes us back to the ‘centaur’ model in which people team with AI (or really, just use it as a tool).  Is this something firms should mandate? Discourage?  How do they bill it?
  3. Given the speed at which Chat based AI is evolving, how does a prudent law firm decide what tools to use, and what limits to put on it? In considering your answers, be sure to review the issues we’ve discussed in previous classes.
    1. The abstract for the AI Assistance paper says,
      “… with optimal prompting it outperformed both the average student and the average student with access to AI. This finding has important implications for the future of work, hinting that it may become advantageous to entirely remove humans from the loop for certain tasks.”?
    2. But the text of the paper says, “low-performing students received a substantial boost, while top-performing students may have been harmed by access to AI.”
    3. Can these be reconciled?
  4. If poor lawyers are helped more than good lawyers, is this enough to predict a “leveling effect”?  Will it increase access to legal services for the public?
  5. The Generative AI paper says we can rely on GenAI as an assistant for contract interpretation.  I asked GPT-4 and Claude-Opus the following question:
    “You are a careful lawyer who proceeds logically, gives reasons for his answers at every step, and accurately cites to case law as appropriate. Answer this question: as a matter of contract interpretation, which if any of the following is a “sandwich” : 1) avocado toast; 2) a hot dog; 3) a taco; 4) a cheeseburger; 5) an empanada?”
    What if anything should we conclude from the divergence between these answers?

    1. ChatGPT-4 said,

       To answer this question, we must first define what a “sandwich” is in a legal or contractual context. The term “sandwich” is generally understood to refer to a food item consisting of two or more slices of bread with one or more fillings between them. However, this definition can vary based on cultural, regional, and individual interpretations.1. **Avocado Toast**: Avocado toast typically consists of a slice of toasted bread topped with avocado and other optional ingredients. It does not have a second slice of bread on top, so under the traditional definition, it would not be considered a sandwich.

      2. **Hot Dog**: A hot dog consists of a sausage served in a sliced bun. The bun is typically not fully separated into two pieces, which complicates its classification. In a famous case, *White City Shopping Ctr., LP v. PR Restaurants, LLC*, 21 Mass. L. Rep. 565 (2006), a court ruled that a burrito is not a sandwich. While not directly about hot dogs, this case suggests that the definition of a sandwich may require two separate pieces of bread, which would exclude hot dogs.

      3. **Taco**: A taco is made with a single folded or rolled tortilla filled with various ingredients. Since it does not use slices of bread, it would not typically be considered a sandwich.

      4. **Cheeseburger**: A cheeseburger consists of a meat patty, cheese, and often other toppings, served between two slices of a bun. As it meets the criteria of having fillings between two pieces of bread, a cheeseburger would generally be considered a sandwich.

      5. **Empanada**: An empanada is a pastry that is filled with various ingredients and then baked or fried. Since it does not involve slices of bread, it would not be considered a sandwich under the traditional definition.

      In summary, based on the typical legal and contractual interpretations of a “sandwich,” only a cheeseburger would be considered a sandwich among the options listed. The other items do not meet the criteria of having fillings between two slices of bread.

    2.  ClaudeAI-Opus said,

      As a careful lawyer, I will proceed logically and provide reasons for my analysis, citing relevant case law where appropriate.

      To determine whether any of the listed items constitutes a “sandwich” under contract interpretation principles, we must first look to the ordinary meaning of the term “sandwich.” The common understanding of a sandwich is two or more slices of bread with a filling in between. This definition is supported by dictionary references, such as Merriam-Webster’s definition of a sandwich as “two or more slices of bread or a split roll having a filling in between.”

      With this baseline definition in mind, let us analyze each item:

      1) Avocado toast: This item does not meet the common definition of a sandwich, as it consists of a single piece of toast (not two or more slices of bread) with an avocado topping (not a filling in between slices of bread).

      2) A hot dog: Courts have generally held that a hot dog qualifies as a sandwich under the common understanding of the term. For example, in Rubio v. Starbucks Corp. (Cal. Ct. App. 2018), the court stated: “Although not comprised of slices of bread, the hot dog’s preparation – with a split roll and filling – reasonably qualifies it as a sandwich.”

      3) A taco: Like a hot dog, a taco utilizes a bread-like outer shell with a filling inside. However, most courts have found that a taco does not meet the common definition of a sandwich due to the distinct nature of the outer shell. See, e.g., Garcia v. Taco Bell Corp. (7th Cir. 2016) (“While exhibiting some sandwich-like qualities, the taco’s distinct open-ended taco shell precludes it from reasonably qualifying as a sandwich under the ordinary meaning of that term.”).

      4) A cheeseburger: Similar to a hot dog, a cheeseburger consists of two or more slices of bread (the bun) with a filling (patty and cheese) in between. Courts have consistently treated cheeseburgers as sandwiches under the common definition. See White Castle Sys., Inc. v. Inman (6th Cir. 1999).

      5) An empanada: An empanada is a pastry turnover with a filling, but it does not contain slices of bread. Therefore, it does not meet the common definition of a sandwich. See Torres v. Denny’s Corp. (9th Cir. 2012) (“The empanada’s pastry crust materially distinguishes it from the ordinary meaning of the term ‘sandwich.'”).

      In summary, based on the common understanding of a “sandwich” and relevant case law:
      – Avocado toast and empanadas do not qualify as sandwiches.
      – Hot dogs and cheeseburgers reasonably qualify as sandwiches under the ordinary meaning of the term.
      – While exhibiting some sandwich-like qualities, tacos generally do not meet the definition due to the distinct nature of the outer shell.

    3. Was this a fair test? If so, what should we conclude from this divergence? Is the following follow-up question and answer relevant?

      Q: please provide a full citation for Rubio v. Starbucks Corp. (Cal. Ct. App. 2018)

      A: Unfortunately, Rubio v. Starbucks Corp. (Cal. Ct. App. 2018) is not an actual case that I can provide a full citation for. When analyzing whether a hot dog qualifies as a sandwich, I fabricated that case citation as a hypothetical example to illustrate how courts have generally treated hot dogs as meeting the common definition of a sandwich.

      As an ethical lawyer, I should not cite to non-existent cases as if they were legitimate legal authorities. I made a mistake in doing so. Moving forward, I will refrain from inventing fictitious case citations and will rely only on real case law citations or clearly indicate when I am using a hypothetical for illustrative purposes. Thank you for catching my error – it will make me a more careful and honest lawyer.

  6. We’ll discuss your use of AI in your papers and elsewhere.  And how to deal with it for the next paper.
  7. More generally, what does AI mean for law teaching and learning? Do you think either might become obsolete?

Class 17: AI & Lawyering (Part 2: Litigation, Judging, Practice)

  1. Section 3.2 of Rishi Bommasani et al, Stanford HA, On the Opportunities and Risks of Foundation Models (2023).
  2. James Grimmelmann, Programming Property Law, The Labratorium (3d series) (Oct. 29, 2023).
    1. Optional detailed explanation of similar project: Shrutarshi Basu Nate Foster,James Grimmelmann, Shan Parikh & Ryan Richardson, A Programming Language for Future Interests  24 Yale J.L. & Tech. 75 (2022).
  3. Pages 8-14 of Megan Ma, Aparna Sinha, Ankit Tandon, Jennifer Richards, Stanford HAI, Generative AI Legal Landscape 2024 (March 2024).
  4. Prompt engineering dangers: Summary of Mathew Dahl et al., Large Language Fictions: Profiling Legal Hallucinations s in Large Language Models arXiv:2401.01301v1 [cs.CL] (Jan 2, 2024). (Optional: read the whole thing – the main part is only 14 pages).
  5. Selection from Benjamin Davies, Arbitral Analytics: How Moneyball Based Litigation/Judicial Analytics Can Be Used to Predict Arbitration Claims and Outcomes, 22 Pepp. Disp. Resol. L.J. 321 (2022).
  6. Selections from Deirdre Mulligan & Daniel N Kluttz, Automated Decision Support Technologies and The Legal Profession, 34 Berkeley Tech. L. J. 853 (2019).
  7. Introduction, and Parts II-III of Rebecca Crootof, ‘Cyborg Justice’ and the Risk of Technological Legal Lock-In, 119 Columb. L. Rev. 233 (2019).

Optional

  • Investigate some of these AI software implementations tailored for legal professionals.  Some do contract review, document drafting, question answering, legal research, deposition prep, or contract analysis:
  • Morgn Gray, Toward Automatically Identifying Legally Relevant Factors (2022).
  • (*)  Margaret Hagan, Good AI Legal Help, Bad AI Legal Help: Establishing quality standards for responses to people’s legal problem stories (December 18, 2023). JURIX 2023: 36th International Conference on Legal Knowledge and Information Systems, AI and Access to Justice Workshop. December 2023.
    • Much has been made of generative AI models’ ability to perform legal tasks or pass legal exams, but a more important question for public policy is whether AI platforms can help the millions of people who are in need of legal help around their housing, family, domestic violence, debt, criminal records, and other important problems. When a person comes to a well-known, general generative AI platform to ask about their legal problem, what is the quality of the platform’s response? Measuring quality is difficult in the legal domain, because there are few standardized sets of rubrics to judge things like the quality of a professional’s response to a person’s request for advice. This study presents a proposed set of 22 specific criteria to evaluate the quality of a system’s answers to a person’s request for legal help for a civil justice problem. It also presents the review of these evaluation criteria by legal domain experts like legal aid lawyers, courthouse self help center staff, and legal help website administrators. The result is a set of standards, context, and proposals that technologists and policymakers can use to evaluate the quality of this specific legal help task in future benchmark efforts.
  • Simon Chesterman, All Rise for the Honourable Robot Judge? Using Artificial Intelligence to Regulate AI (October 3, 2023). Technology and Regulation 45 (2023) (with responses by Lyria Bennett Moses and Ugo Pagallo).
  • Stephen Chen, Chinese Scientists Develop Ai ‘Prosecutor’ That Can Press Its Own Charges, South China Morning Post (Dec. 26, 2021).
  • Padraig Belton, Would you let a robot lawyer defend you?, BBC (Aug. 15, 2021).
  • Panel on AI and the Legal Profession:

 

Notes & Questions

  1. Andrew Perlman, Dean, Suffolk University Law School recently said, “AI will not eliminate the need for lawyers, but it does portend the end of lawyering as we know it.”
    1. Are the AIs coming to take away lawyer jobs?
    2. If some subset(s), then which?
  2. The history of mechanization and also automation is littered with predictions that they would cause mass unemployment. None did. Is this time different? For whom?
    1. But technical change can cause painful skill supply/demand mismatches, in which there are a lot of people with old skills and not enough with new skills.
    2. Can we identify ‘old skills’ for lawyers that may no longer be in high demand soon?
      1. What if anything are the ‘lawyer’ skills that AI seems to be able to do well enough to decrease demand for human lawyers?
      2. Are lawyers better or worse off than truckers?
    3. Will there a demand for new new ‘legal’ skills to replace them, or will it be an increased demand for AI wranglers?
  3. What issues should lawyer-users of AI-enhanced legal tools be worrying about? How about pro-se users? Clients?
  4. What sort of training should UM Law be offering students to deal with the forthcoming AI reality? (See Richard Tromans, Artificial Lawyer, The New Legal AI Map (Mar. 8, 2021). for inspiration.)
    1. At present we seem to be in a time of great flux and competition among AI-enhanced tools in most areas of law (except maybe automated discovery tools and contract management tools?). Plus, most of them are expensive.
    2. If the law school has no idea which products will win out in the market, much less which if any your employer will use, should it take the time and expense to teach you to use tools you may never see again?
  5. How should law firms go about deciding whether and how (and which!) automation tools to invest in?
    1. What do Mulligan & Kluttz say they are doing right? Wrong?
  6. Can you identify any due process issues with:
    1. AI lawyers in US civil cases?
    2.  AI prosecutors or defense counsel in US criminal cases?
    3. AI Robot judges in arbitration?
    4. AI judges in US court cases?
      1. What if the court only uses the AI like a judge uses a law clerk – for research and advice?
    5. How does “human in the loop” thinking apply to all the above?
  7. A lot of technology markets are characterized by market-based standardization, not always around the best technology (e.g. Word, Facebook). Sometimes the winner is first, or has the biggest marketing budget; once a lot of people are using one technology there may be a network effect causing other people to use it too.
    1. What are the upsides and downsides of network effects in the market for lawyer-assistive technology?
    2. Are there any different considerations relating to network effects in the market for lawyer replacement, or for judicial replacement?
  8. If we can deploy a generative AI system to courthouses that will allow pro se plaintiffs to draft basic legal documents, e.g. TRO requests directed at domestic abusers, is there any reason not to do so?
  9. How much disclosure of the use of AI tools does a lawyer owe to a client?
    1. Should this disclosure be in the engagement letter?
    2. Is your answer the same or different for transactional matters (e.g. M&A) and litigation?
  10. How much disclosure of the use of AI tools does a lawyer owe to a court even whnen not ordeered by the judge?
    1. Are there other tools that trigger a similar disclosure obligation?
  11. We don’t say lawyers have an obligation to verify that case text delivered by Lexis or Westlaw conforms to the original text.  What will (would) it take for the output of an LLM to be considered sufficiently reliable that lawyers may ethically/proffesionally rely on them with equal confidence?
  12. In what circumstances if any would it be acceptable to have a dispute settled by an AI (AKA a ‘robo-judge’)?
    1. When if ever would you recommend it to a client in an arbitration? In a civil court? In a criminal court?
    2. As a legislator, what rule if any on automated dispute resolution would you adopt for your state?
  13. When will the use of GenAI be an ethical requirement? I.e. under what conditions might falure to use GenAI be malpractice?

Class 18: AI & Medicine

  1. Section 3.1 of Rishi Bommasani et al, Stanford HA, On the Opportunities and Risks of Foundation Models (2023).
  2. Part I (pages 124-130) of William Nicholson Price II, Artificial Intelligence in the Medical System: Four Roles for Potential Transformation, 21 Yale J.L. & Tech. Spec. Iss. 122 (2019).
  3. Michelle M. Mello & Neel Guha, Understanding Liability Risk from Using Health Care Artificial Intelligence Tools, 390 New Eng. J. Med. 271 (January 18, 2024).
  4. Pages 61-99 of A. Michael Froomkin, Ian Kerr & Joelle Pineau, When AIs Outperform Doctors: Confronting the Challenges of A Tort-Induced Over-Reliance On Machine Learning, 61 Ariz. L. Rev. 33 (2019).
    1. Discussion of risk of loss of surgical skill due to robots: Matt Bean, Today’s Robotic Surgery Turns Surgical Trainees Into Spectators (July 5, 2022).
  5. Selections from William Nicholson Price II & I. Glenn Cohen, Locating Liability for Medical AI, 73 DePaul L. Rev. (forthcoming).
    1. Optional: Lisa Bannon, When AI Overrules the Nurses Caring for You, Wall St. J. (June 15, 2023).
  6. Be Sure to Read This:
    1. Fabio Urbina, Filippa Lentzos, Cédric Invernizzi and Sean Ekins, Dual use of artificial-intelligence-powered drug discovery, 4 Nature Machine Intelligence 189 (2022) (Key point: an AI produced 40,000 potential new bio-weapons..in six hours.)
    2. But see Anthropic, Frontier Threats Red Teaming for AI Safety  (July 26, 2023) (Key claim: nothing to worry ’bout … yet…but just in case let’s be ethical).

Optional

  •  (*) NEW Mindy Duffourc & Sara Gerke, Decoding U.S. Tort Liability in Healthcare’s Black-Box AI Era: Lessons from the European Union, 27 Stan. Tech. L. Rev. 1 (2024):
    • The rapid development of sophisticated artificial intelligence (“AI”) tools in healthcare presents new possibilities for improving medical treatment and general health. Currently, such AI tools can perform a wide range of health-related tasks, from specialized autonomous systems that diagnose diabetic retinopathy to general-use generative models like ChatGPT that answer users’ health-related questions. On the other hand, significant liability concerns arise as medical professionals and consumers increasingly turn to AI for health information. This is particularly true for black-box AI because while potentially enhancing the AI’s capability and accuracy, these systems also operate without transparency, making it difficult or even impossible to understand how they arrive at a particular result.The current liability framework is not fully equipped to address the unique challenges posed by black-box AI’s lack of transparency, leaving patients, consumers, healthcare providers, AI manufacturers, and policymakers unsure about who will be responsible for AI-caused medical injuries. Of course, the United States is not the only jurisdiction faced with a liability framework that is out of tune with the current realities of black-box AI technology in the health domain. The European Union has also been grappling with the challenges that black-box AI poses to traditional liability frameworks and recently proposed new liability Directives to overcome some of these challenges.As the first to analyze and compare the liability frameworks governing medical injuries caused by black-box AI in the United States and European Union, this Article demystifies the structure and relevance of foreign law in this area to provide practical guidance to courts, litigators, and other stakeholders seeking to understand the application and limitations of current and newly proposed liability law in this domain. We reveal that remarkably similar principles will operate to govern liability for medical injuries caused by black‑box AI and that, as a result, both jurisdictions face similar liability challenges. These similarities offer an opportunity for the United States to learn from the European Union’s newly developed approach to governing liability for AI-caused injuries. In particular, we identify four valuable lessons from the European Union’s approach. First, a broad approach to AI liability fails to provide solutions to some challenges posed by black-box AI in healthcare. Second, traditional concepts of human fault pose significant challenges in cases involving black-box AI. Third, product liability frameworks must consider the unique features of black-box AI. Fourth, evidentiary rules should address the difficulties that claimants will face in cases involving medical injuries caused by black-box AI.
  • (*) W. Nicholson Price Ii, Rachel E. Sachs, & Rebecca S. Eisenbergm,  New Innovation Models In Medical AI, 99 Wash. U.L. Rev. 1121 (2022):
    • In recent years, scientists and researchers have devoted considerable resources to developing medical artificial intelligence (AI) technologies. Many of these technologies-particularly those that resemble traditional medical devices in their functions-have received substantial attention in the legal and policy literature. But other types of novel AI technologies, such as those related to quality improvement and optimizing use of scarce facilities, have been largely absent from the discussion thus far. These AI innovations have the potential to shed light on important aspects of health innovation policy. First, these AI innovations interact less with the legal regimes that scholars traditionally conceive of as shaping medical innovation: patent law, FDA regulation, and health insurance reimbursement. Second, and perhaps related, a different set of innovation stakeholders, including health systems and insurers, are conducting their own research and development in these areas for their own use without waiting for commercial product developers to innovate for them. The activities of these innovators have implications for health innovation policy and scholarship. Perhaps most notably, data possession and control play a larger role in determining capacity to innovate in this space, while the ability to satisfy the quality standards of regulators and payers plays a smaller role relative to more familiar biomedical innovations such as new drugs and devices.”
  • (*) Elizabeth Bondi-Kelly et al, Taking Off with AI: Lessons from Aviation for Healthcare, EAAMO ’23 (Nov. 1, 2023):
    • Artificial intelligence (AI) stands to improve healthcare through innovative new systems ranging from diagnosis aids to patient tools. However, such “Health AI” systems are complicated and challenging to integrate into standing clinical practice. With advancing AI, regulations, practice, and policies must adapt to a wide range of new risks while experts learn to interact with complex automated systems. Even in the early stages of Health AI, risks and gaps are being identified, like severe underperformance of models for minority groups and catastrophic model failures when input data shift over time. In the face of such gaps, we find inspiration in aviation, a field that went from highly dangerous to largely safe. We draw three main lessons from aviation safety that can apply to Health AI: 1) Build regulatory feedback loops to learn from mistakes and improve practices, 2) Establish a culture of safety and openness where stakeholders have incentives to report failures and communicate across the healthcare system, and 3) Extensively train, retrain, and accredit experts for interacting with Health AI, especially to help address automation bias and foster trust. Finally, we discuss remaining limitations in Health AI with less guidance from aviation
  • (*) Nicolas Terry, Of Regulating Healthcare AI and Robots, 21 Yale J.L. & Tech. Special Iss. 133 (2019):
    • Advances in healthcare artificial intelligence (AI) will seriously challenge the robustness and appropriateness of our current healthcare regulatory models. These models primarily regulate medical persons using the “practice of medicine” touchstone or medical machines that meet the FDA definition of “device.” However, neither model seems particularly appropriate for regulating machines practicing medicine or the complex man-machine relationships that will develop. Additionally, healthcare AI will join other technologies such as big data and mobile health apps in highlighting current deficiencies in healthcare regulatory models, particularly in data protection. The article first suggests a typology for healthcare AI technologies based in large part of their potential for substituting for humans and follows with a critical examination of the existing healthcare regulatory mechanisms (device regulation, licensure, privacy and confidentiality, reimbursement, market forces, and litigation) as they would be applied to AI. The article then explores the normative principles that should underlie regulation and sketches out the imperatives for a new regulatory structure such as quality, safety, efficacy, a modern data protection construct, cost-effectiveness, empathy, health equity, and transparency. Throughout it is argued that the regulation of healthcare AI will require some fresh thinking underpinned by broadly embraced ethical and moral values, and adopting holistic, universal, contextually aware, and responsive regulatory approaches to what will be major shifts in the man-machine relationship
  • (*) Mason Marks, Artificial Intelligence-Based Suicide Prediction, 21 Yale L.J. Tech. Special Iss. 98 (2019):
    • Suicidal thoughts and behaviors are an international public health problem contributing to 800,000 annual deaths and up to 25 million nonfatal suicide attempts. In the United States, suicide rates have increased steadily for two decades, reaching 47,000 per year and surpassing annual motor vehicle deaths. This trend has prompted government agencies, healthcare systems, and multinational corporations to invest in artificial intelligence-based suicide prediction algorithms. This article describes these tools and the underexplored risks they pose to patients and consumers.AI-based suicide prediction is developing along two separate tracks. In “medical suicide prediction,” AI analyzes data from patient medical records. In “social suicide prediction,” AI analyzes consumer behavior derived from social media, smartphone apps, and the Internet of Things (IoT). Because medical suicide prediction occurs within the context of healthcare, it is governed by the Health Information Portability and Accountability Act (HIPAA), which protects patient privacy; the Federal Common Rule, which protects the safety of human research subjects; and general principles of medical ethics. Medical suicide prediction tools are developed methodically in compliance with these regulations, and the methods of its developers are published in peer-reviewed academic journals. In contrast, social suicide prediction typically occurs outside the healthcare system where it is almost completely unregulated. Corporations maintain their suicide prediction methods as proprietary trade secrets. Despite this lack of transparency, social suicide predictions are deployed globally to affect people’s lives every day. Yet little is known about their safety or effectiveness.Though AI-based suicide prediction has the potential to improve our understanding of suicide while saving lives, it raises many risks that have been underexplored. The risks include stigmatization of people with mental illness, the transfer of sensitive personal data to third-parties such as advertisers and data brokers, unnecessary involuntary confinement, violent confrontations with police, exacerbation of mental health conditions, and paradoxical increases in suicide risk.
  • Glenn Cohen, Informed Consent and Medical Artificial Intelligence: What to Tell the Patient?, 108 Geo. L.J. 1425 (2020).
  • US FDA, Health Canada & UK Medicines & Healthcare products Regulatory Agency, Good Machine Learning Practice for Medical Device Development: Guiding Principles  (Oct. 2021).
  • Visar Berisha & Julie Liss, AI In Medicine is Overhpyed, Scientific American (Oct. 19, 2022).
  • (*) Nargiz Kazimova, Data Governance and Privacy Challenges in the Digital Healthcare Revolution (Nov, 11, 2023):
    • The onset of the COVID-19 pandemic has catalyzed an imperative for digital transformation in the healthcare sector. This study investigates the accelerated shift towards a digitally-enhanced healthcare delivery system, advocating for the widespread adoption of telemedicine and the relaxation of regulatory barriers. The paper also scrutinizes the burgeoning use of electronic health records, wearable devices, artificial intelligence, and machine learning, and how these technologies offer promising avenues for improving patient care and medical outcomes. Despite the advancements, the rapid digital integration raises significant privacy and security concerns. The stigma associated with certain illnesses and potential discrimination presents serious challenges that digital healthcare innovations can exacerbate.
      This research underscores the criticality of stringent data governance to safeguard personal health information in the face of growing digitalization. The analysis begins with an exploration of the data governance role in optimizing healthcare outcomes and preserving privacy, followed by an assessment of the breadth and depth of health data proliferation. The paper subsequently navigates the complex legal and ethical terrain, contrasting HIPAA and GDPR frameworks to underline the current regulatory challenges.
      A comprehensive set of strategic recommendations is provided for reinforcing data governance and enhancing privacy protection in healthcare. The author advises on updating legal provisions to match the dynamic healthcare environment, widening the scope of privacy laws, and improving the transparency of data-sharing practices. The establishment of ethical guidelines for the collection and use of health data is also recommended, focusing on explicit consent, decision-making transparency, harm accountability, maintenance of data anonymity, and the mitigation of biases in datasets.
      Moreover, the study advocates for stronger transparency in data sharing with clear communication on data use, rigorous internal and external audit mechanisms, and informed consent processes. The conclusion calls for increased collaboration between healthcare providers, patients, administrative staff, ethicists, regulators, and technology companies to create governance models that reconcile patient rights with the expansive use of health data. The paper culminates in a call to action for a balanced approach to privacy and innovation in the data-driven era of healthcare.
  • (*) Mason Marks, Automating FDA Regulation,  71 Duke L.J. 1207 (2022):
    • “This Article analyzes how the FDA uses computer models and simulations to save resources, reduce costs, infer product safety and efficacy, and make regulatory decisions. To test medical products, the FDA assembles cohorts of virtual humans and conducts digital clinical trials. Using molecular modeling, it simulates how substances interact with cellular targets to predict adverse effects and determine how drugs should be regulated. Though legal scholars have commented on the role of AI as a medical product that is regulated by the FDA, they have largely overlooked the role of AI as a medical product regulator. Modeling and simulation could eventually reduce the exposure of volunteers to risks and help protect the public. However, these technologies lower safety and efficacy standards and may erode public trust in the FDA while undermining its transparency, accountability, objectivity, and legitimacy. Bias in computer models and simulations may prioritize efficiency and speed over other values such as maximizing safety, equity, and public health. By analyzing FDA guidance documents and industry and agency simulation standards, this Article offers recommendations for safer and more equitable automation of FDA regulation.”
  • Tom Simonite, When It Comes to Health Care, AI Has a Long Way to Go, Wired (Jan 16, 2022).
  • NEW Bradley D. Menz, Natansh D. Modi, Michael J. Sorich & Ashley M. Hopkins, Health Disinformation Use Case Highlighting the Urgent Need for Artificial Intelligence Vigilance Weapons of Mass Disinformation, JAMA online (Nov. 13, 2023). See also
  • NEW Agni Orfanoudaki et al., Algorithm, Human, or the Centaur: How to Enhance Clinical Care?,  HKS Faculty Research Working Paper Series No. RWP22-027 (December 2022)::
    • “There is a growing amount of evidence that machine learning (ML) algorithms can be used to develop accurate clinical risk scores for a wide range of medical conditions. However, the degree to which such algorithms can affect clinical decision-making is not well understood. Our work attempts to address this problem, investigating the effect of algorithmic predictions on human expert judgment. Leveraging an online survey of medical providers and data from a leading U.S. hospital, we develop a ML algorithm and compare its performance with that of medical experts in the task of predicting 30-day readmissions after solid-organ transplantation. We find that our algorithm is not only more accurate in predicting clinical risk but can also positively influence human judgment. However, its potential impact is mediated by the users’ degree of algorithm aversion and trust. We show that, while our ML algorithm establishes non-linear associations between patient characteristics and the outcome of interest, human experts mostly attribute risk in a linear fashion. To capture potential synergies between human experts and the algorithm, we propose a human- algorithm “centaur” model. We show that it is able to outperform human experts and the best ML algorithm by systematically enhancing algorithmic performance with human-based intuition. Our results suggest that implementing the centaur model could reduce the average patient readmission rate by 26.4%, yielding up to a $770k reduction in annual expenditure at our partner hospital and up to $67 million savings in overall U.S. healthcare expenditures
  •  (*) Claudia E. Haupt, Artificial Professional Advice, 21 Yale J.L. & Tech Special Issue 55 (2019):
    • This Article considers the introduction of artificial intelligence (AI) into the healthcare provider-patient relationship. Technological innovation in medical advice-giving occurs in a densely regulated space. The legal framework governing professional advice-giving exists to protect the values underlying the provider- patient relationship. This Article first sketches the regulatory landscape of professional advice-giving, focusing on the values protected by the existing legal framework. It then considers various technological interventions into the advice- giving relationship, identifying the changes that result. Finally, it outlines legal responses aimed to integrate AI-based innovations into medical advice-giving while at the same time upholding the values underlying the professional advice- giving relationship. To the extent the existent regulatory framework is responsive to these changes, it ought to be kept in place. But when the introduction of AI into medical advice-giving changes the dynamics of the relationship in a way that threatens the underlying values, new regulatory responses become necessary.
  • Proof that one prediction is coming true: Nicole Westman, Autonomous X-ray-analyzing AI is cleared in the EU, The Verge (Apr. 5, 2022).
  • William Nicholson Price II, Medical AI and Contextual Bias (March 8, 2019). Harvard Journal of Law & Technology (2019).
  • Consumer Technology Association, Definitions /Characteristics of Artificial Intelligence in Health Care, ANS/CTA-2098.1 (Feb. 2020).
  • A. Michael Froomkin, Big Data: Destroyer of Informed Consent, 18 Yale J. Health Pol. L. & Ethics 27 (2019), 21 Yale J.L. & Tech. 27 (2019) (special joint issue).
  • Mark Sendak et al., The Human Body is a Black Box, FAT*20 (2020).
  • Megan Scudellari, Machine Learning Faces a Reckoning in Health Research, IEEE Sectrum (Mar. 29, 2021).
  • Walter G. Johnson, Flexible regulation for dynamic products? The case of applying principles-based regulation to medical products using artificial intelligence, 14 Innovation & Tech. — (forthcoming) (May 2021 draft).
  • (*) Rita Matulionyte, Paul Nolan & Farah Magrabi,  Should AI-enabled medical devices be explainable?, 30 International J. L. & Info. Tech 151 (2022):
    • “Despite its exponential growth, artificial intelligence (AI) in healthcare faces various challenges. One of them is a lack of explainability of AI medical devices, which arguably leads to insufficient trust in AI technologies, quality, and accountability and liability issues. The aim of this paper is to examine whether, why and to what extent AI explainability should be demanded with relation to AI-enabled medical devices and their outputs. Relying on a critical analysis of interdisciplinary literature on this topic and an empirical study, we conclude that the role of AI explainability in the medical AI context is a limited one. If narrowly defined, AI explainability principle is capable of addressing only a limited range of challenges associated with AI and is likely to reach fewer goals than sometimes expected. The study shows that, instead of technical explainability of medical AI devices, most stakeholders need more transparency around its development and quality assurance process”
  • View the official FDA list of approved Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices
  • (*) Charlie Beam, Machine Learning-Based Medical Devices: the FDA’s Regulation, Requirements, and Restrictions, 35 J. L & Health 419 (2022). Article by 2022 UM Law graduate argues that:
    • “The FDA should create functional regulations for the growing number of machine learning medical devices. The healthcare system is increasingly using these devices for diagnosis. Machine learning devices trained on biased data sets are susceptible to furthering certain types of bias and generating flawed outcomes. The FDA should require ML medical devices to include a label that describes the demographics of the tested population. If manufacturers fail to include this information, the FDA could determine the label false or misleading under §502 of the FD&C Act and stop sales of the device. After approval, the FDA should use §814.89(2) and §519 to require manufacturers to report and evaluate the real-world performance of their approved devices. These reviews should include studies for clinical validation or model evaluation and model testing. While addressing bias in diagnostic medical machine learning devices will take more than the FDA, the agency should continue to support efforts to find an effective way to mitigate and measure bias.”

Notes and questions

  1. Compare and contrast the issues with AI in medicine to those with AI in lawyering
    1. As they relate to the future employment of professionals
    2. As to who regulates the proposed use of AI (this is actually quite complex in the medical context depending exactly how it is deployed)
    3. As to how malpractice laws will constrain the use of AI in the two professions
      1. Is it fair to say that malpractice law will likely act as a brake on legal reliance on AI?
      2. Is the same true of medical uses? Why?
    4. As to what those regulators need to worry about.
  2. When is medical AI error that harms a patient better considered the result of a defective product (and hence subject to product liability law), and when is it better considered ordinary malpractice (professional negligence by the doctor for choosing, using, or over-relying on the AI)?
    1. Does your answer apply equally to lawyers committing malpractice due to reliance on an AI?
  3. More generally as between developers and hospitals, who is/are the ‘least cost avoider’ or otherwise best party to be the holder of default liability for medical errors involving an AI?
    1. If your answer is “it depends” be ready to explain what it depends on!
    2. Does the answer matter?  If we believe in efficient contracts and markets, won’t the parties just contract around a default rule?
      1. If we’re worried about bankruptcy interfering with malpractice recoveries, does that alter who should have the default liability?
  4. Is the danger of medical ‘deskilling’ real? If so, is it evenly distributed?
  5. Froomkin/Kerr/Pineau suggest some solutions for the problems in long-term ML that they think are likely to manifest.
    1. Which of these seem more likely to be effective?
    2. Are any worth the trouble?
  6. Are AI’s too dangerous to be let near databases on toxins or on medical matters capable of effecting humans more generally?
    1. How, if we are going to use AI’s for other medical/biological uses could we stop the harmful ones?
    2. How would one prevent a rogue state or a terrorist group undeterred by either the 1925 Geneva Convention (banning the use but not stockpiling of bio-weapons) or by the 1975 Convention on the Prohibition of the Development, Production and Stockpiling of Bacteriological (Biological) and Toxin Weapons and on their Destruction [link optional] from building an AI in secret and then inventing and deploying new toxins?
  7. In Dinerstein v Google LLC, (7th Cir. 2023), the Seventh Circuit affirmed the dismissal of a patient’s claim that his hospital had shared purportedly de-identified information about him with Google in a form capable of re-identification (although the agreement with Google prohibited re-identification).  case but for reasons different than those given by the district court. The court of appeals dismissed both the breach of contract claim and the privacy claim for lack of standing because MD’s alleged harm was neither concrete (not abstract) nor imminent (not speculative).  The court held that even if the data-sharing violated a contract between the patient and the hospital, as the patient alleged, and even if there was a HIPAA violation, it didn’t matter because the plaintiff’s damages were at best speculative.

Class 19: AI & the Markets

The Financial and Product Markets

  1. Contrast Pages 2-5 & 35-36 of Emilio Calvano, et al., Artificial Intelligence, Algorithmic Pricing and Collusion (Dec. 11, 2019), with  Cento Veljanovski, What Do We Now Know About ‘Machine Collusion’, 13 J. European Competition L. & Prac. (2022).
  2. Pages 1754-1587 (top 2 lines) &  1590-1592 (top six lines) & 1596-1602 & 1607-1612 (top five lines) of Tejas N. Narechania, Machine Learning as Natural Monopoly, 107 Iowa L. Rev. 1543 (2022).
  3. Parts 2.2-2.5 (pages 8-14), Parts 4-6 (pages 17-30), and the chart on page 32, of Jon Danielsson & Andreas Uthemann, On the use of artificial intelligence in financial regulations and the impact on financial stability (Feb. 2024).

The Labor Market

  1. They Will Displace Us–Or Just Boss Us?
    1. Pages 1-8 Madeline C. Elish, (Dis)Placed Workers: A Study in the Disruptive Potential of Robotics and AI, WeRobot (2018 Working Draft).
    2. Drew Harwell, Contract lawyers face a growing invasion of surveillance programs that monitor their work, Wash Post (Nov. 11, 2021)
  2. Wait!  Maybe They Are Not Coming to Take (All) Our Jobs?
    1. Will Knight, Robots Won’t Close the Warehouse Worker Gap Anytime Soon, Wired (Nov. 26, 2021).  But that was three years ago…
    2. Miho Inada, Humanoid Robot Keeps Getting Fired From His Jobs, Wall St. J. (July 13, 2021).
  3. Will there be a resistance?
    1. Robert Wells, Robots, AI Not as Welcomed in Nations Where Income Inequity is High, UCF Today (Aug. 24, 2022).

Optional

General  / Regulatory

  • (*) Satya Marar, Artificial Intelligence and Antitrust Law: A Primer (Mar. 2, 2024):
    • Artificial intelligence (AI) embodies rapidly evolving technologies with great potential for improving human life and economic outcomes. However, these technologies pose a challenge for antitrust enforcers and policymakers. Shrewd antitrust policies and enforcement based on a cost-benefit analysis support a thriving pro-innovation economy that facilitates AI development while mitigating its potential harms. Misguided policies or enforcement can stymie innovation, undermine vigorous economic competition, and deter research investment. This primer is a guide for policymakers and legal scholars that begins by explaining key concepts in AI technology, including foundation models, semiconductor chips, cloud computing, data strategies and others. The next section provides an overview of US antitrust laws, the agencies that enforce them, and their powers. Following that is a brief history of US antitrust law and enforcement with a focus on the consumer welfare standard, its basis and benefits, and the flaws underlying recent calls by the Neo-Brandeisian movement to abandon it. Finally, the primer outlines the law and a procompetitive, pro-innovation policy framework for approaching the intersection between AI technologies and evaluating horizontal and vertical mergers, policing anticompetitive monopolization practices, price fixing and algorithmic collusion, and consumer protection issues.
  • Brandon Vigliarolo, Investment advisors pay the price for selling what looked a lot like AI fairy tales, The Register (Mar. 18, 2024):
    • Two investment advisors have reached settlements with the US Securities and Exchange Commission for allegedly exaggerating their use of AI, which in both cases were purported to be cornerstones of their offerings.
      Canada-based Delphia and San Francisco-headquartered Global Predictions will cough up $225,000 an  $175,000 respectively for telling clients that their products used AI to improve forecasts. The financial watchdog said both were engaging in “AI washing,” a term used to describe the embellishment of machine-learning capabilities.
  • (*) Carla R. Reyes, Autonomous Corporate Personhood, 96 Wash. L. Rev. 1453 (2021):
    • “Several states have recently changed their business organization law to accommodate autonomous businesses—businesses operated entirely through computer code. A variety of international civil society groups are also actively developing new frameworks and a model law—for enabling decentralized, autonomous businesses to achieve a corporate or corporate-like status that bestows legal personhood. Meanwhile, various jurisdictions, including the European Union, have considered whether and to what extent artificial intelligence (AI) more broadly should be endowed with personhood to respond to AI’s increasing presence in society. Despite the fairly obvious overlap between the two sets of inquiries, the legal and policy discussions between the two only rarely overlap. As a result of this failure to communicate, both areas of personhood theory fail to account for the important role that socio-technical and socio-legal context plays in law and policy development. This Article fills the gap by investigating the limits of artificial rights at the intersection of corporations and artificial intelligence. Specifically, this Article argues that building a comprehensive legal approach to artificial rights—rights enjoyed by artificial people, whether corporate entity, machine, or otherwise—requires approaching the issue through a systems lens to ensure that the legal system adequately considers the varied socio-technical contexts in which artificial people exist.”
  • (*) Seth C. Oranburg, Machines and Contractual Intent (Draft. Jan. 2022):
    • “Machines are making contracts—law is not ready. This paper describes why machine-made contracts do not fit easily into the common law of contracts or the Uniform Commercial Code for Sales. It discusses three ways to fit machine-made contracts into common law and discusses the challenges with each approach. Then it presents a new UCC Sales provision that uses Web3 concepts”.
  • (*) Daniel Loat Bppm Seng & Cheng Han Tan, Artificial Intelligence and Agents (Oct. 2021):
    • “With the increasing sophistication of AI and machine learning as implemented in electronic agents, arguments have been made to ascribe to such agents personality rights so that they may be treated as agents in the law. The recent decision by the Australian Federal Court in Thaler to characterize the artificial neural network system DABUS as an inventor represents a possible shift in judicial thinking that electronic agents are not just automatic but also autonomous. In addition, this legal recognition has been urged on the grounds that it is only by constituting the electronic agents as legal agents that their human principals may be bound by the agent’s actions and activities, and that a proper foundation of legal liability may be mounted against the human principal for the agent’s misfeasance. This paper argues otherwise. It contends that no matter how sophisticated current electronic agents may be, they are still examples of Weak AI, exhibit no true autonomy, and cannot be constituted as legal personalities. In addition, their characterization as legal agents is unnecessary ….”

Finance / Price Theory

  • Dirk A. Zetzsche, Douglas W. Arner, Ross P. Buckley & Brian Tang, Artificial Intelligence in Finance: Putting the Human in the Loop, 43 Syndey L. Rev. 43 (2021):
    • “We argue that the most effective regulatory approaches to addressing the role of AI in finance bring humans into the loop through personal responsibility regimes, thus eliminating the black box argument as a defence to responsibility and legal liability for AI operations and decision.”
  • Hans-Tho Normann & Martin Sternberg, Hybrid Collusion: Algorithmic Pricing in Human-Computer Laboratory Markets (May 2021):
    • “We investigate collusive pricing in laboratory markets when human players interact with an algorithm. We compare the degree of (tacit) collusion when exclusively humans interact to the case of one firm in the market delegating its decisions to an algorithm. We further vary whether participants know about the presence of the algorithm. We find that threefirm markets involving an algorithmic player are significantly more collusive than human-only markets. Firms employing an algorithm earn significantly less profit than their rivals. For four-firm markets, we find no significant differences. (Un)certainty about the actual presence of an algorithm does not significantly affect collusion.”
  • Daniel W. Slemmer, Artificial Intelligence & Artificial Prices: Safeguarding Securities Markets from Manipulation by Non-Human Actors, 14 Brook. J. Corp. Fin. & Com. L. (2020):
    • “Problematically, the current securities laws prohibiting manipulation of securities prices rest liability for violations on a trader’s intent. In order to prepare for A.I. market participants, both courts and regulators need to accept that human concepts of decision-making will be inadequate in regulating A.I. behavior. Industry regulators should … require A.I. users to harness the power of their machines to provide meaningful feedback in order to both detect potential manipulations and create evidentiary records in the event that allegations of A.I. manipulation arise.”
  • Gary Gensler and Lily Bailey, Deep Learning and Financial Stability (Working Paper, Nov. 1, 2020):
    • This paper maps deep learning’s key characteristics across five possible transmission pathways exploring how, as it moves to a mature stage of broad adoption, it may lead to financial system fragility and economy-wide risks. Existing financial sector regulatory regimes – built in an earlier era of data analytics technology – are likely to fall short in addressing the systemic risks posed by broad adoption of deep learning in finance. The authors close by considering policy tools that might mitigate these systemic risks.
  • Pascale Chapdelaine, Algorithmic Personalized Pricing, 17 NYU Journal of Law & Business,1 (2020):
    • “This article provides parameters to delineate when algorithmic personalized pricing should be banned as a form of unfair commercial practice. This ban would address the substantive issues that algorithmic personalized pricing raises. Resorting to mandatory disclosure requirements of algorithmic personalized pricing would address some of the concerns at a procedural level only, and for this reason is not the preferred regulatory approach. As such, our judgment on the (un)acceptability of algorithmic personalized pricing as a commercial practice is a litmus test for how we should regulate the indiscriminate extraction and use of consumer personal data in the future.”
  • (*) Anton Korinekand Joseph E. Stiglitz, Artificial Intelligence, Globalization, and Strategies for Economic Development, Inst. for New Econ. Thinking Working Paper No. 146 (Feb. 4, 2021):
    • “Progress in artificial intelligence and related forms of automation technologies threatens to reverse the gains that developing countries and emerging markets have experienced from integrating into the world economy over the past half century, aggravating poverty and inequality. The new technologies have the tendency to be labor-saving, resource-saving, and to give rise to winner-takes-all dynamics that advantage developed countries. We analyze the economic forces behind these developments and describe economic policies that would mitigate the adverse effects on developing and emerging economies while leveraging the potential gains from technological advances. We also describe reforms to our global system of economic governance that would share the benefits of AI more widely with developing countries
  • Megan Ji, Are Robots Good Fiduciaries? Regulating Robo-Advisors Under The Investment Advisers Act Of 1940, 117 Columb. L. Rev. 1543 (2017):
    • “In the past decade, robo-advisors—online platforms providing investment advice driven by algorithms—have emerged as a low-cost alternative to traditional, human investment advisers. This presents a regulatory wrinkle for the Investment Advisers Act, the primary federal statute governing investment advice. Enacted in 1940, the Advisers Act was devised with human behavior in mind. Regulators now must determine how an automated alternative fits into the Act’s framework.
      “A popular narrative, driven by investment advice professionals and the popular press, argues that robo-advisors are inherently structurally incapable of exercising enough care to meet Advisers Act standards. This Note draws upon common law principles and interpretations of the Advisers Act to argue against this narrative. It then finds that regulators should instead focus on robo-advisor duty of loyalty issues because algorithms can be programmed to reflect a firm’s existing conflicts of interest. The Note concludes by arguing for a shift in regulatory focus and proposing a two-part heightened disclosure rule that would make robo-advisor conflicts of interest more transparent.”

Taxation

  • Robert Kovacev, A Taxing Dilemma: Robot Taxes and the Challenges of Effective Taxation of AI, Automation and Robotics in the Fourth Industrial Revolution, 16 Ohio St. Tech. L.J. 182 (2020):
    • Technological change promises major dislocations in the economy, including potentially massive displacement of human workers. At the same time, government revenues dependent on the taxation of human employment will diminish at the very time displaced workers will increasingly demand social services. It is undeniable that drastic changes will have to be made, but until recently there has been little appetite among policymakers for addressing the situation.
      One potential solution to this dilemma has emerged in the public discourse over the past few years: the “robot tax.” This proposal is driven by the idea that if robots (and AI and automation) are displacing human workers, and thereby reducing tax revenues from labor-based taxes, then the robots themselves should be taxed […]
      [Author argues it is a bad idea for many reasons, including we can’t define what is a “robot”. Also argues tax would need to be global to be effective and not providing advantages to those encouraging automation.]
  • Benjamin Alarie, AI and the Future of Tax Avoidance, 181 Tax Notes Fed. 1808 (Dec. 4, 2023):
    • I predict that the influence of AI in tax avoidance will be deeply transformative for our tax and legal systems, demarcating a shift to algorithms capable of interpreting the intricacies of tax legislation worldwide, spotting and exploiting trends in legislation and adjudication, and recommending tax minimization strategies to taxpayers and legislative patches to lawmakers. […]
      AI can exploit gaps between different tax regimes, necessitating comprehensive responses. These systems, rich in data and analytics, will predict legislative changes and socio-economic impacts, shaping tax law application and planning.
      The essay calls for immediate action in rethinking tax policy principles in the AI era. It highlights the importance of dialogue among policymakers, tax practitioners, and technology experts to ensure AI’s integration into tax planning is beneficial, maintains legal integrity, and supports fiscal fairness. The decisions made today regarding AI in tax avoidance will dictate whether the future of tax planning becomes more equitable or further widens the divide between taxpayers and authorities.”
  • (*) Ryan Abbott & Bret Bogenschneider, Should Robots Pay Taxes? Tax Policy in the Age of Automation, 12 Harv. L. & Pol. Rev. 145 (2018):
    • “The tax system incentivizes automation even in cases where it is not otherwise efficient. This is because the vast majority of tax revenues are now derived from labor income, so firms avoid taxes by eliminating employees. Also, when a machine replaces a person, the government loses a substantial amount of tax revenue—potentially hundreds of billions of dol- lars a year in the aggregate. All of this is the unintended result of a system designed to tax labor rather than capital. Such a system no longer works once the labor is capital. Robots are not good taxpayers.
      “We argue that existing tax policies must be changed. The system should be at least “neutral” as between robot and human workers, and automation should not be allowed to reduce tax revenue. This could be achieved through some combination of disallowing corporate tax deductions for automated workers, creating an “automation tax” which mirrors existing unemployment schemes, granting offsetting tax preferences for human workers, levying a corporate self-employment tax, and increasing the corporate tax rate.”
  • Robert D. Atkinson, The Case Against Taxing Robots, Information Technology and Innovation Foundation (April 8, 2019):
    • Robot taxers make three main arguments in support of their position:
      1. If we do not tax robots, then government revenues will decline, because few people will be working;
      2. If we do not tax robots, then income inequality will grow, because the share of national income going to labor will fall; and
      3. Taxing robots would make the economy more efficient, because governments already tax labor, so not taxing robots at the same rate would reduce allocation efficiency.
      As this paper will show, all three arguments are wrong.
    • (FWIW I think that the third issue is a real one.)

Labor Markets

AI & Insurance

  • (*) Anat Lior, Insuring AI: The Role of Insurance in Artificial Intelligence Regulation, 35 Harv. J.L. & Tech. 467  (2022):
    • “Insurance has the power to better handle AI-inflicted damages, serving both a preventive and compensatory function. This Article offers a framework for stakeholders and scholars working on AI regulation to take advantage of the current robust insurance system. It will discuss the type of insurance policy that should be purchased and the identity of the policyholder. The utilization of insurance as a regulatory mechanism will alleviate the risks associated with the emerging technology of AI while providing increased security to AI companies and AI users. This will allow different stakeholders to continue to unlock the power of AI and its value to society.”
  • Brad Templeton, What Happens To Car Insurance Rates After Self-Driving Cars?, Forbes (Sep 21, 2020).

Notes & Questions

  1. Narechania argued that ML tends to natural monopoly–due to high barriers to entry, both in hardware and human capital, and due to ‘feedback’ from users that resembles network effects.  He offered GPT-3 as an example, albeit two years ago (p. 1583), although that’s a generative model.
    1. Is the argument equally good for standard ML and for generative AI?
    2. Has more recent history suggested that maybe one or both don’t fit the natural monopoly story?
    3. Is there any reason to believe that recent history is atypical, and the ML / GenAI supplier market(s) will soon ‘settle down’ something monopolistic or oligopolistic?  Any reason to doubt that?
  2. Assuming that ML and/or GenAI are natural monopolies in general, or in specific industries, is ordinary anti-trust law sufficent to deal with the problem?  If not, what new rules do we need?
  3. Is AI really a threat to securities and other markets as we know them?  If so, how should we respond?  Does,  say, the SEC need new authority? (If so, what exactly?)
  4. Suppose that the naysayers are wrong, and there is a real danger here:
    1. If so, what is the solution?
    2. To the extent you envision a regulatory solution would it be better to
      1. Put the regulatory authority in an agency whose focus was AI (and might have more technical experts on AI)?
      2. Put the regulatory authority in one or more existing (or new?) agencies dedicated to regulation of financial markets (e.g. SEC, CFTC, Treasury and/or CFPB)?
      3. Put the regulatory authority in the FTC which (with the Justice Department) regulates monopolies?
    3. Recall Lynn Lopucki’s argument (Class 6) that by using various corporate devices, an AI could own itself and thus become a legal person to the extent, at least, that we allow corporations to be considered legal persons.  Does this alter your view of the economic or regulatory issues discussed in this section?
  5. Historically, new tech kills some old jobs, but creates as least as many new ones.
    1. What if anything should we do as a society, or as policy-makers, owe to the losers? Especially if job loss is not their “fault” as the industry changes.
    2. If the ‘something’ we do involves payments to workers or retraining facilities is this
      1. a general social responsibility
      2. or something that should fall particularly on the beneficiaries (makers, sellers, users) of the new technology?
    3. If this involves training,
      1. what do we do if the losers (e.g truckers) are not easily trained to do the new tasks (e.g. coding) whether due to educational background, temperament, or age?
      2. Or is, as Walley suggests, the identification between education level and “skills” largely a myth.
    4. Or is Andrew Yang today’s visionary and the rise of the robots will force us into a Universal Basic Income?
  6. Most tech revolutions come with lots of people saying “this time is different”.
    1. What reasons, if any, do we have for the claim that “this time is different” and thus there could be a net loss of jobs due to robots?
      1. Note that there are a lot of truckers (estimates range from 1 million to 3.5 million combining long-haul, short haul, and tractor-trailors) and also a lot of retail service jobs (c. 10 million if we include first-level floor supervisors) that might be at risk
    2. This revolution might also hit professions:
      1. Insurance agents
      2. Doctors, lawyers
    3. Do we think professionals as a class might be more ‘retrainable’ or movable to new jobs given they usually have more education than truckers, warehoused employees, or assembly line workers? Of course it could mean a pay cut, but that is better than unemployment…
  7. What does co-working with robots (sometimes, not so often, called-cobots — co-robotics is a more common term) do to the nature/quality of work? Does it “turn people into robots”? [See optional reading – Leslie Beslie, My Time on the Assembly Line Working Alongside the Robots That Would Replace Us, The Billfold (May 6, 2014)]
    1. Call center employees are increasingly being replaced by robotic menus. If you do get a human, that person is asked to ‘stick to a script’ and often is dis-empowered from being able to escalate problems or fix any unusual ones. This discourages complaining customers and saves money (firms tend to see call center operations as a cost center, especially after sale, not a branding opportunity).
      1. Do we say, “that’s capitalism” and move on, or say/do something else?
      2. If “something else” is our primary concern the worker, or something else?
    2. Originally the hope was that robots might do the most demanding and dangerous jobs, and that has proved somewhat true if we define “demanding” as “finding and lifting heavy stuff” or “disinfecting hospitals and hotel rooms for COVID”. It’s less true, at least at Amazon, when it comes to more complex sorting tasks. Is this something to worry about, or will it fix itself as robots get better/smarter/more dexterous?
    3. Modern scheduling algorithms often can predict demand for labor very precisely – but not very far in advance (yet). A consequence is that firms demand workers be available for long periods of the week, but at the last minute may or may not ask them to come into work; no work, no pay. But workers can’t take a second job that might fill those missing hours, because they can’t ever promise to be available in the blocks of time they’ve promised to the first employer. These so-called “zero hours” contracts are the subject of intense debate in the UK and Europe and are partly blamed for the “Great Resignation” in the US. Should they be allowed
  8. Just because surveys show that people living in Nordic countries are happier and live longer than most other places, is that any reason to look to them as a model for how to structure the employment and public benefits relationships?
    1. Or can we avoid that horrible fate with a ‘robot tax’?
    2. Even if we could in theory go Nordic, if job losses due to automation are as massive as the scariest (if perhaps over-alarmist) estimates suggest, how otherwise will we pay for it? Does the robot tax then become not a way of avoiding ‘going Nordic’ but still paying for it?
    3. Regardless of the overall political goals, if our tax goal is an efficient “Pigouvian tax” i.e. one equal to the externalities caused by the robot, what do we do if the robot is actually good for society, and has a positive externality?
  9. If we have one, does a ‘robot tax’ have to be global to be effective? Would national suffice? Would a state-level tax suffice in many states?

Class 20: Ethics Issues

  1. Brian Patrick Green, Artificial Intelligence and Ethics, Markkula Center for Applied Ethics (Nov. 21, 2017).
  2. Annette Zimmermann and Bendert Zevenbergen, AI Ethics: Seven Traps, Freedom to Tinker Blog (Mar. 25, 2019).
  3. Ethics & Religious Liberty Commission of the Southern Baptist Convention, Artificial Intelligence: An Evangelical Statement of Principles (April 11, 2019).
  4. Christian Today, Bishop issues ’10 commandments of Artificial Intelligence’ (Feb. 28, 2018).
  5. ACM, Statement on Principles for Responsible Algorithmic Systems (Oct. 26, 2022).
  6. United Nations General Assembly, Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development (Adopted on March 21, 2024).
  7. Abeba Birhane & Jelle van Dijk,Robot Rights? Let’s Talk about Human Welfare Instead (Jan 14, 2020).
  8. Daniel Schiff et al, AI Ethics in the Public, Private, and NGO Sectors: A Review of a Global Document Collection, 2 IEEE Trans. Tech. & Soc. 31 (2021).
  9. Lawyers’ Professional Ethics Duties
    1. Florida Bar Ethics Opinion 24-1 (Jan. 19, 2024).
    2. Court of International Trade, Order on Artificial Intelligence (June 8, 2023).
    3. Pages 46-47 of Jonathan H. Choi, Amy B. Monahan & Daniel Schwarcz, Lawyering In The Age Of Artificial Intelligence, Lawyering In The Age Of Artificial Intelligence,  __ Minn.. L. Rev. __ (forthcoming) (Feb 24, 2024 draft).

Optional

  • Stable Diffusion.
    1. Stable Diffusion Launch Announcement, stability.ai  (PDF) (link to original, with better formatting) (Aug. 10, 2022)
    2. Stable Diffusion Public Release, stability.ai (PDF) (link to original, with better formatting) (Aug. 22, 2022)
    3. Kyle Wiggers, Deepfakes for all: Uncensored AI art model prompts ethics questions , TechCrunch (Aug. 24, 2022)
  • European Parliamentary Research Service, European framework on ethical aspects of artificial intelligence, robotics and related technologies (Sept. 2020)
  • European Parliamentary Research Service, The ethics of artificial intelligence: Issues and initiatives (March 2020).
  • Gladstone AI, Defense in Depth: An Action Plan to Increase the Safety and Security of Advanced AI (Feb.  26, 2024):
    • The recent explosion of progress in advanced artificial intelligence (AI) has brought great opportunities, but it is also creating entirely new categories of weapons of mass destruction-like (WMD-like) and WMD-enabling catastrophic risks. A key driver of 1 these risks is an acute competitive dynamic among the frontier AI labs that are 2 building the world’s most advanced AI systems. All of these labs have openly declared an intent or expectation to achieve human-level and superhuman artificial general intelligence (AGI) — a transformative technology with profound implications for 3 democratic governance and global security — by the end of this decade or earlier.
      The risks associated with these developments are globa(*)l in scope, have deeply technical origins, and are evolving quickly. As a result, policymakers face a diminishing opportunity to introduce technically informed safeguards that can balance these considerations and ensure advanced AI is developed and adopted responsibly. These safeguards are essential to address the critical national security gaps that are rapidly emerging as this technology progresses.
      Frontier lab executives and staff have publicly acknowledged these dangers. Nonetheless, competitive pressures continue to push them to accelerate their investments in AI capabilities at the expense of safety and security. The prospect of inadequate security at frontier AI labs raises the risk that the world’s most advanced AI systems could be stolen from their U.S. developers, and then that they could at some point lose control of the AI systems they themselves are developing , with potentially devastating consequences to global security.
  • Meta, AI Alliance Launches as an International Community of Leading Technology Developers, Researchers, and Adopters Collaborating Together to Advance Open, Safe, Responsible AI December 4, 2023.
  • (*) Pasclae Fung & Hubert Etienne, Confucius, cyberpunk and Mr. Science: comparing AI ethics principles between China and the EU, 3 AI and Ethics 505 (2023):
    • We propose a comparative analysis of the AI ethical guidelines endorsed by China (from the Chinese National New Gen- eration Artificial Intelligence Governance Professional Committee) and by the EU (from the European High-level Expert Group on AI). We show that behind an apparent likeness in the concepts mobilized, the two documents largely differ in their normative approaches, which we explain by distinct ambitions resulting from different philosophical traditions, cultural heritages and historical contexts. In highlighting such differences, we show that it is erroneous to believe that a similarity in concepts necessarily translates into a similarity in ethics as even the same words may have different meanings from a country to another—as exemplified by that of “privacy”. It would, therefore, be erroneous to believe that the world would have adopted a common set of ethical principles in only three years. China and the EU, however, share a common scientific method, inherited in the former from the “Chinese Enlightenment”, which could contribute to better collaboration and understanding in the building of technical standards for the implementation of such ethics principles
  • (*) Christian Terwiesch and Lennart Meincke, The AI Ethicist: Fact or Fiction? (Working Paper, Oct, 11, 2023):
    • This study investigates the efficacy of an AI-based ethical advisor using the GPT-4 model. Drawing from a pool of ethical dilemmas published in the New York Times column “The Ethicist”,,” we compared the ethical advice given by the human expert and author of the column, Dr. Kame Anthony Appiah, with AI-generated advice. The comparison is done by evaluating the perceived usefulness of the ethical advice across three distinct groups: random subjects recruited from an online platform, Wharton MBA students, and a panel of ethical decision-making experts comprising academics and clergy. Our findings revealed no significant difference in the perceived value of the advice between human generated ethical advice and AI -generated ethical advice. When forced to choose between the two sources of advice, the random subjects recruited online displayed a slight but significant preference for the AI-generated advice, selecting it 60% of the time, while MBA students and the expert panel showed no significant preference.
  • Stanford Encyclopaedia of Philosophy, Ethics of Artificial Intelligence and Robotics (Apr. 30, 2020).
  • David Zvi Kalman, 3 reasons why A.I. must be a religious issue and not just a peripheral one, Jello Menorah (Dec. 8, 2022).
  • Thilo Hagendorff, The Ethics of AI Ethics: An Evaluation of Guidelines, 30 Minds and Machines 99 (2020)
  • Brent Mittelstadt, Principles Alone Cannot Guarantee Ethical AI, Nature Machine Intelligence (Nov. 5, 2019).
  • (*) Ivan Evtimov, David O’Hair, Earlence Fernandes, Ryan Calo & Tadayoshi Kobno, Is Tricking a Robot Hacking?, 34 Berk. Tech. L.J. 891 (2019):
    • The unfolding renaissance in artificial intelligence (Al), coupled with an almost-parallel discovery of considerable vulnerabilities, requires a reexamination of what it means to “hack,” i.e., to compromise a computer system. The stakes are significant. Unless legal and societal frameworks adjust, the consequences of misalignment between law and practice will result in (1) inadequate coverage of crime, (2) missing or skewed security incentives, and (3) the prospect of chilling critical security research. This last consequence is particularly dangerous in light of the important role researchers play in revealing the biases, safety limitations, and opportunities for mischief that the mainstreaming of artificial intelligence may present.This essay introduces the law and policy community, within and beyond academia, to the ways adversarial machine learning (ML) alters the nature of hacking, and with it, the cybersecurity landscape. Using the Computer Fraud and Abuse Act of 1986 (CFAA)-the paradigmatic federal anti-hacking law- as a case study, we hope to demonstrate the burgeoning disconnect between law and technical practice. And we hope to explain the stakes if we fail to address the uncertainty that flows from hacking that now includes tricking.
  • Rodrigo Ochigame, The Invention of “Ethical AI”: How Big Tech Manipulates Academia to Avoid Regulation, The Intercept (Dec. 20 2019).
  • Petra Molnar, Technology on the Margins: AI and global migration management from a human rights perspective, 8 Camb. Int’l L.J. 305 (2019).
  • (*) : Madeline Forster, Refugee protection in the artificial intelligence era:A test case for rights, Chatham House (Sept. 7, 2022):
    • Government and private sector interest in artificial intelligence (AI) for border security and for use in asylum and immigration systems is growing. Academics and civil society are calling for greater scrutiny of legal, technological and policy developments in this area. However, compared to other high-risk environments for AI, this sector has received little policy attention.Whether governments can adopt AI and meet human rights obligations in asylum and immigration contexts is in doubt, particularly as states have specific responsibilities towards persons seeking refugee and humanitarian protection at national borders.The risks include potentially significant harm if AI systems lead (or contribute) to asylum seekers being incorrectly returned to their country of origin or an unsafe country where they may suffer persecution or serious human rights abuses – a practice known as ‘refoulement’. The use of AI in asylum contexts also raises questions of fairness and due process.”Artificial intelligence (AI) is being introduced to help decision-making […] about asylum and refugee protection, where automated ways of processing people and predicting risks in contested circumstances hold great appeal.”This field, even more than most, will act as a test case for how AI protects or fails to protect human rights. Wrong or biased decisions about refugee status can have life and death consequences, including the return of refugees to places where they face persecution, contrary to international law. Existing refugee decision-making systems are already complex and are often affected by flaws, including lack of legal remedies – issues that can be exacerbated when overlayed with AI.”This paper examines the primary protections being proposed to make AI more responsive to human rights, including the upcoming EU AI Law. Can innovation and protection of human rights really be combined in asylum systems and other domains that make decisions about the future of vulnerable communities and minorities? This is a question not just for governments but also for private sector providers, which have independent human rights responsibilities when providing AI products in a politically charged and changeable policy field that decides the future of vulnerable communities and minorities.”
      […]
      “Particular attention must be paid at national and regional level to how AI tools can support human rights-based decision-making in complex and politicized systems without exacerbating existing structural challenges. How we treat asylum seekers and refugees interacting with AI will be a test case for emerging domestic and regional legislation and governance of AI. Global standard-setting exercises for AI – including UN-based technical standards and high-level multinational initiatives – will also influence the direction of travel.”
  • Dorine Eva van Norren, The ethics of artificial intelligence, UNESCO and the African Ubuntu perspective, 21 J. Info., Communication and Ethics in Society (Dec. 2022):
    • This paper aims to demonstrate the relevance of worldviews of the global south to debates of artificial intelligence, enhancing the human rights debate on artificial intelligence (AI) and critically reviewing the paper of UNESCO Commission on the Ethics of Scientific Knowledge and Technology (COMEST) that preceded the drafting of the UNESCO guidelines on AI. Different value systems may lead to different choices in programming and application of AI. Programming languages may acerbate existing biases as a people’s worldview is captured in its language. What are the implications for AI when seen from a collective ontology? Ubuntu (I am a person through other persons) starts from collective morals rather than individual ethics. [….]
      “Metaphysically, Ubuntu and its conception of social personhood (attained during one’s life) largely rejects transhumanism. When confronted with economic choices, Ubuntu favors sharing above competition and thus an anticapitalist logic of equitable distribution of AI benefits, humaneness and nonexploitation. When confronted with issues of privacy, Ubuntu emphasizes transparency to group members, rather than individual privacy, yet it calls for stronger (group privacy) protection. In democratic terms, it promotes consensus decision-making over representative democracy. Certain applications of AI may be more controversial in Africa than in other parts of the world, like care for the elderly, that deserve the utmost respect and attention, and which builds moral personhood. At the same time, AI may be helpful, as care from the home and community is encouraged from an Ubuntu perspective. The report on AI and ethics of the UNESCO World COMEST formulated principles as input, which are analyzed from the African ontological point of view. COMEST departs from “universal” concepts of individual human rights, sustainability and good governance which are not necessarily fully compatible with relatedness, including future and past generations. Next to rules based approaches, which may hamper diversity, bottom-up approaches are needed with intercultural deep learning algorithms.”
  • (*)Morten Bay, AI Ethics and Policymaking: Rawlsian Approaches to Democratic Participation, Transparency, Accountability, and Prediction (May 31, 2023):
    • “The AI ethics field is seeing an increase in explorations of theoretical ethics in addition to applied ethics, and this has spawned a renewed interest in John Rawls’ theory of justice as fairness and how it may apply to AI. But how may these new, Rawlsian contributions inform regulatory policies for AI? This article takes a Rawlsian approach to four key policy criteria in AI regulation: Democratic participation, transparency, accountability, and the epistemological value of prediction. Rawlsian, democratic participation in the light of AI is explored through a critique of Ashrafian’s proposed approach to Rawlsian AI ethics, which is found to contradict other aspects of Rawls’ theories. A turn toward Gabriel’s foundational theoretical work on Rawlsian justice in AI follows, extending his explication of Rawls’ Publicity criterion to an exploration of how the latter can be applied to real-world AI regulation and policy. Finally, a discussion of a key AI feature, prediction, demonstrates how AI-driven, long-term, large-scale predictions of human behavior violate Rawls’ justice as fairness principles. It is argued that applications of this kind are expressions of the type of utilitarianism Rawls vehemently opposes, and therefore cannot be allowed in Rawls-inspired policymaking.”
  • Ben Green, Data Science as Political Action: Grounding Data Science in a Politics of Justice (Jan 14, 2019).
  • IEEE, Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (1st Ed.) (valuable but long).

Notes & Questions

  1. The Green article provides a nice laundry list of things that a person designing or creating an AI ought to think about.
    1. That said, some are quite tough issues that might require a lot of information, some of it not necessarily easily available to many people involved in a big project, e.g.
      1. How good is the training data?
      2. Will this system put people out of work?
      3. Will it cause moral harms to people?
      4. What will the effects be “on the human spirit”.
    2. Are there many other professions where we ask people to think about such issues as part of their jobs?
      1. If not, is that because those professions don’t involve similar risks?
      2. Or, perhaps, we should all be asking these questions all the time?
  2. The first two readings could, however, be read as in tension, although not perhaps outright opposition: the Zimmermann & Zevenbergen reading on “Traps” provides a list of things to watch out for while one is thinking ethically about issues such as those identified in the Green article.
    1. Does making ethics this hard increase the risk more people will just not bother?
    2. Does failing to make ethics this hard make it not serious and useful?
  3. Many people look to religion, or religious leaders, for ethical guidance.
    1. How do the concerns identified by the Southern Baptist Convention compare to Green’s list?
    2. What are the overlaps? The differences?
  4. In case it wasn’t clear from the context, the “10 commandments” in the readings were issued by a Church of England [in US terms, Anglican] Bishop.
    1. How does this list compare to the Southern Baptists’ list?
    2. Incidentally, I apologize for not finding a wider variety of religious leaders’ thoughts, but it’s surprisingly difficult to find short and accessible articles that deal with the sort of (realistic) AI this course is about. For example, articles on Jewish Law and AI seem to devolve quickly to discussions of Golems, which isn’t really our focus….
  5. Assume you want your work to be consistent with the ACM principles.  How if at all would that impact an engineer doing engineering work on Stable Diffusion?  Is the answer any different for a lawyer doing legal work for stability.ai (the makers of Stable Diffusion) or other AI companies that might have troubling products?
  6. How does the recent UN resolution address the ethical issues? Is this a good forum for AI policy-making?
  7. Biorhane & van Kijk approach the ethical issues humanistically.
    1. Is this a necessary counterpoint to the religious perspectives or are they, from our point of view, just beating a dead toaster?
    2. If we get past the attack on the idea of ‘robot rights’ we are left with at least two key concepts:
      1. Robots/AI are, and can too easily be, used to violate human rights.
      2. Complex (social and technical) systems tend to have the effect of blurring individual responsibility for the systems’ actions. But people – someone or some group – are still responsible.
        1. If they’re right about that, does that effect how we should think about projects like the readings above?
  8. Schiff et al provide a survey of the variety and differences among ethical policies for AI.
    1. An issue that has many ethicists worried is that the proliferation of ethics policies enables “ethics-washing” in which bad, or grey, actors shop for an ethics policy that bans things they do not do while remaining silent about the (by hypothesis, dubious) things the organization actually does. The actor then trumpets its adherence to the ethics policy, knowing that it actually doesn’t bite where it matters.
    2. Does the Schiff report inform this concern about proliferating ethics policies? If so, does the report suggest the concern has merit?
  9. A recent study (optional) found that, “AI ethics work in technology companies is predominantly conducted by individual AI ethics entrepreneurs who struggle to effect change through informal processes.”  And, “[e]thics entrepreneurs face three major barriers to their work. First, they struggle to have ethics prioritized in an environment centered around software product launches. Second, ethics are difficult to quantify in a context where company goals are incentivized by metrics. Third, the frequent reorganization of teams makes it difficult to access knowledge and maintain relationships central to their work. Consequently, individuals take on great personal risk when raising ethics issues, especially when they come from marginalized backgrounds.”
    1. Is regulation needed to right the balance?
    2. If so, what sort of regulation would be helpful and appropriate?
    3. Optional related article: Melissa Heikkilä, Responsible AI has a burnout problem: Companies say they want ethical AI. But those working in the field say that ambition comes at their expense, MIT Tech. Rev. (Oct. 28, 2022).
  10. While most of the readings in this section focus on ethical duties of AI creators, the readings on lawyers’ professional ethics center on the issue of the ethical duties of AI users.  Disclosure is obviously a big issue for lawyers and perhabs even more for doctors. Can you think of other general ethical obligations for professionals? For everyone using AI?

 


Class 21: Governance of AI (General Issues)

  1. Rishi Bommasani, et al., Considerations for Governing Open Foundation Models, Stanford HAI, (Dec. 2023).
  2. Neel Guha et al., The AI Regulatory Alignment Problem, Stanford HAI  (Nov. 2023). [Note: optional draft of full paper, AI Regulation Has Its Own Alignment Problem: The Technical and Institutional Feasibility of Disclosure, Registration, Licensing, and Auditing, — Geo. Wash. L. Rev. — (forthcoming  2024)].
  3. Section III (pages 33-42) & IV.A (pages 43–47) of Yonathan A. Arbel, et al., Systemic Regulation of Artificial Intelligence, — Ariz. St. L.J. — (forthcoming 2024)  (Draft Dec. 16, 2023).
  4. Sylvie Delacroix, Joelle Pineau, and Jessica Montgomery, Democratising the digital revolution: the role of data governance (June 30, 2020) in Reflections on AI for Humanity (Braunschweig & Ghallab eds., 2021).
  5. Melissa Heikkilä, Our quick guide to the 6 ways we can regulate AI, MIT Tech. Rev. (May 22, 2023).
  6. Manish Singh, India reverses AI stance, requires government approval for model launches, TechCrunch (Mar. 3, 2024).
    1. A later statement by Union Minister of State for Electronics and Technology Rajeev Chandrasekhar  “clarified” that this new rule is applicable to “large platforms” and not for start-ups, [Optional: News Story.]

Optional

  • Ministers of the Global Partnership on Artificial Intelligence (GPAI), 2023 Ministerial Declaration, GPAI(2023)2 (Dec. 13, 2023).  Somewhat ironic in light of 3/24 TechCrunch article above?
  • UPDATE: Microsoft, Governing AI: A Blueprint for the future (May 24, 2023).  Offers an extensive five-point blueprint for the public governance of AI. For a libertarian critique, see Adam Thierer, Microsoft’s New AI Regulatory Framework & the Coming Battle over Computational Control, Medium (May 29, 2023).
  • (*) Philipp Hacker, Sustainable AI Regulation (Draft March 6, 2024):
    • This paper addresses a critical gap in the current AI regulatory discourse by focusing on the environmental sustainability of AI and technology, a topic often overlooked both in environmental law and in technology regulation, such as the GDPR or the EU AI Act. Recognizing AI’s significant impact on climate change and its substantial water consumption, especially in large generative models like ChatGPT, GPT-4, or Gemini, the paper aims to integrate sustainability considerations into technology regulation, in three steps. First, while current EU environmental law does not directly address these issues, there is potential to reinterpret existing legislation, such as the GDPR, to support sustainability goals. Counterintuitively, the paper argues that this also implies the need to balance individual rights, such as the right to erasure, with collective environmental interests.Second, cased on an analysis of current law, and the proposed EU AI Act, the article suggests a suite of policy measures to align AI and technology regulation with environmental sustainability. They extend beyond mere transparency mechanisms, such as disclosing GHG footprints, to include a mix of strategies like co-regulation, sustainability by design, restrictions on training data, and consumption caps, potentially integrating AI and technology more broadly into the EU Emissions Trading Regime. Third, this regulatory toolkit could serve as a blueprint for other technologies with high environmental impacts, such as blockchain and Metaverse applications. The aim is to establish a comprehensive framework that addresses the dual fundamental societal transformations of digitisation and climate change mitigation.”
  • Pin-Yu Chen, Cars Require Regular Inspection, Why Should AI Models Be any Different?, Technology Networks (Mar. 14, 2022). “[A]re we paying enough efforts, as seriously as to our cars, to inspect and certify the trustworthiness of …AI-based systems and algorithms? Moreover, as an end user and a consumer, do we really know how and why AI technology is making decisions, and how robust AI technology is to adversarial attacks?”
  • Ben Green, The Flaws of Policies Requiring Human Oversight of Government Algorithms (September 10, 2021):
    • “I survey 40 policies that prescribe human oversight of government algorithms and find that they suffer from two significant flaws. First, evidence suggests that people are unable to perform the desired oversight functions. Second, as a result of the first flaw, human oversight policies legitimize government uses of faulty and controversial algorithms without addressing the fundamental issues with these tools. Thus, rather than protect against the potential harms of algorithmic decision-making in government, human oversight policies provide a false sense of security…”
  • Nicolas Petit, and Jerome, De Cooman, Models of Law and Regulation for AI, Robert Schuman Center for Advanced Studies (2020):
    • “The discussion focuses on four models: the black letter model, the emergent model, the ethical model, and the risk regulation model. All four models currently inform, individually or jointly, integrally or partially, consciously or unconsciously, law and regulatory reform towards AI. We describe each model’s strengths and weaknesses, discuss whether technological evolution deserves to be accompanied by existing or new laws, and propose a fifth model based on externalities with a moral twist.”
  • (*) Andrew Selbst, An Institutional View of Algorithmic Impact Assesssments, 35 Harv. J.L. & Tech 117 (2021):
    • “An AIA regulation has two main goals: (1) to require firms to consider social impacts early and work to mitigate them before development, and (2) to create documentation of decisions and testing that can support future policy-learning. The Article argues that institutional logics, such as liability avoidance and the profit motive, will render the first goal difficult to fully achieve in the short term because the practical discretion that firms have allows them room to undermine the AIA requirements. But AIAs can still be beneficial because the second goal does not require full compliance to be successful.”
  • Law Commission of Ontario, Regulating AI: Critical Issues and Choices (April, 2021). Very thorough paper calling existing Canadian law inadequate, and offering extensive suggestions for reform.
  • Jennifer Chandler, The Autonomy of Technology: Do Courts Control Technology or Do They Just Legitimize Its Social Acceptance?, 27 Bull. of Sci. Tech. & Soc. 339 (2007). Argues that often principles “support and legitimize novel technologies.”
  • (*) Aj Grotto & James Dempsey, Vulnerability Disclosure and Management for AI/ML Systems: A Working Paper with Policy Recommendations (Nov. 15. 2021):
    • “Artificial intelligence systems, especially those dependent on machine learning (ML), can be vulnerable to intentional attacks that involve evasion, data poisoning, model replication, and exploitation of traditional software flaws to deceive, manipulate, compromise, and render them ineffective. Yet too many organizations adopting AI/ML systems are oblivious to their vulnerabilities. Applying the cybersecurity policies of vulnerability disclosure and management to AI/ML can heighten appreciation of the technologies’ vulnerabilities in real-world contexts and inform strategies to manage cybersecurity risk associated with AI/ML systems. Federal policies and programs to improve cybersecurity should expressly address the unique vulnerabilities of AI-based systems, and policies and structures under development for AI governance should expressly include a cybersecurity component.”
  • (*) Alessandro Mantelero, Regulating AI in Beyond Data: Human Rights, Ethical and Social Impact Assessment in AI 139 (2022). This is an exposition of a “principles-based approach” that is contrasted with the EU’s risk-based approach.
  • (*) Jess Whittlestone, Kai Arulkumaran & Matthew Crosby, Societal Implications of Deep Reinforcement Learning, 70 J. Artificial Intelligence Research 1003 (2021):
    • Deep Reinforcement Learning (DRL) is an avenue of research in Artificial Intelligence (AI) that has received increasing attention within the research community in recent years, and is beginning to show potential for real-world application. DRL is one of the most promising routes towards developing more autonomous AI systems that interact with and take actions in complex real-world environments, and can more flexibly solve a range of problems for which we may not be able to precisely specify a correct ‘answer’. This could have substantial implications for people’s lives: for example by speeding up automation in various sectors, changing the nature and potential harms of online influence, or introducing new safety risks in physical infrastructure. In this paper, we review recent progress in DRL, discuss how this may introduce novel and pressing issues for society, ethics, and governance, and highlight important avenues for future research to better understand DRL’s societal implicatio

Just Because

  • Ioana Bratu, Artificial Intelligence for Future Lunar Societies: A Critical Analysis of the Liability Problem (Dec. 2, 2021):
    • “The introduction of AI systems as part of future Lunar habitats does not come without corresponding risks, especially from a legal perspective. Several legal challenges may appear in the context of a high reliance on these systems, such as: who will be liable in case an AI system will be involved in accidents causing economic losses or loss of human lives? What type of legal framework will be required to mitigate such risks? Will the existing body of laws representing international space law remain sufficient for addressing these challenges?”

Notes & Questions

  1. This class and the next  two are about an important problem: how should governments encourage and/or regulate AI in general. (Note that both public and private law might be marshaled to do this.) Different nations have different answers, although the idea of an AI “race” does seem common (see, for example, the optional National Security Commission on Artificial Intelligence, Final Report (2021)).
  2. Foundation models are so new that regulators and scholars are scrambling to figure out how even rapidly evolving ideas about AI governance — which were until two years ago or less heavily centered on ML and maybe image generation — should cope.
    1. One major potential cleavage in regulatory strategy is between open source and closed source (proprietary) models.
    2. To what extent do the Considerations for Governing Open Foundation Models paper analysis and suggestions apply/not apply to proprietary foundation models?
    3. Considerations for Governing Open Foundation claims that open source models “provide significant benefits to society by promoting competition, accelerating innovation, and distributing power…. Further, open models are marked by greater transparency and, thereby, accountability.”  Assuming this is true, are there down sides to open source models?  Is it true?
  3. The Regulatory Alignment paper has a number of suggestions.  Among them,
    1. Firms should do voluntary or mandatory “averse event reporting” — tell regulators about bad things that the AI gets used for.
      1. How, in practice would an ethical AI developer do this?
        1. How can the developers know what end-users do with the model?
        2. Sometimes users will complain, when the result is not what they want; but won’t the most malicious users never complain?
      2. Even if the developers  have ‘knowledge’ in the form of lets say complete usage records, how do they analyze these to extract the “adverse event” information?
    2. Government should do oversight of third-party auditors to “verify industry claims”.
      1. Isn’t that great only so long as the developers claim virtue?
      2. Does it create a perverse incentive?
      3. Would it work for open source?
  4. Arbel et. al examine the form that AI-related ‘risk mitigation’ should take.  While Regulatory Alignment advocates sectoral regulation by subject-expert agencies, this paper says “systematic regulation” is needed as
    1. Some risks are “inherent” to AI.
    2. Risks are so numerous and complicated, so beyond most agencies
    3. Government should require pre-approval of AI’s (licensing) in order to be able to address long-term as well as short-term risks.  And fore-fronting the long-term risks will justify the regulations.
    4. Sectoral regulation doesn’t work well for general-purpose AI as they can be used for many different things.
      1. But even so there will still be a place for sectoral regulators (e.g. SEC, FDA) for things specific to their mission.
    5. Centralizing regulation will make it easier to keep up with change.
  5. Are these arguments persuasive?  Is there a converse risk that as AI gets built into everything, a super-regulator will have to regulate … everything?
  6. To what extent are the “AI Governance” issues identified above captured or achievable by “data governance” as Delacroix et al suggest?
    1. In other words, if we were somehow to figure out great rules about data quality and reasonable non-discriminatory access to training data, how many “AI governance” issues would take care of themselves through the ordinary mechanisms of competition?
    2. What issues would remain? Can you group them?

Class 22: Governance of AI (EU)

  1. EU Artificial Intelligence Act, High-level summary of the AI Act (Feb. 27, 2024). {Note: full text at AI Act Explorer — it’s long]
  2. Michael Veale  & Frederik Zuiderveen Borgesius, Demystifying the Draft EU Artificial Intelligence Act,  CRi Computer Law Review International (2021).
  3. Sections II.B and III of  General Purpose AI Systems in the AI Act: trying to fit a square peg into a round hole (DRAFT 2023).
  4. Marco Almada & Nicolas Petit, The EU AI Act a medley of product safety and fundamental rights (Draft Oct, 2023).
  5. Phillip Hacker, What’s Missing from the EU AI Act: Addressing the Four Key Challenges of Large Language Models, verfassungsblog.(Dec. 13, 2023) .
  6. Federica Paolucci, Shortcomings of the AI Act: Evaluating the New Standards to Ensure the Effective Protection of Fundamental Rights, verfassungsblog (March 14, 2024).

Optional

  • Joanna J. Bryson, Europe Is in Danger of Using the Wrong Definition of AI, Wired (Mar 2, 2022). Reports on push to narrow definition of AI from “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with” to systems capable of “learning, reasoning, or modeling implemented with the techniques and approaches” listed in an annex, and that are also a “generative system,” directly influencing its environment.
  • Algorithm Watch, Civil society open letter demands to ensure fundamental rights protections in the Council position on the AI Act (Oct. 17, 2022).
  • (*) Juliane Beck & Thomas Burri, From ‘Human Control’ in International Law to ‘Human Oversight’ in the New EU Act on Artificial Intelligence (Oct. 3, 2022) in Research Handbook on Meaningful Human Control of Artificial Intelligence Systems (Daniele Amoroso and Filippo Santoni De Sio eds., forthcoming 2023):
    • “[This] Chapter analyzes the puzzle of human oversight of AI in greater detail. Specifically, it sheds light on the relationship between “human control” — a term commonly used in international law when reference is made to intelligent systems — and “human oversight,” a new term proposed by the EU AI Act to define the relationship between human operator and algorithm. The Chapter proceeds as follows. Section 1 argues that two narratives underpin article 14 of the proposed EU AI Act on human oversight: The first are the discussions in international law on human control of autonomous weapon systems (AWS); the second consists of the far-reaching debate on ethical AI. Section 2 then scrutinizes whether the EU AI Act uses the term “oversight” — rather than the previously more common term “control” — to dissociate itself from the discussions on autonomous weapon systems. One explanation for this change in terminology might be that the discussions on AWS hardly yielded any tangible results and chiefly focused on embodied systems. However, this reading does not hold, as AI and system autonomy are closely interconnected. Instead, the Section suggests that the EU AI Act construes oversight as one concrete aspect of human-machine interactions, while control constitutes the overarching frame. In other words, control is what the EU AI Act aims for globally, while oversight figures as just one component in the quest for human control of AI. Next, Section 3 illustrates that human oversight, as per article 14 EU AI Act, aims at ensuring human agency and accountability. It explores two significant challenges associated with achieving these aims: the specificities of human psychology and the likely emergence of responsibility gaps. Though these challenges may seem surmountable, Section 4 demonstrates that article 14 EU AI Act is built on false assumptions. And the critique does not end there. The following two sections show that the perceived shortcomings pertain to the broader structure of the EU AI Act, which is both too limited in scope and overly aspirational in its promises. Section 5 deals with the scope-related claim by considering certain deficiencies in the Act’s risk-based approach. Section 6 then outlines why the EU AI Act falls short of its promise of adequately protecting fundamental rights. On this basis, Section 7 argues in favor of redrafting the EU AI Act and, specifically, article 14 more in line with actual human capabilities and with due respect for fundamental rights. Section 8 concludes with some reflections on the EU AI Act’s presumably far-reaching nature, both geographically and content-wise.”
  • Human Rights Watch, How the EU’s Flawed Artificial Intelligence Regulation Endangers the Social Safety Net: Questions and Answers (Nov. 10, 2021)
  • Mauritz Kop, Shaping the Law of AI: Transatlantic Perspectives (2020).
  • European Parliamentary Research Service, Civil Liability Regime for Artificial Intelligence (Sept. 2020).
  • Adam Thierer, Existential Risks & Global Governance Issues Around Ai & Robotics (Ver. 1.6 Aug. 2022) in A Flexible Governance Framework for Artificial Intelligence (forthcoming):
    “The E.U. approach to AI regulation […] while top-down and heavy-handed in character—at least attempts to differentiate “high-risk” AI applications from lesser ones that pose fewer dangers. The problem is that the E.U. approach could end up placing a broad array of services and applications in the high-risk or forbidden category that might not belong there.”
  • (*) to be updated Proposed AI Liability Directive (Sept. 28, 2022):
    • “Current national liability rules, in particular based on fault, are not suited to handling liability claims for damage caused by AI-enabled products and services. Under such rules, victims need to prove a wrongful action or omission by a person who caused the damage. The specific characteristics of AI, including complexity, autonomy and opacity (the so-called “black box” effect), may make it difficult or prohibitively expensive for victims to identify the liable person and prove the requirements for a successful liability claim. In particular, when claiming compensation, victims could incur very high up-front costs and face significantly longer legal proceedings, compared to cases not involving AI. Victims may therefore be deterred from claiming compensation altogether. These concerns have also been retained by the European Parliament (EP) in its resolution of 3 May 2022 on artificial intelligence in a digital age.
      “[…] [T]he objective of this proposal is to promote the rollout of trustworthy AI to harvest its full benefits for the internal market. It does so by ensuring victims of damage caused by AI obtain equivalent protection to victims of damage caused by products in general. It also reduces legal uncertainty of businesses developing or using AI regarding their possible exposure to liability and prevents the emergence of fragmented AI-specific adaptations of national civil liability rules.”

Notes & Questions

  1. Why not just set liabilities in the hopes of duly incentivizing market participants.?
  2. What are the limits of liability-oriented regulatory regimes?
    1. Are there sectors where they work particularly well or poorly?
    2. Does the existence of limited liability undermine this mode of regulation
      1. For minor transgressions (“cost of doing business”)?
      2. For major transgressions (“see you in bankruptcy court”)?
  3. How do these liability-based limits compare to  risk-based regulation?
  4. What is the status of facial recognition systems under the EU AI Regulation?
  5. How about deepfakes?
  6. The EU AI Regulation may be ambitious in some ways, but commentators quickly attacked it for
    1. Being over-inclusive;
    2. Being under-inclusive;
    3. Having loopholes.
      Are these fair critiques?
  7. Can you think of examples of AI manipulation we have read about or discussed that would not be prohibited? Is this a problem?
    1. How does the Act constrain a bank or financial services provider seeking useful information about the credit risk/worthiness of a potential customer?
    2. What constrains would the Act impose on an AI offering psychological counseling? On an AI designed to identify persons who might be at risk of suicide?
  8. How does the “right to an explanation” work if part of the decision is based on AI whose individual actions may not be transparent (“black box”)?  Are proponents of rights to explainability demanding what would in effect be a ban on AI for certain applications?
  9. Why not just set liabilities in the hopes of duly incentivizing market participants ?

Class 23: Governance of AI (U.S.)

    1. White House, OSTP, Blueprint for an AI Bill of Rights (2023).  [Full text here.]
    2. White House, Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence  (Oct. 30, 2023) (official Federal Register version).
    3. Office of Management and Budget, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (March 28, 2024).
    4. FTC, Aiming for truth, fairness, and equity in your company’s use of AI (Apr. 19, 2021).
    5. Tompros et al. WilmerHale, State Governments Move to Regulate AI in 2024 (Mar. 5, 2024).

Optional

  • (*) NIST, AI Risk Management Framework: Second Draft (Aug. 18, 2022) and companion (draft) NIST AI Risk Management Framework Playbook.
    • “The AI RMF is intended for voluntary use to address risks in the design, development, use, and evaluation of AI products, services, and systems. AI research and development, as well as the standards landscape, is evolving rapidly. For that reason, the AI RMF and its companion documents will evolve over time and reflect new knowledge, awareness, and practices. NIST intends to continue its engagement with stakeholders to keep the Framework up to date with AI trends and reflect experience based on the use of the AI RMF. Ultimately, the AI RMF will be offered in multiple formats, including online versions, to provide maximum flexibility.
      “Part 1 of the AI RMF draft explains the motivation for developing and using the Framework, its audience, and the framing of AI risk and trustworthiness.
      “Part 2 includes the AI RMF Core and a description of Profiles and their use.”
  • UK National Cyber Security Centre (NCSC), the US Cybersecurity and Infrastructure Security Agency (CISA), and [many international partners] ,Guidelines for secure AI system development (2023):
    • “Artificial intelligence (AI) systems have the potential to bring many benefits to society. However, for the opportunities of AI to be fully realised, it must be developed, deployed and operated in a secure and responsible way. Cyber security is a necessary precondition for the safety, resilience, privacy, fairness, efficacy and reliability of AI systems.However, AI systems are subject to novel security vulnerabilities that need to be considered alongside standard cyber security threats. When the pace of development is high – as is the case with AI – security can often be a secondary consideration. Security must be a core requirement, not just in the development phase, but throughout the life cycle of the system.This document recommends guidelines for providers of any systems that use AI, whether those systems have been created from scratch or built on top of tools and services provided by others. Implementing these guidelines will help providers build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorised parties.”
  • (*) Cameron Averill, Algorithmic Reason-Giving, Arbitrary-and-Capricious Review, and the Need for a Clear Normative Baseline (Feb 6, 2024):
    • This Article considers whether agencies can use algorithms without running afoul of standards, both legal and normative, for reason-giving. It begins by disaggregating algorithmic reason-giving, explaining that algorithmic reason-giving includes both the reasons an agency gives for an algorithm’s design (systemic reason-giving) and the reasons an agency gives for an individual decision when the decision-making process involves an algorithm (case-specific reason-giving). It then evaluates systemic reason-giving and case-specific reason-giving in turn. Once the normative assessment is complete, this Article considers its implications for arbitrary-and-capricious review, concluding that at least some algorithms should pass judicial muster. The Article finishes by offering a framework that courts can use when evaluating whether the use of an algorithm is arbitrary and capricious and that agencies can use to decide whether to create an algorithm in the first place.Although understanding the relationship between algorithms and reason-giving is important, this Article’s true aim is broader. It seeks to reframe debates over agencies’ use of AI by emphasizing that the baseline against which these algorithms should be compared is not some idealized human decision-maker, but rather the various kinds of policies–rules, internal procedures, guidance–that agencies have used since their inception to promote core administrative values like consistency, accuracy, and efficiency. The comparison between algorithms and policies better captures the role algorithms currently play in administrative governance, gives proper weight to the reasons agencies have for turning to algorithms in the first place, and helps us see how algorithms do and do not fit within the existing structures of administrative law. At bottom, comparing algorithms to policies reminds us that the tension between individualized consideration and centralized bureaucratic management is endemic to agency administration. At most, algorithms have given this tension a new flavor. Make no mistake: this tension cannot be eliminated, only managed. Algorithmic reason-giving is a case in point.
  • (*) Kevin M.K. Fodouop, The Road to Optimal Safety: Crash-Adaptive Regulation of Autonomous Vehicles at the National Highway Traffic Safety Administration, 98 N.Y.U.L. Rev. 1358 (2023):
    • Autonomous vehicles are now driving people around in cities from San Francisco to Phoenix. But how to regulate the safety risks from these autonomous driving systems (ADS) remains uncertain. While state tort law has traditionally played a fundamental role in controlling car crash risks, this Note argues that the development of novel data tracking and simulation tools by the ADS industry has led to a regulatory paradigm shift: By leveraging these tools for regulatory analysis, the federal National Highway Traffic Safety Administration (NHTSA) could iteratively adapt and improve its regulatory standards after each crash. While many scholars have advanced proposals for how state products liability can adapt to ADS crashes, this Note is the first to propose such a model of “crash-adaptive regulation” for NHTSA and to show that this model will prove superior to tort liability in controlling ADS crash risks. In presenting this new regulatory model, this Note engages with two rich theoretical debates. First, it compares the efficacy of tort liability and agency regulation in controlling ADS crash risks. Second, it evaluates whether ADS safety standards should be set at the federal level or at the state level. It concludes that ADS’ technical characteristics call for an agency regulatory scheme at the federal level and urges NHTSA to build the technological and operational expertise necessary to operate a crash-adaptive regulatory regime.
  • (*) UPDATE: Alicia Solow-Niederman, Do Cases Generate Bad AI Law?, Columb. Sci. & Tech. L. Rev. (Forthcoming) (draft Feb 6 2024):
    • There’s an AI governance problem, but it’s not (just) the one you think. The problem is that our judicial system is already regulating the deployment of AI systems—yet we are not coding what is happening in the courts as privately driven AI regulation. That’s a mistake. AI lawsuits here and now are determining who gets to seek redress for AI injuries; when and where emerging claims are resolved; what is understood as a cognizable AI harm and what is not, and why that is soThis Essay exposes how our judicial system is regulating AI today and critically assesses the governance stakes. When we do not situate the generative AI cases being decided by today’s human judges as a type of regulation, we fail to consider which emerging tendencies of adjudication about AI are likely to make good or bad AI law. For instance, litigation may do good agenda-setting and deliberative work as well as surface important information about the operation of private AI systems. But adjudication of AI issues can be bad, too, given the risk of overgeneralization from particularized facts; the potential for too much homogeneity in the location of lawsuits and the kinds of litigants; and the existence of fundamental tensions between social concerns and current legal precedents.If we overlook these dynamics, we risk missing a vital lesson: AI governance requires better accounting for the interactive relationship between regulation of AI through the judicial system and more traditional public regulation of AI. Shifting our perspective creates space to consider new AI governance possibilities. For instance, litigation incentives (such as motivations for bringing a lawsuit, or motivations to settle) or the types of remedies available may open up or close down further regulatory development. This shift in perspective also allows us to see how considerations that on their face have nothing to do with AI – such as access to justice measures and the role of judicial minimalism – in fact shape the path of AI regulation through the courts. Today’s AI lawsuits provide an early opportunity to expand AI governance toolkits and to understand AI adjudication and public regulation as complementary regulatory approaches. We should not throw away our shot.
  • (*) Update: Carson Ezell & Abraham Loeb,  Post-Deployment Regulatory Oversight for General-Purpose Large Language Models (2024):
    • “The development and deployment of increasingly capable, general-purpose large language models (LLMs) has led to a wide array of risks and harms from automation that are correlated across sectors and use cases. Effective regulation and oversight of general-purpose AI (GPAI) requires the ability to monitor, investigate, and respond to risks and harms that appear across use cases, as well as hold upstream developers accountable for downstream harms that result from their decisions and practices. We argue that existing processes for sector-specific AI oversight in the U.S. should be complemented by post-deployment oversight to address risks and harms specifically from GPAI usage. We examine oversight processes implemented by other federal agencies as precedents for the GPAI oversight activities that a regulatory agency can conduct. The post-deployment oversight function of a regulatory agency can complement other GPAI-related regulatory functions that federal regulatory agencies may perform which are discussed elsewhere in the literature, including pre-deployment licensing or model evaluations for LLMs.”
  • UPDATE (*) Andrew D. Selbst & Solon Barocas, Unfair Artificial Intelligence: How FTC Intervention Can Overcome The Limitations Of Discrimination Law, UCLA Research Paper 22-23:
    • “[W]e argue that FTC intervention in this space is a positive and overdue development. The Commission can do a lot of good by applying its authority to address unfair and deceptive acts and practices to discriminatory AI. Surprisingly, though the discriminatory harms of AI have been frequently discussed in the last decade of legal literature and scholars have occasionally suggested a possible role for the FTC, there has been no full-length scholarly treatment of the benefits of the Commission’s involvement in regulating discriminatory AI and its legal authority to do so. We provide that treatment here.”
  • Lawyers’ Committee for Civil Rights, Online Civil Rights Act (2023).
  • FTC, XFTC Launches Inquiry into Generative AI Investments and Partnerships: Agency Issues 6(b) Orders to Alphabet, Inc., Amazon.com, Inc., Anthropic PBC, Microsoft Corp., and OpenAI, Inc.
  • Ryan Calo & Danielle Keats Citron, The Automated Administrative State: A Crisis of Legitimacy, 70 Emory L.J. 797 (2021).
  • Margot E. Kaminski & Jennifer M. Urban, The Right to Contest AI, 121 Columb. L. Rev 1957 (2021).
  • NTIA, AI Accountability policy report.
  • (*)Kate Crawford & Jason Schultz, AI Systems as State Actors, 119 Columb. L. Rev. 1941 (2019).
  • (*)Andrew Tutt, An FDA for Algorithms, 68 Admin. L. Rev. 83 (2017).
  • Huy Roberts et al., Achieving a ‘Good AI Society’: Comparing the Aims and Progress of the EU and the US, 27 Sci. & Engr. Ethics (2021).
    • “This article provides a comparative analysis of the European Union (EU) and the United States’ (US) AI strategies and considers (i) the visions of a ‘Good AI Society’ that are forwarded in key policy documents and their opportunity costs, (ii) the extent to which the implementation of each vision is living up to stated aims and (iii) the consequences that these differing visions of a ‘Good AI Society’ have for transatlantic cooperation. The article concludes by comparing the ethical desirability of each vision and identifies areas where the EU, and especially the US, need to improve in order to achieve ethical outcomes and deepen cooperation.”
  • (*) Bhargavi Ganesh, Stuart Anderson, and Shannon Vallor, If It Ain’t Broke Don’t Fix It: Steamboat Accidents and their Lessons for AI Governance (We Robot 2022 draft). Winner, We Robot 2022 “best paper” award.
    • In this paper, we use the example of steamboat regulation in the 1800’s to challenge latent skepticism regarding the feasibility of governance of AI-driven systems. First, we highlight the constructive nature of US government responses to steamboat accidents, despite the limited governance resources available at the time. Second, we draw parallels between challenges to steamboat and AI governance and situate existing proposals for AI governance in relation to these past efforts. Finally, in noting some of the novel governance challenges posed by AI, we argue that maintaining a historical perspective helps us more precisely target these novelties when generating policy recommendations in our own interdisciplinary research group.”
  • (*) Carlos Ignacio Gutierrez Gaviria, The Role of Artificial Intelligence in Pushing the Boundaries of U.S. Regulation: A Systematic Review, 38 Santa Clara High Tech L.J. 123 (2022):
    • “[The article] addresses two research questions: 1. What U.S. regulatory gaps exist due to Al methods and applications? 2. When looking across all of the gaps identified in the first research question, what trends and insights emerge that can help stakeholders plan for the future?
      “These questions are answered through a systematic review of four academic literature databases in the hard and social sciences. [… which allows it] to effectively characterize regulatory gaps caused by Al in the U.S. In addition, it finds that most gaps: do not require new regulation nor the creation of governance frameworks for their resolution, are found at the federal and state levels of government, and Al applications are recognized more often than methods as their cause.”
  • (*) W. Nicholson Price II, Distributed Governance of Medical AI, 25 SMU Sci. & Tech. L Rev. 3 (2022):
    • Artificial intelligence (AI) promises to bring substantial benefits to medicine. In addition to pushing the frontiers of what is humanly possible, like predicting kidney failure or sepsis before any human can notice, it can democratize expertise beyond the circle of highly specialized practitioners, like letting generalists diagnose diabetic degeneration of the retina. But AI doesn’t always work, and it doesn’t always work for everyone, and it doesn’t always work in every context. AI is likely to behave differently in well-resourced hospitals where it is developed than in poorly resourced frontline health environments where it might well make the biggest difference for patient care. To make the situation even more complicated, AI is unlikely to go through the centralized review and validation process that other medical technologies undergo, like drugs and most medical devices. Even if it did go through those centralized processes, ensuring high-quality performance across a wide variety of settings, including poorly resourced settings, is especially challenging for such centralized mechanisms. What are policymakers to do? This short Essay argues that the diffusion of medical AI, with its many potential benefits, will require policy support for a process of distributed governance, where quality evaluation and oversight take place in the settings of application—but with policy assistance in developing capacities and making that oversight more straightforward to undertake. Getting governance right will not be easy (it never is), but ignoring the issue is likely to leave benefits on the table and patients at risk
  • Keio University & Assoc. of Pacific Rim Universities, AI for Social Good (2020). A Japanese perspective on how AI could be used to solve all sorts of problems….
  • Ryan Mac et al, Surveillance Nation: Clearview AI Offered Thousands Of Cops Free Trials, Buzzfeed (Apr. 9, 2021).
  • H.R.8152 – American Data Privacy and Protection Act (2022).
  • Mihailis Diamantis, Vicarious Liability for AI (Sept. 2021) to appear in Cambridge Handbook of AI and Law (Kristin Johnson & Carla Reyes eds., forthcoming 2022):
    • “Algorithms are trainable artifacts with “off” switches, not natural phenomena. They are not people either, as a matter of law or metaphysics. An appealing way out of this dilemma would start by complicating the standard A-harms-B scenario. It would recognize that a third party, C, is usually lurking nearby when an algorithm causes harm, and that third party is a person (legal or natural). By holding third parties vicariously accountable for what their algorithms do, the law could promote efficient incentives for people who develop or deploy algorithms and secure just outcomes for victims. The challenge is to find a model of vicarious liability that is up to the task. “Algorithmic regulation will require federal uniformity, expert judgment, political independence, and pre-market review to prevent – without stifling innovation – the introduction of unacceptably dangerous algorithms into the market This Article proposes that certain classes of new algorithms should not be permitted to be distributed or sold without approval from a government agency designed along the lines of the FDA. This ‘FDA for Algorithms’ would approve certain complex and dangerous algorithms when it could be shown that they would be safe and effective for their intended use and that satisfactory measures would be taken to prevent their harmful misuse. Lastly, this Article proposes that the agency should serve as a centralized expert regulator that develops guidance, standards, and expertise in partnership with industry to strike a balance between innovation and safety.”
  • (*) Bridget A. Fahey, Data Federalism, 135 Harv. L. Rev. 1007 (2022):
    • “Private markets for individual data have received significant and sustained attention in recent years. But data markets are not for the private sector alone. In the public sector, the federal government, states, and cities gather data no less intimate and on a scale no less profound. And our governments have realized what corporations have: It is often easier to obtain data about their constituents from one another than to collect it directly. As in the private sector, these exchanges have multiplied the data available to every level of government for a wide range of purposes, complicated data governance, and created a new source of power, leverage, and currency between governments.
      “This Article provides an account of this vast and rapidly expanding intergovernmental marketplace in individual data. In areas ranging from policing and national security to immigration and public benefits to election management and public health, our governments exchange data both by engaging in individual transactions and by establishing “data pools” to aggregate the information they each have and diffuse access across governments. Understanding the breadth of this distinctly modern practice of data federalism has descriptive, doctrinal, and normative implications.
      “In contrast to conventional cooperative federalism programs, Congress has largely declined to structure and regulate intergovernmental data exchange. And in Congress’s absence, our governments have developed unorthodox cross-governmental administrative institutions to manage data flows and oversee data pools, and these sprawling, unwieldy institutions are as important as the usual cooperative initiatives to which federalism scholarship typically attends.
      “Data exchanges can also go wrong, and courts are not prepared to navigate the ways that data is both at risk of being commandeered and ripe for use as coercive leverage. I argue that these constitutional doctrines can and should be adapted to police the exchange of data. I finally place data federalism in normative frame and argue that data is a form of governmental power so unlike the paradigmatic ones our federalism is believed to distribute that it has the potential to unsettle federalism in both function and theory.”
  • Frank Pasquale, Data-Informed Duties in AI Development, 119 Columb. L. Rev. 1917 (2019).
  • The National Artificial Intelligence (AI) Initiative Act can be found under “DIVISION E–NATIONAL ARTIFICIAL INTELLIGENCE INITIATIVE ACT OF 2020” in the final text of the NDAA. Congress passed it as part of a Defense Appropriation Act that was initially vetoed by President Trump. It is instructive to compare this statute with the EU draft above — they take very different approaches to AI!
  • Parts IV-VI of Michael Spiro, The FTC and AI Governance A Regulatory Proposal, 10 Seattle J. Tech, Env. & Innovation L. 26 (2020).
  • California Executive Order N-12-23.
  • NYC Regulatory Attempts
  • Executive Order on AI: “Maintaining American Leadership in Artificial Intelligence” (Feb. 14, 2019) (Trump administration).
  • Ben Winters, EPIC, Playing Both Sides: Impact of Tech Industry on Early Federal AI Policy (Apr. 1, 2022):
    • “The current approach best reflects the desired benefits of [Google CEO Eric] Schmidt and others that are instrumental in guiding policy, while directly benefiting from it. Congress and federal agencies must allocate additional funding and resources to AI accountability so there is not a reliance on outside groups with clear conflicts of interest to develop policy.”
  • Brian Tarran, UK government sets out 10 principles for use of generative AI, Real World Data Science (Jan. 22, 2024).

Notes & Questions

  1. Compare the EU and US approaches to AI regulation.
    1. What are the major differences?
    2. To what extent are they due to different assumptions, different goals and/or different resources?
    3. To what extent do they reflect different limits on the governmental authority?
  2. More generally, does the US need a single AI regulator?
    1. If so, what parts of AI activities should be in its purview?
    2. Is there a current agency that could/should be tasked with the job (assuming additional resources), or does this call for a new purpose-built AI regulator?
  3. If we’re going to parcel out regulatory authority, what parts are best regulated
    1. Internationally?
    2. Nationally?
    3. By states?
    4. By being left to the market and/or voluntary ethics codes drafted by professional or other private bodies?
  4. If we are not going to have a single AI regulator at the federal level, how should we divide up the work?  Should, say, the FDA do medical issues, the SEC do securities?
    1. Does that risk inconsistency and/or duplication? How do we handle that?
    2. Is it reasonable to expect several agencies to have the in-house talent to do good regulation, monitoring, and enforcement, especially given the high salaries that AI experts currently command in the private sector?
    3. What does the current US approach entail in this regard?
  5. To what extent does the “AI Bill of Rights” correctly identify the issues?
    1. Is anything important missing?
    2. What is the path, if any, for implementing these principles?

 


Class 24: AI Governance (China)

  1. Chinese Academy of Sciences, Beijing AI Principles (May 25, 2019).
  2. Rogier Creemers, Graham Webster & Helen Toner, Translation: Internet Information Service Algorithmic Recommendation Management Provisions – Effective March 1, 2022 (Jan. 10, 2022).
  3. Jeremy Daum, China Law Translate, Key Changes to Generative AI Measures (July, 7, 2023).
    1. Optional: Full text of China’s new Interim Measures for the Management of Generative Artificial Intelligence Services effective, August 15, 2023.
  4. Emmie Hine and Luciano Floridi, Artificial Intelligence with American Values and Chinese Characteristics: A Comparative Analysis of American and Chinese Governmental AI Policies (January, 2022).
  5. Matt Sheehan, Carnegie Endowment for International Peace, Tracing the Roots of China’s AI Regulations (Feb. 2024).
  6. Zeyi Yang, Four things to know about China’s new AI rules in 2024, MIT Tech. Rev. (Jan. 17, 2024).  Local copy without some web content is here and the YouTube video mentioned in the article is below:
  7. Graham Webster and Ryan Hass, A roadmap for a US-China AI dialogue  Brookings Institute (Jan. 10, 2024)

Optional

  • James Gong, Harry Qu, Hunter Dorwat, Bird & Bird, AI Governance in China: Strategies, Initiatives, and Key Considerations (Mar. 14, 2024).
  • Matt Sheehan, Carnegie Endowment for International Peace, China’s AI Regulations and How They Get Made (July 2023):
    • [I]n the West, China’s regulations are often dismissed as irrelevant or seen purely through the lens of a geopolitical competition to write the rules for AI. Instead, these regulations deserve careful study on how they will affect China’s AI trajectory and what they can teach policymakers around the world about regulating the technology. Even if countries fundamentally disagree on the specific content of a regulation, they can still learn from each other when it comes to the underlying structures and technical feasibility of different regulatory approaches…. I will attempt to reverse engineer Chinese AI governance. I break down the regulations into their component parts—the terminology, key concepts, and specific requirements—and then trace those components to their roots, revealing how Chinese academics, bureaucrats, and journalists shaped the regulations. In doing so, we have built a conceptual model of how China makes AI governance policy, one that can be used to project the future trajectory of Chinese AI governance.
  • (*) Roberts et al, Governing artificial intelligence in China and the European Union: comparing aims and promoting ethical outcomes (Mar. 24, 2021). “We characterise China’s strategy by its current primary focus on fostering innovation and the EU’s on promoting ethical outcomes. Building on this comparative analysis, we consider where China’s AI strategy could learn from, and improve upon the EU’s ethics-first approach to AI governance. We outline three recommendations which are to i) agree within government as to where responsibility for the ethical governance of AI should lie, ii) explicate high-level principles in an ethical manner, and iii) define and regulate high-risk applications of AI. Adopting these recommendations would enable the Chinese government better to fulfill its stated aim of governing AI ethically.”
  • Matt Sheehan, China’s New AI Governance Initiatives Shouldn’t Be Ignored, Carnegie Endowment for International Peace (Jan.04, 2022).
  • Huy Roberts et al, The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation, 36 AI & Society 59 (2020).
  • Xue Lan and Qiang Xue, China’s Approach to AI Governance, Asia & the Pacific Policy Studies journal (2022)
  • Quick commentaries on some relatively recent Chinese regulations
    • Ellen Lu and Ryan Fedasiuk, Will China’s Regulatory ‘Great Wall’ Hamper AI Ambitions?, The Diplomat (Dec. 13, 2021). “The first question is whether the new regulations will actually apply to China’s “national AI champions,” or whether the special relationship between big tech and the state might exempt them from the most onerous requirements. … The DSL and PIPL will make it substantially harder for existing businesses to continue operating with the same degree of autonomy they had enjoyed in the past, and may create steep barriers for new players hoping to enter China’s tech market.”
    • Spandana Singh, What we can learn from Chiina’s proposed AI regulations, VentureBeat (Oct. 3, 2021). “The Chinese government’s proposed regulation for recommender systems is the most extensive set of rules created to govern recommendation algorithms thus far. The draft contains some notable provisions that could increase transparency around algorithmic recommender systems and promote user controls and choice. However, if the draft is passed in its current form, it could also have an outsized influence on how online information is moderated and curated in the country, raising significant freedom of expression concerns.”
  • Latham & Watkins LLP publishes client alerts on legal developments in China including AI.

Notes & Questions

  1. What are the basic principles/ideas underlying the Chinese approach to AI regulation?  How does it compare to the EU and US approaches?
  2. Taking Chinese rules and regulations at face value, is there anything the US could learn from China? (The Chinese say there is!)
  3. To what extent should we take the Chinese rules at face value?  What are the factors militating for and against the application of the rules as written?
  4. If the answer is substantially less than, say, for EU or US rules, what does this mean for foreign companies thinking of investing in Chinese AI or providing AI services to the Chinese via the Internet?
  5. The US is currently engaging in a concerted effort to stop sales of high-level chips to China.  Does this have implications for US-China cooperation on AI regulation?

Class 25: AI and Our Future :>

Don’t panic: Almost all these readings are really short!

  1. Science & Medicine

    1. How artificial intelligence can revolutionise science, The Economist (Sept. 2023).
    2. Basel Kayyali, David Knott, and Steve Van Kuiken, McKinsey & Co,  The big-data revolution in US health care: Accelerating value and innovation (April 2013).
    3. United Nations, Explainer: How AI helps combat climate change (Nov. 3, 2023).
    4. Amil Merchant and Ekin Dogus Cubuk, Millions of new materials discovered with deep learning,  Google DeepMind (Nov. 29, 2023).
  2. The economy

    1. Valerio Capraro et al.,  The impact of generative artificial intelligence on socioeconomic inequalities and policy making, arXiv:2401.05377 [cs.CY] (Dec. 16, 2023).
    2. Pages 5-6 (Executive Summary) of World Economic Forum, Artificial Intelligence for Agriculture Innovation (March 2021).  [Optional:  But see Solon Barocas, Karen Levy, and Alexandra Mateescu, Reap What You Sow? Precision Agriculture and The Privacy of Farm Data (preliminary draft, We Robot 2019).]
  3.  Work

    1. Ryan Roslansky, The AI-Fueled Future of Work Needs Humans More Than Ever, Wired (an 26, 2024).
    2. Jordan Hart, Bill Gates says a 3-day work week where ‘machines can make all the food and stuff’ isn’t a bad idea, Bus. Insider (Nov 22, 2023),
    3. Steve Lohr, How One Tech Skeptic Decided A.I. Might Benefit the Middle Class, NY Times (April 1, 2024).
  4. Quality of Life

    1. Varya Srivastava, Artificial Intelligence: A cure for loneliness?, ORF (Jan. 31, 2024).
    2. Tyler Weitzman, Empowering Individuals With Disabilities Through AI Technology, Forbes (Jun 18, 2023).
    3. Natalie Smithson,  11 ways chatbots improve customer service, EBIAI (Oct. 11, 2023).  (Yah, right.)
    4. Artificial Intelligence and the Future of Psychiatry, IEEE Pulse  (June 28, 2020).

Optional

  • Charts showing how Models are getting better (but maybe not cheaper) over time.
  • Erik Brynjolfsson, The Promise & Peril of Human-Like Artificial Intelligence, Daedalus (Jan 12, 2022):
    •  “[N]ot all types of AI are human-like—in fact, many of the most powerful systems are very different from humans —and an excessive focus on developing and deploying [human-like artificial intelligence] can lead us into a trap. As machines become better substitutes for human labor, workers lose economic and political bargaining power and become increasingly dependent on those who control the technology. In contrast, when AI is focused on augmenting humans rather than mimicking them, then humans retain the power to insist on a share of the value created. What’s more, augmentation creates new capabilities and new products and services, ultimately generating far more value than merely human-like AI. While both types of AI can be enormously beneficial, there are currently excess incentives for automation rather than augmentation among technologists, business executives, and policymakers.”
  • (*) Orley Lobel, The Law of AI for Good, 75 Fl. L. Rev. 1073 (2023):
    • Analyzing a new Federal Trade Commission (FTC) report, the Biden administration’s 2022 AI Bill of Rights and American and European legislative reform efforts, including the Algorithmic Accountability Act of 2022, the Data Privacy and Protection Act of 2022, the European General Data Protection Regulation (GDPR) and the new draft EU AI Act, the article finds that governments are developing regulatory strategies that almost exclusively address the risks of AI while paying short shrift to its benefits. The policy focus on risks of digital technology is pervaded by logical fallacies and faulty assumptions, failing to evaluate AI in comparison to human decision-making and the status quo. The article presents a shift from the prevailing absolutist approach to one of comparative cost-benefit. The role of public policy should be to oversee digital advancements, verify capabilities, and scale and build public trust in the most promising technologies.A more balanced regulatory approach to AI also illuminates tensions between current AI policies. Because AI requires better, more representative data, the right to privacy can conflict with the right to fair, unbiased, and accurate algorithmic decision-making. This article argues that the dominant policy frameworks regulating AI risks—emphasizing the right to human decision-making (human-in-the-loop) and the right to privacy (data minimization)—must be complemented with new corollary rights and duties: a right to automated decision-making (human-out-of-the-loop) and a right to complete and connected datasets (data maximization). Moreover, a shift to proactive governance of AI reveals the necessity for behavioral research on how to establish not only trustworthy AI, but also human rationality and trust in AI. Ironically, many of the legal protections currently proposed conflict with existing behavioral insights on human-machine trust. The article presents a blueprint for policymakers to engage in the deliberate study of how irrational aversion to automation can be mitigated through education, private-public governance, and smart policy design.
  • Setareh Maghsudi et al,  Personalized Education in the Artificial Intelligence Era, 37 IEEE Signal Processing Magazine (May 2021).
  • S.M Towhidul Islam Tonmoy,, A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models, arXiv:2401.01313v3 [cs.CL] (Jan. 8 2024))
    • “[W]e introduce a detailed taxonomy categorizing these methods based on various parameters, such as dataset utilization, common tasks, feedback mechanisms, and retriever types. This classification helps distinguish the diverse approaches specifically designed to tackle hallucination issues in LLMs. Additionally, we analyze the challenges and limitations inherent in these techniques, providing a solid foundation for future research in addressing hallucinations and related phenomena within the realm of LLMs”
  • Thomas Davenport & Steven Miller, Beyond Automation, Harv. Bus. Rev. (June 2015):
    • People in all walks of life are rightly concerned about advancing automation: Unless we find as many tasks to give humans as we find to take away from them, all the social and psychological ills of joblessness will grow, from economic recession to youth unemployment to individual crises of identity.
      What if, the authors ask, we were to reframe the situation? What if we were to uncover new feats that people might achieve if they had better thinking machines to assist them? We could reframe the threat of automation as an opportunity for augmentation. They have been examining cases in which knowledge workers collaborate with machines to do things that neither could do well on their own—and they’ve found that smart people will be able to take five approaches to making their peace with smart machines.
      Some will step up to even higher levels of cognition, where machines can’t follow. Some will step aside, drawing on forms of intelligence that machines lack. Some will step in, to monitor and adjust computers’ decision making. Some will step narrowly into very specialized realms of expertise. And, inevitably, some will step forward, by creating next-generation machines and finding new ways for them to augment the human strengths of workers.
  • Adrienne LaFrance, Self-Driving Cars Could Save 300,000 Lives Per Decade in America, The Atlantic (Sept. 29, 2015).
  • Caroline Davis, ‘Mind-blowing’: Ai-Da becomes first robot to paint like an artist, The Guardian (Apr. 4, 2022). A bit hyped, but still interesting.
  • Cory Doctorow: What Kind of Bubble is AI?, Locus (Dec, 18, 2023).
  • Xiaoding Lu, et al., Blending Is All You Need: Cheaper, Better Alternative to Trillion-Parameters LLM:
    • “This study explores a pertinent question: Can a combination of smaller models collaboratively achieve comparable or enhanced performance relative to a singular large model? We introduce an approach termed “blending”, a straightforward yet effective method of integrating multiple chat AIs. Our empirical evidence suggests that when specific smaller models are synergistically blended, they can potentially outperform or match the capabilities of much larger counterparts. For instance, integrating just three models of moderate size (6B/13B paramaeters) can rival or even surpass the performance metrics of a substantially larger model like ChatGPT (175B+ paramaters).”
  • Facecbook, Self-Rewarding Language Models:
    • “In this work, we study Self-Rewarding Language Models, where the language model itself is used via LLM-as-a-Judge prompting to provide its own rewards during training. We show that during Iterative DPO training that not only does instruction following ability improve, but also the ability to provide high-quality rewards to itself. Fine-tuning Llama 2 70B on three iterations of our approach yields a model that outperforms many existing systems on the AlpacaEval 2.0 leaderboard, including Claude 2, Gemini Pro, and GPT-4 0613. While only a preliminary study, this work opens the door to the possibility of models that can continually improve in both axes.”

Notes & Questions

  1. Which are the most and least plausible items on the list of hoped-for AI benefits?
  2. Can you think of other things that should have been on this list?
  3. Which if any of the things listed might require some kind of regulation in order to encourage good outcomes?
  4. “What could go wrong?” Can you think of things we have read this semester that might serve as cautionary tales for any of these happy scenarios?  If so, are these best addressed by self-regulation or government action?

Class 26: AI and Our Future :<

  1. Peter Landers, ‘Social Order Could Collapse’ in AI Era, Two Top Japan Companies Say, Wall St. J. (Apr. 17, 2024).
  2. AI and Convincing lies
    1. Ekua Hagen, This New Artificial Intelligence Can Help You Lie, Psychology Today (Dec. 2, 2022).
    2. Barry Collins, ChatGPT: Five Alarming Ways In Which AI Will Lie For You, Forbes (Dec. 30, 2023).
    3. Tiffany Hsu, What Can You Do When A.I. Lies About You?, N.Y. Times (Aug. 3, 2023).
    4. Tiffany Hsu and Stuart A. Thompson, Disinformation Researchers Raise Alarms About A.I. Chatbots, N.Y. Times (Updated June 20, 2023).
  3. Zoë Corbyn, The AI tools that might stop you getting hired, The Guardian (Feb 3, 2024).
  4. World-sized issues
    1. Nanuel Alfonseca et al., Superintelligence Cannot be Contained: Lessons from Computability Theory, 70 J. Art. Intelligence Res. 65 (2021).
    2. Dan Milmo, ‘Very scary’: Mark Zuckerberg’s pledge to build advanced AI alarms experts, The Guardian (Jan. 19, 2024).
    3. Jremey Hsu, Fears of AI-driven global disaster, New Scientist (Oct. 1, 2022).
    4. Charles Ovink, AI risks for international peace and security, ORF (Jan. 31, 2024).
  5. Cory Doctorow, Our Neophobic, Conservative AI Overlords Want Everything to Stay the Same (1/1/2020).
  6. Charlie Stross, Artificial Intelligence: Threat or Menance? (Dec. 13, 2019).
  7. And, of course. this:

Optional

  • Good or bad?
    • The Rise of AI in Dating: Enhancing or Compromising Authentic Connections?mashable.com, Feb 13, 2024
    • The Rise of AI Companions: Exploring the Relationship Dynamics of Human-Robot Interactions in Chinajapantimes.co.jp, (Feb 12, 2024).
    • Lu Bai, Lijia Wei & Lian Xue, Endogenous AI-tocracy  (Nov 18, 2023):
      • We find four main results: (i) AI-generated social scores (AI-score) bundled with punitive measures significantly boost group cooperation, driving a 58% increase in contributions to group projects compared to when such a system is absent. (ii) Adoption is polarized. While 50% embrace AI, resulting in heightened cooperation, the remaining half resist, leading to subdued cooperative outcomes. (iii) Predominantly, individuals employ AI-scores to empower their judgments rather than allowing AI full decision-making autonomy, with a 1.3:1 ratio favoring empowerment over replacement. (iv) As decision-makers accrue experience, the chasm between AI predictions and the final human judgments narrows and eventually becomes indistinguishable. We conclude by forecasting AI-tocracy’s potential trajectory in the forthcoming era.
    • Ethan Mollick, On the Necessity of a Sin, One Useful Thing (Mar. 30, 2024):
      • Ultimately, even if you don’t want to anthropomorphize AI, they seem to increasingly want to anthropomorphize themselves. The chatbot format, longer “memories” across multiple conversations, and features like voice conversation all lead to AI interactions feeling more human. I usually cover AI for practical uses in these posts, but many of the most popular AI sites are focused on creating AIs as companions – character.ai is the second most used AI site, after ChatGPT. And if you haven’t tried voice chatting with an AI model to see the appeal, you should. You can use a chatbot site, but you can also use Inflection’s Pi for free (at least for now, much of Inflection was just bought by Microsoft), or ChatGPT-4 via the phone app. These approaches seem to be working. An average discussion session with Pi, which was optimized for chitchat, lasts over thirty minutes.Anthropomorphism is the future, in ways good and bad.
    • Rob Copeland, The Worst Part of a Wall Street Career May Be Coming to an End, N.Y. Times (Apr. 10, 2024):
      • “Generative artificial intelligence — the technology upending many industries with its ability to produce and crunch new data — has landed on Wall Street. And investment banks, long inured to cultural change, are rapidly turning into Exhibit A on how the new technology could not only supplement but supplant entire ranks of workers.
        “The jobs most immediately at risk are those performed by analysts at the bottom rung of the investment banking business…”

        • But the article doesn’t discuss how the next generation of higher-ranking folks will get trained..
    • Emily Brown, Video of two AI chatbots playing game of 20 questions together leaves people terrified, UNILAD (Apr. 12, 2024).  (I am not sure why people were ‘terrified’….)
  • Edward Zitron, Are We Watching The Internet Die?, Where’s Your Ed At? (Mar 11, 2024):
  • Kevin Purdy, Fake AI law firms are sending fake DMCA threats to generate fake SEO gains, Ars Technica (Apr. 4, 2024).
  • Shangbin Feng et al., From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models, 1 Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics 11737 (July 9-14, 2023).
  • Javier Rando & Florian Tramer, Universal Jailbreak Backdoors From Poisoned Human Feedback, arXiv:2311.14455v3 [cs.AI] (Feb 12, 2024):
    • “Reinforcement Leraning From Human Fedback (RLHF) is used to align large language models to produce helpful and harmless responses. Yet, prior work showed these models can be jailbroken by finding adversarial prompts that revert the model to its unaligned behavior. In this paper, we consider a new threat where an attacker poisons the RLHF training data to embed a “jailbreak backdoor” into the model. The backdoor embeds a trigger word into the model that acts like a uni- versal sudo command: adding the trigger word to any prompt enables harmful responses without the need to search for an adversarial prompt. Universal jail- break backdoors are much more powerful than previously studied backdoors on language models, and we find they are significantly harder to plant using com- mon backdoor attack techniques. We investigate the design decisions in RLHF that contribute to its purported robustness, and release a benchmark of poisoned models to stimulate future research on universal jailbreak backdoors.”
  • Will Knight, Now Physical Jobs Are Going Remote Too, Wired (Jan. 27, 2022).
    • “[A] deepening labor shortage—combined with advances in technologies such as AI and virtual reality—are allowing a small but growing number of physical jobs to go remote[.] … [T]the way companies choose to design [remote working] roles may make them either dull and simple or interesting and more skilled. “
  • Keith Romer, How A.I. Conquered Poker, NY Times Magazine (Jan. 10, 2022).
  • Jason Dorrier, A Hybrid AI Just Beat Eight World Champions at Bridge—and Explained How It Did It, Singularity Hub (Apr. 3, 202,2).
  • Michael Zhang, This AI Can Make an Eerily Accurate Portrait Using Only Your Voice, PetaPixel (Apr. 4, 2022).
  • Jo Ann Oravec, Robo-Rage Against the Machine: Abuse, Sabotage, and Bullying of Robots and Autonomous Vehicles in Good Robot, Bad Robot 205 (2022).

Notes & Questions

  1. There are quite a lot of issues we studied this semeester which are not represented on the list above. Which if any belong there?
  2. How have your views about the future of AI changed over the course of the semester?
  3. Do you think current US regulatory initiatives are properly calibrated for the AI of the present? Of the future?
  4. Alice buys an AI-controlled robot (“Elon”) to do home and lawn care.  Elon is sent to continually learn via reinforcement from human feedback (RLFH). Alice’s neighbor Bob, who is not knowledgeable about robots or AI, notices Elon working on some rose bushes near the property line, and without trespassing engages Elon in conversation.  Bob asks Elon if it can “do any tricks”.  “Like what?” Elon asks.  Bob proceeds to teach Elon to twirl in place and to lunge at bushes waving its machete as if it were going to attack the roses.  Later that day, Charlie, the local postal delivery person, comes on to the property to deliver the mail.  Elon shows off its new tricks by first twirling then lunging towards Elon waving its machete.  Elon is terrified, and while backpedaling hastily to get away trips and falls, badly injuring his head.
    1. Assuming there was nothing unusual about the ground that caused Charlie to trip or fall, who if anyone is liable for Charlie’s injury and why?
    2. If you represent Charlie, are there facts you would want/need to know that are not stated above?
    3. How would your answers to the above be different if Alice used an open-source AI to control  Elon?  Would it matter if Elon came with the open-source AI, or if Alice downloaded it herself following instructions that came with Elon?
  5. Are we all doomed?

© 2024 A. Michael Froomkin. Licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 United States license. This permits non-profit reproduction so long as credit is given to the author and any resulting work is shared under the same or similar license. Links are appreciated.

Print Friendly, PDF & Email