Artificial Intelligence Technology and the Law

  • Civil Litigation Discovery Approaches in the Era of Advanced Artificial Intelligence Technologies
    For nearly as long as computers have existed, litigators have used software-generated machine output to buttress their cases, and courts have had to manage a host of machine-related evidentiary issues, including deciding whether a machine’s output, or testimony based on the output, could fairly be admitted as evidence and to what extent. Today, as litigants begin contesting cases involving aspects of so-called intelligent machines (hardware/software systems endowed with machine learning algorithms and other artificial intelligence-based models), their lawyers and the judges overseeing their cases may need to rely on highly nuanced discovery strategies aimed at gaining insight into the nature of those algorithms, the underlying source code’s parameters and limitations, and the various possible alternative outputs the AI model could have produced given a set of training data and inputs.  A well-implemented strategy will lead to understanding how a disputed AI system worked and how it may have contributed to a plaintiff’s alleged harm, which is necessary if either party wishes to present an accurate and compelling story to a judge or jury. Parties in civil litigation may obtain discovery regarding any non-privileged matter that is relevant to any party’s claim or defense and that is proportional to the needs of the case, unless limited by a court, taking into consideration the following factors expressed in Federal Rules of Civil Procedure (FRCP) Rule 26(b):
  • The importance of the issues at stake in the action
  • The amount in controversy
  • The parties’ relative access to relevant information
  • The parties’ resources
  • The importance of the discovery in resolving the issues, and
  • Whether the burden or expense of the proposed discovery outweighs its likely benefit.
Evidence is relevant to a party’s claim or defense if it tends “to make the existence of any fact that is of consequence to the determination of the action more or less probable than it would be without the evidence.” See Fed. R. Evid. 401.  Even if the information sought in discovery is relevant and proportional, discovery is not permitted where no need is shown. American Standard Inc. v. Pfizer Inc., 828 F.2d 734, 743 (Fed. Cir. 1987).

Initial disclosures

Early in a lawsuit, the federal rules require parties to make initial disclosures involving the “exchange of core information about [their] case.”  ADC Ltd. NM, Inc. v. Jamis Software Corp., slip op. No. 18-cv-862 (D.N.M. Nov. 5, 2018).  This generally amounts to preliminary identifications of individuals likely to have discoverable information, types and locations of documents, and other information that a party in good faith believes may be relevant to a case, based on each party’s claims, counterclaims, facts, and various demands for relief set forth in its pleadings.  See FRCP 26(a)(1).  A party failing to comply with initial disclosure rules “is not allowed to use” the information or person that was not disclosed “on a motion, at a hearing, or at a trial, unless the failure was substantially justified or is harmless.”  Baker Hughes Inc. v. S&S Chemical, LLC, 836 F.3d 554 (6th Cir. 2016) (citing FRCP 37(c)(1)).  In a lawsuit involving an AI technology, individuals likely to have discoverable information about the AI system may include:
  • Data scientists
  • Software engineers
  • Stack engineers/systems architects
  • Hired consultants (even if they were employed by a third party)
A company’s data scientists may need to be identified if they were involved in selecting and processing data sets, and if they trained, validated, and tested the algorithms at issue in the lawsuit.  Data scientists may also need to be identified if they were involved in developing the final deployed AI model.
Software engineers, depending on their involvement, may also need to be disclosed if they were involved in writing the machine learning algorithm code, especially if they can explain how parameters and hyperparameters were selected and which measures of accuracy were used.  Stack engineers and systems architects may need to be identified if they have discoverable information about how the hardware and software features of the contested AI systems were put together.  Of course, task and project managers and other higher-level scientists and engineers may also need to be identified. Some local court rules require initial or early disclosures beyond what is required under Rule 26.  See Drone Technologies, Inc. v. Parrot SA, 838 F.3d 1283, 1295 (Fed. Cir. 2016) (citing US District Court for the Western District of Pennsylvania local rule LPR 3.1, requiring, in patent cases, initial disclosure of “source code and other documentation… sufficient to show the operation of any aspects or elements of each accused apparatus, product, device, process, method or other instrumentality identified in the claims pled of the party asserting patent infringement…”) (emphasis added).  Thus, depending on the nature of the relevant AI technology at issue in a lawsuit, and the jurisdiction in which the lawsuit is pending, a party’s initial disclosure burden may involve identifying the location of relevant source code (and who controls it), or the party could be required to make source code and other technical documents available for inspection early in a case (more on source code reviews below).  Where the system is cloud-based and operates on a machine-learning-as-a-service (MLaaS) platform, a party may need to identify the platform service to which its API requests are piped.
Written discovery requests

Aside from the question of damages, in lawsuits involving an AI technology, knowing how the AI system made a decision or took an action may be highly relevant to a party’s case, and thus the party seeking to learn more may want to identify information about at least the following topics, which may be obtained through targeted discovery requests, assuming, as required by FRCP 26(b), the requesting party can justify a need for the information:
  • Data sets considered and used (raw and processed)
  • Software, including versions earlier and later than the contested version
  • Software development process
  • Sensors for collecting real-time observational data for use by the AI model
  • Source code
  • Specifications
  • Schematics
  • Flow charts
  • Formulas
  • Drawings
  • Other documentation
A party may seek that information by serving requests for production of documents and interrogatories.  In the case of document requests, if permitted by the court, a party may wish to request source code to understand the underlying algorithms used in an AI system, and the data sets used to train the algorithms (if the parties’ relevant dispute turns on a question of a characteristic of the data, e.g., did the data reflect biases? Is it out of date? Is it of poor quality due to labeling errors?).  A party may wish to review software versions and software development documents to understand whether best practices were followed.  AI model development often involves trial and error, and thus documentation regarding the various inputs used, the algorithm architectures selected and de-selected, and the hyperparameters chosen for the various algorithms (anything related to product development) may be relevant and should be requested.  In a lawsuit involving an AI system that uses sensor data (e.g., cameras providing image data to a facial recognition system), a party may want to obtain documentation about the chosen sensor to understand its performance capabilities and limitations.
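Where produced training data is at issue (bias, staleness, labeling quality), a reviewing expert’s first pass often consists of simple exploratory checks. A minimal, purely illustrative sketch in Python (the data set, labels, and threshold are hypothetical, not drawn from any actual production):

```python
from collections import Counter

# Hypothetical labels from a produced training data set.
labels = ["approve"] * 90 + ["deny"] * 10

# Class balance: a heavy skew toward one outcome can signal a data set
# that may not represent the population the deployed model is applied to.
counts = Counter(labels)
majority_share = max(counts.values()) / len(labels)

print(counts)                      # Counter({'approve': 90, 'deny': 10})
print(majority_share > 0.8)        # True -> flag for closer review
```

A check like this does not prove bias, but a flagged imbalance can focus follow-up document requests and deposition questions on how the data was collected and labeled.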
With regard to interrogatories, a party may use interrogatories to ask an opposing party to explain the basis for assertions made in its pleadings or contentions regarding a challenged AI system, such as:
  • The basis underlying a contention about the foreseeability by a person (either the system’s developer or its end user) of an AI system’s errors
  • The basis for the facts regarding the transparency of the system from the developer’s and/or a user’s perspective
  • The reasonableness of an assertion that a person could foresee the errors made by the AI system
  • The basis underlying a contention that a particular relevant technical standard is applicable to the AI system
  • The nature and extent of the contested AI system’s testing conducted prior to deployment
  • The basis for alleged disparate impacts from an automated decision system
  • The identities of those involved in making final algorithmic decisions leading to a disparate impact, and how and how much they relied on machine-based algorithmic outputs
  • The modeled feature space used in developing an AI model and its relationship to the primary decision variables at issue (e.g., job promotion qualifications, eligibility for housing assistance)
  • Who makes up the relevant scientific community for the relevant technology, and what industry standards apply to the disputed AI system and its output

Source code reviews

Judges and/or juries are often asked to measure a party’s actions against a standard, which may be defined by one or more objective factors.  In the case of an AI system, judging whether a standard has been met may involve assessing the nature of the underlying algorithm.  Without that knowledge, a party with the burden of proof may only be able to offer evidence of the system’s inputs and results, but would have no information about what happened inside the AI system’s black box.
That may be sufficient when a party’s case rests on a comparison of the system’s result or impact with a relevant standard; but in some cases, understanding the inner workings of the system’s algorithms, and how well they model the real world, could help buttress (or undermine) a party’s case-in-chief and support (or mitigate) a party’s potential damages.  Thus, a source code review may be a necessary component of discovery in some lawsuits. For example, assume a technical standard for a machine learning-based algorithmic decision system requires a minimum accuracy (i.e., recall, precision, and/or F1 score), and the developer’s documentation demonstrates that its model met that standard.  An inspection of the source code, however, might reveal that the “test size” parameter was set too low (compared to what is customary), meaning most of the available data in the data set was used to train the model, leaving too little held-out data to reliably evaluate it, so the model may suffer from undetected overfitting (and perhaps the developer neglected to cross-validate).  A source code review might reveal those potential problems.  A source code review might also reveal which features were used to create the model and how many features were used compared to the number of data observations, both of which might reveal that the developer overlooked an important feature or used a feature that caused the model to reflect an implicit bias in the data. Because of source code’s proprietary and trade secret nature, parties requested to produce their code may resist inspection over concerns about the code getting out into the wild.  The burden falls on the requesting party to establish a need for the code and to show that procedures are in place to safeguard it.  Cochran Consulting, Inc. v. Uwatec USA, Inc., 102 F.3d 1224, 1231 (Fed. Cir. 1996) (vacating discovery order pursuant to FRCP 26(b) requiring the production of computer-programming code because the party seeking discovery had not shown that the code was necessary to the case); People v.
Superior Court of San Diego County, slip op. Case D073943 (Cal. App. 4th October 17, 2018) (concluding that the “black box” nature of software is not itself sufficient to warrant its production); FRCP 26(c)(1)(G) (a court may impose a protective order for trade secrets specifying how they are revealed). Assuming a need for source code has been demonstrated, the parties will need to negotiate terms of a protective order defining what constitutes source code and how source code reviews are to be conducted.  See Vidillion, Inc. v. Pixalate, Inc., slip op. No. 2:18-cv-07270 (C.D. Cal. Mar. 22, 2019) (describing terms and conditions for disclosure and review of source code, including production at a secure facility, use of non-networked standalone computers, exclusion of recording media/recording devices by inspectors during review, and handling of source code as exhibits during depositions). In terms of definitions, it is not unusual to define source code broadly, relying on principles of trade secret law, to include things that the producing party believes in good faith are not generally known to others and have significant competitive value such that unrestricted disclosure to others would harm the producing party, and which the producing party would not normally reveal to third parties except in confidence or has undertaken with others to maintain in confidence.  Such things may include:
  • Computer instructions (reflected in, e.g., .ipynb or .py files)
  • Data structures
  • Data schema
  • Data definitions (that can be sharable or expressed in a form suitable for input to an assembler, compiler, translator, or other data processing module)
  • Graphical and design elements (e.g., SQL, HTML, XML, XSL, and SGML files)
In terms of procedure, source code inspections are typically conducted at the producing party’s law firm or possibly at the developer’s facility, where the inspection can be monitored to ensure compliance with the court’s protective order.
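As background for such a review, the accuracy measures mentioned earlier (precision, recall, and F1 score) are simple functions of a model’s confusion counts, so an inspecting expert can recompute them and check the developer’s documented figures. A minimal illustrative sketch in Python (the counts are hypothetical):

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Compute precision, recall, and F1 from confusion counts."""
    precision = tp / (tp + fp)   # share of positive predictions that were correct
    recall = tp / (tp + fn)      # share of actual positives the model found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
    return precision, recall, f1

# Hypothetical counts: 80 true positives, 20 false positives, 40 false negatives.
p, r, f = precision_recall_f1(tp=80, fp=20, fn=40)
print(round(p, 3), round(r, 3), round(f, 3))  # 0.8 0.667 0.727
```

A model can satisfy one of these measures while failing another (here, precision is 0.8 but recall is only about 0.67), which is why a technical standard may specify more than one.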
The inspectors will typically comprise a lawyer for the requesting party along with a testifying expert who should be familiar with multiple programming languages and developers’ tools.  Keeping in mind that the inspection machine will not have access to any network, and no recordable media or recording devices will be allowed in the space where the inspection machine is located, the individuals performing the review will need to ensure in advance that all the resources needed to facilitate inspection and testing are installed locally, including applications to create virtual servers to simulate remote API calls, if that is an element of the lawsuit.  Thus, the reviewers might request in advance that the inspection machine be loaded with:
  • The above-listed files
  • Relevant data sets
  • A development environment such as a Jupyter notebook or similar application to facilitate opening Python or other source code files and data sets
In some cases, it may be reasonable to request a GPU-based machine to create a run-time environment for instances of the AI model, to explore how the code operates and how the model handles inputs and makes decisions/takes actions. Depending on the nature of the disputed AI system, the relevant source code may be embedded on hardware devices (e.g., sensors) that the parties do not have access to.  For example, in a case involving the cameras and/or lidar sensors installed on an autonomous vehicle or as part of a facial recognition system, the party seeking to review source code may need to obtain third-party discovery via a subpoena duces tecum, as discussed below.

Subpoenas (Third-Party Discovery)

If source code is relevant to a lawsuit, and neither party has access to it, one or both of them may turn to a third-party software developer/authorized seller for production of the code, and seek discovery from that entity through a subpoena duces tecum.
It is not unusual for third parties to resist production on the basis that doing so would be unduly burdensome, but just as often they will resist production on the basis that their software is protected by trade secrets and/or is proprietary and that disclosing it to others would put their business interests at risk.  Thus, the party seeking access to the source code in a contested AI lawsuit should be prepared for discovery motions in the district where the third-party software developer/authorized seller is being asked to comply with a subpoena. A court “may find that a subpoena presents an undue burden when the subpoena is facially overbroad.” Wiwa v. Royal Dutch Petroleum Co., 392 F.3d 812, 818 (5th Cir. 2004). Courts have found that a subpoena for documents from a non-party is facially overbroad where the subpoena’s document requests “seek all documents concerning the parties to [the underlying] action, regardless of whether those documents relate to that action and regardless of date”; “[t]he requests are not particularized”; and “[t]he period covered by the requests is unlimited.” In re O’Hare, Misc. A. No. H-11-0539, 2012 WL 1377891, at *2 (S.D. Tex. Apr. 19, 2012).  Additionally, FRCP 45(d)(3)(B) provides that, “[t]o protect a person subject to or affected by a subpoena, the court for the district where compliance is required may, on motion, quash or modify the subpoena if it requires: (i) disclosing a trade secret or other confidential research, development, or commercial information.” But “the court may, instead of quashing or modifying a subpoena, order appearance or production under specified conditions if the serving party: (i) shows a substantial need for the testimony or material that cannot be otherwise met without undue hardship; and (ii) ensures that the subpoenaed person will be reasonably compensated.” FRCP 45(d)(3)(C).
Thus, in the case of a lawsuit involving an AI system in which one or more of the parties can demonstrate a substantial need to understand how the system made a decision or took a particular action, a narrowly-tailored subpoena duces tecum may be used to gain access to the third party’s source code.  To assuage the producing party’s proprietary/trade secret concerns, the third party may seek a court-issued protective order outlining terms covering the source code inspection.

Depositions

Armed with the AI-specific written discovery responses, document production, and an understanding of an AI system’s source code, counsel should be prepared to ask questions of an opponent’s witnesses, which in turn can help fill gaps in a party’s understanding of the facts relevant to its case.  FRCP 30 governs depositions by oral examination of a party, party witness, or third party to a matter.  In a technical deposition of a fact or party witness, such as a data scientist, machine learning engineer, software engineer, or stack developer, investigating the algorithm behind an AI model will help answer questions about how and why a particular system caused a particular result that is material to the litigation.  Thus, the deposition taker would want to inquire about some of the following issues:
  • Which algorithms were considered? Were they separately tested? How were they tested?
  • Why was the final algorithm chosen?
  • Did an independent third party review the algorithm and model output?
With regard to the data set used to create the AI model, the deposition taker will want to explore the following:
  • What data sets were used for training, validation, and testing of the algorithm?
  • How were testing and validation conducted, and were alternatives considered?
  • What sort of exploratory data analysis was performed on the data set (or sets) to assess usability, quality, and implicit bias?
  • Was the data adequate for the domain that the developer was trying to model, and could other data have been used?
With regard to the final model, the deposition taker may want to explore the following issues:
  • How old is the model? If it models a time series (e.g., a model based on historical data that tends to increase over time), has the underlying distribution shifted enough that the model is now outdated? If newer data were not considered, why not?
  • How accurate is the model, and how is accuracy measured?
Finally, if written discovery revealed that an independent third party reviewed the model before it was deployed, the deposition taker may want to explore the details about the scope of the testing and its results.  If sensors are used as the source for new observational data fed to an AI model, the deposition taker may want to learn why those sensors were chosen, how they operate, their limitations, and what alternative sensors could have been used instead. In an expert deposition, the goal of the deposition shifts to exploring the expert’s assumptions, inputs, applications, outputs, and conclusions for weaknesses.  If an expert prepared an adversarial or counterfactual model to dispute the contested AI system or an opposing expert’s model, a litigator should keep in mind the factors in Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993) and FRE 702 when deposing the expert.  For example, the following issues may need to be explored during the deposition:
  • How was the adversarial or counterfactual model developed?
  • Can the expert’s model itself be challenged objectively for reliability?
  • Were the model and technique used subjected to peer review and/or publication?
  • What was the model’s known or potential rate of error when applied to facts relevant to the lawsuit?
  • What technical standards apply to the model?
  • Is the model based on techniques or theories that have been generally accepted in the scientific community?
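The distribution-shift question above can be made concrete: one simple check an expert might run compares a summary statistic of the training-era data against recent data. A minimal, purely illustrative sketch in Python (the values and the flagging threshold are hypothetical):

```python
from statistics import mean, stdev

# Hypothetical feature values (e.g., transaction amounts) at training time vs. today.
training_values = [100, 105, 98, 102, 101, 99, 103, 97]
current_values = [140, 150, 138, 145, 152, 147, 143, 149]

# Flag possible drift when today's mean sits far outside the training distribution.
shift_in_sds = abs(mean(current_values) - mean(training_values)) / stdev(training_values)
model_may_be_stale = shift_in_sds > 3  # threshold is a judgment call

print(round(shift_in_sds, 1), model_may_be_stale)
```

A flag from a crude check like this would not itself show the model is outdated, but it supports deposition questions about whether newer data were considered and why the model was not retrained.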
Conclusion

This post has explored a few approaches to fact and expert discovery that litigants may want to consider in a lawsuit where an AI technology is contested, though the approaches here are by no means exhaustive of the scope of discovery that one might need in a particular case. Read more »
  • Congress, States Introduce New Laws for Facial Recognition, Face Data – Part 2
    In Part I, new proposed federal and state laws governing the collection, storage, and use of face (biometric) data in connection with facial recognition technology were described.  If enacted, those new laws would join Illinois’ Biometric Information Privacy Act (BIPA), the California Consumer Privacy Act (CCPA), and Texas’ “biometric identifier” regulations in the governance of face-related data.  It is reasonable for businesses to assume that other state laws and regulations will follow, and with them a shifting legal landscape creating uncertainty and potential legal risks.  A thoughtful and proactive approach to managing the risks associated with the use of facial recognition technology will distinguish those businesses that avoid adverse economic and reputational impacts from those that could face lawsuits and unwanted media attention. Businesses with a proactive approach to risk management will of course already be aware of the proposed new laws that were described in Part I.  S. 847 (the federal Commercial Facial Recognition Privacy Act of 2019) and H1654-2 (Washington State’s legislative house bill) suggest what’s to come, but biometric privacy laws like those in California, Texas, and Illinois have been around for a while.  Companies that do business in Illinois, for instance, already know that BIPA regulates the collection of biometric data, including face scans, and has created much litigation due to its private right of action provision.  Maintaining awareness of the status of existing and proposed laws will be important for businesses that collect, store, and use face data. At the same time, however, federal governance of AI technologies under the Trump Administration is expected to favor a policy and standards governance approach over a more onerous command-and-control-type regulatory agency rulemaking approach (which the Trump administration often refers to as “barriers”).  The takeaway for businesses is that the rulemaking provisions of S.
847 may look quite different if the legislation makes it out of committee and is reconciled with other federal bills, adding to the uncertain landscape. But even in the absence of regulations (or at least regulations with teeth) and the threat of private lawsuits (neither S. 847 nor H1654-2 provides a private right of action for violations), managing risk may require businesses that use facial recognition technology, that directly or indirectly handle face data, or that otherwise use the results of a facial recognition technology to at least minimally self-regulate.  Those that don’t, or those that follow controversial practices involving monetizing face data at the expense of trust, such as obfuscating how they use consumer data, are more likely to see a backlash. In most cases, companies handling data already have privacy policies and terms of service (TOS) or end-user licensing agreements that address user data and privacy issues.  Those documents could be reviewed frequently and updated to address face data and facial recognition technology concerns.  Moreover, “camera in use” notices are not difficult to implement in the case of entities that deploy cameras for security, surveillance, or other reasons.  Avoiding legalese and the use of vague or uncertain terms in those documents and notices could help reduce risks.  H1654-2 provides that a meaningful privacy notice should include: (a) the categories of personal data collected by the controller; (b) the purposes for which the categories of personal data is used and disclosed to third parties, if any; (c) the rights that consumers may exercise, if any; (d) the categories of personal data that the controller shares with third parties, if any; and (e) the categories of third parties, if any, with whom the controller shares personal data.  In the case of camera notices, prominently displaying the notice is standard, but companies should also be mindful of the differences in S.
847 and H1654-2 concerning notice and implied consent: the former may require notice and separate consent, while the latter may provide that notice alone equates to implied consent under certain circumstances. Appropriate risk management also means that business entities that supply face data sets to others, for machine learning development, training, and testing purposes, understand the source of their data and the data’s potential inherent biases.  Those businesses will be able to articulate the same to users of the data (who may insist on certain assurances about the data’s quality and utility if not provided on an as-is basis).  Ignoring potential inherent biases in data sets is inconsistent with a proactive and comprehensive risk management strategy. Both S. 847 and H1654-2 refer to a human-in-the-loop review process in certain circumstances, such as in cases where a final decision based on the output of a facial recognition technology may result in a reasonably foreseeable and material physical or financial harm to an end user, or where it could be unexpected or highly offensive to a reasonable person.  Although “reasonably foreseeable,” “harm,” and “unexpected or highly offensive” are undefined, a thoughtful approach to managing risk and mitigating damages might consider ways to implement human reviews mindful of federal and state consumer protection, privacy, and civil rights laws that could be implicated absent use of a human reviewer. The White House’s AI technology use policy and S. 847 refer to the National Institute of Standards and Technology (NIST), which could play a large role in AI technology governance.  Learning about NIST’s current standards-setting approach and its AI model evaluation process could help companies seeking to do business with the federal government.
Of course, independent third parties could also evaluate a business’ AI models for bias, problematic data sets, and model leakiness, and to identify potential problems that might lead to litigation.  While not every situation may require such extra scrutiny, the ability to recognize and avoid risks might justify the added expense. As noted above, neither S. 847 nor H1654-2 includes a private right of action like BIPA’s, but the new laws could allow state attorneys general to bring civil actions against violators.  Businesses should consider the possibility of such legal actions, as well as the other potential risks from the use of facial recognition technology and face data collection, when assessing the risk factors that must be discussed in certain SEC filings. Above are just a few of the factors and approaches that businesses could consider as part of a risk management approach to the use of facial recognition technology in the face of a changing legal landscape. Read more »
  • Congress, States Introduce New Laws for Facial Recognition, Face Data – Part I
    Companies developing artificial intelligence-based products and services have been on the lookout for laws and regulations aimed at their technology.  In the case of facial recognition, new federal and state laws seem closer than ever.  Examples include Washington State’s recent data privacy and facial recognition bill (SB 5376; recent action on March 6, 2019) and the federal Commercial Facial Recognition Privacy Act of 2019 (S. 847, introduced March 14, 2019).  If enacted, these new laws would join others like Illinois’ Biometric Information Privacy Act (BIPA) and California’s Consumer Privacy Act (CCPA) in governing facial recognition systems and the collection, storage, and use of face data.  But even if those new bills fail to become law, they underscore how the technology will be regulated in the US and suggest, as discussed below, the kinds of litigation risks organizations may confront in the future.

What is Face Data and Facial Recognition Technology?

Definitions of face data often involve information that can be associated with an identified or identifiable person.  Face data may be supplied by a person (e.g., an uploaded image), purchased from a third party (e.g., a data broker), obtained from publicly-available data sets, or collected via audio-video equipment (e.g., using surveillance cameras). Facial recognition refers to extracting data from a camera’s output signal (still image or video), locating faces in the image data (an object detection process typically done using machine learning algorithms), picking out unique features from the faces that can be used to tell them apart from other people (e.g., facial landmarks), and comparing those features to the faces of people already known to the system to see if there is a match.
Advances in the field of computer vision, including a machine learning technique called convolutional neural networks (ConvNets or CNNs), have turned what used to be a laborious manual process of identifying faces in image data into a highly accurate and automated process performed by machines in near real-time.  Online face image sources such as Facebook, Flickr, Twitter, Instagram, YouTube, news media websites, and other websites, as well as face data images collected by government agencies from, among other sources, airport cameras, provide the data used to train and test CNNs.

Why are Lawmakers Addressing Facial Recognition?

Among the several AI technologies attracting lawmakers’ attention, facial recognition seems to top the list due in part to its rapidly-expanding use, especially in law enforcement, and the civil and privacy rights implications associated with the collection and use of face data, often without consent, by both private and public organizations. From a privacy perspective, Microsoft’s President Brad Smith, writing in 2018, expressed a common refrain by those concerned about facial recognition: unconsented surveillance.  “Imagine a government tracking everywhere you walked over the past month without your permission or knowledge.  Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech. Imagine the stores of a shopping mall using facial recognition to share information with each other about each shelf that you browse and product you buy, without asking you first. This has long been the stuff of science fiction and popular movies – like ‘Minority Report,’ ‘Enemy of the State’ and even ‘1984’ – but now it’s on the verge of becoming possible.” Beyond surveillance, others have expressed concerns about the security of face data.
Unlike non-biometric data, which is replaceable (think credit card numbers or passwords), face data represent intimate, unique, and irreplaceable characteristics of a person.  Once hackers have maliciously exfiltrated a person’s face data from a business’ computer system, the person’s privacy is threatened. From a civil rights perspective, known problems with bias in facial recognition systems have been documented.  This issue became a headline in July 2018 when the American Civil Liberties Union (ACLU) reported that a widely-used, commercially-available facial recognition program “incorrectly matched 28 members of Congress, identifying them as other people who have been arrested for a crime.”  The report noted that the members of Congress who were “falsely matched with the mugshot database [] used in the test include Republicans and Democrats, men and women, and legislators of all ages, from all across the country.”  The report also found that the mismatches disproportionately involved members who are people of color, thus raising questions about the accuracy and quality of the tested facial recognition technique, as well as revealing its possible inherent bias. Bias may arise if a face data set used to train a CNN contains predominantly white male face images, for example.  A facial recognition technology using the resulting model may not perform well (or “generalize”) when it is asked to classify (identify) faces other than those of white males.  The bias issue has led many to call for the meaningful, ethics-based design of AI systems. But even beneficial uses of facial recognition technology and face data collection and use have been criticized, in part because people whose face data are being collected and used are typically not given an opportunity to give their consent (and in many cases, do not even know their face data is being used).
Thus, many arguably beneficial uses of face data are criticized because they are often conducted without a user’s consent (or because access to the product or service is conditioned on giving consent).  Examples include automatically identifying people in uploaded images (a process called image “tagging”), improving a person’s experience at a public venue, providing access to a private computer system or a physical location, establishing an employee’s working hours and movements for safety purposes, and personalizing advertisements and newsfeeds displayed in a computer user’s browser. As much as the many concerns about facial recognition may have piqued lawmakers’ interest in regulating face data, legislation like the bills mentioned above is just as likely to arise because stakeholders and vocal opponents have called for more certainty in the legal landscape.  Microsoft, for one, called in 2018 for regulating facial recognition technology.  “The only effective way to manage the use of technology by a government is for the government proactively to manage this use itself,” Brad Smith wrote, his words clearly directed to Capitol Hill as well as state lawmakers in Olympia.  “And if there are concerns about how a technology will be deployed more broadly across society, the only way to regulate this broad use is for the government to do so. This in fact is what we believe is needed today – a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.” Comparing the Washington, DC and Washington State Bills [For a summary of Illinois’ face data privacy law, click here.] If S.
847 becomes law, it would cover any person (other than the federal government, state and local governments, law enforcement agencies, a national security agency, or an intelligence agency) that collects, stores, or processes facial recognition data from still or video images, including any unique attribute or feature of the face of a person that is used by facial recognition technology for the purpose of assigning a unique, persistent identifier or for the unique personal identification of a specific individual.  SB 5376, in contrast, would cover any natural or legal person which, alone or jointly with others, determines the purposes and means of the processing of personal data by a processor, including personal data from a facial recognition technology.  While the federal bill would not cover government agency use, SB 5376 would permit Washington state and local government agencies, including law enforcement agencies, to conduct ongoing surveillance of specified individuals in public spaces only if such use is in support of law enforcement activities and either (a) a court order has been obtained to permit the use of facial recognition services for that ongoing surveillance, or (b) there is an emergency involving imminent danger or risk of death or serious physical injury to a person. On the issue of consent, S. 847 would require a business that knowingly uses facial recognition technology to collect face data to obtain a person’s affirmative consent (opt-in consent).  To the extent possible, if facial recognition technology is present, a business must provide the person a concise notice that facial recognition technology is present and, if contextually appropriate, where the person can find more information about the business’s use of facial recognition technology, along with documentation that explains, in terms the person is able to understand, the general capabilities and limitations of the technology.
SB 5376 would also require controllers to obtain consent from consumers prior to deploying facial recognition services in physical premises open to the public.  The placement of a conspicuous notice in physical premises that clearly conveys that facial recognition services are being used would constitute a consumer’s consent to the use of such facial recognition services when that consumer enters premises displaying such notice. Under S. 847, affirmative consent would be effective only if a business makes available to a person a notice that describes, in terms persons are able to understand, the business’s specific practices regarding the collection, storage, and use of facial recognition data.  These practices include the reasonably foreseeable purposes, or examples, for which the business collects and shares information derived from facial recognition technology or uses facial recognition technology; information about the business’s data retention and de-identification practices; and whether a person can review, correct, or delete information derived from facial recognition technology.  Under SB 5376, processors that provide facial recognition services would be required to provide documentation that includes general information explaining the capabilities and limitations of face recognition technology in terms that customers and consumers can understand. S. 847 would prohibit a business from knowingly using a facial recognition technology to discriminate against a person in violation of applicable federal or state law (presumably civil rights laws, consumer protection laws, and others), repurpose facial recognition data for a purpose that is different from those presented to the person, or share the facial recognition data with an unaffiliated third party without affirmative consent (separate from the opt-in affirmative consent noted above).
SB 5376 would prohibit processors of face data that provide facial recognition services from allowing controllers to use those services to unlawfully discriminate under federal or state law against individual consumers or groups of consumers. S. 847 would require meaningful human review prior to making any final decision based on the output of facial recognition technology if the final decision may result in a reasonably foreseeable and material physical or financial harm to an end user or may be unexpected or highly offensive to a reasonable person. SB 5376 would require controllers that use facial recognition for profiling to employ meaningful human review prior to making final decisions based on such profiling where those final decisions produce legal effects or similarly significant effects concerning consumers. Decisions producing legal effects or similarly significant effects include, but are not limited to, denial of consequential services or support, such as financial and lending services, housing, insurance, education enrollment, criminal justice, employment opportunities, and health care services. S. 847 would require a regulated business that makes a facial recognition technology available as an online service to make available an application programming interface (API) to enable an independent testing company to conduct reasonable tests of the facial recognition technology for accuracy and bias. SB 5376 would require providers of commercial facial recognition services that make their technology available as an online service for developers and customers to use in their own scenarios to make available an API or other technical capability, chosen by the provider, to enable third parties that are legitimately engaged in independent testing to conduct reasonable tests of those facial recognition services for accuracy and unfair bias. S.
847 would provide exceptions for certain facial recognition technology uses, including a product or service designed for personal file management or photo or video sorting or storage, if the facial recognition technology is not used for unique personal identification of a specific individual, as well as uses involving the identification of public figures for journalistic media created in the public interest. The law would also provide exceptions for the identification of public figures in copyrighted material for theatrical release, and for use in an emergency involving imminent danger or risk of death or serious physical injury to an individual. It would also provide exceptions for certain security applications.  Even so, the noted exceptions would not permit businesses to conduct the mass scanning of faces in spaces where persons do not have a reasonable expectation that facial recognition technology is being used on them. SB 5376 would provide exceptions in the case of complying with federal, state, or local laws, rules, or regulations, or with a civil, criminal, or regulatory inquiry, investigation, subpoena, or summons by federal, state, local, or other governmental authorities.  The law would also provide exemptions to cooperate with law enforcement agencies concerning conduct or activity that the controller or processor reasonably and in good faith believes may violate federal, state, or local law, or to investigate, exercise, or defend legal claims, or to prevent or detect identity theft, fraud, or other criminal activity, or to verify identities.  Other exceptions or exemptions are also provided. Under S. 847, violations of the law would be defined as unfair or deceptive acts or practices under Section 18(a)(1)(B) of the Federal Trade Commission Act (15 USC 57a(a)(1)(B)).  The FTC would enforce the new law and would have authority to assert its penalty powers pursuant to 15 USC 41 et seq.
Moreover, state attorneys general, or any other officer of a state who is authorized by the state to do so, may, upon notice to the FTC, bring a civil action on behalf of state residents if they believe that an interest of the residents has been or is being threatened or adversely affected by a practice of a business covered by the new law that violates one of the law’s prohibitions.  The FTC may intervene in such a civil action.  SB 5376 would provide that the state’s attorney general may bring an action in the name of the state, or as parens patriae on behalf of persons residing in the state, to enforce the law. S. 847 would also require the FTC to consult with the National Institute of Standards and Technology (NIST) to promulgate regulations within 180 days after enactment describing basic data security, minimization, and retention standards; defining actions that are harmful and highly offensive; and expanding the list of exceptions noted above in cases where it is impossible for a business to obtain affirmative consent from, or provide notice to, persons. S. 847 would not preempt tougher state laws covering facial recognition technology and the collection and use of face data, or other state or federal privacy and security laws. Part II of this post will discuss how facial recognition and face data regulation may impact businesses. Read more »
  • Government Plans to Issue Technical Standards For Artificial Intelligence Technologies
    On February 11, 2019, the White House published a plan for developing and protecting artificial intelligence technologies in the United States, citing economic and national security concerns among other reasons for the action.  Coming two years after Beijing’s 2017 announcement that China intends to be the global leader in AI by 2030, President Trump’s Executive Order on Maintaining American Leadership in Artificial Intelligence lays out five principles for AI, including “development of appropriate technical standards and reduc[ing] barriers to the safe testing and deployment of AI technologies in order to enable the creation of new AI-related industries and the adoption of AI by today’s industries.”  The Executive Order, which lays out a framework for an “American AI Initiative” (AAII), tasks the White House’s National Science and Technology Council (NSTC) Select Committee on Artificial Intelligence, established in 2018, with identifying federal government agencies to develop and implement the technical standards (so-called “implementing agencies”). Unpacking the AAII’s technical standards principle suggests two things.  First, federal governance of AI under the Trump Administration will favor a policy and standards governance approach over a more onerous command-and-control-type regulatory agency rulemaking approach leading to regulations (which the Trump administration often refers to as “barriers”).  Second, no technical standards will be adopted that stand in the way of the development or use of AI technologies at the federal level if they impede economic and national security goals. So what sort of technical standards might the Select Committee on AI and the implementing agencies come up with?  And how might those standards impact government agencies, government contractors, and even private businesses from a legal perspective? 
The AAII is short on answers to those questions, and we won’t know more until at least August 2019, when the Secretary of Commerce, through the Director of the National Institute of Standards and Technology (NIST), is required by the AAII to issue a plan “for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies.”  Even so, it is instructive to review some relevant technical standards and related legal issues in anticipation of what might lie ahead for the United States AI industry. A survey of technical standards used across a spectrum of different industries shows that they can take many different forms, but they are often classified as either prescriptive or performance-based.  Pre-determined prescriptive metrics may specify requirements for things like accuracy, quality, output, materials, composition, and consumption.  In the AI space, a prescriptive standard could involve a benchmark for classification accuracy (loss or error) using a standardized data set (i.e., how well the system works), or a numerical upper limit on power consumption, latency, weight, and size.  Prescriptive standards can be one-size-fits-all, or they can vary. Performance-based standards describe practices (minimum, best, commercially reasonable, etc.) focusing on results to be achieved.  In many situations, the performance-based approach provides more flexibility compared to using prescriptive standards.  In the context of AI, a performance-based standard could require a computer vision system to detect all objects in a specified field of view, and tag and track them for a period of time.  How the developer achieves that result is less important under a performance-based standard. Technical standards may also specify requirements for the completion of risk assessments to numerically compare an AI system’s expected benefits and impacts to various alternatives.
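The distinction between the two approaches can be sketched in a few lines of code. The accuracy floor, latency ceiling, and object-detection requirement below are invented values used only to illustrate the contrast; they are not drawn from the AAII or any published standard.

```python
# Illustrative sketch of prescriptive vs. performance-based standards.
# The 0.95 accuracy floor and 50 ms latency ceiling are invented numbers.

def meets_prescriptive_standard(accuracy, latency_ms,
                                min_accuracy=0.95, max_latency_ms=50.0):
    """A prescriptive standard fixes numeric thresholds up front."""
    return accuracy >= min_accuracy and latency_ms <= max_latency_ms

def meets_performance_standard(detected_objects, ground_truth_objects):
    """A performance-based standard states the result to achieve (here,
    detect every object in the field of view) without dictating how."""
    return set(ground_truth_objects) <= set(detected_objects)

print(meets_prescriptive_standard(accuracy=0.97, latency_ms=42.0))  # True
print(meets_performance_standard({"car", "person"}, {"person"}))    # True
```

Note that the performance-based check says nothing about the model architecture, training data, or hardware used to produce the detections, which is precisely the flexibility the text above describes.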
Compliance with technical standards may be judged by advisory committees who follow established procedures for independent and open review.  Procedures may be established for enforcement of technical standards when non-compliance is observed.  Depending on the circumstances, technical standards may be published for the public to see or they may be maintained in confidence (e.g., in the case of national security).  Technical standards are often reviewed on an ongoing or periodic basis to assess the need for revisions reflecting changes in previous assumptions (important in cases when rapid technological improvements or shifts in priorities occur). Under the direction of the AAII, the White House’s Select Committee and various designated implementing agencies could develop new technical standards for AI technologies, but they could also adopt (and possibly modify) standards published by others.  The International Organization for Standardization (ISO), the American National Standards Institute (ANSI), the National Institute of Standards and Technology (NIST), and the Institute of Electrical and Electronics Engineers (IEEE) are among the private and public organizations that have developed or are developing AI standards or guidance.  Individual state legislatures, academic institutions, and tech companies have also published guidance, principles, and areas of concern that could be applicable to the development of technical and non-technical standards for AI technologies.  By way of example, the ISO’s technical standard for “big data” architecture includes use cases for deep learning applications and large-scale unstructured data collection.  The Partnership on AI, a private non-profit organization whose board consists of representatives from IBM, Google, Microsoft, Apple, Facebook, Amazon, and others, has developed what it considers “best practices” for AI technologies.
Under the AAII, the role of technical standards, in addition to helping build an AI industry, will be to “minimize vulnerability to attacks from malicious actors and reflect Federal priorities for innovation, public trust, and public confidence in systems that use AI technologies.”  It is hard to imagine a purely technical standard addressing trust and confidence, though a non-technical standards-setting process could address those issues by, for example, introducing measures related to fairness, accountability, and transparency.  Consider the example of delivering AI-based healthcare services at Veterans Administration facilities, where trust and confidence could be reflected in non-technical standards that provide for the publication of clear, understandable explanations about how an AI system works and how it made a decision that affected a patient’s care.  Addressing trust and confidence could also be reflected in requirements for open auditing of AI systems.  The IEEE’s “Ethically Aligned Design” reference considers these and related issues. Another challenge in developing technical standards is to avoid incorporating patented technologies “essential” to the standards adopted by the government, or if unavoidable, to develop rules for disclosure and licensing of essential patents.  As the court in Apple v. Motorola explained, “[s]ome technological standards incorporate patented technology. If a patent claims technology selected by a standards-setting organization, the patent is called an ‘essential patent.’ Many standards-setting organizations have adopted rules related to the disclosure and licensing of essential patents. The policies often require or encourage members of the organization to identify patents that are essential to a proposed standard and to agree to license their essential patents on fair, reasonable and nondiscriminatory terms to anyone who requests a license. (These terms are often referred to by the acronyms FRAND or RAND.)
Such rules help to insure that standards do not allow the owners of essential patents to abuse their market power to extort competitors or prevent them from entering the marketplace.”  See Apple, Inc. v. Motorola Mobility, Inc., 886 F. Supp. 2d 1061 (W.D. Wis. 2012).  Given the proliferation of new AI-related US patents issued to tech companies in recent years, the likelihood that government technical standards will encroach on some of those patents seems high. For government contractors, AI technical standards could be imposed on them through the government contracting process.  A contracting agency could incorporate new AI technical standards by reference in government contracts, and those standards would flow through to individual task and work orders performed by contractors under those contracts.  Thus, government contractors would need to review and understand the technical standards in the course of executing a written scope of work to ensure they are in compliance.  Sponsoring agencies would likely be expected to review contractor deliverables to measure compliance with applicable AI technical standards.  In the case of non-compliance, contracting officials and their sponsoring agency would be expected to deploy their enforcement authority to ensure problems are corrected, which could include monetary penalties assessed against contractors. Although private businesses (i.e., not government contractors) may not be directly affected by agency-specific technical standards developed under the AAII, customers of those private businesses could, absent other relevant or applicable technical standards, use the government’s AI technical standards as a benchmark when evaluating a business’s products and services.
Moreover, even if federal AI-based technical standards do not directly apply to private businesses, there is certainly the possibility that Congress could legislatively mandate the development of similar or different technical and non-technical standards and other requirements applicable to a business’ AI technologies sold and used in commerce. The president’s Executive Order on AI has turned an “if” into a “when” in the context of federal governance of AI technologies.  If you are a stakeholder, now is a good time to put resources into closely monitoring developments in this area to prepare for possible impacts. Read more »
  • Washington State Seeks to Root Out Bias in Artificial Intelligence Systems
    The harmful effects of biased algorithms have been widely reported.  Indeed, some of the world’s leading tech companies have been accused of producing applications, powered by artificial intelligence (AI) technologies, that were later discovered to exhibit certain racial, cultural, gender, and other biases.  Some of the anecdotes are quite alarming, to say the least.  And while not all AI applications have these problems, it only takes a few concrete examples before lawmakers begin to take notice. In New York City, lawmakers began addressing algorithmic bias in 2017 with the introduction of legislation aimed at eliminating bias from algorithmic-based automated decision systems used by city agencies.  That effort led to the establishment of a Task Force in 2018 under Mayor de Blasio’s office to examine the issue in detail.  A report from the Task Force is expected this year. At the federal level, an increased focus by lawmakers on algorithmic bias issues began in 2018, as reported previously on this website (link) and elsewhere.  Those efforts, by both House and Senate members, focused primarily on gathering information from federal agencies like the FTC, and issuing reports highlighting the bias problem.  Expect congressional hearings in the coming months. Now, Washington State lawmakers are addressing bias concerns.  In companion bills SB-5527 and HB-1655, introduced on January 23, 2019, lawmakers in Olympia drafted a rather comprehensive piece of legislation aimed at governing the use of automated decision systems by state agencies, including the use of automated decision-making in the triggering of automated weapon systems.  As many in the AI community have discussed, eliminating algorithmic-based bias requires consideration of fairness, accountability, and transparency, issues the Washington bills appear to address.  But the bills also have teeth, in the form of a private right of action allowing those harmed to sue. 
Although the aspirational language of legislation often provides only a cursory glimpse at how stakeholders might be affected under a future law, especially in those instances where, as here, an agency head is tasked with producing implementing regulations, an examination of automated decision system legislation like Washington’s is useful if only to understand how states and the federal government might choose to regulate aspects of AI technologies and their societal impacts. Purpose and need for anti-bias algorithm legislation According to the bills’ sponsors, in Washington, automated decision systems are rapidly being adopted to make or assist in core decisions in a variety of government and business functions, including criminal justice, health care, education, employment, public benefits, insurance, and commerce.  These systems, the lawmakers say, are often deployed without public knowledge and are unregulated.  Their use raises concerns about due process, fairness, accountability, and transparency, as well as other civil rights and liberties.  Moreover, reliance on automated decision systems without adequate transparency, oversight, or safeguards can undermine market predictability, harm consumers, and deny historically disadvantaged or vulnerable groups the full measure of their civil rights and liberties. Definitions, Prohibited Actions, and Risk Assessments The new Washington law would define an “automated decision system” as any algorithm, including one incorporating machine learning or other AI techniques, that uses data-based analytics to make or support government decisions, judgments, or conclusions.  The law would distinguish between an “automated final decision system,” which makes “final” decisions, judgments, or conclusions without human intervention, and an “automated support decision system,” which provides information to inform the final decision, judgment, or conclusion of a human decision maker.
Under the new law, in using an automated decision system, an agency would be prohibited from discriminating against an individual, or treating an individual less favorably than another, in whole or in part, on the basis of one or more factors enumerated in RCW 49.60.010.  An agency would be outright prohibited from developing, procuring, or using an automated final decision system to make a decision impacting the constitutional or legal rights, duties, or privileges of any Washington resident, or to deploy or trigger any weapon. Both versions of the bill include lengthy provisions detailing algorithmic accountability reports that agencies would be required to produce and publish for public comment.  Among other things, these reports must include clear information about the type or types of data inputs that a technology uses; how that data is generated, collected, and processed; and the type or types of data the systems are reasonably likely to generate, which could help reveal the degree of bias inherent in a system’s black box model.  The accountability reports also must identify and provide data showing benefits; describe where, when, and how the technology is to be deployed; and identify if results will be shared with other agencies. An agency that deploys a system pursuant to an approved report would then be required to follow the conditions set forth in the report. Although an agency’s choice to classify its automated decision system as one that makes “final” or “support” decisions may be given deference by courts, the designations are likely to be challenged if the classification is not justified.  One reason a party might challenge a designation is to obtain an injunction, which may be available where an agency relies on a final decision made by an automated decision system, whereas an injunction may be more difficult to obtain where algorithmic decisions merely support a human decision-maker.
The distinction between the two designations may also be important during discovery, under a growing evidentiary theory of “machine testimony” that includes cross-examining machine witnesses by gaining access to source code and, in the case of machine learning models, the data the developer used to train a machine’s model.  Supportive decision systems involving a human making a final decision may warrant a different approach to discovery. Conditions impacting software makers Under the proposed law, public agencies that use automated decision systems would be required to publicize the system’s name, its vendor, and the software version, along with the decision it will be used to make or support.  Notably, a vendor must make its software and the data used in the software “freely available” before, during, and after deployment for agency or independent third-party testing, auditing, or research to understand its impacts, including potential bias, inaccuracy, or disparate impacts.  The law would require any procurement contract for an automated decision system entered into by a public agency to include provisions requiring vendors to waive any legal claims that may impair the “freely available” requirement.  For example, contracts with vendors could not contain nondisclosure impairment provisions, such as those related to assertions of trade secrets. Accordingly, software companies that make automated decision systems will face the prospect of waiving proprietary and trade secret rights and opening up their algorithms and data to scrutiny by agencies, third parties, and researchers (presumably, under terms of confidentiality).  If litigation were to ensue, it could be difficult for vendors to resist third-party discovery requests on the basis of trade secrets, especially if information about auditing of the system by the state agency and third-party testers/researchers is available through administrative information disclosure laws.
A vendor who chooses to reveal the inner workings of a black box software application without safeguards should consider at least the financial, legal, and market risks associated with such disclosure. Contesting automated decisions and private right of action Under the proposed law, public agencies would be required to announce procedures for how an individual impacted by a decision made by an automated decision system can contest the decision.  In particular, any decision made or informed by an automated decision system would be subject to administrative appeal, an immediate suspension if a legal right, duty, or privilege is impacted by the decision, and a potential reversal by a human decision-maker through an open due process procedure.  The agency must also explain the basis for its decision to any impacted individual in terms “understandable” to laypersons including, without limitation, by requiring the software vendor to create such an explanation.  Thus, vendors may become material participants in administrative proceedings involving contested decisions made by their software. In addition to administrative relief, the law would provide a private right of action for injured parties to sue public agencies in state court.  In particular, any person who is injured by a material violation of the law, including denial of any government benefit on the basis of an automated decision system that does not meet the standards of the law, may seek injunctive relief, including restoration of the government benefit in question, declaratory relief, or a writ of mandate to enforce the law. For litigators representing injured parties in such cases, dealing with evidentiary issues involving information produced by machines would likely follow Washington judicial precedent in areas of administrative law, contracts, tort, civil rights, the substantive law involving the agency’s jurisdiction (e.g., housing, law enforcement, etc.), and even product liability.
In the case of AI-based automated decision systems, however, special attention may need to be given to the nuances of machine learning algorithms to prepare experts and take depositions in cases brought under the law.  Although the aforementioned algorithmic accountability report could be useful evidence for both sides in an automated decision system lawsuit, merely understanding the result of an algorithmic decision may not be sufficient when assessing if a public agency was thorough in its approach to vetting a system.  Being able to describe how the automated decision system works will be important.  For agencies, understanding the nuances of the software products they procure will be important to establish that they met their duty to vet the software under the new law. For example, where AI machine learning models are involved, new data, or even previous data used in a different way (i.e., a different cross-validation scheme or a random splitting of data into new training and testing subsets), can generate models that produce slightly different outcomes.  While small, the difference could mean granting or denying agency services to constituents.  Moreover, with new data and model updates comes the possibility of introducing or amplifying bias that was not previously observed.  The Washington bills do not appear to include provisions imposing an on-going duty on vendors to inform agencies when bias or other problems later appear in software updates (though it’s possible the third party auditors or researchers noted above might discover it).  Thus, vendors might expect agencies to demand transparency as a condition set forth in acquisition agreements, including software support requirements and help with developing algorithmic accountability reports.  Vendors might also expect to play a role in defending against claims by those alleging injury, should the law pass.  
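A toy example makes the point above concrete: retraining even a very simple model on a different split of the same data can move its decision boundary enough to flip a borderline outcome. The scores, labels, and midpoint "model" below are invented for illustration only; real AI systems and real agency decisions are far more complex.

```python
# Toy illustration: the same simple model, retrained on two different 70%
# subsets of the same data, yields two different decision thresholds, and
# a borderline applicant flips from "grant" to "deny". All numbers invented.

scores = [0.2, 0.3, 0.35, 0.4, 0.55, 0.6, 0.7, 0.8, 0.85, 0.9]
labels = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]  # 1 = eligible for the benefit

def train_threshold(idx):
    """'Train' by placing the cutoff midway between the class means."""
    pos = [scores[i] for i in idx if labels[i] == 1]
    neg = [scores[i] for i in idx if labels[i] == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

split_a = [0, 1, 2, 3, 5, 6, 7]  # one 70% training subset
split_b = [1, 2, 3, 4, 7, 8, 9]  # a different 70% subset of the same data

applicant = 0.55  # a borderline applicant's score
for name, split in (("A", split_a), ("B", split_b)):
    t = train_threshold(split)
    print(name, round(t, 3), "grant" if applicant >= t else "deny")
# prints: A 0.506 grant
#         B 0.625 deny
```

The two runs disagree about the same applicant even though the underlying data never changed, only how it was partitioned, which is why the vetting and ongoing-monitoring duties discussed above matter.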
And they could be asked to shoulder some of the liability either through indemnification or other means of contractual risk-shifting to the extent the bills add damages as a remedy. Read more »
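The point above about re-splitting data can be made concrete with a short sketch. This is illustrative only: synthetic data and a plain least-squares scorer stand in for whatever model an agency might actually deploy, and all names and numbers are invented.

```python
# Illustrative sketch (not from any filing): the same dataset, split
# differently into training and testing subsets, yields slightly different
# fitted models -- and a borderline case can be scored differently as a result.
import numpy as np

rng = np.random.default_rng(42)
X = np.column_stack([np.ones(400), rng.normal(size=(400, 3))])  # intercept + features
true_w = np.array([0.0, 1.0, -0.5, 0.25])
y = (X @ true_w + rng.normal(scale=0.8, size=400) > 0).astype(float)

borderline = np.array([1.0, 0.05, 0.0, 0.0])  # a case near the decision boundary

scores = []
for seed in range(10):
    # Same data, same algorithm -- only the 70/30 split changes each time.
    idx = np.random.default_rng(seed).permutation(400)
    train = idx[:280]
    w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    scores.append(float(borderline @ w))

# If the agency grants a benefit only when the score is >= 0.5, this
# split-to-split variation alone can flip the grant/deny outcome.
print(round(min(scores), 3), round(max(scores), 3))
```

The spread between the minimum and maximum score comes entirely from how the data happened to be partitioned, which is the kind of nuance a litigator or expert may need to surface in discovery.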
  • What’s in a Name? A Chatbot Given a Human Name is Still Just an Algorithm
    Due in part to the learned nature of artificial intelligence technologies, the spectrum of things that exhibit “intelligence” has, in debates over such things, expanded to include certain advanced AI systems.  If a computer vision system can “learn” to recognize real objects and make decisions, the argument goes, its ability to do so can be compared to that of humans and thus should not be excluded from the intelligence debate.  By extension, AI systems that can exhibit intelligence traits should not be treated like mere goods and services, and thus laws applicable to such goods and services ought not to apply to them. In some ways, the marketing of AI products and services using names commonly associated with humans, such as “Alexa,” “Sophia,” and “Siri,” buttresses the argument that laws applicable to non-human things should not strictly apply to AI.  For now, however, lawmakers and the courts struggling with practical questions about regulating AI technologies can justifiably apply traditional goods and services laws to named AI systems just as they do to non-named systems.  After all, a robot or chatbot doesn’t become more humanlike and less like a man-made product merely because it’s been anthropomorphized.  Even so, when future technological breakthroughs suggest artificial general intelligence (AGI) is on the horizon, lawmakers and the courts will be faced with the challenge of amending laws to account for the differences between AGI systems and today’s narrow AI and other “unintelligent” goods and services.  For now, it’s instructive to consider why the rise in the use of names for AI systems is not a good basis for triggering greater attention by lawmakers.  Indeed, as suggested below, other characteristics of AI systems may be more useful in deciding when laws need to be amended.  To begin, the recent case of a chatbot named “Erica” is presented. 
The birth of a new bot In 2016, machine learning developers at Bank of America created a “virtual financial assistant” application called “Erica” (derived from the bank’s name America).  After conducting a search of existing uses of the name Erica in other commercial endeavors, and finding none in connection with a chatbot like theirs, BoA sought federal trademark protection for the ERICA mark in October 2016.  The US Patent and Trademark Office concurred with BoA’s assessment of prior uses and registered the mark on July 31, 2018.  Trademark registrations are issued in connection with actual uses of words, phrases, and logos in commerce, and in the case of BoA, the ERICA trademark was registered in connection with computer financial software, banking and financial services, and personal assistant software in banking and financial SaaS (software as a service).  The Erica app is currently described as possessing the utility to answer customer questions and make banking easier.  During its launch, BoA used the “she” pronoun when describing the app’s AI and predictive analytics capabilities, ostensibly because Erica is a stereotypically female name, but also because of the apparent female-sounding voice the app outputs as part of its human-bot interface. One of the existing uses of an Erica-like mark identified by BoA was an instance of “E.R.I.C.A,” which appeared in October 2010 when Erik Underwood, a Colorado resident, filed a Georgia trademark registration application for “E.R.I.C.A. (Electronic Repetitious Informational Clone Application).”  See Underwood v. Bank of Am., slip op., No. 18-cv-02329-PAB-MEH (D. Colo. Dec. 19, 2018).  On his application, Mr. Underwood described E.R.I.C.A. as “a multinational computer animated woman that has slanted blue eyes and full lips”; he also attached a graphic image of E.R.I.C.A. to his application.  Mr. 
Underwood later filed a federal trademark application (in September 2018) for an ERICA trademark (without the separating periods).  At the time of his lawsuit, his only use of E.R.I.C.A. was on a searchable movie database website. In May 2018, Mr. Underwood sent a cease-and-desist letter to BoA regarding BoA’s use of Erica, and then filed a lawsuit in September 2018 against the bank alleging several causes of action, including “false association” under § 43(a) of the Lanham Act, 15 U.S.C. § 1125(a)(1)(A).  Section 43(a) states, in relevant part, that any person who, on or in connection with any goods or services, uses in commerce a name or a false designation of origin which is likely to cause confusion, or to cause mistake, or to deceive as to the affiliation, connection, or association of such person with another person, or as to the origin, sponsorship, or approval of his or her goods, services, or commercial activities by another person, shall be liable in a civil action by a person who believes that he or she is likely to be damaged by such act.  In testimony, Mr. Underwood stated that the E.R.I.C.A. service mark was being used in connection with “verbally tell[ing] the news and current events through cell phone[s] and computer applications” and he described plans to apply an artificial intelligence technology to E.R.I.C.A.  Mr. Underwood requested the court enter a preliminary injunction requiring BoA to cease using the Erica name. Upon considering the relevant preliminary injunction factors and applicable law, the District Court denied Mr. Underwood’s request for an injunction on several grounds, including the lack of relevant uses of E.R.I.C.A. in the same classes of goods and services in which BoA’s Erica was being used. Giving AI a persona may boost its economic value and market acceptance Not surprisingly, the District Court’s preliminary injunction analysis rested entirely on the perception and treatment of the Erica and E.R.I.C.A. 
systems as nothing more than services, something neither party disputed or challenged.  Indeed, each party’s case-in-chief depended on their convincing the court that their applications fit squarely within the definition of goods and services despite the human-sounding names they chose to attach to them.  The court’s analysis, then, illuminated one of the public policies underlying laws like the Lanham Act, which is the protection of the economic benefits associated with goods and services created by people and companies.  The name Erica provides added economic value to each party’s creation and is an intangible asset associated with their commercial activities. The use of names has long been found to provide value to creators and owners, and not just in the realm of hardware and software.  Fictional characters like “Harry Potter,” which are protected under copyright and trademark laws, can be intellectual assets having tremendous economic value.  Likewise, namesake names carried over to goods and services, like IBM’s “Watson”–named after the company’s first CEO, Thomas J. Watson–provide real economic benefits that might not have been achieved without a name, or even with a different name.  In the case of humanoid robots, like Hanson Robotics’ “Sophia,” which is endowed with aspects of AI technologies and was reportedly granted “citizenship” status in Saudi Arabia, certain perceived and real economic value is created by distinguishing the system from all other robots by using a real name (as compared to, for example, a simple numerical designation). On the other end of the spectrum are names chosen for humans, the uses of which are generally unrestricted from a legal perspective.  Thus, naming one’s baby “Erica” or even “Harry Potter” shouldn’t land a new parent in hot water.  At the same time, those parents aren’t able to stop others from using the same names for other children.  
Although famous people may be able to prevent others from using their names (and likenesses) for commercial purposes, the law only recognizes those situations when the economic value of the name or likeness is established (though demonstrating economic value is not always necessary under some state right of publicity laws).  Some courts have gone so far as to liken the right to protect famous personas to a type of trademark in a person’s name because of the economic benefits attached to it, much the same way a company name, product name, or logo attached to a product or service can add value. Futurists might ask whether a robot or chatbot that demonstrates a degree of intelligence and is endowed with unique human-like traits, including a unique persona (e.g., a name and face generated by a generative adversarial network) and the ability to recognize and respond to emotions (e.g., using facial coding algorithms in connection with a human-robot interface), making it sufficiently differentiable from all other robots and chatbots (at least superficially), should receive special treatment.  So far, endowing AI technologies with a human form, gender, and/or a name has not motivated lawmakers and policymakers to pass new laws aimed at regulating AI technologies.  Indeed, lawmakers and regulators have so far proposed, and in some cases passed, laws and regulations placing restrictions on AI technologies based primarily on their specific applications (uses) and results (impacts on society).  For example, lawmakers are focusing on bot-generated spread and amplification of disinformation on social media, law enforcement use of facial recognition, the private business collection and use of face scans, users of drones and highly automated vehicles in the wild, production of “deepfake” videos, the harms caused by bias in algorithms, and others.  
This application/results-focused approach to regulating AI technology, which explicitly or implicitly acknowledges certain normative standards or criteria for acceptable actions, is consistent with how lawmakers have treated other technologies in the past. Thus, marketers, developers, and producers of AI systems who personify their chatbots and robots may sleep well knowing their efforts may add value to their creations and alter customer acceptance and attitudes about their AI systems, but those efforts are unlikely to cause lawmakers to suddenly consider regulating their creations. At some point, however, advanced AI systems will need to be characterized in some normative way if they are to be governed as a new class of things.  The use of names, personal pronouns, personas, and metaphors associating bots to humans may frame bot technology in a way that ascribes particular values and norms to it (Jones 2017).  These might include characteristics such as utility, usefulness (including positive benefits to society), adaptability, enjoyment, sociability, companionship, and perceived or real “behavioral” control, which some argue are important in evaluating user acceptance of social robots.  Perhaps these and other factors, in addition to some measure of intelligence, need to be considered when deciding if an advanced AI bot or chatbot should be treated under the law as something other than a mere good or service.  The subjective nature of those factors, however, would obviously make it challenging to create legally-sound definitions of AI for governance purposes.  Of course, laws don’t have to be precise (and sometimes they are intentionally written without precision to provide flexibility in their application and interpretation), but a vague law won’t help an AI developer or marketer know whether his or her actions and products are subject to an AI law.  
Identifying whether to treat bots as goods and services or as something else deserving of a different set of regulations, like those applicable to humans, is likely to involve a suite of factors that permit classifying advanced AI on the spectrum somewhere between goods/services and humans. Recommended reading  The Oxford Handbook of Law, Regulation, and Technology is one of my go-to references for timely insight about topics discussed on this website.  In the case of this post, I drew inspiration from Chapter 25: Hacking Metaphors in the Anticipatory Governance of Emerging Technology: The Case of Regulating Robots, by Meg Leta Jones and Jason Millar. Read more »
  • The Role of Explainable Artificial Intelligence in Patent Law
    Although the notion of “explainable artificial intelligence” (AI) has been suggested as a necessary component of governing AI technology, at least for the reason that transparency leads to trust and better management of AI systems in the wild, one area of US law already places a burden on AI developers and producers to explain how their AI technology works: patent law.  Patent law’s focus on how AI systems work was not borne from a Congressional mandate. Rather, the Supreme Court gets all the credit–or blame, as some might contend–for this legal development, which began with the Court’s 2014 decision in Alice Corp. Pty Ltd. v. CLS Bank International. Alice established the legal framework for assessing whether an invention fits in one of patent law’s patent-eligible categories (i.e., any “new and useful process, machine, manufacture, or composition of matter” or improvements thereof) or is a patent-ineligible concept (i.e., law of nature, natural phenomenon, or abstract idea).  Alice Corp. Pty Ltd. v. CLS Bank International, 134 S. Ct. 2347, 2354–55 (2014); 35 USC § 101. To understand how the idea of “explaining AI” came to be following Alice, one must look at the very nature of AI technology.  At their core, AI systems based on machine learning models generally transform input data into actionable output data, a process US courts and the Patent Office have historically found to be patent-ineligible.  Consider a decision by the US Court of Appeals for the Federal Circuit, whose judges are selected for their technical acumen as much as for their understanding of the nuances of patent and other areas of law, that issued around the same time as Alice: “a process that employs mathematical algorithms to manipulate existing information to generate additional information is not patent eligible.”  Digitech Image Techs., LLC v. Elecs. for Imaging, Inc., 758 F.3d 1344, 1351 (Fed. Cir. 2014).  
While Alice did not specifically address AI or mandate anything resembling explainable AI, it nevertheless spawned a progeny of Federal Circuit, district court, and Patent Office decisions that did just that.  Notably, those decisions arose not because of notions that individuals impacted by AI algorithmic decisions ought to have the right to understand how those decisions were made or why certain AI actions were taken, but because explaining how AI systems work helps satisfy the quid pro quo that is fundamental to patent law: an inventor who discloses to the world details of what she has invented is entitled to a limited legal monopoly on her creation (provided, of course, the invention is patentable). The Rise of Algorithmic Scrutiny Alice arrived not long after Congress passed patent reform legislation called the America Invents Act (AIA) of 2011, provisions of which came into effect in 2012 and 2013.  In part, the AIA targeted a decade of what many consider a time of abusive patent litigation brought against some of the largest tech companies in the world and thousands of mom-and-pop and small business owners who were sued for doing anything computer-related.  This litigious period saw the term “patent troll” used more often to describe patent assertion companies that bought up dot-com-era patents covering the very basics of using the Internet and computerized business methods and then sued to collect royalties for alleged infringement. Not surprisingly, some of the same big tech companies that pushed for patent reform provisions now in the AIA to curb patent litigation in the field of computer technology also filed amicus curiae briefs in the Alice case to further weaken software patents.  The Supreme Court’s unanimous decision in Alice helped curtail troll-led litigation by formalizing a procedure, one that lower court judges could easily adopt, for excluding certain software-related inventions from the list of inventions that are patentable. 
Under Alice, a patent claim–the language used by inventors to describe what they claim to be their invention–falls outside § 101 when it is “directed to” one of the patent-ineligible concepts noted above.  If so, Alice requires consideration of whether the particular elements of the claim, evaluated “both individually and ‘as an ordered combination,'” add enough to “‘transform the nature of the claim'” into one of the patent-eligible categories.  Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1353 (Fed. Cir. 2016) (quoting Alice, 134 S. Ct. at 2355).  While simple in theory, it took years of court and Patent Office decisions to explain how that 2-part test is to be employed, and only more recently how it applies to AI technologies.  Today, the Patent Office and courts across the US routinely find that algorithms are abstract (even though algorithms, including certain mental processes embodied in algorithmic form performed by a computer, are by most measures useful processes).  According to the Federal Circuit, algorithmic-based data collection, manipulation, and communication–functions most AI algorithms perform–are abstract. Artificial Intelligence, Meet Alice In a bit of ironic foreshadowing, the Supreme Court issued Alice in the same year that major advances in AI technologies were being announced, such as Google’s deep neural network architecture that prevailed in the 2014 ImageNet challenge (ILSVRC) and Ian Goodfellow’s generative adversarial network (GAN) model, both of which were major contributions to the field of computer vision. Even as more breakthroughs were being announced, US courts and the Patent Office began issuing Alice decisions regarding AI technologies and explaining why it’s crucial for inventors to explain how their AI inventions work to satisfy the second half of Alice’s 2-part test. In Purepredictive, Inc. v. 
H2O.AI, Inc., for example, the US District Court for the Northern District of California considered the claims of US Patent 8,880,446, which, according to the patent’s owner, involves “AI driving machine learning ensembling.”  The district court characterized the patent as being directed to a software method that performs “predictive analytics” in three steps.  Purepredictive, Inc. v. H2O.AI, Inc., slip op., No. 17-cv-03049-WHO (N.D. Cal. Aug. 29, 2017).  In the method’s first step, it receives data and generates “learned functions,” or, for example, regressions from that data. Second, it evaluates the effectiveness of those learned functions at making accurate predictions based on the test data. Finally, it selects the most effective learned functions and creates a rule set for additional data input. The court found the claims invalid on the grounds that they “are directed to the abstract concept of the manipulation of mathematical functions and make use of computers only as tools, rather than provide a specific improvement on a computer-related technology.” The claimed method, the district court said, is merely “directed to a mental process” performed by a computer, and “the abstract concept of using mathematical algorithms to perform predictive analytics” by collecting and analyzing information.  The court explained that the claims “are mathematical processes that not only could be performed by humans but also go to the general abstract concept of predictive analytics rather than any specific application.” In Ex Parte Lyren, the Patent Office’s Appeals Board, made up of three administrative patent judges, rejected a claim directed to customizing video on a computer as being abstract and thus not patent-eligible.  
In doing so, the board disagreed with the inventor, who argued the claimed computer system, which generated and displayed a customized video by evaluating a user’s intention to purchase a product and information in the user’s profile, was an improvement in the technical field of generating videos. The claimed customized video, the Board found, could be any video modified in any way.  That is, the rejected claims were not directed to the details of how the video was modified, but rather to the result of modifying the video.  Citing precedent, the board reiterated that “[i]n applying the principles emerging from the developing body of law on abstract ideas under section 101, … claims that are ‘so result-focused, so functional, as to effectively cover any solution to an identified problem’ are frequently held ineligible under section 101.”  Ex Parte Lyren, No. 2016-008571 (PTAB, June 25, 2018) (citing Affinity Labs of Texas, LLC v. DirecTV, LLC, 838 F.3d 1253, 1265 (Fed. Cir. 2016) (quoting Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1356 (Fed. Cir. 2016)); see also Ex parte Colcernian et al., No. 2018-002705 (PTAB, Oct. 1, 2018) (rejecting claims that use result-oriented language as not reciting the specificity necessary to show how the claimed computer processor’s operations differ from prior human methods, and thus are not directed to a technological improvement but rather are directed to an abstract idea). Notably, the claims in Ex Parte Lyren were also initially rejected as failing to satisfy a different patentability test–the written description requirement.  35 USC § 112.  
In rejecting the claims as lacking sufficient description of the invention, the Patent Office Examiner found that the algorithmic features of the inventor’s claim were “all implemented inside a computer, and therefore all require artificial intelligence [(AI)] at some level” and thus require extensive implementation details that are the “subject of cutting-edge research, e.g.[,] natural language processing and autonomous software agents exhibiting intelligent behavior.” The Examiner concluded that “one skilled in the art would not be persuaded that Applicant possessed the invention” because “it is not readily apparent how to make a device [to] analyze natural language.”  The Appeals Board disagreed and sided with the inventor, who argued that his invention description was comprehensive and went beyond just artificial intelligence implementations.  Thus, while the description of how the invention worked was sufficiently set forth, Lyren’s claims focused too much on the results or application of the technology and thus were found to be abstract. In Ex Parte Homere, which involved claims directed to “a computer-implemented method” for “establishing a communication session between a user of a computer-implemented marketplace and a computer-implemented conversational agent associated with the market-place that is designed to simulate a conversation with the user to gather listing information,” the Appeals Board affirmed an Examiner’s rejection of the claims as being abstract.  Ex Parte Homere, Appeal No. 2016-003447 (PTAB Mar. 29, 2018).  In doing so, the Appeals Board noted that the inventor had not identified anything in the claim or in the written description that would suggest the computer-related elements of the claimed invention represent anything more than “routine and conventional” technologies.  
The most advanced technologies alluded to, the Board found, seemed to be embodiments in which “a program implementing a conversational agent may use other principles, including complex trained Artificial Intelligence (AI) algorithms.”  However, the claimed conversational agent was not so limited.  Instead, the Board concluded that the claims were directed to merely using recited computer-related elements to implement the underlying abstract idea, rather than being limited to any particular advances in the computer-related elements. In Ex Parte Hamilton, a rejection of a claim directed to “a method of planning and paying for advertisements in a virtual universe (VU), comprising…determining, via the analysis module, a set of agents controlled by an Artificial Intelligence…,” was affirmed as being patent ineligible.  Ex Parte Hamilton et al., Appeal No. 2017-008577 (PTAB Nov. 20, 2018).  The Appeals Board found that the “determining” step was insufficient to transform the abstract idea of planning and paying for advertisements into patent-eligible subject matter because the step represented an insignificant data-gathering step and thus added nothing of practical significance to the underlying abstract idea. In Ex Parte Pizzorno, the Appeals Board affirmed a rejection of a claim directed to “a computer implemented method useful for improving artificial intelligence technology” as abstract.  Ex Parte Pizzorno, Appeal No. 2017-002355 (PTAB Sep. 21, 2018).  
In doing so, the Board determined that the claim was directed to the concept of using stored health care information for a user to generate personalized health care recommendations based on Bayesian probabilities, which the Board said involved “organizing human activities and an idea in itself, and is an abstract idea beyond the scope of § 101.”  Considering each of the claim elements in turn, the Board also found that the function performed by the computer system at each step of the process was purely conventional in that each step did nothing more than require a generic computer to perform a generic computer function. Finally, in Ex Parte McAfee, the Appeals Board affirmed a rejection of a claim on the basis that it was “directed to the abstract idea of receiving, analyzing, and transmitting data.”  Ex Parte McAfee, Appeal No. 2016-006896 (PTAB May 22, 2018).  At issue was a method that included “estimating, by the ad service circuitry, a probability of a desired user event from the received user information, and the estimate of the probability of the desired user event incorporating artificial intelligence configured to learn from historical browsing information in the received user information, the desired user event including at least one of a conversion or a click-through, and the artificial intelligence including regression modeling.”  In affirming the rejection, the Board found that the functions performed by the computer at each step of the claimed process were purely conventional and did not transform the abstract method into a patent-eligible one. 
In particular, the step of estimating the probability of the desired user event incorporating artificial intelligence was found to be merely “a recitation of factors to be somehow incorporated, which is aspirational rather than functional and does not narrow the manner of incorporation, so it may include no more than incorporating results from some artificial intelligence outside the scope of the recited steps.” The above and other Alice decisions have led to a few general legal axioms, such as: a claim for a new abstract idea is still an abstract idea; a claim for a beneficial abstract idea is still an abstract idea; abstract ideas do not become patent-eligible because they are new ideas, are not previously well known, and are not routine activity; and, the “mere automation of manual processes using generic computers does not constitute a patentable improvement in computer technology.”  Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 1151 (Fed. Cir. 2016); Ariosa Diagnostics, Inc. v. Sequenom, Inc., 788 F.3d 1371, 1379-80 (Fed. Cir. 2015); Ultramercial, Inc. v. Hulu, LLC, 772 F.3d 709, 715-16 (Fed. Cir. 2014); Credit Acceptance Corp. v. Westlake Servs., 859 F.3d 1044, 1055 (Fed. Cir. 2017); see also SAP Am., Inc. v. Investpic, LLC, slip op. No. 2017-2081, 2018 WL 2207254, at *2, 4-5 (Fed. Cir. May 15, 2018) (finding financial software patent claims abstract because they were directed to “nothing but a series of mathematical calculations based on selected information and the presentation of the results of those calculations (in the plot of a probability distribution function)”); but see Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1241 (Fed. Cir. 2016) (noting that “[t]he Supreme Court has recognized that all inventions embody, use, reflect, rest upon, or apply laws of nature, natural phenomena, or abstract ideas[ ] but not all claims are directed to an abstract idea.”). 
The Focus on How, not the Results Following Alice, patent claims directed to an AI technology must recite features of the algorithm-based system that represent how the algorithm improves a computer-related technology and is not previously well-understood, routine, and conventional.  In PurePredictive, for example, the Northern California district court, which sees many software-related cases due to its proximity to the Bay Area and Silicon Valley, found that the claims of a machine learning ensemble invention were not directed to an invention that “provide[s] a specific improvement on a computer-related technology.”  See also Neochloris, Inc. v. Emerson Process Mgmt LLLP, 140 F. Supp. 3d 763, 773 (N.D. Ill. 2015) (explaining that patent claims including “an artificial neural network module” were invalid under § 101 because neural network modules were described as no more than “a central processing unit – a basic computer’s brain”). Satisfying Alice thus requires claims that focus on a narrow application of how an AI algorithmic model works, rather than on the broader, results-oriented question of what the model is used for.  This is necessary where the idea behind the algorithm itself could be used to achieve many different results.  For example, a claim directed to a mathematical process (even one that is said to be “computer-implemented”), and that could be performed by humans (even if it takes a long time), and that is directed to a result achieved instead of a specific application, will seemingly be patent-ineligible under today’s Alice legal framework. To illustrate, consider an image classification system, one that is based on a convolutional neural network.  Such a system may be patentable if the claimed system improves the field of computer vision technology. 
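To make concrete the kind of architectural “how” detail at issue, the following NumPy sketch spells out a tiny convolutional pipeline explicitly, down to kernel sizes, channel counts, and pooling. It is purely illustrative: the weights are random and untrained, and the architecture is invented for this post, not drawn from any patent.

```python
# Illustrative only: a concretely specified (if tiny) convolutional pipeline,
# the sort of structural detail that separates a "how it works" description
# from results-oriented language. Random, untrained weights.
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernels, stride=1):
    """Valid convolution: img (H, W, Cin), kernels (k, k, Cin, Cout)."""
    k, _, cin, cout = kernels.shape
    h = (img.shape[0] - k) // stride + 1
    w = (img.shape[1] - k) // stride + 1
    out = np.zeros((h, w, cout))
    for i in range(h):
        for j in range(w):
            patch = img[i*stride:i*stride+k, j*stride:j*stride+k, :]
            out[i, j] = np.tensordot(patch, kernels, axes=([0, 1, 2], [0, 1, 2]))
    return out

def relu(x):
    return np.maximum(x, 0)

def max_pool(x, size=2):
    h, w, c = x.shape
    return x[:h - h % size, :w - w % size, :].reshape(
        h // size, size, w // size, size, c).max(axis=(1, 3))

# Pipeline: 3x3 conv (8 filters) -> ReLU -> 2x2 max pool
#        -> 3x3 conv (16 filters) -> ReLU -> global average pool -> linear
image = rng.normal(size=(28, 28, 1))
conv1 = rng.normal(scale=0.1, size=(3, 3, 1, 8))
conv2 = rng.normal(scale=0.1, size=(3, 3, 8, 16))
fc = rng.normal(scale=0.1, size=(16, 10))  # 10 output classes

x = max_pool(relu(conv2d(image, conv1)))
x = relu(conv2d(x, conv2))
logits = x.mean(axis=(0, 1)) @ fc          # global average pooling + linear layer
print(logits.shape)                        # (10,)
```

A claim reciting this level of specificity describes a particular technical arrangement rather than merely the result of classifying an image.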
Claiming the invention in terms of how the elements of the computer are technically improved by its deep learning architecture and algorithm, rather than simply claiming a deep learning model using results-oriented language, may survive an Alice challenge, provided the claim does not merely cover an automated process that humans used to perform.  Moreover, the multiple hidden layers, convolutions, recurrent connections, hyperparameters, and weights could also be claimed. By way of another example, a claim reciting “a computer-implemented process using artificial intelligence to generate an image of a person,” is likely abstract if it does not explain how the image is generated and merely claims a computerized process a human could perform.  But a claim that describes a unique AI system that specifies how it generates the image, including the details of a generative adversarial network architecture and its various inputs provided by physical devices (not routine data collection), its connections and hyperparameters, has a better chance of passing muster (keeping in mind, this only addresses the question of whether the claimed invention is eligible to be patented, not whether it is, in fact, patentable, which is an entirely different analysis and requires comparing the claim to prior art). Uncertainty Remains Although the issue of explaining how an AI system works in the context of patent law is still in flux, the number of US patents issued by the Patent Office mentioning “machine learning,” or the broader term “artificial intelligence,” has jumped in recent years. Just this year alone, US machine learning patents are up 27% compared to the same year-to-date period in 2017 (through the end of November), according to available Patent Office records.  Even if machine learning is not the focus of many of them, the annual upward trend in patenting AI over the last several years appears unmistakable. 
But with so many patents invoking AI concepts being issued, questions about their validity may arise.  As the Federal Circuit has stated, “great uncertainty yet remains” when it comes to the test for deciding whether an invention like AI is patent-eligible under Alice, despite the large number of cases that have “attempted to provide practical guidance.”  Smart Systems Innovations, LLC v. Chicago Transit Authority, slip op. No. 2016-1233 (Fed. Cir. Oct. 18, 2017).  Calling the uncertainty “dangerous” for some of today’s “most important inventions in computing,” and specifically mentioning AI, the Federal Circuit expressed concern that the application of the Alice test may have gone too far.  That concern was mirrored in testimony by Andrei Iancu, Director of the Patent Office, before Congress in April 2018 (stating, in response to Judiciary Committee questions, that Alice and its progeny have introduced a degree of uncertainty into the area of subject matter eligibility, particularly as it relates to medical diagnostics and software-related inventions, and that Alice could be having a negative impact on innovation).

Absent legislative changes abolishing or altering Alice, a solution to the uncertainty problem, at least in the context of AI technologies, lies in clarifying the existing decisions issued by the Patent Office and the courts, including those summarized above.  While it can be challenging to explain why an AI algorithm made a particular decision or took a specific action (due to the black box nature of such algorithms once they are fully trained), it is generally not difficult to describe the structure of a deep learning or machine learning algorithm or how it works.
Even so, it remains unclear whether and to what extent fully describing how one’s AI technology works, and including “how” features in patent claims, will ever be sufficient to “add[] enough to transform the nature of an abstract algorithm into a patent-eligible [useful process].”  If explaining how AI works is to have a meaningful future role in patent law, the courts or Congress will need to provide clarity.
  • California Appeals Court Denies Defendant Access to Algorithm That Contributed Evidence to His Conviction
One of the concerns expressed by those studying algorithmic decision-making is the apparent lack of transparency. Those impacted by adverse algorithmic decisions often seek transparency to better understand the basis for those decisions. In the case of software used in legal proceedings, parties who seek explanations about the software face a number of obstacles, including those imposed by evidentiary rules, criminal or civil procedural rules, and software companies that resist discovery requests. The closely-followed issue of algorithmic transparency was recently considered by a California appellate court in People v. Superior Court of San Diego County, slip op. Case D073943 (Cal. App. 4th Oct. 17, 2018), in which the People sought relief from a discovery order requiring the production of software and source code used in the conviction of Florencio Jose Dominguez. Following a hearing and review of the record and amicus briefs in support of Dominguez filed by the American Civil Liberties Union, the American Civil Liberties Union of San Diego and Imperial Counties, the Innocence Project, Inc., the California Innocence Project, the Northern California Innocence Project at Santa Clara University School of Law, Loyola Law School’s Project for the Innocent, and the Legal Aid Society of New York City, the appeals court granted the People relief. In doing so, the court considered, but was not persuaded by, the defense team’s “black box” and “machine testimony” arguments.

At issue on appeal was Dominguez’s motion to compel production of a DNA testing program called STRmix used by local prosecutors in their analysis of forensic evidence (specifically, DNA found on the inside of gloves). STRmix is a “probabilistic genotyping” program that expresses a match between a suspect and DNA evidence in terms of the probability of a match compared to a coincidental match. Probabilistic genotyping is said to reduce subjectivity in the analysis of DNA typing results.
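The "probability of a match compared to a coincidental match" language describes, in essence, a likelihood ratio. A minimal sketch follows; the function name and the numbers are invented for illustration and bear no relation to STRmix's actual models:

```python
def likelihood_ratio(p_evidence_if_contributor, p_evidence_if_coincidence):
    """Compare how well two hypotheses explain the DNA evidence.

    A ratio above 1 favors the hypothesis that the suspect contributed
    to the sample; a ratio below 1 favors a coincidental match.
    """
    return p_evidence_if_contributor / p_evidence_if_coincidence

# Toy numbers: here the evidence is 9,000 times more probable under the
# contributor hypothesis than under the coincidence hypothesis.
lr = likelihood_ratio(0.9, 0.0001)
```

Real probabilistic genotyping must also model DNA mixtures, allele dropout, and artifacts such as stutter, and that modeling is where the disputed "black box" complexity lives.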
Dominguez’s counsel moved the trial court for an order compelling the People to produce the STRmix software program and related updates, as well as its source code, arguing that the defendant had a right to look inside the software’s “black box.” The trial court granted the motion, and the People sought writ relief from the appellate court. On appeal, the appellate court noted that “computer software programs are written in specialized languages called source code” and that “source code, which humans can read, is then translated into [a] language that computers can read.” Cadence Design Systems, Inc. v. Avant! Corp., 29 Cal. 4th 215, 218 at fn.3 (2002). The lab that used STRmix testified that it had no way to access the source code, which it licensed from an authorized seller of the software.  Thus, the court considered whether the company that created the software should produce it. In concluding that the company was not obligated to produce the software and source code, the court, citing precedent, found that the company would have had no knowledge of the case but for the defendant’s subpoena duces tecum, and that it did not act as part of the prosecutorial team such that it was obligated to turn over exculpatory evidence (assuming the software itself is exculpatory, which the court was reluctant to find).

With regard to the defense team’s “black box” argument, the appellate court found nothing in the record to indicate that the STRmix software suffered from a problem, as the defense team argued, that might have affected its results. Calling this allegation speculative, the court concluded that the “black box” nature of STRmix was not itself sufficient to warrant its production. Moreover, the court was unpersuaded by the defense team’s argument that the STRmix program essentially usurped the lab analyst’s role in providing the final statistical comparison, such that the software program, not the analyst using it, was effectively the source of the expert opinion rendered at trial.
The lab, the defense argued, merely acted in a scrivener’s capacity for STRmix’s analysis, and since the machine was providing testimony, Dominguez should be able to evaluate the software to defend against the prosecution’s case against him. The appellate court disagreed. While acknowledging the “creativity” of the defense team’s “machine testimony” argument (which relied heavily on Berkeley law professor Andrea Roth’s “Machine Testimony” article, 126 Yale L.J. 1972 (2017)), the panel noted testimony that STRmix did not act alone and that there were humans in the loop: “[t]here are still decisions that an analyst has to make on the front end in terms of determining the number of contributors to a particular sample and determin[ing] which peaks are from DNA or from potentially artifacts,” after which the program performs a “robust breakdown of the DNA samples,” based at least in part on “parameters [the lab] set during validation.” Moreover, after STRmix renders “the diagnostics,” the lab “evaluate[s] … the genotype combinations … to see if that makes sense, given the data [it’s] looking at.” After the lab “determine[s] that all of the diagnostics indicate that the STRmix run has finished appropriately,” it can then “make comparisons to any person of interest or … database that [it’s] looking at.”

While the appellate court’s decision mostly followed precedent and established procedure, it could easily have gone the other way and affirmed the trial judge’s decision granting the defendant’s motion to compel the STRmix software and source code, which would have given Dominguez better insight into the nature of the software’s algorithms, its parameters and limitations in view of validation studies, and the various possible outputs the model could have produced given a set of inputs.
In particular, the court might have affirmed the trial judge’s decision to grant access to the STRmix software if the policy of imposing transparency on STRmix’s algorithmic decisions had been given more consideration from the perspective of the actual harm that might occur if software and source code are produced. Here, the source code owner’s objection to production was based in part on trade secret and other confidentiality concerns; however, procedures already exist to handle those concerns. Indeed, source code reviews happen all the time in the civil context, such as in patent infringement matters involving software technologies. While software makers are right to be concerned about the harm to their businesses if their code ends up in the wild, the real risk of this happening can be kept low if proper procedures, embodied in a suitable court-issued Protective Order, are followed by lawyers on both sides of a matter, and if the court maintains oversight and demands status updates from the parties to ensure compliance and integrity in the review process. Instead of following the trial court’s approach, however, the appellate court conditioned access to STRmix’s “black box” on a demonstration of specific errors in the program’s results, which seems an intractable condition: only by looking into the black box in the first place can a party understand whether problems exist that affect the result.
Interestingly, artificial intelligence had nothing to do with the outcome of the appellate court’s decision, yet the panel noted: “We do not underestimate the challenges facing the legal system as it confronts developments in the field of artificial intelligence.” The judges acknowledged that the notion of “machine testimony” in algorithmic decision-making matters is a subject about which there are widely divergent viewpoints in the legal community, a possible prelude to what is ahead as artificial intelligence software cases, both criminal and civil, make their way through the courts.  To that end, the judges cautioned, “when faced with a novel method of scientific proof, we have required a preliminary showing of general acceptance of the new technique in the relevant scientific community before the scientific evidence may be admitted at trial.”

Lawyers in future artificial intelligence cases should consider how best to frame arguments concerning machine testimony in both civil and criminal contexts to improve their chances of overcoming evidentiary obstacles. They will need to effectively articulate the nature of artificial intelligence decision-making algorithms, as well as the relative roles of the data scientists and model developers who make decisions about model architecture, hyperparameters, data sets, model inputs, training and testing procedures, and the interpretation of results. Today’s artificial intelligence systems do not operate autonomously; there will always be humans associated with a model’s output or result, and those persons may need to provide expert testimony beyond the machine’s testimony.  Even so, transparency will be important to understanding algorithmic decisions and to developing an evidentiary record in artificial intelligence cases.
  • Thanks to Bots, Transparency Emerges as Lawmakers’ Choice for Regulating Algorithmic Harm
Digital conversational agents, like Amazon’s Alexa and Apple’s Siri, and communications agents, like those found on customer service website pages, seem to be everywhere.  The remarkable increase in the use of these and other artificial intelligence-powered “bots” in everyday customer-facing devices like smartphones, websites, desktop speakers, and toys has been exceeded only by the bots in the background that account for over half of the traffic visiting some websites.  Recently reported harms caused by certain bots have caught the attention of state and federal lawmakers.  This post briefly describes those bots and their uses, and suggests reasons why new legislative efforts aimed at reducing the harms caused by bad bots have so far been limited to arguably one of the least onerous tools in the lawmaker’s toolbox: transparency.

Bots Explained

Bots are software programmed to receive percepts from their environment, make decisions based on those percepts, and then take (preferably rational) action in their environment.  Social media bots, for example, may use machine learning algorithms to classify and “understand” incoming content, which is subsequently posted and amplified via a social media account.  Companies like Netflix use bots on social media platforms like Facebook and Twitter to automatically communicate information about their products and services.

While not all bots use machine learning and other artificial intelligence (AI) technologies, many do, such as digital conversational agents, web crawlers, and website content scrapers, the last of these being programmed to “understand” content on websites using semantic natural language processing and image classifiers.  Bots that use complex human behavioral data to identify and influence or manipulate people’s attitudes or behavior (such as clicking on advertisements) often use the latest AI tech. One attribute many bots have in common is that their functionality resides in a black box.
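The percept, decision, action loop described above can be sketched as a toy agent.  The class and its trivial "policy" are invented for illustration; real social media bots are far more complex:

```python
class RepostBot:
    """A toy agent: perceive the environment, decide, then act."""

    def perceive(self, environment):
        # Percept: the newest post visible to the bot's account.
        return environment.get("latest_post", "")

    def decide(self, percept):
        # Decision: amplify anything mentioning a tracked topic.
        return "repost" if "election" in percept.lower() else "ignore"

    def act(self, decision, percept):
        # Action: repost (amplify) the content, or do nothing.
        return f"reposted: {percept}" if decision == "repost" else "no action"

bot = RepostBot()
percept = bot.perceive({"latest_post": "Election day is coming"})
action = bot.act(bot.decide(percept), percept)
```

An observer sees only the action (the repost); the decision logic, however simple or complex, stays hidden inside the agent.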
As a result, it can be challenging (if not impossible) for an observer to explain why a bot made a particular decision or took a specific action.  While intuition can be used to infer what happens, the secrets inside a black box often remain secret.

Depending on their uses and characteristics, bots are often categorized by type, such as “chatbot,” which generally describes an AI technology that engages with users by replicating natural language conversations, and “helper bot,” which is sometimes used when referring to a bot that performs useful or beneficial tasks.  The term “messenger bot” may refer to a bot that communicates information, while “cyborg” is sometimes used when referring to a person who uses bot technology. Regardless of their name, complexity, or use of AI, one characteristic common to most bots is their use as agents to accomplish tasks for or on behalf of a real person.  The anonymity such agent bots afford makes them attractive tools for malicious purposes.

Lawmakers React to Bad Bots

While the spread of beneficial bots has been impressive, bots with questionable purposes have also proliferated, such as those behind the disinformation campaigns used during the 2016 presidential election.  Disinformation bots, which operate social media accounts on behalf of a real person or organization, can post content to public-facing accounts.  Used extensively in marketing, these bots can receive content, either automatically or from a principal behind the scenes, related to such things as brands, campaigns, politicians, and trending topics.  When organizations create multiple accounts and use bots across those accounts to amplify each account’s content, the content can appear viral and attract attention, which is problematic if the content is false, misleading, or biased. The success of social media bots in spreading disinformation is evident in the degree to which they have proliferated.
Twitter recently produced data showing that thousands of bot-run Twitter accounts (“Twitter bots”) were created before and during the 2016 US presidential campaign by foreign actors to amplify and spread disinformation about the campaign, the candidates, and related hot-button campaign issues.  Users who received content from one of these bots would have had no apparent reason to know that it came from a foreign actor. It is thus easy to understand why lawmakers and stakeholders would want to target social media bots and those who use them.  A recent Pew Research Center poll found that most Americans know about social media bots, and that those who have heard of them overwhelmingly (80%) believe such bots are used for malicious purposes; meanwhile, technologies for detecting fake content at its source, or the bias of a news source, stand at only about 65 to 70 percent accuracy.  Against that backdrop, politicians have plenty of cover to go after bots and their owners.

Why Use Transparency to Address Bot Harms?

The range of options for regulating disinformation bots to prevent or reduce harm could include any number of traditional legislative approaches.  These include imposing on individuals and organizations various specific criminal and civil liability standards related to the performance and uses of their technologies; establishing requirements for regular recordkeeping and reporting to authorities (which could lead to public summaries); setting thresholds for knowledge, awareness, or intent (or use of strict liability) applied to regulated activities; providing private rights of action to sue for harms caused by a regulated person’s actions, inactions, or omissions; imposing monetary remedies and incarceration for violations; and other often-seen command-and-control style governance approaches.
Transparency, another tool lawmakers could deploy, would require certain regulated persons and entities to provide information, publicly or privately, to an organization’s users or customers through mechanisms of notice, disclosure, and/or disclaimer (among other techniques). Transparency is a long-used principle of democratic institutions that try to balance open and accountable government action, and the notion of free enterprise, with the public’s right to be informed.  Examples of transparency may be found in the form of information labels on consumer products and services under consumer laws, disclosure of product endorsement interests under FTC rules, notices and disclosures in financial and real estate transactions under various related laws, employee benefits disclosures under labor and tax laws, public review disclosures in connection with laws related to government decision-making, property ownership public records disclosures under various tax and land ownership/use laws, various healthcare disclosures under state and federal health care laws, and laws covering many other areas of public life.  Of particular relevance to the disinformation problem noted above, and a reason why transparency seems well-suited to social media bots, are current federal campaign finance laws that require those behind political ads to reveal themselves.  See 52 USC §30120 (Federal Campaign Finance Law; publication and distribution of statements and solicitations; disclaimer requirements).

A recent example of a transparency rule affecting certain bot use cases is California’s bot law (SB-1001; signed by Gov. Brown on September 28, 2018).
The law, which goes into effect in July 2019, will, with certain exceptions, make it unlawful for any person (including corporations or government agencies) to use a bot to communicate or interact online with another person in California with the intent to mislead that person about the bot’s artificial identity, for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election.  A person using a bot will not be liable, however, if the person discloses, using clear, conspicuous, and reasonably designed notice, to the persons with whom the bot communicates or interacts that it is a bot.  Similar federal legislation may follow, especially if legislation proposed this summer by Sen. Dianne Feinstein (D-CA) and legislative proposals by Sen. Warner and others gain traction in Congress.

So why would lawmakers choose transparency to regulate malicious bot technology use cases rather than an approach that is arguably more onerous?  One possibility is that transparency is seen as minimally controversial, and therefore less likely to cause push-back from those with ties to special interests that might respond negatively to lawmakers who advocate tougher measures.  Or perhaps lawmakers are choosing a minimalist approach simply to demonstrate that they are taking action (versus the optics associated with doing nothing).  Maybe transparency is a shot-across-the-bow warning to industry leaders: police yourselves and those who use your platforms by finding technological solutions that prevent the harms caused by bots, or else face a harsher regulatory spotlight.  Whatever the reason(s), even something viewed as relatively easy to implement, like transparency, is not immune from controversy.
Transparency Concerns

The arguments against applying transparency to bots include loss of privacy, unfairness, unnecessary disclosure, and constitutional concerns, among others. Imposing transparency requirements can potentially infringe upon First Amendment protections if drafted with one-size-fits-all applicability.  Even before California’s bot measure was signed into law, for example, critics warned of the potential chilling effect on protected speech if anonymity is lifted in the case of social media bots. Moreover, transparency may be seen as unfairly elevating the principles of openness and accountability over notions of secrecy and privacy.  Owners of agent bots, for example, would prefer not to give up anonymity when doing so could expose them to attacks by those with opposing viewpoints and cause more harm than the law prevents. Both concerns could be addressed by imposing transparency in a narrow set of use cases and, as in California’s bot law, using “intent to mislead” and “knowingly deceiving” thresholds to tailor the law to specific instances of certain bad behaviors.

Others might argue that transparency places too much of the burden on users to understand the information being disclosed to them and to take appropriate responsive action.  Just ask someone who has tried to read a financial transaction disclosure or a complex Federal Register rule-making analysis whether the transparency, openness, and accountability actually made a substantive impact on their follow-up actions.  Similarly, it is questionable whether a recipient of bot-generated content would investigate the ownership and propriety of every new posting before deciding whether to accept the content’s veracity, or whether a person engaging with an AI chatbot would forgo further engagement if he or she were informed of the artificial nature of the engagement.
Conclusion

The likelihood that federal transparency laws will be enacted to address the malicious use of social media bots seems low given the current political situation in the US.  And with California’s bot disclosure requirement not becoming effective until mid-2019, only time will tell whether it will succeed as a legislative tool in addressing existing bot harms or whether the delay will simply give malicious actors time to find alternative technologies to achieve their goals. Even so, transparency appears to be a leading governance approach, at least in the area of algorithmic harm, and could become a go-to approach for governing harms caused by other AI and non-AI algorithmic technologies due to its relative simplicity and ability to be narrowly tailored.  Transparency might be a suitable approach to regulating certain actions by those who publish face images created using generative adversarial networks (GANs), those who create and distribute so-called “deep fake” videos, and those who provide humanistic digital communications agents, all of which involve highly realistic content and engagements in which a user could easily be fooled into believing the content or engagement involves a person and not an artificial intelligence.
  • AI’s Problems Attract More Congressional Attention
    As contentious political issues continue to distract Congress before the November midterm elections, federal legislative proposals aimed at governing artificial intelligence (AI) have largely stalled in the Senate and House.  Since December 2017, nine AI-focused bills, such as the AI Reporting Act of 2018 (AIR Act) and the AI in Government Act of 2018, have been waiting for congressional committee attention.  Even so, there has been a noticeable uptick in the number of individual federal lawmakers looking at AI’s problems, a sign that the pendulum may be swinging in the direction favoring regulation of AI technologies. Those lawmakers taking a serious look at AI recently include Mark Warner (D-VA) and Kamala Harris (D-CA) in the Senate, and Will Hurd (R-TX) and Robin Kelly (D-IL) in the House.  Along with others in Congress, they are meeting with AI experts, issuing new policy proposals, publishing reports, and pressing federal officials for information about how government agencies are addressing AI problems, especially in hot topic areas like AI model bias, privacy, and malicious uses of AI. Sen. Warner, for example, the Senate Intelligence Committee Vice Chairman, is examining how AI technologies power disinformation.  In a draft white paper first obtained by Axios, Warner’s “Potential Policy Proposals for Regulation of Social Media and Technology Firms” raises concerns about machine learning and data collection, mentioning “deep fake” disinformation tools as one example.  Deep fakes are neural network models that can take images and video of people containing one type of content and superimpose them over different images and videos of other (or the same) people in a way that changes the original’s content and meaning.  To the viewer, the altered images and videos look like the real thing, and many who view them may be fooled into accepting the false content’s message as truth. 
Warner’s “suite of options” for regulating AI includes one that would require platforms to provide notice when users engage with AI-based digital conversational assistants (chatbots) or visit a website that publishes content provided by content-amplification algorithms like those used during the 2016 elections.  Another Warner proposal would modify the Communications Decency Act’s safe harbor provisions, which currently protect social media platforms that publish offending third-party content, including the aforementioned deep fakes.  This proposal would allow private rights of action against platforms that fail to take steps, after notice from victims, to prevent offending content from reappearing on their sites.

Another proposal would require certain platforms to make their customers’ activity data (sufficiently anonymized) available to public interest researchers as a way to generate insights from the data that could “inform actions by regulators and Congress.”  An area of concern is the commercial use, by private tech companies, of their users’ behavior-based data (online habits) without proper research controls.  The suggestion is that public interest researchers would evaluate a platform’s behavioral data in a way that is not driven by an underlying for-profit business model. Warner’s privacy-centered proposals include granting the Federal Trade Commission rulemaking authority, adopting GDPR-like regulations recently implemented across the European Union member states, and setting mandatory standards for algorithmic transparency (auditability and fairness).

Repeating a theme in Warner’s white paper, Representatives Hurd and Kelly conclude that, even if AI technologies are immature, they have the potential to disrupt every sector of society in both anticipated and unanticipated ways.  In their “Rise of the Machines: Artificial Intelligence and its Growing Impact on U.S.
Policy” report, the co-chairs of the House Oversight and Government Reform Committee make several observations and recommendations, including the need for political leadership from both Congress and the White House to achieve US global dominance in AI, the need for increased federal spending on AI research and development, means to address algorithmic accountability and transparency so as to remove bias in AI models, and an examination of whether existing regulations can address public safety and consumer risks from AI.  The challenges facing society, the lawmakers found, include the potential for job loss due to automation, privacy, model bias, and the malicious use of AI technologies.

Separately, Representatives Adam Schiff (D-CA), Stephanie Murphy (D-FL), and Carlos Curbelo (R-FL), in a September 13, 2018, letter, are requesting that the Director of National Intelligence provide Congress with a report on the spread of deep fakes (aka “hyper-realistic digital forgeries”), which they contend are allowing “malicious actors” to create depictions of individuals doing or saying things they never did, without those individuals’ consent or knowledge.  They want the intelligence agency’s report to address how foreign governments could use the technology to harm US national interests, what sort of counter-measures could be deployed to detect and deter actors from disseminating deep fakes, and whether the agency needs additional legal authority to combat the problem.

In a September 17, 2018, letter to the Equal Employment Opportunity Commission, Senators Harris, Patty Murray (D-WA), and Elizabeth Warren (D-MA) ask the EEOC to address the potentially discriminatory impacts of facial analysis technologies in the enforcement of workplace anti-discrimination laws.
As reported on this website and elsewhere, the machine learning models behind facial recognition may perform poorly if they have been trained on data that is unrepresentative of the data the model sees in the wild.  For example, if the training data for a facial recognition model contains primarily white male faces, the model may perform well when it sees new white male faces but poorly when it sees faces that are not white and male.  The Senators want to know whether such technologies amplify bias against racial, gender, disadvantaged, and vulnerable groups, and they have asked the EEOC to develop guidelines for employers concerning the fair use of facial analysis technologies in the workplace.

Also on September 17, 2018, Senators Harris, Richard Blumenthal (D-CT), Cory Booker (D-NJ), and Ron Wyden (D-OR) sent a similar letter to the Federal Trade Commission, expressing concerns that the bias in facial analysis technologies could be considered an unfair or deceptive practice under the Federal Trade Commission Act.  Stating that “we cannot wait any longer to have a serious conversation about how we can create sound policy to address these concerns,” the Senators urge the FTC to commit to developing a set of best practices for the lawful, fair, and transparent use of facial analysis. Senators Harris and Booker, joined by Representative Cedric Richmond (D-LA), also sent a letter on September 17, 2018, to FBI Director Christopher Wray asking for the status of the FBI’s response to a 2016 Government Accountability Office (GAO) report detailing the FBI’s use of face recognition technology.

The increasing attention directed toward AI by individual federal lawmakers in 2018 may merely reflect the politics of the moment rather than signal a momentum shift toward substantive federal command-and-control-style regulations.
But as more states join those that have begun enacting, in the absence of federal rules, their own laws addressing AI technology use cases, federal action may inevitably follow, especially if more reports of malicious uses of AI, like election disinformation, reach receptive ears in Congress.
