US courts, not politicians, will determine the limits of AI.
Last week, the Federal Trade Commission launched an investigation into whether OpenAI violated consumer protection laws when it scraped people's online data to train its AI chatbot, ChatGPT. Meanwhile, artists, authors, and the image company Getty are suing AI firms such as OpenAI and Meta for allegedly violating copyright law by training their models on their work without acknowledgment or payment.
These cases could force OpenAI, Meta, Microsoft, and others to change the way AI is built, trained, and deployed so that it is fairer and more equitable.
They could also create new ways to compensate artists, authors, and others, through licensing and royalties, when their work is used as training data for AI models.
The AI boom in the United States has revived lawmakers' interest in passing AI-specific legislation. But with a divided Congress and intense lobbying from tech companies, such legislation is unlikely in the coming year, says Ben Winters of the Electronic Privacy Information Center. Even Senator Chuck Schumer’s SAFE Innovation Framework, the most visible attempt to craft AI rules, does not include any specific policy proposals.
That leaves litigation to set the rules, says Sarah Myers West, managing director of the AI Now Institute.
Lawsuits left, right, and center
Existing laws provide ample ammunition for those who argue that AI companies have violated their rights.
AI companies were hit with a wave of lawsuits over the past year. Most recently, the comedian and writer Sarah Silverman filed suit claiming that OpenAI, Meta, and others illegally scraped her copyrighted content from the internet to train their models. Her claims resemble those of artists in another class action alleging that popular AI image-generation software used their copyrighted pictures without consent. Microsoft, OpenAI, and GitHub also face a class action over Copilot, GitHub's AI-assisted programming tool, which the suit claims relies on "software piracy of an unprecedented scale" because it is trained on existing programming code scraped from websites.
The FTC is also investigating whether OpenAI's data security and privacy practices are unfair or deceptive, and whether the company harmed consumers, including through reputational harm, when it developed its AI models. These concerns are not hypothetical: a bug earlier this year exposed users' chat histories and payment information, and AI language models frequently produce inaccurate, made-up content, sometimes about real people.
Publicly, at least, OpenAI appears confident about the FTC investigation. When reached for comment, the company shared a tweet from CEO Sam Altman saying it is "confident" it follows the law.
An agency such as the FTC can take companies to court, enforce standards against the industry, and push for better business practices, says Marc Rotenberg, founder and president of the Center for AI and Digital Policy (CAIDP), a nonprofit that filed a complaint with the FTC in March asking it to investigate OpenAI. Myers West adds that the agency could create guardrails telling AI companies exactly what they are allowed to do.
Rotenberg says the FTC could require OpenAI, or any other company, to pay fines, delete illegally obtained data, and delete the algorithms that used that data. In the worst-case scenario, ChatGPT could be taken offline. There is precedent: in 2022, the agency ordered Weight Watchers to delete data and algorithms after it illegally collected children's information.
Other government agencies may also launch their own investigations. The Consumer Financial Protection Bureau, for instance, has signaled that it will scrutinize the use of AI chatbots in banking. And if generative AI plays a major role in the 2024 US presidential election, Winters says, the Federal Election Commission may investigate as well.
It could be years before we see results from the FTC investigation and the class action lawsuits.
Many of the lawsuits filed this year are likely to be thrown out by courts as overly broad, believes Mehtab Khan, a resident fellow at Yale Law School who specializes in intellectual property, data governance, and AI ethics. But they still serve a purpose: the lawyers are casting a wide net to see what sticks, which paves the way for more precise cases that could eventually push companies to change the way they build and use their AI models.
The lawsuits may also force companies to improve their data documentation practices, Khan says. Right now, tech companies have only a rudimentary understanding of the data that goes into their AI models. Better documentation of how that data is collected and used could expose illegal practices, but it could also help the companies defend themselves in court.
History repeats itself
It is not unusual for lawsuits to deliver verdicts before the US puts regulation in place for a new technology, Khan says; in fact, that is exactly how the country has handled new technologies in the past.
That approach differs from other Western countries'. While the EU tries to prevent the worst harms of AI proactively, the American approach is reactive: the US waits for harms to emerge before regulating, says Amir Ghavi, a partner at the law firm Fried Frank. Ghavi is representing Stability AI, the company behind the open-source image-generation model Stable Diffusion, in three copyright lawsuits.
Ghavi says, "That is a pro-capitalist position." It fosters innovation. It allows creators and innovators to be more creative in their solutions.
Matthew Butterick and Joseph Saveri, of a law firm specializing in class actions and antitrust cases, say the lawsuits could shed light on "black box" AI algorithms and create new ways to compensate authors and artists whose work is used in AI models.
The pair is leading the suits against GitHub, Microsoft, OpenAI, Stability AI, and Meta. They represent Silverman, who is part of a group of authors claiming that the tech companies trained their language models on copyrighted works. Generative AI models are trained on vast data sets of images and text scraped from the internet, much of which is copyrighted. Authors, artists, and programmers, the lawyers argue, deserve compensation when tech companies scrape their intellectual property without permission or attribution.
The AI technologies at the center of these lawsuits may be brand new, but the legal questions surrounding them are not, Butterick says. The team is relying on "good old-fashioned" copyright law to bring the law where it is needed.
Butterick and Saveri point to Napster as an example: record companies sued the music-sharing service for copyright infringement, leading to a landmark case about the fair use of music.
The Napster settlement cleared the way for companies such as Apple and Spotify to strike new licensing deals, Butterick says. The pair hopes their lawsuits will likewise clear a path for a licensing system in which artists, writers, and other copyright holders are paid royalties when their content is used in an AI model, much as the music industry pays for sampling songs. Companies would also have to ask for explicit permission before using copyrighted material in training sets.
Tech companies have treated publicly available, copyrighted data on the internet as "fair use" under US copyright law, which would allow them to use it without asking permission first. Copyright holders disagree. The class actions will likely determine who is right, Ghavi says.
And this is likely just the beginning for tech lawyers. The experts MIT Technology Review spoke to agreed that tech companies will probably also face litigation over privacy and biometric data, such as images of people's faces or audio clips. Prisma Labs, the company behind the AI avatar app Lensa, is already facing a lawsuit over its collection of biometric data.
Winters predicts we will also see more lawsuits around product liability and Section 230, which would determine whether AI companies are responsible when their AI-based products go wrong and whether they are liable for the content their models produce.
Litigation can be a blunt instrument, but it can also be an effective tool for social change, Saveri says. "And nobody is lobbying Matthew [Butterick] and me."
————————————————————————————————————————————————————————————
By: Melissa Heikkilä
Title: How judges, not politicians, could dictate America’s AI rules
Sourced From: www.technologyreview.com/2023/07/17/1076416/judges-lawsuits-dictate-ai-rules/
Published Date: Mon, 17 Jul 2023 16:06:52 +0000