This article comes from The Technocrat (MIT Technology Review's weekly tech-policy newsletter about power, politics and Silicon Valley). Sign up to receive the newsletter in your inbox each Friday.
Earlier this week, I spoke with a policy professor in Washington, DC, who told me that her students and colleagues alike are buzzing about GPT-4 and generative AI. What should they be paying attention to?
She asked whether I had any ideas about what all these new advances mean for lawmakers. After a few days of reading, talking with experts, and mulling it over, I wrote this newsletter. Here it is!
GPT-4 is the most prominent recent generative AI release, but it's only one of many notable launches over the past few months: Google, Nvidia, and Adobe have all announced projects of their own. In short, generative AI is the buzzword of the moment, and although the technology itself is not new, its policy implications are months, if not years, from being worked out.
OpenAI released GPT-4 last week. It is a multimodal large language model that uses deep learning to predict the words in a sentence; it produces fluent text and can respond to both text-based prompts and images. GPT-4 is available to paying customers through ChatGPT, which has already been integrated into a number of commercial applications.
This week, Bill Gates called the new model "revolutionary" in a letter. OpenAI, meanwhile, has been criticized for a lack of transparency about how GPT-4 was trained and evaluated for bias.
Despite all the excitement, generative AI comes with significant risks. Because these models are trained on the internet's toxic repository of text, they often produce racist and sexist output. They also regularly make things up and state them with convincing confidence, which could supercharge misinformation and make scams more persuasive and prolific.
Generative AI tools also pose threats to people's privacy and security, and they show little regard for copyright law: companies whose generative AI systems were trained on the work of others are already being sued.
Alex Engler, a fellow in governance studies at the Brookings Institution, has examined how policymakers should think about this and identified two main types of risk: harms from malicious use and harms from commercial use. Malicious uses of the technology, such as disinformation, automated hate speech, and scamming, "have a lot to do with content moderation," Engler said in an email. (To learn more, listen to this week's Sunday Show from Tech Policy Press, in which Justin Hendrix, an editor and lecturer on media and democracy, talks with a panel about whether generative AI systems should be regulated in the same way as search and recommendation algorithms. Hint: Section 230.)
Policy discussions on generative AI have so far centered on that second category: risks from commercial uses of the technology, such as coding and advertising. The Federal Trade Commission (FTC) has been the most active US government agency to date, taking small but not insignificant steps. Last month, the FTC warned companies against making claims about technical capabilities they cannot substantiate, and in a post on its business blog it used stronger language about the risks companies take on when they use generative AI.
If you're developing or offering a synthetic media or generative AI product, the post advises, consider at the design stage the reasonably foreseeable ways it could be misused for fraud or to cause other harm, and then ask yourself whether those risks are high enough that you shouldn't offer the product at all.
The US Copyright Office has also launched a new initiative to address the complex policy questions around AI, attribution, and intellectual property.
Meanwhile, the EU is holding onto its status as the global leader in tech policy. My colleague Melissa Heikkilä wrote earlier this year about the EU's push to pass the AI Act, a set of rules that would prevent companies from releasing models into the wild without disclosing their inner workings, which is exactly what OpenAI has been accused of doing with the GPT-4 release.
The EU plans to separate high-risk uses of AI, such as hiring and legal or financial applications, from lower-risk uses like video games and spam filters, and it wants more transparency around the sensitive uses. Even OpenAI has acknowledged some of the concerns about the pace of adoption: CEO Sam Altman told ABC News that he shares many of them, though the company has still not disclosed key information about GPT-4.
The crucial thing for policy folks in Washington, Brussels, and London to realize is that generative AI is here to stay. Hype aside, the latest advances in AI are genuinely significant, and so are the risks they present.
What I am reading this week
Yesterday, TikTok CEO Shou Zi Chew testified before the US Congress at a hearing on privacy and security concerns surrounding the popular social media platform. His appearance came after the Biden administration threatened to ban the app unless its Chinese parent company, ByteDance, sells off its majority stake.
The hearing generated a ton of headlines, many of them built on puns about time, and it exposed the depths of the technological cold war between the US and China. Many who watched found it both informative and disappointing: some lawmakers displayed a poor grasp of the technology, and there was a whiff of hypocrisy in grilling a Chinese company over data collection practices that American companies engage in just as freely.
It also showed just how deeply American lawmakers distrust Chinese technology. Here are some of the more interesting and useful stories to help you get up to speed:
- Key takeaways from the TikTok hearing in Congress, and the app's uncertain future, by Kari Paul and Johana Bhuiyan, The Guardian
- What you need to know about TikTok's security concerns, by Billy Perrigo, Time
- America's online privacy problems are much bigger than TikTok, by Will Oremus, The Washington Post
- There's a problem with banning TikTok. It's called the First Amendment, by Jameel Jaffer, executive director of the Knight First Amendment Institute, New York Times Opinion
What I learned this week
AI can persuade people on hot-button political issues such as an assault weapons ban and paid parental leave, according to a new study from Stanford's Polarization and Social Change Lab. The researchers compared people's opinions on a topic before and after they read an AI-generated argument, and found that the AI-generated arguments were about as persuasive as human-written ones. Readers also consistently rated the AI's arguments as more factual and logical, less angry, and less dependent on storytelling as a persuasive technique.
The researchers raise concerns about how generative AI could be used in political contexts such as lobbying and online discourse. (For more on the use of generative AI in politics, see this piece by Nathan Sanders and Bruce Schneier.)
————————————————————————————————————————————————————————————
By: Tate Ryan-Mosley
Title: An early guide to policymaking on generative AI
Sourced From: www.technologyreview.com/2023/03/27/1070285/early-guide-policymaking-generative-ai-gpt4/
Published Date: Mon, 27 Mar 2023 11:00:00 +0000