This article comes from The Technocrat, MIT Technology Review's weekly newsletter on tech policy, power, politics, and Silicon Valley. Sign up to receive it in your inbox every Friday.
In the age of ChatGPT and large language models, we've heard quite a bit about the risks of AI (including from me!), from the spread of misinformation and disinformation to the erosion of privacy. Back in April, Melissa Heikkilä predicted that new AI models would soon flood the internet with spam and scams. Today's story explains how that wave has already arrived, and how ad dollars are incentivizing it.
According to a report shared exclusively with MIT Technology Review, people are using AI to quickly spin up junk websites and capture some of the programmatic advertising money sloshing around online. That means blue-chip advertisers and major brands are unknowingly funding the next generation of content farms.
NewsGuard, a company that rates the quality of websites, found 140 major brands advertising on sites using AI-generated content that it considers "unreliable." Ninety percent of those ads were served by Google's ad tech, despite Google's own policies prohibiting the placement of Google-served ads on pages with "spammy automatically generated content."
The ploy works because programmatic advertising lets companies buy ad space across the internet without human oversight: algorithms bid on placements to maximize the number of relevant eyeballs likely to see an ad. Even before generative AI arrived, 21% of ad impressions were taking place on "advertising-only" junk websites, wasting an estimated $13 billion every year.
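To make that mechanism concrete, here is a minimal Python sketch of a programmatic placement decision, assuming a simplified highest-bid-wins auction. The `AdSlot` and `Bid` types and the tag-matching logic are hypothetical illustrations, not any real ad exchange's API; the point is what the algorithm never checks.

```python
# Minimal sketch of programmatic ad bidding (hypothetical types, not a
# real exchange's API). Note what is absent: nothing inspects the
# quality of the site offering the slot.
from dataclasses import dataclass

@dataclass
class AdSlot:
    site: str             # the page offering the ad space
    audience_tags: set    # inferred interests of the visitor

@dataclass
class Bid:
    advertiser: str
    price_cpm: float      # price per thousand impressions
    target_tags: set      # audiences the advertiser wants to reach

def run_auction(slot: AdSlot, bids: list) -> "Bid | None":
    """Pick the highest bid whose targeting overlaps the visitor.

    The algorithm optimizes for audience match and price only, which
    is why a junk site with the right visitors can win real ad dollars.
    """
    eligible = [b for b in bids if b.target_tags & slot.audience_tags]
    if not eligible:
        return None
    return max(eligible, key=lambda b: b.price_cpm)

# Example: a content farm's page wins a blue-chip ad because its
# visitor matches the targeting, regardless of the site itself.
slot = AdSlot(site="ai-junk-news.example", audience_tags={"finance", "tech"})
bids = [
    Bid("BlueChipBank", 4.50, {"finance"}),
    Bid("GadgetCo", 3.25, {"tech", "gaming"}),
]
winner = run_auction(slot, bids)
print(winner.advertiser if winner else "no fill")  # -> BlueChipBank
```

Because no human ever reviews where a given impression lands, a brand's budget flows to whichever matching page the algorithm finds, junk or not.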
Now people are using generative AI to churn out sites built to capture those advertising dollars. Since April 2023, NewsGuard has tracked more than 200 "unreliable AI-generated news and information websites," most of which appear to be chasing ad money, often paid by reputable companies.
NewsGuard detects these sites in part by using AI itself: it checks whether a site's text matches the standard error messages produced by large language models such as ChatGPT. Human researchers then review whatever gets flagged.
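As a rough illustration of that kind of check, here is a short Python sketch that scans a page's text for the boilerplate refusal messages LLMs often emit. The phrase list is a hypothetical example of such boilerplate, not NewsGuard's actual ruleset.

```python
# Sketch of an LLM-boilerplate detector (illustrative phrases only;
# not NewsGuard's actual detection rules).
import re

TELLTALE_PHRASES = [
    r"as an ai language model",
    r"i cannot (?:fulfill|complete) (?:this|that) request",
    r"my knowledge cutoff",
    r"i do not have access to real-time",
]

def flag_page(text: str) -> list:
    """Return the telltale phrases found in a page's text.

    Any non-empty result would send the site to human reviewers for
    confirmation, mirroring the two-step process described above.
    """
    lowered = text.lower()
    return [p for p in TELLTALE_PHRASES if re.search(p, lowered)]

page = "Breaking: As an AI language model, I cannot complete this request."
hits = flag_page(page)
if hits:
    print("Flag for human review:", hits)
```

A site that publishes its content unedited, straight out of a model, tends to leak exactly this kind of boilerplate, which is what makes the approach workable at all.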
Some of the sites even feature artificially generated author bios and fake photos of their supposed creators.
"This is the way the internet works," Lorenzo Arvanitis, a researcher at NewsGuard, told me. Plenty of well-intentioned companies end up paying for garbage, sometimes even inaccurate, misleading, or fake content, because they are competing online for users' attention. (There has already been some good reporting on this topic.)
Arvanitis expects the ploy to become only more common as language models grow more sophisticated and more accessible.
We shouldn't lose sight of the less dramatic but more likely outcome of generative AI: a huge waste of money and resources.
What I'm currently reading
- US Senate majority leader Chuck Schumer announced a plan to regulate AI in a speech on Wednesday, saying that innovation should be the "North Star" for legislation. Last week, President Biden met with AI experts in San Francisco, another sign that regulation could be on the horizon, though I'm not holding my breath.
- The New York Times published a great overview of how political campaigns are using generative AI, a trend that is raising alarms about disinformation. Reporters Tiffany Hsu and Steven Lee Myers write that "political experts are concerned that misused artificial intelligence could have a damaging effect on democracy."
- Meta's oversight board issued binding recommendations last week on how the company moderates content related to war: Meta must give more information about why material is removed or left up, and it must preserve documentation of human rights violations and share it with the authorities when appropriate. Alexa Koenig, executive director of the Human Rights Center, wrote an insightful analysis for Tech Policy Press explaining why this really is a big deal.
What I learned this week
The science on the link between social media and teens' mental health is far from settled. A few weeks ago, Kaitlyn Tiffany, a writer at The Atlantic, published an in-depth look at the often contradictory research in this field. Teens in the United States are experiencing a surge in mental health problems, and social media is widely seen as a major contributor.
But research has not been able to establish exactly when and how social media can be harmful. Tiffany writes, "A decade of research and hundreds of studies have produced mixed results, in part because they used a variety of methods and in part because they are trying to reach something elusive and complex."
————————————————————————————————————————————————————————————
By: Tate Ryan-Mosley
Title: Next-gen content farms are using AI-generated text to spin up junk websites
Sourced From: www.technologyreview.com/2023/06/27/1075545/next-gen-content-farms-ai-generated-text-ads/
Published Date: Tue, 27 Jun 2023 11:00:00 +0000