This story first appeared in The Algorithm (our weekly newsletter about AI). Sign up to receive stories like these in your inbox first.
This year, millions of people have tried, and been wowed by, artificial-intelligence systems, thanks in large part to OpenAI's chatbot ChatGPT.
Launched last November, the chatbot quickly became a hit with students, many of whom used it to finish homework and write essays. Some media outlets even declared the college essay dead.
Alarmed by the influx of AI-generated essays, schools around the globe quickly moved to ban the chatbot.
Nearly half a year later, the outlook is much brighter. My colleague Will Douglas Heaven spoke with educators about what chatbots such as ChatGPT mean for how we teach our children. Far from being just a cheating tool, ChatGPT could actually improve education. You can read his story here.
Will's story makes it clear that ChatGPT is going to change how schools teach. But its biggest educational outcome might not be a new way to write essays or do homework: it might be AI literacy.
AI is becoming an integral part of our daily lives. Tech companies are launching AI-powered products at an astonishing pace, and AI language models could become powerful productivity tools we use every day.
I have written plenty about the harms of artificial intelligence, from biased avatar generators to the near-impossible task of detecting AI-generated text.
When I ask experts how ordinary people can protect themselves against these kinds of harm, they always give me the same answer: there is an urgent need for the public to be better informed about AI and its limitations, so that people are not tricked or harmed by computer programs.
Adoption of AI literacy programs has been slow to date, but ChatGPT has forced schools to adapt quickly and start teaching their students AI 101.
Will spoke with teachers who have started using ChatGPT and now see value in the technology. Emily Donahoe, a writing tutor and educational developer at the University of Mississippi, believes ChatGPT can help teachers move away from an excessive focus on the final result. Rather than asking students to "write and perform like robots," teachers could get them to engage with AI and think critically about its output.
Teachers can also use the model's limitations as teaching material: because it is trained on North American data, it reflects North American biases.
David Smith, a professor of bioscience education at Sheffield Hallam University in the UK, lets his students use ChatGPT for their written assignments, but he assesses the prompt as well as the essay. Understanding both the prompt and the output is crucial, he says: "We need to show how to do this."
A major flaw of AI language models is that they make things up and present falsehoods as facts, which makes them unsuitable for tasks where accuracy matters, such as medical or scientific research. But Helen Crompton, an associate professor of instructional technology at Old Dominion University in Norfolk, Virginia, has found the model's "hallucinations" to be a valuable teaching tool.
"The fact it's imperfect is great," Crompton says. It offers an opportunity for productive discussions about bias and misinformation.
Examples like these are encouraging, and they give me hope that educators and policymakers will see the importance of teaching the next generation to think critically about AI.
For adults, there is Elements of AI, a free online course from the startup MinnaLearn and the University of Helsinki. Launched in 2018 and offered in 28 languages, it teaches what AI is and what it can and cannot do. It's great; I have used it myself.
I am more concerned about whether we can get adults up to speed fast enough. Without AI literacy among the internet-surfing adult population, people will fall for hype and unrealistic expectations, and AI chatbots make powerful tools for phishing, scams, and misinformation.
The kids will be all right. It's the adults we need to worry about.
Deeper Learning
Spotify could pick your next favorite song using complex math and counterfactuals.
A team of Spotify researchers has built a new kind of machine-learning model that captures the complex math behind counterfactual analysis, a technique for identifying the causes of past events and predicting the effects of future ones. By adjusting for the right factors, the approach can distinguish true causation from mere correlation.
The big deal: The model could make automated decision-making more accurate, including personalized recommendations, in applications ranging from finance to health care. In Spotify's case, that might mean deciding which songs to show you or when artists should drop new albums. You can read more from Will Douglas Heaven right here.
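The story doesn't describe Spotify's model itself, but the core intuition, adjusting for a confounder so an estimated effect reflects causation rather than mere correlation, can be shown in a toy sketch. Everything below (the simulated data, the variable names, the assumed effect size) is illustrative and not drawn from the researchers' work.

```python
# Toy illustration of counterfactual-style adjustment, not Spotify's model.
# Assumed setup: a user's genre affinity drives both whether a song is
# recommended and how long they listen, confounding the naive comparison.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

affinity = rng.normal(size=n)                               # confounder: taste for the genre
recommended = (affinity + rng.normal(size=n) > 0).astype(float)  # "treatment" depends on affinity

true_effect = 0.5                                           # assumed causal lift in listening time
listening = true_effect * recommended + 2.0 * affinity + rng.normal(size=n)

# Naive estimate: difference in means, inflated by the confounder.
naive = listening[recommended == 1].mean() - listening[recommended == 0].mean()

# Adjusted estimate: regress listening on the treatment AND the confounder.
X = np.column_stack([np.ones(n), recommended, affinity])
coef, *_ = np.linalg.lstsq(X, listening, rcond=None)
adjusted = coef[1]

print(f"naive (correlation):   {naive:.2f}")     # far from 0.5
print(f"adjusted (causal-ish): {adjusted:.2f}")  # close to 0.5
```

In a real recommender the "treatment" might be surfacing a song and the confounder a listener's existing taste; the point of the sketch is simply that the adjusted estimate recovers the assumed effect while the naive comparison does not.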
Bits and Bytes
Sam Altman continues his PR blitz
It is fascinating to watch tech folklore being born in real time. The Wall Street Journal and the New York Times have both profiled OpenAI CEO Sam Altman, painting him as a tech luminary in the mold of Steve Jobs and Bill Gates. The Times calls Altman the "ChatGPT King," while the Journal dubs him an "AI Crusader." Yet more proof that tech's Great Man myth is alive and well.
ChatGPT invented a sexual harassment scandal and accused a real professor
AI models make things up and can even provide legitimate-looking citations to back up their fabrications, and this story about a professor falsely accused of sexual harassment shows the real damage that can cause. "Hallucinations" are already creating legal trouble for OpenAI: last week an Australian mayor threatened to sue the company for defamation if it didn't correct false claims that he had spent time in prison for corruption. I warned about this last year. (Washington Post)
How Lex Fridman’s podcast became a safe haven for the “anti-woke” tech elite
A fascinating look at Lex Fridman's rise as a controversial and highly popular AI researcher and podcaster, and at his complicated relationship with the AI community and Elon Musk. (Business Insider)
Pollsters are beginning to survey AIs rather than people
People are responding to polls less and less, so new research is exploring whether AI chatbots can fill the gap by mirroring how particular demographics would answer polling questions. This seems likely to make polling even more dubious. (The Atlantic)
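The research is only hinted at here, but the basic idea, conditioning a chatbot on a demographic persona and treating its answers like survey responses, is easy to sketch. The persona, the poll question, the model choice, and the use of OpenAI's Python client (openai>=1.0, API key set in the environment) are my own illustrative assumptions, not the setup described in the piece.

```python
# Hypothetical "polling" of a chatbot conditioned on a demographic persona.
# Illustrative only; not the researchers' actual methodology.
from openai import OpenAI  # assumes openai>=1.0 and OPENAI_API_KEY set

client = OpenAI()

persona = "a 45-year-old suburban homeowner in Ohio who commutes by car"
question = "Do you support expanding public transit funding? Answer Yes or No."

response = client.chat.completions.create(
    model="gpt-4",  # model choice is an assumption
    messages=[
        {"role": "system", "content": f"Answer survey questions as {persona}."},
        {"role": "user", "content": question},
    ],
    temperature=1.0,  # sample varied answers to approximate a distribution
)

print(response.choices[0].message.content)
```

In practice, many such completions would be sampled across many personas and the aggregate answers compared with real polling data, which is exactly where the worry about making polling even more dubious comes in.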
Fashion brands are using AI-generated models to promote diversity
Calvin Klein and Levi's are now using AI-generated models to "supplement" their representation of people of different sizes, skin tones, and ages. Why not just hire diverse models? *Screams into void* (The Guardian)
————————————————————————————————————————————————————————————
By: Melissa Heikkilä
Title: AI literacy might be ChatGPT’s biggest lesson for schools
Sourced From: www.technologyreview.com/2023/04/12/1071397/ai-literacy-might-be-chatgpts-biggest-lesson-for-schools/
Published Date: Wed, 12 Apr 2023 09:05:12 +0000