This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.
For many people, the start of September marks the real beginning of the year. No fireworks, no resolutions, but fresh notebooks, stiff sneakers, and packed cars. Maybe you agree that back-to-school season still feels like the start of something new, even if you, like me, are far past your time on campus.
The big thing this year seems to be the same one that defined the end of last year: ChatGPT and other large language models. Last winter and spring brought so many headlines about AI in the classroom, with some panicked schools going so far as to ban ChatGPT altogether. My colleague Will Douglas Heaven wrote that it wasn’t time to panic: generative AI, he argued, is going to change education but not destroy it. Now, with the summer months having offered a bit of time for reflection, some schools seem to be reconsidering their approach.
For a perspective on how higher education institutions are now approaching the technology in the classroom, I spoke with Jenny Frederick. She is the associate provost at Yale University and the founding director of the Poorvu Center for Teaching and Learning, which provides resources for faculty and students. She has also helped lead Yale’s approach to ChatGPT.
In our chat, Frederick explained that Yale never considered banning ChatGPT and instead wants to work with it. I’m sharing here some of the key takeaways and most interesting parts from our conversation, which has been edited for brevity and clarity.
Generative AI is new, but asking students to learn what machines can do is not.
On the teaching side, it’s really important to revisit: What do I want my students to learn in this course?
If a robot could do it adequately, do I need to rethink what I’m asking my students to learn, or raise the bar on why it is important to know this? How are we talking to our students about what it means to structure a paragraph, for example, or do their own research? What do [students] gain from that labor? We all learn long division, even though calculators can do that. What’s the purpose of that?
I have a faculty advisory board for the Poorvu Center, and we have a calculus professor in the group, and he laughed and said, “Oh, it’s kind of amusing for me to watch you all grapple with this, because we mathematicians have had to deal with the fact that machines could do the work. That’s been possible for quite a while now—for decades.”
So we have to think about justifying the learning we’re asking students to do when, yes, a machine could do it.
It’s too early to institute prescriptive policies about how students can use the tech.
There was no moment, ever, when Yale thought about banning it. We thought about how we can encourage an environment of learning and experimentation in our role as a university. This is a new technology, but this is not just a technical change; it’s a moment in society that’s challenging how we think about humans, how we think about knowledge, how we think about learning and what it means.
I got my staff together and said, “Look: we need to have guidance out there.” We don’t necessarily have the answers, but we need to have a curated set of resources for faculty to look at. We don’t have a policy that says you must use this, you shouldn’t use this, or this is the framework for using it. Make sure your students have a sense of how AI is relevant for the course, how they might use it, or whether they should not use it at all.
Using ChatGPT to cheat is less of a concern than what led to the cheating.
When we think about what makes a student cheat: nobody wants to cheat. They’re paying good money for an education. But what happens is people run out of time, they overestimate their abilities, they get overwhelmed, something turns out to be really hard. They’re stuck in a corner, and then they make the unfortunate decision.
So I’m much more worried about the things that contribute to that state: mental health and time management. How are we helping our students not get themselves into corners where they can’t do the thing that they came to do?
So yes, ChatGPT provides another way for people to cheat, but I think the path for people to get there is still the same path. So let’s work on that path.
Students may be putting their privacy at risk.
I think people have been a little worried—rightly so—about their students putting information into a system. Every time you use [ChatGPT or one of its competitors], you’re making it better. We do have the ethical questions about providing labor to OpenAI or whatever the corporation is. We don’t know exactly how things are working and how the inputs are kept, managed, monitored, or surveilled over time, not to sound overly conspiratorial. If you’re gonna ask students to do that, we’re responsible for our students’ safety, for their privacy. Yale’s data management policies are strict—for good reasons.
Teachers should look to their students for guidance.
The students in general are way ahead of the faculty. They’ve grown up in a world where new technologies are coming and going, and they’re trying things out. And of course, ChatGPT is the latest thing, so they’re using it. They want to use it responsibly. They’re asking, “What’s allowed? Look at all these things I could do. Am I allowed to do that?”
So the advice that I gave to faculty was that you need to be trying this out. You need to at least be conversant in what your students are able to do, and think about your assignments and what this tool enables. What policies or what guidance are you gonna give students in terms of whether they are allowed to use it? In what ways are they allowed to use it?
You don’t have to do this by yourself. You can have a conversation with your students. You can co-create something, because why not draw on the experience in your classroom?
I really think that if you’re teaching, you need to realize that the world has AI now. And so students need to be prepared for a world where this is going to be integrated into industries in different ways. We do need to prepare them.
What I am reading this week
- This amazing story from the Economist, about how people from a small town in Albania moved to Britain at the urging of TikTok’s algorithm, has stuck with me for days. The impact of recommendation algorithms on people’s behavior has long been a personal (and professional) fascination, but this story lays out quite a dramatic example.
- Senate Majority Leader Chuck Schumer has announced his list of invitees for the congressional discussions on AI that I covered when he first announced them back in June. The list is very tech-company-CEO heavy, though AFL-CIO president Liz Shuler and AI ethics researcher Deb Raji did make the cut. Some people were quick to criticize the list, in part because of how many of the executives invited to inform AI policy stand to profit from the same technology.
- I really liked this take from Insider about the decline of social media and the rise of group chats, perhaps because it reflects my own behavior, and I always love to feel that I’m not alone in my habits.
What I learned this week
Speaking of recommendation systems, a new study from Stanford found that YouTube’s algorithms were not sending many people down new rabbit holes of extremist content, but rather that the platform “continues to play a key role in facilitating exposure to content from alternative and extremist channels among dedicated audiences.” The platform was previously reported to have a big problem with showing extreme content to relatively naive viewers (which the company denied).
Kaitlyn Tiffany wrote about the study in the Atlantic and pointed out that the company’s changes to its recommendation system in 2019, intended to reduce misinformation and demonetize hate speech, might have helped with its radicalization problem. (Though, of course, it’s still a problem that already-initiated individuals are stuck in viewing cycles of extremist content.)
By: Tate Ryan-Mosley
Title: How one elite university is approaching ChatGPT this school year
Sourced From: www.technologyreview.com/2023/09/04/1078932/elite-university-chatgpt-this-school-year/
Published Date: Mon, 04 Sep 2023 11:00:00 +0000