Is ChatGPT a “virus released into the wild”? • Zoo House News
- Technology
- December 10, 2022
More than three years ago, this editor met Sam Altman at a small event in San Francisco, shortly after he left his role as president of Y Combinator to become CEO of OpenAI, the AI company he co-founded in 2015 with Elon Musk and others.
At the time, Altman described OpenAI’s potential in language that sounded far-fetched to some. Altman said, for example, that the opportunity with artificial general intelligence — machine intelligence that can solve problems as well as a human — is so incomprehensibly vast that if OpenAI managed to crack it, the outfit could “maybe capture the light cone of all future value in the universe.” He said the company “doesn’t have to publish research” because it is so powerful. Asked whether OpenAI was guilty of fear-mongering (Elon Musk, a co-founder of the outfit, has repeatedly called for all organizations developing AI to be regulated), Altman talked about the dangers of not thinking through “societal consequences” when “you’re building something on an exponential curve.”
The audience laughed at various points in the conversation, not sure how seriously to take Altman. No one is laughing now, though. While machines are not yet as intelligent as people, the tech that OpenAI has since released is getting close enough that some critics fear it could be our undoing (and more sophisticated tech is reportedly coming).
Though many users say it’s not that smart, the ChatGPT model that OpenAI made available to the general public last week is so capable of answering questions like a person that professionals across a range of industries are trying to process the implications. Educators, for example, wonder how they will be able to distinguish original writing from the algorithmically generated essays they are bound to receive, ones that can slip past anti-plagiarism software.
Paul Kedrosky isn’t an educator per se. An economist, venture capitalist, and MIT fellow, he describes himself as “a frustrated normal with a penchant for thinking about risks and unintended consequences in complex systems.” But he is among those who are suddenly worried about our collective future, tweeting yesterday: “[S]hame on OpenAI for hurling this pocket-sized nuke into an unprepared society with no restrictions.” Kedrosky wrote: “I obviously believe that ChatGPT (and its ilk) should be withdrawn immediately. And, if ever reintroduced, only with tight restrictions.”
We spoke with him yesterday about some of his concerns, and why he thinks OpenAI is driving what he believes is “the most disruptive change the U.S. economy has seen in 100 years,” and not in a good way.
Our chat has been edited for length and clarity.
TC: ChatGPT came out last Wednesday. What triggered your reaction on Twitter?
PK: I’ve played with these conversational user interfaces and AI services in the past, and this is obviously a huge leap beyond those. And what troubled me here in particular is the casual brutality of it, with massive consequences for a host of different activities. It’s not just the obvious ones, like high school essay writing, but across pretty much any domain where there’s a grammar — [meaning] an organized way of expressing yourself. That could be software engineering, school essays, legal documents. All of them are easily eaten by this voracious beast and spit out again without compensation to whatever was used to train it.
I heard from a colleague at UCLA who told me they have no idea what to do with essays at the end of the current term, where they’re getting hundreds per course and thousands per department, because they no longer have any idea what’s fake and what’s not. To do this so casually, as someone said to me earlier today, is the polar opposite of the so-called [ethical] white-hat hacker who finds a bug in a widely used product and informs the developer before the broader public knows, so the developer can patch their product and we don’t have mass devastation and power-grid outages. This is the opposite: a virus has been released into the wild with no concern for the consequences.
It feels like it could eat up the world.
Some may say, “Well, did you feel the same way when automation arrived in auto plants and auto workers were put out of work? Because this is kind of a broader phenomenon.” But this is very different. These specific learning technologies are self-catalyzing; they learn from the requests. Robots in a manufacturing plant, disruptive as they were, with incredible economic consequences for the people working there, didn’t then turn around and start absorbing everything going on inside the factory, moving sector by sector, whereas that’s not just what we can expect but what we should expect.
Musk left OpenAI partly over disagreements about the company’s development, he said in 2019, and he has long talked about AI as an existential threat. But people carped that he didn’t know what he was talking about. Now we’re confronting this powerful technology, and it’s not clear who steps in to address it.
I think it’s going to unfold in a bunch of places at once, most of which will look really clumsy, and people will [then] scoff, because that’s what technologists do. But too bad, because we got ourselves into this by creating something so consequential. So in the same way the FTC required people who blog years ago [to make clear that they] have affiliate links and make money from them, I think at a trivial level people are going to be forced to make disclosures that say, “We wrote none of this. It’s all machine-generated.”
I also think we’re going to see renewed energy around the ongoing lawsuit against Microsoft and OpenAI over copyright infringement related to in-development machine-learning algorithms. I think there’s going to be a broader DMCA issue here with respect to this service.
And I think there’s the potential for a [massive] lawsuit and eventual settlement over the consequences of these services, which, as you know, will probably take too long and not help enough people, but I don’t see how we don’t end up [in that place] with respect to these technologies.
What do people at MIT make of this?
Andy McAfee and his group over there are more sanguine and take the more orthodox view that any time we see disruption, other opportunities get created: people are mobile, they move from place to place and from occupation to occupation, and we shouldn’t be so hidebound as to think this particular evolution of technology is the one we can’t mutate and migrate around. And I think that’s broadly true.
But the lesson of the last five years in particular has been that these changes can take a long time. Free trade, for example, is one of those incredibly disruptive, economy-wide experiences, and we economists all told ourselves that the economy would adapt and that people in general would benefit from lower prices. What no one anticipated was that someone would organize all the angry people and elect Donald Trump. So there’s this idea that we can foresee and predict what the consequences will be, but [we can’t].
You talked about high school and college essay writing. One of our kids has already asked (theoretically!) whether it would be plagiarism to use ChatGPT to write a paper.
The purpose of writing an essay is to prove that you can think, so this short-circuits the process and defeats the purpose. Again, in terms of consequences and externalities, if we can’t let people do homework because we no longer know whether they’re cheating, that means everything has to happen in the classroom and must be supervised. Nothing can be taken home. More things must be done orally, and what does that mean? It means school just got much more expensive, much more artisanal, much smaller, and at the exact moment that we’re trying to do the opposite. The consequences for higher education are devastating in terms of actually delivering a service.
What do you think of the idea of universal basic income or everyone sharing in the AI gains?
I’m a much less strong advocate than I was before COVID. The reason is that COVID was, in a sense, an experiment with a universal basic income. We paid people to stay home, and they invented QAnon. So I’m really nervous about what happens when people don’t have to get in a car, drive somewhere, do a job they hate, and come home again, because the devil finds work for idle hands, and there are going to be a lot of idle hands and a lot of devilry.