If you work in tech, or are in “rationalist” intellectual circles, you’ve likely already seen the wonders of the latest developments in Artificial Intelligence — which mark the start of a technological advance as significant as the invention of the internet.
But the general media has hardly covered the latest advances. So this blog will:
What’s new: An AI that can consistently write more eloquently than the average human
I’ve been testing OpenAI’s newly-released ChatGPT, a program that runs on the fifth most powerful computer on Earth.
I tested it with various prompts, such as this simple but slightly-tricky history question:
The AI’s response could easily get an “A” as an 8th-grade essay. It shows strong grammar, nuance, and accuracy.1
Keep in mind that the AI is not merely copy-pasting this from somewhere. It generated this unique answer based on the word connections and knowledge it has from scanning a large portion of the internet.
Teachers may not know it yet, but take-home essays are obsolete. I know college students who are already using this AI to help with papers.
What’s new: An AI that can write computer code at a basic level
Here’s another prompt I tested it with:
The above computer program code correctly converts Fahrenheit to Celsius. And again, it didn't just copy and paste: it designed this program using the word connections in its gigantic database.
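The AI's exact output isn't reproduced here, but a minimal Python version of the conversion it performed looks like this (the function name is my own):

```python
def fahrenheit_to_celsius(fahrenheit):
    """Convert a temperature from degrees Fahrenheit to degrees Celsius."""
    return (fahrenheit - 32) * 5 / 9

# Boiling point of water: 212 F = 100 C
print(fahrenheit_to_celsius(212))
```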
Now — you can’t just say, “hey AI, build me Twitter.” Not yet. But if you break things down into manageable parts, it can save a coder a lot of time, even for advanced tasks.
I have a friend who does coding work, and he already uses the AI to give him a head start on his work.
Here’s another person making the same point:
As with ChatGPT’s essay-writing ability, the best current use of the AI is to treat its output as a starting point, which humans can then edit and supplement with their own knowledge and critical thinking.
But that will likely change down the road.
As Alex Tabarrok at Marginal Revolution notes, humans working with computer assistance were once better at playing chess than unaided humans, or computers alone. But for a while now, computers alone have been better at chess than even humans working along with computers.
The same thing, he notes, has happened with facial recognition. A study in the Journal of Experimental Psychology found that humans no longer provide added value in matching an identity with a photo. A computer working alone now makes fewer errors.
What’s new: An AI that can beat you at strategy and negotiation
AIs long ago beat humans at Chess (1997), Go (2015), and Poker (2019). I even remember watching the 1997 game live, as a kid, on dial-up.
Just in the last month, it was announced that AIs have beaten humans at an even more impressive feat: the board game Diplomacy.
Diplomacy is relatively hard for an AI, precisely because it requires negotiating with humans.
But AIs now beat humans at Diplomacy! The creator of the AI explains:
It’s quite hard to tell which player is AI. (Answer here2) I also watched a full game of a human playing against AIs, and they're impressively realistic throughout.
Overall, the AI was tested in 40 games of online Diplomacy, and it scored in the top 10% of human players.
What’s new: An AI that provides ethical guidance
OK, this one is just for humor:
Good to know.
Limitations of current AI — no “critical thinking”
The big limitation is that the AI cannot “think critically” yet; it doesn’t come up with novel claims, or use parts of its database to refute other parts. Instead, it reaches its conclusions via pure word association.
That’s a lot more powerful than it sounds, because much of what we humans count as “thinking” is mere word association. This is why political activists fight so hard over obnoxious word games.3
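To make "pure word association" concrete, here is a deliberately crude toy: a bigram model that only knows which words followed which in its training text, and generates by chaining those associations. This is not how ChatGPT actually works — it uses a neural network at vastly greater scale and sophistication — but it illustrates the basic idea of producing text from learned word connections rather than reasoning:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for each word, the words that followed it in the text."""
    words = text.split()
    followers = defaultdict(list)
    for a, b in zip(words, words[1:]):
        followers[a].append(b)
    return followers

def generate(followers, start, length=8, seed=0):
    """Chain word associations: repeatedly pick a word that followed the last one."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

model = train_bigrams("the allies won the war because the allies had more supplies")
print(generate(model, "the"))
```

The toy model can only recombine associations it has seen; it cannot notice that one part of its training text contradicts another — which is the same kind of limitation, at a tiny scale, described above.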
The AI’s word association limitation comes across when testing it — you can see that it never seems to come up with novel ideas. It also credulously accepts and repeats incorrect information, as long as the incorrect info is closely and positively linked to the question in its giant database.
To take the first essay in this post as an example, the AI can see that most of its database on WWII victory cites US military and economic support — and also the Soviets’ sacrifice of millions of soldiers.
If some humans also wrote opinions arguing that the Soviets were only able to sustain such losses thanks to US support — then, the AI could mention that too.
What it cannot really do, it seems, is come up with such an argument on its own.
An incredibly advanced word-association model might be able to. For example, if humans made similar claims about combatants in other wars, an AI might connect those patterns to the WWII combatants and construct a novel argument about WWII.
But ChatGPT isn’t there yet.
For example, when Noah Smith asked ChatGPT about “the most important software company” created in the 2010s, the AI repeatedly cited companies created in the 2000s — but which were, apparently, closely verbally linked with being important, and which had an impact in the 2010s.
Here’s another example of a failure:
But, in my testing, this kind of error is rare — I had to seek out examples. ChatGPT answered virtually all of my questions coherently.
Also, AI will improve further, and it has been getting better at an exponential rate recently (reminiscent of the ever-doubling speed of computers).
As an aside, it remains to be determined how much of human “critical thinking” IS just incredibly-advanced word association, vs some unique method.
We may soon find out, based on what AI is (and isn’t) able to achieve with pure word association.
So they made a decent chatbot, who cares?
When I showed the above to a relative of mine, who is of a somewhat Luddite bent, he was skeptical that AI would actually change our lives much in the next 10 years.
To respond, I typed in the following:
This gives a flavor of what it can do. Many white-collar professionals can already use this ChatGPT AI as a starting point in some of their work.
However, the best human output still beats AI, for now. A second hat-tip to Alex Tabarrok, who quotes Sam Hammond about further incoming uses of AI:
within a decade, ordinary people will have more capabilities than a CIA agent does today. You’ll be able to listen in on a conversation in an apartment across the street using the sound vibrations off a chip bag. You’ll be able to replace your face and voice with those of someone else in real time, allowing anyone to socially engineer their way into anything. Bots will slide into your DMs and have long, engaging conversations with you until it senses the best moment to send its phishing link … The courts, meanwhile, will be flooded with lawsuits because who needs to pay attorney fees when your phone can file an airtight motion for you?
Really, we’ll be able to listen in on a conversation in an apartment across the street using video of the sound vibrations off a chip bag in the room?
Well, you’ll need a good camera:
Reconstructing audio from video requires that … the number of frames of video captured per second be higher than the frequency of the audio signal… researchers used a high-speed camera that captured 2,000 to 6,000 frames per second. That’s much faster than the 60 frames per second possible with some smartphones, but well below the frame rates of the best commercial high-speed cameras, which can top 100,000 frames per second.
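The constraint in that quote is essentially the Nyquist sampling criterion: strictly, the sampling rate (here, the camera's frame rate) must exceed twice the highest audio frequency you want to recover. A quick sanity check:

```python
def min_frame_rate(max_audio_hz):
    """Nyquist criterion: the sampling rate must exceed twice the
    highest frequency to be recovered."""
    return 2 * max_audio_hz

# Recovering audio up to 1,000 Hz needs more than 2,000 frames per second,
# in line with the 2,000-6,000 fps cameras the researchers used.
print(min_frame_rate(1000))

# By the same criterion, a 60 fps smartphone camera could only capture
# frequencies up to about 30 Hz.
print(min_frame_rate(30))
```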
But it’s already possible, and AI could make it consumer-accessible within 10 years.
These come on the heels of AI making incredible strides in artwork, which Scott Alexander covered excellently here.
Conclusion - implications and art
The above has big implications for society over the coming decades, but here I just want to share news about the technology, rather than speculate.
One thought I do have — given the possibility of AIs replacing a lot of human labor — is that it seems like a good idea to own a bit of the companies that own the AIs. In the case of OpenAI, Microsoft owns a big share of it. Google, meanwhile, owns DeepMind, another cutting-edge AI lab.
The recent AI developments make me optimistic about both of those companies.
Bigger picture, I don’t know enough to wade into the serious debates about whether future ultra-smart AIs will someday go rogue and wipe out humanity.
But I do think there’s a case for optimism, if they can remain controlled and specialized. We can already see the glimmers of future increased productivity.
To use the AI yourself, you (anyone) can go to OpenAI.com and create an account. I had a bit of trouble getting my login working, but once I clicked that I wanted to create an app (instead of the “media/press” option) everything worked smoothly.
I didn’t mention that the AI has gotten excellent at art, because that’s already sort of well-known. But it really is incredible.
Here’s an AI’s first try at drawing “The AI Revolution Has Begun”, using the AI Midjourney V3:
Good enough for my site’s cover art. The cover art for last week’s post on election betting was also AI generated. You can see it on the Maximum Truth homepage.
And I’m not a poet, but to my layman’s eye, the AI poetry also seems quite good. Just now, I asked it:
The most questionable line is about coordination between the US and USSR, which was not always great. For example, towards the end of the war, Stalin tried to convince his Western allies that he was not particularly interested in taking Berlin. He then went straight for Berlin, and reaped a PR and technological victory as a result. But coordination was still significant, with the US providing nearly $200 billion (inflation-adjusted) to the Soviets, consisting of: “400,000 jeeps and trucks; 12,000 armored vehicles including 7,000 tanks … 11,400 aircraft … and 1.75 million tons of food.” The essay seems accurate enough here.
Austria is the AI. Click the footnote (“2”) again to teleport back up to where you were.
One of the most blatant and gross examples has been the recent attempt to redefine “racism” to exclude racism as long as the hate/harm is directed against a “privileged”/“dominant”/“powerful” racial group.
Another is the attempt to redefine “democracy” away from “the majority directly decides policy via ‘one person, one vote’” and towards something more like “whatever elite consensus says is pro-equality.”
But, that’s a subject for a future post!
Good article, Maxim, thank you. I agree with the commenter below that the AI style is a giveaway (and with you that GPT is actually extremely strong at changing style, even GPT-2 was very strong at this already).
The poetry is pretty rudimentary. I feel that, like your essay example, it is middle school level, but more like a B- effort. It mostly does vague attempts at AABB/ABAB that scan poorly. It will produce a "haiku" that violates the syllable structure, and that's about as simple as it can get. I think the best thing I saw is that it can do a villanelle of sorts.
Input: Please write a villanelle on the topic of using excess deaths to measure Covid effects uniformly across countries.
Output: The pandemic grips us tight
With fear and death in tow
We search for ways to fight
Excess deaths show us the light
A metric to measure and know
The pandemic grips us tight
Uniform across lands, a uniform sight
No country escapes its blow
We search for ways to fight
The data speaks with all its might
A somber tale of woe
The pandemic grips us tight
We must learn from our plight
And better prepare for the next go
We search for ways to fight
The villanelle repeats with all its might
As the pandemic continues to grow
The pandemic grips us tight
We search for ways to fight
Re: Diplomacy, it was immediately obvious that Austria was the AI. One issue I've seen with GPT (both in my own research and online) is that it really, really wants to speak in full sentences and formal language, and is not good at casual texting.