Commentdämmerung
On GPT-3 Imitations
GPT-3 (OpenAI’s text generation model) imitating human text: We aren’t pulling the mask off the machine to reveal a genius wizard, we’re pulling the mask off each other to reveal that the bar is low. People are impressed that it reads like a legitimate 12th grade essay, but this tells us something about 12th graders presenting their ideas as wordlike spam, and about our low expectations of them. The machine will surpass the average human, but only because we all seem to agree that we don’t expect much coherence from the average human.
It’s a little spooky for another reason. In places where very average people read very average comments all day long (reddit, etc) I think we can expect a Commentdämmerung moment in the near future.
Imagine you want to stonewall a thread about a topic. Or every thread about a topic. Any time the subject is posted, comments start rolling in. People arguing with each other along familiar lines. The same back-and-forth as usual. Except this time, it’s all bots, and it’s indistinguishable, because the speech patterns of average people are… average. And it’s way, way louder. You sort and collapse comments to try to find a new angle, but you can’t. The thread is saturated with the blandest takes imaginable, as quickly as the spam filters allow.
How do you get to the point, then, where you’re filtering for only human comments? What happens when it’s 99,000 comments, and 99% of them bots? Will you know? Will you be influenced?
Will the average person be influenced? If advertising, the idea of saying something over and over, works, and words are free, and you can GPT-3 your hand-rolled (or state-authorized) propaganda every day, in every comment thread, in every forum, so that it looks like an average person is advocating for X, will other average people be more convinced of X?
This is the twilight of the comment sections.
Rather than authors GPT-3ing their own works and then revealing that they were ghost-written by the computer, the creepy scenario is someone hand-crafting conspiracy theories or lies so wild that no respectable person would fall for them, and then using GPT-3 bots to make it look like consensus is real, and convincing millions via synthetic social pressure.
Or: people sometimes edit Wikipedia, write an article that cites Wikipedia as a source, then several days later change Wikipedia to cite the article as its source. A synthetic, circular citation. What does that look like at scale? Remember: the bar for writing is low.
I don’t think the future will be dark. In fact I’m quite optimistic. But I think social networks and forum sites will change. They won’t compete over who has the biggest piles of content (user counts), they will compete over context (authenticity, real friends, small discussions). In other words, the next reddit, twitter, & facebook etc competitors will not look like reddit, twitter & facebook.
They will be smaller by design, harder to get into, and slower-paced. And they’ll be better. Note that by solving the GPT-3 problem, you also largely solve the troll problem. Well, hopefully. Note that the Commentdämmerung scenario above can only happen as long as words are free. The simplest way to avoid it is to charge a small admission fee. Then at least only actors with deep pockets can influence the average reader.
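The economics of that fee are worth making concrete. A minimal back-of-envelope sketch, using illustrative numbers I’m assuming (a $5 fee and the post’s 99,000-comment swarm), not figures from any real platform:

```python
# Back-of-envelope sketch: how a small admission fee changes the
# economics of comment flooding. Fee and volume are assumed numbers.

def flood_cost(num_accounts: int, fee_per_account: float) -> float:
    """Cost of fielding a bot swarm when each account pays an admission fee."""
    return num_accounts * fee_per_account

# When words are free, 99,000 bot comments cost nothing but compute.
free_cost = flood_cost(99_000, 0.0)

# A $5 admission fee per account turns the same swarm into real money.
paid_cost = flood_cost(99_000, 5.0)

print(free_cost)  # 0.0
print(paid_cost)  # 495000.0
```

The point isn’t the exact figure; it’s that the fee moves the marginal cost of a synthetic comment from effectively zero to something only well-funded actors can pay.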
Note: GPT-3 isn’t there yet for something like this. After reviewing a lot of GPT-3 output today, I’m actually unimpressed. The daemon of genius clearly isn’t home. The uncannily good samples that get shared are shared by humans; that is, they’re hand-picked, which makes the output look better than it is on average. But we can expect a point in the future where free-for-all comment sections are essentially over, if actors want them to be. I think this is unfortunate, because most of world history happened in dialogue; then we shifted to mass media, inventing ideology along the way, and it’s only recently, in the internet age, that we have returned to the agora model of mass person-to-person communication. (I’ve written about that, here). It would be a big shame if this ruins the bigger agoras, and it might.
I find it hard to believe at this point that synthetic writing will replace good writers, just as computers have not replaced chess players. But I suspect it will become a crucial tool for creatives, just as chess engines have become a crucial tool for chess players to see what they’re doing wrong, how they could improve, and so on. A thesaurus is useful; a GPT-4-saurus would be much nicer. Imagine the writer’s block one could overcome with a suggestion engine trained on you and the writers you admire! Perhaps it’s not writers who should worry, but editors.
Then again, do imagine a suggestion engine writing away alongside you. After a while, are you doing the thinking, or are you letting the GPT-4-saurus take over? Are you convincing the readers, or is the program’s most-compelling-path-of-least-resistance just operant conditioning both you and the readers? Who’s clicking publish?