How far can ChatGPT go to fool humans?
21 Jan, 2023
The answer does not rest with machines, but with humans.
A paid 🔒 weekly emailer that explains fundamental shifts in business, technology and finance that happened over the last seven days in India. In a way you’ll never forget. Someone sent you this? Sign up here
Good Morning Dear Reader,
For over a month now, we’ve been told that ChatGPT is going to change everything.
The biggest optimists seem to be on LinkedIn (where else?), where people breathlessly share use cases every day.
We’ve been told that ChatGPT is going to change content marketing. We’ve been told it’s going to change journalism. We’ve been told it’s going to change education. We’ve been told that it’s one of the biggest innovations in recent history, and it’s going to result in a paradigm shift that’ll impact multiple industries, changing them forever.
Mostly, I agree. This technology is revolutionary, and a lot of things will never be the same again.
But we’ve also been told other things. Like how, as AI gets better and better, it’ll be able to do more things, like write great stories and screenplays. Or poetry. Or even music. And we’re told that this will be mesmerising, beautiful, and human-like. There’s some evidence for this already. Even now, ChatGPT can create content in the style of your favourite painter, writer, or artist. As computational power grows, it’ll only get better and better.
But… not really. This is not going to happen.
The central question around ChatGPT is essentially:
What happens when AI-powered content becomes so good that it can fool humans?
For a month now, most of the conversation around ChatGPT has been focused on the AI-powered content part. Breathless LinkedIn posts aside, there have been long Twitter threads about the ethics around it and lots of strategy presentations at company offsites about what happens when the cost of content creation becomes zero. These are legitimate things to talk about, and some of it is interesting.
But if you ask me, the far more interesting aspect—the one that’s less discussed—is not about technology, but about the last part of that question, i.e., humans.
The technology is going to get better and better, and it’s going to find more and more use cases. This is inevitable. But if we really want to understand how AI-powered content that mimics humans is going to change the world as we know it, the answer lies less with understanding the technology and more with understanding humans.
Let’s dive in.
ChatGPT and the one-word Turing test
Back in June 2020, when GPT-3, the predecessor to ChatGPT, was released, I got access to it via a subscriber. I wrote about OpenAI, the company that developed it, and told the story of how AlphaGo, an AI-powered computer program, beat Lee Sedol, an 18-time world champion, at Go, one of the hardest games in the world. In that edition, I even used GPT-3 to generate the first 200 words of the newsletter, which I revealed to the reader only at the very end.
At the time, I asked, “How long before it can write the next 200 words?”
Today, I have the answer.
Never.
Humans have been obsessed with the question of whether machines can match human intelligence for centuries now. However, in 1950, one person finally came up with a way to measure it. You’ve probably heard of it. It’s called the Turing Test. Essentially, humans are asked to determine whether they are chatting with a human or a machine, simply by asking questions via a text box and observing the responses. Of course, the Turing Test isn’t meant to be a precise, objective definition of intelligence. But it’s a useful heuristic, and in the context of ChatGPT, which attempts to create human-like content, it’s even more relevant.
ChatGPT is extremely powerful, but as many have pointed out (including its creators), it’s not “intelligent” in the way we understand the word. It’s generative AI, which basically makes it excellent at pattern matching and at regurgitating human content created in the past. It makes basic errors and is often factually wrong, but we tend to overlook all of that because occasionally, it feels human. Just a little bit. And we fall for it.
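To make “pattern matching and regurgitation” concrete, here’s a toy bigram text generator in Python. This is emphatically not how ChatGPT works internally (ChatGPT is a large transformer trained on a vast corpus), but it illustrates the same basic move: sample a continuation from patterns observed in past human text, with no notion of truth.

```python
import random
from collections import defaultdict

# "Train" on a tiny corpus: count which word follows which.
corpus = "the cat sat on the mat and the dog sat on the rug".split()
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(word, length=8):
    """Emit text by repeatedly sampling an observed continuation."""
    out = [word]
    for _ in range(length):
        if word not in follows:   # dead end: no observed continuation
            break
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the dog sat on the mat and the cat"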
We already fall for this with photos. For humans, photos of other people read as real. It’s hard to accept that none of these people exist, and we overlook obvious evidence—like a superhuman number of teeth or misshapen fingers, the biggest AI giveaways.
But we are far, far more gullible with text.
Text is one of the earliest forms of communication, and we automatically assume it’s created by humans, especially in a conversational context like a chat. We also associate intelligence with its quality. If something is polished and well-written, with correct spelling and grammar, we believe that the writer is intelligent—completely ignoring the fact that tools like Grammarly and spellcheck have assisted the writer significantly.
However, there’s still one step left before machines like ChatGPT can really fool humans.
They first need to fool other machines.
And ChatGPT is doing this very well. It’s already being used to create content that fools search engines into believing it’s relevant and useful for humans. Today, there are millions of companies across the world offering content writing services—essentially farms churning out content for products sold on e-commerce sites, on media properties, or on SaaS blogs. All for exactly one purpose: to show up at the top of Google search as a relevant result.
And now, well, ChatGPT is changing all that. Take the example of CNET, which is going through a fairly public controversy around the use of AI in publishing:
CNET was once a high-flying powerhouse of tech reporting that commanded a $1.8 billion purchase price when it was acquired by CBS in 2008. Since then, it has fallen victim to the same disruptions and business model shifts as the rest of the media industry, resulting in CBS flipping the property to Red Ventures for just $500 million in 2020.
Red Ventures’ business model is straightforward and explicit: it publishes content designed to rank highly in Google search for “high-intent” queries and then monetizes that traffic with lucrative affiliate links. Specifically, Red Ventures has found a major niche in credit cards and other finance products. In addition to CNET, Red Ventures owns The Points Guy, Bankrate, and CreditCards.com, all of which monetize through credit card affiliate fees. The CNET AI stories at the center of the controversy are straightforward examples of this strategy: “Can You Buy a Gift Card With a Credit Card?” and “What Is Zelle and How Does It Work?” are obviously designed to rank highly in searches for those topics. Like CNET, Bankrate and CreditCards.com have also published AI-written articles about credit cards with ads for opening cards nestled within. Both Bankrate and CreditCards.com directed questions about the use of AI to Lance Davis, the vice president of content at Red Ventures; CNET’s disclosure also included Davis as a point of contact until last week.
This type of SEO farming can be massively lucrative. Digital marketers have built an entire industry on top of credit card affiliate links, from which they then earn a generous profit. Various affiliate industry sites estimate the bounty for a credit card signup to be around $250 each. A 2021 New York Times story on Red Ventures pegged it even higher, at up to $900 per card.
Viewed cynically, it makes perfect sense for Red Ventures to deploy AI: it is flooding the Google search algorithm with content, attempting to rank highly for various valuable searches, and then collecting fees when visitors click through to a credit card or mortgage application. AI lowers the cost of content creation, increasing the profit for each click. There is not a private equity company in the world that can resist this temptation.
The problem is that there’s no real reason to fund actual tech news once you’ve started down that path.
Inside CNET’s AI-powered SEO money machine, The Verge
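To see why this is so tempting, here’s a back-of-the-envelope sketch in Python. Only the $250-per-signup bounty comes from the excerpt above; the traffic, conversion rate, and article costs are illustrative assumptions, not reported figures.

```python
# Back-of-the-envelope economics of one SEO affiliate article.
payout_per_signup = 250   # USD, from the Verge excerpt above
visits_per_month = 2_000  # assumed organic search traffic to the article
signup_rate = 0.005       # assumed: 1 in 200 visitors signs up for a card

monthly_revenue = visits_per_month * signup_rate * payout_per_signup  # $2,500

human_article_cost = 300  # assumed freelance writing fee
ai_article_cost = 1       # assumed API cost plus a minute of editing

print(f"Monthly revenue per article: ${monthly_revenue:,.0f}")
print(f"Human article pays for itself in {human_article_cost / monthly_revenue:.2f} months")
print(f"AI article pays for itself in {ai_article_cost / monthly_revenue:.4f} months")
```

Whatever the real numbers are, the direction is the same: once the marginal cost of an article collapses to near zero, every article that ranks is almost pure profit.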
Once you’ve fooled machines, well, the next step is to start fooling humans.
But which humans?
Well, obviously the not-so-smart ones.
This is an uncomfortable distinction and it probably sounds elitist, but it’s relevant. There’s a tight one-to-one relationship between the quality of content and its consumers. Just take a look at the bestseller section at any bookstore and you’ll know what I mean. It’s also why scam emails and SMSes are deliberately filled with typos and grammatical errors—it’s all done to select for the audience that can’t see the mistakes and flaws, because they are the ones who are the most gullible, and er… dumb.
ChatGPT just takes this further. It creates content that sounds insightful and relevant for an audience a couple of levels above those who think emails from Nigerian princes are insightful and relevant.
By this, of course, I mean the LinkedIn crowd.
Here’s an example I created using ChatGPT.
If I published this on LinkedIn, I’m pretty sure a ton of people would tell me how insightful they found this. I’d even get a Kudos or two. The lesson here is not how intelligent ChatGPT is, but how dumb humans are.
To be fair, sometimes, it’s not about whether the people are smart or dumb, but the context in which the information is presented. If I published this in a newspaper, most people would wonder why it was so shallow.
Eugene Goostman was a chatbot that some consider to have passed the Turing test in 2014, when it convinced a third of the human judges that they were chatting with a human and not a machine. It was portrayed as a 13-year-old Ukrainian boy—which led the judges to overlook its many obvious grammatical errors and poor general knowledge, fooling them into believing it was human.
ChatGPT will get better and better. It will create great LinkedIn posts. It’ll create college essays that will fool admission officers. And over time, more and more people will occasionally believe that content created by it is created by humans and attribute authenticity to it. How long before it’s used to create WhatsApp forwards that sound convincing but are actually nonsense?
Back in 1986, the American philosopher Harry G. Frankfurt wrote a famous essay called ‘On Bullshit’, in which he examines bullshit speech in detail. The essence, to quote the Wikipedia summary, is that “Frankfurt determines that bullshit is speech intended to persuade without regard for truth. The liar cares about the truth and attempts to hide it; the bullshitter doesn't care if what they say is true or false, but cares only whether the listener is persuaded”.
ChatGPT is going to get better and better at bullshit.
But only to a point.
The top echelons of creators and discerning consumers won’t ever be affected. In fact, they may thrive. I think these creators will wall off their content from ChatGPT and AI (this has already started). Soon, content and media companies will proudly claim that every single word on their websites is “AI-free”, or “handcrafted”. When the market is flooded by a mass-market commodity, artisanal products talk up their distinctiveness and try to stand out. Why should content be any different?
Imagine that you’re living in some dystopian future, and you have been accused of being an advanced AI, which is outlawed in this society. The penalty is death, and in order to convince the judge who will decide your fate, you can utter just one word, any word you like from the dictionary, to prove that you’re flesh and blood. What word do you choose?
It sounds like the setup for a cheesy sci-fi short, but this is actually part of a curious paper from a pair of researchers at MIT on something they call the “Minimal Turing Test.”
Instead of a machine trying to convince someone they’re human through conversation — which was the premise of the original Turing Test, outlined by British scientist Alan Turing in his seminal 1950 paper “Computing Machinery and Intelligence” — the Minimal Turing Test asks for just one word, either chosen completely freely or picked from a pair of words.
What word would you pick to convince a human that you were a human?
They ran this test with hundreds of participants, and words like “love,” “human,” and “please” scored strongly.
Then they ran another test.
This time, they asked participants to choose between pairs of words and decide which of the two they thought was given by a human.
The winning answer?
Well…
Again, words like “love,” “human,” and “please” scored strongly, but the winning word was simpler and distinctly biological: “poop.” Yes, out of all of the word pairings, “poop” was selected most frequently to denote the very essence and soul of humanity. Poop.
A one-word Turing Test suggests ‘poop’ is what sets us apart from the machines, The Verge
What is poop if not another form of bullshit?