The machines are here
The Nutgraf is a 10-min newsletter sent at 10 AM IST every Saturday. It connects the dots and synthesizes one big event in business, technology and finance that happened over the week in India. In a way you’ll never forget.
This is a paid newsletter that’s available exclusively to The Ken’s premium subscribers.
Just 10 mins long · Synthesis not analysis · Sometimes memes
Last week, OpenAI released GPT-3, the most advanced language AI ever. The results will amaze you.
A paid 🔒 weekly emailer that explains fundamental shifts in business, technology and finance that happened over the last seven days in India. In a way you’ll never forget. Someone sent you this? Sign up here
Good Morning Praveen,
Last week, OpenAI released something called GPT-3, which represents a huge, promising leap for Artificial Intelligence. Why is this a big deal? Well, because Deep Learning allows machines to do anything that they’ve been trained to do, and apply it to completely unrelated fields. Think of it as the ability to write poetry, or maybe give it Shakespeare as input and it will be able to write sonnets.
The latest machine from OpenAI—implemented in GPT-3—can now tell if a picture is a dog or not. Try it yourself. Give it a try with your pet, or take a picture of your pet, and see if GPT-3 can correctly identify the animal.
This is huge. We’re talking about something that works similar to the human brain. Something that has the ability to learn on its own. This is probably a bigger deal than that phone you’re carrying in your pocket. Because that phone in your pocket is not alive. It doesn’t think. It doesn’t grow. It does what you tell it to do.
Maybe now is a good time to understand what Artificial Intelligence means, and where it will lead.
Be safe. Take care. Let’s dive in.
As always, here's your special invite link to share this newsletter with anyone:
Our story begins in June 2016, at a tech conference called Code.
It’s an annual conference held at Terranea Resort, located on a breathtaking stretch of Pacific coastline in Rancho Palos Verdes, California, just 30 miles south of Los Angeles. It’s one of the most exclusive events in the tech world, with speakers picked on an invitation-only basis. Past panelists include Steve Jobs, Bill Gates, Jeff Bezos—you get the picture. Tickets for attendees range from $6,500 to $9,000 (~Rs 4.5-6.8 lakh) each.
Walt Mossberg and Kara Swisher, two of the most prominent journalists and chroniclers of the American tech industry, are on stage. They are interviewing Elon Musk, founder and CEO of Tesla and SpaceX. After some discussion about electric cars and space vehicles, the conversation shifts to Artificial Intelligence.
Kara Swisher asks, “I want to get clear what your thoughts are [about AI], because it’s mostly ‘Elon is scared of robots’...”
There’s a murmur in the audience. Even a few laughs. Musk shifts in his chair, looks up at the ceiling, and says, “I’m not scared of robots…” Mossberg and Swisher contextualise their question: There’s an overriding belief within the tech industry that AI will be good for everyone. What did Musk think?
Musk explains that it’s complicated, but takes the opportunity to talk about a company he founded called OpenAI, and why he chose to create it. He explains that OpenAI is set up as a non-profit, and has a clear mission.
...the intent with OpenAI is to democratise AI power. There’s a quote by Acton, he’s the guy who came up with power corrupts and absolute power corrupts absolutely…he came up with this ‘Freedom consists of the distribution of power and despotism in its concentration’. It’s important that if we have this incredible power of AI that it not be concentrated in the hands of the few…it’s not a world that we want
Mossberg asks who exactly is the despot here? Is it the computer?
“Well, it’s the people controlling the computer,” replies Musk.
Mossberg, ever the journalist, presses him. Is it a specific company?
Musk smiles, and carefully replies, “I won’t name a name. But there is only one.”
“And they are not preoccupied with making a car that will compete with you, I assume.”
Mossberg is presumably referring to Google, which was making heavy investments into self-driving cars.
Musk smiles and looks at the floor, avoiding eye contact, trying not to give anything away. He repeats his earlier sentence.
“There is only one.”
The last line of defence crumbles
Musk has been fairly vocal about his concerns over AI for a while. For good reason. He was an early investor in a company called DeepMind, which was focused on creating cutting-edge applications using AI. Musk insists his investment wasn’t driven by the need for financial returns, but “rather to keep a wary eye on the arc of AI.”
Shortly afterwards, in 2014, DeepMind was acquired by... Google.
A couple of years after the acquisition, DeepMind announced that it had built a computer program called AlphaGo. Its purpose? To play humans at a game called Go, and beat them.
This was significant for several reasons.
1. From a computational standpoint, Go is one of the hardest games in the world. It’s played between two players placing black and white stones on a board. A game of chess has 400 possible positions after the first two moves. A game of Go has nearly 130,000. Here’s what Google wrote in their blogpost:
But as simple as the rules are, Go is a game of profound complexity. The search space in Go is vast -- more than a googol times larger than chess (a number greater than there are atoms in the universe!). As a result, traditional “brute force” AI methods -- which construct a search tree over all possible sequences of moves -- don’t have a chance in Go. To date, computers have played Go only as well as amateurs. Experts predicted it would be at least another 10 years until a computer could beat one of the world’s elite group of Go professionals.
AlphaGo: Mastering the ancient game of Go with Machine Learning
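The arithmetic behind those numbers is easy to check: chess offers each side 20 possible first moves, while Go offers 361 opening points and then 360 replies. A quick sketch:

```python
# Possible positions after each side has moved once.
chess_first_moves = 20  # 16 pawn moves + 4 knight moves
chess_positions = chess_first_moves * chess_first_moves  # both sides move

go_first_moves = 361  # any intersection on a 19x19 board
go_positions = go_first_moves * (go_first_moves - 1)  # second stone on any other point

print(chess_positions)  # 400
print(go_positions)     # 129960
```

And that gap only widens with every subsequent move, which is why brute-force search collapses in Go.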
2. AlphaGo was built using a completely different method from earlier game-playing computers. It was built using something called reinforcement learning, not brute force. Thirty million moves from past games were fed into AlphaGo, and then the computer played against itself until it reached a point where it could play a human and win.
The conventional wisdom at the time was that it was impossible for a computer to beat a good Go player. A $1 million prize was announced. On one side was Google’s AlphaGo. On the other side was 33-year-old South Korean Lee Sedol. The Atlantic says Sedol “...is Michael Jordan, Tiger Woods, Roger Federer. He is one of those rare virtuosos who defines his era, who sets the pace for the rest of the world.” Sedol is an 18-time world champion at one of the hardest games in the world.
In March 2016, three months before Musk stepped on that stage in California, AlphaGo played five games against Sedol. It was expected to lose all five. Instead, it crushed Sedol 4-1. It wasn’t just what AlphaGo did but how it did it. It played Sedol not as a computer, but as a human. Most notably, in the second game, it made a move that seemingly made no sense to Sedol or the commentators. Why it did so would only dawn on them later.
Source: YouTube
Go is one of the oldest games in the world, and dates back over 2,500 years.
And a computer, in just a few months, reached a point where it had surpassed humanity’s accumulated knowledge of two and a half millennia.
Broadly speaking, there are two types of applications of Artificial Intelligence. One is specific, narrow AI—where computers are used to solve a specific, defined problem. Like playing chess, or tweaking search results, or playing Go.
Then there’s something called Artificial General Intelligence (AGI). This is the ability to create a computer that can solve multiple problems, even those that cannot be imagined. When you think of machines taking over the world, or reaching sentience, or anything from a science fiction movie— that’s AGI.
OpenAI was created with the express intent of solving AGI.
Last week, it released GPT-3, a language-generating AI that represents its most advanced step towards AGI.
And the results are… stunning.
Open becomes closed. Computers become human.
There are two narratives here. One is the story of OpenAI, the company, and how it evolved over the last few years. The story of OpenAI has been told by several outlets, but the definitive account is the longread by Karen Hao in the MIT Technology Review, from which I’ll quote extensively below. The other narrative is about GPT-3: how it’s being used and what it represents. For this, I’ll use papers published by the OpenAI team and the results that early closed-beta users have published on Twitter and their websites.
OpenAI was originally conceived with only a vague sense of purpose. The only thing the company and its early employees knew was that it had to solve AGI, and it had to end up being the good guys.
Very quickly, the company realised it had a problem.
As the team studied trends within the field, they realized staying a nonprofit was financially untenable. The computational resources that others in the field were using to achieve breakthrough results were doubling every 3.4 months. It became clear that “in order to stay relevant,” Brockman says, they would need enough capital to match or exceed this exponential ramp-up. That required a new organizational model that could rapidly amass money—while somehow also staying true to the mission.
The messy, secretive reality behind OpenAI’s bid to save the world, Karen Hao
OpenAI released a charter in April 2018. In it, it emphasised the mission of creating AGI, but also made an additional point: resources mattered, and the implication was that OpenAI would be willing to make some compromises to get them.
But let’s jump ahead to GPT-3 for a moment.
GPT-3 is an AI language model that is significantly different from its predecessors in one aspect—scale. I don’t want to get too deep into the technical aspects, but the premise is this—the larger the dataset that’s used to ‘train’ the AI, the better it can be at doing language operations. And GPT-3 isn’t just larger than its predecessors, it’s massive.
The graph below illustrates, on a relative scale, how big GPT-3 is with respect to its predecessors, not just by OpenAI, but even companies like Google and Microsoft.
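For a rough sense of that scale, here are the headline parameter counts from each model’s announcement (in billions; I’ve picked a few well-known peers for comparison):

```python
# Approximate parameter counts, in billions, from each model's announcement.
models = {
    "GPT-2 (OpenAI, 2019)": 1.5,
    "Megatron-LM (NVIDIA, 2019)": 8.3,
    "Turing-NLG (Microsoft, 2020)": 17.0,
    "GPT-3 (OpenAI, 2020)": 175.0,
}

baseline = models["GPT-2 (OpenAI, 2019)"]
for name, billions in models.items():
    print(f"{name}: {billions}B parameters ({billions / baseline:.0f}x GPT-2)")
```

GPT-3 is over a hundred times larger than GPT-2, and ten times larger than the biggest language model anyone else had built.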
Basically, GPT-3 was trained on several text-based datasets: Wikipedia, books and many, many more. All with the expectation that it would perform language-based tasks better than any computer ever. And almost mimic humans.
Here’s an example from the paper published by OpenAI, where they measure GPT-3’s ability to predictively complete sentences accurately with one word.
From the simple...
Alice was friends with Bob. Alice went to visit her friend _______
to the more complex...
George bought some baseball equipment, a ball, a glove, and a ______
The experiment is done in three ways: by giving GPT-3 no examples at all (zero-shot), by giving it one illustrative example (one-shot), and by giving it multiple examples (few-shot).
This is how GPT-3 did.
The greater the number of parameters, the closer it came to near-human accuracy.
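The three settings differ only in how the prompt is assembled before the model is asked to complete it. A minimal sketch (the prompt format here is my own illustration, not OpenAI’s exact benchmark format):

```python
def build_prompt(task, examples, query):
    """Assemble a zero-, one-, or few-shot prompt as plain text."""
    lines = [task]
    for text, answer in examples:  # zero-shot: examples is an empty list
        lines.append(f"{text} {answer}")
    lines.append(query)  # the model is asked to complete this final line
    return "\n".join(lines)

task = "Complete the sentence with one word."
examples = [
    ("Alice was friends with Bob. Alice went to visit her friend", "Bob."),
    ("The capital of France is the city of", "Paris."),
]
query = "George bought some baseball equipment, a ball, a glove, and a"

zero_shot = build_prompt(task, [], query)
one_shot = build_prompt(task, examples[:1], query)
few_shot = build_prompt(task, examples, query)
print(few_shot)
```

The striking result was that GPT-3 needed no retraining for any of this: the examples live entirely inside the prompt.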
Perhaps you are thinking, “How did GPT-3 do this? This must have cost an enormous amount of money for a non-profit.”
Yes, which is why OpenAI made a significant shift in strategy last year.
It moved away from being a non-profit organisation.
That structure change happened in March 2019. OpenAI shed its purely nonprofit status by setting up a “capped profit” arm—a for-profit with a 100-fold limit on investors’ returns, albeit overseen by a board that's part of a nonprofit entity. Shortly after, it announced Microsoft’s billion-dollar investment (though it didn’t reveal that this was split between cash and credits to Azure, Microsoft’s cloud computing platform).
Predictably, the move set off a wave of accusations that OpenAI was going back on its mission. In a post on Hacker News soon after the announcement, a user asked how a 100-fold limit would be limiting at all: “Early investors in Google have received a roughly 20x return on their capital,” they wrote. “Your bet is that you’ll have a corporate structure which returns orders of magnitude more than Google ... but you don’t want to ‘unduly concentrate power’? How will this work? What exactly is power, if not the concentration of resources?”
The messy, secretive reality behind OpenAI’s bid to save the world, Karen Hao
Very quickly, users were also discovering the power and breadth of GPT-3.
All you had to do was...ask. That’s it. No special training. No additional libraries. No special installation for specific use-cases. Just describe to GPT-3 what you need in simple, human-readable chat, and it would do it for you.
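The interface, in other words, is just text in, text out. A sketch of what such a request might look like (the field names are assumptions modeled on the early beta API, not a verified spec):

```python
import json

# Hypothetical completion request: all the 'programming' lives in the prompt.
request = {
    "prompt": "Write a two-line poem about the monsoon in Mumbai.",
    "max_tokens": 64,    # cap on the length of the generated text
    "temperature": 0.7,  # higher = more varied, lower = more predictable
}
payload = json.dumps(request)
print(payload)
```

Change the prompt, and the same call writes code, summarises a story, or answers trivia. That generality is the whole point.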
Last summer, in the weeks after the switch to a capped-profit model and the $1 billion injection from Microsoft, the leadership assured employees that these updates wouldn’t functionally change OpenAI’s approach to research. Microsoft was well aligned with the lab’s values, and any commercialization efforts would be far away; the pursuit of fundamental questions would still remain at the core of the work.
For a while, these assurances seemed to hold true, and projects continued as they were. Many employees didn’t even know what promises, if any, had been made to Microsoft.
But in recent months, the pressure of commercialization has intensified, and the need to produce money-making research no longer feels like something in the distant future. In sharing his 2020 vision for the lab privately with employees, Altman’s message is clear: OpenAI needs to make money in order to do research—not the other way around.
The messy, secretive reality behind OpenAI’s bid to save the world, Karen Hao
The story of OpenAI is that of a company growing more closed while it built something we still haven’t fully understood. We now have something remarkable with us, but we haven’t yet reckoned with the cost of what it took to get here.
In November 2019, Lee Sedol, the 18-time world champion who was beaten by DeepMind’s AlphaGo, retired from playing Go. He cited AI as the main reason for his retirement.
With the debut of AI in Go games, I’ve realized that I’m not at the top even if I become the number one through frantic efforts. Even if I become the number one, there is an entity that cannot be defeated.
Lee Sedol
Can a machine write a newsletter?
It’s good to get excited, but there’s a long way to go. There are tough questions about algorithmic bias and ethics, and quite frankly, some of GPT-3’s results look a little primitive at this point. It feels a bit like being introduced to the internet in the mid-90s.
Meanwhile, over the last week, people were discovering other aspects to GPT-3. It wasn’t just able to write and summarise text, it was able to create stuff as well. It was able to write beautiful short stories, poems, and creative fiction. In varying styles. Even in the styles of your favourite writers like Shelley or Shakespeare.
So I wondered—could it write a convincing introduction to a newsletter about GPT-3 itself? Maybe 5-6 sentences to set context?
Access to GPT-3’s API is tightly controlled right now, but I wanted to give it a shot. I made a public request for GPT-3 access on Twitter. A day later, a subscriber got in touch. He had access to GPT-3, and offered to help me test it out.
I gave him a prompt. Here it is:
Good morning XXX,
Last week, OpenAI released something called GPT-3, which represents a huge, promising leap for Artificial Intelligence. Why is this a big deal?
He took that prompt, gave it to GPT-3, and emailed me back the results.
You can read it for yourself. All you have to do is scroll up.
The first 200 words of this newsletter are written by GPT-3.
How long before it can do the next 200 words?
And the next 200 words?
You can share your special link, which will give the recipient a free 15-day trial to The Nutgraf. Here it is:
All credit to Sharath Ravishankar for the wonderful illustrations.
Above all, a huge thank you to Sushant Kumar, co-founder of Azuro and a longtime subscriber of The Ken, who patiently answered my questions about artificial intelligence and its applications, and helped me get access to GPT-3. He has a website you should check out. It’s the only GPT-3 app out there in the public domain. You can play around with it and see the results. It’s here.
Another subscriber tried to see if GPT-3 could summarise The Ken's paywalled stories. The results are here.
I keep saying it to everyone I meet: The Ken’s subscribers are amazing.