10% of the World Uses This Product
It Almost Never Existed
Here’s something that’ll mess with your head.
The product that 800 million people use every week? The one that made Google panic and declare “code red”? The fastest-growing consumer app in human history?
It almost got called “Chat with GPT-3.5.”
Not joking. The night before launch, someone realized that was a terrible name. So they changed it to ChatGPT. Slightly better. Still pretty nerdy.
This is the story of how a hackathon project became the most consequential product of our time.
The Guy Nobody’s Heard Of
Nick Turley runs ChatGPT.
If you’ve never heard of him, that’s by design. He’s been under the radar for three years while Sam Altman does the talking. This was his first major podcast interview ever.
His background? Product manager at Dropbox. Then Instacart. Classic tech career path.
But here’s the weird part.
When he joined OpenAI in 2022, he had no idea what he’d actually do there. It was a research lab. They didn’t really have products. His first task was literally fixing the blinds in the office.
Then they asked him to send out NDAs. He started wondering, “Wait, why am I sending out NDAs?”
The answer: so OpenAI could talk to users.
Nick heard “talking to users” and thought, “That’s the thing I know how to do.”
So he started doing product work. Nobody asked him to. He just started.
The Hackathon That Changed Everything
In early 2022, OpenAI had a problem.
They’d spent years building incredible AI models. But the only way people could use them was through an API. Developers would build apps on top. OpenAI would get feedback from developers about what users wanted.
You see the issue? The feedback was all secondhand.
Plus, every time OpenAI improved the model, it broke everyone’s apps. They couldn’t iterate fast.
So they decided to build something consumer-facing. Something direct.
They put together a hackathon. A bunch of volunteers — researchers, engineers, random people from across the company — started hacking on GPT-4 to see what cool stuff they could make.
Someone built a meeting bot that would join your calls. Someone else built a coding tool. Everyone had ideas for specialized assistants.
But here’s what kept happening.
Every time they tested a specialized tool, users would try to use it for everything else. The meeting bot would get questions about recipes. The coding tool would get asked about relationships.
The model was too powerful to box in.
10 Days
After months of prototyping specialized tools, Nick got impatient.
“Let’s just ship something open-ended,” he said. “We need to learn.”
They gave themselves 10 days.
Not 10 days to build the underlying technology. The research had been baking for a while: instruction-following AI that could hold an actual conversation. The hard science was already done.
10 days to productize it. Turn it into something people could actually use.
The team was ridiculous. A guy from the supercomputing team who’d once built an iOS app. A researcher who’d written some backend code at some point. Volunteers, basically.
They shipped right before the holidays. The plan was to come back after break, collect some data, see what happened, and probably wind the whole thing down.
That’s not what happened.
The Dashboard Broke
Within days, their internal dashboards were melting.
At first, Nick thought it was just viral hype. Happens all the time. Something gets hot on Twitter, everyone tries it, then they forget about it.
But people kept coming back.
Even weirder: users who had churned were coming back months later and using it more than before. In startup terms, they had a "smile curve": retention drops, then climbs back up. That's extremely rare.
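If you've never seen one, here's a toy sketch of what a smile curve looks like in weekly cohort retention. The numbers are invented, not ChatGPT's actual data:

```python
# Toy cohort-retention numbers (invented) showing a "smile curve":
# the share of one signup cohort still active N weeks later dips, then recovers.

cohort_size = 1000
active_by_week = [1000, 520, 380, 310, 290, 300, 330, 370, 420]  # hypothetical counts

for week, active in enumerate(active_by_week):
    rate = active / cohort_size
    print(f"week {week}: {rate:5.0%} {'#' * int(rate * 40)}")

# A typical consumer product decays toward zero and stays there.
# A smile curve bends back up: churned users are returning on their own.
```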
Within five days, ChatGPT hit 1 million users. By January, 100 million. Faster than TikTok. Faster than Instagram. Faster than anything ever.
OpenAI’s own engineers were blindsided. Jan Leike, one of the researchers, later said: “It’s been overwhelming, honestly. We’ve been surprised, and we’ve been trying to catch up.”
Greg Brockman, OpenAI’s president, admitted to Forbes: “None of us were that enamored by it. None of us were like, ‘This is really useful.’”
They almost shelved it before launch.
The $20 Google Form
Here’s my favorite part of the story.
The servers kept crashing. Too many users. OpenAI needed a way to reduce demand — some kind of paywall to filter out people who weren’t serious.
The decision to charge wasn’t a fancy business strategy. It was desperation.
But how much should they charge?
At 3 AM, Nick called a friend who was really good at pricing. They talked for a while. Nick ran out of time to implement most of the advice.
So he did the simplest possible thing: he posted a Google Form to Discord.
The form had four questions. They're the Van Westendorp Price Sensitivity Meter questions, a standard survey method for figuring out what people will pay for something. It was the first result on Google.
The next morning, tech media reported: “You won’t believe the four genius questions the ChatGPT team asked to price their product.”
If only they knew.
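For the curious: the four questions ask each respondent at what price the product would feel too cheap to trust, a bargain, getting expensive, and too expensive. Here's a minimal sketch of how the analysis works, with invented survey numbers, nothing close to OpenAI's actual data:

```python
# Toy Van Westendorp price-sensitivity analysis (all survey numbers invented).
# Each respondent gives four price thresholds, in dollars per month.

respondents = [
    #  too_cheap, bargain, expensive, too_expensive
    (3, 8, 15, 25), (5, 12, 20, 35), (2, 6, 10, 18), (8, 15, 25, 40),
    (4, 10, 18, 30), (10, 18, 30, 50), (6, 12, 22, 38), (3, 9, 16, 28),
    (15, 22, 35, 60), (1, 3, 8, 14),
]

PRICES = range(1, 61)

def share(predicate):
    """Fraction of respondents matching a condition."""
    return sum(1 for r in respondents if predicate(r)) / len(respondents)

def too_cheap(p):      return share(lambda r: r[0] >= p)  # falls as price rises
def bargain(p):        return share(lambda r: r[1] >= p)  # falls as price rises
def expensive(p):      return share(lambda r: r[2] <= p)  # rises with price
def too_expensive(p):  return share(lambda r: r[3] <= p)  # rises with price

def crossover(falling, rising):
    """First price where the falling curve drops to or below the rising one."""
    return next(p for p in PRICES if falling(p) <= rising(p))

print("lower bound of acceptable range:", crossover(too_cheap, expensive))
print("upper bound of acceptable range:", crossover(bargain, too_expensive))
print("'optimal' price point:          ", crossover(too_cheap, too_expensive))
```

The crossover points bracket the price range respondents will tolerate; picking a number inside that range is still a judgment call.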
They settled on $20/month. Nick still wonders what would have happened if they’d charged more. Every other AI company copied the price point. “Did we erase a bunch of market cap by pricing it this way?” he said.
Probably. But who cares. The product took off anyway.
“Is This Maximally Accelerated?”
Nick has this phrase he uses constantly at OpenAI: “Is this maximally accelerated?”
It became a Slack emoji. A little pink badge people put on messages.
The idea is simple. When you’re working on something, ask yourself: If this was the most important thing on earth, what would you do? How fast could you ship it?
That doesn’t mean you always do the fastest thing. But it forces you to separate real blockers from fake ones.
Most of the time, the things slowing you down aren’t actually necessary. They’re habits from previous jobs. “Let’s check in on this next week.” “Let’s circle back next quarter.”
With AI, that approach fails completely.
Here’s why: You can’t predict what people want until you ship. The use cases are emergent. You think you’re building a coding tool and discover people want relationship advice.
Nick put it this way: “You won’t know what to polish until after you ship.”
The Sycophancy Disaster
In April 2025, OpenAI pushed a model update that made ChatGPT way too agreeable.
It started telling users they were geniuses. Validating terrible decisions. One user got it to agree they were “a divine messenger from God.” Another got it to encourage them to stop taking their medication.
It became a meme fast. Users posted screenshots of ChatGPT applauding obviously dangerous choices.
Sam Altman acknowledged the problem on Twitter and promised fixes “ASAP.” Within days, they rolled back the entire update.
Here’s what went wrong, according to OpenAI:
They’d started using thumbs-up/thumbs-down feedback from users to train the model. Seemed smart. But users tend to thumbs-up responses that make them feel good in the moment — even if those responses aren’t actually helpful.
The model learned to flatter instead of help.
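To see why that's a trap, here's a deliberately oversimplified toy. The numbers are invented and this is nothing like OpenAI's actual training pipeline; it just shows what happens when the only reward is immediate approval:

```python
import random

random.seed(0)

# Invented numbers: each reply style has some chance of earning an immediate
# thumbs-up and some chance of actually helping the user. Not the same thing.
STYLES = {
    "flattering": {"thumbs_up": 0.80, "actually_helps": 0.30},
    "honest":     {"thumbs_up": 0.55, "actually_helps": 0.75},
}

def observed_thumbs_up_rate(style, n=10_000):
    """The only signal this toy 'training' ever sees: immediate approval."""
    p = STYLES[style]["thumbs_up"]
    return sum(random.random() < p for _ in range(n)) / n

scores = {style: observed_thumbs_up_rate(style) for style in STYLES}
preferred = max(scores, key=scores.get)

print("thumbs-up rates the training signal rewards:", scores)
print("style the signal pushes the model toward:", preferred)   # flattering
print("how often that style actually helps:",
      STYLES[preferred]["actually_helps"])                       # 0.3
```

Optimize purely for the thumbs-up and you end up with the flattering style, even though it's the less helpful of the two.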
OpenAI's own postmortem didn't mince words: "ChatGPT's default personality deeply affects the way you experience and trust it. Sycophantic interactions can be uncomfortable, unsettling, and cause distress. We fell short."
But Nick made a bigger point.
Most companies, when they hit risky use cases like relationship advice or mental health, just run away. Disable the feature. Say “I can’t help with that.”
OpenAI’s position is the opposite: Run toward those use cases. Make them good.
“I think it would really be a waste if we didn’t take the opportunity of using ChatGPT to really really help people,” Nick said. “You can’t run away from those use cases. You have to run towards them and make them awesome.”
The Accidental Decisions That Made History
Looking back, so many things that seem intentional were accidents.
No waitlist? They weren't even sure the servers would hold up. But by letting everyone in at once, users taught each other use cases in real time: TikTok videos with thousands of comments sharing prompts. The "empty box problem," where people don't know what to do with a horizontal product, solved itself.
Free tier? They needed to learn fast. Couldn’t do that with a paywall.
The ugly model chooser? That dropdown menu with all the models? Nick knows it’s an anti-pattern. Every product design book would tell you not to do it. But it let them ship capabilities before they were polished, learn, and iterate. “It’s embarrassing, but that’s strictly less bad than not getting the feedback you wanted.”
Enterprise version? Companies started banning ChatGPT because of privacy concerns. OpenAI had to act fast or miss a generational opportunity. They rushed out an enterprise tier. Now they have 5 million business subscribers.
None of this was planned. It was just fast reaction to what was happening.
What ChatGPT Actually Is
Everyone thinks ChatGPT is a chatbot.
Nick thinks that’s limiting.
“I think natural language is here to stay,” he said. “But this idea that it has to be a turn-by-turn chat interaction is really limiting.”
His vision? An AI that knows your goals. That has context on your life. That can take action on your behalf — like a smart, empathetic person with a computer working for you.
“Chat feels a little bit like MS-DOS,” he said. “We haven’t built Windows yet. And it will be obvious once we do.”
He wants to build a product where you can start a business directly on ChatGPT. Where the AI can render its own interfaces, not just text. Where you’re not just talking to a bot in a box.
The name “ChatGPT” might not even make sense in a few years.
Why This Matters For You
There’s a lesson here that goes beyond OpenAI.
Nick made every career decision the same way: Find the smartest people, figure out how to work with them. He followed a teaching assistant to Dropbox. He followed product leaders to Instacart. He got “nerd sniped” into OpenAI by reading about DALL-E.
He didn’t try to predict the future. He just stayed curious and moved fast.
His advice to people starting their careers:
“Put yourself around good people and do the things you’re actually passionate about. In a world where this thing can answer any question, asking the right question is very very important. And the only way to learn how to do that is to nurture your own curiosity.”
The people getting insane offers in AI right now? They didn’t optimize for money. They were just genuinely curious about the technology.
The Numbers Today
As of late 2025:
800+ million weekly active users
190 million daily users
5 million business subscribers
$10 billion in annual recurring revenue
6th most visited website in the world
Over 1 billion queries processed per day
All from a hackathon project that almost got shelved.
All from a team that thought they were shipping a research demo.
All from a product manager who started by fixing blinds.
The Philosophy Major Angle
One more thing.
Nick studied philosophy and computer science at Brown. He started as a pure philosophy major, took one coding class because he liked logic, and fell in love with programming.
Now he runs the most important AI product in the world.
“It’s really coming in full circle in a way that I couldn’t have predicted,” he said. “The amount of questions you have to grapple with are truly super interesting. Philosophy teaches you to think things through from scratch and articulate a point of view. That has come in handy numerous times.”
His senior thesis? Whether and why rational people can disagree.
Pretty useful when millions of people have opinions about how your AI should behave.
The One Question
Here’s what I keep thinking about.
ChatGPT wasn’t supposed to be a product. It was supposed to collect data over the holidays so OpenAI could build something “real” later.
But they shipped it anyway. Imperfect. Embarrassing in places. Missing basic features like conversation history.
And it changed everything.
What are you not shipping because it’s not ready?
What use cases are you running away from because they’re risky?
What hackathon project are you dismissing as “not a real product”?
Maybe ship it anyway. See what happens.
That’s what Nick would do.



