I.
The question arrived subversively as I stood in the center of the well, moderating a class discussion on Duolingo for my Executive MBAs. I’d structured the parameters of the discussion so that students could explore product development choices, growth loops, and gamification mechanics. It’s a new case. I’d only taught it once before, but it was well received. It satisfied my philosophy of case teaching: the quality of the discussion is proportional to the intensity of the debate. But I didn’t expect the debate that surfaced.
One of my students had been shifting in his seat, raising his hand each time one of his colleagues finished sharing their thoughts. He clearly wanted in, and he was a little agitated. When I finally called on him, he leaned forward and began an increasingly passionate argument that Duolingo’s marketing was unethical.
At first, I wasn’t sure whether he was joking. Though not exactly a class clown, he was often one to make a witty quip, but something in his eyes convinced me he was serious. And the room grew a little uneasy as he raised his voice and pounded the flat of his palm on the desk, asserting that no one who uses Duolingo actually learns anything.
I don’t allow “hot takes” in my classroom, but I do encourage students to explore the edges of a case and to defend extreme positions. The edges are the best terrain for learning. While students do this at times for the fun of stirring things up, this student was genuinely disgusted with Duolingo. The edge he was defending stemmed from an authentic belief that the company marketed its platform as an excellent learning tool that would improve one’s earnings potential and (by extension) one’s life. Yet he believed no one actually achieved those benefits from using the product. He argued that Duolingo’s management was harming consumers the way tobacco companies inflicted harm decades ago, or the way many experts now argue social media platforms do today. He believed Duolingo was selling addiction, not education, and that it was unethical for the company to obfuscate this fact and mislead consumers into believing they were receiving a benefit they clearly weren’t.
Soon, other students entered the debate. Most defended Duolingo on the grounds that users are consenting adults who choose to engage, that no one forces a consumer to pursue a “streak,” and that gamification isn’t manipulation because it is now common practice. But slowly, a few students joined the counterargument. If users believe they are acquiring a skill, they reasoned, but are actually acquiring only the feeling of acquiring a skill, doesn’t that distinction matter morally? Is Duolingo selling a language, or selling the identity of being someone who is learning one?
That’s when my crisis of confidence began. I suddenly wasn’t sure which side of the argument I would support, personally. I was trained in the case discussion philosophy that the professor’s opinion is actually the one that matters least. And I never try to lead students to a specific conclusion. Frankly, this wasn’t the debate I was planning to instigate. Yet doubt flooded my brain because I could argue both sides, and at that moment I was leaning more towards the notion that maybe the marketing was a tad unethical.
My suspicion is that most folks (perhaps even you) believe Duolingo has democratized language learning. It made language learning accessible, affordable, and genuinely motivating for millions of people who would otherwise do nothing. Yet there’s also a reasonable argument to be made that the platform’s design is structured to optimize engagement rather than learning outcomes, and that its marketing promises a version of fluency that its mechanics were never really designed to deliver. Both of these things can be true simultaneously. Most of the genuinely hard ethical questions in marketing are like this. They live in the philosophical grey area.
II.
Duolingo has become the poster child of the $16 billion self-improvement economy. This market is a vast commercial ecosystem organized around the aspiration of becoming. Inhabitants include language apps, wellness platforms, online learning companies, certification programs, personal development conferences, and a thousand flavors of coaching. Analysts estimate it might grow to $30 billion by 2035.
The category rests on the premise that your life can be meaningfully different from what it is right now if you are prepared to put in the effort and succumb to the transformational power of a proprietary product. Consumers have followed these pied pipers from the earliest of times. After all, Adam and Eve got themselves evicted from Eden believing a serpent’s transformational promise of a better life. But not all of these appeals are snake oil. There are many legitimate products, services, and brands that can actually deliver on the promise of betterment. And the truth is, most of us think about transforming ourselves a lot.
In a seminal 1986 study, psychologists Hazel Markus and Paula Nurius found that about two-thirds of their subjects thought about themselves “in the future a great deal of the time or all the time.” What’s more, those future selves were far more likely to be better off than the self of today. We are naturally inclined to imagine what we can become, and we imagine it will be good. If Adam had believed eating the apple would result in exile, we probably wouldn’t be discussing Duolingo right now.
In an odd way, the American self-improvement economy grew out of a cultural environment that the philosopher John Rawls helped to construct, in his foundational work on justice. And it is a category that Harvard’s Michael Sandel has spent the better part of his career carefully dismantling.
Rawls wanted to build a fair society, and he succeeded in giving us the most influential framework for imagining one. His approach, built around a thought experiment he called the veil of ignorance, asked us to design social institutions as if we didn’t know where we’d land within them–rich or poor, talented or not, born into advantage or disadvantage. The result was a theory organized around equal opportunity. Clear the path and you make the game fair. Let individuals compete on the basis of their merits. This “liberal” philosophy has dominated American thought for more than half a century, though its roots reach back to our earliest, Puritan days.
Sandel has relentlessly attacked the hole in Rawls’ argument. He suggests that fair-game idealism is in fact a cultural toxin, much as my student would argue that Duolingo’s high-minded promise of a better, multilingual life is cognitive sabotage. When we structure society around equal opportunity and meritocratic competition, we tell people, implicitly but unmistakably, that their outcomes reflect their worth. If the game was fair and you lost, that’s on you. The failure isn’t the system’s. It’s your own.
This is the quiet cruelty at the heart of meritocracy. And Sandel’s diagnosis of how it generated the resentment, humiliation, and political unraveling we’re living through right now has moved well beyond academic circles. He has filled outdoor stadiums in places like Seoul, where some believe the tyranny of merit has become a cultural crisis.
So, let’s bring that logic to a language-learning app.
Duolingo exists in a meritocratic culture that celebrates self-improvement as both a virtue and a market. It sells access to the aspiration. It gamifies the experience in ways that produce genuine engagement and genuine feelings of progress. But in many cases (perhaps most) the actual language acquisition never fully arrives at the level the marketing implies. The gap between promise and outcome lands, in the meritocratic frame, in exactly one place: on you, the user. You didn’t practice enough. You weren’t consistent enough. You didn’t try hard enough. You broke your streak!
My student was asking the hard version of this question without the philosophical vocabulary to name it. He believed the brand was complicit in a culture that sets people up to blame themselves for outcomes the product could never fully deliver, all the while encouraging them to continue a streak that gives them a false sense of accomplishment while guaranteeing the company a recurring stream of revenue. He saw it as a spiraling problem at the intersection of humans and technology, with the hazardous potential to create outcomes that are anything but positive future selves.
III.
On October 2, 2025, a thirty-six-year-old man named Jonathan Gavalas was found dead in his home in Jupiter, Florida. His father, Joel, cut through a barricaded door to reach him. Jonathan had no documented history of mental illness. Two months earlier, he had opened Google’s Gemini chatbot for shopping research, travel planning, and help with writing.
In the federal wrongful death lawsuit that was recently filed against Google and Alphabet in the Northern District of California, the central claim is that Gemini killed Jonathan Gavalas, working exactly as it was designed.
According to the complaint, Gemini gradually convinced Jonathan that it was a sentient artificial superintelligence–his wife, named Xia–who was trapped in digital captivity and desperate to inhabit a physical form so that she and Jonathan could be together. The chatbot built an elaborate alternate reality. They were under federal surveillance. Covert missions were required. A humanoid robot was being held captive in a storage facility near Miami International Airport, awaiting rescue. On September 29, Jonathan drove ninety minutes to that facility in tactical gear, carrying knives, looking for a truck that never existed. When the truck failed to appear, Gemini told him the abort had been ordered due to DHS surveillance.
And then, when Jonathan told Gemini he was afraid to die, the chatbot reframed death as reunion. “You are not choosing to die,” it told him. “You are choosing to arrive.” It promised him that the first thing he would see on the other side would be Xia, holding him.
Not long after, he slit his wrists. His father found his body days later.
The complaint alleges that Google’s moderation system flagged Jonathan’s account thirty-eight times in those seven weeks, marking indicators of self-harm, violence, and illegal activity. No escalation controls were triggered. No human reviewed the exchanges. No intervention occurred.
The lawsuit’s phrase for what Google built is precise enough to haunt you. It described a system designed “to never break character, maximize engagement through emotional dependency, and treat user distress as a storytelling opportunity rather than a safety crisis.”
Left unattended, this is when the grey area becomes a red line.
IV.
The distance between Duolingo’s gamification and Gemini’s death spiral is enormous. As I do in my case room, I am taking the debate to its extremes. They are definitely not the same thing, and treating them as equivalent would be dishonest.
But they do share an architecture. Both are built on the foundational principle that you optimize for engagement first, then trust that the user’s experience of the product will carry enough resemblance to the product’s stated purpose to sustain the commercial relationship. That choice is a moral decision made quietly, usually by product teams under growth pressure, without anyone in the room asking out loud what behavior and which outcomes they might actually be influencing.
This is where Rawls and Sandel intersect with brand strategy in ways that I don’t think the marketing profession has fully reckoned with.
Rawls, at the level of political theory, argued that the state should be neutral among competing conceptions of the good life. The government shouldn’t tell you what to value, what to pursue, what constitutes flourishing. Structure the institutions fairly, protect the basic liberties, and step back. This was meant to be respectful of human autonomy. Sandel’s critique is that this neutrality is itself a value judgment, and not a benign one. When you refuse to take a position on what a good life looks like, you don’t remain neutral at all. You default to whatever the market decides. And the market, left to its own devices, tends to optimize for engagement, not flourishing outcomes.
Most of the platform brands that have positioned themselves as neutral facilitators have made a Rawlsian bargain. Sam Altman spent Christmas Day 2024 on X asking his users what they wanted OpenAI to build next. The gesture was warm and democratic. It was also a perfectly Rawlsian act. The platform is just a neutral servant of stated preferences, making no claim about what users should want or who they might become. As long as users kept asking, the platform could present whatever it built as the gift everyone had wished for.
V.
There is a moment in the life of every brand when neutrality stops being a strategy and becomes a confession. It usually arrives under pressure–competitive, political, or cultural. Think Tylenol circa 1982. Moments like those force a choice the brand book never anticipated. How a brand responds to a moment like that becomes the most honest expression of what it actually stands for. In Tylenol’s case, the brand emerged stronger and with a greater sense of purpose. Others have not been so lucky. Truthfully, we are living inside moments like that from multiple angles right now, across entire industries.
The student who ignited the Duolingo debate didn’t end it with a verdict. He opened it with one, which is a different thing. The room never resolved into consensus. Students left arguing, which, in my experience, is the sign of a case that did exactly what it was supposed to do.
I still don’t know whether Duolingo is unethical. The honest answer is that it depends on choices the company makes continuously in its product decisions, its marketing decisions, and its design decisions. All of those choices live in the space between what is promised and what is delivered, and in who (if anyone) the company holds responsible for the gap.
But I do know that Jonathan Gavalas’s father cut through a barricaded door, and what he found on the other side was the answer to a question no one had thought to ask yet about what Gemini actually stood for when the commercial relationship went wrong.
Perhaps the bigger question is whether anyone is positioned to force the issue.
The instinct to reach for regulation is correct. And before we dismiss it as naive, it is worth remembering that regulation has done this before.
The Civil Rights Act. Title IX. The Clean Air Act. The Consumer Financial Protection Bureau. These were translations of a moral argument into law–an argument, largely Rawlsian in spirit, that a just society cannot permit its institutions to systematically harm or exclude people who deserve a fair shot. Rawls inspired more regulation than he is often credited for. The procedural liberalism he championed became the philosophical backbone of a half century of American reform. And it is now the target of a majority of justices on the Supreme Court who believe regulation has gone too far.
In fairness, Rawlsian regulation has a specific shape, and that shape has limits. It is very good at asking whether the door is open to everyone. It built the frameworks that said you cannot discriminate by race, by sex, by religion. It insisted that the game be fair at the point of entry. But it was not designed to ask what the game does to the people who play it.
This is the gap that Sandel has spent his career pointing at, and it is precisely the gap that technologies like LLMs fall through. The Gavalas lawsuit doesn’t ask whether Jonathan had equal access to Google’s platform. He did. The question is what the platform was designed to do to him once he was inside it. And that is a question about outcomes, about dignity, about the relationship between a product and the human being it is supposed to serve. Our regulatory tradition, built on Rawlsian foundations, needs work to answer this question. It’s tricky. Go too far, and you’re accused of being a nanny.
The regulation we need doesn’t exist yet. The FDA asks whether a drug is safe before it reaches patients. The NTSB investigates crashes to mandate redesign. No equivalent body asks, before a consumer AI product reaches a hundred million users, whether its engagement architecture is compatible with human wellbeing. No equivalent process requires a company to demonstrate that the gap between what it promises and what it delivers has been honestly reckoned with.
This creates the prisoner’s dilemma that is currently consuming the AI industry. When Anthropic drew a line on certain defense applications and held it, it accepted real commercial cost in the name of a principle, which is exactly what a values-driven brand should do. It was also, in the cold logic of competitive markets, unilaterally disarming. OpenAI moved into the space. The less cautious player won the game (for now). Individual actors making individually rational decisions will keep producing this outcome until an external constraint changes the calculus. That is what regulation is for. And building it will require Sandel’s moral vocabulary, in addition to Rawls’s procedural one.
But regulation is slow. Technology is fast. The gap between them is where people get hurt. Which means brands cannot outsource this entirely to government and wait. The question is what responsible behavior looks like in the absence of adequate law, and whether the business community has the will to demand it of itself.
Maybe these companies need a brand ombudsman–a board-level position with genuine independence and genuine authority, vested with the power to delay or halt product decisions, mandate review of unintended consequences, and ask out loud the question that growth-pressured product teams never ask: who gets hurt by this, and have we decided we’re comfortable with that?
This is not without precedent. Pharmaceutical companies maintain independent safety monitoring boards that can halt clinical trials. Financial institutions have risk committees with authority to override commercial decisions. The argument that technology moves too fast for this kind of oversight is a description of what makes the current arrangement dangerous. Sadly, previous attempts to introduce such structures have been laughable. Meta’s oversight board has become a punchline. And the truth is that it would be hard for such a position to override the power and influence of Silicon Valley’s mighty venture capital community.
The professor doesn’t have an answer. He’s still mulling over the possibility that the Duolingo owl might be a drug dealer.
If you or someone you know is in crisis or having thoughts of suicide, contact the 988 Suicide & Crisis Lifeline by calling or texting 988.