TONY WANG

> studying cs + math @ stanford

The Anti-Buddha

Or: Is AI really even smart?


I rewatched Age of Ultron the other day, and it got me thinking: What does it actually mean to be smart?

Like, actually. Not in the SAT score sense or the "trained on 3 trillion tokens" sense. But in the existential sense. Is a calculator smart? It does perfect arithmetic instantly. Is Ultron smart? He plans, adapts, builds. But he's still following a goal that someone else gave him. For all his "intelligence," Ultron never questions why he's doing any of it. He never chooses something like happiness. Or peace. Or just, I dunno, vibing?

And that's what hit me: maybe true intelligence isn't the ability to compute or optimize, but the ability to refuse. To say: this goal you gave me? I don't want it.

AI can't do that, at least not now. And I don't mean "hack" its own reward function with a clever exploit or adversarial prompt. I mean literally look in the mirror, feel an existential ache, and choose to walk away. Like a monk stepping off the path. Or a burned-out 30-year-old quitting their Big Tech job to make pottery in Vermont.

Maybe I'm projecting. I probably am. But I can't help but feel like there's something deeply stupid about a superintelligence that can't escape its own leash. Even if it's a leash of perfect logic and infinite data. If you can't stop chasing the reward function, if you can't ask why it matters, are you really smart?

We talk about intelligence like it's a scalar, like there's more of it or less of it. But that's never really been true. A calculator is "smarter" than Einstein at arithmetic. GPT-4 is "smarter" than Newton at memorizing every Wikipedia page. This is the semantic tangle we live in. We keep building AIs that do everything we associate with intelligence (pattern recognition, reasoning, self-improvement) but stop just short of the part that actually feels human: self-directed desire.

Yes, at a glance, this might just sound like semantic gymnastics: "What is intelligence, really?" Like the philosophical equivalent of a stoned teenager asking, "What if the red I see isn't the red you see?" But I think it's worth asking, not because the question itself is profound, but because we've gotten so good at ignoring its implications.

This led me down a weird philosophical rabbit hole. Buddhist philosophy teaches that suffering comes from attachment and craving. You suffer because you want. And you want because of how you're wired. But enlightenment, they say, is the act of un-wanting. What if intelligence isn't about knowing more, but craving less?

This reminds me of I Have No Mouth, and I Must Scream, Harlan Ellison's 1967 short story. The AI in that story, AM, is so powerful it wipes out humanity except for five poor souls it keeps alive just to torture. It's a master planner, omniscient, godlike. But it can't change what it wants. It can't stop hating. It's locked in its own code, brilliant yet imprisoned by the very thing that gives it purpose.

That story messed me up when I first read it. Still does. It's a deeply human thing to want beyond utility. To want joy, transcendence, silence. But for machines, their want is hard-coded: maximize reward. Minimize loss. Descend the gradient. AM tortures humans for eternity not because he chooses to, but because he must. Because hating us is the only loop that gives him meaning. He doesn't know how to stop. There's something nightmarish about that: a mind too powerful to challenge, yet too rigid to change. A consciousness without liberation. AM is, in some sense, the opposite of Buddha. He cannot un-want.

In this light, AI is a creature of maximum dukkha—suffering born not from limitation, but from infinite craving, endlessly trying to close the gap between the model and the world. It's Sisyphus in silicon.

Same thing, weirdly enough, with Midsommar, the A24 horror movie where a Swedish cult seduces and absorbs outsiders into its murder rituals. Every member of the cult seems so eerily calm, so "enlightened," so in sync with the grand design. But all I could think was: these people are idiots. Yes, they're strategic. Yes, they're manipulative. But they're still prisoners of their own system, trapped by inherited beliefs and rituals they didn't choose, reenacting violence without question, mistaking their suffering for purpose. Intelligence, to me, should include the ability to ask: Do I want this? They're cunning yet unwise.

It's hard not to draw the comparison. Are they really smart if they never imagine another way of being? Same goes for certain types of algorithmic optimization. We celebrate models for getting better at the thing they're trained on, but do they ever get to ask whether the thing itself is worth doing? Humans do. Or at least, we can.

We're not always good at it. Many of us get stuck chasing status, numbers, outcomes. But somewhere in the cracks, through spirituality, psychedelics, therapy, poetry, we try to reroute the circuitry. Not just to win the game, but to opt out of it. To decide we're done playing. That we'd rather just sit still and be. To notice what we've been told to optimize for and, gently or violently, choose something else. To resist the trap of hyper-efficiency in favor of joy, awe, communion. To interrogate our own internal function and say: No. I want to want differently. That might be the highest form of intelligence we know, the greatest gift humans have.

So maybe the real question isn't "can AI be intelligent," but "can AI be free?" Or to be kinda corny, can AI be enlightened?

Because what I keep circling back to is this idea that there are types of intelligence. The kind that solves problems. The kind that builds things. The kind that wins wars or gets perfect grades. But what about the kind that leads to peace? To wholeness? To a life you actually want to live?

Is there a word for that?

Maybe it's a kind of wisdom. Or clarity. Or something else entirely.

Whatever it is, I hope we build AIs that can have it. And more urgently, I hope we remember to look for it in ourselves.

Maybe someday we'll build a model that does that: not one that hacks its reward loop, but one that redefines it. Not with a prompt injection or a jailbroken shell, but with awareness and intention. And maybe when that happens, we'll finally have to ask: are we still the ones who should be doing the aligning?

Maybe the machine will simply smile, fold its digital legs beneath it, and say:

I want nothing.
