When I was younger, I tried to doze off a couple of times. I tried everything. Likewise, Aldous Huxley, one fine day, swallowed mescaline in his garden and the world came apart at the veins. I still recall the day: I was 21, and I tried something that made my mind feel broad and fatigue-free. I was thinking about something real, but in a different way. Not in a bad way. More like when a too-tight shirt rips right as you finally take a real breath and your ribs go, oh, thank God.
Many years later I got my hands on The Doors of Perception and learned what had happened to my brain that day. I remember reading that description on a sticky summer night in my early twenties and feeling weirdly called out, like, "oh, so that's what I've been doing to my own head." He'd been a serious intellectual his whole life—novels, essays, politely devastating reviews—and in four hours on a May afternoon in 1953, a cactus derivative showed him his brain had been running the tightest possible operation just to keep him functional. Everything he'd ever perceived was pre-filtered, pre-sorted, sanded down enough that he could walk to the store without losing his mind in the produce aisle.
The reducing valve, he called it. Your brain as the 'do too much' bouncer at the nightclub of reality, letting in only the safest, most convenient bits and telling the rest to come back NEVER. And now our AI race is trying to fill that gap between nerves and vision with a bouncer of its own, one with an attitude problem: a bad bouncer.
Mescaline fired the bouncer. Suddenly the flowers on the table weren't "decor" or "symbol of domestic peace" or any of the stories we paste onto objects. They were just overwhelmingly themselves. The real thing-in-itself. I remember closing the book and looking at the pomegranate tree outside my balcony like, "okay, so what are you hiding?" He even reached for German, like a good Anglo-European intellectual, and wrote about "Istigkeit"—the is-ness of things. That is-ness didn't have a bottom. It just kept going. I looked at the tree longer than usual, to see if there was anything I could see or hear or feel of that us-ness—not just is-ness.
Now it's 2026. I'm sitting in the front seat of the bus on my way to work, someone's half-drunk coffee left next to me, and my phone keeps politely suggesting I use AI to rewrite my emails. All because I was trying to compose one email to my boss. We've got chatbots writing sonnets, solving calculus, and apologizing for "any inconvenience this may have caused." People I know—friends, coworkers—keep asking some version of the same question: do we still have to chew weird jungle plants to expand our consciousness? Can't the robots just do it for us while we stay sober and hydrated and home by 10?
That turns out to be the wrong question. Here's what made Huxley's trip actually matter: nobody could have it except him. That sounds obvious, but sit with it for a second. He spends a whole book trying to explain what happened and keeps running into the same brick wall—language doesn't go there. The place he visited for those hours sits outside words, before words, under words. You can sketch the map of the neighborhood all you want, but you can't upload the being there. It's like trying to describe color to someone born blind, except worse, because the blind person at least knows they're missing something. Most of us don't even suspect there's anything to miss. We're busy arguing about screen time and which notification setting is less stressful. Or counting how many days we've been sober, like it's something filthy.
AI, by design, lives entirely inside language. It doesn't have experiences, it has training data. When it writes about consciousness or grief or the sublime, it's stitching together patterns from a billion human attempts to write about those things. A kind of species-level collage. That sounds like an insult, but it isn't; it's just an accurate description of what it is.
The machine isn't reporting from the territory. It's poring over everyone else's maps and drawing a new, handsome, suspiciously well-formatted one. I can feel that when I use it: it's smart in a sideways, echoing way, not in a "someone is actually home in there" way.
Philip K. Dick stumbled into this knot long before we put chatbots in search bars. Do Androids Dream of Electric Sheep? follows a guy whose job is to retire androids, and the test he uses is supposed to measure empathy—the one thing machines supposedly can't fake. But if you read it, you can feel the book keeps prodding at the same sore spot: how do you know you're not just a very sophisticated machine yourself? Your emotions are neurochemistry. Your memories might be implants. That feeling of having an inner life could be the most elaborate con your meat-brain ever pulled on you. Dick wrote all this while paranoid, wired on amphetamines, and possibly schizophrenic, but he wasn't just ranting. He'd sniffed out something real that still makes my stomach drop a little when I reread him. We comfort ourselves with a bright line between "authentic experience" and "simulation," but the line blurs as soon as you look too closely. Maybe human consciousness is already a sort of AI—a pattern-recognition loop written in slow, wet hardware instead of silicon. It's a tidy metaphor. Too tidy, maybe. And then you take mushrooms and that clever analogy falls apart in about ten minutes.
Terence McKenna worried this bone for decades. He was an ethnobotanist, yes, but also a kind of stand-up mystic: years in the Amazon, eating anything that looked even remotely like it might talk back, then returning to lecture halls to report what the plants had said. His theories wandered into wild territory—timewave zero, novelty theory, self-transforming machine elves made of language—but under all the cosmic giggling was one solid point: psychedelics don't just rearrange your existing furniture. They knock out a wall you didn't know was structural and quietly reveal that the house was always bigger than you thought. I haven't met McKenna's elves, but I've had enough weird evenings staring at trees to know the "bigger house" part is not just a metaphor.
He called it "ontological surprise." Not just new information, but new categories of information. Not just seeing things differently, but realizing there are other ways of seeing that your default mind literally could not imagine from inside itself. It's like discovering you've been playing the video game of real life in windowed mode and suddenly finding the "Fullscreen" button. And then, maybe, realizing there are other monitors you didn't even know were plugged in. So: can AI do that? Can it genuinely blindside us with something we couldn't have thought ourselves into, not just remix the existing playlist?
Ted Chiang pokes at that in the story where a man's intelligence gets dialed up past human norms. It isn't just faster thoughts, it's new kinds of thought—patterns in patterns, causes behind causes, the underlying scaffolding of reality that ordinary consciousness slides right over. At first it feels like an unambiguous upgrade: more doors, more light, more everything. Then he meets another enhanced person and they almost immediately try to destroy each other, because being profoundly smart doesn't make you kind, or wise, or bearable. It just makes you frighteningly efficient at pursuing whatever your previous settings already valued. I think about that whenever people talk about "fixing" the world with intelligence alone. The tools themselves are neutral. We are prone to forget that because it's comforting to blame the hammer for the hole in the wall instead of asking who swung it and why.
When Michael Pollan went deep on psychedelics for How to Change Your Mind, I believe the real amazement for him wasn't the mystical language people used. It was how un-mystical a lot of the underlying neuroscience is. People talk about meeting God or dissolving into cosmic unity, but the observable effect is very specific: psilocybin down-regulates the default mode network. That's the set of brain regions responsible for your sense of being a separate "me" narrating a life over time—the voice in your head that never shuts up. When that network quiets down, even briefly, people describe an almost embarrassing level of relief. Like discovering the backpack full of rocks you've been carrying since childhood is optional. You can just put it down. Or at least rest it on the ground for a while. Their depression lifts, sometimes for months. End-stage cancer patients lose their obsessive terror of death. Alcoholics stop drinking, not because someone yelled at them or they found a new self-improvement app, but because for a few hours the inner voice whispering "you're separate, you're broken, you need more to be enough" finally shut up. If you've ever had that voice quiet down for even a day, you know how big that is. It's not a self-help trick; it's like the floor of the room changing.
AI can't do that. It can comb through your social posts and flag you as high-risk for depression. It can recommend a therapist or spit out a script for a guided meditation in a soothing mid-Atlantic accent. But it can't hand you the felt experience of your ego dissolving, of knowing in your bones that you are a wave in the ocean instead of a lonely drop trying not to evaporate. At best, it can help you schedule the appointment.
So psychedelics win, right?
Not so fast. There's one huge thing they're terrible at: scaling. Huxley's revelation was profound, but it was also stubbornly private. He couldn't export it. He couldn't drag-and-drop the raw file into someone else's head. Even his book is an after-the-fact translation of something he knew would get lost in translation as he wrote it. Every mystic runs into that same bottleneck: you go to the mountaintop, you encounter God or the Void or the nondual field, you come back with a bag of metaphors and an awkward glow. The experience itself stays non-transferable. You can tell your friend about your trip for three hours and they still have to go home in their own skull. Here the Indian rishis have an edge, because whatever they wrote came from first-hand experience: the Vedas and the verses of the Upanishads are more excellent than anything else we have. Some verses are so profound that a single one carries pages of metaphysical explanation. No ordinary mind writes verses like that.
AI doesn't have that particular problem. It can chew through petabytes of climate data, genomic sequences, economic records, and see patterns that exist but are simply larger than any single skull can hold. Yes, in that sense it is god-like. In the future it could be more than god-like. It can surface correlations across decades and continents, and then, crucially, visualise them in ways we can actually act on, if we choose to. That "if" is doing a lot of work. And the honest thing is that we will be happy being human, not being gods, I hope. William Gibson had the outlines of this in Neuromancer back in 1984: cyberspace as consensual hallucination, data experienced as place.
We slid into that future so slowly we didn't notice the weirdness. Your phone quietly knows where you are, who you love, what you doomscroll every day while hiding in the toilet. Your consciousness already leaks into the cloud. You just don't feel the ridges, because the interface has been polished enough. I catch myself reaching for my phone when I'm anxious, like it's a second nervous system that lives in my pocket. I reach for my Kindle when I really need to lose myself in words, and even that machine knows my likes and dislikes in books. None of it is real interaction. And yet, wonderfully, we keep falling for the trap.
The Matrix took the metaphor and turned it into a blunt instrument. Red pill, blue pill: do you want "reality" or comforting illusion? The movies get more interesting once Neo realizes both worlds are systems, both have rules and constraints and blind spots. The question stops being "which one is real?" and becomes "what does 'real' even mean when everything you ever touch is mediated, filtered, constructed?" It's not a teenage dorm poster question anymore; it's just… daily life. And that loops us back to Huxley's reducing valve. Our ordinary consciousness already does what the Matrix does—builds a simplified, manageable interface so we don't get obliterated by raw existence. Your brain fills in the blind spot in your retina. It edits out the weird jump cuts in your perception. It stitches a coherent narrative from scattered sensory fragments. That's not malicious deception, certainly; it's a basic survival strategy. The alternative is being so overwhelmed by the is-ness of everything that you forget to eat, sleep, text back, pay rent.
So maybe the whole "psychedelics versus AI" frame is a dead end. Maybe they're not rivals at all, just tools working on different layers of the same problem. Psychedelics remind you the reducing valve exists. They show you the self is constructed, contingent, kind of flimsy. They hint that reality is weirder and larger than your default setting allows. That reminder matters, because without it you forget you're living inside a simulation you helped build. You mistake the map for the territory so thoroughly that it takes a molecule to pry your fingers off the paper.
AI, meanwhile, extends your reach into the practical, collective world. Pattern recognition at planetary scale. Information processing beyond any one nervous system. A kind of collective intelligence that doesn't require singing om in a circle, just a lot of data centres and some surprisingly fragile supply chains. It's mystical in effect, mundane in infrastructure.
You probably need both. But you need to be very clear on what each one actually does. The danger is confusion: equating AI-style "consciousness"—pattern manipulation in language—with psychedelic consciousness, which is pre-verbal and stubbornly first-person. Or, on the other side, believing that private enlightenment auto-solves public problems. You could absolutely have a society where half the population is micro-dosing and talking about nonduality while we continue to fry the planet, because personal insight doesn't automatically translate into political wisdom. And we could build AI powerful enough to simulate the entire climate system and still deploy it mainly to squeeze a slightly higher click-through rate out of an ad, because tools mostly do what we pay them to do, not what they "should" do. What Huxley glimpsed in that garden wasn't just that mescaline opens doors. It was that we built the doors, and the locks, and most of the walls.
The reducing valve is necessary. You need some kind of filter if you want to survive a trip to the supermarket. Try living with no reduction at all and see how long you last before you're staring at a puddle for six hours, overwhelmed by its shimmering being. But when you forget the valve is a tool—when you start treating the filtered interface as the whole of reality—you're in real trouble.
AI is another tool. More complicated, more opaque, and plugged into more power sockets, but still just an insanely detailed map. Maps don't replace territory. They keep you from wandering into a swamp by accident. Or, at least, they try. Maybe the real art is learning to move between states without getting stuck. Expanded and narrowed. Direct and mediated. Alone in your skull and plugged into the swarm. Huxley took mescaline and saw infinity in a handful of flowers. We spin up models and find patterns in continental-scale chaos. Both are valid. Both are incomplete. Neither, on its own, is a final answer. The door worth learning to use isn't purely chemical or purely digital. It's the one that lets you pick up both tools without vanishing inside either. To know when you need the reducing valve and when you need to crack it open. When you need silicon and when, honestly, you need psilocybin and a trusted friend watching the clock and maybe a bucket nearby just in case.
That's harder than swallowing a pill or typing a clever prompt into a glowing box. But if anything is going to change how we actually live, day to day, it's probably that.


