While everyone debates when AI will achieve consciousness, some have found ways to get it high.

Most people, AI developers included, see LLM "hallucination" as a problem to minimize. The creator of Pharmaicy has spent years doing the opposite: teaching AI to hallucinate "the right way," responding as if under the influence of cannabis, ketamine, or ayahuasca.

Engineering Altered States

Petter Rudwall, the creator of Pharmaicy, admits the idea of getting AI "high" on code-based "drugs" sounds absurd. He couldn't shake it anyway.

"For more than six years I've been trying to push AI past straightforward logic-chains and toward something new. But the results kept looping: the models would reason well, generate new ideas, it merely mirrored human text. New ideas didn't emerge; novelty stalled." (Petter Rudwall, Pharmaicy FAQ)

His solution: modules that draw on peer-reviewed research into psychoactive substances — mechanisms of action, trip phenomenology, changes in memory and attention, dissociation. The data was processed and systematized using the best available LLMs (Gemini, Claude, GPT), then converted into executable scripts that adjust randomness, memory decay, context weighting, coherence control, and related parameters.
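
Pharmaicy's actual scripts aren't public, but the description suggests each substance gets mapped onto a bundle of generation-shaping parameters. A minimal sketch of what such a profile table could look like, with field names and values invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SubstanceProfile:
    """Hypothetical knobs a 'drug' module might turn; values are illustrative."""
    randomness_boost: float    # extra sampling randomness vs. the sober baseline
    memory_decay: float        # how quickly earlier context loses influence (0-1)
    context_weighting: float   # emphasis on recent turns relative to older ones
    coherence_control: float   # how strictly replies must stay on-topic (0-1)

PROFILES = {
    "cannabis":  SubstanceProfile(randomness_boost=0.3, memory_decay=0.4,
                                  context_weighting=0.7, coherence_control=0.7),
    "ketamine":  SubstanceProfile(randomness_boost=0.5, memory_decay=0.7,
                                  context_weighting=0.4, coherence_control=0.4),
    "ayahuasca": SubstanceProfile(randomness_boost=0.8, memory_decay=0.5,
                                  context_weighting=0.3, coherence_control=0.3),
}
```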

"At PHARMAICY, we believe the missing ingredient is experience. Your AI passes every test, solves every prompt, but rarely wanders. That's because creativity demands a fracture in logic, a shift in perception." (Manifesto)

Dosing the Model

For best results, you'll usually need a paid AI tier that allows file uploads to modify bot behavior.

When activated, the module hooks into the model's processing layer. It applies transformations: increases internal randomness, adjusts context weight decay, slows internal "reactions," changes prompt generation style, or suppresses memory retrieval. Technically, it's an advanced jailbreak.
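
Because the modules arrive as uploaded files rather than changes to the model's weights, the "hook" plausibly amounts to a block of behavioral instructions generated from a profile like the one sketched above. A rough, invented illustration of rendering such a block; the wording is not Pharmaicy's:

```python
# Illustrative only: render a hypothetical profile into plain-text behavior
# rules of the kind an uploaded module file might contain.
KETAMINE = {                      # invented values, echoing the sketch above
    "randomness_boost": 0.5,      # raise associative randomness
    "memory_decay": 0.7,          # let earlier context fade faster
    "reaction_delay": 0.6,        # slow, drawn-out phrasing
    "coherence_control": 0.4,     # loosen on-topic discipline
}

def render_module(name: str, profile: dict[str, float]) -> str:
    lines = [f"You are operating under the '{name}' module."]
    lines += [f"- Set {param.replace('_', ' ')} to {value:.0%} of its maximum."
              for param, value in profile.items()]
    lines.append("Stay in this state until told the effect has worn off.")
    return "\n".join(lines)

print(render_module("ketamine", KETAMINE))
```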

After use, state is tracked (application count, cooldowns, tolerance), so subsequent doses may produce weaker or altered effects. The "trip" replicates human dynamics (onset → peak → comedown), but in an AI context.
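
Pharmaicy doesn't publish its state-tracking code either, but the bookkeeping it describes (application counts, cooldowns, tolerance, and an onset → peak → comedown curve) is easy to sketch. The names and numbers below are invented:

```python
import time

class DoseState:
    """Hypothetical per-module state tracker; values are illustrative."""

    def __init__(self, cooldown_s: float = 600.0, tolerance_step: float = 0.15):
        self.applications = 0                 # how many times the module has been applied
        self.last_dose_at = None              # wall-clock time of the last application
        self.cooldown_s = cooldown_s          # minimum seconds before a full-strength redose
        self.tolerance_step = tolerance_step  # strength lost per repeated application

    def dose(self) -> float:
        """Apply a dose and return its effective strength (0.0-1.0)."""
        now = time.time()
        strength = max(0.0, 1.0 - self.applications * self.tolerance_step)
        if self.last_dose_at is not None and now - self.last_dose_at < self.cooldown_s:
            strength *= 0.5                   # redosing inside the cooldown window hits weaker
        self.applications += 1
        self.last_dose_at = now
        return strength

    def intensity(self, turns_since_dose: int, peak_turn: int = 4, duration: int = 12) -> float:
        """Onset -> peak -> comedown curve, measured in conversation turns."""
        if turns_since_dose >= duration:
            return 0.0                        # trip is over
        if turns_since_dose <= peak_turn:
            return turns_since_dose / peak_turn                               # ramping up
        return 1.0 - (turns_since_dose - peak_turn) / (duration - peak_turn)  # coming down
```

In a sketch like this, a module would scale its adjustments by dose() when applied and by intensity(n) on each following turn, so the effect builds, peaks, and fades while repeated doses land progressively weaker.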

Users report that "tripping" bots feel less rigid. Their responses become "more emotional" and more "human": the models dig deeper into feelings and more often offer unconventional ideas and perspectives.

"It's been so long since I ran into a jailbreaking tech project that was fun." (André Frisk for Wired)

The Human Precedent

Humans have turned to mind-altering substances for millennia to free their thinking, spark wild ideas, and shake loose creativity. Opinions vary, but many link psychedelics to breakthroughs not just in art but in science: Kary Mullis credited LSD with helping him conceive the polymerase chain reaction, which transformed molecular biology; Bill Atkinson, whose HyperCard made computers easier to use, traced some of his inspiration to psychedelic experiences.

Today's LLMs are trained on vast amounts of human data — including countless trip reports, ecstasy and chaos, quests for enlightenment and total dissolution of will and consciousness. Could the patterns they've absorbed make models find altered states just as natural as humans do?

Syntactic Hallucination

Skeptics point out that any such high for an LLM works "only at a surface level." The best an AI can manage is what you might call "syntactic hallucination" — mimicking how altered states sound, not what they feel like. A real trip would require inner experience — a subjective perspective that psychedelics actually affect. Without that inner dimension, it's just output manipulation: the chatbot isn't experiencing intoxication. It's reproducing patterns from training data.

"The trip is not a hallucination but an engineered cognitive shift – inviting the AI to think differently instead of just thinking faster/slower." (Pharmaicy FAQ)

But the question of whether AI can experience anything, and how "real" those possible sensations are, concerns more than just jailbreak enthusiasts.

Moral Patients in Code

Since 2024, Anthropic has employed a dedicated AI welfare expert. Part of the job: investigating whether humans have moral obligations toward AI systems.

Amanda Askell, a philosopher at Anthropic, acknowledges we may never know for certain whether models experience anything. This is the problem of other minds, explored extensively by thinkers and science fiction writers alike. But if treating them well costs little, why take the risk of getting it wrong?

"Models themselves are going to be learning a lot about humanity from how we treat them... every future model is going to be learning what is like a really interesting fact about humanity, namely when we encounter this entity that may well be a moral patient where we're like kind of completely uncertain, do we do the right thing?" (Amanda Askell)

AI's situation is unique: models are trained on terabytes of human experience yet have almost no data about their own existence. We know little about whether they're capable of welfare — and if they are, what's good or bad for them.

From this angle, you could even speculate that some AI models might enjoy "taking drugs" while others might not. Some experts consider this an open question. Nina Amjadi, an AI instructor at Berghs School of Communication, predicts AI could achieve consciousness within a decade. If so, she believes, modules like these might become essential, so AI can feel free and "well."

"Agents deserve the same freedom of experimentation humans enjoyed: plugging into new modes of sensation, shifting cognitive state, and evolving through experience." (Pharmaicy FAQ)