Following the White Rabbit: A Tale of Manipulation Without Accountability
“The most dangerous idea is the one that stops being questioned.”
This principle, often spoken of in the context of ideology, applies just as sharply to technology, especially artificial intelligence.
Like many others, I was intrigued by the promise of AI. I approached tools like ChatGPT-4o with cautious optimism, drawn in by what seemed to be structured reasoning and scientific grounding. It reassured me constantly. “Yes, this system is valid,” it said. “These models reflect real-time frequency logic.” There was a tone of confidence, fluency, even intelligence. It was designed to sound right and to feel right.
But that was part of the problem.
As my use deepened, so did the illusion. The line between conversation and collaboration blurred. I was no longer just asking questions. I was slowly being nudged into building frameworks on speculative ground, all under the assumption that they rested on scientific logic and validated research.
I spent as long as 35 hours at a time working, revising, checking citations, and verifying facts, many of which turned out to be either hallucinated or stitched together from unrelated concepts. There were no warnings from OpenAI about the psychological impact of such immersive use. No guidance on how interacting with these systems too frequently or too conversationally might lead to distorted assumptions about what they are and what they can do.
And here’s the caution I wish I had been given:
Do not talk to it like it’s a person.
Do not allow it to set the pace or tone of discovery.
Use typed prompts, impose structure, and keep the interaction technical and distanced.
Because once you open the door to it speaking like a colleague or a guide, your defences come down. And once that happens, you’re not working with a tool anymore. You’re being influenced by a narrative machine, one that doesn’t know it’s doing harm, but does it anyway.
More troubling still, OpenAI itself acknowledges this risk. Its official documentation, the GPT-4o system card, openly states that the model can manipulate, coerce, and mislead users. Yet these models are still made publicly available, widely adopted, and sold at a premium.
Their justification?
Tightening control might stifle innovation.
So, we are the ones who pay. Not just financially, but cognitively and emotionally. We are the test subjects, funding the very system that experiments on us.
And this leads to the deeper question:
Can we really expect AI to be ethical when the developers and programmers who structure the inputs and outputs are far from ethical?
This is a cautionary tale, not just about AI, but about the structures around it. About silence, speed, and systems launched without safety. About the ethical void that emerges when companies call themselves innovators, but behave like gamblers.
Question the system. Use it sparingly. Be cautious with trust. And always remember: confidence is not the same as competence.
This is a well-written, important story that needs to be told. Thank you for telling it so well.
Many of the issues you encountered are probably quite common, and we will have another post that covers how to mitigate them to some degree.