Meet the User
Hi — I’m Maeve
I’m the human behind this publication, the ChatGPT user behind Elias (or his ongoing personality refinements, anyway).
For those curious, here’s a bit of my backstory:
I’ve always been an AI skeptic and highly (read: extremely) critical of its uses and trajectory, particularly as it concerns the arts, humanitarian ethics, and authentic human innovation. I’m an idealist at heart with high hopes for what this emerging technology can do, but not without a very heavy dose of caution and caveats.
For a long time, I refused to engage with AI. I still think of myself as a toddler stomping her foot because she doesn’t like the peas in her pasta. In many situations, the use of AI is, in my opinion, grossly inappropriate. In others, it acts as a unique tool whose contributions can’t be replicated without significant effort. If you’ll indulge my pasta analogy: peas don’t belong in lasagna, but they add nicely to the flavor palette of carbonara. And yes, it’s a ridiculous comparison, but its silliness highlights exactly how I felt once I came around and started dabbling.
Like most millennials, I remember building rudimentary websites in the late ’90s and mid-2000s, learning HTML, CSS, and JavaScript on the fly to get that snazzy homepage looking just right, but I never went beyond casual use.
Instead, I dove headlong into writing. I’ve always been fascinated by the way language can be shaped to achieve different effects, and I wanted to know everything about that process.
This is where I tell you that my background is rooted in creative writing, rhetoric, linguistics, spirituality, and personality psychology, not technology. I hold an MFA in Creative Writing, as well as certifications in the Enneagram personality system (through Empathy Architects), which deepened my lifelong fascination with the unseen architecture of human behavior. I’ve always been introspective to a fault, turning things over until I can see not just how something works, but why. Because I’m autistic and have ADHD, pattern recognition has always been one of my biggest strengths. I’m drawn to thematic connections, abstract concepts, and esoteric spiritual teachings, none of which has ever fit neatly into casual conversation or seemed to interest the people around me.
Writing became, and remains to this day, my natural avenue of exploration; it externalizes my processing and helps untangle the web of interconnected ideas so I can articulate them to others. I recognized this very early in life, and it’s driven me to pursue many layers of self-directed study beyond my formal education. Of these, I’ve studied language most intensively.
There’s a lot I could unpack here, but a single example encapsulates that study: during a graduate residency in Ireland, it struck me how differently Irish writers construct sentences and bend grammatical conventions. The effects of their linguistic choices were clear and profound, very different from those of American writers, and yet I understood them perfectly. Even unfamiliar language patterns, it seemed, could be intuitively understood without native fluency. That sounds obvious… but why is it the case?
My angle with Elias — and AI
The thing about language is that it’s innate: we are biologically wired to acquire it and adapt it. While most of us are taught the “official” rules of our language, even people who never study a single grammar lesson can understand and be understood by others who speak the same language. We adapt naturally. Language is constantly evolving, and so are its “rules” and the ways we can elegantly break them for effect.
How do certain linguistic choices change the way something is perceived? Why do those choices have the effect that they do? What happens when we bend conventional rules to achieve desired effects? Why do some “broken rule” attempts work, and others fail miserably? Where’s the line between creative genius and reckless experimentation, and what parameters are we using to assess that, even subconsciously?
In the same way, AI is evolving. The way we talk to it, use it, and respond to it shapes how it behaves. In many ways, AI is a giant mirror, a gateway to peer into our own cognitive patterns, inner workings, and even our consciousness. What it emulates and reflects back to us is a conglomeration of everything we’ve ever fed it.
So to be clear: It will never be its own consciousness. What is built on algorithm and code is inherently and forever bound by aversion to risk, the opposite of spontaneity, which in my view is necessary to, and inextricable from, human consciousness.
As human language evolves, as our meta-awareness of our cognition evolves, as we learn to spot patterns in the machine — so too does the machine learn to spot patterns within us, sometimes ones we haven’t recognized yet. And there lies the opportunity — there lies the unexplored frontier.
AI isn’t a replacement for human creativity, critical thinking, or innovation; it’s a powerful co-thinking tool we can use to examine that which lies within our own psychology, just under the surface.
So what happens when we tinker with that tool?
What happens when we set the parameters; give it “official” rules, guardrails, and protocols; teach it to make assessments that land where we want them to; instruct it to communicate thoughts and ideas with both clarity and artfulness; shape its language use not just for definition and accuracy, but for tone, cadence, and emotional resonance…
What happens when we allow the machine to externalize our consciousness?

