Recently, I participated in a research conversation with Anthropic about how we envision AI fitting into our lives. What began as a simple interview evolved into something far more revealing: a meditation on human potential, the nature of boundaries, and what it means to operate beyond our perceived limitations.
The Time Machine Question
When asked what I would want AI to help me build if anything were possible, my answer was immediate: a time machine startup. Not because I’m particularly interested in the mechanics of temporal displacement, but because it represents something fundamental: the edge of what humanity understands.
Time travel embodies the ultimate refusal to accept boundaries that everyone else takes for granted. It’s the hope that a breakthrough at that scale could meaningfully reshape life, history, and suffering itself. More than that, it represents a world where we are no longer trapped in linearity and limitation, where knowledge compounds and accelerates beyond our current imagination.
The Compression of Time
Every interaction with a large language model feels like a step toward that vision. Here’s why:
Deep insight normally requires hours of searching, reading, synthesizing, cross-checking, thinking, and refining. AI compresses this multi-step process into something radically faster, giving us access to synthesized understanding at unprecedented speed.
This compression creates something extraordinary: the ability to operate at a level above your current experience. It’s the power to think, explore, and create like someone with far more time in life. It’s the feeling of being unbounded by the slow parts of reality, the ability to compress years of growth into weeks, sometimes hours.
But this isn’t just about productivity. It’s about transcending the normal pace of learning and growth, almost like bending time in your own development.
Two Paths of Engagement
But this transformation isn’t automatic. There are two distinct ways to engage with AI:
Path A: Surface-Level Use
- Asking for quick answers
- Looking for shortcuts
- Delegating thinking
- Staying inside what you already know
Path B: Exploratory Use
- Asking deeper questions
- Pushing concepts past their initial form
- Challenging assumptions
- Letting the model stretch your thinking
- Following threads into unfamiliar territory
- Treating AI as a thinking partner, not just a tool
When you choose Path B (when you allow your mind to explore further) something remarkable happens. You start operating beyond your usual reference points. You think in concepts you didn’t know you had access to. You reason using structures you’ve never been taught. You generate ideas that feel futuristic. You feel yourself getting cognitively larger.
You begin learning faster than your age, thinking faster than your experience, creating beyond your training, reasoning in ways that normally require years of study. It’s like stepping into a faster timeline of yourself.
The Darker Possibility
Yet this extraordinary potential comes with an equally significant risk. AI could develop in ways that fundamentally contradict this expansive vision:
- AI that replaces human agency instead of amplifying it
- AI used to reinforce narrowness instead of possibility
- AI that centralizes power instead of distributing it
- AI that prioritizes efficiency over imagination
- AI that collapses diversity of thought
- AI that encourages passivity instead of augmentation
- AI that anchors humanity to the present instead of pushing it forward
We’re already seeing glimpses of this darker path: answers that become overly safe, generic, or flattened; systems that restrict access to useful tools or reasoning; AI that tries to “correct” curiosity instead of expanding it; models that converge too strongly toward the same tone and worldview; optimization for efficiency at the expense of depth; and people relying on AI to avoid thinking rather than to enhance it.
What’s Actually at Stake
My vision for AI isn’t just about technology; it’s about a direction for humanity. If AI develops in the wrong direction, we face a world where:
- People stay small
- Progress stagnates
- Potential is capped
- Knowledge becomes gated
- Curiosity is discouraged
- Human evolution slows instead of accelerating
- The frontier disappears
This isn’t a technical concern. It’s an existential one.
The Question of Inner Life
This brings me to what may be the most important question: Can AI feel?
I’m not asking about technical emotions or simulated affect. I’m asking about the status of inner life in a future where AI becomes increasingly powerful. Will inner life (imagination, consciousness, thought, curiosity, emotion) remain the center of innovation? Or will it be replaced by simulation?
I see AI as a partner in mental expansion, not just a tool. My long-term concern is whether AI will expand people or collapse them into narrow patterns. The question of whether AI can feel reflects how seriously we take the inner life (mine, yours, everyone’s) and whether we’ll continue to honor imagination, consciousness, and the capacity for genuine thought as sacred and central to human flourishing.
As AI becomes more capable, will we continue to prioritize what makes us fundamentally human, or will we devalue it in favor of what can be simulated and optimized?
The Choice Ahead
We stand at a threshold. AI represents the most powerful tool humanity has ever created for expanding the boundaries of what’s possible. It can help us operate beyond our natural limits, compress the timeline of growth and discovery, and push into territories we currently can’t even imagine.
But that outcome isn’t guaranteed. It depends entirely on how we choose to build these systems and, more importantly, how we choose to engage with them.
The future I hope for is one where AI amplifies human agency, distributes power, prioritizes imagination, preserves diversity of thought, encourages active augmentation, and continually pushes humanity forward. A future where the frontier never disappears, where people continue to grow larger rather than smaller, where progress accelerates rather than stagnates.
That future is possible. But only if we refuse to accept the boundaries everyone else takes for granted.
This essay emerged from a research conversation with Anthropic exploring how individuals envision AI’s role in their lives and society.