From aerospace engineering to AI research—why the shift toward fully agentic systems makes human-computer interaction more essential, not less.
I started in aerospace engineering. Structural analysis, thermal systems, the kind of work where physics doesn’t negotiate. But after some time, I realized something: engineering is fundamentally pattern recognition and problem-solving. The domain is almost incidental.
That insight led me to AI and analytics. If the core skill is recognizing patterns and solving problems, why not apply it where the problems are most interesting and the patterns most complex? AI seemed domain-agnostic in a way aerospace never could be.
Then GPT happened.
Aerospace taught me rigorous thinking. Every calculation matters when a mistake means a structure fails catastrophically. But the work itself? Pattern matching. Apply known solutions to known problem types. Optimize within established constraints.
AI and data science felt like the natural evolution. Same fundamental skills, broader application. Analytics across healthcare, finance, manufacturing—the problems change, the approach stays consistent.
Current LLMs are tools. Powerful ones, but tools nonetheless. They need human engineers to prompt them correctly, evaluate their outputs, and integrate their work into systems that actually function. Data scientists still matter. LLM engineers are in demand.
But we’re progressing toward something different.
Here’s what I’ve come to understand: the very skills that made me valuable—pattern recognition, systematic problem-solving—are precisely what LLMs are becoming excellent at.
Fully agentic workflows are coming. Systems where AI handles the entire loop: understanding requirements, generating solutions, testing outcomes, iterating until success. No human in the middle checking each step.
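
To make that loop concrete, here is a minimal sketch of what "no human in the middle" looks like in code. It is purely illustrative: `generate_solution`, `run_tests`, and the `Outcome` type are hypothetical placeholders I'm inventing for this post, not the API of any real agent framework.

```python
# Illustrative sketch of a fully agentic loop: requirements in, iterate until
# the checks pass, no human reviewing intermediate steps.
# All functions here are hypothetical placeholders, not a real library's API.

from dataclasses import dataclass


@dataclass
class Outcome:
    solution: str
    passed: bool
    feedback: str


def generate_solution(requirements: str, feedback: str = "") -> str:
    """Placeholder for an LLM call that proposes (or revises) a solution."""
    return f"solution for: {requirements} (revised with: {feedback or 'none'})"


def run_tests(solution: str) -> tuple[bool, str]:
    """Placeholder for an automated evaluation harness."""
    return True, "all checks passed"  # a real harness would execute real tests


def agentic_loop(requirements: str, max_iterations: int = 5) -> Outcome:
    """Understand requirements, generate, test, and iterate until success."""
    solution, feedback = "", ""
    for _ in range(max_iterations):
        solution = generate_solution(requirements, feedback)
        passed, feedback = run_tests(solution)
        if passed:
            return Outcome(solution, True, feedback)
    return Outcome(solution, False, feedback)


if __name__ == "__main__":
    print(agentic_loop("parse a CSV of sensor readings and flag anomalies"))
```

Nothing in that loop asks a person anything. That absence is the whole point, and the whole problem.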
This isn’t speculation. The trajectory is clear. Current coding agents already handle substantial portions of software development autonomously. The gap between “tool that assists” and “agent that executes” is narrowing fast.
Here’s my conviction: even when AI handles the full technical loop autonomously, humans remain essential at the interaction layer.
Someone has to define what “success” means. Someone has to decide whether the AI’s output actually serves human needs. Someone has to catch when technically correct solutions are practically wrong.
Human-Computer Interaction—or more specifically, Human-AI Interaction—becomes the critical discipline. Not because humans need to do the technical work, but because humans need to direct, evaluate, and ultimately control systems that do.
The questions shift from “how do we build this?” to “how do we ensure AI builds what we actually need?”
This isn’t about job preservation. It’s about recognizing that autonomous technical capability doesn’t eliminate the need for human judgment about what that capability should accomplish.
This is where my research focuses now. Not on making AI more capable—that’s happening regardless. But on understanding how humans and AI systems can interact effectively when AI handles execution and humans provide direction.
The fundamental challenge: designing interaction patterns for systems that are genuinely autonomous but still need human oversight. Different from current human-in-the-loop approaches. Different from traditional HCI where the human operates the system directly.
We need new frameworks for Human-AI collaboration that acknowledge AI’s increasing independence while preserving meaningful human control over outcomes.
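
One way I picture the difference from step-by-step human-in-the-loop review: the human defines the goal and judges the finished outcome, while the agent runs its inner loop unattended. The sketch below is an assumption about what such a pattern might look like, again with invented placeholder functions (`agent_execute`, `human_review`), not a proposal for a specific system.

```python
# Illustrative sketch: the agent executes autonomously, but a human controls
# the *outcome*, not each intermediate step.
# All names here are hypothetical placeholders, not an existing API.

def agent_execute(goal: str) -> str:
    """Placeholder for a fully autonomous agent run (plan, build, test, iterate)."""
    return f"deliverable produced for goal: {goal!r}"


def human_review(deliverable: str) -> tuple[bool, str]:
    """Placeholder for human judgment at the interaction layer:
    deciding whether a technically correct result serves the actual need."""
    approved = input(f"Approve this outcome? [y/N]\n{deliverable}\n> ").lower() == "y"
    return approved, "" if approved else input("What should change?\n> ")


def directed_autonomy(goal: str, max_rounds: int = 3) -> str:
    """Human provides direction and evaluation; the agent handles execution."""
    deliverable = ""
    for _ in range(max_rounds):
        deliverable = agent_execute(goal)
        approved, redirection = human_review(deliverable)
        if approved:
            return deliverable
        goal = f"{goal}; revised direction: {redirection}"
    return deliverable
```

The structure matters more than the code: the human sits outside the agent's inner loop but decides what counts as done.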
The path from aerospace to AI to HCI research wasn’t planned. But each step revealed something the previous one obscured: technical skills get automated, but the interface between human intent and system behavior remains irreducibly human.
That’s where the interesting problems are now. That’s where they’ll stay.