How I Became a Human Factors Engineer and Why It Matters More Than Ever

My parents always assumed I would go to university.

I didn't really want to go at all. I wanted to explore the world—but my mom made it clear: she would help pay if I went right away. So I did.

But I didn't enter university with a clear plan. I enrolled in general studies, sampling psychology, biology, philosophy, communications, and even literature (which quickly ended my dream of becoming a writer).

Everything changed in my final year when I took a class called Human Factors 101.

Suddenly, all those scattered interests connected—human behavior, systems, technology, limitations, and unintended consequences.

I thrived.

My mom saw me light up and encouraged me to explore my newfound passion. I convinced my professor to let me work in his research lab studying the impact of alcohol and mobile phones on driving, just as new technologies were beginning to enter everyday life.

It was one of the best periods of my life.

The World Was Just Beginning to Change

To understand why Human Factors gripped me, you have to understand the time in which I grew up.

I grew up with one landline phone in the kitchen. A television with "bunny ears" and three channels. Cassette tapes. Floppy disks. Walkmans. Atari. Fax machines. VHS.

When I took Human Factors 101, email was still a novelty. Facebook and social media didn't exist. Most people accessed the internet through slow dial-up connections, and mobile phones were bulky, expensive, and rarely used outside of emergencies.

I had a heavy Toshiba laptop. Campus email was new and slow. Professors still used projectors and textbooks. Research meant photocopying journal articles and organizing stacks of paper. Technology was beginning to reshape daily life—but we were still early enough to see the seams.

Adobe Firefly AI image of different technology found in most households in the 1980s and 1990s.

I worked in the CERL (Cognitive Ergonomics Research Laboratory) for a little over a year, studying the impact of alcohol and cognitive distraction on driving. We used a driving simulator to understand how mobile phones, even basic ones, changed behavior behind the wheel. It was cutting edge at the time: we were trying to understand the implications of this technology so that we could mitigate its risks and inform regulation.

The lesson: Technology changes behavior faster than we realize. And humans adapt—sometimes in dangerous ways.

CERL was a research laboratory in the basement of the psychology department at the University of Calgary. We used a driving simulator to study a range of topics, including the impact of cellphones on driving performance.

From Research to Real-World Systems

I went on to study Human Factors within industrial engineering, focusing on collision warning systems and trust in automation—first through a master's degree, then a PhD. My research used real-world data from some of the earliest automated driving systems—many of them imperfect and unreliable.

Example scenarios that set off collision warning system alarms in a Field Operational Test run by the University of Michigan Transportation Research Institute (UMTRI), which gave drivers vehicles equipped with the systems and recorded their data.

These systems relied on sensors and could issue seemingly random warnings. I studied how people trusted them, and how they behaved when the systems proved unreliable.

People would trust the system when it worked most of the time, and then fail to intervene when it failed in rare, critical moments. Or they'd lose trust entirely after too many false alarms and turn the system off completely, losing the genuine safety benefits.

The goal wasn't maximizing trust. It was calibrating it.

That Curiosity Eventually Led Me to Google

As I was finishing my dissertation, I saw an ad for a summer internship at Google. After several rounds of interviews, I was offered a position.

I didn't know it at the time, but that position was on the autonomous vehicle project at Google X, working on some of the first self-driving cars. I took the secrecy so seriously that when my mom came to visit, I had her drop me off at another building, only to run into her again as she stopped to have a cigarette outside my real building.

I worked with pioneers in the field, the people who created the technology and won awards for it. We drove to Carmel to see how the system was working and where it fell short. I ran studies with a golf cart around campus, envisioning a service that could pick people up and drop them off. We worked on how to communicate the system's limits and keep people engaged and ready to intervene when needed.

The sensors weren't perfect. The algorithms weren't perfect. And it turns out that people are very bad at monitoring automation.

The same patterns I'd studied in my dissertation were playing out in real time, but at a scale and speed I hadn't imagined. And this time, the technology was being deployed to millions of people before we fully understood how behavior would shift.

During my internship at Google X, I worked on a side project focused on automating golf carts to help employees move around campus. Source: https://www.youtube.com/watch?v=rOWhu_aa9kM

From Autonomous Vehicles to YouTube to Maps

From that Google X internship, I joined YouTube as a full-time researcher. It was an era of expansion: conversations about integrating Google+, and about moving beyond user-generated content into shows, music, and paid services. The scale was massive, the ambition even larger.

I learned that design decisions at scale shape culture.

Small interface changes influence the behavior of millions, even billions, of people.

After three years, I returned to mobility, joining Google Maps and Android Auto. I worked to bring faster, iterative digital systems into vehicles traditionally governed by decade-long hardware cycles.

We ran global studies. Shaped safety guidelines. Built frameworks. Embedded safety thinking into rapid release environments.

It felt like progress.

From data to design to real roads — building Android Auto and Maps together.

Scaling Human-Centered Systems Across Industries

An unexpected opportunity took my family and me to Bangkok, where we've spent the last decade.

At Agoda, I helped expand the platform beyond hotels, into homes and experiences. I ran international research. Built teams. Scaled operations. Navigated device transitions and ecosystem growth.

Since then, I've worked across healthcare startups, consulting, and global enterprise, most recently leading website content systems across APAC and Europe for more than 150 sites at Electrolux.

The industries changed but the work didn't.

It has always been about how humans interact with complex systems—and how small design decisions create massive ripple effects across products, services, and organizations.

Why I'm Writing Now

The technology I once studied in early prototypes is now everywhere.

The collision warning systems and driver assistance technology I researched in grad school? They're standard in vehicles now: adaptive cruise control, lane detection, forward collision warnings, Android Auto.

The autonomous vehicle challenges I studied at Google X? They're now being widely deployed on public roads.

The behavioral patterns I tracked at YouTube? They've scaled to billions of users across every platform and continue to evolve.

And now we're in the next wave: AI systems that learn, adapt, and shape behavior in real time, often before we understand their impact.

My kids have devices. I watch them navigate systems designed to capture attention, not protect it. Videos pushed to them without limits or censorship. Algorithms optimizing for engagement, not wellbeing.

And I see myself in them. I still struggle with impulse buys from perfectly timed ads. I still find myself scrolling when I meant to check just one thing. These platforms have evolved, and it's not always for our benefit.

The difference is: I know how they work. I've studied the mechanisms. I've helped build some of these systems.

And I've seen what happens when we don't design with human limits in mind.

The Gap Is Growing

As a professional, I see both opportunity and unintended consequence.

Across industries—mobility, media, travel, healthcare, enterprise—one thing has remained constant:

Small design decisions create massive ripple effects.

Technology evolves. Humans adapt. Systems scale. Consequences compound.

I've spent 25 years watching this happen. And here's what worries me: the pace of innovation is accelerating, but human biology isn't changing.

Our attention, memory, stress limits, decision-making, and social needs remain largely the same as they were centuries ago. But the systems we're building (AI, platforms, digital tools) reshape behavior almost instantly, often before we understand their impact.

This growing gap between accelerating systems and human capacity is where complexity emerges and where problems begin.

I've watched the same pattern repeat:

Slow discovery → rapid acceleration → behavior change → unintended consequences → (hopefully) smarter design

Early mobile phones. Collision warning systems. Social media platforms. Autonomous vehicles. And now AI.

Each cycle moves faster than the last. And each time, we're surprised by how behavior shifts in ways we didn't predict.

Why Dam and Flow

Every innovation creates flow: progress, possibility, expansion.

And every system needs dams: constraints, safeguards, friction where necessary.

Human Factors, the field I fell in love with in that university class, has never been more needed.

This blog will explore:

  • How disruptive technologies emerge and reshape behavior

  • The patterns that repeat across different innovations

  • What happens when systems work "most of the time" (and why that's the most dangerous state)

  • How trust forms and breaks—and why calibrating it matters more than maximizing it

  • How organizations can scale systems responsibly—building with human limits in mind

  • How individuals can protect their attention, agency, and wellbeing—navigating systems designed to capture, not serve

I'll share lessons from:

  • Early automation research and trust calibration

  • Studying autonomous vehicles at Google X

  • Watching behavioral patterns at YouTube scale to billions

  • Building safety guidelines for in-vehicle systems

  • Two decades of seeing the same mistakes repeated at increasing speed

And I'll offer practical guidance for:

  • Organizations implementing AI and automation

  • Individuals navigating systems designed to capture attention

  • Parents raising kids on platforms we're still figuring out

Because we don't need to fear innovation. We need to understand how it changes us, anticipate its ripple effects, and shape it intentionally.

Are we designing systems that work with human limits—or against them?

That's the question that matters.

---

My name is Monica. I'm a Human Factors engineer. I care deeply about building systems that make the world better—without ignoring the costs.

And I love solving complex problems.

Let's talk about yours.

---

Foundational Books on Human Factors:

  • The Design of Everyday Things by Don Norman

  • An Introduction to Human Factors Engineering by Christopher D. Wickens et al.

A look at the 80s and 90s

  • https://www.pocket-lint.com/gadgets/news/147958-12-best-1980s-gadgets-that-defined-a-decade/

On Field Operational Tests:

On Google's Self-Driving Car Project: