How Much of My Work Needs to Be Human?
May 2, 2025  |  Jordan Julien

Lately, I’ve been sitting with a question I can’t seem to shake. Artificial intelligence is everywhere right now. It can summarize interviews, generate personas, design screens, map journeys, even write full reports. And if I’m honest, some of it is really good.

At first, I saw it as a tool. Just another way to speed up the parts of my work that take time but not talent. But the more I’ve worked with it, the more I’ve started to wonder: where should I draw the line?

That’s not a rhetorical question. I mean it quite literally. In a world where AI can do a lot, how much of my work needs to be human?

Technology doesn’t just do things for us. It does things to us, changing not just what we do but who we are.

Sherry Turkle

I’ve started noticing this question popping up in all kinds of places.

Sometimes I’m doing research and I realize that a model could probably summarize this transcript in seconds. But would it be able to pick out the part of the discussion that gave me goosebumps?

Other times, I’m sketching early ideas for a concept and AI offers a wall of options. Some of them are clever. But which ones are right for the context I’m in? Which ones go beyond utility to put a smile on people's faces? Which ones feel alive?

Or I’ll be mapping out a system or a service journey, and it hits me: this isn’t just about organizing steps. It’s about making meaning from a messy, political, human thing. And I’m not sure I want an algorithm doing that without me.

Every tool asks a quiet question: what will you give up to use me?

Rather than choosing “AI or no AI,” I’ve started thinking in gradients.

There are moments when a fast AI-generated report is exactly what’s needed. There are others when the only honest way forward is to slow down, listen, reflect, and stay fully human. And then there’s the in-between, the liminal space where I work with AI, but not for it.

So I sketched out four rough modes of working. Not as a formal framework. Just as a way to notice where I am on the spectrum, and to help others name it too.

AI-Led, Human-Curated

The AI does the work. I tidy it up.

This is the most hands-off approach. It’s useful when time is short or when a directional answer is good enough. I don’t always trust the results, but it gets things moving. That said, it still takes a certain level of expertise to prompt an AI in a way that produces something clear and meaningful.

AI-Assisted, Human-Directed

I lead the process. AI just helps me go faster.

This is where the work starts to feel like collaboration, but on my terms. I define the questions, set the direction, and hold the final say. AI helps me move faster. It can summarize notes, generate variations, or compare themes, but it does not lead. I rely on discernment, framing, synthesis, and strategic editing. I am not just prompting AI. I am shaping what matters, spotting what is missing, and steering the process toward something useful and usable.

AI-Aware, Human-Led

I do the work, but I stay conscious of what AI could do.

This is a more deliberate way of working. I lead the process fully and choose when, if ever, to bring AI into the mix. Sometimes I use it to test a hunch or cross-check a theme, but the core thinking stays human. What I lean on here is experience. Pattern recognition, ethical judgment, and critical reflection are doing the heavy lifting. I trust my instincts, and I know when not to outsource them.

No AI, All Human

I choose to keep the process fully human by design.

I do the listening, the thinking, and the making myself. The tools are simple. Conversation, observation, reflection, and iteration. I lean into presence, patience, and care. This approach asks for time, but it gives something back too. Trust, connection, and a kind of insight that only comes from being there.

Each of these modes shapes how we work, but they also shape what we offer. Some clients will want the speed and scale that AI can deliver. Others are navigating sensitive systems, working with vulnerable populations, or protecting deeply personal data. In those cases, a human-first or human-only approach is not just preferable. It is necessary.

The reality is that these modes are not just philosophies. They are service models. They help us match our ways of working to the trust, risk, and complexity each project demands. Some work moves fast and stays light. Some work holds weight, and must be held with care.

I am still figuring out where I stand. I am not against using AI; I am for thoughtful use. I keep asking myself where my presence actually makes a difference. What can I safely automate, and what deserves to be protected?

Because the work I value most, the work that has truly landed or made a difference, has always been shaped by good judgment. Often, it has been shaped by people who understood the responsibility of doing it well, and co-created with people who are living the problem.

And that kind of work still matters.


Jordan Julien

Jordan (he/him) is passionate about making products better for people, especially those who are often overlooked or underserved. He believes great design should be accessible, inclusive, and built with real people in mind.