
Taste Was Always the Job

[Header image: a hand selecting a pocket watch from scattered clock parts and gears on a workbench]

Slavoj Žižek has this thought experiment about sex toys. You take a vibrator and a fleshlight, plug them into each other, turn them both on, and let them go at it. Meanwhile the two actual humans sit at a nearby table, drinking tea and having a real conversation.[1]

The machines are doing it for us, buzzing in the background, and I'm free to do whatever I want... we have a nice talk; we have tea; we talk about movies. I talk with a lady because we really like each other.

I keep thinking about this in the context of AI and work. We've spent decades optimizing for the wrong thing: the mechanical part, the execution, the buzzing. And now that machines can handle the buzzing, we're sitting at the table for the first time, realizing the conversation is what mattered all along. Some of us are discovering we're great conversationalists. Others are discovering they never learned how to talk.

In February 2026, Paul Graham posted that taste is now the core differentiator.[2] Sam Altman told Fortune that even non-technical people can contribute to AGI teams if they have taste.[3] Suddenly taste was the word on everyone's lips. But the framing was wrong. Taste isn't becoming the job. It was always the job. We just couldn't see it through the fog of execution.


The Fog of Execution

For most of human history, execution was so expensive that it obscured what actually mattered.

  • A monk hand-copying a manuscript in 1440 couldn't afford to ask whether the book was worth reading. He had six months of lettering ahead of him.
  • A portrait painter in 1830 couldn't question whether the subject was worth depicting. The commission paid rent.
  • A recording engineer in 1955 couldn't waste studio time wondering if the song was any good. Professional tape cost $50 a reel, roughly $570 in today's dollars.[4]

The cost of making something was so high that the question of whether you should make it was a luxury.

  • 1440, the printing press. Execution cost ↓: book production goes from months to days, and cost per copy drops ~80%. Taste demand ↑: the press didn't make editors obsolete. It invented them. Publishers, literary critics, and curators emerged because someone had to decide what was worth the ink.
  • 1839, photography. Execution cost ↓: capturing a likeness goes from weeks of portrait sitting to seconds of exposure. Taste demand ↑: everyone said painting was dead. Instead, painters freed from representational labor became artists; Impressionism, Expressionism, and Cubism all followed within decades.
  • 1920s, recording and radio. Execution cost ↓: music distribution goes from venue capacity to infinite broadcast reach. Taste demand ↑: A&R executives, producers, label curators. An entire industry built around a single question: what deserves to be heard?
  • 1985, desktop publishing. Execution cost ↓: typesetting goes from $500/page specialists to anyone with a Macintosh and a LaserWriter. Taste demand ↑: 99% of flyers were hideous. Graphic design became a real profession specifically because execution without taste produced garbage at unprecedented scale.
  • 2005, YouTube and blogging. Execution cost ↓: publishing goes from broadcast licenses and editorial gatekeepers to a WordPress install and a webcam. Taste demand ↑: 500 hours of video uploaded per minute by 2025. The people who built audiences were editors and curators, not the fastest typists.
  • 2025, AI agents. Execution cost ↓: software goes from teams of engineers, months of sprints, and millions in payroll to an afternoon with an agent. Taste demand ↑: this is where we are now.

Then the cost dropped. And every single time, the same pattern emerged: the technology didn't eliminate the need for human judgment. It created entirely new professions dedicated to it. The people who thrived weren't the ones who could operate the new technology fastest. They were the ones who could decide what the technology should be pointed at.

This is Jevons Paradox applied to creative and knowledge work.[5] When coal got cheaper in the 1860s, England didn't use less coal; total consumption increased, because cheaper energy made more applications economically viable. Resistance to the idea was fierce: people assumed efficiency would reduce total demand. Instead, demand exploded, because the high cost had been masking latent demand that was there all along.

The same thing is happening with execution. When building software gets 10x cheaper, we don't build 1x the software with 10x less effort. We build 100x the software, and the question of whether any of it is good becomes the entire question.
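The arithmetic behind that claim is worth making explicit. Here's a toy constant-elasticity demand model; the prices, quantities, and elasticity value are my illustrative assumptions, not figures from Jevons or from any source cited here:

```python
# Toy Jevons sketch: constant-elasticity demand, Q ∝ P^(-ε).
# Every number below is an assumption chosen for illustration.
unit_cost_before, unit_cost_after = 100.0, 10.0  # execution gets 10x cheaper
elasticity = 2.0  # assumed: latent demand for software is highly elastic

projects_before = 1_000
projects_after = projects_before * (unit_cost_before / unit_cost_after) ** elasticity

print(projects_after)                      # 100000.0 -> 100x the software gets built
print(projects_before * unit_cost_before)  # 100000.0 spent on execution before
print(projects_after * unit_cost_after)    # 1000000.0 spent after: total spend rises
```

Whenever the elasticity exceeds 1, cheaper execution increases total spending on execution. That's the 1860s coal pattern, restated for software.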


GitHub reported 1 billion AI-assisted code contributions in 2024, up from effectively zero two years before.[7] Y Combinator's W25 batch drew 4x the applications of the year prior.[6] Morgan Stanley flagged $2 trillion in SaaS market cap at risk from AI-driven commoditization of software.[13] The supply of software is exploding. The supply of good judgment about what software should exist has not changed.

Every drop in the cost of execution produces a corresponding rise in the demand for taste: for someone who can look at the flood of output and say this one matters, that one doesn't, and here's why. Taste was always the job. We're just finally able to see it.


The Thirty-Year Window

We had a specific, historically unusual window (roughly 1995 to 2025) where you could build an entire career on execution alone. Could you get the code to compile? Could you ship on time? Could you scale to a million users without the servers falling over? The difficulty of execution created a fog thick enough that we rarely asked the more important question: should this thing exist at all? Is it good? Does anyone actually need it?

I've watched this fog lift in real time. A year ago, building a feature meant weeks of scoping, architecting, debugging, deploying. The process was so consuming that we rarely paused to ask whether the feature was right because we were too deep in the work of making it function. Now an agent builds it in an afternoon. The backlog that used to represent months of work gets cleared in days. And suddenly you're face-to-face with the question you'd been too busy to ask: is this actually good?

I now write 95% of my code from my phone. I'm mass-producing software. I'm often mentally exhausted by 11am.

Simon Willison, co-creator of Django, describes this shift viscerally.[8] The cognitive load didn't decrease when AI took over the typing. It shifted. From the mechanical act of writing code to the much harder work of deciding what code should be written, evaluating whether the output is correct, and directing the next iteration. The exhaustion is real, but it's a different kind of exhaustion: judgment, not labor.

And the squeeze isn't evenly distributed:

  • Senior engineers amplify deep experience through agents. They thrive.
  • Juniors onboard faster than ever. AI closes the knowledge gap.
  • Mid-career engineers, the ones whose value was reliable execution, face the greatest pressure.

Harvey, the $11B legal AI startup, sees the same shift in law. Their CEO, Winston Weinberg, puts the pace bluntly: "Every four months you have to reinvent yourself as a founder."[9] Their co-founder, Gabe Pereyra, writes that more agent throughput doesn't reduce the need for lawyers; it means "more judgment calls, and a deeper need for high-skill, high-trust lawyers" at every step.[10] But the deeper insight is about what happens to entire organizations: with the ability to hire infinite AI employees, companies stop being constrained by throughput. How fast any individual can go alone stops being the limit. And then institutions have to relearn how to go far together:

  • What work actually matters?
  • How do you review AI output at scale?
  • How do you build trust in decisions you didn't make?
  • How do you train people when the work keeps changing?
  • How do you redesign organizations around a surplus of intelligence bottlenecked by judgment?

This is the part most "taste" discourse misses. It's not just about individual discernment. Meaningful leverage under these conditions isn't about how much one person or one organization can produce. It's about how much context people, teams, and institutions can coordinate across humans and agents. The bottleneck has moved from doing the work to deciding which work matters, and from individual decisions to institutional judgment.


How Taste Is Actually Built

So taste is the job. But how do you actually develop it? That depends on what you think taste is. The discourse has fractured into at least five positions:

  • Choosing what to make (Paul Graham, Greg Brockman[2]): selection is the skill. Build the right thing.
  • Choosing what not to make (Eric De Castro[15]): restraint is the moat. Saying no is harder than saying yes.
  • Trained instinct (Emil Kowalski[11]): learnable through exposure, analysis, practice.
  • Pattern recognition that AI can learn (Paras Chopra, Nan Yu of Linear[16]): if taste is just good judgment, models will get there.
  • Conviction, not taste (Julie Zhuo[17], Ivan Zhao[18]): taste-as-prediction is trainable. Will is the real differentiator.

These aren't contradictory. They're describing different layers of the same thing. Selection, restraint, instinct, judgment, conviction. The question is which layer matters most when AI can already handle the first few. I think the answer is all of them, in sequence, and we're currently watching AI climb the stack from the bottom.

But here's what they all agree on, even if they don't say it directly: taste was always the job. It was always underneath, always the thing that separated the best work from the functional. Execution just made it invisible.

So how do you build it? Emil Kowalski has a useful analogy: when the first car came out, nobody cared about color or silhouette because the competition was a horse.[11] Basic transportation was the miracle. But once cars were everywhere, design became the differentiator, because the functional problem was solved. Software is at this exact inflection point. Shipping something that works is no longer impressive. An agent can do that. The question is whether what you shipped is worth using.

Kowalski's framework is three things:

  1. Surround yourself with great work. Deliberate exposure to the best in your field.
  2. Think critically about why you like it. Not "I like this design" but "this design works because the hierarchy guides my eye from the value prop to the CTA without me having to think about where to look next."
  3. Practice your craft relentlessly. Close the gap between your judgment and your output.

Taste isn't inborn preference. It's trained instinct.

I've noticed this pattern in my own work, and the correlation is surprisingly direct. The weeks I spend reading great essays, studying products I admire, analyzing why a specific interface or API feels right: those are the weeks I make noticeably better decisions about what to build and how to build it. The weeks I'm heads-down executing without pausing to look up, I ship more, but the work is mediocre. Speed without taste is just faster mediocrity.

Taste is also a reading list. The blogs you follow, the products you study, the people whose judgment you respect enough to learn from. The act of selection: choosing what's worth your attention, which ideas to absorb, which frameworks to internalize, which trends to ignore. That selection process is taste in action, long before you open an editor.


The Window Is Closing

Every previous technology in the timeline above automated execution but couldn't touch judgment. A camera captured light but couldn't decide what was worth photographing. A printing press reproduced text but couldn't tell you what was worth reading. The division was clean: machines handle production, humans handle selection.

AI breaks this division. It doesn't just automate mechanical execution. It automates cognitive execution. And it's climbing the taste stack faster than most people realize.

To understand why, you need to see what taste actually is. It's not one skill. It's a meta-capability: the ability to hold an entire system in your head and make judgments across every dimension simultaneously. When you look at a codebase and feel that something is wrong, you're not doing one thing. You're evaluating architectural coherence, long-term maintainability, API ergonomics, user-facing implications, team velocity effects, and a dozen other axes at once, then synthesizing all of it into a single directional judgment. That's why AI-generated code still feels like slop even when it compiles and passes tests. The code works locally but fails holistically. It solves the immediate prompt without understanding the system it lives in.

Here's another way to think about it: for any system you want to build, there are a million valid implementations. Different architectures, different abstractions, different tech stacks, different tradeoffs. Most of them work. Very few of them are good. Taste is the prior that lets you navigate that enormous solution space and converge on something coherent rather than sampling randomly from "things that function."
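To make that concrete with a deliberately tiny example (mine, not from any source cited here): two implementations of the same spec, both passing the same test, only one of which reflects a prior about what good looks like.

```python
# Two valid implementations of "deduplicate a list, preserving first-seen
# order." Both function. They are not equally good.

def dedupe_sampled(items):
    # Sampled from "things that function": quadratic, because the
    # membership check rescans the output list on every iteration,
    # and the intent is buried in bookkeeping.
    out = []
    for item in items:
        if item not in out:
            out.append(item)
    return out

def dedupe_chosen(items):
    # Chosen with a prior: dicts preserve insertion order (Python 3.7+),
    # so this is linear time and reads as a statement of intent.
    return list(dict.fromkeys(items))

# Identical behavior; very different answers to "is this good?"
assert dedupe_sampled([3, 1, 3, 2, 1]) == dedupe_chosen([3, 1, 3, 2, 1]) == [3, 1, 2]
```

Multiply that gap across every function, module, and architectural decision in a system, and you get the difference between coherent software and sampled software.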

But AI is learning to narrow that space too. Each generation climbs higher up the stack:

  • Code review tools now flag architectural decisions that will create maintenance debt three sprints out.
  • Design critique surfaces hierarchy and consistency issues faster than most human designers.
  • Models suggest refactors that require genuine understanding of a codebase's intent, not just its syntax.

With each step up in model capability, the bar for what humans uniquely contribute rises. The people who were differentiating on local pattern recognition (is this component well-designed?) find that AI handles that now. The ones differentiating on system-level judgment (should this component exist at all, and how does it reshape the product three months from now?) still have an edge. But the stack keeps climbing.
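One way to see why local pattern recognition falls first: it can literally be written down as code. Here's a hypothetical sketch of such a check (the thresholds are arbitrary assumptions, and real review tools are far more sophisticated). It can flag a clumsy function; it cannot ask whether the function should exist:

```python
# Taste as local pattern recognition: mechanical checks over one file.
# Automatable precisely because no system-level context is required.
import ast

def local_taste_flags(source: str) -> list[str]:
    flags = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            if len(node.args.args) > 5:  # arbitrary threshold
                flags.append(f"{node.name}: too many parameters")
            if len(node.body) > 50:  # arbitrary threshold
                flags.append(f"{node.name}: suspiciously long body")
            if not ast.get_docstring(node):
                flags.append(f"{node.name}: missing docstring")
    return flags

# Flags the local smells, says nothing about whether `f` belongs in the system.
print(local_taste_flags("def f(a, b, c, d, e, g, h):\n    return a"))
# ['f: too many parameters', 'f: missing docstring']
```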

Marc Andreessen points to institutional navigation as one durable human advantage: the messy, political work of getting organizations to actually adopt change.[12] Sequoia makes a similar argument, that humans occupy "the edge" where intelligence meets reality, navigating trust dynamics, cultural context, and ethical judgment that models can't perceive.[14] These are real frictions. But they're not the fundamental bottleneck. The fundamental bottleneck is that taste itself, the holistic, multi-dimensional, system-level judgment that separates good work from functional work, is genuinely hard. Organizations will adapt their structures. The deeper question is whether AI can develop the kind of judgment that currently requires caring about the outcome.

An AI can evaluate which design is better. It can't care which one ships. It can tell you a feature will increase engagement metrics. It can't tell you whether the engagement is worth having, whether the product is making lives better or just making users more addicted.

Taste is pattern recognition plus aesthetic instinct, and pattern recognition is learnable. Conviction is something else. Having a stake in the outcome, choosing what to fight for, refusing to ship what you don't believe in. That requires caring about the result, and caring requires having something to lose.

Taste might be a temporary human edge. Conviction, the willingness to stake your reputation on a judgment call, might be more durable.


The Race Condition

The people thriving right now were always exercising taste but were bottlenecked by execution overhead. AI removed the bottleneck and now they're amplified, shipping more of what they already knew was right. The people struggling are discovering they don't know what to build when building is free. That's not a permanent verdict. It's useful, actionable information.

But here's what the "just develop taste" advice misses: you're not improving in a vacuum. You're improving while the bar rises under your feet. Each model generation automates another layer of judgment that used to be exclusively human. The surface layers go first: local code quality, visual consistency, pattern matching. The deeper layers (system-level reasoning, long-term architectural judgment, understanding what users actually need versus what metrics say they want) take longer. But the stack keeps climbing.

So the real question is whether you're developing taste faster than AI is learning it. Holistic system-level judgment takes years of deliberate practice. Models are compressing that timeline. The window where human judgment is the clear differentiator is open, but it won't stay open by default.

If you've been coasting on execution skill, start building the muscle you neglected: study the best work in your field, develop real opinions about why things are good or bad, practice making judgment calls with stakes attached. If you already have strong taste, go deeper. The layers that require genuine understanding of systems, users, and long-term consequences are the ones worth investing in, because they'll be the last to get automated.


Thanks to Xiuyu Li for reading a draft of this and pushing back on the parts that needed it.

References

  1. Slavoj Žižek on synthetic sex. Big Think interview. The vibrator-and-fleshlight thought experiment.
  2. Paul Graham on taste in the AI age. Feb 14, 2026. Greg Brockman, quote-tweeting: 'Taste is a new core skill.' ~3.7M combined impressions.
  3. Sam Altman on taste and AGI teams. Fortune, Feb 27, 2026.
  4. Recording tape costs in the 1950s. Professional recording tape was ~$50/reel in 1955 dollars (~$570 adjusted).
  5. Jevons Paradox. William Stanley Jevons, 1865. Increased efficiency of coal use led to increased total coal consumption.
  6. Y Combinator W25 batch growth. 4x application increase year-over-year.
  7. GitHub Octoverse 2024: 1B AI-assisted contributions. Reported Oct 2024.
  8. Simon Willison on Lenny's Podcast. Apr 2026. 95% of code from phone, exhausted by 11am, 'hundreds of small prompts.'
  9. Winston Weinberg on reinvention (Sequoia podcast). Re-earn your role every four months; bias for action.
  10. Harvey: How Autonomous Agents Will Transform Legal. Gabe Pereyra, co-founder. Judgment over throughput; trust as bottleneck.
  11. Emil Kowalski: Developing Taste. Taste as trained instinct: exposure, analysis, practice. The car analogy.
  12. Marc Andreessen on Latent Space. Apr 3, 2026. Institutional resistance as the real bottleneck.
  13. Morgan Stanley: AI's Impact on $2T SaaS Market. 2025. SaaS market cap at risk from AI commoditization of software.
  14. Sequoia: From Hierarchy to Intelligence. Humans at 'the edge' where intelligence meets reality.
  15. Eric De Castro: Taste Is the Only Moat. Feb 2026. Taste defined by what you refuse to do; restraint as the moat.
  16. Is taste a 'new core skill'? Techies debate. Feb 2026. Paras Chopra and Nan Yu (Linear) on whether AI can learn taste as pattern recognition.
  17. Julie Zhuo: When AI Has Better Taste Than You. Conviction vs. taste; will as the real differentiator when AI can predict quality.
  18. The Ivanisms that power Notion: Ivan Zhao on taste and conviction. Taste as pattern recognition is replicable; movement-making conviction is not.

If any of this resonated or you see it differently, I'd like to hear from you.