
Taste Was Always the Job

[Image: A hand selecting a pocket watch from scattered clock parts and gears on a workbench]

Slavoj Žižek has this thought experiment about sex toys. You take a vibrator and a fleshlight, plug them into each other, turn them both on, and let them go at it. Meanwhile the two actual humans sit at a nearby table, drinking tea and having a real conversation.[1]

The machines are doing it for us, buzzing in the background, and I'm free to do whatever I want... we have a nice talk; we have tea; we talk about movies. I talk with a lady because we really like each other.

I keep thinking about this in the context of AI and work. We've spent decades optimizing for the wrong thing: the mechanical part, the execution, the buzzing. And now that machines can handle the buzzing, we're sitting at the table for the first time, realizing the conversation is what mattered all along. Some of us are discovering we're great conversationalists. Others are discovering they never learned how to talk.

In February 2026, Paul Graham posted that taste is now the core differentiator.[2] Sam Altman told Fortune that even non-technical people can contribute to AGI teams if they have taste.[3] Suddenly taste was the word on everyone's lips. But the framing was wrong. Taste isn't becoming the job. It was always the job. We just couldn't see it through the fog of execution.


The Fog of Execution

For most of human history, execution was so expensive that it obscured what actually mattered.

  • A monk hand-copying a manuscript in 1440 couldn't afford to ask whether the book was worth reading. He had six months of lettering ahead of him.
  • A portrait painter in 1830 couldn't question whether the subject was worth depicting. The commission paid rent.
  • A recording engineer in 1955 couldn't waste studio time wondering if the song was any good. Professional tape cost $50 a reel, roughly $570 in today's dollars.[4]

The cost of making something was so high that the question of whether you should make it was a luxury.

  • 1440: Printing Press. Execution cost ↓ Book production went from months to days; cost per copy dropped ~80%. Taste demand ↑ The press didn't make editors obsolete. It invented them. Publishers, literary critics, and curators emerged because someone had to decide what was worth the ink.
  • 1839: Photography. Execution cost ↓ Capturing a likeness went from weeks of portrait sitting to seconds of exposure. Taste demand ↑ Everyone said painting was dead. Instead, painters freed from representational labor became artists. Impressionism, Expressionism, and Cubism all followed within decades.
  • 1920s: Recording & Radio. Execution cost ↓ Music distribution went from venue capacity to infinite broadcast reach. Taste demand ↑ A&R executives, producers, label curators: an entire industry built around a single question. What deserves to be heard?
  • 1985: Desktop Publishing. Execution cost ↓ Typesetting went from $500/page specialists to anyone with a Macintosh and LaserWriter. Taste demand ↑ 99% of flyers were hideous. Graphic design became a real profession specifically because execution without taste produced garbage at unprecedented scale.
  • 2005: YouTube & Blogging. Execution cost ↓ Publishing went from broadcast licenses and editorial gatekeepers to a WordPress install and a webcam. Taste demand ↑ 500 hours of video uploaded per minute by 2025. The people who built audiences were editors and curators, not the fastest typists.
  • 2025: AI Agents. Execution cost ↓ Software went from teams of engineers, months of sprints, and millions in payroll to an afternoon with an agent. Taste demand ↑ This is where we are now.

Then the cost dropped. And every single time, the same pattern emerged: the technology didn't eliminate the need for human judgment. It created entirely new professions dedicated to it. The people who thrived weren't the ones who could operate the new technology fastest. They were the ones who could decide what the technology should be pointed at.

This is Jevons Paradox applied to creative and knowledge work.[5] When coal got cheaper in the 1860s, England didn't use less coal; total consumption increased, because cheaper energy made more applications economically viable. Resistance to this idea was fierce. People assumed efficiency would reduce total demand. Instead, demand exploded, because the constraint had been masking latent demand that was always there.

The same thing is happening with execution. When building software gets 10x cheaper, we don't build 1x the software with 10x less effort. We build 100x the software, and the question of whether any of it is good becomes the entire question.

GitHub reported 1 billion AI-assisted code contributions in 2024, up from effectively zero two years before.[7] Y Combinator's W25 batch drew 4x the applications of the year prior.[6] Morgan Stanley flagged $2 trillion in SaaS market cap at risk from AI-driven commoditization of software.[13] The supply of software is exploding. The supply of good judgment about what software should exist has not changed.

Every drop in the cost of execution brings a corresponding increase in the demand for taste: for someone who can look at the flood of output and say this one matters, that one doesn't, and here's why. Taste was always the job. We're just finally able to see it.


The Thirty-Year Window

We had a specific, historically unusual window (roughly 1995 to 2025) where you could build an entire career on execution alone. Could you get the code to compile? Could you ship on time? Could you scale to a million users without the servers falling over? The difficulty of execution created a fog thick enough that we rarely asked the more important question: should this thing exist at all? Is it good? Does anyone actually need it?

I've watched this fog lift in real time. A year ago, building a feature meant weeks of scoping, architecting, debugging, deploying. The process was so consuming that we rarely paused to ask whether the feature was right because we were too deep in the work of making it function. Now an agent builds it in an afternoon. The backlog that used to represent months of work gets cleared in days. And suddenly you're face-to-face with the question you'd been too busy to ask: is this actually good?

I now write 95% of my code from my phone. I'm mass-producing software. I'm often mentally exhausted by 11am.

Simon Willison, co-creator of Django, describes this shift viscerally.[8] The cognitive load didn't decrease when AI took over the typing. It shifted. From the mechanical act of writing code to the much harder work of deciding what code should be written, evaluating whether the output is correct, and directing the next iteration. The exhaustion is real, but it's a different kind of exhaustion: judgment, not labor.

And the squeeze isn't evenly distributed:

  • Senior engineers amplify deep experience through agents. They thrive.
  • Juniors onboard faster than ever. AI closes the knowledge gap.
  • Mid-career engineers, the ones whose value was reliable execution, face the greatest pressure.

Harvey, the $11B legal AI startup, sees the same shift in law. Their CEO Winston Weinberg puts the pace bluntly: "Every four months you have to reinvent yourself as a founder."[9] Their head of AI research, Gabe Pereyra, writes that more agent throughput doesn't reduce the need for lawyers; it means "more judgment calls, and a deeper need for high-skill, high-trust lawyers" at every step.[10] But the deeper insight is about what happens to entire organizations: with the ability to hire infinite AI employees, companies stop being constrained by throughput. How fast individual employees can go alone hits an asymptote. And then institutions have to relearn how to go far together:

  • What work actually matters?
  • How do you review AI output at scale?
  • How do you build trust in decisions you didn't make?
  • How do you train people when the work keeps changing?
  • How do you redesign organizations around a surplus of intelligence bottlenecked by judgment?

This is the part most "taste" discourse misses. It's not just about individual discernment. Meaningful leverage under these conditions isn't about how much one person or one organization can produce. It's about how much context people, teams, and institutions can coordinate across humans and agents. The bottleneck has moved from doing the work to deciding which work matters, and from individual decisions to institutional judgment.


How Taste Is Actually Built

So taste is the job. But how do you actually develop it? That depends on what you think taste is. The discourse has fractured into at least five positions:

  • Choosing what to make (Paul Graham, Greg Brockman): selection is the skill. Build the right thing.
  • Choosing what not to make (Eric De Castro): restraint is the moat. Saying no is harder than saying yes.
  • Trained instinct (Emil Kowalski): learnable through exposure, analysis, practice.
  • Pattern recognition that AI can learn (Paras Chopra, Nan Yu of Linear): if taste is just good judgment, models will get there.
  • Conviction, not taste (Julie Zhuo, Ivan Zhao): taste-as-prediction is trainable. Will is the real differentiator.

These aren't contradictory. They're describing different layers of the same thing. Selection, restraint, instinct, judgment, conviction. The question is which layer matters most when AI can already handle the first few. I think the answer is all of them, in sequence, and we're currently watching AI climb the stack from the bottom.

But here's what they all agree on, even if they don't say it directly: taste was always the job. It was always underneath, always the thing that separated the best work from the functional. Execution just made it invisible.

So how do you build it? Emil Kowalski has a useful analogy: when the first car came out, nobody cared about color or silhouette because the competition was a horse.[11] Basic transportation was the miracle. But once cars were everywhere, design became the differentiator, because the functional problem was solved. Software is at this exact inflection point. Shipping something that works is no longer impressive. An agent can do that. The question is whether what you shipped is worth using.

Kowalski's framework is three things:

  1. Surround yourself with great work. Deliberate exposure to the best in your field.
  2. Think critically about why you like it. Not "I like this design" but "this design works because the hierarchy guides my eye from the value prop to the CTA without me having to think about where to look next."
  3. Practice your craft relentlessly. Close the gap between your judgment and your output.

Taste isn't inborn preference. It's trained instinct.

I've noticed this pattern in my own work and it's been surprisingly direct. The weeks I spend reading great essays, studying products I admire, analyzing why a specific interface or API feels right, those are the weeks I make noticeably better decisions about what to build and how to build it. The weeks I'm heads-down executing without pausing to look up, I ship more but the work is mediocre. Speed without taste is just faster mediocrity.

Taste is also a reading list. The blogs you follow, the products you study, the people whose judgment you respect enough to learn from. The act of selection: choosing what's worth your attention, which ideas to absorb, which frameworks to internalize, which trends to ignore. That selection process is taste in action, long before you open an editor.


The Window Is Closing

Every previous technology in the timeline above automated execution but couldn't touch judgment. A camera captured light but couldn't decide what was worth photographing. A printing press reproduced text but couldn't tell you what was worth reading. The division was clean: machines handle production, humans handle selection.

AI breaks this division. It doesn't just automate mechanical execution. It automates cognitive execution. It evaluates its own output, generates novel approaches, and iterates on feedback. Consider what's changed in just the last year:

  • An AI code reviewer now flags architectural decisions that will create maintenance debt three sprints from now.
  • It identifies abstraction patterns that a mid-level engineer would miss.
  • It suggests refactors that require genuine understanding of the codebase's intent.
  • Design critique tools surface spacing, hierarchy, and consistency issues faster than most human designers.

The gap between human taste and machine taste is narrowing at a rate that should make everyone pay attention.

Marc Andreessen, on the Latent Space podcast, points to something that might outlast individual taste as a human advantage: institutional navigation.[12]

High quality software is just like infinitely available... This thing that was this incredibly scarce resource is just going to become a completely fungible thing.

But if software is fungible, the bottleneck isn't building it. It's the messy, political, deeply human world of organizations that resist change regardless of the quality of the proposed alternative. The disruption is slower than pure technical capability would predict, because organizations don't run on logic alone. They run on trust, relationships, internal politics, and institutional memory that no model can perceive or navigate. Sequoia's recent essay on organizational structure makes the same point from the inside: when AI replaces the hierarchy that used to route information, humans occupy what they call "the edge", where intelligence makes contact with reality.[14] The value is in things models cannot perceive: intuition, cultural context, trust dynamics, ethical judgment.

But I think there's something even more fundamental than institutional friction.

An AI can evaluate which design is better. It can't care which one ships. An AI can tell you that a feature will increase engagement metrics. It can't (yet) tell you whether the engagement is worth having, whether the product is making users' lives better or just more addicted.

Taste is pattern recognition plus aesthetic instinct, and pattern recognition is learnable. Conviction is something else. Having a stake in the outcome, choosing what to fight for, refusing to ship what you don't believe in. That requires caring about the result, and caring requires having something to lose.

Taste might be a temporary human edge. Conviction, the willingness to stake your reputation on a judgment call, might be more durable.


Racing Through the Window

Every morning I'm directing AI agents across multiple projects, different codebases, different contexts, different judgment calls. By end of day I'm more exhausted than I've ever been in my career, but it's a completely different kind of exhaustion. It's not the tiredness of typing or debugging or deploying. It's the tiredness of making several times more decisions per hour than I used to, switching between agent contexts, evaluating output quality at a pace that would have been physically impossible a year ago. The decisions that used to be spread across weeks of implementation now land on my desk in an afternoon, each one demanding an answer now.

Some people are thriving in this environment. They were always exercising taste, always had strong instincts about what to build and how to build it, but were bottlenecked by execution overhead. AI removed the bottleneck and now they're amplified, shipping more of what they already knew was right.

Others are discovering they don't know what to build when building is free. That's not a permanent verdict. It's useful, actionable information: read more, study the best work in your field, develop the instinct you never had bandwidth to develop when execution consumed your entire working life.

Taste was always the job. The fog just made it easy to forget.

But here's the question I keep coming back to: if you're improving your taste slower than AI is learning it, then what? The window where human taste is the differentiator is open but closing. AI is learning taste (pattern recognition, aesthetic evaluation, quality assessment) and it's learning fast. I'm excited to find out how fast. I'm building, reading, choosing carefully, and trying to widen the gap between my judgment and what a model can replicate. The machines are getting better at buzzing. The question is whether you've been building the skills for the conversation at the table, or whether you've been buzzing all along.


References

  [1] Slavoj Žižek on synthetic sex. Big Think interview. The vibrator-and-fleshlight thought experiment.
  [2] Paul Graham on taste in the AI age. Feb 14, 2026. Greg Brockman QT: "Taste is a new core skill." ~3.7M combined impressions.
  [3] Sam Altman on taste and AGI teams. Fortune, Feb 27, 2026.
  [4] Recording tape costs in the 1950s. Professional recording tape was ~$50/reel in 1955 dollars (~$570 adjusted).
  [5] Jevons Paradox. William Stanley Jevons, 1865. Increased efficiency of coal use led to increased total coal consumption.
  [6] Y Combinator W25 batch growth. 4x application increase year-over-year.
  [7] GitHub Octoverse 2024: 1B AI-assisted contributions. Reported Oct 2024.
  [8] Simon Willison on Lenny's Podcast. Apr 2026. 95% of code from phone, exhausted by 11am, "hundreds of small prompts."
  [9] Winston Weinberg on reinvention (Sequoia podcast). Re-earn your role every 4 months, bias for action.
  [10] Harvey: How Autonomous Agents Will Transform Legal. Gabe Pereyra. Judgment over throughput, trust as bottleneck.
  [11] Emil Kowalski: Developing Taste. Taste as trained instinct: exposure, analysis, practice. The car analogy.
  [12] Marc Andreessen on Latent Space. Apr 3, 2026. Institutional resistance as the real bottleneck.
  [13] Morgan Stanley: AI's Impact on $2T SaaS Market. 2025. SaaS market cap risk from AI commoditization of software.
  [14] Sequoia: From Hierarchy to Intelligence. Humans at "the edge" where intelligence meets reality.
