On March 5, 2026, Anthropic published the most data-grounded AI labor market study to come out of any AI lab. The headline finding sounds reassuring. Understood correctly, it’s a countdown clock.
The paper introduces a metric called observed exposure — measuring how much AI is actually doing in professional work today versus what it technically could do. The gap between those two numbers is the most strategically important figure for any leader managing a knowledge-work team right now.

Why Every Previous Model Got This Wrong
Most AI labor studies measure theoretical capability. They score an occupation’s tasks against what an LLM could theoretically accelerate, then generate a risk ranking. The historical track record of that approach is poor. Offshoring research in the 2000s flagged a quarter of U.S. jobs as vulnerable. A decade later, most were growing.
Anthropic researchers Maxim Massenkoff and Peter McCrory built something different. Their observed exposure metric combines theoretical feasibility scores from Eloundou et al. (2023) — which estimate whether an LLM can halve the time needed for a task — with real Claude usage data from the Anthropic Economic Index, covering professional interactions across hundreds of occupations.
A task only counts as exposed if it is both theoretically feasible and actually appears in professional Claude usage. Automated use cases, where AI completes a task without a human in the loop, are weighted more heavily than augmentative ones. The result correlates with BLS employment projections across 800 occupations through 2034, a correlation that purely theoretical models fail to show.
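The paper's exact aggregation formula isn't reproduced in this article; a minimal Python sketch of the metric's logic, in which the weights, field names, and example numbers are all hypothetical illustrations rather than the authors' actual specification, could look like this:

```python
# Hypothetical sketch of an observed-exposure score for one occupation.
# Weights and record fields are illustrative, not the paper's actual formula.

AUTOMATED_WEIGHT = 1.0   # AI completes the task without a human in the loop
AUGMENTED_WEIGHT = 0.5   # AI assists a human (weighted less, per the paper)

def observed_exposure(tasks):
    """tasks: list of dicts with 'feasible' (bool, from feasibility scores
    in the style of Eloundou et al.) and 'automated_share' /
    'augmented_share' (floats, from observed usage data)."""
    score, total = 0.0, 0
    for t in tasks:
        total += 1
        if not t["feasible"]:
            continue  # a task counts only if it is theoretically feasible
        usage = (AUTOMATED_WEIGHT * t["automated_share"]
                 + AUGMENTED_WEIGHT * t["augmented_share"])
        score += min(usage, 1.0)  # cap each task's contribution at 1
    return score / total if total else 0.0

# Example: one of three tasks shows heavy automated use, one shows none,
# and one is observed in usage but not theoretically feasible.
tasks = [
    {"feasible": True,  "automated_share": 0.8, "augmented_share": 0.2},
    {"feasible": True,  "automated_share": 0.0, "augmented_share": 0.0},
    {"feasible": False, "automated_share": 0.5, "augmented_share": 0.5},
]
print(round(observed_exposure(tasks), 2))  # → 0.3
```

The key structural choice the sketch mirrors is the conjunction: usage without feasibility contributes nothing, and feasibility without usage contributes nothing.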
The 61-Point Gap
Computer and Math occupations are theoretically 94% exposed to LLM disruption. Observed exposure: 33%.
That 61-point gap is the number. It tells you AI isn’t yet doing what it’s technically capable of. It also tells you the ceiling for disruption is far above current conditions.
Office and Administrative roles follow the same pattern — roughly 90% theoretical exposure, far lower in practice. The researchers attribute the gap to model limitations in specific contexts, legal constraints, human verification requirements, and software integration hurdles.
All are solvable over time. Across all professional Claude usage, 97% of observed tasks already fall within theoretically feasible territory. The bottleneck isn’t capability — it’s deployment.
At the other end, 30% of the U.S. workforce has zero observed exposure: cooks, mechanics, bartenders, dishwashers. Their work requires physical presence no LLM replicates.

The Three Occupations at the Top of the List
The paper’s top ten most-exposed occupations include three that stand out. Computer Programmers sit at 75% observed exposure, the highest of any role, consistent with what development teams already observe.
Customer Service Representatives rank among the highest, driven by API traffic showing high automation of query handling and resolution tasks. Data Entry Keyers come in at 67%, their core task of reading and entering source data showing significant automated use.
For context: the paper’s framework is sensitive enough to detect a scenario equivalent to the Great Recession for white-collar workers. During 2007–2009, U.S. unemployment doubled from 5% to 10%.
A comparable doubling in the top AI-exposure quartile — from 3% to 6% — would be clearly visible in this analysis. It hasn’t happened. The framework was built precisely to catch it when it does.
Who Sits in the Exposed Quadrant
The demographic profile of high-exposure workers matters. The top quartile is 16 percentage points more likely to be female and 11 percentage points more likely to be white, with Asian workers nearly twice as represented as in the zero-exposure group.
These workers earn 47% more on average. Graduate degree holders make up 17.4% of the most-exposed group versus 4.5% of the least exposed — nearly a fourfold difference.
This is not the warehouse worker or the service floor employee. This is the financial analyst, the software developer, the legal researcher. High-skill, high-wage, highly educated roles are concentrated precisely where observed exposure is deepest.
What the Employment Data Actually Shows Right Now
No mass unemployment. That’s the current state of play, and the paper is direct about it.
A difference-in-differences analysis comparing top-quartile exposure workers against the zero-exposure group since late 2022 shows a gap indistinguishable from zero. The framework is designed to catch a “Great Recession scenario” if it emerges — and right now, it isn’t flagging one.
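Difference-in-differences compares the change over time in the exposed group against the change in the unexposed group, so that economy-wide shocks hitting both groups cancel out. A toy calculation, with all employment rates invented purely for illustration:

```python
# Toy difference-in-differences. All rates are invented for illustration;
# they are not the paper's data.
exposed_before,   exposed_after   = 0.965, 0.958  # top-quartile exposure group
unexposed_before, unexposed_after = 0.950, 0.943  # zero-exposure group

# Each group's own change over time...
exposed_change   = exposed_after - exposed_before      # -0.007
unexposed_change = unexposed_after - unexposed_before  # -0.007

# ...and the difference between those changes is the estimated effect.
did = exposed_change - unexposed_change
print(abs(round(did, 10)))  # → 0.0
```

When both groups decline in parallel, as in this toy example, the estimate is zero: whatever hit the labor market hit exposed and unexposed workers alike, which is the "gap indistinguishable from zero" result the paper reports.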
What is showing up is a hiring slowdown for workers aged 22–25 entering exposed occupations. The job-finding rate for that cohort dropped approximately 14% in the post-ChatGPT period compared to 2022, a result that barely clears statistical significance. Workers over 25 show no equivalent signal.
The pattern matters. This isn’t mass layoffs — it’s the quiet closing of the entry-level hiring pipeline. Companies aren’t cutting existing headcount at scale. They’re freezing the intake valve.
The Broader Picture: Messy, Not Apocalyptic
The broader labor market adds necessary context here. The February 2026 BLS jobs report showed a loss of 92,000 nonfarm payroll jobs and unemployment rising to 4.4%.
Alarming at first read — but the main driver was a 28,000-job drop in healthcare attributable to the Kaiser Permanente nurses’ strike, plus winter storm distortions. The information services sector, which is AI-adjacent, lost 11,000 jobs in February, continuing a 12-month trend of approximately 5,000 losses per month.
The AI attribution question is genuinely complicated. In 2025, companies explicitly cited AI in approximately 55,000 layoff announcements — more than 12 times the number two years prior — out of 1.17 million total cuts, the highest layoff total since the pandemic, according to Challenger, Gray & Christmas.
But researchers and economists increasingly flag “AI washing” as a real phenomenon: companies attributing financially motivated or structural cuts to AI because it reads better to investors and avoids political backlash than citing overhiring or weak demand. Amazon’s Andy Jassy initially cited AI agents as a driver of 14,000 corporate job cuts in 2025, then later clarified the cuts were “not really AI-driven, not right now at least.”
None of this means displacement isn’t coming. It means the signal is currently mixed, which is exactly the environment Anthropic’s framework was designed to navigate. MIT economists Daron Acemoglu, David Autor, and Simon Johnson make the case that the outcome isn’t predetermined — policy choices around tax structure, antitrust, and worker voice will meaningfully shape whether AI becomes augmentative or purely automating. The trajectory is not fixed.
What Leaders Should Do With This
For CTOs and technical leaders: Observed exposure scores belong on your workforce planning dashboard, next to headcount ratios and tooling costs. The BLS correlation is operationally useful: every 10 percentage points of observed exposure correlates with 0.6 percentage points slower projected job growth through 2034.
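Applied as back-of-envelope arithmetic, that correlation turns an exposure score into a projected growth drag. The linear extrapolation below and the function name are my own illustration, with only the slope taken from the paper's reported figure:

```python
# Back-of-envelope growth drag from observed exposure.
# Slope from the paper's reported correlation: every 10 pp of observed
# exposure correlates with 0.6 pp slower projected job growth through 2034.
# Treating it as linear across the full range is an assumption.
SLOPE = 0.6 / 10  # pp of growth drag per pp of observed exposure

def growth_drag(observed_exposure_pct):
    """Projected percentage-point slowdown in job growth through 2034."""
    return SLOPE * observed_exposure_pct

print(round(growth_drag(33), 2))  # → 1.98 (Computer and Math, 33% observed)
print(round(growth_drag(75), 2))  # → 4.5  (Computer Programmers, 75% observed)
```

A correlation is not a per-team forecast, but even as a rough planning input it puts a number on the gap between a 33%-exposed function and a zero-exposure one.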
Map that against your teams in software, customer support, and data operations. If your entry-level hiring pipeline in those areas hasn’t already slowed, it will. Get ahead of the reskilling question before attrition and freeze make it an emergency.
For founders and CEOs: The demographic profile of highly exposed workers creates specific talent retention and reputational risk as AI deployment deepens. Educated, higher-paid, disproportionately female workforces in AI-exposed roles will face the earliest pressure.
Teams that build augmentation-first tooling policies — with clear lines on what stays human and what gets automated — will retain talent longer and be better positioned as regulatory scrutiny on AI-driven workforce restructuring increases. The agentic AI platforms being embedded in enterprise software right now are moving this timeline faster than most workforce planning models account for.
For B2B marketers and RevOps leaders: This study validates a significant demand-gen window in HR-tech, people analytics, and workforce intelligence. Enterprises are actively looking for tools that can quantify their own exposure, model hiring shifts before they become visible externally, and show ROI on reskilling investment.
If you sell into CHROs or COOs, observed exposure now has rigorous empirical backing from one of the most credible AI research labs in the world. That’s a sales conversation with teeth.
The Window You Actually Have
The honest read of this paper: displacement hasn’t hit unemployment yet, but the structural conditions are being built. Hiring is already slowing for the youngest cohort of knowledge workers in the most exposed roles. Task coverage will expand as model capability advances and deployment barriers erode.
The BLS projects through 2034. The Anthropic framework updates with new data. If your team’s core work sits inside the high-exposure quadrant, you have a window — likely measured in months to a few years — before these numbers become operational urgency instead of strategic planning.
Use that window. The paper gives you the coordinates.