Views Through a Policy Prism

When Politicians Break the Thermometer

How information destruction becomes a new form of policy entrepreneurship

Dana Dolan, Ph.D.
Sep 25, 2025

Last week’s AI jobs analysis highlighted a troubling paradox: while economists demand better data on technological unemployment, the government just fired the statistician responsible for tracking employment trends. This case reveals how breaking measurement systems can be more effective than winning policy debates. This piece explains what that means for governing complex challenges, and what to do next.

[Image: Smashed and broken thermometer (Source: Substack AI image generator)]

The Policy Stream Paradox

Here’s what makes the AI jobs debate so instructive: the solutions already exist. Future-of-work researcher Jason Wingard has outlined specific interventions for technological displacement—upskilling programs for AI-complementary skills, retraining grants, wage insurance during transitions, job placement assistance [5]. These aren’t experimental ideas; they represent technically feasible, value-acceptable policies sitting on the shelf.

Yet none of these solutions can gain traction. Why? Because the problem they’re designed to address never crystallizes clearly enough to justify deployment.

This reveals something important about how policy change actually works. In Kingdon’s Multiple Streams Framework, problems must be defined with sufficient precision to enable “coupling” with specific solutions when political conditions align [2]. Vague problems resist coupling; well-defined problems enable it. You can’t sell targeted retraining programs to address “AI might displace some jobs somehow.” But you can sell them to address “AI is displacing X jobs in Y industries at Z rate.”

The AI case shows what happens when this problem definition process gets systematically disrupted—and why that disruption might be intentional.

Information Infrastructure as Competitive Terrain

Traditional MSF analysis focuses on how problems emerge and capture attention. But the AI jobs case suggests something different: how problems can be prevented from forming in the first place.

Consider the three mechanisms Kingdon identifies for problem recognition. Indicators provide systematic evidence through trusted data sources tracked over time. Focusing events create emotional responses that dramatize broader problems. Feedback reveals what’s working through reliable information flows between implementers and decision-makers.

Now watch what’s happening to each mechanism:

🔒 Want the full framework for diagnosing when information destruction is masquerading as policy debate? Upgrade to read the complete analysis, including the diagnostic questions you can apply to other policy puzzles.

✅ If you’re a paid subscriber, read on for the full analysis, and thank you! Please share your thoughts or questions by leaving a comment at the end.
