
The Five-Year Window

A plain-language guide to what's changed with AI, where we are, and why the next few years matter more than any that came before. Click any milestone to see how institutions responded.

Before the Shift
Capability
2012
AI learned to recognize cats in photos
For decades, AI was good at narrow tasks — chess, spam filters, recommending Netflix shows. Then a breakthrough: AI could look at a million photos and learn to identify objects without being explicitly taught rules. This "deep learning" moment changed everything, but it was invisible to most people.
What it meant: It laid the foundation for every AI advance since.
How institutions responded
Congress: No awareness or response
Tech industry: Immediate investment; Google acquired DeepMind 2014
Academia: Rapid adoption; ML programs expanded globally
EU / regulators: No response
Capability · Governance
2017
AI learned to translate languages — nearly as well as humans
Google's research team published a paper describing a new architecture called a "transformer." It let AI systems understand context across long passages of text. Within a year, machine translation reached human-level accuracy for many language pairs.
What it meant: Language — the thing most uniquely human — was now within AI's reach.
How institutions responded
Congress: No response
Tech industry: Transformer became the dominant architecture within 2 years
EU: Early AI strategy papers began circulating; no binding action
White House: No response
The Acceleration
Capability · Labor
2020
For the first time, AI could write, code, and hold a real conversation
OpenAI released GPT-3. If you gave it a few examples of writing, it could continue in that style — essays, code, poetry, legal documents, emails. Most people had never heard of it, but engineers immediately understood: this was qualitatively different from anything before. AI could generalize.
What it meant: AI moved from "narrow tool" to "general-purpose language machine." A line was crossed.
How institutions responded
Congress: No response
OpenAI: Commercialized API; shifted from nonprofit mission to capped-profit
EU: Drafted AI Act proposal (2021); first major regulatory framework
Labor economists: First serious displacement studies published; no policy action
Capability · Governance · Labor
Nov 2022
100 million people tried AI in two months
ChatGPT launched. In the history of consumer technology, nothing had ever grown that fast — not smartphones, not social media, not the internet. Suddenly AI wasn't a research paper. It was a thing your grandmother could use. The public, political, and cultural conversation finally began.
What it meant: The public became aware. The race over what AI would become was now out in the open, for better and worse.
How institutions responded
Congress: Senate hearings called (early 2023); no legislation resulted
EU: AI Act negotiations accelerated; passed 2024
White House: Requested voluntary safety commitments from labs; not binding
Schools / universities: Chaotic mix of bans and integration attempts; no consensus
Oklahoma legislature: No response
Capability · Labor · Governance
2023
AI passed the bar exam, the medical boards, and the SAT
GPT-4 and Claude scored in the upper percentiles on professional licensing exams that take humans years of study to pass. It wasn't that AI was "smart" in a human sense — it had processed more text about law and medicine than any human could read in a lifetime. But the practical implications were immediate: AI could now do knowledge work.
What it meant: Every job requiring a college degree was now within AI's range. The economic disruption became real, not hypothetical.
How institutions responded
Senate: Blumenthal/Hawley AI hearings; bipartisan concern, no bill passed
EU: AI Act finalized and passed; most comprehensive regulation to date
ABA / bar associations: Issued guidance on lawyer AI use; not binding
Medical boards: Began reviewing policies; FDA issued some AI device guidance
Oklahoma legislature: No response
The Window Opens
Here's the crucial thing most coverage misses: AI is now capable enough to transform civilization — but it's not yet so autonomous that humans are out of the loop. That gap between "enormously capable" and "beyond human direction" is the window. It's the period when democratic societies can still decide what AI is for.
Capability · Labor
2024
AI learned to think through hard problems step by step
"Reasoning models" arrived. Previous AI answered questions from pattern recognition — like a student who memorizes answers. Newer systems actually work through problems: they consider possibilities, check their logic, revise their answers. Performance on math, science, and coding jumped dramatically.
What it meant: The gap between AI and human cognitive ability narrowed — not in a sci-fi way, but in a measurable, practical one.
How institutions responded
Congress: Several bills introduced; none reached a floor vote
White House: Executive Order on AI (Oct 2023) — then rescinded Jan 2025
Labs (OpenAI, Anthropic, Google): Raced to deploy reasoning models; published safety evals of mixed quality
Oklahoma legislature: No response
Capability · Labor · Governance
2025
AI started doing multi-step tasks on its own
"Agents" — AI systems that can take actions in the world, not just answer questions — became practical. An AI could be given a goal ("research competitors and write a report") and go do it: browsing the web, writing code, sending files, checking its own work. Human involvement went from constant to occasional.
What it meant: The nature of human work began shifting in real time. Not just "AI as a tool" but AI as a collaborator — or, in some cases, a replacement.
How institutions responded
Congress: Limited response; AI caucus active but no major legislation
NLRB: Opened investigations into AI-driven labor decisions
States (CA, TX, CO): Introduced or passed AI transparency and liability bills
Enterprise: Rapid deployment; estimates of 10–30% white-collar task automation
Oklahoma legislature: No response on record
You are here — February 2026
Window: estimated time remaining to shape AI's direction
Roughly 3–7 years, depending on how fast autonomous systems develop — and what we do with the time.
What Comes Next
Risk · Governance
2026–2027
AI systems may begin improving themselves
Several research labs are working on AI that can identify its own weaknesses and improve its training. If this succeeds at scale, the pace of capability growth stops being driven by human engineering hours. It becomes something faster.
Why it matters now: Once self-improvement loops are active, the window for human course-correction shrinks rapidly.
Confidence: Multiple lab roadmaps + independent researchers
Governance · Risk
2028+
The institutions we build now will determine what this era looks like
History is not made by the most powerful technology. It's made by who controls it, what rules govern it, and whether those rules were built when there was still time to build them carefully.
The choice: AI that concentrates power — or AI that distributes it. AI that replaces human agency — or AI that augments it.
Confidence: Historical precedent — Æ assessment

Acting during the window is the work.

This isn't about slowing AI down. It's about making sure that when capability outpaces our current institutions, we've already built better ones.