The ninth edition of the AI Index Report centers on a key question – can the systems built around AI keep up with its rapid progress? The new report tracks AI development across areas such as reasoning, real-world task execution, and safety. In addition, researchers present updated estimates of generative AI’s economic value and its impact on the labor market.

For the first time, this edition includes dedicated chapters on AI in science and AI in medicine, reflecting the growing influence of these technologies across both domains.

The researchers behind the report emphasize that it provides an independent evidence base and highlights long-term trends essential for informed decision-making around AI. According to the Stanford Institute for Human-Centered Artificial Intelligence, the report is intended to be useful for policymakers, researchers, executives, journalists, and the broader public – especially as AI continues to evolve faster than our ability to measure its impact.

In science, AI has shifted from accelerating individual research steps to attempting full replacement of entire workflows. In medicine, clinical AI tools have moved beyond pilot programs into broader deployment – for example, ambient AI scribes that automatically generate clinical notes are scaling across healthcare systems.

Co-chairs of the AI Index Report, Yolanda Gil and Raymond Perrault, note that generative AI reached nearly 53% population-level adoption within just three years, while organizational adoption rose to 88%. Early estimates also suggest that the consumer value of generative AI grew significantly within a single year. Against this backdrop, the authors set out to show how activity at such scale affects individual sectors.

“The public is also navigating competing signals. Global optimism about AI rose in 2025, but so did nervousness,” Gil and Perrault add.

Supporting partners of the report include Google, OpenAI, Open Philanthropy, Infosys, and others. Analytics and Research Partners listed in the report include GitHub, LinkedIn, Digital Policy Alert, among others.

The AI Index Report consists of nine chapters covering different domains of AI interaction: research and development, technical performance, responsible AI, economy, science, medicine, education, policy and governance, and public opinion.

Key findings: selected highlights from the report

  1. AI development is not slowing down – its capabilities continue to expand rapidly and reach more people. According to the report, over 90% of notable frontier AI models were developed by industry. Some already meet or exceed human performance on PhD-level science questions, multimodal reasoning, and competition mathematics. As the report states: “On a key coding benchmark – SWE-bench Verified – performance rose from 60% to nearly 100% of the human baseline in a single year. Organizational adoption reached 88%, and 4 in 5 university students now use generative AI.”
  2. The U.S. and China have effectively reached parity in AI model performance. According to the report, in February 2025, the Chinese model DeepSeek-R1 briefly matched the top U.S. model, and by March 2026, a model from Anthropic led by just 2.7%. At the same time, the U.S. continues to produce more top-tier models and high-impact patents, while China leads in publication volume, citations, and industrial robot installations. Researchers also highlight South Korea as having the highest AI patent density per capita globally.
  3. The United States leads in the number of data centers, with 5,427 facilities – more than 10 times any other country – and its data centers consume more energy than those of any other nation. However, nearly all advanced AI chips are produced by TSMC in Taiwan, making the global supply chain highly dependent on a single manufacturer.
  4. Despite AI systems being able to win gold medals in competitions, they still struggle with simple tasks like telling time. Researchers refer to this as the “jagged frontier.” As noted in the report: “Gemini Deep Think earned a gold medal at IMO, yet the top model reads analog clocks correctly just 50.1% of the time. AI agents made a leap from 12% to ~66% task success on OSWorld, which tests agents on real computer tasks across operating systems, though they still fail roughly 1 in 3 attempts on structured benchmarks.”
  5. AI adoption is spreading at a historic pace, and users are deriving significant value – often from free tools. As the researchers note: “Generative AI reached 53% population adoption within three years, faster than the PC or the internet, though the pace varies by country and correlates strongly with GDP per capita. Some countries show higher-than-expected adoption, such as Singapore (61%) and the United Arab Emirates (54%), while the U.S. ranks 24th at 28.3%. The estimated value of generative AI tools to U.S. consumers reached $172 billion annually by early 2026, with the median value per user tripling between 2025 and 2026.”
  6. Productivity gains are accompanied by declining entry-level employment. Researchers found productivity increases of 14–26% in areas such as customer support and software development, while effects are weaker or even negative in tasks requiring judgment. AI agent adoption in business remains limited. In software development, where AI impact is most visible, employment among U.S. developers aged 22–25 dropped nearly 20% since 2024, while the number of more experienced developers continues to grow.
  7. People are learning AI skills at all stages of life, but formal education is lagging behind. More than 80% of students in the U.S. use AI for school-related tasks, yet only half of schools have formal AI policies, and just 6% of teachers consider them clear.
  8. There is also a clear divide in how AI’s future is perceived. Experts and the public differ significantly in their expectations, and trust in institutions remains fragmented. As the report states: “When it comes to how people do their jobs, 73% of experts expect a positive impact, compared with just 23% of the public, a 50-point gap. Similar divides appear for AI’s impact on the economy and medical care. Globally, trust in governments to regulate AI varies. Among surveyed countries, the United States reported the lowest level of trust in its own government to regulate AI, at 31%. Globally, the EU is trusted more than the United States or China to regulate AI effectively.”

The full report spans over 400 pages and includes numerous interactive tools and public datasets from the Stanford Institute for Human-Centered Artificial Intelligence, openly accessible on Google Drive. The full report is available via the official link.