Inside AI’s biggest gathering, a surprising admission: No one knows how it works
Despite soaring progress, scientists at AI’s largest gathering say key questions about how models work and how to measure them remain unsolved.
2025/12/09-01:36

Researchers highlight their latest work Monday at the NeurIPS Mechanistic Interpretability Workshop at the San Diego Convention Center. Jared Perlo / NBC News

Over the past week, academics, startup founders and researchers representing industry titans from around the globe descended on sunny San Diego for the top gathering in the field of artificial intelligence.

The Neural Information Processing Systems, or NeurIPS, conference has been held for 39 years, but it drew a record-breaking 26,000 attendees this year, twice as many as just six years ago.

Since its founding in 1987, NeurIPS has been devoted to researching neural networks and the interplay among computation, neurobiology and physics. While neural networks, computational structures inspired by human and animal cognitive systems, were once an esoteric academic fixation, their role underpinning AI systems has transformed NeurIPS from a niche meeting in a Colorado hotel to an event filling the entire San Diego Convention Center, also home to the world-famous Comic-Con.

But even as the gathering boomed along with the AI industry and sessions on hyperspecific topics like AI-created music proliferated, one of the buzziest points of discussion was basic and foundational to the field of AI: the mystery around how frontier systems actually work.

Most, if not all, leading AI researchers and CEOs readily admit that they do not understand how today's leading AI systems function. The pursuit of understanding models' internal structure is called interpretability, given the desire to "interpret" how the models function.

Shriyash Upadhyay, an AI researcher and co-founder of an interpretability-focused company called Martian, said the interpretability field is still in its infancy: "People don't really understand fully what the field is about. There's a lot of ferment in ideas, and people have different agendas."

"In traditional incremental science, for example, where the ideas are mostly settled, scientists might attempt to add an additional decimal point of accuracy of measurement to particular properties of an electron," Upadhyay said.

"With interpretability, we're in the phase of asking: 'What are electrons? Do electrons exist? Are they measurable?' It's the same question with interpretability: We're asking, 'What does it mean to have an interpretable AI system?'" Upadhyay and Martian used the NeurIPS occasion to launch a $1 million prize to boost interpretability efforts.

As the conference unfolded, leading AI companies' interpretability teams signaled new and diverging approaches to understanding how their increasingly advanced systems work.

Early last week, Google's team announced a significant pivot away from attempts to understand every part of a model toward more practical methods focused on real-world impact.

Neel Nanda, one of Google's interpretability leaders, wrote in a statement that "grand goals like near-complete reverse-engineering still feel far out of reach" given that "we want our work to pay off within ~10 years." Nanda highlighted AI's rapid progress and lackluster advancements on the team's previous, more "ambitious reverse-engineering" approach as reasons for the switch.

On the other hand, OpenAI's head of interpretability, Leo Gao, announced Friday and discussed at NeurIPS that he was doubling down on a deeper, more ambitious form of interpretability "to fully understand how neural networks work."

Adam Gleave, an AI researcher and co-founder of the FAR.AI research and education nonprofit, said he was skeptical of the ability to fully understand models' behavior: "I suspect deep-learning models don't have a simple explanation, so it's simply not possible to fully reverse-engineer a large-scale neural network in a way that is comprehensible to a person."

Despite the barriers to fully understanding such complex systems, Gleave said he was hopeful that researchers would still make meaningful progress in understanding how models behave on many levels, which would help researchers and companies create more reliable and trustworthy systems.

"I'm excited by the growing interest in issues of safety and alignment in the machine-learning research community," Gleave told NBC News, though he noted that NeurIPS meetings dedicated to increasing AI capabilities were so large that they "took place in rooms that could double as aircraft hangars."

In addition to uncertainties about how models behave, most researchers are unimpressed with current methods for evaluating and measuring AI systems' capabilities.

"We don't have the measurement tools to measure more complicated concepts and bigger questions about models' general behavior, things like intelligence and reasoning," said Sanmi Koyejo, a professor of computer science and leader of the Trustworthy AI Research Lab at Stanford University.

"Lots of evaluations and benchmarks were built for a different time when researchers were measuring specific downstream tasks," Koyejo said, emphasizing the need for more resources and attention to create new, reliable and meaningful tests for AI systems.

The same questions about what aspects of AI systems should be measured and how to measure them apply to AI models being used for specific scientific domains.

Ziv Bar-Joseph, a professor at Carnegie Mellon University, the founder of GenBio AI and an expert in advanced AI models for biology, said evaluations of biology-specific AI systems are also in their infancy.

"It's extremely, extremely early stages for biology evaluations. Extremely early stages," Bar-Joseph told NBC News. "I think we are still working out what should be the way we evaluate things, let alone what we should even be studying for evaluation."

Despite incremental and halting progress in understanding how cutting-edge AI systems work and how to actually measure their progress, researchers nonetheless see rapid advancements in AI systems` ability to enhance scientific research itself.

"People built bridges before Isaac Newton figured out physics," said Upadhyay, of Martian, pointing out that complete understanding of AI systems is not necessary to unleash significant real-world change.

For the fourth year in a row, researchers organized an offshoot of the main NeurIPS conference to focus on the latest AI methods to boost scientific discovery. One of the event's organizers, Ada Fang, a Ph.D. student studying the intersection of AI and chemistry at Harvard, said this year's NeurIPS edition was "a great success."

"Frontier research in AI for science is happening separately in areas of biology to materials, chemistry and physics, yet the underlying challenges and ideas are deeply shared," Fang told NBC News. "Our goal was to create a space where researchers could discuss not only the breakthroughs, but also the reach and limits of AI for science."

Jeff Clune, a pioneer in the use of AI for science and a computer science professor at the University of British Columbia in Vancouver, said in a panel discussion that the field is quickly accelerating.

"The amount of emails, contacts and people who are stopping me at NeurIPS and want to talk about creating AI that can go learn, discover and innovate for science, the interest level is through the roof," Clune said. "I have been absolutely blown away by the change.

"I'm looking out in the crowd here and seeing people who were there 10 years ago or 20 years ago, in the wilderness with me, when nobody cared about this issue. And now, it seems like the world cares," he added. "It's just heartwarming to see that AI works well enough, and now there's enough interest, to want to go tackle some of the most important and pressing problems for human well-being."

© 2025 Ansar Press