AGI News February – March 2025

News

Feature Post: Sam Altman
Altman is CEO of OpenAI, a leading frontier company in the AGI field.

Altman is considered to be one of the leading figures of the AI boom. He dropped out of Stanford University after two years and founded Loopt, a mobile social networking service, raising more than $30 million in venture capital. In 2011, Altman joined Y Combinator, a startup accelerator, and was its president from 2014 to 2019. Altman’s net worth was estimated at $1.1 billion in January 2025.

  • Throughout the month, we will be adding to this post articles, images, livestreams, and videos about the latest AGI news, issues, and developments (select the News tab).
  • You can also participate in discussions in all AGI onAir posts as well as share your top news items and posts (for onAir members – it’s free to join).
Guardrails for AI
DiamantAI, March 25, 2025

What Are Guardrails and Why Do We Need Them?
Guardrails are the safety measures we build around AI systems – the rules, filters, and guiding hands that ensure our clever text-generating models behave ethically, stay factual and respect boundaries. Just as we wouldn’t let a child wander alone on a busy street, we shouldn’t deploy powerful AI models without protective barriers.

The need for guardrails stems from several inherent challenges with large language models:

  • The hallucination problem
  • The bias echo chamber
  • The helpful genie problem
  • The accidental leaker

How Guardrails Work in Practice
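
Most guardrail frameworks follow the same layered pattern: validate the input before it reaches the model, then filter the output before it reaches the user. The minimal Python sketch below illustrates that pattern; everything in it (the call_model stand-in, the policy list, the regex) is a hypothetical simplification, not DiamantAI's actual implementation.

```python
import re

# Hypothetical stand-in for any LLM completion call; swap in a real client.
def call_model(prompt: str) -> str:
    return f"Model response to: {prompt}"

# Example policy list and PII pattern; real systems use richer classifiers.
BLOCKED_TOPICS = ["build a weapon", "bypass security"]
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def input_guardrail(prompt: str) -> str | None:
    """Refuse prompts that match a blocked topic before calling the model."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that request."
    return None  # input is acceptable

def output_guardrail(text: str) -> str:
    """Redact email addresses so the model can't leak them downstream."""
    return EMAIL_RE.sub("[redacted email]", text)

def guarded_completion(prompt: str) -> str:
    refusal = input_guardrail(prompt)
    if refusal is not None:
        return refusal
    return output_guardrail(call_model(prompt))

if __name__ == "__main__":
    print(guarded_completion("How do I bypass security on a phone?"))
    print(guarded_completion("Draft a reply to jane.doe@example.com"))
```

Production guardrail stacks layer many such checks around the same call path: toxicity classifiers, grounding against retrieved sources, and topic fences rather than simple keyword lists.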

Agents are here, but a world with AGI is still hard to imagine
AI Supremacy, Michael Spencer and Harry Law, March 27, 2025

We start off with a simple question: will agents lead us to AGI? OpenAI conceptualized agents as stage 3 of its five-stage path to AGI. By that yardstick, agents in 2025 are barely functional.

Since ChatGPT launched nearly 2.5 years ago, we haven’t really seen a killer app emerge outside of DeepSeek. It’s hard to know what to make of Manus AI: part Claude wrapper, but also an incredible UX with Qwen reasoning integration. Manus AI has offices in Beijing and Wuhan and is part of Beijing Butterfly Effect Technology. The startup is Tencent-backed, and with such deep Qwen integration you have to imagine Alibaba might end up acquiring it.

Today, technology and AI historian Harry Law of Learning From Examples explores the awkward stage we are at, halfway between reasoning models and agents. The idea that agents will lead to AGI is also quite baffling. You might also want to read some of the community’s articles on Manus AI: but will “unfathomable geniuses” really escape today’s frontier models, suddenly appearing like sentient boogeymen saluting us in their made-up languages?

The Government Knows AGI is Coming
The Ezra Klein Show, March 4, 2025 (01:03:00)

Artificial general intelligence — an A.I. system that can beat humans at almost any cognitive task — is arriving in just a couple of years. That’s what people tell me — people who work in A.I. labs, researchers who follow their work, former White House officials. A lot of these people have been calling me over the last couple of months trying to convey the urgency. This is coming during President Trump’s term, they tell me. We’re not ready.

One of the people who reached out to me was Ben Buchanan, the top adviser on A.I. in the Biden White House. And I thought it would be interesting to have him on the show for a couple reasons: He’s not connected to an A.I. lab, and he was at the nerve center of policymaking on A.I. for years. So what does he see coming? What keeps him up at night? And what does he think the Trump administration needs to do to get ready for the A.G.I. – or something like A.G.I. – he believes is right on the horizon?

Biden’s AI legacy: A headache for Europe and the tech industry
Digital Future Daily, Daniella Cheslow, March 27, 2025

In Trump’s Washington, Europe’s tech regulation is a regular object of scorn. But there is one piece of American tech policy that’s united European diplomats and U.S. industry: A rule issued in President Joe Biden’s final days in office that sorted the world into three tiers for AI chip export, with more than half of Europe left off the top rung.

Under the Framework for AI Diffusion, 17 EU countries were designated Tier 2, setting caps on their access to chips needed to train AI, while the rest of Europe was set for Tier 1, with no import restrictions. Countries listed in the second tier are treating it as a scarlet letter.

“We’re going around town trying to explain that we have no idea why we ended up in Tier 2,” said one European diplomat, granted anonymity to discuss sensitive talks. “If this has to do with cooperation with the U.S. on security, we are NATO allies, we are more than willing.”

Reshuffle: Who wins when AI restacks the knowledge economy (book)
Platforms, AI, and the Economics of BigTech, Sangeet Paul Choudary, March 16, 2025

Beyond AI hype and fearmongering
The AI debate is polarized today. Technologists with Altman-esque delusions hype new tools and the impending arrival of AGI. Policymakers disconnected from ‘why this time is really different’ cling to outdated frameworks to debate job losses. Businesses, caught in between, are confused as they struggle to distinguish AI snake-oil from the real deal.

Taking either of these opposing views makes no sense. And yet, they get the memetic spread that any polarising debate will.

Reshuffle grounds these discussions on some core principles and cuts through the noise. It provides a framework to understand the fundamental nature of AI systems – and their impact on economic interactions.

Reshuffle grounds itself in a few core issues, including:

  • the tension that workers have always had with tools,
  • the tug-of-war that tool providers have with their customers,
  • the nature of workflows and their impact on organization design, and eventually, the division of work, value, and power across an ecosystem,
  • the importance of knowledge management in making organizations – and more importantly – ecosystems function,
  • fundamentally new ways to build companies in an economy that is revisiting basic assumptions on knowledge work.

Building an AI Agent with Memory and Adaptability
DiamantAI, Nir Diamant, March 20, 2025

This blog post is a tutorial based on, and a simplified version of, the course “Long-Term Agentic Memory With LangGraph” by Harrison Chase and Andrew Ng on DeepLearning.AI.

Conclusion

We’ve now built an email agent that’s far more than a simple script. Like a skilled human assistant who grows more valuable over time, our agent builds a multi-faceted memory system:

  1. Semantic Memory: A knowledge base of facts about your work context, contacts, and preferences
  2. Episodic Memory: A collection of specific examples that guide decision-making through pattern recognition
  3. Procedural Memory: The ability to improve its own processes based on feedback and experience

This agent demonstrates how combining different types of memory creates an assistant that actually learns from interactions and gets better over time.

Imagine coming back from a two-week vacation to find that your AI assistant has not only kept your inbox under control but has done so in a way that reflects your priorities and communication style. The spam is gone, the urgent matters were flagged appropriately, and routine responses were handled so well that recipients didn’t even realize they were talking to an AI. That’s the power of memory-enhanced agents.

This is just a starting point! You can extend this agent with more sophisticated tools, persistent storage for long-term memory, fine-grained feedback mechanisms, and even collaborative capabilities that let multiple agents share knowledge while maintaining privacy boundaries.
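
To make those three memory types concrete, here is a toy, self-contained Python sketch. It is not the LangGraph implementation from the course; the MemoryAgent class, its keyword-overlap matching, and the example data are all hypothetical simplifications of the ideas above.

```python
from dataclasses import dataclass, field

# Toy illustration of semantic, episodic, and procedural memory.
# Hypothetical names and logic; not the course's LangGraph code.

@dataclass
class MemoryAgent:
    # Semantic memory: facts about contacts and preferences.
    semantic: dict = field(default_factory=dict)
    # Episodic memory: (past_email, decision) examples to imitate.
    episodic: list = field(default_factory=list)
    # Procedural memory: standing instructions that evolve with feedback.
    procedural: str = "Flag urgent emails; archive newsletters."

    def triage(self, email: str) -> str:
        """Pick a decision by matching against past examples (episodic)."""
        words = set(email.lower().split())
        for past_email, decision in self.episodic:
            if words & set(past_email.lower().split()):
                return decision
        return "needs-review"  # no similar episode remembered

    def record_feedback(self, email: str, decision: str, note: str = "") -> None:
        """Store the corrected example and fold notes into the instructions."""
        self.episodic.append((email, decision))
        if note:
            self.procedural += " " + note

agent = MemoryAgent(semantic={"boss": "jane@corp.example"})
agent.record_feedback("Quarterly report due Friday", "flag-urgent",
                      note="Always flag anything about the quarterly report.")
print(agent.triage("Reminder: quarterly report deadline"))  # -> flag-urgent
print(agent.procedural)
```

In a production agent, the dict and list would be backed by persistent storage with embedding-based similarity search rather than keyword overlap, which is what makes the memory useful across sessions.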

The AI ‘arms race’ fallacy: The future of globalization in the age of AI
Platforms, AI, and the Economics of BigTech, Sangeet Paul Choudary, February 5, 2025

Marc Andreessen calls DeepSeek AI’s Sputnik moment.

Yes, it did catch the US unprepared but that’s where the ‘space race’ analogy ends.

Today’s AI race is not merely an ‘arms race’ nor is DeepSeek easily explained away as just a Sputnik moment.

This race is playing out against the larger backdrop of more than a decade of technology infrastructure exports by the largest economies around the world – whether it is India’s export of digital public infrastructure, cloud exports by US BigTech, or China’s Digital Silk Road working alongside its Belt and Road project.

And that’s what makes this truly interesting!

This combination of tech infrastructure exports and leverage through complementary AI capabilities creates a new format of globalization – standards-based globalization – something that most people don’t yet fully understand.

What’s next for robots
MIT Technology Review, James O’Donnell, January 23, 2025

With tests of humanoid bots and new developments in military applications, the year ahead will intrigue even the skeptics.

Humanoids are put to the test
The race to build humanoid robots is motivated by the idea that the world is set up for the human form, and that automating that form could mean a seismic shift for robotics. It is led by some particularly outspoken and optimistic entrepreneurs, including Brett Adcock, the founder of Figure AI, a company making such robots that’s valued at more than $2.6 billion (it’s begun testing its robots with BMW). Adcock recently told Time, “Eventually, physical labor will be optional.” Elon Musk, whose company Tesla is building a version called Optimus, has said humanoid robots will create “a future where there is no poverty.” A robotics company called Eliza Wakes Up is taking preorders for a $420,000 humanoid called, yes, Eliza.

A smarter brain gets a smarter body
Plenty of progress in robotics has to do with improving the way a robot senses and plans what to do—its “brain,” in other words. Those advancements can often happen faster than those that improve a robot’s “body,” which determines how well it can move through the physical world, especially in environments that are more chaotic and unpredictable than controlled assembly lines.

George Mason University Unveils AI2Nexus

At last week’s Board of Visitors meeting, George Mason University’s Vice President and Chief AI Officer Amarda Shehu rolled out a new model for universities to advance a responsible approach to harnessing artificial intelligence (AI) and drive societal impact. George Mason’s model, called AI2Nexus, is building a nexus of collaboration and resources on campus, throughout the region with our vast partnerships, and across the state.

AI2Nexus is based on four key principles: “Integrating AI” to transform education, research, and operations; “Inspiring with AI” to advance higher education and learning for the future workforce; “Innovating with AI” to lead in responsible AI-enabled discovery and advancements across disciplines; and “Impacting with AI” to drive partnerships and community engagement for societal adoption and change.

Shehu said George Mason can harness its own ecosystem of AI teaching, cutting-edge research, partnerships, and incubators for entrepreneurs to establish a virtuous cycle between foundational and user-inspired AI research within ethical frameworks.

As part of this effort, the university’s AI Task Force, established by President Gregory Washington last year, has developed new guidelines to help the university navigate the rapidly evolving landscape of AI technologies, which are available at gmu.edu/ai-guidelines.

Further, Information Technology Services (ITS) will roll out the NebulaONE academic platform, equipping every student, staff, and faculty member with access to hundreds of cutting-edge generative AI models to support access, performance, and data protection at scale.

“We are anticipating that AI integration will allow us to begin to evaluate and automate some routine processes, reducing administrative burdens and freeing up resources for mission-critical activities,” added Charmaine Madison, George Mason’s vice president of information services and CIO.

George Mason is already equipping students with AI skills as a leader in developing AI-ready talent and new ideas for critical sectors like cybersecurity, public health, and government. In the classroom, the university is developing courses and curricula to better prepare our students for a rapidly changing world.

In spring 2025, the university launched a cross-disciplinary graduate course, AI: Ethics, Policy, and Society, and in fall 2025, the university is debuting a new undergraduate course open to all students, AI4All: Understanding and Building Artificial Intelligence. A master’s in computer science and machine learning, an Ethics and AI minor for undergraduates of all majors, and a Responsible AI Graduate Certificate are more examples of Mason’s mission to innovate AI education. New academies are also in development, and the goal is to build an infrastructure of more than 100 active core AI and AI-related courses across George Mason’s colleges and programs.

The university will continue to host workshops, conferences, and public forums to shape the discourse on AI ethics and governance while forging deep and meaningful partnerships with industry, government, and community organizations to offer academies to teach and codevelop technologies that meet our global society’s needs. The State Council of Higher Education for Virginia (SCHEV) will partner with the university to host an invite-only George Mason-SCHEV AI in Education Summit on May 20-21 on the Fairfax Campus.

Virginia Governor Glenn Youngkin has appointed Jamil N. Jaffer, the founder and executive director of the National Security Institute (NSI) at George Mason’s Antonin Scalia Law School, to the Commonwealth’s new AI Task Force, which will work with legislators to regulate rapidly advancing AI technology.

The university’s AI-in-Government Council is a trusted resource for academia, public-sector tech providers, and government, advancing AI approaches, governance frameworks, and robust guardrails to guide AI development and deployment in government.

Learn more about George Mason’s AI work underway at gmu.edu/AI.

World Futures Day 2025: Join the 24-hour global conversation shaping our future
Other, Mara Di Berardo, February 26, 2025

Every year on March 1st, World Futures Day (WFD) brings together people from around the globe to engage in a continuous conversation about the future. What began as an experimental open dialogue in 2014 has grown into a cornerstone event for futurists, thought leaders, and citizens interested in envisioning a better tomorrow. WFD 2025 will mark the twelfth edition of the event.

WFD is a 24-hour, round-the-world global conversation about possible futures and represents a new kind of participatory futures method (Di Berardo, 2022). Futures Day on March 1 was proposed by the World Transhumanist Association, now Humanity+, in 2012 to celebrate the future. Two years later, The Millennium Project launched WFD as a 24-hour worldwide conversation for futurists and the public, providing an open space for discussion. In 2021, UNESCO established its own World Futures Day on December 2. However, The Millennium Project and its partners continue to observe March 1 due to its historical significance, its positive reception from the futures community, and the value of multiple celebrations in maintaining focus on future-oriented discussions.

The Singularity: The Future of the West? Do We Cross the Threshold?
The One Percent Rule, Colin W.P. Lewis, March 1, 2025

Reading Karp and Zamiska (Zami) prompted me to think about the Singularity again. Regardless of how we look at it, AI is increasing its capabilities at a rapid pace, far beyond what the public realize. Soon we will have increasingly advanced iterations of AI: think of AI-plus, then AI-plus-plus, then AI-plus-plus-plus, and then what awaits us when intelligence surpasses its creators? This is also articulated by two of the CEOs of leading AI labs, Demis Hassabis and Dario Amodei, in this conversation.

It is not the stuff of distant myth or idle speculation. This is our proximate future, a trajectory set in motion by the relentless march of accelerating computation and recursive self-improvement. The singularity, so named by John von Neumann before being elaborated upon by I. J. Good, Vernor Vinge, and Ray Kurzweil, is no longer a concept confined to speculative fiction or Silicon Valley techno-utopianism. It is an inevitable force, steadily reshaping our institutions, our identities, and our very notion of control. As Good himself put it,

“the first ultraintelligent machine is the last invention that man need ever make.”

Do we NEED International Collaboration for Safe AGI?
Imagination in Action, February 14, 2025 (46:34)

AI visionaries Max Tegmark, Demis Hassabis, Yoshua Bengio, Dawn Song, and Ya-Qin Zhang take on this question. In this engaging conversation, the experts unpack the distinctions between narrow AI, AGI, and superintelligence while exploring how international collaboration can accelerate breakthroughs and mitigate risks. Learn why agentic systems pose unique challenges, how global partnerships—from academia to government—can safeguard our future, and what collaborative frameworks might ensure AI benefits all of humanity. Whether you’re an AI enthusiast, researcher, or policymaker, this discussion offers valuable insights into building a safer, more united AI landscape.

The End of Programming as We Know It
O’Reilly website, Tim O’Reilly, January 5, 2025

Learning by doing

AI will not replace programmers, but it will transform their jobs. Eventually much of what programmers do today may be as obsolete (for everyone but embedded system programmers) as the old skill of debugging with an oscilloscope. Master programmer and prescient tech observer Steve Yegge observes that it is not junior and mid-level programmers who will be replaced but those who cling to the past rather than embracing the new programming tools and paradigms. Those who acquire or invent the new skills will be in high demand. Junior developers who master the tools of AI will be able to outperform senior programmers who don’t. Yegge calls it “The Death of the Stubborn Developer.”

My ideas are shaped not only by my own past 40+ years of experience in the computer industry and the observations of developers like Yegge but also by the work of economic historian James Bessen, who studied how the first Industrial Revolution played out in the textile mills of Lowell, Massachusetts during the early 1800s. As skilled crafters were replaced by machines operated by “unskilled” labor, human wages were indeed depressed. But Bessen noticed something peculiar by comparing the wage records of workers in the new industrial mills with those of the former home-based crafters. It took just about as long for an apprentice craftsman to reach the full wages of a skilled journeyman as it did for one of the new entry-level unskilled factory workers to reach full pay and productivity. The workers in both regimes were actually skilled workers. But they had different kinds of skills.

AGI by 2030? Sam Altman’s Vision for the Future of AI
AI Odyssey Quest, February 15, 2025

In an exclusive conversation, OpenAI CEO Sam Altman shares his vision for the future of Artificial General Intelligence (AGI) by 2030. With AGI set to redefine technology, society, and ethics, Altman discusses the breakthroughs required to achieve human-like AI and the challenges we must overcome.

Key Insights:

  • The technological and philosophical challenges of AGI
  • Breakthroughs in machine learning, neural networks, and cognitive modeling
  • The ethical dilemmas and risks of advanced AI systems
  • How AGI can solve complex global challenges
  • Will AGI be a revolutionary force for good, or does it pose existential risks?

A quantum playbook for Trump
Politico, Christine Mui, February 24, 2025

As Elon Musk’s “Department of Government Efficiency” rampages through the federal bureaucracy, demanding staff and budget cuts, one tech industry that counts on the government for fundamental research and funding is projecting confidence.

Quantum has come a long way since it caught the interest of civilian, defense and intelligence agencies in the 1990s as a theoretical, ill-understood future paradigm changer.

Since then, quantum computers have grown steadily larger and more functional — though still very much in the realm of experimental science. Last week, Microsoft CEO Satya Nadella showed off a new palm-sized quantum chip that he proclaimed was the physical representation of the company’s 20-year pursuit of creating “an entirely new state of matter” and would lead to a truly meaningful quantum computer in years instead of decades.

GPT-4.5 could be the last of its kind
Axios AI+, Ina Fried, February 28, 2025

GPT-4.5, OpenAI’s big new model, represents a significant step forward for AI’s industry leader. It could also be the end of an era.

The big picture: 4.5 is “a giant, expensive model,” as OpenAI CEO Sam Altman put it. The company has also described it as “our last non-chain-of-thought model,” meaning — unlike the newer “reasoning” models — it doesn’t take its time to respond or share its “thinking” process.

Why it matters: The pure bigger-is-better approach to model pre-training now faces enormous costs, dwindling availability of good data and diminishing returns — which is why the industry has begun exploring different roads to continuing to improve each new AI generation.

Between the lines: Building and powering the massive data centers required to build and run the latest models has become an enormous burden, while assembling ever-bigger datasets has become challenging, since today’s models already use nearly all the data available on the public internet.

If AGI Means Everything People Do… What is it That People Do?
Am I Stronger Yet?, Steve Newman, February 27, 2025

And Why Are Today’s “PhD” AIs So Hard To Apply To Everyday Tasks?

There’s a huge disconnect between AI performance on benchmark tests and its applicability to real-world tasks. It’s not that the tests are wrong; it’s that they only measure things that are easy to measure. People go around arguing that AIs which can do everything people can do may arrive as soon as next year. And yet, no one in the AI community has bothered to characterize what people actually do!

The failure to describe, let alone measure, the breadth of human capabilities undermines all forecasts of AI progress. Our understanding of how the world functions is calibrated against the scale of human capabilities. Any hope of reasoning about the future depends on understanding how models will measure up against capabilities that aren’t on any benchmark, aren’t in any training data, and could turn out to require entirely new approaches to AI.

I’ve been consulting with experts from leading labs, universities and companies to begin mapping the territory of human ability. The following writeup, while just a beginning, benefits from an extended discussion which included a senior staff member at a leading lab, an economist at a major research university, an AI agents startup founder, a senior staffer at a benchmarks and evals organization, a VC investing heavily in AI, the head of an economic opportunity nonprofit, and a senior technologist at a big 5 tech company.

What Kind of Revolution is AI? We Are Recapitulating the Past
Facing the Future, Dana F. Blankenhorn, February 27, 2025

The real AI revolution will not be televised. It will only begin in mid-year, when the $3,000 Nvidia device Jensen Huang calls “Project Digits” ships. While the last two years have recapitulated the first computer revolution, the next years will recapitulate the PC revolution.

And when the applications are in place, the second Internet revolution will commence.

Will AI Make Us More or Less Wise?
AI Supremacy, Michael Spencer and Chad Woodford

How to cultivate discernment in the Intelligence Age, and the importance of truth for a healthy society

We are more than Biological Machines

In the 21st century we may require an AI not simply designed to “augment” ourselves or be more “productive” as economic agents in society, but also to enable us to develop things like wisdom, discernment and clarity in the synthesis of the sum total of our human experiences.

Can AI be used to bridge the gap to discover new forms of meaning, connection, and enlightenment?

  • What might a Vedic cosmologist say about AI? Chad covers a lot of angles here today, and it’s a privilege to have a philosophical writer enter this debate to enrich the awareness of our readers.

Discuss

OnAir membership is required. The lead Moderator for the discussions is AGI Policy. We encourage civil, honest, and safe discourse. For more information on commenting and giving feedback, see our Comment Guidelines.

This is an open discussion on the contents of this post.
