Washington, DC, May 5
The Trump administration is reportedly weighing formal government supervision of next-generation artificial intelligence models following the emergence of highly advanced systems, according to a report by The New York Times.
The White House is contemplating an executive order to establish an AI working group. This body would be tasked with investigating how such "official government oversight" might be implemented effectively to manage the risks associated with rapid technological leaps.
According to The New York Times, the proposed working group would likely consist of industry leaders and government representatives. Their primary role would be to create evaluation protocols, enabling the group to "review AI models before they are released" to the general public.
The administration has already held preliminary talks regarding this oversight mechanism with top executives from major tech firms, including Google, Anthropic, and OpenAI.
This development signifies a potential pivot from President Donald Trump's previous "hands-off approach" to the sector. Some White House officials are reportedly pushing for a framework that secures "first access" for the US government to these advanced models without necessarily halting their commercial launch.
The shift comes despite President Trump's earlier staunch advocacy for the industry's unhindered growth. In July last year, he remarked, "We're going to make this industry absolutely the top, because right now it's a beautiful baby that's born. We can't stop it. We can't stop it with politics. We can't stop it with foolish rules and even stupid rules."
This sentiment was previously echoed by US Vice President JD Vance, who warned that "excessive regulation of the AI sector could kill a transformative industry just as it's taking off."
However, the administration has encountered friction regarding national security and AI. Earlier this year, the United States Department of War designated Anthropic as a supply chain risk after the company declined to grant "unrestricted access" to its models for military purposes. Consequently, the Department of War entered into an agreement with OpenAI.
The current push for oversight intensified following Anthropic's unveiling of Claude Mythos. The model is reputedly so sophisticated that it could potentially identify and "exploit cybersecurity flaws" that have remained undetected by human experts.
While Anthropic has limited the availability of Mythos to a restricted circle of firms, the administration is reportedly discussing the broader integration of the model across various government agencies.
- ANI