Artificial Intelligence: Urgent Intervention Needed to Mitigate Extinction-Level Threat, Warns State Department Report

An alarming report from the US State Department highlights the grave national security risks associated with the rapid advancement of artificial intelligence, emphasizing the critical need for immediate government action to prevent a potential catastrophe.

The report's findings are based on interviews conducted with over 200 individuals from various sectors, including executives from prominent AI companies, cybersecurity experts, specialists in weapons of mass destruction, and government officials involved in national security. These interviews spanned a period of more than a year to gather comprehensive insights.

Gladstone AI recently released a report stating that the most advanced AI systems could potentially pose a threat to humanity.

A US State Department official told CNN that the department commissioned the report to assess how AI aligns with the goal of protecting US interests. The report does not, however, represent the official views of the US government.

The report's warning is a stark reminder that alongside the potential that excites investors and the public, AI also carries real dangers.

Jeremie Harris, CEO and co-founder of Gladstone AI, told CNN that AI is already an economically transformative technology, one that could help cure diseases, drive scientific breakthroughs, and overcome challenges once thought insurmountable.

Harris emphasized the importance of being aware of the serious risks that could come with AI advancements, including catastrophic outcomes. He mentioned that there is a growing body of evidence, such as research and analysis presented at top AI conferences, indicating that once AIs reach a certain level of capability, they may become uncontrollable.

White House spokesperson Robyn Patterson praised President Joe Biden's executive order on AI as the most significant action taken by any government worldwide to harness the potential benefits and address the potential risks of artificial intelligence.

Patterson emphasized that the President and Vice President are committed to collaborating with global allies and pushing for bipartisan legislation to address the potential dangers of these new technologies. She also stressed the importance of acting promptly, given the clear and urgent need for intervention.

Researchers have highlighted two main risks associated with AI technology.

Firstly, there is a concern that highly advanced AI systems could be used as weapons, causing serious and possibly irreversible harm. Secondly, within AI labs, there are worries about the possibility of losing control over the systems being developed, which could have catastrophic effects on global security.

The Gladstone AI report warns that the rise of AI, and of AGI in particular, could destabilize global security in ways reminiscent of the introduction of nuclear weapons, raising the risk of an AI arms race, conflict, and catastrophic accidents on the scale of weapons of mass destruction.

To address this threat, Gladstone AI recommends sweeping measures, including establishing a new AI agency, imposing emergency regulatory safeguards, and capping the amount of computing power used to train AI models.

“There is a clear and urgent need for the US government to intervene,” the authors wrote in the report.

Safety concerns

Harris said the team had an exceptional level of access to officials in both the public and private sectors, which led to some surprising findings. Gladstone AI held discussions with technical and leadership teams at companies including OpenAI (maker of ChatGPT), Google DeepMind, Meta (Facebook's parent company), and Anthropic.

In a video posted on the Gladstone AI website to introduce the report, Harris said the team came across concerning revelations during those conversations: behind the scenes, the safety and security measures in advanced AI appeared strikingly inadequate relative to the national security risks AI may soon pose.

Gladstone AI's report highlighted that companies are rushing to develop AI faster due to competition, which could compromise safety and security. This raises concerns that highly advanced AI systems could potentially be used against the United States, either by being stolen or weaponized.

These findings contribute to the increasing number of warnings regarding the existential threats associated with AI, including alerts from influential figures within the industry.

Nearly a year ago, Geoffrey Hinton, often called the "Godfather of AI," left his position at Google and raised concerns about the very technology he played a key role in developing.

Hinton has said he believes there is a 10% chance that AI will lead to human extinction within the next three decades.

In June of last year, Hinton joined dozens of other AI industry leaders, academics, and public figures in signing a statement calling for the mitigation of extinction-level risks from AI to be made a global priority.

Business leaders are becoming more worried about the potential dangers of AI, despite investing billions of dollars in it. At the Yale CEO Summit, 42% of CEOs surveyed expressed their concerns that AI could pose a threat to humanity within the next five to ten years.


Gladstone AI's report highlighted key figures like Elon Musk, FTC Chair Lina Khan, and a former OpenAI executive who have raised concerns about the dangers of AI.

According to Gladstone AI, there are employees within AI companies who are expressing similar worries in private conversations.

According to the report, an individual at a well-known AI lab expressed concern about the potential dangers of releasing a particular next-generation AI model as open access, arguing that the model's persuasive capabilities could pose a serious threat to democracy if exploited for election interference or voter manipulation.

Gladstone asked AI experts at frontier labs for their personal estimates of the probability that an AI incident could cause "global and irreversible effects" in 2024. The estimates ranged from 4% to as high as 20%, according to the report, which notes that they were informal and likely subject to significant bias.

AI is constantly evolving, with one of the biggest uncertainties being the development of AGI. AGI is a type of AI that has the potential to learn like humans, or even surpass human abilities.

According to the report, AGI is seen as the primary driver of catastrophic risk because of the possibility of losing control over it. Public statements from OpenAI, Google DeepMind, Anthropic, and Nvidia suggest AGI could arrive by 2028, though other experts believe it remains much further away.

Gladstone AI points out that this disagreement over AGI timelines makes it difficult to craft effective policies and safeguards, and that regulation could prove counterproductive if AGI progresses more slowly than anticipated.

Editor's P/S:

The report commissioned by the US State Department highlights the urgent risks posed by the advancement of artificial intelligence (AI), particularly in the context of national security. The findings emphasize the need for prompt government intervention to prevent potential disasters. Interviews with experts reveal concerns about the potential for advanced AI systems to be weaponized or to spiral out of control, leading to catastrophic outcomes.

The report also raises concerns about the pace of AI development, driven by competition among companies. This rush to develop AI faster could compromise safety and security, increasing the risk of advanced AI systems being used against the United States. The report calls for a new AI agency, emergency regulatory measures, and limits on the computing power used to train AI models. It also emphasizes the importance of international collaboration, bipartisan legislation, and swift action to ensure the responsible development and use of AI technology.