Meta Platforms Inc. has announced that it will permit U.S. government agencies and contractors engaged in national security to use its AI models for military purposes. This decision marks a significant departure from the company’s previous policy, which explicitly prohibited the use of its technology for military and warfare-related efforts.
Expansion of Llama AI Models
Meta revealed that its AI models, known as Llama, will be made available to federal agencies. The company is also collaborating with defense contractors such as Lockheed Martin and Booz Allen, alongside defense-oriented tech firms like Palantir and Anduril. Notably, the Llama models are open-source, allowing other developers, companies, and governments to freely copy and distribute the technology.
This move carves out an exception to Meta’s acceptable use policy, which had barred the use of its AI software in military, warfare, and nuclear applications, among other areas. In a blog post, Nick Clegg, Meta’s President of Global Affairs, emphasized the company’s support for “responsible and ethical uses” of the technology that align with U.S. and democratic values in the global race for AI supremacy.
“Meta wants to play its part to support the safety, security, and economic prosperity of America—and of its closest allies too,” Clegg wrote. He added that the widespread adoption of American open-source AI models serves both economic and security interests. A Meta spokesperson further confirmed that the technology would be shared with members of the Five Eyes intelligence alliance, which includes Canada, Britain, Australia, and New Zealand, in addition to the United States.
Meta, which owns Facebook, Instagram, and WhatsApp, has been working to disseminate its AI software to as many third-party developers as possible. This strategy aims to compete with industry giants like OpenAI, Microsoft, Google, and Anthropic. To catch up with these rivals, Meta decided to open-source its code, leading to over 350 million downloads of its software since August.
Meta is likely to face scrutiny for this decision. The military applications of Silicon Valley tech products have sparked controversy in recent years, with employees at Microsoft, Google, and Amazon protesting deals with military contractors and defense agencies. Meta’s open-source approach to AI has also drawn criticism over the potential for misuse. While OpenAI and Google argue that their AI technology is too powerful to release widely, Meta maintains that openness and widespread examination improve and help safeguard AI.
Meta’s executives have voiced concerns that the U.S. government and others could impose harsh regulations on open-source AI. Those fears intensified after Reuters reported that Chinese research institutions with ties to the People’s Liberation Army had used Llama to build a defense-oriented tool known as ChatBIT. Meta said that this use was unauthorized and contrary to its policy.
In his blog post, Clegg highlighted that the U.S. government could use Meta’s technology to track terrorist activities and enhance cybersecurity across American institutions. He stressed that employing Meta’s AI models would help the United States maintain its technological edge while fostering responsible and ethical innovations that support strategic and geopolitical interests.
Meta’s decision to permit military use of its AI models represents a notable reversal of its earlier stance. While the move aims to bolster national security and preserve American technological leadership, it also raises important ethical and regulatory questions about the role of AI in defense applications.