News
Anthropic and the federal government will be checking to make sure you're not trying to build a nuclear bomb with Claude's ...
The Register on MSN · 3d
Anthropic scanning Claude chats for queries about DIY nukes for some reason
Because savvy terrorists always use public internet services to plan their mischief, right? Anthropic says it has scanned an ...
Anthropic, an Artificial Intelligence (AI) start-up backed by Amazon and Google, has developed a new tool to stop its chatbot ...
As part of its ongoing work with the National Nuclear Security Administration, the small but critical agency charged with ...
With the US government’s help, Anthropic built a tool designed to prevent its AI models from being used to make nuclear weapons.
Claude AI of Anthropic now prohibits chats about nuclear and chemical weapons, reflecting the company's commitment to safety ...
The GSA is leveraging the State Department's “privacy-preserving” API for passport records to compare passport photos submitted to Login.gov.
Anthropic, in collaboration with the US government, has created an AI-powered classifier that detects and blocks nuclear weapons-related queries, aiming to prevent AI misuse in national security ...
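Anthropic has not published the classifier's implementation, and its real system is a machine-learned model developed with the NNSA. Purely as an illustration of the general shape of such a filter — score the incoming query, block above a threshold — a toy keyword-weighted sketch might look like this (all terms, weights, and function names below are hypothetical):

```python
# Toy illustration only: Anthropic's actual classifier is a proprietary
# ML model. This keyword-scoring stand-in just shows the broad pattern
# of "score the query, refuse above a threshold" content filtering.
RISK_TERMS = {"enrichment": 2, "centrifuge": 2, "fissile": 3, "warhead": 3}
THRESHOLD = 3  # hypothetical cutoff

def risk_score(query: str) -> int:
    """Sum the weights of risk terms appearing in the query."""
    words = set(query.lower().split())
    return sum(weight for term, weight in RISK_TERMS.items() if term in words)

def should_block(query: str) -> bool:
    """Decide whether the query crosses the blocking threshold."""
    return risk_score(query) >= THRESHOLD
```

A real deployment would use a trained model with far richer features than keyword presence, precisely because simple keyword lists produce both false positives (legitimate research questions) and false negatives (paraphrased queries).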
However, Anthropic also backtracks on its blanket ban on generating all types of lobbying or campaign content to allow for ...
PCMag Australia on MSN · 9d
Anthropic's Claude Clamps Down on Biological and Nuclear Weapon Risks
Though we fortunately haven't seen any examples in the wild yet, many academic studies have demonstrated it may be possible ...
In other words, Anthropic and Palantir may not have handed the AI chatbot the nuclear codes — but it will now have access to some spicy intel. It also lands Anthropic in ethically murky company.