Goodbye OpenAI
I have canceled my personal subscriptions to ChatGPT and committed to not using Codex or ChatGPT at work going forward. I cannot continue to support a company that shows such disregard for my values and the values of its employees.
Last night, Sam Altman, the CEO of OpenAI, announced that OpenAI had entered into an agreement with the Department of Defense for the use of its models:
Tonight, we reached an agreement with the Department of War [sic] to deploy our models in their classified network.
In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.
Here is a partial timeline of recent events leading up to this announcement, starting with Anthropic’s refusal to agree to the Department of Defense’s demands that its models be used for domestic surveillance and autonomous weapons: https://anthropic-timeline.vercel.app/
However, it is missing two important events. First, three hours before this announcement, a group of OpenAI and Google employees voiced their public support for Anthropic’s position: https://notdivided.org/ Second, three hours after this announcement, the United States launched a vicious, premeditated, and illegal set of strikes on Iran.
In this context, it is evident that Sam Altman’s declaration that the “DoW display[s] a deep respect for safety” is an outright lie. It is a rejection of decent values and an insult to all of us. Military strikes are not safety.
Do not let them hide behind weasel words that try to distinguish AI safety from safety. That distinction would reveal a myopic misunderstanding of why AI safety matters, and it would be yet another lie: the underlying concerns about domestic surveillance are concerns about safety tout court. What we see in their actions is the morally bankrupt and considered position of OpenAI, and it deserves nothing but scorn.