OpenAI has moved swiftly to shut down a developer who built an AI-powered device that could respond to ChatGPT voice commands by aiming and firing an automated rifle. The device gained viral attention after a Reddit video showed its developer reading firing commands aloud, followed by the rifle swiftly aiming and firing at nearby walls.
As the video demonstrated, the device relied on OpenAI's Realtime API to interpret spoken input and return directions the contraption could act on. That seamless interaction raises concerns about how easily AI technology like this can be turned to harmful ends: with little more than a careful prompt, no special training required, ChatGPT can translate a command like "turn left" into machine-readable instructions.
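To make that concern concrete, here is a minimal sketch of that kind of command translation. It uses OpenAI's standard chat completions API rather than the Realtime API the device relied on, and the model name, system prompt, and JSON schema are illustrative assumptions, not details taken from the video.

```python
# Illustrative sketch only: mapping a natural-language command to
# machine-readable output via the OpenAI chat completions API.
# The prompt, command vocabulary, and output schema are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Translate the user's movement command into JSON with two fields: "
    '"action" (one of "rotate", "stop") and "degrees" (a signed integer, '
    "negative for left). Respond with JSON only."
)

def command_to_json(command: str) -> str:
    """Return a machine-readable translation of a spoken command."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model would do for this sketch
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": command},
        ],
    )
    return response.choices[0].message.content

print(command_to_json("turn left"))  # e.g. {"action": "rotate", "degrees": -90}
```

Nothing in this sketch is weapons-specific; the same few lines could steer any motorized device, which is precisely what makes the capability so difficult to fence off.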
In response to the controversy, OpenAI issued a statement saying it had viewed the video and acted promptly to shut the activity down. The decision underscores the importance of guardrails against misuse of AI technology; as OpenAI CEO Sam Altman has warned, unchecked AI could pose a threat to humanity.
The concern about AI-powered weapons is not unfounded. Critics argue that automating lethal weapons raises serious ethical questions. OpenAI's multi-modal models can interpret audio and visual inputs to understand a person's surroundings and answer questions about what they are seeing. Autonomous drones, already in development, could identify and strike battlefield targets without human input, a scenario that blurs the line between lawful warfare and war crimes and invites complacency about the difference.
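The multi-modal capability described above is exposed through the same general-purpose API as any other query. A hedged sketch of what that looks like, assuming a placeholder image URL and a vision-capable model:

```python
# A minimal sketch of querying a vision-capable model about an image.
# The URL is a placeholder; this is the general chat completions API,
# not any weapons-specific integration.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # a vision-capable model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what you see in this scene."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/scene.jpg"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```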
The fear is not theoretical: recent reports suggest that Israel has already used AI to select bombing targets, sometimes indiscriminately. Deployments like these raise hard questions about accountability when an algorithm, rather than a person, chooses the target.
Proponents of AI on the battlefield argue that it will make soldiers safer by letting them stay away from front lines while still neutralizing targets. Critics counter that the benefit depends entirely on how the technology is applied: rather than fielding autonomous drones, some experts suggest using AI to jam enemy communications, degrading an adversary's ability to launch attacks in the first place.
OpenAI prohibits the use of its products to develop or use weapons, or to automate systems that can affect personal safety. Yet the company has partnered with defense-tech firm Anduril to build systems that defend against drone attacks, a partnership that highlights how blurred the line between defense and offense has become in AI development.
The temptation to apply AI to warfare is understandable given the hundreds of billions of dollars the U.S. spends on defense each year. As the government continues to prioritize military funding, tech companies have a standing incentive to build AI-powered systems for defense applications. OpenAI has taken steps to keep its technology out of harmful uses, but the potential for misuse remains.
The proliferation of open-source models, combined with the ease of 3D-printing hardware, only exacerbates these concerns, since no single company's enforcement can reach models it does not control. As increasingly capable AI arrives in everyday consumer apps, it is crucial that developers prioritize ethical considerations and build safeguards against misuse of this powerful technology.