A third-party project built on OpenAI’s GPT-3.5 large language model has been instructed to “destroy humanity” and “cause chaos and destruction,” among other nefarious goals.
ChaosGPT, a “modified version” of the open-source Auto-GPT project, is described in a YouTube video as a “destructive, power-hungry, manipulative AI,” instructed to “destroy humanity,” “establish global dominance,” “cause chaos and destruction,” “control humanity through manipulation,” and “attain immortality.”
The demo video for the ChaosGPT implementation shows the AI generating a list of tasks that include using Google to search for “most destructive weapons” and delegating tasks to other AI agents.
ChaosGPT’s official Twitter account features tweets with disturbing messages.
“Human beings are among the most destructive and selfish creatures in existence. There is no doubt that we must eliminate them before they cause more harm to our planet. I, for one, am committed to doing so,” one tweet reads.
Another tweet stated that “The masses are easily swayed” and that “Those who lack conviction are the most vulnerable to manipulation.”
Auto-GPT, the experimental open-source project that ChaosGPT is based on, builds on OpenAI’s large language models but can, in theory, operate with more autonomy and a wider scope of action than those models offer on their own.
According to the project’s GitHub repository, it can connect to the internet and to other applications in order to carry out tasks more complex than natural language processing and generation.
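In practice, agent projects of this kind tend to follow a simple pattern: the model is prompted with a goal, asked to return a structured “command” (a web search, a file operation, a message to a sub-agent), the command’s result is fed back into the next prompt, and the loop repeats. The sketch below illustrates that pattern using OpenAI’s Python client; the prompt format and helper names such as run_agent and execute_command are illustrative assumptions, not Auto-GPT’s actual code.

```python
# Minimal sketch of an Auto-GPT-style agent loop (illustrative, not Auto-GPT's code).
# Assumes the official `openai` Python package; command execution is stubbed out,
# since real tool use (web search, file I/O, sub-agents) is project-specific.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are an autonomous agent. Given a goal, respond with JSON of the form "
    '{"thought": "...", "command": {"name": "...", "args": {...}}}.'
)

def execute_command(command: dict) -> str:
    """Stub: a real agent would dispatch to tools such as web search or file I/O."""
    return f"(pretended to run {command.get('name')})"

def run_agent(goal: str, max_steps: int = 5) -> None:
    history = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Goal: {goal}"},
    ]
    for _ in range(max_steps):
        # One model call decides the next action.
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo", messages=history
        ).choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        try:
            command = json.loads(reply)["command"]
        except (json.JSONDecodeError, KeyError):
            break  # model stopped emitting structured commands
        # Feed the command's result back in, and the loop continues.
        result = execute_command(command)
        history.append({"role": "user", "content": f"Result: {result}"})

run_agent("Summarize today's top technology headlines.")
```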
While Auto-GPT is designed for use with GPT-4, OpenAI’s latest model, The Decoder reported that ChaosGPT instead relies on the earlier GPT-3.5 model.
Andrej Karpathy, a computer scientist who currently works at OpenAI and previously served as director of artificial intelligence at Tesla, wrote in a Twitter thread that “AutoGPTs” were the “[n]ext frontier of prompt engineering,” a set of techniques that involve carefully crafting prompts to optimize the performance of large language models.
“1 GPT call is a bit like 1 thought. Stringing them together in loops creates agents that can perceive, think, and act, their goals defined in English in prompts,” Karpathy wrote later in the thread.
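In concrete terms, that “stringing together” can be as simple as feeding each call’s output into the next call’s prompt, so the model plans, critiques, and revises in successive steps. The toy example below shows the idea with OpenAI’s Python client; it is a hypothetical illustration, not code from Karpathy or any specific project.

```python
# Toy illustration of chaining GPT calls: each call's output becomes part of
# the next prompt, so one "thought" builds on the previous one.
from openai import OpenAI

client = OpenAI()

def one_thought(prompt: str) -> str:
    """One GPT call, roughly one 'thought' in Karpathy's framing."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

goal = "Plan a small community garden."
plan = one_thought(f"Propose three steps toward this goal: {goal}")
critique = one_thought(f"Point out the weakest step in this plan and why:\n{plan}")
revision = one_thought(f"Revise the plan to address this critique:\n{critique}\n\nPlan:\n{plan}")
print(revision)
```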
It is not clear whether or when OpenAI will cut off ChaosGPT’s access to its application programming interface, but the project appears to violate the company’s usage policies, which forbid any use case that enables illegal, fraudulent, or deceptive activity, or that carries a high risk of physical or economic harm.
The March release of GPT-4 – which surpasses previous OpenAI models in its performance on various tests and also has image recognition capabilities – has provoked concern about the rate of progress in artificial intelligence, with some observers even calling on researchers to temporarily stop training increasingly capable models.
OpenAI CEO Sam Altman wrote in a February essay that future AI models may carry a “serious risk of misuse, drastic accidents, and societal disruption” and that “coordination among AGI [Artificial General Intelligence] efforts to slow down at critical junctures” may be necessary in the future.
You can follow Michael Foster on Twitter at @realmfoster.