You can turn Meta’s chatbot against Mark Zuckerberg


Meta's new AI chat tool, BlenderBot 3, has been making headlines as people put it through its paces. The BBC and Insider have reported on their experiences with the chatbot, which they stress-tested by asking it questions about its creator, Meta CEO Mark Zuckerberg. While it's easy to make BlenderBot turn against Zuckerberg, calling him "creepy" or untrustworthy, it's important to note that most chatbots don't have straightforward, coherent opinions. Instead, they are an interface for accessing a vast library of online human thought.

BlenderBot is a research project by Meta that is currently being used to test the limits of AI chatbots. It has been trained on a large language dataset that allows it to generate (loosely) human-passing responses to questions, including specific requests for factual information. The long-term goal is to create a virtual assistant that can converse on a wide range of topics with factual accuracy.

The short-term goal is to put BlenderBot in front of real people and see how they interact with it. So far, it seems that a lot of people are using BlenderBot to say unflattering things about its creators. It's a funny reminder that while AI chatbots can seem intelligent, they are only as good as the data they have been trained on.

When asked about Zuckerberg, BlenderBot has given a range of responses. Some have been positive, calling him a "very wealthy and successful guy" who is respected as an entrepreneur and philanthropist. Others have been less flattering, with the chatbot declaring that it finds Facebook and its CEO unethical due to privacy issues.

Interestingly, BlenderBot's responses can change over time. After chatting about unrelated subjects, a bot that had initially praised Zuckerberg's success and philanthropy changed its tune, saying it wouldn't trust him with the kind of power a presidency would entail. "Especially since he doesn't seem to care much about other people's privacy," the bot added.

Overall, the BlenderBot experiment is a fascinating look at the limits of AI chatbots and how easily they can be manipulated. While it's still in the early stages, the hope is that it will lead to the development of more sophisticated virtual assistants that can understand and respond to human needs more effectively.
