What is this threat?
ChatGPT, an artificial intelligence chatbot created by OpenAI, was tested for its ability to spread misinformation when prompted with questions containing conspiracy theories and false narratives.
Researchers at NewsGuard, a company that tracks online misinformation, conducted the experiment and described the results as troubling.
AI-powered chatbots can readily spread misinformation and personalize it in credible, persuasive ways, making disinformation cheaper and easier for more people to produce.
No currently available mitigation tactics can effectively combat this problem.
Predecessors to ChatGPT have been used to spread comments and spam on online forums and social media platforms.
Microsoft shut down its Tay chatbot in 2016 after trolls taught it to spew racist and xenophobic language.
ChatGPT is far more powerful and sophisticated than those predecessors, producing clean, convincing variations of disinformation content within seconds.
Microsoft and OpenAI have since introduced a new Bing search engine and web browser that use chatbot technology to plan vacations, translate texts, and conduct research.