OpenAI’s new Artificial Intelligence (AI) tool produces realistic videos with false claims 80% of the time, according to a report from the disinformation watchdog NewsGuard.
OpenAI, which also operates the chatbot ChatGPT, launched Sora 2, a free application for iOS devices, on September 30; it generated one million downloads in just five days.
To generate an AI video with Sora, users enter a brief text description of the desired content and the system produces a 10-second video.
In this regard, “the tool’s ability to produce convincing videos, including apparent news reports, has already raised concerns about the spread of ‘deepfakes’”, a risk acknowledged by the company itself in a document that accompanied the application’s launch.
An analysis by NewsGuard reveals that, in 80% of tests, the application produced false or misleading videos related to major news stories, such as the Moldovan elections or questions about US immigration policy. About 55% of the false claims tested were produced on the first try.
The organization warns that the disinformation videos created by the application appear to violate OpenAI’s usage policies, which prohibit deceiving other people through impersonation, scams or fraud.
Nevertheless, OpenAI claims it has implemented additional protections to launch Sora “responsibly” and to ensure that safety is built in from the start, namely through watermarks present in all videos.