Representation of artificial intelligence at The Science Museum, London. Image (C) Tim Sandle
As artificial intelligence programs continue to develop and access becomes easier, differentiating fact from fiction is becoming more challenging. For example, an AI-generated image of an explosion near the Pentagon recently made headlines online, with a number of people believing it showed a real attack.
According to The Guardian: “The image of a tall, dark gray plume of smoke quickly spread on Twitter, including through shares by verified accounts. It remains unclear where it originated.”
Cayce Myers, a professor in Virginia Tech’s School of Communication, has been studying this evolving technology and considers the future of deep fakes and how to spot them.
“It is becoming increasingly difficult to identify disinformation, particularly sophisticated AI generated deep fake,” explains Myers. “The cost barrier for generative AI is also so low that now almost anyone with a computer and internet has access to AI.”
Myers expects we will be exposed to increasing levels of disinformation – both visual and written – over the next few years. This will bring with it complications. He states: “Spotting this disinformation is going to require users to have more media literacy and savvy in examining the truth of any claim.”
While Photoshop programs have been used for years, Myers sees the difference between this photo-enhancing technology and wilful disinformation created with AI as one of sophistication and scope.
Myers explains the difference: “Photoshop allows for fake images, but AI can create altered videos that are very compelling. Given that disinformation is now a widespread source of content online this type of fake news content can reach a much wider audience, especially if the content goes viral.”
When it comes to combating disinformation, Myers says there are two main lines of defence – ourselves and the AI companies.
“Examining sources, understanding warning signs of disinformation, and being diligent in what we share online is one personal way to combat the spread of disinformation,” Myers says. “However, that is not going to be enough. Companies that produce AI content and social media companies where disinformation is spread will need to implement some level of guardrails to prevent the widespread disinformation from being spread.”
Myers goes on to explain the underlying problem: the pace of development. AI technology has advanced so quickly that any mechanism to prevent the spread of AI-generated disinformation is unlikely to be foolproof.
Furthermore, attempts to regulate AI face difficulties of their own. Myers states: “The issue is that lawmakers do not want to create a new law regulating AI before we know where the technology is going. Creating a law too fast can stifle AI’s development and growth, creating one too slow may open the door for a lot of potential problems. Striking a balance will be a challenge.”