Big Techs such as Google and Meta will have to take action against deepfakes and fake accounts – or risk huge fines. Deepfakes are videos that use a person’s likeness to portray them doing or saying something they never did.
New EU regulation, backed by the Digital Services Act (DSA), will require tech companies to address these forms of disinformation on their platforms. Companies can be fined up to 6% of their global turnover if they fail to comply.
The strengthened code aims to stop platforms from profiting from disinformation and fake news, as well as to increase transparency around political advertising and curb the spread of ‘new malicious behaviour’ such as bots, fake accounts and deepfakes.
Clubhouse, Google, Meta, TikTok, Twitter and Twitch are among the 33 signatories that worked together to agree the improved code.
Big Techs that have signed up to the code will be required to share more information with the EU – with all signatories required to provide initial reports on code implementation by early 2023.
Platforms with more than 45 million monthly active users in the EU will have to report to the Commission every six months.
Nick Clegg, president of global affairs at Meta, wrote on Twitter: “Fighting the spread of misinformation is a complex and evolving social issue.”
“We continue to invest heavily in teams and technology and look forward to more collaboration to resolve this together.”
A Twitter spokesperson said the company welcomed the updated code. “Through and beyond the Code, Twitter remains committed to fighting misinformation and disinformation as we continue to evaluate and evolve our approach in this ever-changing environment,” said a statement.
Google did not respond to a request for comment.
What are the dangers of deepfakes?
Deepfakes have been identified as an emerging form of disinformation when used maliciously to target politicians, celebrities and ordinary citizens.
In recent years, they have become increasingly associated with pornography, with faces of individuals mapped onto sexually explicit material.
Deepfake expert Nina Schick says non-consensual pornographic deepfakes are the main form of malicious deepfakery today — notably affecting well-known figures including Michelle Obama, Natalie Portman and Emma Watson.
Concerns have also been raised about the use of deepfakes in the political sphere, with fake videos of world leaders being shared online during the Russia-Ukraine war.
“This new anti-disinformation code comes at a time when Russia is weaponizing disinformation as part of its military aggression against Ukraine,” said Věra Jourová, European Commission Vice President for Values and Transparency, “but also when we see attacks on wider democracy”.
“We now have very significant commitments to reducing the impact of online disinformation and much more robust tools to measure how it is implemented across the EU in all countries and in all their languages.”
Double-edged sword
The difficulty of differentiating deepfakes from real images is likely to increase over the next few years, says Schick, citing the increased availability of tools and applications needed to develop malicious deepfakes.
While deepfakes allow bad actors to spread disinformation directly, their wider presence on online platforms also creates a climate of uncertainty around information – one that is open to further manipulation.
For example, genuine images can be dismissed as deepfakes by those looking to avoid liability.
This makes it increasingly difficult for citizens to recognise genuine content, and for regulators and platforms to act on it.
“You have this kind of double-edged sword – anything can be faked and everything can be denied,” says Schick.
In recent years, Big Techs have made efforts to detect and combat deepfakes on their platforms – with Meta and Microsoft among the stakeholders launching the Deepfake Detection Challenge for AI researchers in 2019.
But platforms “too often use deepfakes like a fig leaf to cover up the fact that they’re not doing enough on existing forms of disinformation,” says Schick.
“They are not the most prevalent or malicious forms of disinformation; we have so many existing forms of disinformation that are already causing more damage.”
Under the revised EU code, accounts that engage in coordinated inauthentic behavior generating false engagement, impersonation and bot-driven amplification will also need to be periodically reviewed by relevant tech companies.
But Schick adds that deepfakes still have the potential to become “the most potent form of disinformation” online.
“This technology will become increasingly prevalent relatively quickly, so we need to be at the forefront,” she says.
Vlops and Vloses
The DSA – agreed by the European Parliament and EU member states in April – is the EU’s planned regulation for illegal content, goods and services online, based on the principle that things that are illegal offline must also be illegal online.
Due to come into force in 2024, the DSA will apply to all online services operating in the EU, but with a special focus on what it calls Vlops (very large online platforms, such as Facebook and YouTube) and Vloses (very large online search engines, such as Google) – defined as services with more than 45 million users in the EU.
It will be the legal tool used to support the new code of conduct on disinformation, in a bid to fight fake news and falsified images online.
Translated from a BBC article by Tom Gerken and Liv McMahon
The post “Big Techs must deal with disinformation or face end, says EU” appeared first on DNEWS.