New threats are constantly emerging in the digital era, and existing cyber defense strategies often have little effect against them. One such persistent threat is the spread of disinformation campaigns, or “fake news” as it has come to be known.
Disinformation is the process of sharing false information about an individual, group, company, or brand to create a distorted perception of the facts. Detecting it presents a special challenge, as it is fundamentally more difficult than tackling spam.
Whereas anti-spam efforts are largely automated today, countering disinformation remains labor intensive. It is not always clear whether content contains misinformation, even to a human reader; stories must be analyzed to identify manipulative speech and false narratives. There have been advances in applying artificial intelligence (AI) against fake news, but because AI models can inherit biases from their designers and their input data, they still require human oversight.
Everyone is affected by the problem. A study conducted by Adverifai found that Google Ads and Google’s DoubleClick advertising platform were responsible for 69% of the ads that appeared on fake news sites during January and February of this year. This is despite the fact that Alphabet’s companies, together with the social media platforms, have policies against the propagation of fake news.
Facebook, Google, and Twitter all say they are committed to fighting fake news. Primarily, this takes the form of blocking some publications and preventing the promotion of others, done manually and decided on a case-by-case basis. Facebook hopes AI will take the primary role in detecting hate speech on its platform within the next five to ten years.
Google says it has a strict policy of blocking dangerous and misleading content: the company claims to remove such content and to cut off these sites’ ability to generate revenue. Google has also pointed to its advertiser controls, which let brands decide where their ads run, including the ability to exclude specific websites or entire topics.
Content recommendation and advertising firms are also taking steps to ensure their platforms cannot be manipulated. Taboola, known for placing sponsored links on publishers’ websites, has moved to halt the spread of misinformation. According to CEO Adam Singolda, the company is positioning itself at the forefront of anti-fake news efforts in content moderation: “We are one of the world’s leading companies for content moderation, with a team that works manually according to our public policy. We also work with other companies that assist us in our efforts. […] With Taboola everything is public, we have a human team that’s backed by AI and software, and you can always expect us to be consistent.” To that end, Taboola said it plans to invest more than $100m in research and development in 2021.
The latest study shows that, while the big tech companies keep such content off their own platforms, significant work is still needed against sites that use those platforms and capabilities to spread disinformation. The greatest challenges in tackling the spread of disinformation will lie in the AI sphere and in developing new technologies that enable a faster, more agile response to the propagation of fake news.