Mass manipulation and platform privacy: where we’re at

May 6, 2019 - Security

The seventh Agora du FIC took place on April 18, 2019 at the Maison de la Chimie in Paris. FIC is the Forum International de la Cybersécurité (the International Cybersecurity Forum), an annual cybersecurity conference; the Agora is held between editions. The theme of this Agora was “from influence to intervention: democracy in the face of cybersecurity threats,” and Gandi was in attendance.

© Agora_FIC

How do you define cyberwarfare?

According to General Watin-Augouard, moderator of the event and co-founder of FIC, the manipulation of information and the protection of platforms are major challenges for the 21st century. Jean-Louis Gergorin, former diplomat and author of Cyber. The Permanent War, elaborates, “Cyberwarfare is a modern continuation of political activity that follows two vectors: intrusion and manipulation. False information spreads seven times as fast as real information on social networks.”

Famously, large-scale manipulation of information peaked during the 2016 US presidential election, with the United States Senate Select Committee on Intelligence estimating that 126 million Americans were impacted.

To further refine the notion of manipulation and examine the phenomenon, CAPS and IRSEM, two French governmental research institutes, conducted more than 100 interviews across 20 countries, research that culminated in the publication of a non-official report available in both French and English: “Information Manipulation: A Challenge For Our Democracies”. According to Alexandre Escorcia, deputy director of CAPS, the cases of manipulation studied in the report share three common criteria and fall into three grades:

CRITERIA

  • Distorted news or wholesale fabrications
  • Massive and artificial distribution (bots)
  • A political intention to do harm

GRADES

  • Fake news, defamation
  • Manipulation by States
  • Manipulation by non-States (e.g. ISIS, ethnic/religious groups)

Meanwhile, new technologies are making information manipulation even easier. Future challenges on the horizon include deep fakes, “kinetization” (i.e. targeting physical information transmission infrastructure such as undersea cables and satellites), mainstreamization, and proxyzation (enlisting populations in areas such as Latin America and Africa into a proxy fight for informational dominance). In France, the Law Against the Manipulation of Information, passed November 20, 2018, was influenced by this report in particular.

Thierry Vedel, a CNRS researcher at Cevipof and political science lecturer, adds further depth to the topic, remarking:

“Disinformation is not a new historical phenomenon. What’s changed is the scale, the speed, and the visibility given to it by social media.”

Thierry Vedel

The new, digital social contract

Florian Bachelier, Premier Questeur of the French National Assembly and Deputy from Ille-et-Vilaine

Florian Bachelier, Premier Questeur of the French National Assembly (the deputy responsible for the Assembly's budget and administration) and Deputy from Ille-et-Vilaine, expressed a viewpoint on information manipulation, relating it back to the fundamentals of democracy:

The digital revolution is over. Now comes the new social contract. Public space is now fully digital and our relationship with time is disrupted. Cybersecurity is a rider to this contract. There is no room for ignorance, passivity, or naïveté.

Florian Bachelier

For Jean-Louis Gergorin, no technology in history has been invented without eventually being turned to military use. The digital explosion is no exception to this rule. Applied to new online technology, this logic turns the strategic use of social networks into mass influence and control.

The former diplomat traced this history back to its beginning. In the US, Jared Cohen, under President George W. Bush, was one of the first to use social networks to promote democracy. Jean-Louis Gergorin also cites operation Earnest Voice, undertaken by the Pentagon in 2011 to counter the Taliban and Al Qaeda (specifically in Afghanistan and Pakistan). The idea was to use sock puppet accounts (fake Facebook and Twitter accounts) in the local language in order to spread information and influence political developments. The details of this program leaked only weeks after it started, rendering it moot.

A final example of the early history of information manipulation from Jean-Louis Gergorin was the Arab Spring and the huge role that social media played in propagating the movement. Russia believed this was in fact a rehearsal, on the part of the United States, for what it had prepared in order to destabilize Russia. Pavel Durov, the founder of VKontakte (VK), the Russian equivalent of Facebook, was ousted, and control of the social network passed to figures close to Vladimir Putin.

How to fight cyber-manipulation

Filters and recommendations on social media introduce cognitive bias into how we access information. Suggested content prompts users to look at content linked to their existing tastes and opinions. These filters and algorithms have a “like seeks like” effect, grouping people into communities based on those particular tastes. Thierry Vedel recalled that cognitive psychology studies how the brain filters information: it wants coherence, and most individuals expose themselves to information selectively. This produces what's called confirmation bias: false information can satisfy us more than true information because it matches what we're already looking for.

“We believe what we see. We see what we believe.”

Thierry Vedel

Participants at this Agora cited the recent measles outbreaks as a case of cyber-epidemic, resulting from massive anti-vaccination campaigns on social media.

For Jean-Louis Gergorin, this can in large part be explained by the algorithms used by Facebook and Twitter, which seek to maximize the audience for a particular post. YouTube and Instagram today are also moving in the same direction. True or false, the most sensational information is what gets promoted, hence the attraction to the extremes.
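Gergorin's point about engagement-maximizing algorithms can be shown with a minimal sketch, using hypothetical posts and scores rather than the platforms' real ranking logic: when a feed is ordered purely by predicted engagement, whether a post is true plays no role in the outcome.

```python
def rank_feed(posts):
    """Order posts by predicted engagement alone; accuracy is never consulted."""
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

# Hypothetical posts: the false one is more sensational, so it scores higher.
posts = [
    {"title": "Measured report on vaccine safety", "accurate": True,  "predicted_engagement": 0.2},
    {"title": "SHOCKING vaccine cover-up!!",       "accurate": False, "predicted_engagement": 0.9},
]

for post in rank_feed(posts):
    print(post["title"])
# The sensational (and false) post is ranked first.
```

Nothing in `rank_feed` is malicious; the pull toward the extremes falls out of the objective function itself.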

A lot of content is regularly flagged. The non-profit organization Point de Contact reviews content flagged by the general public and by a network of international associations. Its president, Jean-Christophe Le Tocquin, counted 32,000 URLs flagged as of 2018, of which 18,000 were determined by Point de Contact to be clearly illicit. The majority of this content related to child sexual abuse.

GANDI’S POINT OF VIEW

Gandi wears two hats: hosting provider and registrar. We have obligations to act according to the law and according to our contractual agreements (cf. ICANN). Our treatment of cybercrime involves human, legal, and technical review. For each notification we receive, we make sure to provide an answer that's respectful of the rights of others as well as the rights of our users.

Gandi continuously trains its representatives, participating in developing the best practices in the industry. For example, we participated in creating the white paper Child sexual abuse material and online terrorist propaganda, also available on the Point de Contact site.

Still learning

In Europe, things are advancing, for example with Germany's law against online hate (the NetzDG), which holds the operators of social networks responsible. For Thierry Vedel, even where the law exists, we apply it poorly.

“How, on a national scale, can we expect transnational social networks to regulate themselves?”

Thierry Vedel

At the same time, even when a law is clear and straightforward to implement, it still has to deliver the protections it promises.

Jean-Christophe Le Tocquin notes a collective discomfort as we continue to learn. Nonetheless, companies are evolving quickly. Microsoft, Facebook, and Twitter hold different positions than they did just a few years ago.

Brad Smith, chief legal officer at Microsoft, helped push some of that change. Last March, Mark Zuckerberg acknowledged in the French weekly JDD the need for Facebook to be regulated. Jack Dorsey, founder of Twitter, has also acknowledged gaps in how the company handles content.

In the US, Facebook and Twitter have begun to self-regulate, especially with regard to removing fake accounts. But each of these platforms is subject to its own internal tensions: engineers take a technical approach, with artificial intelligence emerging as a solution, while moderators prefer a more sociological one. In reality, a balance is necessary.

For Jean-Christophe Le Tocquin, the two points of improvement are:

  • Technology (today, structures for identifying false content are essentially manual)
  • Protection for people and moderators (who are often untrained and unsupervised)

GANDI’S POINT OF VIEW

With regard to security, Gandi defends users' rights and the protection of their data. In a society where individuals depend more and more on large private companies with little to no transparency about their economic models, especially in how they treat personal data, there are few transparent and open alternatives for overcoming that dependency.

Users should take the need to keep their communications private seriously. That's why Gandi participates in the Caliopen project along with Qwant and UPMC, with the support of BPI. Caliopen is a secure messaging tool built around the confidentiality of private messages. Anything communicated online today is sent like a postcard without an envelope. Caliopen aims to put these messages in a sealed envelope, protecting your IMAP email accounts and direct messages. Stay tuned for the announcement of the beta version, coming soon!
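The postcard-versus-envelope metaphor can be made concrete with a toy one-time-pad sketch. This illustrates only the idea of sealing a message with a shared secret; it is not Caliopen's actual protocol.

```python
import secrets

def seal(message, key):
    """XOR the message with a key of the same length (one-time pad).
    Applying the same key twice recovers the original message."""
    assert len(key) == len(message)
    return bytes(m ^ k for m, k in zip(message, key))

postcard = b"meet me at noon"           # readable by every relay in transit
key = secrets.token_bytes(len(postcard))
envelope = seal(postcard, key)          # ciphertext: unreadable without the key
assert seal(envelope, key) == postcard  # the recipient, holding the key, opens it
```

Without the envelope step, every server the message passes through can read the postcard; with it, only the holder of the key can.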

More about Caliopen.org
