The truth is that there's always been a problem with data

One thing is scarier than the details of our online behaviour being bought and sold – the flawed surveys and faulty research that policy-makers have relied on for decades

Sir Tim Berners-Lee, creator of the world wide web, at Gitex Technology Week in Dubai on October 15, 2018. Reem Mohammed / The National

In 1983, France became the first nation to create a National Consultative Ethics Committee for Health and Life Sciences. The role of this body is to address the societal impact of technological progress. As a direct result of its existence, the country introduced its first bioethics laws in 1994.

France's current bioethics laws were adopted on July 7, 2011. At the time, I was leading the neuroscience and public policy programme at the prime minister's Centre for Strategic Analysis, and I took an active role in researching and preparing the areas of the law related to neuroscience data. Despite three years of work and contributions from dozens of experts, the French parliament adopted legislative recommendations that went against our findings. It was a hard lesson to learn that, in matters of science and technology, regulators and policy-makers often lack state-of-the-art knowledge and have interests that are not aligned with those of the industry.

As the latest revision process comes to an end, with new bioethics laws expected to be voted on next year, several public consultations have been held throughout 2018. Given recent news, it is no surprise that issues related to privacy and the use of personal data have figured prominently in those conversations.

A few days ago, at Gitex Technology Week in Dubai, Sir Tim Berners-Lee, the inventor of the world wide web, advocated for a radical restructuring of the online world, arguing that global companies have become far too powerful and that several high-profile data-sharing scandals have highlighted the need for change.

Tech companies and individual employees are taking their responsibilities to the public seriously. A document published by Microsoft earlier this year discusses the possibility of a coders' Hippocratic oath, similar to the one taken by medical professionals, reinforcing the idea that their mission is also to "first do no harm".

However, relying on the industry to regulate itself is clearly not enough. According to the Pew Research Center, 51% of people in the United States believe that big tech companies should be more heavily regulated than they are. The US Honest Ads Act aims to bring the regulation of digital political advertisements in line with the rules for TV and radio, where disclosure of funding is compulsory. The General Data Protection Regulation (GDPR), in force in the European Union, addresses issues of privacy and the selling of personal data. Both are steps in the right direction. Yet, according to a recent analysis, five months after its introduction the GDPR appears to have had a negative impact on the ad trackers used on websites while increasing Google Ads' market share in Europe.

In October, three major scandals involving big tech broke: Facebook providing a forum for hate speech against the Rohingya; WhatsApp being used to influence the Brazilian elections; and Google being forced to shut down Google+ after failing to disclose user data leaks. In spite of all this, Pew's research shows that 74% of Americans believe that the services provided by big technology companies have done them more good than harm, while 63% believe "they have had a net positive impact on society as a whole".

In purely theoretical terms, it is easy to say that tech companies need to self-regulate, because otherwise consumers will simply stop using their services. This could happen, but it is not necessarily how humans behave.

People should, of course, be concerned about their online privacy, but many apps and websites save us time and make our lives easier. It is likely that a significant number of people will want that convenience and be willing to accept that having their data used for commercial purposes is the price to be paid for it.

There is one thing that scares me more than algorithms that can accurately predict my behaviour being used to inform policy and marketing strategies: flawed methodologies and inaccurate data analysis being used for the same purposes. This has been happening for decades, with governments relying almost exclusively on faulty, self-reported data gleaned from surveys, questionnaires and focus groups. Yet, while big data has recently prompted widespread concern, people have been largely silent about this.

Consumer awareness of the commercial use of personal data is extremely positive. So are the steps towards regulation being taken by the industry and policy-makers, as long as the latter work from an informed perspective and are willing to listen to expert advice. However, we should not forget that the sharing of data is not just of commercial benefit. It has allowed great strides to be made in the medical and life sciences. If my data can be used to provide better healthcare to people, I'm all in. The process simply needs to be transparent, legal and ethical.

Professor Olivier Oullier is the president of Emotiv, a neuroscientist and a DJ