Google: arbiter of wellness?
Project Baseline is mapping the health data from 10,000 people over four years, in a quest ‘to collect comprehensive health data and use it as a map and compass, pointing the way to disease prevention’. Headed by Google and Verily, another Alphabet company, the project has been dubbed the ‘Google Earth for Human Health’.
The team are collecting data from healthy people from early adulthood to old age. Everything is measured, from participants’ genomes, microbiomes and immunomes to their diets, activities and sleep patterns. The results of questionnaires, check-ups and medical tests will be combined with data—heart rate, electrodermal activity and inertial movements—tracked on Verily-built smartwatches. The aim is to construct a picture of normal health and to find and track the factors that contribute to the onset of disease.
At the same time, Google is creating algorithms intended to understand our lives and genes so completely that they can monitor us, prevent illness from developing and determine targeted treatment. The prize is the prediction, prevention and tailored early treatment of illness.
While this is a worthy ambition, the prospect of a Google Earth for human health is also worrying. What will it mean to let one company, any company, with its own commercial interests at heart, hold this information and create the algorithms that determine our wellbeing? Google, particularly, is under increasing pressure on privacy and data sharing, illustrated most recently by Shoshana Zuboff in her book on ‘surveillance capitalism’.
According to Google, while it “provides the computing, analytics, and data handling power, Google will not sell your information for advertising. All your information will be stored in a secure, encrypted database with restricted access.” But Google’s record of keeping its word does not inspire confidence. DeepMind Health, a healthcare subsidiary, has a partnership with 10 NHS hospitals in the UK to process patient medical data. Despite DeepMind’s promise when bidding for the NHS work that ‘at no stage will patient data ever be linked or associated with Google accounts, products or services’, the company was moved from sibling status within holding company Alphabet into Google itself in November last year.
Algorithms hide embedded assumptions and intentions, and the workings of companies like Google are opaque. Google is not a philanthropic organisation; we need to know how it will seek a return on its investment.
The Arbiter of Wellness
More concerning than participants’ information being sold for advertising is the prospect of one organisation holding and analysing the information of millions of people. The Google Earth analogy indicates an ambition to monitor us all.
In A New Dark Age: technology and the end of the future, James Bridle points out the dangers of a company with ‘the computing, analytics, and data handling power’ mapping and modelling such an area of knowledge and expertise:
“That which computation sets out to map and model, it eventually takes over. Google set out to index all human knowledge and became the source and arbiter of that knowledge: it became what people actually think.”
It takes over. If Google creates a ‘Google Earth for human health’ it will come to define what illness is and how such illness is to be managed. Normality, including mental normality, will be determined by computation.
The complexity of such massive data collection and analysis systems beguiles. We imagine them to be objective, politically and emotionally neutral, and accurate. Yet algorithms are subject to the biases of the select group of humans who set them up and to the often erroneous and partial data that feeds them.
The data is partial because human wellness is complex and multifactorial. Many of these factors cannot be measured. Questionnaires are biased and miss grey areas between simple answers, while not all material questions are asked. Behaviour is nuanced, just as physical indications can be ambiguous. Equipment malfunctions, just as humans are inconsistent in their use of equipment—and their recall and description of symptoms.
As Project Baseline advances, masses of useful data will be processed and analysed to enable us to predict, more accurately treat and hopefully prevent many diseases. All this is undeniably good. But we must ensure that access to personal information is secure and the parameters of a disease are not defined by someone else’s wish to sell a cure. Crucially, machines, with their unknown biases and inaccuracies, must not be used to define normality where illness, or a prospective illness, involves loss of liberty, rights or employment.
- The project is a joint initiative of Google, Verily, Duke University School of Medicine and Stanford Medicine.
- James Bridle, A New Dark Age: technology and the end of the future, Verso, 2018, p. 39.
- According to Bridle, we tend to believe machines. ‘Automation bias’, meaning a tendency to value automated information over our own experience, has been observed ‘in every computational domain from spell-checking software to autopilots, and in every type of person’.