Four steps to precision public health


Scott F. Dowell, David Blazes & Susan Desmond-Hellmann at Nature: “When domestic transmission of Zika virus was confirmed in the United States in July 2016, the entire country was not declared at risk — nor even the entire state of Florida. Instead, precise surveillance defined two at-risk areas of Miami-Dade County, neighbourhoods measuring just 2.6 and 3.9 square kilometres. Travel advisories and mosquito control focused on those regions. Six weeks later, ongoing surveillance convinced officials to lift restrictions in one area and expand the other.

By contrast, a campaign against yellow fever launched this year in sub-Saharan Africa defines risk at the level of entire nations, often hundreds of thousands of square kilometres. More granular assessments have been deemed too complex.

The use of data to guide interventions that benefit populations more efficiently is a strategy we call precision public health. It requires robust primary surveillance data, rapid application of sophisticated analytics to track the geographical distribution of disease, and the capacity to act on such information.

The availability and use of precise data are becoming the norm in wealthy countries. But large swathes of the developing world are not reaping their advantages. In Guinea, it took months to assemble enough data to clearly identify the start of the largest Ebola outbreak in history. This should take days. Sub-Saharan Africa has the highest rates of childhood mortality in the world; it is also where we know the least about causes of death…

The value of precise disease tracking was baked into epidemiology from the start. In 1854, John Snow famously located cholera cases in London. His mapping of the spread of infection through contaminated water dealt a blow to the idea that the disease was caused by bad air. These days, people and pathogens move across the globe swiftly and in great numbers. In 2009, the H1N1 ‘swine flu’ influenza virus took just 35 days to spread from Mexico and the United States to China, South Korea and 12 other countries…

The public-health community is sharing more data faster; expectations are higher than ever that data will be available from clinical trials and from disease surveillance. In the past two years, the US National Institutes of Health, the Wellcome Trust in London and the Gates Foundation have all instituted open data policies for their grant recipients, and leading journals have declared that sharing data during disease emergencies will not impede later publication.

Meanwhile, improved analysis, data visualization and machine learning have expanded our ability to use disparate data sources to decide what to do. A study published last year used precise geospatial modelling to infer that insecticide-treated bed nets were the single most influential intervention in the rapid decline of malaria.

However, in many parts of the developing world, there are still hurdles to the collection, analysis and use of more precise public-health data. Work towards malaria elimination in South Africa, for example, has depended largely on paper reporting forms, which are collected and entered manually each week by dozens of subdistricts, and eventually analysed at the province level. This process would be much faster if field workers filed reports from mobile phones.

…Frontline workers should not find themselves frustrated by global programmes that fail to take into account data on local circumstances. Wherever they live — in a village, city or country, in the global south or north — people have the right to public-health decisions that are based on the best data and science possible, that minimize risk and cost, and maximize health in their communities…(More)”

21st Century Enlightenment Revisited


Matthew Taylor at the RSA: “The French historian Tzvetan Todorov describes the three essential ideas of the Enlightenment as ‘autonomy’, ‘universalism’ and ‘humanism’. The ideal of autonomy speaks to every individual’s right to self-determination. Universalism asserts that all human beings equally deserve basic rights and dignity (although, of course, in the 18th and 19th centuries most thinkers restricted this ambition to educated white men). The idea of humanism is that it is up to the people – not Gods or monarchs – through the use of rational inquiry to determine the path to greater human fulfilment….

21st Century Enlightenment 

Take autonomy: too often today we think of freedom either as a shrill demand to be able to turn our backs on wider society or in the narrow possessive terms of consumerism. Yet brain and behavioural science have confirmed the intuition of philosophers through the ages: genuine autonomy is something we only attain when we become aware of our human frailties and understand our truly social nature. Of course, freedom from oppression is the baseline, but true autonomy is not a right to be granted but a goal to be pursued through self-awareness and engagement in society.

What of universalism, or social justice as we now tend to think of it? In most parts of the world and certainly in the West there have been incredible advances in equal rights. Discrimination and injustice still exist, but through struggle and reform huge strides have been made in widening the Enlightenment brotherhood of rich white men to women, people of different ethnicity, homosexuals and people with disabilities. Indeed the progress in legal equality over recent decades stands in contrast to the stubborn persistence, and even worsening, of social inequality, particularly based on class.

But the rationalist universalism of human rights needs an emotional corollary. People may be careful not to use the wrong words, but they still harbour resentment and suspicion towards other groups. …

Finally, humanism, or the call of progress. The utilitarian philosophy that arose from the Enlightenment spoke to the idea that, free from religious or autocratic dogma, the best routes to human fulfilment could be identified and should be pursued. The great motors of human progress – markets, science and technology, the modern state – shifted into gear and started to accelerate. Aspects of all these phenomena, indeed of Enlightenment ideas themselves, could be found at earlier stages of human history – what was different was the way they fed off each other and became dominant. Yet, in the process, the idea that these forces could deliver progress often became elided with the assumption that their development was the same as human progress.

Today this danger of letting the engines of progress determine the direction of the human journey feels particularly acute in relation to markets and technology. There is, for example, more discussion of how humans should best adapt to AI and robots than about how technological inquiry might be aligned with human fulfilment. The hollowing out of democratic institutions has diminished the space for public debate about what progress should comprise at just the time when the pace and scale of change makes those debates particularly vital.

A twenty first century enlightenment reinstates true autonomy over narrow ideas of freedom, it asserts a universalism based not just on legal status but on empathy and social connection and reminds us that humanism should lie at the heart of progress.

Think like a system, act like an entrepreneur

There is one new strand I want to add to the 2010 account. In the face of many defeats, we must care as much about how we achieve change as about the goals we pursue. At the RSA we talk about ‘thinking like a system and acting like an entrepreneur’, a method which seeks to avoid the narrowness and path dependency of so many unsuccessful models of change. To alter the course our society is now on, we need to understand more fully the high barriers to change, but then act more creatively and adaptively when we spot opportunities to take a different path….(More)”

The Government Isn’t Doing Enough to Solve Big Problems with AI


Mike Orcutt at MIT Technology Review: “The government should play a bigger role in developing new tools based on artificial intelligence, or we could miss out on revolutionary applications because they don’t have obvious commercial upside.

That was the message from prominent AI technologists and researchers at a Senate committee hearing last week. They agreed that AI is in a crucial developmental moment, and that government has a unique opportunity to shape its future. They also said that the government is in a better position than technology companies to invest in AI applications aimed at broad societal problems.

Today just a few companies, led by Google and Facebook, account for the lion’s share of AI R&D in the U.S. But Eric Horvitz, technical fellow and managing director of Microsoft Research, told the committee members that there are important areas that are rich and ripe for AI innovation, such as homelessness and addiction, where the industry isn’t making big investments. The government could help support those pursuits, Horvitz said.

For a more specific example, take the plight of a veteran seeking information online about medical options, says Andrew Moore, dean of the school of computer science at Carnegie Mellon University. If an application that could respond to freeform questions, search multiple government data sets at once, and provide helpful information about a veteran’s health care options were commercially attractive, it might be available already, he says.

There is a “real hunger for basic research,” says Greg Brockman, cofounder and chief technology officer of the nonprofit research company OpenAI, because technologists understand that they haven’t made the most important advances yet. If we continue to leave the bulk of it to industry, not only could we miss out on useful applications, but also on the chance to adequately explore urgent scientific questions about ethics, safety, and security while the technology is still young, says Brockman. Since the field of AI is growing “exponentially,” it’s important to study these things now, he says, and the government could make that a “top line thing that they are trying to get done.”…(More)”.

From policing to news, how algorithms are changing our lives


Carl Miller at The National: “First, write out the numbers one to 100 in 10 rows. Cross out the one. Then circle the two, and cross out all of the multiples of two. Circle the three, and do likewise. Follow those instructions, and you’ve just completed the first three steps of an algorithm, and an incredibly ancient one. Twenty-three centuries ago, Eratosthenes sat in the great library of Alexandria, using this process (it is called Eratosthenes’ Sieve) to find and separate prime numbers. Algorithms are nothing new; indeed, even the word itself is old. Fifteen centuries after Eratosthenes, Algoritmi de numero Indorum appeared on the bookshelves of European monks, and with it, the word to describe something very simple in essence: follow a series of fixed steps, in order, to achieve a given answer to a given problem. That’s it; that’s an algorithm. Simple.
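Miller’s first three steps generalise into the full sieve. Here is a minimal Python sketch of the procedure he describes (the function name and the limit of 100 are illustrative choices, not from the article):

```python
def eratosthenes_sieve(limit):
    """Return all primes up to and including `limit`, by Eratosthenes' method."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False  # 0 and 1 are not prime
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            # "Cross out" every multiple of n, starting from n*n
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False
    # Whatever was never crossed out is prime
    return [n for n in range(2, limit + 1) if is_prime[n]]

print(eratosthenes_sieve(100))
# [2, 3, 5, 7, 11, 13, ..., 89, 97]
```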

Except that, of course, the story of algorithms is not so simple, nor so humble. In the shocked wake of Donald Trump’s victory in the United States presidential election, a culprit needed to be found to explain what had happened. What had, against the odds, and in the face of thousands of polls, caused this tectonic shift in US political opinion? Soon the finger was pointed. On social media, and especially on Facebook, it was alleged that pro-Trump stories, based on inaccurate information, had spread like wildfire, often eclipsing real news and honestly-checked facts.
But no human editor was thrust into the spotlight. What took centre stage was an algorithm; Facebook’s news algorithm. It was this, critics said, that was responsible for allowing the “fake news” to circulate. This algorithm wasn’t humbly finding prime numbers; it was responsible for the news that you saw (and of course didn’t see) on the largest source of news in the world. This algorithm had somehow risen to become more powerful than any newspaper editor in the world, powerful enough to possibly throw an election.
So why all the fuss? Something is now happening in society that is throwing algorithms into the spotlight. They have taken on a new significance, even an allure and mystique. Algorithms are simply tools, but a web of new technologies is vastly increasing the power that these tools have over our lives. The startling leaps forward in artificial intelligence mean that algorithms have learned how to learn, and have become capable of accomplishing tasks and tackling problems that they were never able to achieve before. Their learning is fuelled with more data than ever before, collected, stored and connected by the constellations of sensors, data farms and services that have ushered in the age of big data.

Algorithms are also doing more things, whether welding, driving or cooking, thanks to robotics. Wherever there is some kind of exciting innovation happening, algorithms are rarely far away. They are being used in more fields, for more things, than ever before, and are incomparably, incomprehensibly more capable than the algorithms recognisable to Eratosthenes….(More)”

How Should a Society Be?


Brian Christian: “This is another example where AI—in this case, machine-learning methods—intersects with these ethical and civic questions in an ultimately promising and potentially productive way. As a society we have these values in maxim form, like equal opportunity, justice, fairness, and in many ways they’re deliberately vague. This deliberate flexibility and ambiguity are what allow them to remain living documents that stay relevant. But here we are in this world where we have to say of some machine-learning model: is this racially fair? We have to define these terms, computationally or numerically.

It’s problematic in the short term because we have no idea what we’re doing; we don’t have a way to approach that problem yet. In the slightly longer term—five or ten years—there’s a profound opportunity to come together as a polis and get precise about what we mean by justice or fairness with respect to certain protected classes. Does that mean it’s got an equal false positive rate? Does that mean it has an equal false negative rate? What is the tradeoff that we’re willing to make? What are the constraints that we want to put on this model-building process? That’s a profound question, and we haven’t needed to address it until now. There’s going to be a civic conversation in the next few years about how to make these concepts explicit….(More) (Video)”
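Christian’s questions about equal false positive and false negative rates translate directly into a measurable audit. The sketch below is a hypothetical illustration (the record format, field names and numbers are invented, not from any real audit) of computing both rates per protected group:

```python
from collections import defaultdict

def group_error_rates(records, group_key, label_key, pred_key):
    """Compute false positive rate (FPR) and false negative rate (FNR) per group."""
    counts = defaultdict(lambda: {"fp": 0, "neg": 0, "fn": 0, "pos": 0})
    for r in records:
        c = counts[r[group_key]]
        if r[label_key] == 0:                       # true negative case
            c["neg"] += 1
            c["fp"] += int(r[pred_key] == 1)        # model wrongly flagged it
        else:                                       # true positive case
            c["pos"] += 1
            c["fn"] += int(r[pred_key] == 0)        # model wrongly missed it
    return {
        g: {
            "FPR": c["fp"] / c["neg"] if c["neg"] else None,
            "FNR": c["fn"] / c["pos"] if c["pos"] else None,
        }
        for g, c in counts.items()
    }

records = [
    {"group": "A", "label": 1, "pred": 1},
    {"group": "A", "label": 0, "pred": 1},  # false positive
    {"group": "A", "label": 1, "pred": 0},  # false negative
    {"group": "B", "label": 1, "pred": 1},
    {"group": "B", "label": 0, "pred": 0},
    {"group": "B", "label": 0, "pred": 1},  # false positive
]
print(group_error_rates(records, "group", "label", "pred"))
# {'A': {'FPR': 1.0, 'FNR': 0.5}, 'B': {'FPR': 0.5, 'FNR': 0.0}}
```

Disparities between the per-group numbers are exactly the tradeoffs Christian says the polis will have to debate.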

Big data promise exponential change in healthcare


Gonzalo Viña in the Financial Times (Special Report): “When a top Formula One team is using pit stop data-gathering technology to help a drugmaker improve the way it makes ventilators for asthma sufferers, there can be few doubts that big data are transforming pharmaceutical and healthcare systems.

GlaxoSmithKline employs online technology and a data algorithm developed by F1’s elite McLaren Applied Technologies team to minimise the risk of leakage from its best-selling Ventolin (salbutamol) bronchodilator drug.

Using multiple sensors and hundreds of thousands of readings, the potential for leakage is coming down to “close to zero”, says Brian Neill, diagnostics director in GSK’s programme and risk management division.

This apparently unlikely venture for McLaren, known more as the team of such star drivers as Fernando Alonso and Jenson Button, extends beyond the work it does with GSK. It has partnered with Birmingham Children’s Hospital in a £1.8m project utilising McLaren’s expertise in analysing data during a motor race to collect such information from patients as their heart and breathing rates and oxygen levels. Imperial College London, meanwhile, is making use of F1 sensor technology to detect neurological dysfunction….

Big data analysis is already helping to reshape sales and marketing within the pharmaceuticals business. Great potential, however, lies in its ability to fine-tune research and clinical trials, as well as providing new measurement capabilities for doctors, insurers and regulators, and even patients themselves. Its applications seem infinite….

The OECD last year said governments needed better data governance rules given the “high variability” among OECD countries in protecting patient privacy. Recently, DeepMind, the artificial intelligence company owned by Google, signed a deal with a UK NHS trust to process, via a mobile app, medical data relating to 1.6m patients. Privacy advocates describe this as “worrying”. Julia Powles, a University of Cambridge technology law expert, asks if the company is being given “a free pass” on the back of “unproven promises of efficiency and innovation”.

Brian Hengesbaugh, partner at law firm Baker & McKenzie in Chicago, says the process of solving such problems remains “under-developed”… (More)

Misinformation on social media: Can technology save us?


From the Conversation: “…Since we cannot pay attention to all the posts in our feeds, algorithms determine what we see and what we don’t. The algorithms used by social media platforms today are designed to prioritize engaging posts – ones we’re likely to click on, react to and share. But a recent analysis found intentionally misleading pages got at least as much online sharing and reaction as real news.

This algorithmic bias toward engagement over truth reinforces our social and cognitive biases. As a result, when we follow links shared on social media, we tend to visit a smaller, more homogeneous set of sources than when we conduct a search and visit the top results.

Existing research shows that being in an echo chamber can make people more gullible about accepting unverified rumors. But we need to know a lot more about how different people respond to a single hoax: Some share it right away, others fact-check it first.

We are simulating a social network to study this competition between sharing and fact-checking. We are hoping to help untangle conflicting evidence about when fact-checking helps stop hoaxes from spreading and when it doesn’t. Our preliminary results suggest that the more segregated the community of hoax believers, the longer the hoax survives. Again, it’s not just about the hoax itself but also about the network.
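The authors’ actual model is not given in the excerpt, but a toy version of such a simulation is easy to sketch: seed a hoax at one node of a random network, and let each newly exposed neighbour either fact-check it (becoming immune) or believe it and re-share. All parameters and names below are assumptions for illustration:

```python
import random

def simulate_hoax(n=1000, avg_degree=10, p_fact_check=0.3, seed=42):
    """Toy cascade model: a hoax spreads from one node; each exposed
    user either fact-checks (immune) or believes and re-shares."""
    rng = random.Random(seed)
    # Build a random undirected network (a crude Erdos-Renyi-like structure)
    neighbours = [[] for _ in range(n)]
    for _ in range(n * avg_degree // 2):
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b:
            neighbours[a].append(b)
            neighbours[b].append(a)

    believers, debunkers = {0}, set()   # node 0 is patient zero
    frontier = {0}
    while frontier:
        next_frontier = set()
        for node in frontier:
            for nb in neighbours[node]:
                if nb in believers or nb in debunkers:
                    continue
                if rng.random() < p_fact_check:
                    debunkers.add(nb)        # checked first; hoax rejected
                else:
                    believers.add(nb)        # believed; will re-share next step
                    next_frontier.add(nb)
        frontier = next_frontier
    return len(believers), len(debunkers)

print(simulate_hoax())  # (believer count, debunker count) after the cascade
```

Varying `p_fact_check` or the network structure in a sketch like this is one way to probe the claim that more segregated communities keep hoaxes alive longer.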

Many people are trying to figure out what to do about all this. According to Mark Zuckerberg’s latest announcement, Facebook teams are testing potential options. And a group of college students has proposed a way to simply label shared links as “verified” or not.

Some solutions remain out of reach, at least for the moment. For example, we can’t yet teach artificial intelligence systems how to discern between truth and falsehood. But we can tell ranking algorithms to give higher priority to more reliable sources…
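That last suggestion, blending predicted engagement with a source-reliability signal, can be pictured as a scoring function. The weights and field names below are invented for illustration and are not any platform’s actual formula:

```python
def rank_feed(posts, reliability, w_engagement=0.4, w_reliability=0.6):
    """Order posts by a blend of predicted engagement and source reliability."""
    def score(post):
        trust = reliability.get(post["source"], 0.5)  # unknown sources get a neutral prior
        return w_engagement * post["engagement"] + w_reliability * trust
    return sorted(posts, key=score, reverse=True)

posts = [
    {"title": "Shocking miracle cure!", "source": "hoax.example",    "engagement": 0.9},
    {"title": "New vaccine trial data", "source": "journal.example", "engagement": 0.4},
]
reliability = {"hoax.example": 0.1, "journal.example": 0.95}
print([p["title"] for p in rank_feed(posts, reliability)])
# With these weights the reliable story outranks the more "engaging" hoax.
```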

We can make our fight against fake news more efficient if we better understand how bad information spreads. If, for example, bots are responsible for many of the falsehoods, we can focus attention on detecting them. If, alternatively, the problem is with echo chambers, perhaps we could design recommendation systems that don’t exclude differing views….(More)”

Talent Gap Is a Main Roadblock as Agencies Eye Emerging Tech


Theo Douglas in GovTech: “U.S. public service agencies are closely eyeing emerging technologies, chiefly advanced analytics and predictive modeling, according to a new report from Accenture, but like their counterparts globally they must address talent and complexity issues before adoption rates will rise.

The report, Emerging Technologies in Public Service, draws on a nine-nation survey of IT officials across all levels of government in policing and justice, health and social services, revenue, border services, pension/Social Security and administration, and was released earlier this week.

It revealed a deep interest in emerging tech from the public sector, finding 70 percent of agencies are evaluating their potential — but a much lower adoption level, with just 25 percent going beyond piloting to implementation….

The revenue and tax industries have been early adopters of advanced analytics and predictive modeling, he said, while biometrics and video analytics are resonating with police agencies.

In Australia, the tax office found using voiceprint technology could save 75,000 work hours annually.

Closer to home, Utah Chief Technology Officer Dave Fletcher told Accenture that consolidating data centers into a virtualized infrastructure improved speed and flexibility, so some processes that once took weeks or months can now happen in minutes or hours.

Nationally, 70 percent of agencies have either piloted or implemented an advanced analytics or predictive modeling program. Biometrics and identity analytics were the next most popular technologies, with 29 percent piloting or implementing, followed by machine learning at 22 percent.

Those numbers contrast globally with Australia, where 68 percent of government agencies are piloting or implementing biometric and identity analytics programs; and with Germany and Singapore, where 27 percent and 57 percent of agencies respectively have piloted or adopted video analytics programs.

Overall, 78 percent of respondents said machine-learning technologies were either underway or already implemented in their agencies.

The benefits of embracing emerging tech that were identified ranged from finding better ways of working through automation to innovating and developing new services and reducing costs.

Agencies told Accenture their No. 1 objective was increasing customer satisfaction. But 89 percent said they’d expect a return on implementing intelligent technology within two years. Four-fifths, or 80 percent, agreed intelligent tech would improve employees’ job satisfaction…(More).

The ethical impact of data science


Theme issue of Phil. Trans. R. Soc. A compiled and edited by Mariarosaria Taddeo and Luciano Floridi: “This theme issue has the founding ambition of landscaping data ethics as a new branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing and use), algorithms (including artificial intelligence, artificial agents, machine learning and robots) and corresponding practices (including responsible innovation, programming, hacking and professional codes), in order to formulate and support morally good solutions (e.g. right conducts or right values). Data ethics builds on the foundation provided by computer and information ethics but, at the same time, it refines the approach endorsed so far in this research field, by shifting the level of abstraction of ethical enquiries, from being information-centric to being data-centric. This shift brings into focus the different moral dimensions of all kinds of data, even data that never translate directly into information but can be used to support actions or generate behaviours, for example. It highlights the need for ethical analyses to concentrate on the content and nature of computational operations—the interactions among hardware, software and data—rather than on the variety of digital technologies that enable them. And it emphasizes the complexity of the ethical challenges posed by data science. Because of such complexity, data ethics should be developed from the start as a macroethics, that is, as an overall framework that avoids narrow, ad hoc approaches and addresses the ethical impact and implications of data science and its applications within a consistent, holistic and inclusive framework. Only as a macroethics will data ethics provide solutions that can maximize the value of data science for our societies, for all of us and for our environments….(More)”

Table of Contents:

  • The dynamics of big data and human rights: the case of scientific research; Effy Vayena, John Tasioulas
  • Facilitating the ethical use of health data for the benefit of society: electronic health records, consent and the duty of easy rescue; Sebastian Porsdam Mann, Julian Savulescu, Barbara J. Sahakian
  • Faultless responsibility: on the nature and allocation of moral responsibility for distributed moral actions; Luciano Floridi
  • Compelling truth: legal protection of the infosphere against big data spills; Burkhard Schafer
  • Locating ethics in data science: responsibility and accountability in global and distributed knowledge production systems; Sabina Leonelli
  • Privacy is an essentially contested concept: a multi-dimensional analytic for mapping privacy; Deirdre K. Mulligan, Colin Koopman, Nick Doty
  • Beyond privacy and exposure: ethical issues within citizen-facing analytics; Peter Grindrod
  • The ethics of smart cities and urban science; Rob Kitchin
  • The ethics of big data as a public good: which public? Whose good?; Linnet Taylor
  • Data philanthropy and the design of the infraethics for information societies; Mariarosaria Taddeo
  • The opportunities and ethics of big data: practical priorities for a national Council of Data Ethics; Olivia Varley-Winter, Hetan Shah
  • Data science ethics in government; Cat Drew
  • The ethics of data and of data science: an economist’s perspective; Jonathan Cave
  • What’s the good of a science platform?; John Gallacher


Teaching an Algorithm to Understand Right and Wrong


Greg Satell at Harvard Business Review: “In his Nicomachean Ethics, Aristotle states that it is a fact that “all knowledge and every pursuit aims at some good,” but then continues, “What then do we mean by the good?” That, in essence, encapsulates the ethical dilemma. We all agree that we should be good and just, but it’s much harder to decide what that entails.

Since Aristotle’s time, the questions he raised have been continually discussed and debated. From the works of great philosophers like Kant, Bentham, and Rawls to modern-day cocktail parties and late-night dorm room bull sessions, the issues are endlessly mulled over and argued about without ever reaching a satisfying conclusion.

Today, as we enter a “cognitive era” of thinking machines, the problem of what should guide our actions is gaining newfound importance. If we find it so difficult to denote the principles by which a person should act justly and wisely, then how are we to encode them within the artificial intelligences we are creating? It is a question we need to answer soon.

Designing a Learning Environment

Every parent worries about what influences their children are exposed to. What TV shows are they watching? What video games are they playing? Are they hanging out with the wrong crowd at school? We try not to overly shelter our kids because we want them to learn about the world, but we don’t want to expose them to too much before they have the maturity to process it.

In artificial intelligence, these influences are called a “machine learning corpus.” For example, if you want to teach an algorithm to recognize cats, you expose it to thousands of pictures of cats and things that are not cats. Eventually, it figures out how to tell the difference between, say, a cat and a dog. Much as with human beings, it is through learning from these experiences that algorithms become useful.
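As a deliberately simplified sketch of learning from a corpus (the numeric feature vectors stand in for real images, and both the features and labels here are invented):

```python
from sklearn.linear_model import LogisticRegression

# A toy "corpus": each example is a feature vector extracted from an image,
# labelled 1 for cat, 0 for not-cat. Real systems learn from thousands of
# images; these four rows are made up purely for illustration.
corpus_features = [
    [0.9, 0.1, 0.8],  # cat
    [0.8, 0.2, 0.7],  # cat
    [0.1, 0.9, 0.2],  # not-cat
    [0.2, 0.8, 0.1],  # not-cat
]
corpus_labels = [1, 1, 0, 0]

model = LogisticRegression()
model.fit(corpus_features, corpus_labels)

# The model generalises from its corpus to a new, unseen example.
print(model.predict([[0.85, 0.15, 0.75]]))  # -> [1], i.e. "cat"
```

What goes into that corpus, as the Tay episode below shows, determines what the algorithm becomes.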

However, the process can go horribly awry, as in the case of Microsoft’s Tay, a Twitter bot that the company unleashed on the microblogging platform. In under a day, Tay went from being friendly and casual (“Humans are super cool”) to downright scary (“Hitler was right and I hate Jews”). It was profoundly disturbing.

Francesca Rossi, an AI researcher at IBM, points out that we often encode principles regarding influences into societal norms, such as what age a child needs to be to watch an R-rated movie or whether they should learn evolution in school. “We need to decide to what extent the legal principles that we use to regulate humans can be used for machines,” she told me.

However, in some cases algorithms can alert us to bias in our society that we might not have been aware of, such as when we Google “grandma” and see only white faces. “There is a great potential for machines to alert us to bias,” Rossi notes. “We need to not only train our algorithms but also be open to the possibility that they can teach us about ourselves.”…

Another issue that we will have to contend with is that we will have to decide not only what ethical principles to encode in artificial intelligences but also how they are coded. As noted above, “Thou shalt not kill” is for the most part a strict principle, admitting only a few rare exceptions, such as for the Secret Service or a soldier; other principles are more like preferences that are greatly affected by context….

As pervasive as artificial intelligence is set to become in the near future, the responsibility rests with society as a whole. Put simply, we need to take the standards by which artificial intelligences will operate just as seriously as those that govern how our political systems operate and how our children are educated.

It is a responsibility that we cannot shirk….(More)