How Can We Use Administrative Data to Prevent Homelessness among Youth Leaving Care?


Article by Naomi Nichols: “In 2017, I was part of a team of people at the Canadian Observatory on Homelessness and A Way Home Canada who wrote a policy brief titled Child Welfare and Youth Homelessness in Canada: A proposal for action. Drawing on the results of the first pan-Canadian survey on youth homelessness, Without a Home: The National Youth Homelessness Survey, the brief focused on the disproportionate number of young people who had been involved with child protection services and then later became homeless. Indeed, 57.8% of homeless youth surveyed reported some type of involvement with child protection services over their lifetime. By comparison, in the general population, only 0.3% of young people receive child welfare services. This means that youth experiencing homelessness are far more likely to report interactions with the child welfare system than young people in the general population. 

Where research reveals systematic patterns of exclusion and neglect – that is, where findings reveal that one group is experiencing disproportionately negative outcomes (relative to the general population) in a particular public sector context – this suggests the need for changes in public policy, programming and practice. Since producing this brief, I have been working with an incredibly talented and passionate McGill undergraduate student (who also happens to be the Vice President of Youth in Care Canada), Arisha Khan. Together, we have been exploring just uses of data to better serve the interests of those young people who depend on the state for their access to basic services (e.g., housing, healthcare and food) as well as their self-efficacy and status as citizens. 

One component of this work revolved around a grant application that has just been funded by the Social Sciences and Humanities Research Council of Canada (Data Justice: Fostering equitable data-led strategies to prevent, reduce and end youth homelessness). Another aspect of our work revolved around a policy brief, which we co-wrote and published with the Montreal data-for-good organization, Powered by Data. The brief outlines how a rights-based and custodial approach to administrative data could a) effectively support young people in and leaving care to participate more actively in their transition planning and engage in institutional self-advocacy; and b) enable systemic oversight of intervention implementation and outcomes for young people in and leaving the provincial care system. We produced this brief with the hope that it would be useful to government decision-makers, service providers, researchers, and advocates interested in understanding how institutional data could be used to improve outcomes for youth in and leaving care. In particular, we wanted to explore whether a different orientation to data collection and use in child protection systems could prevent young people from graduating from provincial child welfare systems into homelessness. In addition to this practical concern, we also undertook to think through the ethical and human rights implications of more recent moves towards data-driven service delivery in Canada, focusing on how we might make this move with the best interests of young people in mind. 

As data collection, management and use practices have become more popular, research is beginning to illuminate how these new monitoring, evaluative and predictive technologies are changing governance processes within and across the public sector, as well as in civil society. ….(More)”.

How we can place a value on health care data


Report by E&Y: “Unlocking the power of health care data to fuel innovation in medical research and improve patient care is at the heart of today’s health care revolution. When curated or consolidated into a single longitudinal dataset, patient-level records will trace a complete story of a patient’s demographics, health, wellness, diagnosis, treatments, medical procedures and outcomes. Health care providers need to recognize patient data for what it is: a valuable intangible asset desired by multiple stakeholders, a treasure trove of information.

Among the universe of providers holding significant data assets, the United Kingdom’s National Health Service (NHS) is the single largest integrated health care provider in the world. Its patient records cover the entire UK population from birth to death.

We estimate that the 55 million patient records held by the NHS today may have an indicative market value of several billion pounds to a commercial organization. We estimate also that the value of the curated NHS dataset could be as much as £5bn per annum and deliver around £4.6bn of benefit to patients per annum, in potential operational savings for the NHS, enhanced patient outcomes and generation of wider economic benefits to the UK….(More)”.

The Hidden Costs of Automated Thinking


Jonathan Zittrain in The New Yorker: “Like many medications, the wakefulness drug modafinil, which is marketed under the trade name Provigil, comes with a small, tightly folded paper pamphlet. For the most part, its contents—lists of instructions and precautions, a diagram of the drug’s molecular structure—make for anodyne reading. The subsection called “Mechanism of Action,” however, contains a sentence that might induce sleeplessness by itself: “The mechanism(s) through which modafinil promotes wakefulness is unknown.”

Provigil isn’t uniquely mysterious. Many drugs receive regulatory approval, and are widely prescribed, even though no one knows exactly how they work. This mystery is built into the process of drug discovery, which often proceeds by trial and error. Each year, any number of new substances are tested in cultured cells or animals; the best and safest of those are tried out in people. In some cases, the success of a drug promptly inspires new research that ends up explaining how it works—but not always. Aspirin was discovered in 1897, and yet no one convincingly explained how it worked until 1995. The same phenomenon exists elsewhere in medicine. Deep-brain stimulation involves the implantation of electrodes in the brains of people who suffer from specific movement disorders, such as Parkinson’s disease; it’s been in widespread use for more than twenty years, and some think it should be employed for other purposes, including general cognitive enhancement. No one can say how it works.

This approach to discovery—answers first, explanations later—accrues what I call intellectual debt. It’s possible to discover what works without knowing why it works, and then to put that insight to use immediately, assuming that the underlying mechanism will be figured out later. In some cases, we pay off this intellectual debt quickly. But, in others, we let it compound, relying, for decades, on knowledge that’s not fully known.

In the past, intellectual debt has been confined to a few areas amenable to trial-and-error discovery, such as medicine. But that may be changing, as new techniques in artificial intelligence—specifically, machine learning—increase our collective intellectual credit line. Machine-learning systems work by identifying patterns in oceans of data. Using those patterns, they hazard answers to fuzzy, open-ended questions. Provide a neural network with labelled pictures of cats and other, non-feline objects, and it will learn to distinguish cats from everything else; give it access to medical records, and it can attempt to predict a new hospital patient’s likelihood of dying. And yet, most machine-learning systems don’t uncover causal mechanisms. They are statistical-correlation engines. They can’t explain why they think some patients are more likely to die, because they don’t “think” in any colloquial sense of the word—they only answer. As we begin to integrate their insights into our lives, we will, collectively, begin to rack up more and more intellectual debt….(More)”.
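Zittrain's point that machine-learning systems are "statistical-correlation engines" that answer without explaining can be illustrated with a deliberately crude sketch. The data and feature names below are invented for illustration; real systems use far richer models, but the asymmetry is the same: the code produces a risk score, not a mechanism.

```python
# A toy "correlation engine" (hypothetical data): it weights features by how
# often they co-occur with the outcome, then scores new cases. It yields an
# answer, but nothing in it explains *why* a patient is at risk.

# Invented patient records: (age_over_70, abnormal_vitals, prior_admissions) -> died
records = [
    ((1, 1, 3), 1),
    ((0, 0, 0), 0),
    ((1, 0, 2), 1),
    ((0, 1, 1), 0),
    ((1, 1, 4), 1),
    ((0, 0, 1), 0),
]

def fit_correlations(data):
    """Weight each feature by its raw co-occurrence with the outcome."""
    n_features = len(data[0][0])
    weights = []
    for j in range(n_features):
        pos = sum(x[j] for x, y in data if y == 1)
        neg = sum(x[j] for x, y in data if y == 0)
        weights.append(pos - neg)  # a crude correlation score, not a cause
    return weights

def risk_score(weights, patient):
    """Score a new patient against the learned pattern."""
    return sum(w * x for w, x in zip(weights, patient))

weights = fit_correlations(records)
new_patient = (1, 1, 2)
print(risk_score(weights, new_patient))  # → 18: an answer, with no "why"
```

The "intellectual debt" lives in that last line: the score may be useful in practice long before anyone can say what mechanism, if any, links the features to the outcome.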

From Hippocrates to Artificial Intelligence: Moving Towards a Collective Intelligence


Carlos María Galmarini at Open Mind: “Modern medicine is based upon the work of Hippocrates and his disciples, compiled in more than 70 books that comprise the Hippocratic body of work. In their essence, these writings declare that any illness originates with natural causes. Therefore, medicine must be based on detailed observation, reason, and experience in order to establish a diagnosis, prognosis, and treatment. The Hippocratic tradition stresses the importance of symptoms and the clinical exam. As a result, medicine abandoned superstition and the magic performed by priest-doctors, and it was transformed into a real, experience-based science….

A complementary combination of both intelligences (human and artificial) could help each overcome the other’s shortcomings and limitations. As we incorporate intelligent technologies into medical processes, a new, more powerful form of collaboration will emerge. Analogous to the past, when the automation of human tasks completely changed the known world and ignited an evolution in the offering of products and services, the combination of human and artificial intelligence will create a new type of collective intelligence capable of building more efficient organizations, and in the healthcare industry, it will be able to solve problems that until now have been unfathomable to the human mind alone.

Finally, it is worth remembering that fact-based sciences are divided into natural and human disciplines. Medicine occupies a special place, straddling both. It can be difficult to establish the similarities between a doctor who works, for example, with rules defined by specific clinical trials and a traditional family practitioner. The former would be more closely related to a natural science, and the latter to a more human science – “the art of medicine.”

The combination of human and artificial intelligence in a new type of collective intelligence will enable doctors themselves to be a combination of the two. In other words, the art of medicine – human science – based on the analysis of big data – natural science. A new collective intelligence working on behalf of a wiser medicine….(More)”.

Smart Cities in Application: Healthcare, Policy, and Innovation


Book edited by Stan McClellan: “This book explores categories of applications and driving factors surrounding the Smart City phenomenon. The contributing authors provide perspective on Smart Cities, covering numerous applications and classes of applications. The book uses a top-down exploration of the driving factors in Smart Cities, through focal areas including “Smart Healthcare,” “Public Safety & Policy Issues,” and “Science, Technology, & Innovation.” Contributors have direct and substantive experience with important aspects of Smart Cities and discuss issues with technologies & standards, roadblocks to implementation, innovations that create new opportunities, and other factors relevant to emerging Smart City infrastructures….(More)”.

Betting on biometrics to boost child vaccination rates


Ben Parker at The New Humanitarian: “Thousands of children between the ages of one and five are due to be fingerprinted in Bangladesh and Tanzania in the largest biometric scheme of its kind ever attempted, the Geneva-based vaccine agency, Gavi, announced recently.

Although the scheme includes data protection safeguards – and its sponsors are cautious not to promise immediate benefits – it is emerging during a widening debate on data protection, technology ethics, and the risks and benefits of biometric ID in development and humanitarian aid.

Gavi, a global vaccine provider, is teaming up with Japanese and British partners in the venture. It is the first time such a trial has been done on this scale, according to Gavi spokesperson James Fulker.

Being able to track a child’s attendance at vaccination centres, and replace “very unreliable” paper-based records, can help target the 20 million children who are estimated to miss key vaccinations, most in poor or remote communities, Fulker said.

Up to 20,000 children will have their fingerprints taken and linked to their records in existing health projects. That collection effort will be managed by Simprints, a UK-based not-for-profit enterprise specialising in biometric technology in international development, according to Christine Kim, the company’s head of strategic partnerships….

Ethics and legal safeguards

Kim said Simprints would apply data protection standards equivalent to the EU’s General Data Protection Regulation (GDPR), even if national legislation did not demand it. Families could opt out without any penalties, and informed consent would apply to any data gathering. She added that the fieldwork would be approved by national governments, and oversight would also come from institutional review boards at universities in the two countries.

Fulker said Gavi had also commissioned a third-party review to verify Simprints’ data protection and security methods.

For critics of biometrics use in humanitarian settings, however, any such plan raises red flags….

Data protection analysts have long been arguing that gathering digital ID and biometric data carries particular risks for vulnerable groups who face conflict or oppression: their data could be shared or leaked to hostile parties who could use it to target them.

In a recent commentary on biometrics and aid, Linda Raftree told The New Humanitarian that “the greatest burden and risk lies with the most vulnerable, whereas the benefits accrue to [aid] agencies.”

And during a panel discussion on “Digital Do No Harm” held last year in Berlin, humanitarian professionals and data experts discussed a range of threats and unintended consequences of new technologies, noting that they are as yet hard to predict….(More)”.

New App Uses Crowdsourcing to Find You an EpiPen in an Emergency


Article by Shaunacy Ferro: “Many people at risk for severe allergic reactions to things like peanuts and bee stings carry EpiPens. These tools inject the medication epinephrine into one’s bloodstream to control immune responses immediately. But exposure can turn into life-threatening situations in a flash: Without EpiPens, people could suffer anaphylactic shock in less than 15 minutes as they wait for an ambulance. Being without an EpiPen or other auto-injector can have deadly consequences.

EPIMADA, a new app created by researchers at Israel’s Bar-Ilan University, is designed to save the lives of people who go into anaphylactic shock when they don’t have EpiPens handy. The app uses the same type of algorithms that ride-hailing services use to match drivers and riders by location—in this case, EPIMADA matches people in distress with nearby strangers carrying EpiPens. David Schwartz, director of the university’s Social Intelligence Lab and one of the app’s co-creators, told The Jerusalem Post that the app currently has hundreds of users….
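The article compares EPIMADA's matching to the algorithms ride-hailing services use to pair drivers and riders by location. A minimal sketch of that idea, with invented names and coordinates (EPIMADA's actual algorithm is not public), is a nearest-neighbour search over great-circle distance:

```python
import math

# Hypothetical sketch of location-based matching: pair a person in distress
# with the nearest registered EpiPen carrier. All data below is invented.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_carrier(person, carriers):
    """Return the (name, lat, lon) carrier closest to the person in distress."""
    return min(carriers, key=lambda c: haversine_km(person[0], person[1], c[1], c[2]))

# Invented example: three carriers around Tel Aviv, one alert
carriers = [
    ("Avi", 32.0700, 34.7900),
    ("Noa", 32.0860, 34.7818),
    ("Dan", 32.1100, 34.8400),
]
person_in_distress = (32.0853, 34.7810)
print(nearest_carrier(person_in_distress, carriers)[0])  # → Noa
```

A production system would of course add live location updates, carrier consent, and dispatching several nearby carriers at once, since the first match may not respond in time.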

EPIMADA serves as a way to crowdsource medication from fellow patients who might be close by and able to help. While it may seem unlikely that people would rush to give up their own expensive life-saving tool for a stranger, EPIMADA co-creator Michal Gaziel Yablowitz, a doctoral student in the Social Intelligence Lab, explained in a press release that “preliminary research results show that allergy patients are highly motivated to give their personal EpiPen to patient-peers in immediate need.”…(More)”.

Nudging Us To Health


Blog by Chuck Dinerstein: “Policymakers love nudges – predictably altering people’s behavior without forbidding choices or changing economic incentives. That is especially true for our food choices since we all eat, and it is clear that diet does have some effect on our health. The “promise to improve people’s diet at a fraction of the cost of economic incentives or education programs without imposing new taxes or constraints on business or consumers,” is a have my cake and eat it world. A study by the masters of understanding our behavior, the marketers, sheds light on which nudges work best….

Cognitive nudges include those often-invoked nutritional labels, or evaluative labels that skip the verbiage and are green for buy, red for put it back on the shelf, and yellow for it’s up to you. Visibility enhancements refer to getting the item into your visual field at the right time and place, like eye-level on the shelf or in the check-out line. Nudges that appeal to our feelings include all the images making food appealing – think food porn – or simple slogans, e.g., “natural,” “healthy choice,” or “just like mama made” (assuming mom was a good cook). As the researchers point out, there are no labels on foods that promote guilt or concern, as we see on tobacco’s Surgeon General warnings. Finally, there are behavioral nudges that make one choice easier than another: precut fruits and vegetables, or big utensils for vegetables and tiny ones for fried chicken. It also includes smaller plates that might look fuller, or large drink glasses that are 80% ice.

Here is the graphic of their findings:

Graphic by Pierre Chandon

  • Nudges do move behavior, although the effect is small – a change of about 124 calories, or in the authors’ words, “eight fewer teaspoons of sugar.” For those who do not eat sugar by the spoonful, you might consider this to be about 1½ jelly-filled Munchkins.
  • As the graph shows, appealing to our intellect works least well, appeals to our emotions are twice as effective, and making it easy to do the “right thing” works best – five-fold better than educating us.
  • Nudges are better at decreasing bad choices than increasing good ones. “it is easier to make people eat less chocolate cake than to make them eat more vegetables…” In fact, total eating was basically unaffected by nudges, again as the authors write, “this finding is consistent with what we know about the difficulty – perhaps even pointlessness – of hypocaloric diets.”
  • The effect of nudges is a lot less when you’re shopping than when you’re eating.
  • Statistically, the effect of nudges appears more pronounced in isolation than when considered in conjunction with where they take place and other contextual information.
  • Nudges were equally effective, or ineffective, with adults and children, although you might expect adults to be more responsive given their presumably better understanding of diet and health.

The study makes two points. First, nudges can move behavior a little bit, and the fact that they have few recognized costs means that policymakers will continue to utilize them. Second, it provides an analytic framework highlighting areas where the evidence is scant and could, I suggest, nudge researchers to explore….(More)”.

Crosscope


Crosscope is revolutionizing the way practitioners and researchers are leveraging digital pathology to share and solve medical cases.

Since the 1900s, cancer diagnosis has been limited to the subjective interpretation of what the pathologist could see under a microscope. To transform the way we perform pathology and cancer research, we are developing new tools that leverage powerful AI and the perspectives of medical experts at the same time.

At Crosscope, we are building a place for the convergence of the collective intelligence of our massive online medical community and AI. We are committed to developing cutting-edge AI tools for better decision support in cancer care. We aim to be the largest database of tagged histopathology images, which will contain far more information than genomics alone and will be crucial in the early diagnosis of cancer….(More)”.

AI Ethics — Too Principled to Fail?


Paper by Brent Mittelstadt: “AI Ethics is now a global topic of discussion in academic and policy circles. At least 63 public-private initiatives have produced statements describing high-level principles, values, and other tenets to guide the ethical development, deployment, and governance of AI. According to recent meta-analyses, AI Ethics has seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics.

Despite the initial credibility granted to a principled approach to AI Ethics by the connection to principles in medical ethics, there are reasons to be concerned about its future impact on AI development and governance. Significant differences exist between medicine and AI development that suggest a principled approach in the latter may not enjoy success comparable to the former. Compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. These differences suggest we should not yet celebrate consensus around high-level principles that hide deep political and normative disagreement….(More)”.