These patients are sharing their data to improve healthcare standards


Article by John McKenna: “We’ve all heard about donating blood, but how about donating data?

Chronic non-communicable diseases (NCDs) like diabetes, heart disease and epilepsy are predicted by the World Health Organization to account for 57% of all disease by 2020.

Heart disease and stroke are the world’s biggest killers.

This has led some experts to call NCDs the “greatest challenge to global health”.

Could data provide the answer?

Today over 600,000 patients from around the world share data on more than 2,800 chronic diseases to improve research and treatment of their conditions.

People who join the PatientsLikeMe online community share information on everything from their medication and treatment plans to their emotional struggles.

Many of the participants say that it is hugely beneficial just to know there is someone else out there going through similar experiences.

But through its use of data, the platform also has the potential for far more wide-ranging benefits to help improve the quality of life for patients with chronic conditions.

Give data, get data

PatientsLikeMe is one of a swathe of emerging data platforms in the healthcare sector helping provide a range of tech solutions to health problems, including speeding up the process of clinical trials using real-time data analysis or using blockchain to enable the secure sharing of patient data.

Its philosophy is “give data, get data”. In practice, this means that every patient using the website has access to an array of crowd-sourced information from the wider community, such as common medication side-effects and patterns in sufferers’ symptoms and behaviour….(More)”.
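To make the “give data, get data” idea concrete, here is a minimal sketch of how crowd-sourced reports might be rolled up into the community-level view each contributor gets back. The field names and sample reports are purely illustrative assumptions, not PatientsLikeMe’s actual data model.

```python
from collections import Counter, defaultdict

# Hypothetical patient-submitted reports; the field names and values are
# illustrative only, not PatientsLikeMe's actual data model.
reports = [
    {"patient": "p1", "medication": "lamotrigine", "side_effects": ["dizziness", "rash"]},
    {"patient": "p2", "medication": "lamotrigine", "side_effects": ["dizziness"]},
    {"patient": "p3", "medication": "metformin", "side_effects": ["nausea"]},
]

def aggregate_side_effects(reports):
    """Roll individual reports up into community-level side-effect counts per medication."""
    counts = defaultdict(Counter)
    for report in reports:
        counts[report["medication"]].update(report["side_effects"])
    return counts

# "Give data, get data": a patient who contributes a report can query the
# pooled view, e.g. the most commonly reported effects for their own drug.
community_view = aggregate_side_effects(reports)
print(community_view["lamotrigine"].most_common(3))  # [('dizziness', 2), ('rash', 1)]
```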

Using Data to Raise the Voices of Working Americans


Ida Rademacher at the Aspen Institute: “…At the Aspen Institute Financial Security Program, we sense a growing need to ground these numbers in what people experience day-to-day. We’re inspired by projects like the Financial Diaries that helped create empathy for what the statistics mean. …the Diaries was a time-delimited project, and the insights we can gain from major banking institutions are somewhat limited in their ability to show the challenges of economically marginalized populations. That’s why we’ve recently launched a consumer insights initiative to develop and translate a more broadly sourced set of data that lifts the curtain on the financial lives of low- and moderate-income US consumers. What does it really mean to lack $400 when you need it? How do people cope? What are the aspirations and anxieties that fuel choices? Which strategies work and which fall flat? Our work exists to focus the dialogue about financial insecurity by keeping an ear to the ground and amplifying what we hear. Our ultimate goal: Inspire new solutions that react to reality, ones that can genuinely improve the financial well-being of many.

Our consumer insights initiative sees power in partnerships and collaboration. We’re building a big tent for a range of actors to query and share what their data says: private sector companies, public programs, and others who see unique angles into the financial lives of low- and moderate-income households. We are creating a new forum to lift up these firms serving consumers – and in doing so, we’re raising the voices of consumers themselves.

One example of this work is our Consumer Insights Collaborative (CIC), a group of nine leading non-profits from across the country. Each has a strong sense of the challenges and opportunities on the ground because every day their work brings them face-to-face with a wide array of consumers, many of whom are low- and moderate-income families. And most already work independently to learn from their data. Take EARN and its Big Data on Small Savings project; the Financial Clinic’s insights series called Change Matters; Mission Asset Fund’s R&D Lab focused on human-centered design; and FII, which uses data collection as part of its main service.

Through the CIC, they join forces to see more than any one nonprofit can on its own. Together CIC members articulate common questions and synthesize collective answers. In the coming months we will publish a first-of-its-kind report on a jointly posed question: What are the dimensions and drivers of short-term financial stability?

An added bonus of partnerships like the CIC is the community of practice that naturally emerges. We believe that data scientists from all walks can, and indeed must, learn from each other to have the greatest impact. Our initiative especially encourages cooperative capacity-building around data security and privacy. We acknowledge that as access to information grows, so does the risk to consumers themselves. We endorse collaborative projects that value ethics, respect, and integrity as much as they value cross-organizational learning.

As our portfolio grows, we will invite an even broader network to engage. We’re already working with NEST Insights to draw on NEST’s extensive administrative data on retirement savings, with an aim to understand more about the long-term implications of non-traditional work and unstable household balance sheets on financial security….(More)”.

Democracy is an information system


Bruce Schneier on Security: “That’s the starting place of our new paper: “Common-Knowledge Attacks on Democracy.” In it, we look at democracy through the lens of information security, trying to understand the current waves of Internet disinformation attacks. Specifically, we wanted to explain why the same disinformation campaigns that act as a stabilizing influence in Russia are destabilizing in the United States.

The answer revolves around the different ways autocracies and democracies work as information systems. We start by differentiating between two types of knowledge that societies use in their political systems. The first is common political knowledge, which is the body of information that people in a society broadly agree on. People agree on who the rulers are and what their claim to legitimacy is. People agree broadly on how their government works, even if they don’t like it. In a democracy, people agree about how elections work: how districts are created and defined, how candidates are chosen, and that their votes count — even if only roughly and imperfectly.

We contrast this with a very different form of knowledge that we call contested political knowledge, which is, broadly, things that people in society disagree about. Examples are easy to bring to mind: how much of a role the government should play in the economy, what the tax rules should be, what sorts of regulations are beneficial and what sorts are harmful, and so on.

This seems basic, but it gets interesting when we contrast both of these forms of knowledge across autocracies and democracies. These two forms of government have incompatible needs for common and contested political knowledge.

For example, democracies draw upon the disagreements within their population to solve problems. Different political groups have different ideas of how to govern, and those groups vie for political influence by persuading voters. There is also long-term uncertainty about who will be in charge and able to set policy goals. Ideally, this is the mechanism through which a polity can harness the diversity of perspectives of its members to better solve complex policy problems. When no-one knows who is going to be in charge after the next election, different parties and candidates will vie to persuade voters of the benefits of different policy proposals.

But in order for this to work, there needs to be common knowledge both of how government functions and how political leaders are chosen. There also needs to be common knowledge of who the political actors are, what they and their parties stand for, and how they clash with each other. Furthermore, this knowledge is decentralized across a wide variety of actors — an essential element, since ordinary citizens play a significant role in political decision making.

Contrast this with an autocracy….(More)”.

Driven to safety — it’s time to pool our data


Kevin Guo at TechCrunch: “…Anyone with experience in the artificial intelligence space will tell you that the quality and quantity of training data is one of the most important inputs in building real-world-functional AI. This is why today’s large technology companies continue to collect and keep detailed consumer data, despite recent public backlash. From search engines, to social media, to self-driving cars, data — in some cases even more than the underlying technology itself — is what drives value in today’s technology companies.

It should be no surprise then that autonomous vehicle companies do not publicly share data, even in instances of deadly crashes. When it comes to autonomous vehicles, the public interest (making safe self-driving cars available as soon as possible) is clearly at odds with corporate interests (making as much money as possible on the technology).

We need to create industry and regulatory environments in which autonomous vehicle companies compete based upon the quality of their technology — not just upon their ability to spend hundreds of millions of dollars to collect and silo as much data as possible (yes, this is how much gathering this data costs). In today’s environment the inverse is true: autonomous car manufacturers are focusing on gathering as many miles of data as possible, with the intention of feeding more information into their models than their competitors, all the while avoiding working together….

This data is complex and diverse, yet public — I am not suggesting that people hand over private, privileged data, but actively pool and combine what the cars are seeing. There’s a reason that many of the autonomous car companies are driving millions of virtual miles — they’re attempting to get as much active driving data as they can. Beyond the fact that they drove those miles, what truly makes that data something that they have to hoard? By sharing these miles, by seeing as much of the world in as much detail as possible, these companies can focus on making smarter, better autonomous vehicles and bring them to market faster.

If you’re reading this and thinking it’s deeply unfair, I encourage you to once again consider that 40,000 people are dying preventable deaths every year in America alone. If you are not compelled by the massive life-saving potential of the technology, consider that publicly licensable self-driving data sets would accelerate innovation by removing a substantial portion of the capital barrier-to-entry in the space and increasing competition….(More)”

Blockchain systems are tracking food safety and origins


Nir Kshetri at The Conversation: “When a Chinese consumer buys a package labeled “Australian beef,” there’s only a 50-50 chance the meat inside is, in fact, Australian beef. It could just as easily contain rat, dog, horse or camel meat – or a mixture of them all. It’s gross and dangerous, but also costly.

Fraud in the global food industry is a multi-billion-dollar problem that has lingered for years, duping consumers and even making them ill. Food manufacturers around the world are concerned – as many as 39 percent of them are worried that their products could be easily counterfeited, and 40 percent say food fraud is hard to detect.

In researching blockchain for more than three years, I have become convinced that this technology’s potential to prevent fraud and strengthen security could fight agricultural fraud and improve food safety. Many companies agree, and are already running various tests, including tracking wine from grape to bottle and even following individual coffee beans through international trade.

Tracing food items

An early trial of a blockchain system to track food from farm to consumer was in 2016, when Walmart collected information about pork being raised in China, where consumers are rightly skeptical about sellers’ claims of what their food is and where it’s from. Employees at a pork farm scanned images of farm inspection reports and livestock health certificates, storing them in a secure online database where the records could not be deleted or modified – only added to.

As the animals moved from farm to slaughter to processing, packaging and then to stores, the drivers of the freight trucks played a key role. At each step, they would collect documents detailing the shipment, storage temperature and other inspections and safety reports, and official stamps as authorities reviewed them – just as they did normally. In Walmart’s test, however, the drivers would photograph those documents and upload them to the blockchain-based database. The company controlled the computers running the database, but government agencies’ systems could also be involved, to further ensure data integrity.

As the pork was packaged for sale, a sticker was put on each container, displaying a smartphone-readable code that would link to that meat’s record on the blockchain. Consumers could scan the code right in the store and assure themselves that they were buying exactly what they thought they were. More recent advances in the technology of the stickers themselves have made them more secure and counterfeit-resistant.
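The “add-only” record-keeping at the heart of this trial is easiest to see in miniature. The sketch below is an illustrative hash-chained, append-only ledger — an assumption-laden stand-in, not Walmart’s or its vendors’ actual system — in which every new entry commits to the hash of the previous one, so any later edit or deletion breaks the chain and is detectable.

```python
import hashlib
import json
import time

class ShipmentLedger:
    """Illustrative append-only, hash-chained ledger (not Walmart's actual system)."""

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> str:
        """Add a record; each entry commits to the hash of the previous one."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"record": record, "timestamp": time.time(), "prev_hash": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})
        return digest  # a hash like this could back the smartphone-readable sticker code

    def verify(self) -> bool:
        """Recompute every hash; any edited or deleted entry breaks the chain."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: entry[k] for k in ("record", "timestamp", "prev_hash")}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev_hash"] != prev_hash or entry["hash"] != digest:
                return False
            prev_hash = entry["hash"]
        return True

ledger = ShipmentLedger()
ledger.append({"step": "farm", "doc": "livestock_health_certificate.jpg"})
sticker_code = ledger.append({"step": "transport", "storage_temp_c": 2})
assert ledger.verify()
```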

Walmart did similar tests on mangoes imported to the U.S. from Latin America. The company found that it took only 2.2 seconds for consumers to find out an individual fruit’s weight, variety, growing location, time it was harvested, date it passed through U.S. customs, when and where it was sliced, which cold-storage facility the sliced mango was held in and for how long it waited before being delivered to a store….(More)”.

Crowdsourced data informs women which streets are safe


Springwise: “Safe & the City is a free app designed to help users identify which streets are safe for them. Sexual harassment and violent crimes against women in particular are a big problem in many urban environments. This app uses crowdsourced data and crime statistics to help female pedestrians stay safe.

It is a development of traditional navigation apps: instead of simply providing the fastest route, it also offers information on which route is safest. The Live Map relies on user data. Victims can report harassment or assault on the app. The information is then available to other users to warn them of a potential threat in the area. Incidents can be ranked by severity, from a feeling of discomfort or threat to verbal harassment or physical assault. Whilst navigating, the Live Map can also alert users to potentially dangerous intersections ahead. This reminds people to stay alert and not focus only on their phone while walking.
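How a “safest route” might be scored from such crowd-reported incidents can be sketched very simply. The severity weights and scoring rule below are assumptions for illustration only, not Safe & the City’s actual algorithm.

```python
# Illustrative scoring of route segments by crowd-reported incident severity.
# The weights and scoring rule are assumptions, not Safe & the City's algorithm.
SEVERITY_WEIGHT = {"discomfort": 1, "verbal_harassment": 3, "physical_assault": 10}

def segment_risk(incident_reports):
    """Sum the weighted severity of incidents reported on one street segment."""
    return sum(SEVERITY_WEIGHT[kind] for kind in incident_reports)

def safest_route(candidate_routes):
    """Return the name of the candidate route whose segments carry the lowest total risk."""
    return min(
        candidate_routes,
        key=lambda name: sum(segment_risk(reports) for reports in candidate_routes[name].values()),
    )

candidate_routes = {
    "fastest": {"high_st": ["verbal_harassment", "discomfort"], "alley_rd": ["physical_assault"]},
    "alternative": {"park_ln": ["discomfort"], "main_st": []},
}
print(safest_route(candidate_routes))  # -> "alternative"
```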

The Safe Sites feature is also a way of incorporating the community. Businesses and organisations can register to be Safe Sites. They will then receive training from SafeSeekers in how to provide the best support and assistance in emergency situations. The locations of such Sites will be available on the app, should a user need one.

The iOS app launched in March 2018, on International Women’s Day. It is currently only available in London…(More)”

Startup Offers To Sequence Your Genome Free Of Charge, Then Let You Profit From It


Richard Harris at NPR: “A startup genetics company says it’s now offering to sequence your entire genome at no cost to you. In fact, you would own the data and may even be able to make money off it.

Nebula Genomics, created by the prominent Harvard geneticist George Church and his lab colleagues, seeks to upend the usual way genomic information is owned.

Today, companies like 23andMe make some of their money by scanning your genetic patterns and then selling that information to drug companies for use in research. (You choose whether to opt in.)

Church says his new enterprise leaves ownership and control of the data in an individual’s hands. And the genomic analysis Nebula will perform is much more detailed than what 23andMe and similar companies offer.

Nebula will do a full genome sequence, rather than a snapshot of key gene variants. That wider range of genetic information would make the data more appealing to biologists and biotech and pharmaceutical companies….

Church’s approach is part of a trend that’s pushing back against the multibillion-dollar industry to buy and sell medical information. Right now, companies reap those profits and control the data.

“Patients should have the right to decide for themselves whether they want to share their medical data, and, if so, with whom,” Adam Tanner, at Harvard’s Institute for Quantitative Social Science, says in an email. “Efforts to empower people to fine-tune the fate of their medical information are a step in the right direction.” Tanner, author of a book on the subject of the trade in medical data, isn’t involved in Nebula.

The current system is “very paternalistic,” Church says. He aims to give people complete control over who gets access to their data, and let individuals decide whether or not to sell the information, and to whom.

“In this case, everything is private information, stored on your computer or a computer you designate,” Church says. It can be encrypted so nobody can read it, even you, if that’s what you want.

Drug companies interested in studying, say, diabetes patients would ask Nebula to identify people in their system who have the disease. Nebula would then identify those individuals by launching an encrypted search of participants.

People who have indicated they’re interested in selling their genetic data to a company would then be given the option of granting access to the information, along with medical data that person has designated.
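The consent-gated flow described here — match on condition first, release data only to companies the participant has explicitly approved — can be sketched as follows. The names are placeholders and the encryption step is represented only by a comment; this is not Nebula’s real protocol, just an illustration of the gating logic.

```python
from dataclasses import dataclass, field

@dataclass
class Participant:
    """Placeholder model of a participant; not Nebula's real data structures."""
    participant_id: str
    conditions: set
    open_to_offers: bool                                   # has indicated interest in selling data
    granted_access_to: set = field(default_factory=set)    # companies this person has approved

def match_participants(participants, condition):
    """Step 1: find participants with the condition who are open to offers.
    In the real system this search would run over encrypted records."""
    return [p for p in participants if condition in p.conditions and p.open_to_offers]

def release_data(participant, company):
    """Step 2: data is released only if the participant granted this company access."""
    if company in participant.granted_access_to:
        return f"genome + designated medical records of {participant.participant_id}"
    return None  # no consent, nothing leaves the participant's machine

participants = [
    Participant("p1", {"diabetes"}, open_to_offers=True, granted_access_to={"PharmaCo"}),
    Participant("p2", {"diabetes"}, open_to_offers=True),
    Participant("p3", {"epilepsy"}, open_to_offers=False),
]

candidates = match_participants(participants, "diabetes")    # p1 and p2 match
shared = [release_data(p, "PharmaCo") for p in candidates]   # only p1's data is released
```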

Other companies are also springing up to help people control — and potentially profit from — their medical data. EncrypGen lets people offer up their genetic data, though customers have to provide their own DNA sequence. Hu-manity.co is also trying to establish a system in which people can sell their medical data to pharmaceutical companies….(More)”.

What do we learn from Machine Learning?


Blog by Giovanni Buttarelli: “…There are few authorities monitoring the impact of new technologies on fundamental rights so closely and intensively as data protection and privacy commissioners. At the International Conference of Data Protection and Privacy Commissioners, the 40th ICDPPC (which the EDPS had the honour to host), they continued the discussion on AI which began in Marrakesh two years ago with a reflection paper prepared by EDPS experts. In the meantime, many national data protection authorities have invested considerable efforts and provided important contributions to the discussion. To name only a few, the data protection authorities from Norway, France, the UK and Schleswig-Holstein have published research and reflections on AI, ethics and fundamental rights. We all see that some applications of AI raise immediate concerns about data protection and privacy; but it also seems generally accepted that there are far wider-reaching ethical implications, as a group of AI researchers also recently concluded. Data protection and privacy commissioners have now made a forceful intervention by adopting a declaration on ethics and data protection in artificial intelligence which spells out six principles for the future development and use of AI – fairness, accountability, transparency, privacy by design, empowerment and non-discrimination – and demands concerted international efforts to implement such governance principles. Conference members will contribute to these efforts, including through a new permanent working group on Ethics and Data Protection in Artificial Intelligence.

The ICDPPC was also chosen by an alliance of NGOs and individuals, The Public Voice, as the moment to launch its own Universal Guidelines on Artificial Intelligence (UGAI). The twelve principles laid down in these guidelines extend and complement those of the ICDPPC declaration.

We are only at the beginning of this debate. More voices will be heard: think tanks such as CIPL are coming forward with their suggestions, and so will many other organisations.

At international level, the Council of Europe has invested efforts in assessing the impact of AI, and has announced a report and guidelines to be published soon. The European Commission has appointed an expert group which will, among other tasks, give recommendations on future-related policy development and on ethical, legal and societal issues related to AI, including socio-economic challenges.

As I already pointed out in an earlier blogpost, it is our responsibility to ensure that the technologies which will determine the way we and future generations communicate, work and live together, are developed in such a way that the respect for fundamental rights and the rule of law are supported and not undermined….(More)”.

It’s time to let citizens tackle the wickedest public problems


Gabriella Capone at apolitical (a winner of the 2018 Apolitical Young Thought Leaders competition): “Rain ravaged Gdańsk in 2016, taking the lives of two residents and causing millions of euros in damage. Despite its 700-year history of flooding, the city was overwhelmed by these especially devastating floods. Gdańsk also sits on one of the European coasts most exposed to rising sea levels. It needed a new approach to avoid similar outcomes for the next, inevitable encounter with this worsening problem.

Bringing in citizens to tackle such a difficult issue was not the obvious course of action. Yet this was the proposal of Dr. Marcin Gerwin, an advocate from a neighbouring town who paved the way for Poland’s first participatory budgeting experience.

Mayor Adamowicz of Gdańsk agreed and, within a year, they welcomed about 60 people to the first Citizens Assembly on flood mitigation. Implemented by Dr. Gerwin and a team of coordinators, the Assembly convened over four Saturdays, heard expert testimony, and devised solutions.

The Assembly was not only deliberative and educational, it was action-oriented. Mayor Adamowicz committed to implement proposals on which 80% or more of participants agreed. The final 16 proposals included the investment of nearly $40 million USD in monitoring systems and infrastructure, subsidies to incentivise individuals to improve water management on their property, and an educational “Do Not Flood” campaign to highlight emergency resources.

It may seem risky to outsource the solving of difficult issues to citizens. Yet, when properly designed, public problem-solving can produce creative resolutions to formidable challenges. Beyond Poland, public problem-solving initiatives in Mexico and the United States are making headway on pervasive issues, from flooding to air pollution, to technology in public spaces.

The GovLab, with support from the Tinker Foundation, is analysing what makes for more successful public problem-solving as part of its City Challenges program. Below, I provide a glimpse into the types of design choices that can amplify the impact of public problem-solving….(More)

You can’t characterize human nature if studies overlook 85 percent of people on Earth


Daniel Hruschka at the Conversation: “Over the last century, behavioral researchers have revealed the biases and prejudices that shape how people see the world and the carrots and sticks that influence our daily actions. Their discoveries have filled psychology textbooks and inspired generations of students. They’ve also informed how businesses manage their employees, how educators develop new curricula and how political campaigns persuade and motivate voters.

But a growing body of research has raised concerns that many of these discoveries suffer from severe biases of their own. Specifically, the vast majority of what we know about human psychology and behavior comes from studies conducted with a narrow slice of humanity – college students, middle-class respondents living near universities and highly educated residents of wealthy, industrialized and democratic nations.

[Map: blue countries represent the locations of 93 percent of studies published in Psychological Science in 2017 — dark blue is the U.S., blue is Anglophone colonies with a European-descent majority, light blue is western Europe. Regions are sized by population.]

To illustrate the extent of this bias, consider that more than 90 percent of studies recently published in psychological science’s flagship journal come from countries representing less than 15 percent of the world’s population.

If people thought and behaved in basically the same ways worldwide, selective attention to these typical participants would not be a problem. Unfortunately, in those rare cases where researchers have reached out to a broader range of humanity, they frequently find that the “usual suspects” most often included as participants in psychology studies are actually outliers. They stand apart from the vast majority of humanity in things like how they divvy up windfalls with strangers, how they reason about moral dilemmas and how they perceive optical illusions.

Given that these typical participants are often outliers, many scholars now describe them and the findings associated with them using the acronym WEIRD, for Western, educated, industrialized, rich and democratic.

WEIRD isn’t universal

Because so little research has been conducted outside this narrow set of typical participants, anthropologists like me cannot be sure how pervasive or consequential the problem is. A growing body of case studies suggests, though, that assuming such typical participants are the norm worldwide is not only scientifically suspect but can also have practical consequences….(More)”.