Decision Making in a World of Comparative Effectiveness Research


Book by Howard G. Birnbaum and Paul E. Greenberg: “In the past decade there has been a worldwide evolution in evidence-based medicine toward real-world Comparative Effectiveness Research (CER), which compares the effects of one medical treatment versus another in real-world settings. While most of this burgeoning literature has focused on research findings, data and methods, Howard Birnbaum and Paul Greenberg (both of Analysis Group) have edited a book that provides a practical guide to decision making using the analysis and interpretation of CER. Decision Making in a World of Comparative Effectiveness Research contains chapters by senior industry executives, key opinion leaders, accomplished researchers, and leading attorneys involved in resolving disputes in the life sciences industry. The book is aimed at ‘users’ and ‘decision makers’ in the life sciences industry rather than those doing the actual research. It appeals to those who commission CER within the life sciences industry (pharmaceutical, biologic, and device manufacturers) and government (both public and private payers), as well as decision makers at all levels, both in the US and globally…(More)”.

Human Agency and Behavioral Economics: Nudging Fast and Slow


Book by Cass R. Sunstein: “This Palgrave Pivot offers comprehensive evidence about what people actually think of “nudge” policies designed to steer decision makers’ choices in positive directions. The data reveal that people in diverse nations generally favor nudges by strong majorities, with a preference for educative efforts – such as calorie labels – that equip individuals to make the best decisions for their own lives. On the other hand, there are significant arguments for noneducative nudges – such as automatic enrollment in savings plans – as they allow people to devote their scarce time and attention to their most pressing concerns. The decision to use either educative or noneducative nudges raises fundamental questions about human freedom in both theory and practice. Sunstein’s findings and analysis offer lessons for those involved in law and policy who are choosing which method to support as the most effective way to encourage lifestyle changes….(More)”.

Why big-data analysis of police activity is inherently biased


In The Conversation: “In early 2017, Chicago Mayor Rahm Emanuel announced a new initiative in the city’s ongoing battle with violent crime. The most common solutions to this sort of problem involve hiring more police officers or working more closely with community members. But Emanuel declared that the Chicago Police Department would expand its use of software, enabling what is called “predictive policing,” particularly in neighborhoods on the city’s south side.

The Chicago police will use data and computer analysis to identify neighborhoods that are more likely to experience violent crime, assigning additional police patrols in those areas. In addition, the software will identify individual people who are expected to become – but have yet to be – victims or perpetrators of violent crimes. Officers may even be assigned to visit those people to warn them against committing a violent crime.

Any attempt to curb the alarming rate of homicides in Chicago is laudable. But the city’s new effort seems to ignore evidence, including recent research from members of our policing study team at the Human Rights Data Analysis Group, that predictive policing tools reinforce, rather than reimagine, existing police practices. Their expanded use could lead to further targeting of communities or people of color.

Working with available data

At its core, any predictive model or algorithm is a combination of data and a statistical process that seeks to identify patterns in the numbers. This can include looking at police data in hopes of learning about crime trends or recidivism. But a useful outcome depends not only on good mathematical analysis: It also needs good data. That’s where predictive policing often falls short.

Machine-learning algorithms learn to make predictions by analyzing patterns in an initial training data set and then looking for similar patterns in new data as they come in. If they learn the wrong signals from the training data, the subsequent analysis will be flawed.
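To make that feedback loop concrete, here is a minimal, hypothetical sketch (not based on Chicago’s actual system or any vendor’s software) of how a model trained on records that reflect past patrol patterns, rather than true crime rates, simply reproduces that skew in its predictions:

```python
# Hypothetical illustration: if historical "crime" records reflect where
# police patrolled rather than where crime actually occurred, a model
# trained on those records will keep recommending the same neighborhoods.

from collections import Counter

# Invented historical incident reports (skewed toward areas already patrolled)
historical_reports = ["south_side"] * 80 + ["north_side"] * 20

# "Training": estimate incident rates per neighborhood from the reports
counts = Counter(historical_reports)
total = sum(counts.values())
predicted_risk = {area: n / total for area, n in counts.items()}

# "Prediction": patrols are allocated in proportion to predicted risk, so the
# area with more past reports gets more patrols -- and thus more future
# reports -- regardless of the true underlying crime rate.
patrol_budget = 100
patrols = {area: round(risk * patrol_budget) for area, risk in predicted_risk.items()}
print(patrols)  # {'south_side': 80, 'north_side': 20}
```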

This happened with a Google initiative called “Flu Trends,” which was launched in 2008 in hopes of using information about people’s online searches to spot disease outbreaks. Google’s systems would monitor users’ searches and identify locations where many people were researching various flu symptoms. In those places, the program would alert public health authorities that more people were about to come down with the flu.

But the project failed to account for the potential for periodic changes in Google’s own search algorithm. In an early 2012 update, Google modified its search tool to suggest a diagnosis when users searched for terms like “cough” or “fever.” On its own, this change increased the number of searches for flu-related terms. But Google Flu Trends interpreted the data as predicting a flu outbreak twice as big as federal public health officials expected and far larger than what actually happened.

Criminal justice data are biased

The failure of the Google Flu Trends system was a result of one kind of flawed data – information biased by factors other than what was being measured. It’s much harder to identify bias in criminal justice prediction models. In part, this is because police data aren’t collected uniformly, and in part it’s because the data police do track reflect longstanding institutional biases along income, race and gender lines….(More)”.

When Crowdsourcing Works (And Doesn’t Work) In The Law


LawXLab: “Crowdsourcing has revolutionized several industries.  Wikipedia has replaced traditional encyclopedias.  Stack Overflow houses the collective knowledge of software engineering.  And genealogical information stretches back thousands of years.  All due to crowdsourcing.

These successes have led to several attempts to crowdsource the law.  The potential is enticing.  The law is notoriously difficult to access, especially for non-lawyers.  Amassing the collective knowledge of the legal community could make legal research easier for lawyers, and open the law to lay people, reshaping the legal industry and displacing traditional powers like Westlaw and Lexis. As one legal crowdsourcing site touted, “No lawyer is smarter than all lawyers.”

But crowdsourcing the law has proved difficult.  The list of failed legal crowdsourcing sites is long.  As one legal commentator noted, “The legal Web is haunted by the spirits of the many crowdsourced sites that have come and gone.” (Ambrogi http://goo.gl/ZPuXh8).  …

There are several aspects of the law that make crowdsourcing difficult.  First, the base of contributors is not large.  According to the ABA, there were only 1.3 million licensed lawyers in 2015. (http://goo.gl/kw6Kab).  Second, there is no ethos of sharing information, like there is in other fields.  To the contrary, there is a tradition of keeping information secret, enshrined in rules regarding privilege, work product protection, and trade secrets.  Legal professionals disclose information with caution.

Not every attempt to crowdsource the law, however, has been a failure.  And the successes chart a promising path forward.  While lawyers will not go out of their way to crowdsource the law, attempts to weave crowdsourcing into activities that legal professionals already perform have achieved promising results.

For example, Casetext’s WeCite initiative has proved immensely successful.  When a judge cites another case in a published opinion, WeCite asks the reader to characterize the reference as (1) positive, (2) referencing, (3) distinguishing, or (4) negative.  In 9 months, Casetext’s community had crowdsourced “over 300,000 citator entries.” (CALI https://goo.gl/yT9mc4.)  Casetext used these entries to fuel its flagship product, CARA, which draws on the crowdsourced citation data to suggest other cases for litigators to cite.
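As a rough illustration of the kind of data such crowdsourcing yields, the sketch below models a citator entry with the four treatment categories and tallies how later opinions have treated a given case. The class and field names are invented for illustration and are not Casetext’s actual schema:

```python
# Hypothetical model of a crowdsourced citator entry and a simple aggregation.
from collections import Counter
from dataclasses import dataclass

TREATMENTS = {"positive", "referencing", "distinguishing", "negative"}

@dataclass
class CitatorEntry:
    citing_case: str
    cited_case: str
    treatment: str  # one of TREATMENTS

# Invented example entries
entries = [
    CitatorEntry("Smith v. Jones", "Roe v. Doe", "positive"),
    CitatorEntry("Acme v. Beta", "Roe v. Doe", "distinguishing"),
    CitatorEntry("State v. Gray", "Roe v. Doe", "negative"),
]

def treatment_summary(cited_case: str, entries: list) -> Counter:
    """Tally how later opinions have treated a given case."""
    return Counter(e.treatment for e in entries if e.cited_case == cited_case)

print(treatment_summary("Roe v. Doe", entries))
# Counter({'positive': 1, 'distinguishing': 1, 'negative': 1})
```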

The key to WeCite’s success is that it wove crowdsourcing into an activity that lawyers and law students were already doing–reading cases.  All the reader needed to do was click a button to signify how the case was cited–a minor detour.

Another example is CO/COUNSEL, a site that crowdsources visual maps of the law. The majority of CO/COUNSEL’s crowdsourced contributions come from law school classes.  Teachers use the site as a teaching tool.  Classes map the law during the course of a semester as a learning activity.  In a few months, CO/COUNSEL received over 10,000 contributions.  As with WeCite, using CO/COUNSEL was not a big detour for professors.  It fit into an activity they were performing already–teaching….(More)”.

Blockchain transparency applied to newsfeeds


Springwise: “With fake news an ongoing challenge for media platforms, users and the wider world, Polish startup Userfeeds is developing new algorithms to help create transparency around the sources of news. The company sees a variety of current online dilemmas, including ad-blocking and targeting as well as moderation, as information ranking problems. The most negative effect of fake news is that the cost is absorbed by the user, regardless of how much validity he or she gives to each piece of information. As the current systems stand, by contrast, the producers and distributors of content receive only the benefits.

Userfeeds seeks to redress that imbalance by applying the transparency and strength of online currencies such as Bitcoin and Ethereum to the provision of information. The company is developing a system that would require information providers and distributors to prove, via third party algorithms, the strength of each individual claim. Because online tokens used in such systems are visible and accessible to anyone, everyone will be able to contribute to a ranking, which is what ultimately grabs users’ attention. Having just raised seed funding, Userfeeds is at the proof-of-concept stage and is encouraging interested parties to get involved in testing.
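The general idea of staking visible tokens behind claims can be sketched as a simple ranking function, as below. This is a toy illustration of token-weighted ranking, not Userfeeds’ actual protocol or data model:

```python
# Toy token-weighted claim ranking: anyone can back a claim with tokens, the
# backing is publicly visible, and items are ranked by total stake behind them.
from collections import defaultdict

# (claim_id, backer, tokens_staked) -- in a real system these would be
# publicly visible on-chain transactions; the values here are invented.
stakes = [
    ("article-123", "0xAlice", 40),
    ("article-123", "0xBob", 10),
    ("article-456", "0xCarol", 25),
]

def rank_claims(stakes):
    totals = defaultdict(int)
    for claim_id, _backer, amount in stakes:
        totals[claim_id] += amount
    # Highest total stake first; ties broken by claim id for determinism.
    return sorted(totals.items(), key=lambda kv: (-kv[1], kv[0]))

print(rank_claims(stakes))
# [('article-123', 50), ('article-456', 25)]
```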

The transparency of blockchain systems is attracting attention from industries as wide-ranging as freelance marketplaces and international shipping companies….(More)”.

Blockchain 2.0: How it could overhaul the fabric of democracy and identity


Colm Gorey at SiliconRepublic: “…not all blockchain technologies need to be about making money. A recent report issued by the European Commission discussed the possible ways it could change people’s lives….
While many democratic nations still prefer a traditional paper ballot system to an electronic voting system over fears that digital votes could be tampered with, new technologies are starting to change that opinion.
One suggestion is blockchain enabled e-voting (BEV), which would take control from a central authority and put it back in the hands of the voter.
Because a person’s vote would be timestamped and linked to details of their last vote through encryption, an illegitimate vote could be spotted more easily by a digital system, or even by those within digitally savvy communities.
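The chaining idea behind BEV can be illustrated with a toy example: each vote record carries a timestamp and the hash of the previous record, so tampering with any past vote breaks every later link. This sketch is only an illustration of the concept; a real voting protocol would also need ballot secrecy, voter authentication and consensus among many nodes:

```python
# Toy hash-chained vote records (illustrative only, not a real voting system).
import hashlib
import json
import time

def make_vote(voter_id: str, choice: str, prev_hash: str) -> dict:
    record = {
        "voter_id": voter_id,
        "choice": choice,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    # Hash the record contents so any later alteration is detectable.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

genesis = make_vote("system", "genesis", "0" * 64)
v1 = make_vote("voter-001", "candidate-A", genesis["hash"])
v2 = make_vote("voter-002", "candidate-B", v1["hash"])

def verify(chain: list) -> bool:
    """Check that each record points at the hash of the previous one."""
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != prev["hash"]:
            return False
    return True

print(verify([genesis, v1, v2]))  # True
```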
Despite still being a fledgling technology, BEV has already been put to work at the local scale of politics in Europe, for example in the internal elections of political parties in Denmark.
But perhaps at this early stage, its actual use in governmental elections at a national level will remain limited, depending on “the extent to which it can reflect the values and structure of society, politics and democracy”, according to the EU….blockchain has also been offered as an answer to sustaining the public service, particularly with transparency of where people’s taxes are going.
One governmental concept could allow blockchain to form the basis for a secure method of distributing social welfare or other state payments, without the need for divisions running expensive and time-consuming fraud investigations.
Irish start-up Aid:Tech is one notable example, working with Serbia to do just that, along with its efforts to use blockchain to create a transparent system for aid to be evenly distributed in countries such as Syria.
Bank of Ireland’s innovation manager, Stephen Moran, is certainly of the opinion that blockchain in the area of identity offers greater revolutionary change than BEV.
“By identity, that can cover everything from educational records, but can also cover the idea of a national identity card,” he said in conversation with Siliconrepublic.com….
But perhaps the wildest idea within blockchain – and one that is somewhat connected to governance – is that, through an amalgamation of smart contracts, it could effectively run itself as an artificially intelligent being.
Known as decentralised autonomous organisations (DAOs), these are, in effect, entities that can run a business or any operation autonomously, allocating tasks or distributing micropayments instantly to users….
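A toy model of the DAO idea, with invented names and rules, might look like the following: members hold voting tokens, and a proposal that clears a token-weighted quorum triggers an automatic payout from a shared treasury. Real DAOs encode such rules in smart contracts rather than ordinary code:

```python
# Purely illustrative DAO-style rule: token-weighted approval releases funds.
balances = {"alice": 60, "bob": 30, "carol": 10}   # voting tokens (invented)
treasury = 1000                                     # shared funds (invented)

def execute_proposal(recipient, amount, votes_for, quorum=0.5):
    """Pay `amount` to `recipient` if token-weighted support exceeds `quorum`."""
    global treasury
    support = sum(balances[m] for m in votes_for) / sum(balances.values())
    if support > quorum and amount <= treasury:
        treasury -= amount
        return f"paid {amount} to {recipient} (support {support:.0%})"
    return f"rejected (support {support:.0%})"

print(execute_proposal("dev-team", 200, votes_for=["alice", "bob"]))  # passes, 90% support
print(execute_proposal("dev-team", 200, votes_for=["carol"]))         # rejected, 10% support
```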
An example similar to the DAO already exists, in a crowdsourced blockchain online organisation run entirely on the open source platform Ethereum.
Last year, through the sheer will of its users, it was able to crowdfund the largest sum ever – $100m – through smart contracts alone.
If it appears confusing and unyielding, then you are not alone.
However, as was simply summed up by writer Leda Glyptis, blockchain is a force to be reckoned with, but it will be so subtle that you won’t even notice….(More)”.

Scientists crowdsource autism data to learn where resource gaps exist


SCOPE: “How common is autism? Since 2000, the U.S. Centers for Disease Control and Prevention has revised its estimate several times, with the numbers ticking steadily upward. But the most recent figure of 1 in 68 kids affected is based on data from only 11 states. It gives no indication of where people with autism live around the country nor whether their communities have the resources to treat them.
That’s a knowledge gap Stanford biomedical data scientist Dennis Wall, PhD, wants to fill — not just in the United States but also around the world. A new paper, published online in JMIR Public Health & Surveillance, explains how Wall and his team created GapMap, an interactive website designed to crowdsource the missing autism data. They’re now inviting people and families affected by autism to contribute to the database….
The pilot phase of the research, which is described in the new paper, estimated that the average distance from an individual in the U.S. to the nearest autism diagnostic center is 50 miles, while those with an autism diagnosis live an average of 20 miles from the nearest diagnostic center. The researchers think this may reflect lower rates of diagnosis among people in rural areas….Data submitted to GapMap will be stored in a secure, HIPAA-compliant database. In addition to showing where more autism treatment resources are needed, the researchers hope the project will help build communities of families affected by autism and will inform them of treatment options nearby. Families will also have the option of participating in future autism research, and the scientists plan to add more features, including the locations of environmental factors such as local pollution, to understand if they contribute to autism…(More)”
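The distance figures quoted above come down to a nearest-center calculation. A hedged sketch of that kind of computation is below; the coordinates and helper names are made up for illustration, and the paper’s actual data and methods may differ:

```python
# Illustrative nearest-center distance calculation: for each individual's
# location, find the closest diagnostic center and average those distances.
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    r = 3958.8  # mean Earth radius in miles
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

# Invented (lat, lon) pairs for individuals and diagnostic centers
individuals = [(40.71, -74.01), (34.05, -118.24), (41.88, -87.63)]
centers = [(40.44, -79.99), (37.77, -122.42)]

def avg_distance_to_nearest(points, centers):
    nearest = [
        min(haversine_miles(plat, plon, clat, clon) for clat, clon in centers)
        for plat, plon in points
    ]
    return sum(nearest) / len(nearest)

print(round(avg_distance_to_nearest(individuals, centers), 1))
```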

What data do we want? Understanding demands for open data among civil society organisations in South Africa


Report by Kaliati, Andrew; Kachieng’a, Paskaliah and de Lanerolle, Indra: “Many governments, international agencies and civil society organisations (CSOs) support and promote open data. Most open government data initiatives have focused on supply – creating portals and publishing information. But much less attention has been given to demand – understanding data needs and nurturing engagement. This research examines the demand for open data in South Africa, and asks under what conditions meeting this demand might influence accountability. Recognising that not all open data projects are developed for accountability reasons, it also examines barriers to using government data for accountability processes. The research team identified and tested ‘use stories’ and ‘use cases’. How did a range of civil society groups with an established interest in holding local government accountable use – or feel that they could use – data in their work? The report identifies and highlights ten broad types of open data use, which the researchers divided into two streams: ‘strategy and planning’ – in which CSOs used government data internally to guide their own actions; and ‘monitoring, mobilising and advocacy’ – in which CSOs undertook outward-facing activities….(More)”

SeeClickFix Empowers Citizens by Connecting Them to Their Local Governments


Paper by Ben Berkowitz and Jean-Paul Gagnon in Democratic Theory: “SeeClickFix began in 2009 when founder and present CEO Ben Berkowitz spotted a piece of graffiti in his New Haven, Connecticut, neighborhood. After calling numerous departments at city hall in a bid to have the graffiti removed, Berkowitz felt no closer to fixing the problem. Confused and frustrated, his emotions resonated with what many citizens in real-existing democracies feel today (Manning 2015): we see problems in public and want to fix them but can’t. This all too habitual inability for “common people” to fix problems they have to live with on a day-to-day basis is a prelude to the irascible citizen (White 2012), which, according to certain scholars (e.g., Dean 1960; Lee 2009), is itself a prelude to political apathy and a citizen’s alienation from specific political institutions….(More)”

Open data and the war on hunger – a challenge to be met


Diginomica: “Although the private sector is seen as the villain of the piece in some quarters, it actually has a substantial role to play in helping solve the problem of world hunger.

This is the view of Andre Laperriere, executive director of the Global Open Data for Agriculture and Nutrition (Godan) initiative, …

Laperriere himself heads up Godan’s small secretariat of five full-time equivalent employees who are based in Oxfordshire in the UK. The goal of the organisation, which currently has 511 members, is to encourage governmental, non-governmental (NGO) and private sector organisations to share open data about agriculture and nutrition. The idea is to make such information more available, accessible and usable in order to help tackle world food security in the face of mounting threats such as climate change.

But to do so, it is necessary to bring the three key actors originally identified by James Wolfensohn, former president of the World Bank, into play, believes Laperriere. He explains:

You have states, which generate and possess much of the data. There are citizens with lots of specific needs for which the data can be used, and there’s the private sector in between. It’s in the best position to exploit the data and use it to develop products that help meet the needs of the population. So the private sector is the motor of development and has a big role to play.

This is not least because NGOs, cooperatives and civil societies of all kinds often simply do not have the resources or technical knowledge to either find or deal with the massive quantities of open data that is released. Laperriere explains:

It’s a moral dilemma for a lot of research organisations. If, for example, they release 8,000 data sets about every kind of cattle disease, they’re doing so for the benefit of small farmers. But the only ones that can often do anything with it are the big companies as they have the appropriate skills. So the goal is the little guy rather than the big companies, but the alternative is not to release anything at all.

But for private sector businesses to truly get the most out of this open data as it is made available, Laperriere advocates getting together to create so-called pre-competition spaces. These spaces involve competitors collaborating in the early stages of commercial product development to solve common problems. To illustrate how such activity works, Laperriere cites his own past experience when working for a lighting company:

We were pushing fluorescent rather than incandescent lighting, but it contains mercury which pollutes, although it has a lower carbon footprint. It was also a lot more expensive. But we sat down together with the other manufacturers and shared our data to fix the problem together, which meant that everyone benefited by reducing the cost, the mercury pollution and the amount of energy consumed.

Next revolution

While Laperriere understands the fear of many organisations in potentially making themselves vulnerable to competition by disclosing their data, in reality, he attests, “it is not the case”. Instead he points out:

If you release data in the right way to stimulate collaboration, it is positive economically and benefits both consumers and companies too as it helps reduce their costs and minimise other problems.

Due to the growing body of government legislation and policies that require processed food manufacturers around the world to disclose product ingredients, he is, in fact, seeing rising interest in the approach not only among the manufacturers themselves but also among packaging and food preservation companies. The fact that agriculture and nutrition is a vast, complex area does mean there is still a long way to go, however….(More)”