Global platform launched to promote positive plagiarism among foundations


Ellie Ward at PioneersPost: “A group of leading foundations and NGOs, including the Rockefeller Foundation, Oxfam and the Skoll Foundation, have launched a peer-to-peer platform to make solving pressing social issues easier.

Sphaera (pronounced s’faira) is a peer-to-peer online platform that will collate the knowledge of funders and practitioners working to solve social and environmental issues around the world.

Organisations will share their evidence-based solutions and research within the portal, which will then repurpose the information into tools, processes and frameworks that can be used by others. In theory, a solution that helps fishermen log their catch could be repurposed for healthcare workers to track and improve treatment of contagious disease. … “Sphaera makes it easy to discover, share and remix solutions. We put the collective, practical knowledge of what works – in health, finance, conservation, education, in every sector relevant to wellbeing – at the fingertips of practitioners everywhere. Our hope is that together we are better, faster, and more effective in tackling the urgent problems of our time.”

Arthur Wood, founding partner of Total Impact Capital and a global leader in social finance, said: “With the birth of cloud technology we have seen a plethora of models changing the way we use, share, purchase and allocate resources. From AirBNB to Uber, folks are now asking why this trend has had zero impact in Philanthropy.”

Wood explained that Sphaera is “designed to liberate the silos of individual project knowledge and to leverage that expertise and knowledge to create scale and collaboration across the philanthropic landscape… Or simply stated, how can a great idea in one stovepipe be shared to the benefit of all?” (More)

The Art of Managing Complex Collaborations


Eric Knight, Joel Cutcher-Gershenfeld, and Barbara Mittleman at MIT Sloan Management Review: “It’s not easy for stakeholders with widely varying interests to collaborate effectively in a consortium. The experience of the Biomarkers Consortium offers five lessons on how to successfully navigate the challenges that arise….

Society’s biggest challenges are also its most complex. From shared economic growth to personalized medicine to global climate change, few of our most pressing problems are likely to have simple solutions. Perhaps the only way to make progress on these and other challenges is by bringing together the important stakeholders on a given issue to pursue common interests and resolve points of conflict.

However, it is not easy to assemble such groups or to keep them together. Many initiatives have stumbled and disbanded. The Biomarkers Consortium might have been one of them, but this consortium beat the odds, in large part due to the founding parties’ determination to make it work. Nine years after it was founded, this public-private partnership, which is managed by the Foundation for the National Institutes of Health and based in Bethesda, Maryland, is still working to advance the availability of biomarkers (biological indicators for disease states) as tools for drug development, including applications at the frontiers of personalized medicine.

The Biomarkers Consortium’s mandate — to bring together, in the group’s words, “the expertise and resources of various partners to rapidly identify, develop, and qualify potential high-impact biomarkers particularly to enable improvements in drug development, clinical care, and regulatory decision-making” — may look simple. However, the reality has been quite complex. The negotiations that led to the consortium’s formation in 2006 were complicated, and the subsequent balancing of common and competing interests remains challenging….

Many in the biomedical sector had seen the need to tackle drug discovery costs for a long time, with multiple companies concurrently spending millions, sometimes billions, of dollars only to hit common dead ends in the drug development process. In 2004 and 2005, then National Institutes of Health director Elias Zerhouni convened key people from the U.S. Food and Drug Administration, the NIH, and the Pharmaceutical Research and Manufacturers of America to create a multistakeholder forum.

Every member knew from the outset that their fellow stakeholders represented many divergent and sometimes opposing interests: large pharmaceutical companies, smaller entrepreneurial biotechnology companies, FDA regulators, NIH science and policy experts, university researchers and nonprofit patient advocacy organizations….(More)”

Inside the Nudge Unit: How small changes can make a big difference


Book by David Halpern: “Every day we make countless decisions, from the small, mundane things to tackling life’s big questions, but we don’t always make the right choices.

Behavioural scientist Dr David Halpern heads up Number 10’s ‘Nudge Unit’, the world’s first government institution that uses behavioural economics to examine and influence human behaviour, to ‘nudge’ us into making better decisions. Seemingly small and subtle solutions have led to huge improvements across tax, healthcare, pensions, employment, crime reduction, energy conservation and economic growth.

Adding a crucial line to a tax reminder brought forward millions in extra revenue; refocusing the questions asked at the job centre helped an extra 10 per cent of people come off their benefits and back into work; prompting people to become organ donors while paying for their car tax added an extra 100,000 donors to the register in a single year.

After two years and dozens of experiments in behavioural science, the results are undeniable. And now David Halpern and the Nudge Unit will help you to make better choices and improve your life…(More)”

Beyond the Jailhouse Cell: How Data Can Inform Fairer Justice Policies


Alexis Farmer at DataDrivenDetroit: “Government-provided open data is a value-added approach to providing transparency, analytic insights for government efficiency, innovative solutions for products and services, and increased civic participation. Two of the least transparent public institutions are jails and prisons. The majority of the population has limited knowledge of jail and prison operations and the demographics of the jail and prison population, even though the costs of incarceration are substantial. The absence of public knowledge about one of the many establishments public tax dollars support can be resolved with an open data approach to criminal justice. Increasing access to administrative jail information enables communities to collectively and effectively find solutions to the challenges the system faces….

The data analysis that complements open data practices is a part of the formula for creating transformational policies. There are numerous ways that recording and publishing data about jail operations can inform better policies and practices:

1. Better budgeting and allocation of funds. Monitoring the rate at which dollars are expended on a specific function allows administrations to make accurate estimates of future expenditures.

2. More effective deployment of staff. Knowing the average daily population and annual average bookings can help inform staffing decisions to determine a total need of officers, shift responsibilities, and room arrangements. The population information also helps with facility planning, reducing overcrowding, controlling violence within the facility, staffing, determining appropriate programs and services, and policy and procedure development.

3. Program participation and effectiveness. Gauging the number of inmates involved in jail work programs, educational training services, rehabilitation/detox programs, and the like is critical to evaluating methods to improve and expand such services. Quantifying the participation in and effectiveness of these programs can potentially lead to a shift in jail rehabilitation services.

4. Jail suicides. “The rate of jail suicides is about three times the rate of prison suicides.” Jails are isolating spaces that separate inmates from social support networks, diminish personal control, and often lack mental health resources. Most people in jail face minor charges and spend less time incarcerated due to shorter sentences. Reviewing the previous jail suicide statistics aids in pinpointing suicide risk, identifying high-risk groups, and ultimately, prescribing intervention procedures and best practices to end jail suicides.

5. Gender and race inequities. It is well known that Black men are disproportionately incarcerated, and the number of Black women in jails and prisons has rapidly increased. It is important to view this disparity against the demographics of the total population of an area. Providing data that show trends in particular crimes committed by race and gender might lead to further analysis and policy changes addressing the root causes of these crimes (poverty, employment, education, housing, etc.).

6. Prior interaction with the juvenile justice system. The school-to-prison pipeline describes the systematic school discipline policies that increase a student’s interaction with the juvenile justice system. Knowing how many incarcerated persons have been suspended, expelled, or incarcerated as juveniles can encourage schools to examine their discipline policies and institute more restorative justice programs for students. It would also encourage transitional programs for formerly incarcerated youth in order to decrease the recidivism rate among young people.

7. Sentencing reforms. Evaluating the charges on which a person is arrested, the length of stay, average length of sentences, charges for which sentences are given, and the length of time from the first appearance to arraignment and trial disposition can inform more just and balanced sentencing laws enforced by the judicial branch….(More)”
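Several of the metrics above (average daily population, length of stay by demographic group) are simple aggregations once booking records are published as open data. A minimal sketch in Python, using an invented toy dataset and illustrative field names (no real jail dataset is assumed):

```python
from collections import Counter

# Hypothetical booking records of the kind a county might publish as
# open data. Field names and values are invented for illustration.
bookings = [
    {"race": "black", "sex": "M", "days_held": 45, "charge": "misdemeanor"},
    {"race": "white", "sex": "M", "days_held": 10, "charge": "misdemeanor"},
    {"race": "black", "sex": "F", "days_held": 30, "charge": "felony"},
    {"race": "white", "sex": "F", "days_held": 12, "charge": "felony"},
    {"race": "black", "sex": "M", "days_held": 60, "charge": "felony"},
]

def average_daily_population(records, days_in_period=365):
    """Total inmate-days divided by days in the reporting period."""
    return sum(r["days_held"] for r in records) / days_in_period

def length_of_stay_by_group(records, key):
    """Mean length of stay for each value of a demographic field."""
    totals, counts = Counter(), Counter()
    for r in records:
        totals[r[key]] += r["days_held"]
        counts[r[key]] += 1
    return {group: totals[group] / counts[group] for group in totals}

adp = average_daily_population(bookings)
stays = length_of_stay_by_group(bookings, "race")
```

With real exports, the same two aggregations would feed the staffing, budgeting, and disparity analyses described in points 1, 2 and 5.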

Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots


Book description: “Robots are poised to transform today’s society as completely as the Internet did twenty years ago. Pulitzer Prize-winning New York Times science writer John Markoff argues that we must decide to design ourselves into our future, or risk being excluded from it altogether.

In the past decade, Google introduced us to driverless cars; Apple debuted Siri, a personal assistant that we keep in our pockets; and an Internet of Things connected the smaller tasks of everyday life to the farthest reaches of the Web. Robots have become an integral part of society on the battlefield and the road; in business, education, and health care. Cheap sensors and powerful computers will ensure that in the coming years, these robots will act on their own. This new era offers the promise of immensely powerful machines, but it also reframes a question first raised more than half a century ago, when the intelligent machine was born. Will we control these systems, or will they control us?

In Machines of Loving Grace, John Markoff offers a sweeping history of the complicated and evolving relationship between humans and computers. In recent years, the pace of technological change has accelerated dramatically, posing an ethical quandary. If humans delegate decisions to machines, who will be responsible for the consequences? As Markoff chronicles the history of automation, from the birth of the artificial intelligence and intelligence augmentation communities in the 1950s and 1960s, to the modern-day brain trusts at Google and Apple in Silicon Valley, and on to the expanding robotics economy around Boston, he traces the different ways developers have addressed this fundamental problem and urges them to carefully consider the consequences of their work. We are on the brink of the next stage of the computer revolution, Markoff argues, and robots will profoundly transform modern life. Yet it remains for us to determine whether this new world will be a utopia. Moreover, it is now incumbent upon the designers of these robots to draw a bright line between what is human and what is machine.

After nearly forty years covering the tech industry, Markoff offers an unmatched perspective on the most drastic technology-driven societal shifts since the introduction of the Internet. Machines of Loving Grace draws on an extensive array of research and interviews to present an eye-opening history of one of the most pressing questions of our time, and urges us to remember that we still have the opportunity to design ourselves into the future—before it’s too late….(More)”

How Africa can benefit from the data revolution


In The Guardian: “….The modern information infrastructure is about movement of data. From data we derive information and knowledge, and that knowledge can be propagated rapidly across the country and throughout the world. Facebook and Google have both made massive investments in machine learning, the mainstay technology for converting data into knowledge. But the potential for these technologies in Africa is much larger: instead of simply advertising products to people, we can imagine modern distributed health systems, distributed markets, knowledge systems for disease intervention. The modern infrastructure should be data driven and deployed across the mobile network. A single good idea can then be rapidly implemented and distributed via the mobile phone app ecosystems.

The information infrastructure does not require large-scale thinking and investment to deliver. In fact, it requires just the reverse: agility and innovation. Larger companies cannot react quickly enough to exploit technological advances, while small companies with a good idea can grow quickly, as the succession from IBM to Microsoft, Google and now Facebook shows. All these companies now agree on one thing: data is where the value lies. Modern internet companies are data-driven from the ground up. Could the same thing happen in Africa’s economies? Can entire countries reformulate their infrastructures to be data-driven from the ground up?

Maybe, or maybe not, but it isn’t necessary to have a grand plan to give it a go. It is already natural to use data and communication to solve real world problems. In Silicon Valley these are the challenges of getting a taxi or reserving a restaurant. In Africa they are often more fundamental. John Quinn has been in Kampala, Uganda at Makerere University for eight years now targeting these challenges. In June this year, John and other researchers from across the region came together for Africa’s first workshop on data science at Dedan Kimathi University of Technology. The objective was to spread knowledge of technologies, ideas and solutions. For the modern information infrastructure to be successful software solutions need to be locally generated. African apps to solve African problems. With this in mind the workshop began with a three day summer school on data science which was then followed by two days of talks on challenges in African data science.

The ideas and solutions presented were cutting edge. The Umati project uses social media to understand the use of ethnic hate speech in Kenya (Sidney Ochieng, iHub, Nairobi). The use of social media for monitoring the evolution and effects of Ebola in west Africa (Nuri Pashwani, IBM Research Africa). The Kudu system for market making in Ugandan farm produce distribution via SMS messages (Kenneth Bwire, Makerere University, Kampala). Telecommunications data for inferring the source and spread of a typhoid outbreak in Kampala (UN Pulse Lab, Kampala). The Punya system for prototyping and deployment of mobile phone apps to deal with emerging crises or market opportunities (Julius Adebayor, MIT), and large-scale systems for collating and sharing data resources (Open Data Kenya and the UN OCHA Humanitarian Data Exchange)….(More)”

Meaningful Consent: The Economics of Privity in Networked Environments


Paper by Jonathan Cave: “Recent work on privacy (e.g. WEIS 2013/4, Meaningful Consent in the Digital Economy project) recognises the unanticipated consequences of data-centred legal protections in a world of shifting relations between data and human actors. But the rules have not caught up with these changes, and the irreversible consequences of ‘make do and mend’ are not often taken into account when changing policy.

Many of the most-protected ‘personal’ data are not personal at all, but are created to facilitate the operation of larger (e.g. administrative, economic, transport) systems or inadvertently generated by using such systems. The protection given to such data typically rests on notions of informed consent even in circumstances where such consent may be difficult to define, harder to give and nearly impossible to certify in meaningful ways. Such protections typically involve a mix of data collection, access and processing rules that are either imposed on behalf of individuals or are to be exercised by them. This approach adequately protects some personal interests, but not all – and is definitely not future-proof. Boundaries between allowing individuals to discover and pursue their interests on one side and behavioural manipulation on the other are often blurred. The costs (psychological and behavioural as well as economic and practical) of exercising control over one’s data are rarely taken into account as some instances of the Right to be Forgotten illustrate. The purposes for which privacy rights were constructed are often forgotten, or have not been reinterpreted in a world of ubiquitous monitoring data, multi-person ‘private exchanges,’ and multiple pathways through which data can be used to create and to capture value. Moreover, the parties who should be involved in making decisions – those connected by a network of informational relationships – are often not in contractual, practical or legal contact. These developments, associated with e.g. the Internet of Things, Cloud computing and big data analytics, should be recognised as challenging privacy rules and, more fundamentally, the adequacy of informed consent (e.g. to access specified data for specified purposes) as a means of managing innovative, flexible, and complex informational architectures.

This paper presents a framework for organising these challenges using them to evaluate proposed policies, specifically in relation to complex, automated, automatic or autonomous data collection, processing and use. It argues for a movement away from a system of property rights based on individual consent to a values-based ‘privity’ regime – a collection of differentiated (relational as well as property) rights and consents that may be better able to accommodate innovations. Privity regimes (see deFillipis 2006) bundle together rights regarding e.g. confidential disclosure with ‘standing’ or voice options in relation to informational linkages.

The impacts are examined through a game-theoretic comparison between the proposed privity regime and existing privacy rights in personal data markets that include: conventional ‘behavioural profiling’ and search; situations where third parties may have complementary roles or conflicting interests in such data and where data have value in relation both to specific individuals and to larger groups (e.g. ‘real-world’ health data); n-sided markets on data platforms (including social and crowd-sourcing platforms with long and short memories); and the use of ‘privity-like’ rights inherited by data objects and by autonomous systems whose ownership may be shared among many people….(More)”

Science Isn’t Broken


Christie Aschwanden at FiveThirtyEight: “Yet even in the face of overwhelming evidence, it’s hard to let go of a cherished idea, especially one a scientist has built a career on developing. And so, as anyone who’s ever tried to correct a falsehood on the Internet knows, the truth doesn’t always win, at least not initially, because we process new evidence through the lens of what we already believe. Confirmation bias can blind us to the facts; we are quick to make up our minds and slow to change them in the face of new evidence.

A few years ago, Ioannidis and some colleagues searched the scientific literature for references to two well-known epidemiological studies suggesting that vitamin E supplements might protect against cardiovascular disease. These studies were followed by several large randomized clinical trials that showed no benefit from vitamin E and one meta-analysis finding that at high doses, vitamin E actually increased the risk of death.

Human fallibilities send the scientific process hurtling in fits, starts and misdirections instead of in a straight line from question to truth.

Despite the contradictory evidence from more rigorous trials, the first studies continued to be cited and defended in the literature. Shaky claims about beta carotene’s ability to reduce cancer risk and estrogen’s role in staving off dementia also persisted, even after they’d been overturned by more definitive studies. Once an idea becomes fixed, it’s difficult to remove from the conventional wisdom.

Sometimes scientific ideas persist beyond the evidence because the stories we tell about them feel true and confirm what we already believe. It’s natural to think about possible explanations for scientific results — this is how we put them in context and ascertain how plausible they are. The problem comes when we fall so in love with these explanations that we reject the evidence refuting them.

The media is often accused of hyping studies, but scientists are prone to overstating their results too.

Take, for instance, the breakfast study. Published in 2013, it examined whether breakfast eaters weigh less than those who skip the morning meal and if breakfast could protect against obesity. Obesity researcher Andrew Brown and his colleagues found that despite more than 90 mentions of this hypothesis in published media and journals, the evidence for breakfast’s effect on body weight was tenuous and circumstantial. Yet researchers in the field seemed blind to these shortcomings, overstating the evidence and using causative language to describe associations between breakfast and obesity. The human brain is primed to find causality even where it doesn’t exist, and scientists are not immune.

As a society, our stories about how science works are also prone to error. The standard way of thinking about the scientific method is: ask a question, do a study, get an answer. But this notion is vastly oversimplified. A more common path to truth looks like this: ask a question, do a study, get a partial or ambiguous answer, then do another study, and then do another to keep testing potential hypotheses and homing in on a more complete answer. Human fallibilities send the scientific process hurtling in fits, starts and misdirections instead of in a straight line from question to truth.

Media accounts of science tend to gloss over the nuance, and it’s easy to understand why. For one thing, reporters and editors who cover science don’t always have training on how to interpret studies. And headlines that read “weak, unreplicated study finds tenuous link between certain vegetables and cancer risk” don’t fly off the newsstands or bring in the clicks as fast as ones that scream “foods that fight cancer!”

People often joke about the herky-jerky nature of science and health headlines in the media — coffee is good for you one day, bad the next — but that back and forth embodies exactly what the scientific process is all about. It’s hard to measure the impact of diet on health, Nosek told me. “That variation [in results] occurs because science is hard.” Isolating how coffee affects health requires lots of studies and lots of evidence, and only over time and in the course of many, many studies does the evidence start to narrow to a conclusion that’s defensible. “The variation in findings should not be seen as a threat,” Nosek said. “It means that scientists are working on a hard problem.”

The scientific method is the most rigorous path to knowledge, but it’s also messy and tough. Science deserves respect exactly because it is difficult — not because it gets everything correct on the first try. The uncertainty inherent in science doesn’t mean that we can’t use it to make important policies or decisions. It just means that we should remain cautious and adopt a mindset that’s open to changing course if new data arises. We should make the best decisions we can with the current evidence and take care not to lose sight of its strength and degree of certainty. It’s no accident that every good paper includes the phrase “more study is needed” — there is always more to learn….(More)”

Review Federal Agencies on Yelp…and Maybe Get a Response


Yelp Official Blog: “We are excited to announce that Yelp has concluded an agreement with the federal government that will allow federal agencies and offices to claim their Yelp pages, read and respond to reviews, and incorporate that feedback into service improvements.

We encourage Yelpers to review any of the thousands of agency field offices, TSA checkpoints, national parks, Social Security Administration offices, landmarks and other places already listed on Yelp if you have good or bad feedback to share about your experiences. Not only is it helpful to others who are looking for information on these services, but you can actually make an impact by sharing your feedback directly with the source.

It’s clear Washington is eager to engage with people directly through social media. Earlier this year a group of 46 lawmakers called for the creation of a “Yelp for Government” in order to boost transparency and accountability, and Representative Ron Kind reiterated this call in a letter to the General Services Administration (GSA). Luckily for them, there’s no need to create a new platform now that government agencies can engage directly on Yelp.

As this agreement is fully implemented in the weeks and months ahead, we’re excited to help the federal government more directly interact with and respond to the needs of citizens and to further empower the millions of Americans who use Yelp every day.

In addition to working with the federal government, last week we announced our partnership with ProPublica to incorporate health care statistics and consumer opinion survey data onto the Yelp business pages of more than 25,000 medical treatment facilities. We’ve also partnered with local governments in expanding the LIVES open data standard to show restaurant health scores on Yelp….(More)”

Can big databases be kept both anonymous and useful?


The Economist: “….The anonymisation of a data record typically means the removal from it of personally identifiable information. Names, obviously. But also phone numbers, addresses and various intimate details like dates of birth. Such a record is then deemed safe for release to researchers, and even to the public, to make of it what they will. Many people volunteer information, for example to medical trials, on the understanding that this will happen.

But the ability to compare databases threatens to make a mockery of such protections. Participants in genomics projects, promised anonymity in exchange for their DNA, have been identified by simple comparison with electoral rolls and other publicly available information. The health records of a governor of Massachusetts were plucked from a database, again supposedly anonymous, of state-employee hospital visits using the same trick. Reporters sifting through a public database of web searches were able to correlate them in order to track down one, rather embarrassed, woman who had been idly searching for single men. And so on.
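The attacks described above are all variants of one mechanism: joining a “de-identified” table against a public dataset on the quasi-identifiers that were left in. A toy sketch of such a linkage attack, with all records, names and field values invented for illustration:

```python
# An "anonymised" medical table with names removed but quasi-identifiers
# (postcode, birth date, sex) retained, plus a public voter roll.
anonymised_visits = [
    {"zip": "02138", "dob": "1945-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "dob": "1962-03-14", "sex": "M", "diagnosis": "asthma"},
]

voter_roll = [
    {"name": "J. Doe", "zip": "02138", "dob": "1945-07-31", "sex": "F"},
    {"name": "A. Roe", "zip": "02139", "dob": "1962-03-14", "sex": "M"},
    {"name": "B. Poe", "zip": "02139", "dob": "1970-01-01", "sex": "M"},
]

def reidentify(visits, roll):
    """Join stripped records to voters on (zip, dob, sex).

    A record is re-identified when exactly one voter matches its
    quasi-identifiers; the sensitive field is then attached to a name."""
    hits = []
    for v in visits:
        matches = [p for p in roll
                   if (p["zip"], p["dob"], p["sex"]) == (v["zip"], v["dob"], v["sex"])]
        if len(matches) == 1:
            hits.append((matches[0]["name"], v["diagnosis"]))
    return hits

print(reidentify(anonymised_visits, voter_roll))
# → [('J. Doe', 'hypertension'), ('A. Roe', 'asthma')]
```

In this toy example, both “anonymous” records resolve to a unique voter; the more columns of data exhaust an attacker can join on, the more often that happens.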

Each of these headline-generating stories creates a demand for more controls. But that, in turn, deals a blow to the idea of open data—that the electronic “data exhaust” people exhale more or less every time they do anything in the modern world is actually useful stuff which, were it freely available for analysis, might make that world a better place.

Of cake, and eating it

Modern cars, for example, record in their computers much about how, when and where the vehicle has been used. Comparing the records of many vehicles, says Viktor Mayer-Schönberger of the Oxford Internet Institute, could provide a solid basis for, say, spotting dangerous stretches of road. Similarly, an opening of health records, particularly in a country like Britain, which has a national health service, and cross-fertilising them with other personal data, might help reveal the multifarious causes of diseases like Alzheimer’s.

This is a true dilemma. People want both perfect privacy and all the benefits of openness. But they cannot have both. The stripping of a few details as the only means of assuring anonymity, in a world choked with data exhaust, cannot work. Poorly anonymised data are only part of the problem. What may be worse is that there is no standard for anonymisation. Every American state, for example, has its own prescription for what constitutes an adequate standard.

Worse still, devising a comprehensive standard may be impossible. Paul Ohm of Georgetown University, in Washington, DC, thinks that this is partly because the availability of new data constantly shifts the goalposts. “If we could pick an industry standard today, it would be obsolete in short order,” he says. Some data, such as those about medical conditions, are more sensitive than others. Some data sets provide great precision in time or place, others merely a year or a postcode. Each set presents its own dangers and requirements.

Fortunately, there are a few easy fixes. Thanks in part to the headlines, many now agree that public release of anonymised data is a bad move. Data could instead be released piecemeal, or kept in-house and accessible by researchers through a question-and-answer mechanism. Or some users could be granted access to raw data, but only in strictly controlled conditions.
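One widely studied formalisation of such a question-and-answer mechanism is differential privacy, in which each numeric answer is perturbed with noise calibrated so that no individual’s presence in the dataset can be inferred from the response. A minimal sketch, with an invented query and dataset (this is an illustration of the idea, not a production mechanism):

```python
import math
import random

def noisy_count(records, predicate, epsilon=0.5, rng=None):
    """Answer a counting query with Laplace noise of scale 1/epsilon.

    A count changes by at most 1 when one person is added or removed,
    so noise at this scale masks any single individual's contribution."""
    rng = rng or random.Random()
    true_count = sum(1 for r in records if predicate(r))
    # Sample Laplace(0, 1/epsilon) by inverse CDF of u ~ Uniform(-0.5, 0.5).
    u = rng.random() - 0.5
    scale = 1.0 / epsilon
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Hypothetical in-house research query: how many patients are over 60?
patients = [{"age": a} for a in (34, 61, 72, 45, 68)]
answer = noisy_count(patients, lambda p: p["age"] > 60, rng=random.Random(0))
# answer is the true count (here 3) perturbed by random noise
```

Smaller values of `epsilon` give stronger privacy but noisier answers, which is exactly the trade-off between protection and research utility discussed above.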

All these approaches, though, are anathema to the open-data movement, because they limit the scope of studies. “If we’re making it so hard to share that only a few have access,” says Tim Althoff, a data scientist at Stanford University, “that has profound implications for science, for people being able to replicate and advance your work.”

Purely legal approaches might mitigate that. Data might come with what have been called “downstream contractual obligations”, outlining what can be done with a given data set and holding any onward recipients to the same standards. One perhaps draconian idea, suggested by Daniel Barth-Jones, an epidemiologist at Columbia University, in New York, is to make it illegal even to attempt re-identification….(More).”