Handbook: How to Catalyze Humanitarian Innovation in Computing Research Institutes


Patrick Meier: “The handbook below provides practical collaboration guidelines for both humanitarian organizations & computing research institutes (CRIs) on how to catalyze humanitarian innovation through successful partnerships. These actionable guidelines are directly applicable now and draw on extensive interviews with leading humanitarian groups and CRIs including the International Committee of the Red Cross (ICRC), United Nations Office for the Coordination of Humanitarian Affairs (OCHA), United Nations Children’s Fund (UNICEF), United Nations High Commissioner for Refugees (UNHCR), UN Global Pulse, Carnegie Mellon University (CMU), International Business Machines (IBM), Microsoft Research, the Data Science for Social Good Program at the University of Chicago and others.

This handbook, which is the first of its kind, also draws directly on years of experience and lessons learned from the Qatar Computing Research Institute’s (QCRI) active collaboration and unique partnerships with multiple international humanitarian organizations. The aim of this blog post is to actively solicit feedback on this first, complete working draft, which is available here as an open and editable Google Doc. …(More)”

The Climatologist’s Almanac


Clara Chaisson at onEarth: “Forget your weather app with its five- or even ten-day forecasts—a supercomputer at NASA has just provided us with high-resolution climate projections through the end of the century. The massive new 11-terabyte data set combines historical daily temperatures and precipitation measurements with climate simulations under two greenhouse gas emissions scenarios. The project spans from 1950 to 2100, but users can easily zero in on daily timescales for their own locales—which is precisely the point.

The projections are hosted for free on Amazon Web Services for all to see and plan by. The space agency hopes that developing nations and poorer communities that may not have any spare supercomputers lying around will use the info to predict and prepare for climate change. …(More)”
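For readers who want to poke at the files themselves, here is a minimal Python sketch for browsing the dataset anonymously. It assumes the projections sit in Amazon's public nasanex S3 bucket under a NEX-GDDP prefix (per Amazon's open-data registry listing) and that boto3 is installed; treat the bucket and prefix names as assumptions to verify.

```python
# Minimal sketch: list a few files from NASA's downscaled climate
# projections on AWS. Bucket/prefix names ("nasanex", "NEX-GDDP/") are
# assumptions based on Amazon's open-data registry; verify before use.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# The bucket is publicly readable, so an unsigned (anonymous) client works.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

resp = s3.list_objects_v2(Bucket="nasanex", Prefix="NEX-GDDP/", MaxKeys=10)
for obj in resp.get("Contents", []):
    print(f"{obj['Key']}  ({obj['Size'] / 1e6:.1f} MB)")

# A listed NetCDF file can then be pulled down for local analysis, e.g.:
# s3.download_file("nasanex", "<key from the listing above>", "local.nc")
```

From there the NetCDF files can be opened with standard tools such as netCDF4 or xarray to zero in on a single locale and day, as the article describes.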

Architecting Transparency: Back to the Roots – and Forward to the Future?


Paper by Dieter Zinnbauer: “Where to go next in research and practice on information disclosure and institutional transparency? Where to learn and draw inspiration from? How about if we go back to the roots and embrace an original, material notion of transparency as the quality of a substance or element to be see-through? How about if we then explore how the deliberate use and assemblage of such physical transparency strategies in architecture and design connects to – or could productively connect to – the institutional, political notions of transparency that we are concerned with in our area of institutional or political transparency? Or, put more simply and zooming in on one core aspect of the conversation: what has the arrival of glass and its siblings done for democracy, and what can we still hope they will do for open, transparent governance now and in the future?

This paper embarks upon this exploratory journey in four steps. It starts out (section 2.1) by revisiting the historic relationship between architecture, design and the built environment on the one side and institutional ambitions for democracy, openness, transparency and collective governance on the other side. Quite surprisingly, it finds a very close and ancient relationship between the two. Physical and political transparency have through the centuries been joined at the hip, and this relationship – overlooked as it typically is – has persisted in very important ways in our contemporary institutions of governance. As a second step I seek to trace the major currents in the architectural debate and practice on transparency over the last century and ask three principal questions:

– How have architects as the master-designers of the built environment in theory, criticism and practice historically grappled with the concept of transparency? To what extent have they linked material notions and building strategies of transparency to political and social notions of transparency as tools for emancipation and empowerment? (section 2.2)

– What is the status of transparency in architecture today and what is the degree of cross-fertilisation between physical and institutional/political transparency? (section 3)

– Where could a closer connect between material and political transparency lead us in terms of inspiring fresh experimentation and action in order to broaden the scope of available transparency tools and spawn fresh ideas and innovation? (section 4).

Along the way I will scan the fragmented empirical evidence base for the actual impact of physical transparency strategies and also flag interesting areas for future research. As it turns out, an obsession with material transparency in architecture and the built environment has evolved in parallel with, and in many ways predates, the rising popularity of transparency in political science and governance studies. There are surprising parallels in the hype-and-skepticism curve, common challenges, interesting learning experiences and a rich repertoire of ideas for cross-fertilisation and joint ideation waiting to be tapped. However, this will require finding ways to bridge the current disconnect between the physical and institutional transparency professions and moving beyond the current pessimism about the actual potential of physical transparency beyond empty gestures or deployment for surveillance, notions that seem to linger on both sides. But the analysis shows that this bridge-building could be an extremely worthwhile endeavor. Both the available empirical data and the ideas that even this first brief excursion into physical transparency has yielded bode well for embarking on this cross-disciplinary conversation about transparency. And as the essay also shows, help from three very unexpected corners might be on the way to re-ignite the spark for taking the physical dimension of transparency seriously again. Back to the roots has a bright future….(More)

Flawed Humans, Flawed Justice


Adam Benforado in the New York Times on using “…lessons from behavioral science to make police and courts more fair…. WHAT would it take to achieve true criminal justice in America?

Imagine that we got rid of all of the cops who cracked racist jokes and prosecutors blinded by a thirst for power. Imagine that we cleansed our courtrooms of lying witnesses and foolish jurors. Imagine that we removed every judge who thought the law should bend to her own personal agenda and every sadistic prison guard.

We would certainly feel just then. But we would be wrong.

We would still have unarmed kids shot in the back and innocent men and women sentenced to death. We would still have unequal treatment, disregarded rights and profound mistreatment.

The reason is simple and almost entirely overlooked: Our legal system is based on an inaccurate model of human behavior. Until recently, we had no way of understanding what was driving people’s thoughts, perceptions and actions in the criminal arena. So, we built our institutions on what we had: untested assumptions about what deceit looks like, how memories work and when punishment is merited.

But we now have tools — from experimental methods and data collection approaches to brain-imaging technologies — that provide an incredible opportunity to establish a new and robust foundation.

Our justice system must be reconstructed upon scientific fact. We can start by acknowledging what the data says about the fundamental flaws in our current legal processes and structures.

Consider the evidence that we treat as nearly unassailable proof of guilt at trial — an unwavering eyewitness, a suspect’s signed confession or a forensic match to the crime scene.

While we charge tens of thousands of people with crimes each year after they are identified in police lineups, research shows that eyewitnesses chose an innocent person roughly one-third of the time. Our memories can fail us because we’re frightened. They can be altered by the word choice of a detective. They can be corrupted by previously seeing someone’s image on a social media site.

Picking out lying suspects from their body language is ineffective. And trying then to gain a confession by exaggerating the strength of the evidence and playing down the seriousness of the offense can encourage people to admit to terrible things they didn’t do.

Even seemingly objective forensic analysis is far from incorruptible. Recent data shows that fingerprint — and even DNA — matches are significantly more likely when the forensic expert is aware that the sample comes from someone the police believe is guilty.

With the aid of psychology, we see there’s a whole host of seemingly extraneous forces influencing behavior and producing systematic distortions. But they remain hidden because they don’t fit into our familiar legal narratives.

We assume that the specific text of the law is critical to whether someone is convicted of rape, but research shows that the details of the criminal code — whether it includes a “force” requirement or excuses a “reasonably mistaken” belief in consent — can be irrelevant. What matters are the backgrounds and identities of the jurors.

When a black teenager is shot by a police officer, we expect to find a bigot at the trigger.

But studies suggest that implicit bias, rather than explicit racism, is behind many recent tragedies. Indeed, simulator experiments show that the biggest danger posed to young African-American men may not be hate-filled cops, but well-intentioned police officers exposed to pervasive, damaging stereotypes that link the concepts of blackness and violence.

Likewise, Americans have been sold a myth that there are two kinds of judges — umpires and activists — and that being unbiased is a choice that a person makes. But the truth is that all judges are swayed by countless forces beyond their conscious awareness or control. It should have no impact on your case, for instance, whether your parole hearing is scheduled first thing in the morning or right before lunch, but when scientists looked at real parole boards, they found that judges were far more likely to grant petitions at the beginning of the day than they were midmorning.

The choice of where to place the camera in an interrogation room may seem immaterial, yet experiments show that it can affect whether a confession is determined to be coerced. When people watch a recording with the camera behind the detective, they are far more likely to find that the confession was voluntary than when watching the interactions from the perspective of the suspect.

With such challenges to our criminal justice system, what can possibly be done? The good news is that an evidence-based approach also illuminates the path forward.

Once we have clear data that something causes a bias, we can then figure out how to remove that influence. …(More)

The death of data science – and rise of the citizen scientist


Ben Rossi at Information Age: “The notion of data science was born from the recent idea that if you have enough data, you don’t need much (if any) science to divine the truth and foretell the future – as opposed to the long-established rigours of statistical or actuarial science, which most times require painstaking efforts and substantial time to produce their version of ‘the truth’. …. Rather than embracing this untested and, perhaps, doomed form of science, and aimlessly searching for unicorns (also known as data scientists) to pay vast sums to, many organisations are now embracing the idea of making everyone data and analytics literate.

This leads me to what my column is really meant to focus on: the rise of the citizen scientist. 

The citizen scientist is not a new idea, having seen action in the space and earth sciences world for decades now, and has really come into its own as we enter the age of open data.

Cometh the hour

Given the exponential growth of open data initiatives across the world – the UK remains the leader, but has growing competition from all locations – the need for citizen scientists is now paramount. 

As governments open up vast repositories of new data of every type, the opportunity for these same governments (and commercial interests) to leverage the passion, skills and collective know-how of citizen scientists to help garner deeper insights into the scientific and civic challenges of the day is substantial. 

They can then take this knowledge and the collective energy of the citizen scientist community to develop common solution sets and applications to meet the needs of all their constituencies without expending much in terms of financial resources or suffering substantial development time lags. 

This can be a windfall of benefits for every level or type of government found around the world. The use of citizen scientists to tackle so-called ‘grand challenge’ problems has been a driving force behind many governments’ commitment to and investment in open data to date. 

There are so many challenges in governing today that it would be foolish not to employ these very capable resources to help tackle them. 

The benefits manifested from this approach are substantial and well proven. Many are well articulated in the open data success stories to date. 

Additionally, you only need to attend a local ‘hack fest’ to see how engaged citizen scientists of any age, gender and race can be, and to feel the sense of community that these events foster as everyone focuses on the challenges at hand and works diligently to surmount them using very creative approaches. 

As open data becomes pervasive in use and matures in respect to the breadth and richness of the data sets being curated, the benefits returned to both government and its constituents will be manifold. 

The catalyst to realising these benefits and achieving return on investment will be the role of citizen scientists, who are not going to be statisticians, actuaries or so-called data gurus, but ordinary people with a passion for science and learning and a desire to contribute to solving the many grand challenges facing society at large….(More)

Policy Practice and Digital Science


New book edited by Janssen, Marijn, Wimmer, Maria A., and Deljoo, Ameneh: “The explosive growth in data, computational power, and social media creates new opportunities for innovating the processes and solutions of information and communications technology (ICT) based policy-making and research. To take advantage of these developments in the digital world, new approaches, concepts, instruments and methods are needed to navigate the societal and computational complexity. This requires extensive interdisciplinary knowledge of public administration, policy analysis, information systems, complex systems and computer science. This book provides the foundation for this new interdisciplinary field, in which various traditional disciplines are blending. Policy makers, executors and those in charge of policy implementation alike acknowledge that ICT is becoming more important and is changing the policy-making process, resulting in next-generation policy-making based on ICT support. Web 2.0 and even Web 3.0 point to the specific applications of social networks and semantically enriched and linked data, whereas policy-making also has to do with the use of vast amounts of data, predictions and forecasts, and improving the outcomes of policy-making, which is confronted with increasing complexity and uncertainty in its outcomes. The field of policy-making is changing, driven by developments like open data, computational methods for processing data, opinion mining, simulation and visualization of rich data sets, all combined with public engagement, social media and participatory tools….(More)”

Social Dimensions of Privacy


New book edited by Dorota Mokrosinska and Beate Roessler: “Written by a select international group of leading privacy scholars, Social Dimensions of Privacy endorses and develops an innovative approach to privacy. By debating topical privacy cases in their specific research areas, the contributors explore the new privacy-sensitive areas: legal scholars and political theorists discuss the European and American approaches to privacy regulation; sociologists explore new forms of surveillance and privacy on social network sites; and philosophers revisit feminist critiques of privacy, discuss markets in personal data, issues of privacy in health care and democratic politics. The broad interdisciplinary character of the volume will be of interest to readers from a variety of scientific disciplines who are concerned with privacy and data protection issues.

  • Takes an innovative approach to privacy which focuses on the social dimensions and value of privacy in contrast to the value of privacy for individuals
  • Addresses readers from a variety of disciplines, including law, philosophy, media studies, gender studies and political science
  • Addresses new privacy-sensitive areas triggered by recent technological developments (More)”

New ODI research shows open data reaching every sector of UK industry


ODI: “New research has been published today (1 June) by the Open Data Institute showing that open data is reaching every sector of UK industry.

In various forms, open data is being adopted by a wide variety of businesses – small and large, new and old, from right across the country. The findings from Open data means business: UK innovation across sectors and regions draw on 270 companies with a combined turnover of £92bn and over 500k employees, identified by the ODI as using, producing or investing in open data as part of their business. The project included desk research, surveys and interviews on the companies’ experiences.

Key findings from the research include:

  • Companies using open data come from many sectors; over 46% from outside the information and communication sector. These include finance & insurance, science & technology, business administration & support, arts & entertainment, health, retail, transportation, education and energy.
  • The most popular datasets for companies are geospatial/mapping data (57%), transport data (43%) and environment data (42%).
  • 39% of companies innovating with open data are over 10 years old, with some more than 25 years old, proving open data isn’t just for new digital startups.
  • ‘Micro-enterprises’ (businesses with fewer than 10 employees) represented 70% of survey respondents, demonstrating a thriving open data startup scene. These businesses are using it to create services, products and platforms. 8% of respondents were drawn from large companies of 251 or more employees….
  • The companies surveyed listed 25 different government sources for the data they use. Notably, Ordnance Survey data was cited most frequently, by 14% of the companies. The non-government source most commonly used was OpenStreetMap, an openly licensed map of the world created by volunteers….(More)

Governing methods: policy innovation labs, design and data science in the digital governance of education


Paper by Ben Williamson in the Journal of Educational Administration and History: “Policy innovation labs are emerging knowledge actors and technical experts in the governing of education. The article offers a historical and conceptual account of the organisational form of the policy innovation lab. Policy innovation labs are characterised by specific methods and techniques of design, data science, and digitisation in public services such as education. The second half of the article details how labs promote the use of digital data analysis, evidence-based evaluation and ‘design-for-policy’ techniques as methods for the governing of education. In particular, they promote the ‘computational thinking’ associated with computer programming as a capacity required by a ‘reluctant state’ that is increasingly concerned to delegate its responsibilities to digitally enabled citizens with the ‘designerly’ capacities and technical expertise to ‘code’ solutions to public and social problems. Policy innovation labs are experimental laboratories trialling new methods within education for administering and governing the future of the state itself….(More)”

Want better science? Quit hoarding data, genetics researchers say


Nidhi Subbaraman at BetaBoston: “When Andrea Downing was 25, she got screened for the BRCA genes known to be associated with a variety of cancers, including breast cancer. Both her great-grandmother and grandmother had been diagnosed with the disease, so the results were no surprise: Downing carried the BRCA1 mutation in her genes. She learned there was an 87 percent chance she would get breast cancer during her lifetime, and a 60 percent chance she would get ovarian cancer.
The revelation brought with it a dizzying range of choices. Should she get a mastectomy before the cancer showed? Should she choose to have her ovaries removed? Could she wait until after she had kids?

For the first several years after her diagnosis, Downing sought out support groups, then began booking appointments with researchers and examining the latest literature. “I’m a little different from your usual patient who tested positive,” Downing said. “I wanted to go beyond to challenge myself and understand the science of cancer.”

Then, in 2013, she chanced on ClinVar, a research database funded by the National Institutes of Health that acts as a kind of Wikipedia to catalogue scientific research on mutations in genes. It gave her a roadmap for the research associated with her variant, called C16G.

Downing typed in the letters and numbers of her mutation, and the website spit out a list of companies and labs that have studied her variant. Though much of that information was technical, she said, “the things I do understand about it are very empowering. It’s a starting point to answering questions I don’t know.”
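The same kind of lookup can be scripted. Below is a rough sketch against NCBI's public E-utilities API; the gene-level search term is illustrative (it is not Downing's exact record), and the JSON field names reflect the API as documented, so treat the details as assumptions to check.

```python
# Rough sketch: query ClinVar through NCBI's public E-utilities API.
# The search term below is a generic gene query, not a specific variant.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

# Step 1: esearch returns the IDs of ClinVar records matching a query.
search = requests.get(f"{EUTILS}/esearch.fcgi", params={
    "db": "clinvar",
    "term": "BRCA1[gene]",  # illustrative: all ClinVar records for BRCA1
    "retmode": "json",
    "retmax": 5,
}).json()
ids = search["esearchresult"]["idlist"]

# Step 2: esummary returns a short document summary for each record ID.
summary = requests.get(f"{EUTILS}/esummary.fcgi", params={
    "db": "clinvar",
    "id": ",".join(ids),
    "retmode": "json",
}).json()

for uid in ids:
    print(uid, "-", summary["result"][uid].get("title", ""))
```

A query for a specific variant would swap in that variant's name or identifier as the search term; the returned records link out to the submitting labs and companies, which is the roadmap Downing describes.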

When the database first launched, the idea was that the single repository would present a unified picture of a variant, drawing from all available research that was publicly shared by companies and research labs.

Two years later, the team behind the operation has published a progress report of sorts in the New England Journal of Medicine. They argue that this shared approach is working — doctors and researchers are using the database — and they are advocating for more companies and groups to join the effort to reach a more comprehensive understanding of the variants in disease genes. In particular, they’re challenging companies to be more open with their data, instead of keeping it to themselves….(More)”