New Field Guide Explores Open Data Innovations in Disaster Risk and Resilience


World Bank: “From Indonesia to Bangladesh to Nepal, community members armed with smartphones and GPS systems are contributing to some of the most extensive and versatile maps ever created, helping inform policy and better prepare their communities for disaster risk.
In Jakarta, more than 500 community members have been trained to collect data on thousands of hospitals, schools, private buildings, and critical infrastructure. In Sri Lanka, government and academic volunteers mapped over 30,000 buildings and 450 km of roadways using a collaborative online resource called OpenStreetMap.
These are just a few of the projects that have been catalyzed by the Open Data for Resilience Initiative (OpenDRI), developed by the World Bank’s Global Facility for Disaster Reduction and Recovery (GFDRR). Launched in 2011, OpenDRI is active in more than 20 countries today, mapping tens of thousands of buildings and urban infrastructure, providing more than 1,000 geospatial datasets to the public, and developing innovative application tools.
To expand this work, the World Bank Group has launched the OpenDRI Field Guide as a showcase of successful projects and a practical guide for governments and other organizations to shape their own open data programs….
The field guide walks readers through the steps to build open data programs based on the OpenDRI methodology. One of the first steps is data collation. Relevant datasets are often locked behind proprietary arrangements or fragmented across government bureaucracies. The field guide explores tools and methods to enable the participatory mapping projects that can fill in gaps and keep existing data relevant as cities rapidly expand.

GeoNode: Mapping Disaster Damage for Faster Recovery
One example is GeoNode, a locally controlled, open-source cataloguing tool that helps manage and visualize geospatial data. The tool, already in use in two dozen countries, can be modified and easily integrated into existing platforms, giving communities greater control over mapping information.
GeoNode was used extensively after Typhoon Yolanda (Haiyan) swept the Philippines with 300 km/hour winds and a storm surge of over six meters last fall. The storm displaced nearly 11 million people and killed more than 6,000.
An event-specific GeoNode project was created immediately and ultimately collected more than 72 layers of geospatial data, from damage assessments to situation reports. The data and quick-analysis capability contributed to recovery efforts, and the platform is still operating in response mode at Yolandadata.org.
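To make the cataloguing idea concrete, here is a minimal sketch of pulling layer metadata from a GeoNode instance over HTTP. The /api/layers/ path and filter parameter follow GeoNode 2.x conventions, but treat the endpoint, the parameters, and the demo URL as assumptions to verify against your own instance.

```python
import requests

GEONODE_URL = "https://demo.geonode.org"  # hypothetical instance; use your own

def list_layers(base_url, keyword=None):
    """Fetch layer metadata from a GeoNode catalogue's REST API."""
    params = {"format": "json"}
    if keyword:
        # Assumed filter parameter (GeoNode 2.x tastypie-style filtering).
        params["title__icontains"] = keyword
    resp = requests.get(f"{base_url}/api/layers/", params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("objects", [])

for layer in list_layers(GEONODE_URL, keyword="damage"):
    print(layer.get("title"), "-", (layer.get("abstract") or "")[:60])
```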
InaSAFE: Targeting Risk Reduction
A sister project, InaSAFE, is an open, easy-to-use tool for creating impact assessments for targeted risk reduction. The assessments are based on how a hazard layer – such as a tsunami, flood, or earthquake – affects exposure data, such as population or buildings.
With InaSAFE, users can generate maps and statistical information that can be easily disseminated and even fed back into projects like GeoNode for simple, open source sharing.
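InaSAFE itself runs as a QGIS plugin, so the snippet below is not its API; it is just a toy overlay, written with the shapely geometry library and made-up coordinates, to illustrate the hazard-on-exposure calculation described above.

```python
from shapely.geometry import Point, Polygon

# Hypothetical hazard layer: a flood extent polygon in map coordinates.
flood_zone = Polygon([(0, 0), (0, 4), (5, 4), (5, 0)])

# Hypothetical exposure layer: building locations with occupant counts.
buildings = [
    {"name": "School A", "geom": Point(1, 1), "occupants": 400},
    {"name": "Clinic B", "geom": Point(6, 2), "occupants": 80},
    {"name": "Market C", "geom": Point(3, 3), "occupants": 250},
]

# The impact assessment is the overlay: which exposure features fall
# inside the hazard extent, and how many people do they represent?
affected = [b for b in buildings if flood_zone.contains(b["geom"])]
print(f"{len(affected)} of {len(buildings)} buildings are in the flood zone")
print("Estimated occupants affected:", sum(b["occupants"] for b in affected))
```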
The initiative, developed in collaboration with AusAID and the Government of Indonesia, was put to the test in the 2012 flood season in Jakarta, and its success prompted a rapid national rollout and widespread interest from the international community.
Open Cities: Improving Urban Planning & Resilience
The Open Cities project, another program operating under the OpenDRI platform, aims to catalyze the creation, management and use of open data to produce innovative solutions for urban planning and resilience challenges across South Asia.
In 2013, Kathmandu was chosen as a pilot city, in part because the population faces the highest mortality threat from earthquakes in the world. Under the project, teams from the World Bank assembled partners and community mobilizers to help execute the largest regional community mapping project to date. The project surveyed more than 2,200 schools and 350 health facilities, along with road networks, points of interest, and digitized building footprints – representing nearly 340,000 individual data nodes.”

After the Protests


Zeynep Tufekci in the New York Times on why social media is fueling a boom-and-bust cycle of political protest: “Last Wednesday, more than 100,000 people showed up in Istanbul for a funeral that turned into a mass demonstration. No formal organization made the call. The news had come from Twitter: Berkin Elvan, 15, had died. He had been hit in the head by a tear-gas canister on his way to buy bread during the Gezi protests last June. During the 269 days he spent in a coma, Berkin’s face had become a symbol of civic resistance shared on social media from Facebook to Instagram, and the response, when his family tweeted “we lost our son” and then a funeral date, was spontaneous.

Protests like this one, fueled by social media and erupting into spectacular mass events, look like powerful statements of opposition against a regime. And whether these take place in Turkey, Egypt or Ukraine, pundits often speculate that the days of a ruling party or government, or at least its unpopular policies, must be numbered. Yet often these huge mobilizations of citizens inexplicably wither away without the impact on policy you might expect from their scale.

This muted effect is not because social media isn’t good at what it does, but, in a way, because it’s very good at what it does. Digital tools make it much easier to build up movements quickly, and they greatly lower coordination costs. This seems like a good thing at first, but it often results in an unanticipated weakness: Before the Internet, the tedious work of organizing that was required to circumvent censorship or to organize a protest also helped build infrastructure for decision making and strategies for sustaining momentum. Now movements can rush past that step, often to their own detriment….

But after all that, in the approaching local elections, the ruling party is expected to retain its dominance.

Compare this with what it took to produce and distribute pamphlets announcing the Montgomery bus boycott in 1955. Jo Ann Robinson, a professor at Alabama State College, and a few students sneaked into the duplicating room and worked all night to secretly mimeograph 52,000 leaflets to be distributed by hand with the help of 68 African-American political, religious, educational and labor organizations throughout the city. Even mundane tasks like coordinating car pools (in an era before there were spreadsheets) required endless hours of collaborative work.

By the time the United States government was faced with the March on Washington in 1963, the protest amounted to not just 300,000 demonstrators but the committed partnerships and logistics required to get them all there — and to sustain a movement for years against brutally enforced Jim Crow laws. That movement had the capacity to leverage boycotts, strikes and demonstrations to push its cause forward. Recent marches on Washington of similar sizes, including the 50th anniversary march last year, also signaled discontent and a desire for change, but just didn’t pose the same threat to the powers that be.

Social media can provide a huge advantage in assembling the strength in numbers that movements depend on. Those “likes” on Facebook, derided as slacktivism or clicktivism, can have long-term consequences by defining which sentiments are “normal” or “obvious” — perhaps among the most important levers of change. That’s one reason the same-sex marriage movement, which uses online and offline visibility as a key strategy, has been so successful, and it’s also why authoritarian governments try to ban social media.

During the Gezi protests, Prime Minister Recep Tayyip Erdogan called Twitter and other social media a “menace to society.” More recently, Turkey’s Parliament passed a law greatly increasing the government’s ability to censor online content and expand surveillance, and Mr. Erdogan said he would consider blocking access to Facebook and YouTube. It’s also telling that one of the first moves by President Vladimir V. Putin of Russia before annexing Crimea was to shut down the websites of dissidents in Russia.
Media in the hands of citizens can rattle regimes. It makes it much harder for rulers to maintain legitimacy by controlling the public sphere. But activists, who have made such effective use of technology to rally supporters, still need to figure out how to convert that energy into greater impact. The point isn’t just to challenge power; it’s to change it.”

The data gold rush


Neelie Kroes (European Commission): “Nearly 200 years ago, the industrial revolution saw new networks take over. Not just a new form of transport, the railways connected industries, connected people, energised the economy, transformed society.
Now we stand facing a new industrial revolution: a digital one.
With cloud computing its new engine, big data its new fuel. Transporting the amazing innovations of the internet, and the internet of things. Running on broadband rails: fast, reliable, pervasive.
My dream is that Europe takes its full part. With European industry able to supply, European citizens and businesses able to benefit, European governments able and willing to support. But we must get all those components right.
What does it mean to say we’re in the big data era?
First, it means more data than ever at our disposal. Take all the information of humanity from the dawn of civilisation until 2003 – nowadays that is produced in just two days. We are also acting to have more and more of it become available as open data, for science, for experimentation, for new products and services.
Second, we have ever more ways – not just to collect that data – but to manage it, manipulate it, use it. That is the magic to find value amid the mass of data. The right infrastructure, the right networks, the right computing capacity and, last but not least, the right analysis methods and algorithms help us break through the mountains of rock to find the gold within.
Third, this is not just some niche product for tech-lovers. The impact and difference to people’s lives are huge: in so many fields.
Transforming healthcare, using data to develop new drugs, and save lives. Greener cities with fewer traffic jams, and smarter use of public money.
A business boost: like retailers who communicate smarter with customers, for more personalisation, more productivity, a better bottom line.
No wonder big data is growing 40% a year. No wonder data jobs grow fast. No wonder skills and profiles that didn’t exist a few years ago are now hot property: and we need them all, from data cleaner to data manager to data scientist.
This can make a difference to people’s lives. Wherever you sit in the data ecosystem – never forget that. Never forget that real impact and real potential.
Politicians are starting to get this. The EU’s Presidents and Prime Ministers have recognised the boost to productivity, innovation and better services from big data and cloud computing.
But those technologies need the right environment. We can’t go on struggling with poor quality broadband. With each country trying on its own. With infrastructure and research that are individual and ineffective, separate and subscale. With different laws and practices shackling and shattering the single market. We can’t go on like that.
Nor can we continue in an atmosphere of insecurity and mistrust.
Recent revelations show what is possible online. They show implications for privacy, security, and rights.
You can react in two ways. One is to throw up your hands and surrender. To give up and put big data in the box marked “too difficult”. To turn away from this opportunity, and turn your back on problems that need to be solved, from cancer to climate change. Or – even worse – to simply accept that Europe won’t figure on this map but will be reduced to importing the results and products of others.
Alternatively: you can decide that we are going to master big data – and master all its dependencies, requirements and implications, including cloud and other infrastructures, Internet of things technologies as well as privacy and security. And do it on our own terms.
And by the way – privacy and security safeguards do not just have to be about protecting and limiting. Data generates value, and unlocks the door to new opportunities: you don’t need to “protect” people from their own assets. What you need is to empower people, give them control, give them a fair share of that value. Give them rights over their data – and responsibilities too, and the digital tools to exercise them. And ensure that the networks and systems they use are affordable, flexible, resilient, trustworthy, secure.
One thing is clear: the answer to greater security is not just to build walls. Many millennia ago, the Greek people realised that. They realised that you can build walls as high and as strong as you like – it won’t make a difference, not without the right awareness, the right risk management, the right security, at every link in the chain. If only the Trojans had realised that too! The same is true in the digital age: keep our data locked up in Europe, engage in an impossible dream of isolation, and we lose an opportunity; without gaining any security.
But master all these areas, and we would truly have mastered big data. Then we would have shown that technology can take account of democratic values; and that a dynamic democracy can cope with technology. Then we would have a boost to benefit every European.
So let’s turn this asset into gold. With the infrastructure to capture and process. Cloud capability that is efficient, affordable, on-demand. Let’s tackle the obstacles, from standards and certification, trust and security, to ownership and copyright. With the right skills, so our workforce can seize this opportunity. With new partnerships, getting all the right players together. And investing in research and innovation. Over the next two years, we are putting 90 million euros on the table for big data and 125 million for the cloud.
I want to respond to this economic imperative. And I want to respond to the call of the European Council – looking at all the aspects relevant to tomorrow’s digital economy.
You can help us build this future. All of you. Helping to bring about the digital data-driven economy of the future. Expanding and deepening the ecosystem around data. New players, new intermediaries, new solutions, new jobs, new growth….”

Climate Data Initiative Launches with Strong Public and Private Sector Commitments


John Podesta and Dr. John P. Holdren at the White House blog:  “…today, delivering on a commitment in the President’s Climate Action Plan, we are launching the Climate Data Initiative, an ambitious new effort bringing together extensive open government data and design competitions with commitments from the private and philanthropic sectors to develop data-driven planning and resilience tools for local communities. This effort will help give communities across America the information and tools they need to plan for current and future climate impacts.
The Climate Data Initiative builds on the success of the Obama Administration’s ongoing efforts to unleash the power of open government data. Since data.gov, the central site to find U.S. government data resources, launched in 2009, the Federal government has released troves of valuable data that were previously hard to access in areas such as health, energy, education, public safety, and global development. Today these data are being used by entrepreneurs, researchers, tech innovators, and others to create countless new applications, tools, services, and businesses.
Data from NOAA, NASA, the U.S. Geological Survey, the Department of Defense, and other Federal agencies will be featured on climate.data.gov, a new section within data.gov that opens for business today. The first batch of climate data being made available will focus on coastal flooding and sea level rise. NOAA and NASA will also be announcing an innovation challenge calling on researchers and developers to create data-driven simulations to help plan for the future and to educate the public about the vulnerability of their own communities to sea level rise and flood events.
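Because climate.data.gov is a section within data.gov, its datasets should be discoverable through the catalogue's standard CKAN search API. A minimal sketch, with illustrative query terms and result fields:

```python
import requests

# Search the data.gov catalogue (CKAN action API) for sea-level-rise data.
resp = requests.get(
    "https://catalog.data.gov/api/3/action/package_search",
    params={"q": "coastal flooding sea level rise", "rows": 5},
    timeout=30,
)
resp.raise_for_status()
for dataset in resp.json()["result"]["results"]:
    print(dataset["title"])
    for res in dataset.get("resources", []):
        print("   ", res.get("format"), res.get("url"))
```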
These and other Federal efforts will be amplified by a number of ambitious private commitments. For example, Esri, the company that produces the ArcGIS software used by thousands of city and regional planning experts, will be partnering with 12 cities across the country to create free and open “maps and apps” to help state and local governments plan for climate change impacts. Google will donate one petabyte—that’s 1,000 terabytes—of cloud storage for climate data, as well as 50 million hours of high-performance computing with the Google Earth Engine platform. The company is challenging the global innovation community to build a high-resolution global terrain model to help communities build resilience to anticipated climate impacts in decades to come. And the World Bank will release a new field guide for the Open Data for Resilience Initiative, which is working in more than 20 countries to map millions of buildings and urban infrastructure….”

Quantified Health – It’s Just A Phase, Get Over It. Please.


Geoff McCleary at PSFK: “The near-ubiquitous acceptance of smartphones and mobile internet access has ushered in a new wave of connected devices and smart objects that help us compile and track an unprecedented amount of previously unavailable data.
This quantification of self, which used to be the sole domain of fitness fanatics and professional athletes, is now being expanded out and applied to everything from how we drive and interface with our cars, to homes that adapt around us, to our daily interactions with others. But the most exciting application of this approach has to be the quantification of health – from how much time we spend on the couch, to how frequently a symptom flares up, even to how adherent we are with our medications.
But this new phase of quantified health is just that – it’s just a phase. How many steps a patient takes is a meaningless data point, unless the information means something to the patient. How many pills we take isn’t going to tell us if we are getting better.
Over time, we begin to see correlations between some of the data points: on the days a user takes their pill, they average 3,000 more steps, but that still doesn’t tell us what is getting better. We can see that when they get a pill reminder every day, they will refill their prescription twice as often as other users. As marketers, we’re happy with that information, but does it make the patient any healthier? Can’t we both be happy?
We can pretty the data up with shiny infographics and widgets, but unless there is meaningful context to that data it is just a nicely organized set of data points. So, what will make a difference? What will get us out of the dark ages of quantified health and into the enlightened age of Personalized Health? What will need to change to get me the treatment I need because of who I am – on a genetic level?…
Our history, our future, our uniqueness and our sameness mean nothing if we cannot get this information on-demand, in real time. This information has to be available when we need it (and when we don’t) on whatever screen is handy, in whatever setting we are in. Our physicians need access to our information and they need it in the context of how others have dealt with the same situation.
This access can only be enabled by a cloud-based, open health profile. As quantified self gave way to quantified health, quantified health must give way to Qualitative Health. This cloud-based profile of our health past, present and future will need to be both quantified and qualitative. Based not only on numbers and raw data, but relevance, context and meaning. Based not on a database or an app, but in the cloud, where personal information will be accessible by whomever we designate, our sameness open and shareable with all – with all contributing to the meaning of our data, and physicians interacting in an informed, consistent manner across our entire health being, instead of just the 20 minutes a year when they see us.
That is truly health care, and I cannot wait for it to get here.”

The Open Data/Environmental Justice Connection


Jeffrey Warren for Wilson’s Commons Lab: “… Open data initiatives seem to assume that all data is born in the hallowed halls of government, industry and academia, and that open data is primarily about convincing such institutions to share it with the public.
It is laudable when institutions with important datasets – such as campaign finance, pollution or scientific data – see the benefit of opening them to the public. But why do we assume unilateral control over data production?
The revolution in user-generated content shows the public has a great deal to contribute – and to gain – from the open data movement. Likewise, citizen science projects that solicit submissions or “task completion” from the public rarely invite higher-level participation in research – let alone true collaboration.
This has to change. Data isn’t just something you’re given if you ask nicely, or a kind of community service we perform to support experts. Increasingly, new technologies make it possible for local groups to generate and control data themselves — especially in environmental health. Communities on the front line of pollution’s effects have the best opportunities to monitor it and the most to gain by taking an active role in the research process.
DIY Data
Luckily, an emerging alliance between the maker/Do-It-Yourself (DIY) movement and watchdog groups is starting to challenge the conventional model.
The Smart Citizen project, the Air Quality Egg and a variety of projects in the Public Lab network are recasting members of the general public as actors in the framing of new research questions and designers of a new generation of data tools.
The Riffle, a <$100 water quality sensor built inside of hardware-store pipe, can be left in a creek near an industrial site to collect data around the clock for weeks or months. In the near future, when pollution happens – like the ash spill in North Carolina or the chemical spill in West Virginia – the public will be alerted and able to track its effects without depending on expensive equipment or distant labs.
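The Riffle itself is open hardware running microcontroller firmware, so the following Python loop is only a sketch of the logging pattern it embodies – timestamped readings appended to a file, around the clock – with the sensor reading stubbed out:

```python
import csv
import random
import time
from datetime import datetime, timezone

def read_conductivity():
    """Stub standing in for a real water-quality sensor reading."""
    return 420 + random.uniform(-15, 15)  # fake value, microsiemens/cm

with open("creek_log.csv", "a", newline="") as log:
    writer = csv.writer(log)
    for _ in range(3):  # a real deployment loops for weeks or months
        writer.writerow([
            datetime.now(timezone.utc).isoformat(),
            round(read_conductivity(), 1),
        ])
        time.sleep(1)  # real deployments might sample every few minutes
print("Appended timestamped samples to creek_log.csv")
```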
This emerging movement is recasting environmental issues not as intractably large problems, but up-close-and-personal health issues — just what environmental justice (EJ) groups have been arguing for years. The difference is that these new initiatives hybridize such EJ community organizers and the technology hackers of the open hardware movement. Just as the Homebrew Computer Club’s tinkering with early prototypes led to the personal computer, a new generation of tinkerers sees that their affordable, accessible techniques can make an immediate difference in investigating lead in their backyard soil, nitrates in their tap water and particulate pollution in the air they breathe.
These practitioners see that environmental data collection is not a distant problem in a developing country, but an issue that anyone in a major metropolitan area, or an area affected by oil and gas extraction, faces on a daily basis. Though underserved communities are often disproportionately affected, these threats often transcend socioeconomic boundaries…”

“Open-washing”: The difference between opening your data and simply making them available


Christian Villum at the Open Knowledge Foundation Blog: “Last week, the Danish IT magazine Computerworld, in an article entitled “Check-list for digital innovation: These are the things you must know”, emphasised how more and more companies are discovering that giving your users access to your data is a good business strategy. Among other things, they wrote:

(Translation from Danish) According to Accenture, it is becoming clear to many progressive businesses that their data should be treated like any other supply chain: it should flow easily and unhindered through the whole organisation, and perhaps even out into the whole ecosystem – for instance through fully open APIs.

They then use Google Maps as an example, which firstly isn’t entirely correct, as also pointed out by the Neogeografen, a geodata blogger, who explains how Google Maps isn’t offering raw data, but merely an image of the data. You are not allowed to download and manipulate the data – or run it off your own server.

But secondly I don’t think it’s very appropriate to highlight Google and their Maps project as a golden example of a business that lets its data flow unhindered to the public. It’s true that they are offering some data, but only in a very limited way – and definitely not as open data – and thereby not as progressively as the article suggests.

Surely it’s hard to accuse Google of not being progressive in general. The article states how Google Maps’ data are used by over 800,000 apps and businesses across the globe. So yes, Google has opened its silo a little bit, but only in a very controlled and limited way, which leaves these 800,000 businesses dependent on the continual flow of data from Google and thereby not allowing them to control the very commodities they’re basing their business on. This particular way of releasing data brings me to the problem that we’re facing: Knowing the difference between making data available and making them open.

Open data is characterized not only by being available, but by being both legally open (released under an open license that allows full and free reuse, conditioned at most on crediting its source and sharing under the same license) and technically available in bulk and in machine-readable formats – contrary to the case of Google Maps. It may be that their data are available, but they’re not open. This – among other reasons – is why the global community around the 100% open alternative OpenStreetMap is growing rapidly and an increasing number of businesses choose to base their services on this open initiative instead.
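The difference shows up in practice: OpenStreetMap's raw data can be downloaded in bulk, in machine-readable form, and reused freely. A minimal sketch using the public Overpass API (a real community endpoint, though the bounding box here is arbitrary and rate limits apply):

```python
import requests

# Overpass QL query: all hospital nodes in a small bounding box
# (south, west, north, east); the box here is arbitrary for illustration.
query = """
[out:json][timeout:25];
node["amenity"="hospital"](6.8,79.8,7.0,80.0);
out body;
"""

resp = requests.post(
    "https://overpass-api.de/api/interpreter",
    data={"data": query},
    timeout=60,
)
resp.raise_for_status()
elements = resp.json()["elements"]
print(f"Fetched {len(elements)} raw, reusable hospital nodes from OSM")
```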

But why is it important that data are open and not just available? Open data strengthens society and builds a shared resource, where all users, citizens and businesses are enriched and empowered, not just the data collectors and publishers. “But why would businesses spend money on collecting data and then give them away?” you ask. Opening your data and making a profit are not mutually exclusive. A quick Google search reveals many businesses that both offer open data and drive a business on them – and I believe these are the ones that should be highlighted as particularly progressive in articles such as the one from Computerworld….

We are seeing a rising trend of what can be termed “open-washing” (inspired by “greenwashing”) – meaning data publishers that claim their data is open, even when it’s not, but rather just available under limiting terms. If we – at this critical time in the formative period of the data-driven society – aren’t critically aware of the difference, we’ll end up putting our vital data streams in siloed infrastructure built and owned by international corporations – and giving our praise and support to the wrong kind of unsustainable technological development.”

How Open Data Policies Unlock Innovation


Tim Cashman at Socrata: “Several trends made the Web 2.0 world we now live in possible. Arguably, the most important of these has been the evolution of online services as extensible technology platforms that enable users, application developers, and other collaborators to create value that extends far beyond the original offering itself.

The Era of ‘Government-as-a-Platform’

The same principles that have shaped the consumer web are now permeating government. Forward-thinking public sector organizations are catching on to the idea that, to stay relevant and vital, governments must go beyond offering a few basic services online. Some have even come to the realization that they are custodians of an enormously valuable resource: the data they collect through their day-to-day operations.  By opening up this data for public consumption online, innovative governments are facilitating the same kind of digital networks that consumer web services have fostered for years.  The era of government as a platform is here, and open data is the catalyst.
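To a developer, “government as a platform” means any published dataset becomes a read endpoint. The sketch below follows Socrata's SODA URL convention (/resource/&lt;dataset-id&gt;.json), but the portal domain and dataset identifier are placeholders, not real endpoints:

```python
import requests

PORTAL = "https://data.example.gov"  # hypothetical open data portal
DATASET_ID = "abcd-1234"             # hypothetical dataset identifier

resp = requests.get(
    f"{PORTAL}/resource/{DATASET_ID}.json",
    params={"$limit": 10},  # SODA query parameter: first ten rows
    timeout=30,
)
resp.raise_for_status()
for row in resp.json():
    print(row)
```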

The Role of Open Data Policy in Unlocking Innovation in Government

The open data movement continues to transition from an emphasis on transparency to measuring the civic and economic impact of open data programs. As part of this transition, governments are realizing the importance of creating a formal policy to define strategic goals, describe the desired benefits, and provide the scope for data publishing efforts over time.  When well executed, open data policies yield a clear set of benefits. These range from spurring slow-moving bureaucracies into action to procuring the necessary funding to sustain open data initiatives beyond a single elected official’s term.

Four Types of Open Data Policies

There are four main types of policy levers currently in use regarding open data: executive orders, non-binding resolutions, internal regulations, and codified laws. Each of these tools has specific advantages and potential limitations.

Executive Orders

The prime example of an open data executive order in action is President Barack Obama’s Open Data Initiative. While this executive order was short – only four paragraphs on two pages – the real policy magic was a mandate-by-reference that required all U.S. federal agencies to comply with a detailed set of time-bound actions. All of these requirements are publicly viewable on a GitHub repository – a free hosting service for open source software development projects – which is revolutionary in and of itself. Detailed discussions on government transparency took place not in closed-door boardrooms, but online for everyone to see, edit, and improve.

Non-Binding Resolutions

A classic example of a non-binding resolution can be found by doing an online search for the resolution of Palo Alto, California. Short and sweet, this town-square-like exercise delivers additional attention to the movement inside and outside of government. The lightweight policy tool also has the benefit of lasting a bit longer than any particular government official – although, given the countless resolutions that come out of any small town, resolutions are only as timeless as people’s memory.

Internal Regulations

The New York State Handbook on Open Data is a great example of internal regulations put to good use. Originating from the state’s Office of Information Technology Services, the handbook is a comprehensive, clear, and authoritative guide on how open data is actually supposed to work. Also available on GitHub, the handbook resembles the federal open data project in many ways.

Codified Laws

The archetypal example of open data law comes from San Francisco.
Interestingly, what started as an “Executive Directive” from Mayor Gavin Newsom later turned into legislation and brought with it the power of stronger department mandates and a significant budget. Once enacted, laws are generally hard to revise. However, in the case of San Francisco, the city council has already revised the law two times in four years.
At the federal government level, the Digital Accountability and Transparency Act, or DATA Act, was introduced in both the U.S. House of Representatives (H.R. 2061) and the U.S. Senate (S. 994) in 2013. The act mandates the standardization and publication of a wide variety of the federal government’s financial reports as open data. Although the House voted to pass the DATA Act, it still awaits a vote in the Senate.

The Path to Government-as-a-Platform

Open data policies are an effective way to motivate action and provide clear guidance for open data programs. But they are not a precondition for public-sector organizations to embrace the government-as-a-platform model. In fact, the first step does not involve technology at all. Instead, it involves government leaders realizing that public data belongs to the people. And, it requires the vision to appreciate this data as a shared resource that only increases in value the more widely it is distributed and re-used for analytics, web and mobile apps, and more.
The consumer web has shown the value of open data networks in spades (think Facebook). Now, it’s government’s turn to create the next web.”

The myth of the keyboard warrior: public participation and 38 Degrees


James Dennis in openDemocracy: “A cursory glance at the comment section of the UK’s leading newspapers suggests that democratic engagement is at an all-time low; we are generation apathetic. In their annual health check, the Audit of Political Engagement, the Hansard Society paint a bleak picture of participation trends in Britain. Only 41% of those surveyed are committed to voting in the next General Election. Moreover, less than 1% of the population is a member of a political party. However, 38 Degrees, the political activist movement, bucks these downward trends. In the four years since their foundation in 2009, 38 Degrees have amassed a membership of 1.8 million individuals – more than three times the entire combined memberships of all of Britain’s political parties.

The organisation is not without its critics, however. Earlier this week, during a debate in the House of Commons on the Care Bill, David T. C. Davies MP cast doubt on the authenticity of the organisation’s ethos, “People. Power. Change”, claiming that:

These people purport to be happy-go-lucky students. They are always on first name terms; Ben and Fred and Rebecca and Sarah and the rest of it. The reality is that it is a hard-nosed left-wing Labour-supporting organisation with links to some very wealthy upper middle-class socialists, despite the pretence that it likes to give out.

Likewise, in a comment piece for The Guardian, Oscar Rickett argued that the form of participation cultivated by 38 Degrees is not beneficial to our civic culture as it encourages fragmented, issue-driven collective action in which “small urges are satisfied with the implication that they are bringing about large change”.
However, given the lack of empirical research undertaken on 38 Degrees, such criticisms are often anecdotal or campaign-specific. So here are just a couple of the significant findings emerging from my ongoing research.

New organisations

38 Degrees bears little resemblance to the organisational models that we’ve become accustomed to. Unlike political parties or traditional pressure groups, 38 Degrees operates on a more level playing field. Members are central to the key decisions that are made before and during a campaign and the staff facilitate these choices. Essentially, the organisation acts as a conduit for its membership, removing the layers of elite-level decision-making that characterised political groups in the twentieth century.
38 Degrees seeks to structure grassroots engagement in two ways. Firstly, the group fuses a vast range of qualitative and quantitative data sources from its membership to guide their campaign decisions and strategy. By using digital media, members are able to express their opinion very quickly on an unprecedented scale. One way in which they do this is through ad-hoc surveys of their members to decide on key strategic decisions, such as their survey regarding the decision to campaign against plans by the NHS to compile a database of medical records for potential use by private firms. In just 24 hours the group had a response from 137,000 of its members, with 93 per cent backing their plans to organise a mass opt-out.
Secondly, the group offers the platform Campaigns By You, which provides members with the technological opportunities to structure and undertake their own campaigns, retaining complete autonomy over the decision-making process. In both cases, albeit to a differing degree, it is the mass of individual participants that direct the group strategy, with 38 Degrees offering the technological capacity to structure this. 38 Degrees assimilates the fragmented, competing individual voices of its membership, and offers cohesive, collective action.
David Karpf proposes that we consider this phenomenon as characteristic of a new type of organisation. These new organisations challenge our traditional understanding of collective action as they are structurally fluid. 38 Degrees relies on central staff to structure the wants and needs of their membership. However, this doesn’t necessarily lead to a regimented hierarchy. Paolo Gerbaudo describes this as ‘soft leadership’, where the central staff act as choreographers, organising and structuring collective action whilst minimising their encroachment on the will of individual members. …
In conclusion, the successes of 38 Degrees, in terms of mobilising public participation, come down to how the organisation maximises the membership’s sense of efficacy, the feeling that each individual member has, or can have, an impact.
By providing influence over the decision-making process, either explicitly or implicitly, members become more than just cheerleaders observing elites from the sidelines; they are active and involved in the planning and execution of public participation.”

Personal Data for the Public Good


Final report on “New Opportunities to Enrich Understanding of Individual and Population Health” of the Health Data Exploration project: “Individuals are tracking a variety of health-related data via a growing number of wearable devices and smartphone apps. More and more data relevant to health are also being captured passively as people communicate with one another on social networks, shop, work, or do any number of activities that leave “digital footprints.”
Almost all of these forms of “personal health data” (PHD) are outside of the mainstream of traditional health care, public health or health research. Medical, behavioral, social and public health research still largely rely on traditional sources of health data such as those collected in clinical trials, sifting through electronic medical records, or conducting periodic surveys.
Self-tracking data can provide better measures of everyday behavior and lifestyle and can fill in gaps in more traditional clinical data collection, giving us a more complete picture of health. With support from the Robert Wood Johnson Foundation, the Health Data Exploration (HDE) project conducted a study to better understand the barriers to using personal health data in research from the individuals who track the data about their own personal health, the companies that market self-tracking devices, apps or services and aggregate and manage that data, and the researchers who might use the data as part of their research.
Perspectives
Through a series of interviews and surveys, we discovered strong interest in contributing and using PHD for research. It should be noted that, because our goal was to access individuals and researchers who are already generating or using digital self-tracking data, there was some bias in our survey findings – participants tended to have more education and higher household incomes than the general population. Our survey also drew slightly more white and Asian participants and more female participants than in the general population.
Individuals were very willing to share their self-tracking data for research, in particular if they knew the data would advance knowledge in fields related to PHD such as public health, health care, computer science and social and behavioral science. Most expressed an explicit desire to have their information shared anonymously, and we discovered a wide range of thoughts and concerns about privacy.
Equally, researchers were generally enthusiastic about the potential for using self-tracking data in their research. Researchers see value in these kinds of data and think these data can answer important research questions. Many consider it to be of equal quality and importance to data from existing high quality clinical or public health data sources.
Companies operating in this space noted that advancing research was a worthy goal but not their primary business concern. Many companies expressed interest in research conducted outside of their company that would validate the utility of their device or application but noted the critical importance of maintaining their customer relationships. A number were open to data sharing with academics but noted the slow pace and administrative burden of working with universities as a challenge.
In addition to this considerable enthusiasm, it seems a new PHD research ecosystem may well be emerging. Forty-six percent of the researchers who participated in the study have already used self-tracking data in their research, and 23 percent of the researchers have already collaborated with application, device, or social media companies.
The Personal Health Data Research Ecosystem
A great deal of experimentation with PHD is taking place. Some individuals are experimenting with personal data stores or sharing their data directly with researchers in a small set of clinical experiments. Some researchers have secured one-off access to unique data sets for analysis. A small number of companies, primarily those with more of a health research focus, are working with others to develop data commons to regularize data sharing with the public and researchers.
SmallStepsLab serves as an intermediary between Fitbit, a data-rich company, and academic researchers via a “preferred status” API held by the company. Researchers pay SmallStepsLab for this access as well as other enhancements that they might want.
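SmallStepsLab's “preferred status” access is not publicly documented, but Fitbit also exposes a standard OAuth-protected Web API through which a consenting participant's data can be read. A minimal sketch of pulling a week of daily step counts (endpoint per Fitbit's time-series URL convention; the token is a placeholder):

```python
import requests

ACCESS_TOKEN = "user-granted-oauth2-token"  # placeholder, obtained via OAuth 2.0

# Time-series endpoint: daily step counts for the last seven days.
resp = requests.get(
    "https://api.fitbit.com/1/user/-/activities/steps/date/today/7d.json",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
for day in resp.json()["activities-steps"]:
    print(day["dateTime"], day["value"])
```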
These promising early examples foreshadow a much larger set of activities with the potential to transform how research is conducted in medicine, public health and the social and behavioral sciences.
Opportunities and Obstacles
There is still work to be done to enhance the potential to generate knowledge out of personal health data:

  • Privacy and Data Ownership: Among individuals surveyed, the dominant condition (57%) for making their PHD available for research was an assurance of privacy for their data, and over 90% of respondents said that it was important that the data be anonymous (a minimal pseudonymization sketch follows this list). Further, while some didn’t care who owned the data they generate, a clear majority wanted to own or at least share ownership of the data with the company that collected it.
  • Informed Consent: Researchers are concerned about the privacy of PHD as well as respecting the rights of those who provide it. For most of our researchers, this came down to a straightforward question of whether there is informed consent. Our research found that current methods of informed consent are challenged by the ways PHD are being used and reused in research. A variety of new approaches to informed consent are being evaluated and this area is ripe for guidance to assure optimal outcomes for all stakeholders.
  • Data Sharing and Access: Among individuals, there is growing interest in sharing personal health data with others, as well as the willingness and opportunity to do so. People now share these data with others with similar medical conditions in online groups like PatientsLikeMe or Crohnology, with the intention to learn as much as possible about mutual health concerns. Looking across our data, we find that individuals’ willingness to share depends on what data is shared, how the data will be used, who will have access to the data and when, what regulations and legal protections are in place, and the level of compensation or benefit (both personal and public).
  • Data Quality: Researchers highlighted concerns about the validity of PHD and lack of standardization of devices. While some of this may be addressed as the consumer health device, apps and services market matures, reaching the optimal outcome for researchers might benefit from strategic engagement of important stakeholder groups.
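As one illustration of the anonymity condition above, here is a minimal pseudonymization sketch, assuming a simple tabular export of self-tracking records. Real de-identification requires far more care (quasi-identifiers, re-identification risk); this shows only the basic pattern of replacing direct identifiers with a salted hash:

```python
import hashlib

SALT = "study-specific-secret"  # held by the data steward, never shared

def pseudonymize(record):
    """Replace direct identifiers with a stable, salted pseudonym."""
    token = hashlib.sha256((SALT + record["user_id"]).encode()).hexdigest()[:12]
    shared = {k: v for k, v in record.items() if k not in ("user_id", "name")}
    shared["participant"] = token  # links a person's records without naming them
    return shared

raw = {"user_id": "u-1007", "name": "Jane Doe",
       "date": "2014-03-01", "steps": 9214}
print(pseudonymize(raw))
```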

We are reaching a tipping point. More and more people are tracking their health, and there is a growing number of tracking apps and devices on the market with many more in development. There is overwhelming enthusiasm from individuals and researchers to use this data to better understand health. To maximize personal data for the public good, we must develop creative solutions that allow individual rights to be respected while providing access to high-quality and relevant PHD for research, that balance open science with intellectual property, and that enable productive and mutually beneficial collaborations between the private sector and the academic research community.”