John Podesta and Dr. John P. Holdren at the White House blog: “…today, delivering on a commitment in the President’s Climate Action Plan, we are launching the Climate Data Initiative, an ambitious new effort bringing together extensive open government data and design competitions with commitments from the private and philanthropic sectors to develop data-driven planning and resilience tools for local communities. This effort will help give communities across America the information and tools they need to plan for current and future climate impacts.
The Climate Data Initiative builds on the success of the Obama Administration’s ongoing efforts to unleash the power of open government data. Since data.gov, the central site to find U.S. government data resources, launched in 2009, the Federal government has released troves of valuable data that were previously hard to access in areas such as health, energy, education, public safety, and global development. Today these data are being used by entrepreneurs, researchers, tech innovators, and others to create countless new applications, tools, services, and businesses.
Data from NOAA, NASA, the U.S. Geological Survey, the Department of Defense, and other Federal agencies will be featured on climate.data.gov, a new section within data.gov that opens for business today. The first batch of climate data being made available will focus on coastal flooding and sea level rise. NOAA and NASA will also be announcing an innovation challenge calling on researchers and developers to create data-driven simulations to help plan for the future and to educate the public about the vulnerability of their own communities to sea level rise and flood events.
These and other Federal efforts will be amplified by a number of ambitious private commitments. For example, Esri, the company that produces the ArcGIS software used by thousands of city and regional planning experts, will be partnering with 12 cities across the country to create free and open “maps and apps” to help state and local governments plan for climate change impacts. Google will donate one petabyte—that’s 1,000 terabytes—of cloud storage for climate data, as well as 50 million hours of high-performance computing with the Google Earth Engine platform. The company is challenging the global innovation community to build a high-resolution global terrain model to help communities build resilience to anticipated climate impacts in decades to come. And the World Bank will release a new field guide for the Open Data for Resilience Initiative, which is working in more than 20 countries to map millions of buildings and urban infrastructure….”
Quantified Health – It’s Just A Phase, Get Over It. Please.
Geoff McCleary at PSFK: “The near-ubiquitous acceptance of smartphones and mobile internet access has ushered in a new wave of connected devices and smart objects that help us compile and track an unprecedented amount of previously unavailable data.
This quantification of self, which used to be the sole domain of fitness fanatics and professional athletes, is now being expanded out and applied to everything from how we drive and interface with our cars, to homes that adapt around us, to our daily interactions with others. But the most exciting application of this approach has to be the quantification of health – from how much time we spend on the couch, to how frequently a symptom flares up, even to how adherent we are with our medications.
But this new phase of quantified health is just that – it’s just a phase. How many steps a patient takes is a meaningless data point, unless the information means something to the patient. How many pills we take isn’t going to tell us if we are getting better.
Over time, we begin to see correlations between some of the data points: on the days a user takes their pill, they average 3,000 more steps. But that still doesn’t tell us what is getting better. We can see that users who get a pill reminder every day refill their prescriptions twice as often as other users. As marketers, that information makes us happy, but does it make the patient any healthier? Can’t we both be happy?
We can pretty the data up with shiny infographics and widgets, but unless there is meaningful context to that data it is just a nicely organized set of data points. So, what will make a difference? What will get us out of the dark ages of quantified health and into the enlightened age of Personalized Health? What will need to change to get me the treatment I need because of who I am – on a genetic level?…
Our history, our future, our uniqueness and our sameness mean nothing if we cannot get this information on demand, in real time. This information has to be available when we need it (and when we don’t) on whatever screen is handy, in whatever setting we are in. Our physicians need access to our information and they need it in the context of how others have dealt with the same situation.
This access can only be enabled by a cloud-based, open health profile. As quantified self gave way to quantified health, quantified health must give way to Qualitative Health. This cloud-based profile of our health past, present and future will need to be both quantified and qualitative. Based not only on numbers and raw data, but on relevance, context and meaning. Based not on a database or an app, but in the cloud, where personal information will be accessible by whomever we designate, our sameness open and shareable with all — with all contributing to the meaning of our data, and physicians interacting in an informed, consistent manner across our entire health being, instead of just the 20 minutes a year when they see us.
That is truly health care, and I cannot wait for it to get here.”
The Open Data/Environmental Justice Connection
Jeffrey Warren for the Wilson Center’s Commons Lab: “… Open data initiatives seem to assume that all data is born in the hallowed halls of government, industry and academia, and that open data is primarily about convincing such institutions to share it with the public.
It is laudable when institutions with important datasets — such as campaign finance, pollution or scientific data — see the benefit of opening them to the public. But why do we assume unilateral control over data production?
The revolution in user-generated content shows the public has a great deal to contribute – and to gain – from the open data movement. Likewise, citizen science projects that solicit submissions or “task completion” from the public rarely invite higher-level participation in research – let alone true collaboration.
This has to change. Data isn’t just something you’re given if you ask nicely, or a kind of community service we perform to support experts. Increasingly, new technologies make it possible for local groups to generate and control data themselves — especially in environmental health. Communities on the front line of pollution’s effects have the best opportunities to monitor it and the most to gain by taking an active role in the research process.
DIY Data
Luckily, an emerging alliance between the maker/Do-It-Yourself (DIY) movement and watchdog groups is starting to challenge the conventional model.
The Smart Citizen project, the Air Quality Egg and a variety of projects in the Public Lab network are recasting members of the general public as actors in the framing of new research questions and designers of a new generation of data tools.
The Riffle, a <$100 water quality sensor built inside of hardware-store pipe, can be left in a creek near an industrial site to collect data around the clock for weeks or months. In the near future, when pollution happens – like the ash spill in North Carolina or the chemical spill in West Virginia – the public will be alerted and able to track its effects without depending on expensive equipment or distant labs.
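To make the around-the-clock logging idea concrete, here is a minimal sketch of the kind of sampling loop such a low-cost logger runs. It is not the Riffle’s actual firmware: the read_conductivity() function, the output file name, and the sampling interval are hypothetical stand-ins for whatever a given build exposes.

```python
# Minimal sketch of an open-hardware water-quality logging loop (illustrative only).
# read_conductivity() is a hypothetical stand-in for a real sensor read.
import csv
import random
import time
from datetime import datetime, timezone

LOG_FILE = "creek_log.csv"       # hypothetical output file
SAMPLE_INTERVAL_S = 15 * 60      # one reading every 15 minutes

def read_conductivity():
    """Placeholder for a real sensor read (e.g., an ADC channel on the logger board)."""
    return round(random.uniform(100.0, 900.0), 1)  # simulated value, microsiemens/cm

def log_forever():
    """Append a timestamped reading to the CSV file at a fixed interval."""
    with open(LOG_FILE, "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            timestamp = datetime.now(timezone.utc).isoformat()
            writer.writerow([timestamp, read_conductivity()])
            f.flush()  # keep already-collected data safe if power is lost
            time.sleep(SAMPLE_INTERVAL_S)

if __name__ == "__main__":
    log_forever()
```

The point of the sketch is simply that continuous, timestamped sampling is cheap to build and leaves a data trail a community can later analyze or share.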
This emerging movement is recasting environmental issues not as intractably large problems, but as up-close-and-personal health issues — just what environmental justice (EJ) groups have been arguing for years. The difference is that these new initiatives hybridize EJ community organizers and the technology hackers of the open hardware movement. Just as the Homebrew Computer Club’s tinkering with early prototypes led to the personal computer, a new generation of tinkerers sees that their affordable, accessible techniques can make an immediate difference in investigating lead in their backyard soil, nitrates in their tap water and particulate pollution in the air they breathe.
These practitioners see that environmental data collection is not a distant problem in a developing country, but an issue that anyone in a major metropolitan area, or an area affected by oil and gas extraction, faces on a daily basis. Though underserved communities are often disproportionately affected, these threats often transcend socioeconomic boundaries…”
The myth of the keyboard warrior: public participation and 38 Degrees
The organisation is not without its critics, however. Earlier this week, during a debate in the House of Commons on the Care Bill, David T. C. Davies MP cast doubt on the authenticity of the organisation’s ethos, “People. Power. Change”, claiming that:
These people purport to be happy-go-lucky students. They are always on first name terms; Ben and Fred and Rebecca and Sarah and the rest of it. The reality is that it is a hard-nosed left-wing Labour-supporting organisation with links to some very wealthy upper middle-class socialists, despite the pretence that it likes to give out.
Likewise, in a comment piece for The Guardian, Oscar Rickett argued that the form of participation cultivated by 38 Degrees is not beneficial to our civic culture as it encourages fragmented, issue-driven collective action in which “small urges are satisfied with the implication that they are bringing about large change”.
However, given the lack of empirical research undertaken on 38 Degrees, such criticisms are often anecdotal or campaign-specific. So here are just a couple of the significant findings emerging from my ongoing research.
New organisations
38 Degrees bears little resemblance to the organisational models that we’ve become accustomed to. Unlike political parties or traditional pressure groups, 38 Degrees operates on a more level playing field. Members are central to the key decisions that are made before and during a campaign and the staff facilitate these choices. Essentially, the organisation acts as a conduit for its membership, removing the layers of elite-level decision-making that characterised political groups in the twentieth century.
38 Degrees seeks to structure grassroots engagement in two ways. Firstly, the group fuses a vast range of qualitative and quantitative data sources from its membership to guide its campaign decisions and strategy. By using digital media, members are able to express their opinion very quickly on an unprecedented scale. One way in which they do this is through ad hoc surveys of members on key strategic decisions, such as the survey regarding the decision to campaign against plans by the NHS to compile a database of medical records for potential use by private firms. In just 24 hours the group had a response from 137,000 of its members, with 93 per cent backing its plans to organise a mass opt-out.
Secondly, the group offers the platform Campaigns By You, which provides members with the technological opportunities to structure and undertake their own campaigns, retaining complete autonomy over the decision-making process. In both cases, albeit to a differing degree, it is the mass of individual participants that directs the group’s strategy, with 38 Degrees offering the technological capacity to structure this. 38 Degrees assimilates the fragmented, competing individual voices of its membership, and offers cohesive, collective action.
David Karpf proposes that we consider this phenomenon as characteristic of a new type of organisation. These new organisations challenge our traditional understanding of collective action as they are structurally fluid. 38 Degrees relies on central staff to structure the wants and needs of its membership. However, this doesn’t necessarily lead to a regimented hierarchy. Paolo Gerbaudo describes this as ‘soft leadership’, where the central staff act as choreographers, organising and structuring collective action whilst minimising their encroachment on the will of individual members. …
In conclusion, the successes of 38 Degrees, in terms of mobilising public participation, come down to how the organisation maximises the membership’s sense of efficacy, the feeling that each individual member has, or can have, an impact.
By providing influence over the decision-making process, either explicitly or implicitly, members become more than just cheerleaders observing elites from the sidelines; they are active and involved in the planning and execution of public participation.”
Personal Data for the Public Good
Final report of the Health Data Exploration project, “New Opportunities to Enrich Understanding of Individual and Population Health”: “Individuals are tracking a variety of health-related data via a growing number of wearable devices and smartphone apps. More and more data relevant to health are also being captured passively as people communicate with one another on social networks, shop, work, or do any number of activities that leave “digital footprints.”
Almost all of these forms of “personal health data” (PHD) are outside the mainstream of traditional health care, public health or health research. Medical, behavioral, social and public health research still largely relies on traditional sources of health data such as data collected in clinical trials, extracted from electronic medical records, or gathered through periodic surveys.
Self-tracking data can provide better measures of everyday behavior and lifestyle and can fill in gaps in more traditional clinical data collection, giving us a more complete picture of health. With support from the Robert Wood Johnson Foundation, the Health Data Exploration (HDE) project conducted a study to better understand the barriers to using personal health data in research, from the perspectives of the individuals who track data about their own health; the companies that market self-tracking devices, apps or services and aggregate and manage that data; and the researchers who might use the data as part of their research.
Perspectives
Through a series of interviews and surveys, we discovered strong interest in contributing and using PHD for research. It should be noted that, because our goal was to access individuals and researchers who are already generating or using digital self-tracking data, there was some bias in our survey findings—participants tended to have more education and higher household incomes than the general population. Our survey also drew slightly more white and Asian participants and more female participants than in the general population.
Individuals were very willing to share their self-tracking data for research, in particular if they knew the data would advance knowledge in fields related to PHD such as public health, health care, computer science and social and behavioral science. Most expressed an explicit desire to have their information shared anonymously, and we discovered a wide range of thoughts and concerns regarding privacy.
Equally, researchers were generally enthusiastic about the potential for using self-tracking data in their research. Researchers see value in these kinds of data and think these data can answer important research questions. Many consider them to be of equal quality and importance to data from existing high-quality clinical or public health data sources.
Companies operating in this space noted that advancing research was a worthy goal but not their primary business concern. Many companies expressed interest in research conducted outside of their company that would validate the utility of their device or application but noted the critical importance of maintaining their customer relationships. A number were open to data sharing with academics but noted the slow pace and administrative burden of working with universities as a challenge.
In addition to this considerable enthusiasm, it seems a new PHD research ecosystem may well be emerging. Forty-six percent of the researchers who participated in the study have already used self-tracking data in their research, and 23 percent of the researchers have already collaborated with application, device, or social media companies.
The Personal Health Data Research Ecosystem
A great deal of experimentation with PHD is taking place. Some individuals are experimenting with personal data stores or sharing their data directly with researchers in a small set of clinical experiments. Some researchers have secured one-off access to unique data sets for analysis. A small number of companies, primarily those with more of a health research focus, are working with others to develop data commons to regularize data sharing with the public and researchers.
SmallStepsLab serves as an intermediary between Fitbit, a data-rich company, and academic researchers via a “preferred status” API held by the company. Researchers pay SmallStepsLab for this access as well as other enhancements that they might want.
These promising early examples foreshadow a much larger set of activities with the potential to transform how research is conducted in medicine, public health and the social and behavioral sciences.
Opportunities and Obstacles
There is still work to be done to enhance the potential to generate knowledge out of personal health data:
- Privacy and Data Ownership: Among individuals surveyed, the dominant condition (57%) for making their PHD available for research was an assurance of privacy for their data, and over 90% of respondents said that it was important that the data be anonymous. Further, while some didn’t care who owned the data they generate, a clear majority wanted to own or at least share ownership of the data with the company that collected it.
- Informed Consent: Researchers are concerned about the privacy of PHD as well as respecting the rights of those who provide it. For most of our researchers, this came down to a straightforward question of whether there is informed consent. Our research found that current methods of informed consent are challenged by the ways PHD are being used and reused in research. A variety of new approaches to informed consent are being evaluated, and this area is ripe for guidance to assure optimal outcomes for all stakeholders.
- Data Sharing and Access: Among individuals, there is growing interest in sharing personal health data with others, as well as growing willingness and opportunity to do so. People now share these data with others with similar medical conditions in online groups like PatientsLikeMe or Crohnology, with the intention of learning as much as possible about mutual health concerns. Looking across our data, we find that individuals’ willingness to share depends on what data are shared, how the data will be used, who will have access to the data and when, what regulations and legal protections are in place, and the level of compensation or benefit (both personal and public).
- Data Quality: Researchers highlighted concerns about the validity of PHD and the lack of standardization of devices. While some of this may be addressed as the market for consumer health devices, apps and services matures, reaching the optimal outcome for researchers might benefit from strategic engagement of important stakeholder groups.
We are reaching a tipping point. More and more people are tracking their health, and there is a growing number of tracking apps and devices on the market with many more in development. There is overwhelming enthusiasm from individuals and researchers to use this data to better understand health. To maximize personal data for the public good, we must develop creative solutions that allow individual rights to be respected while providing access to high-quality and relevant PHD for research, that balance open science with intellectual property, and that enable productive and mutually beneficial collaborations between the private sector and the academic research community.”
Expanding Opportunity through Open Educational Resources
Hal Plotkin and Colleen Chien at the White House: “Using advanced technology to dramatically expand the quality and reach of education has long been a key priority for the Obama Administration.
In December 2013, the President’s Council of Advisors on Science and Technology (PCAST) issued a report exploring the potential of Massive Open Online Courses (MOOCs) to expand access to higher education opportunities. Last month, the President announced a $2B down payment and another $750M in private-sector commitments to deliver on his ConnectED initiative, which will connect 99% of American K-12 students to broadband by 2017 at no cost to American taxpayers.
This week, we are happy to be joining with educators, students, and technologists worldwide to recognize and celebrate Open Education Week.
Open Educational Resources (“OER”) are educational resources that are released with copyright licenses allowing for their free use, continuous improvement, and modification by others. The world is moving fast, and OER enables educators and students to access, customize, and remix high-quality course materials reflecting the latest understanding of the world and materials that incorporate state-of-the-art teaching methods – adding their own insights along the way. OER is not a silver-bullet solution to the many challenges that teachers, students and schools face. But it is a tool increasingly being used, for example by players like edX and the Khan Academy, to improve learning outcomes and create scalable platforms for sharing educational resources that reach millions of students worldwide.
Launched at MIT in 2001, OER became a global movement in 2007 when thousands of educators around the globe endorsed the Cape Town Declaration on Open Educational Resources. Another major milestone came in 2011, when Secretary of Education Arne Duncan and then-Secretary of Labor Hilda Solis unveiled the four-year, $2B Trade Adjustment Assistance Community College and Career Training Grant Program (TAACCCT). It was the first Federal program to leverage OER to support the development of a new generation of affordable, post-secondary educational programs that can be completed in two years or less to prepare students for careers in emerging and expanding industries….
Building on this record of success, OSTP and the U.S. Agency for International Development (USAID) are exploring an effort to inspire and empower university students through multidisciplinary OER focused on one of the USAID Grand Challenges, such as securing clean water, saving lives at birth, or improving green agriculture. This effort promises to be a stepping stone towards leveraging OER to help solve other grand challenges such as the NAE Grand Challenges in Engineering or Grand Challenges in Global Health.
This is great progress, but there is more work to do. We look forward to keeping the community updated right here. To see the winning videos from the U.S. Department of Education’s “Why Open Education Matters” Video Contest, click here.”
Open Data is a Civil Right
Yo Yoshida, Founder & CEO, Appallicious in GovTech: “As Americans, we expect a certain standardization of basic services, infrastructure and laws — no matter where we call home. When you live in Seattle and take a business trip to New York, the electric outlet in the hotel you’re staying in is always compatible with your computer charger. When you drive from San Francisco to Los Angeles, I-5 doesn’t all-of-a-sudden turn into a dirt country road because some cities won’t cover maintenance costs. If you take a 10-minute bus ride from Boston to the city of Cambridge, you know the money in your wallet is still considered legal tender.
Procurement and Civic Innovation
Derek Eder: “Have you ever used a government website and had a not-so-awesome experience? In our slick 2014 world of Google, Twitter and Facebook, why does government tech feel like it’s stuck in the 1990s?
The culprit: bad technology procurement.
Procurement is the procedure a government follows to buy something: letting suppliers know what it wants, asking for proposals, restricting what kinds of proposals it will consider, limiting what kinds of firms it will do business with, and deciding whether it got what it paid for.
The City of Chicago buys technology about the same way that it buys health insurance, a bridge, or anything in between. And that’s the problem.
Chicago’s government has a long history of corruption, nepotism and patronage. After each outrage, new rules are piled upon existing rules to prevent that crisis from happening again. Unfortunately, this accumulation of rules does not just protect against the bad guys, it also forms a huge barrier to entry for technology innovators.
So, the firms that end up building our city’s digital public services tend to be good at picking their way through the barriers of the procurement process, not at building good technology. Instead of making government tech contracting fair and competitive, procurement has unfortunately had the opposite effect.
So where does this leave us? Despite Chicago’s flourishing startup scene, and despite having one of the country’s largest communities of civic technologists, the Windy City’s digital public services are still terribly designed and far too expensive for the taxpayer.
The Technology Gap
The best way to see the gap between Chicago’s volunteer civic tech community and the technology the City pays for is to look at an entire class of civic apps that are essentially facelifts on existing government websites….
You may have noticed a difference in quality and usability between these three civic apps and their official government counterparts.
Now consider this: all of the government sites took months to build and cost hundreds of thousands of dollars. Was My Car Towed, 2nd City Zoning and CrimeAround.us were all built by one to two people in a matter of days, for no money.
Think about that for a second. Consider how much the City is overpaying for websites its citizens can barely use. And imagine how much better our digital city services would be if the City worked with the very same tech startups they’re trying to nurture.
Why do these civic apps exist? Well, with the City of Chicago releasing hundreds of high-quality datasets on its data portal over the past three years (for which it should be commended), a group of highly passionate and skilled technologists have started using their skills to develop these apps and many others.
It’s mostly for fun, learning, and a sense of civic duty, but it demonstrates there’s no shortage of highly skilled developers who are interested in using technology to make their city a better place to live in…
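For a sense of how such apps get started, Chicago’s data portal exposes each dataset through a simple JSON endpoint (the Socrata SODA API). A minimal sketch, using a hypothetical dataset identifier in place of a real one listed on data.cityofchicago.org, might look like this:

```python
# Minimal sketch of pulling rows from Chicago's open data portal via the Socrata SODA API.
# DATASET_ID is a hypothetical placeholder; real identifiers are listed on the portal.
import requests

DATASET_ID = "xxxx-xxxx"  # hypothetical: substitute a real dataset identifier
URL = f"https://data.cityofchicago.org/resource/{DATASET_ID}.json"

def fetch_records(limit=100):
    """Return up to `limit` rows of the dataset as a list of dicts."""
    response = requests.get(URL, params={"$limit": limit}, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    for row in fetch_records(limit=5):
        print(row)
```

Civic apps of the kind described above are, in essence, friendlier interfaces built on top of queries like this one.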
Two years ago, in the Fall of 2011, I learned about procurement in Chicago for the first time. An awesome group of developers, designers and I had just built ChicagoLobbyists.org – our very first civic app – for the City of Chicago’s first open data hackathon….
Since then, the City has often cited ChicagoLobbyists.org as evidence of the innovation-sparking potential of open data.
Shortly after our site launched, a Request For Proposals, or RFP, was issued by the City for an ‘Online Lobbyist Disclosure System.’
Hey! We just built one of those! Sure, we would need to make some updates to it—adding a way for lobbyists to log in and submit their info—but we had a solid start. So, our scrappy group of tech volunteers decided to respond to the RFP.
After reading all 152 pages of the document, we realized we had no chance of getting the bid. It was impossible for the ChicagoLobbyists.org group to meet the legal requirements (as it would have been for any small software shop):
- audited financial statements for the past 3 years
- an economic disclosure statement (EDS) and affidavit
- proof of $500k workers’ compensation and employers’ liability coverage
- proof of $2 million in professional liability insurance”
Making digital government better
A McKinsey Insight interview with Mike Bracken (UK): “When it comes to the digital world, governments have traditionally placed political, policy, and system needs ahead of the people who require services. Mike Bracken, the executive director of the United Kingdom’s Government Digital Service, is attempting to reverse that paradigm by empowering citizens—and, in the process, improve the delivery of services and save money. In this video interview, Bracken discusses the philosophy behind the digital transformation of public services in the United Kingdom, some early successes, and next steps.
Interview transcript
Putting users first
Government around the world is pretty good at thinking about its own needs. Government puts its own needs first—they often put their political needs followed by the policy needs. The actual machine of government comes second. The third need then generally becomes the system needs, so the IT or whatever system’s driving it. And then out of those four, the user comes a poor fourth, really.
And we’ve inverted that. So let me give you an example. At the moment, if you want to know about tax in the UK, you’re probably going to know that Her Majesty’s Revenue and Customs is a part of government that deals with tax. You’re probably going to know that because you pay tax, right?
But why should you have to know that? Because, really, it’s OK to know that, for that one—but we’ve got 300 agencies, more than that; we’ve got 24 parts of government. If you want to know about, say, gangs, is that a health issue or is that a local issue? Is it a police issue? Is it a social issue, an education issue? Well, actually it’s all of those issues. But you shouldn’t have to know how government is constructed to know what each bit of government is doing about an esoteric issue like gangs.
What we’ve done with gov.uk, and what we’re doing with our transactions, is to make them consistent at the point of user need. Because there’s only one real user need of government digitally, and that’s to recognize that at the point of need, users need to deal with the government. Not a department name or an agency name, they’re dealing with the government. And when they do that, they need it to be consistent, and they need it to be easy to find. Ninety-five percent of our journeys digitally start with a search.
And so our elegantly constructed and expensively constructed front doors are often completely routed around. We’ve got to recognize that and construct our digital services based on user needs….”
Doctors’ #1 Source for Healthcare Information: Wikipedia
Generally, more people turn to Wikipedia for rare diseases than common conditions. The top five conditions looked up on the site over the past year were: tuberculosis, Crohn’s disease, pneumonia, multiple sclerosis, and diabetes. Patients tend to use Wikipedia as a “starting point for their online self education,” the report says. It also found a “direct correlation between Wikipedia page visits and prescription volumes.”
We already knew that more and more people were turning to the Internet in general and Wikipedia specifically for health information, and we could hardly stop them if we tried.
Being crowd-sourced, the information may well be neutral, but is it accurate? Knowing that doctors, too, are using these resources raises old concerns about the quality of information that comes up when you type your condition into Google.
But doctors are aware of this, and an effort called WikiProject Medicine is dedicated to improving the quality of medical information on Wikipedia. The IMS report looked at changes to five articles—diabetes, multiple sclerosis, rheumatoid arthritis, breast cancer and prostate cancer—and found them to be in a state of constant flux. Those articles were changed, on average, between 16 and 46 times a month. But one of the major contributors to those articles was Dr. James Heilman, the founder of WikiProject Medicine’s Medicine Translation task force.
“This task force’s goal is getting 200 medical articles to a good or featured status (only 0.1 percent of articles on Wikipedia have this status), simplifying the English and then translating this content to as many languages as possible,” the report says. “The aim is to improve the quality of the most read medical articles on Wikipedia and ensure that this quality will reach non-English speakers.”…”