HHS releases new data and tools to increase transparency on hospital utilization and other trends


Press release: “With more than 2,000 entrepreneurs, investors, data scientists, researchers, policy experts, government employees and more in attendance, the Department of Health and Human Services (HHS) is releasing new data and launching new initiatives at the annual Health Datapalooza conference in Washington, D.C.
Today, the Centers for Medicare & Medicaid Services (CMS) is releasing its first annual update to the Medicare hospital charge data, or information comparing the average amount a hospital bills for services that may be provided in connection with a similar inpatient stay or outpatient visit. CMS is also releasing a suite of other data products and tools aimed at increasing transparency about Medicare payments. The data trove on CMS’s website now includes inpatient and outpatient hospital charge data for 2012, and new interactive dashboards for the CMS Chronic Conditions Data Warehouse and geographic variation data. Also today, the Food and Drug Administration (FDA) will launch a new open data initiative. And before the end of the conference, the Office of the National Coordinator for Health Information Technology (ONC) will announce the winners of two data challenges.
“The release of these data sets furthers the administration’s efforts to increase transparency and support data-driven decision making which is essential for health care transformation,” said HHS Secretary Kathleen Sebelius.
“These public data resources provide a better understanding of Medicare utilization, the burden of chronic conditions among beneficiaries and the implications for our health care system, and how this varies by where beneficiaries are located,” said Bryan Sivak, HHS chief technology officer. “This information can be used to improve care coordination and health outcomes for Medicare beneficiaries nationwide, and we are looking forward to seeing what the community will do with these releases. Additionally, the openFDA initiative being launched today will for the first time enable a new generation of consumer-facing and research applications to embed relevant and timely data in machine-readable, API-based formats.”
2012 Inpatient and Outpatient Hospital Charge Data
The data posted today on the CMS website provide the first annual update of the hospital inpatient and outpatient data released by the agency last spring. The data include information comparing the average charges for services that may be provided in connection with the 100 most common Medicare inpatient stays at over 3,000 hospitals in all 50 states and Washington, D.C. Hospitals determine what they will charge for items and services provided to patients, and these “charges” are the amount the hospital generally bills for those items or services.
With two years of data now available, researchers can begin to look at trends in hospital charges. For example, average charges for medical back problems increased nine percent from $23,000 to $25,000, but the total number of discharges decreased by nearly 7,000 from 2011 to 2012.
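As a rough check on the rounded figures quoted here: ($25,000 - $23,000) / $23,000 ≈ 8.7 percent, consistent with the roughly nine percent increase reported.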
In April, ONC launched a challenge – the Code-a-Palooza challenge – calling on developers to create tools that will help patients use the Medicare data to make health care choices. Fifty-six innovators submitted proposals and 10 finalists are presenting their applications during Datapalooza. The winning products will be announced before the end of the conference.
Chronic Conditions Warehouse and Dashboard
CMS recently released new and updated information on chronic conditions among Medicare fee-for-service beneficiaries, including:

  • Geographic data summarized to national, state, county, and hospital referral region levels for the years 2008-2012;
  • Data for examining disparities among specific Medicare populations, such as beneficiaries with disabilities, dual-eligible beneficiaries, and racial/ethnic groups;
  • Data on prevalence, utilization of select Medicare services, and Medicare spending;
  • Interactive dashboards that provide customizable information about Medicare beneficiaries with chronic conditions at state, county, and hospital referral region levels for 2012; and
  • Chartbooks and maps.

These public data resources support the HHS Initiative on Multiple Chronic Conditions by providing researchers and policymakers a better understanding of the burden of chronic conditions among beneficiaries and the implications for our health care system.
Geographic Variation Dashboard
The Geographic Variation Dashboards present Medicare fee-for-service per capita spending at the state and county levels in interactive formats. CMS calculated the spending figures in these dashboards using standardized dollars that remove the effects of the geographic adjustments that Medicare makes for many of its payment rates. The dashboards include total standardized per capita spending, as well as standardized per capita spending by type of service. Users can select the indicator and year they want to display. Users can also compare data for a given state or county to the national average. All of the information presented in the dashboards is also available for download from the Geographic Variation Public Use File.
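As an illustration of the kind of comparison the dashboards and the downloadable public use file support, a brief sketch follows; the file name and column names are hypothetical placeholders rather than the actual CMS schema.

```python
import pandas as pd

# Hypothetical file and column names; substitute the actual layout of the
# Geographic Variation Public Use File published by CMS.
df = pd.read_csv("geo_variation_puf_2012.csv")

# National benchmark: total standardized per capita spending.
national_avg = df.loc[df["level"] == "National", "std_per_capita_total"].iloc[0]

# Pick one county and compare it to the national average.
county = df[(df["level"] == "County") & (df["county_name"] == "Cook County, IL")]
pct_diff = 100 * (county["std_per_capita_total"].iloc[0] - national_avg) / national_avg

print(f"Standardized per capita spending vs. national average: {pct_diff:+.1f}%")
```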
Research Cohort Estimate Tool
CMS also released a new tool that will help researchers and other stakeholders estimate the number of Medicare beneficiaries with certain demographic profiles or health conditions. This tool can assist a variety of stakeholders interested in specific figures on Medicare enrollment. Researchers can also use this tool to estimate the size of their proposed research cohort and the cost of requesting CMS data to support their study.
Digital Privacy Notice Challenge
ONC, with the HHS Office for Civil Rights, will award the winner of the Digital Privacy Notice Challenge during the conference. The winning products will help consumers get notices of privacy practices from their health care providers or health plans directly in their personal health records or from their providers’ patient portals.
OpenFDA
The FDA’s new initiative, openFDA, is designed to facilitate easier access to large, important public health datasets collected by the agency. OpenFDA will make FDA’s publicly available data accessible in a structured, computer-readable format that will make it possible for technology specialists, such as mobile application creators, web developers, data visualization artists and researchers, to quickly search, query, or pull massive amounts of information on an as-needed basis. The initiative is the result of extensive research to identify FDA’s publicly available datasets that are often in demand, but traditionally difficult to use. Based on this research, openFDA is beginning with a pilot program involving millions of reports of drug adverse events and medication errors submitted to the FDA from 2004 to 2013. The pilot will later be expanded to include the FDA’s databases on product recalls and product labeling.
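For developers who want to explore the pilot, here is a minimal sketch of a query against the drug adverse event endpoint documented at open.fda.gov; the search fields shown follow that documentation, but treat them as illustrative and check the current API reference before relying on them.

```python
import requests

# Query the openFDA drug adverse event endpoint (pilot dataset, 2004-2013).
BASE = "https://api.fda.gov/drug/event.json"
params = {
    # Count reports received in the pilot window that mention a given drug.
    "search": 'patient.drug.medicinalproduct:"ASPIRIN" AND receivedate:[20040101 TO 20131231]',
    "limit": 3,  # return the first three matching reports
}

resp = requests.get(BASE, params=params)
resp.raise_for_status()
payload = resp.json()

print("Matching reports:", payload["meta"]["results"]["total"])
for report in payload["results"]:
    print(report.get("receivedate"), report["patient"]["drug"][0].get("medicinalproduct"))
```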
For more information about CMS data products, please visit http://www.cms.gov/Research-Statistics-Data-and-Systems/Research-Statistics-Data-and-Systems.html.
For more information about today’s FDA announcement visit: http://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/UCM399335 or http://open.fda.gov/

Estonian plan for 'data embassies' overseas to back up government databases


Graeme Burton in Computing: “Estonia is planning to open “data embassies” overseas to back up government databases and to operate government “in the cloud”.
The aim is partly to improve efficiency, but it is driven largely by fear of invasion and occupation, Jaan Priisalu, the director general of the Estonian Information System Authority, told Sky News.
He said: “We are planning to actually operate our government in the cloud. It’s clear also how it helps to protect the country, the territory. Usually when you are the military planner and you are planning the occupation of the territory, then one of the rules is suppress the existing institutions.
“And if you are not able to do it, it means that this political price of occupying the country will simply rise for planners.”
Part of the rationale for the plan, he continued, was fear of attack from Russia in particular, which has been heightened following the occupation of Crimea, formerly in Ukraine.
“It’s quite clear that you can have problems with your neighbours. And our biggest neighbour is Russia, and nowadays it’s quite aggressive. This is clear.”
The plan is to back up critical government databases outside of Estonia so that affairs of state can be conducted in the cloud, even if the country is invaded. It would also have the benefit of keeping government information out of invaders’ hands – provided it can keep its government cloud secure.
According to Sky News, the UK is already in advanced talks about hosting the Estonian government databases, which may make it the first of Estonia’s data embassies.
Having wrested independence from the Soviet Union in 1991, Estonia has experienced frequent tension with its much bigger neighbour. In April 2007, for example, after the relocation of the “Bronze Soldier of Tallinn” from a square in the centre of the capital to a military cemetery, and the exhumation of the soldiers buried there, the country was subject to a prolonged cyber-attack traced to Russia.
Russian hacker “Sp0Raw” said that the most efficient of the online attacks on Estonia could not have been carried out without the approval of Russian authorities and added that the hackers seemed to act under “recommendations” from parties in government. However, he dismissed Estonia’s claims that the Russian government was directly involved in the attacks as “empty words, not supported by technical data”.
Mike Witt, deputy director of the US Computer Emergency Readiness Team (US-CERT), suggested that the distributed denial-of-service (DDoS) attacks, while crippling to the Estonian government at the time, were not significant in scale from a technical standpoint. However, the Estonian government was forced to shut down many of its online operations in response.
At the same time, the Estonian government has been accused of implementing anti-Russian laws and discriminating against its large ethnic Russian population.
Last week, the Estonian government unveiled a plan to allow anyone in the world to apply for “digital citizenship” of the country, enabling them to use Estonian online services, open bank accounts, and start companies without having to physically reside in the country.”

How open data can help shape the way we analyse electoral behaviour


Harvey Lewis (Deloitte), Ulrich Atz, Gianfranco Cecconi, Tom Heath (ODI) in The Guardian: Even after the local council elections in England and Northern Ireland on 22 May, which coincided with polling for the European Parliament, the next 12 months remain a busy time for the democratic process in the UK.
In September, the people of Scotland make their choice in a referendum on the future of the Union. Finally, the first fixed-term parliament in Westminster comes to an end with a general election in all areas of Great Britain and Northern Ireland in May 2015.
To ensure that as many people as possible are eligible and able to vote, the government is launching an ambitious programme of Individual Electoral Registration (IER) this summer. This will mean that the traditional, paper-based approach to household registration will shift to a tailored and largely digital process more in keeping with the data-driven demands of the twenty-first century.
Under IER, citizens will need to provide ‘identifying information’, such as date of birth or national insurance number, when applying to register.

Ballots: stuck in the past?

However, despite the government’s attempts through IER to improve the veracity of information captured prior to ballots being posted, little has changed in terms of the vision for capturing, distributing and analysing digital data from election day itself.

Indeed, paper is still the chosen medium for data collection.
Digitising elections is fraught with difficulty, though. In the US, for example, the introduction of new voting machines created much controversy even though they are capable of providing ‘near-perfect’ ballot data.
The UK’s democratic process is not completely blind, though. Numerous opinion surveys are conducted both before and after polling, including the long-running British Election Study, to understand the shifting attitudes of a representative cross-section of the electorate.
But if the government does not retain in sufficient geographic detail digital information on the number of people who vote, then how can it learn what is necessary to reverse the long-running decline in turnout?

The effects of lack of data

To add to the debate around democratic engagement, a joint research team of data scientists from Deloitte and the Open Data Institute (ODI) has been attempting to understand what makes voters tick.
Our research has been hampered by a significant lack of relevant data describing voter behaviour at electoral ward level, as well as difficulties in matching what little data is available to other open data sources, such as demographic data from the 2011 Census.
Even though individual ballot papers are collected and verified for counting the number of votes per candidate – the primary aim of elections, after all – the only recent elections for which aggregate turnout statistics have been published at ward level are the 2012 local council elections in England and Wales. In these elections, approximately 3,000 wards from a total of over 8,000 voted.
Data published by the Electoral Commission for the 2013 local council elections in England and Wales purports to be at ward level but is, in fact, for ‘county electoral divisions’, as explained by the Office for National Statistics.
Moreover, important factors related to the accessibility of polling stations – such as the distance from main population centres – could not be assessed because the location of polling stations remains the responsibility of individual local authorities – and only eight of these have so far published their data as open data.
Given these fundamental limitations, drawing any robust conclusions is difficult. Nevertheless, our research shows the potential for forecasting electoral turnout with relatively few census variables, the most significant of which are age and the size of the electorate in each ward.
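By way of illustration only (not the team’s actual specification), a simple ward-level model along these lines might look like the following sketch; the data file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical ward-level dataset with turnout and two census-derived variables.
wards = pd.read_csv("ward_turnout_2012.csv")  # columns: turnout_pct, median_age, electorate_size

# Ordinary least squares: turnout as a function of age and electorate size.
X = sm.add_constant(wards[["median_age", "electorate_size"]])
model = sm.OLS(wards["turnout_pct"], X).fit()

# The summary shows how much of the variation in turnout the two variables explain.
print(model.summary())
```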

What role can open data play?

The limited results described above provide a tantalising glimpse into a possible future scenario: where open data provides a deeper and more granular understanding of electoral behaviour.
On the back of more sophisticated analyses, policies for improving democratic engagement – particularly among young people – have the potential to become focused and evidence-driven.
And, although the data captured on election day will always serve primarily to elect the public’s preferred candidate, an important secondary consideration is aggregating and publishing data that can be used more widely.
This may have been prohibitively expensive or too complex in the past but as storage and processing costs continue to fall, and the appetite for such knowledge grows, there is a compelling business case.
The benefits of this future scenario potentially include:

  • tailoring awareness and marketing campaigns to wards and other segments of the electorate most likely to respond positively and subsequently turn out to vote
  • increasing the efficiency with which European, general and local elections are held in the UK
  • improving transparency around the electoral process and stimulating increased democratic engagement
  • enhancing links to the Government’s other significant data collection activities, including the Census.

Achieving these benefits requires commitment to electoral data being collected and published in a systematic fashion at least at ward level. This would link work currently undertaken by the Electoral Commission, the ONS, Plymouth University’s Election Centre, the British Election Study and the more than 400 local authorities across the UK.”

How to treat government like an open source project


Ben Balter in OpenSource.com: “Open government is great. At least, it was a few election cycles ago. FOIA requests, open data, seeing how your government works—it’s arguably brought light to a lot of not-so-great practices, and in many cases, has spurred citizen-centric innovation not otherwise imagined before the information’s release.
It used to be that sharing information was really, really hard. Open government wasn’t even a possibility a few hundred years ago. Throughout the history of communication tools—be it the printing press, fax machine, or floppy disk—new tools have generally done three things: lowered the cost to transmit information, increased who that information could be made available to, and increased how quickly that information could be distributed. But printing presses and fax machines have two limitations: they are one-way and asynchronous. They let you more easily request, and eventually see, how the sausage was made, but they don’t let you actually take part in the sausage-making. You may be able to see what’s wrong, but you don’t have the chance to make it better. By the time you find out, it’s already too late.
As technology allows us to communicate with greater frequency and greater fidelity, we have the chance to make our government not only transparent, but truly collaborative.

So, how do we encourage policy makers and bureaucrats to move from open government to collaborative government, to learn open source’s lessons about openness and collaboration at scale?
For one, we geeks can help to create a culture of transparency and openness within government by driving up the demand side of the equation. Be vocal, demand data, expect to see process, and once released, help build lightweight apps. Show potential change agents in government that their efforts will be rewarded.
Second, it’s a matter of tooling. We’ve got great tools out there—things like Git that can track who made what change when, and open standards like CSV or JSON that don’t require proprietary software—but by and large they’re a foreign concept in government, at least among those empowered to make change. Command-line interfaces with black backgrounds and green text can be intimidating to government bureaucrats used to desktop publishing tools. Make it easier for government to do the right thing and choose open standards over proprietary tooling.
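As a small, purely illustrative sketch of what choosing open standards looks like in practice, the same records can be written as both CSV and JSON with nothing beyond the Python standard library; the dataset and file names here are arbitrary.

```python
import csv
import json

# A tiny dataset that any spreadsheet, script, or web app can consume.
permits = [
    {"permit_id": "2014-001", "type": "sidewalk cafe", "status": "approved"},
    {"permit_id": "2014-002", "type": "street fair", "status": "pending"},
]

# CSV: open, tabular, and diff-friendly in version control such as Git.
with open("permits.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["permit_id", "type", "status"])
    writer.writeheader()
    writer.writerows(permits)

# JSON: open, machine-readable, and easy for web developers to build on.
with open("permits.json", "w") as f:
    json.dump(permits, f, indent=2)
```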
Last, be a good open source ambassador. Help your home city or state get involved with open source. Encourage them to take their first step (be it consuming open source, publishing, or collaborating with the public) and teach them what it means to do things in the open. And when they do push code outside the firewall, above all, be supportive. We’re in this together.
As technology makes it easier to work together, geeks can help make our government not just open, but in fact collaborative. Government is the world’s largest and longest running open source project (bugs, trolls, and all). It’s time we start treating it like one.”

Democracy and open data: are the two linked?


Molly Shwartz at R-Street: “Are democracies better at practicing open government than less free societies? To find out, I analyzed the 70 countries profiled in the Open Knowledge Foundation’s Open Data Index and compared the rankings against the 2013 Global Democracy Rankings. As a tenet of open government in the digital age, open data practices serve as one indicator of an open government. Overall, there is a strong relationship between democracy and transparency.
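A minimal sketch of the kind of comparison described here is shown below, computing a rank correlation between the two indices; the input file and column names are hypothetical stand-ins for the Open Data Index and democracy-ranking data.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical merged file: one row per country, with its rank in each index.
df = pd.read_csv("open_data_vs_democracy_2013.csv")  # columns: country, open_data_index_rank, democracy_rank

# Spearman's rho measures how strongly the two rankings move together.
rho, p = spearmanr(df["open_data_index_rank"], df["democracy_rank"])
print(f"Spearman rank correlation: {rho:.2f} (p = {p:.3f})")
```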
Using data collected in October 2013, the top ten countries for openness include the usual bastion-of-democracy suspects: the United Kingdom, the United States, mainland Scandinavia, the Netherlands, Australia, New Zealand and Canada.
There are, however, some noteworthy exceptions. Germany ranks lower than Russia and China. All three rank well above Lithuania. Egypt, Saudi Arabia and Nepal all beat out Belgium. The chart (below) shows the democracy ranking of these same countries from 2008-2013 and highlights the obvious inconsistencies in the correlation between democracy and open data for many countries.
[Chart: democracy rankings of the countries above, 2008-2013]
There are many reasons for such inconsistencies. The implementation of open-government efforts – for instance, opening government data sets – often can be imperfect or even misguided. Drilling down to some of the data behind the Open Data Index scores reveals that even countries that score very well, such as the United States, have room for improvement. For example, the judicial branch generally does not publish data and houses most information behind a paywall. The status of legislation and amendments introduced by Congress also often is not available in machine-readable form.
As internationally recognized markers of political freedom and technological innovation, open government initiatives are appealing political tools for politicians looking to gain prominence in the global arena, regardless of whether or not they possess a real commitment to democratic principles. In 2012, Russia made a public push to cultivate open government and open data projects that was enthusiastically endorsed by American institutions. In a June 2012 blog post summarizing a Russian “Open Government Ecosystem” workshop at the World Bank, one World Bank consultant professed the opinion that open government innovations “are happening all over Russia, and are starting to have genuine support from the country’s top leaders.”
Given the Russian government’s penchant for corruption, cronyism, violations of press freedom and increasing restrictions on public access to information, the idea that it was ever committed to government accountability and transparency is dubious at best. This was confirmed by Russia’s May 2013 withdrawal of its letter of intent to join the Open Government Partnership. As explained by John Wonderlich, policy director at the Sunlight Foundation:

While Russia’s initial commitment to OGP was likely a surprising boon for internal champions of reform, its withdrawal will also serve as a demonstration of the difficulty of making a political commitment to openness there.

Which just goes to show that, while a democratic government does not guarantee open government practices, a government that regularly violates democratic principles may be an impossible environment for implementing open government.
A cursory analysis of the ever-evolving international open data landscape reveals three major takeaways:

  1. Good intentions for government transparency in democratic countries are not always effectively realized.
  2. Politicians will gladly pay lip-service to the idea of open government without backing up words with actions.
  3. The transparency we’ve established can go away quickly without vigilant oversight and enforcement.”

Digital Social Innovation


Nesta: Digital technologies and the internet play an increasingly important role in how social innovation happens. We call this phenomenon digital social innovation (DSI) and have created a network map that we’re inviting you to join.
But what do we really mean by the term DSI? Peter Baeck and Alice Casey take a closer look at the tools and platforms you use to help you start your own digital social innovation project or get involved in those that others have already begun.
As part of our DSI research project, we have been looking across Europe, and beyond, to find out more about how people are using digital technology to make a social impact. We’re inviting people involved in creating these new social innovations to map their activities over at our open data community map www.digitalsocial.eu. We hope this will give everyone working on digital social innovation more exposure and help funders and researchers to shape their work to support this exciting field.

Below, we highlight our top 11 DSI trends to watch. Although you can read about each one separately, many of the most exciting innovations come from combining several of these trends to form entirely new systems. We’d love to gather more examples, so please add those you may have to our crowdmap here.

Data.gov Turns Five


NextGov: “When government technology leaders first described a public repository for government data sets more than five years ago, the vision wasn’t totally clear.
“I just didn’t understand what they were talking about,” said Marion Royal of the General Services Administration, describing his first introduction to the project. “I was thinking, ‘this is not going to work for a number of reasons.’”
A few minutes later, he was the project’s program director. He caught on to the vision, helped clarify it, and has since worked with a small team to shepherd online and aggregate more than 100,000 data sets compiled and hosted by agencies across federal, state and local governments.
Many Americans still don’t know what Data.gov is, but chances are good they’ve benefited from the site, perhaps from information such as climate or consumer complaint data. Maybe they downloaded the Red Cross’ Hurricane App after Superstorm Sandy or researched their new neighborhood through a real estate website that drew from government information.
Hundreds of companies pull data they find on the site, which has seen 4.5 million unique visitors from 195 countries, according to GSA. Data.gov has proven a key part of President Obama’s open data policies, which aim to make government more efficient and open as well as to stimulate economic activity by providing private companies, organizations and individuals machine-readable ingredients for new apps, digital tools and programs.”
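For developers, the catalog behind Data.gov is itself machine-readable; below is a minimal sketch of a search against the standard CKAN action API that catalog.data.gov exposes (endpoint shown as commonly documented; verify against the current catalog documentation before building on it).

```python
import requests

# Search the Data.gov catalog (CKAN) for climate-related datasets.
url = "https://catalog.data.gov/api/3/action/package_search"
resp = requests.get(url, params={"q": "climate", "rows": 5})
resp.raise_for_status()

results = resp.json()["result"]
print("Total matching datasets:", results["count"])
for ds in results["results"]:
    print("-", ds["title"])
```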

Open Data at Core of New Governance Paradigm


GovExec: “Rarely are federal agencies compared favorably with Facebook, Instagram, or other modern models of innovation, but there is every reason to believe they can harness innovation to improve mission effectiveness. After all, Aneesh Chopra, former U.S. Chief Technology Officer, reminded the Excellence in Government 2014 audience that government has a long history of innovation. From nuclear fusion to the Internet, the federal government has been at the forefront of technological development.
According to Chopra, the key to fueling innovation and economic prosperity today is open data. But to make the most of open data, government needs to adapt its culture. Chopra outlined three essential elements of doing so:

  1. Involve external experts – integrating outside ideas is second to none as a source of innovation.
  2. Leverage the experience of those on the front lines – federal employees who directly execute their agency’s mission often have the best sense of what does and does not work, and what can be done to improve effectiveness.
  3. Look to the public as a value multiplier – just as Facebook provides a platform for tens of thousands of developers to provide greater value, federal agencies can provide the raw material for many more to generate better citizen services.

In addition to these three broad elements, Chopra offered four specific levers government can use to help enact this paradigm shift:

  1. Democratize government data – opening government data to the public facilitates innovation. For example, data provided by the National Oceanic and Atmospheric Administration helps generate a $5 billion industry by maintaining almost no intellectual property constraints on its weather data.
  2. Collaborate on technical standards – government can act as a convener of industry members to standardize technological development, and thereby increase the value of data shared.
  3. Issue challenges and prizes – incentivizing the public to get involved and participate in efforts to create value from government data enhances the government’s ability to serve the public.
  4. Launch government startups – programs like the Presidential Innovation Fellows initiative help challenge rigid bureaucratic structures and spread a culture of innovation.

Federal leaders will need a strong political platform to sustain this shift. Fortunately, this blueprint is also bipartisan, says Chopra. Political leaders on both sides of the aisle are already getting behind the movement to bring innovation to the core of government.”

The rise of open data driven businesses in emerging markets


Alla Morrison at the World Bank blog:

Key findings —

  • Many new data companies have emerged around the world in the last few years. Of these companies, the majority use some form of government data.
  • There are a large number of data companies in sectors with high social impact and tremendous development opportunities.
  • An actionable pipeline of data-driven companies exists in Latin America and in Asia. The most desired type of financing is equity, followed by quasi-equity, in amounts ranging from $100,000 to $5 million, with averages between $2 million and $3 million depending on the region. The total estimated need for financing may exceed $400 million.

“The economic value of open data is no longer a hypothesis
How can one make money with open data which is akin to air – free and open to everyone? Should the World Bank Group be in the catalyzer role for a sector that is just emerging?  And if so, what set of interventions would be the most effective? Can promoting open data-driven businesses contribute to the World Bank Group’s twin goals of fighting poverty and boosting shared prosperity?
These questions have been top of mind since the World Bank Open Finances team convened a group of open data entrepreneurs from across Latin America to share their business models, success stories and challenges at the Open Data Business Models workshop in Uruguay in June 2013. We were in Uruguay to find out whether open data could lead to the creation of sustainable new businesses and jobs. To do so, we tested a couple of hypotheses: that open data has economic value, beyond the benefits of increased transparency and accountability; and that open data companies with sustainable business models already exist in emerging economies.
Encouraged by our findings in Uruguay we set out to further explore the economic development potential of open data, with a focus on:

  • Contribution of open data to countries’ GDP;
  • Innovative solutions to tackle social problems in key sectors like agriculture, health, education, transportation, climate change, financial services, especially those benefiting low income populations;
  • Economic benefits of governments’ buy-in into the commercial value of open data and resulting release of new datasets, which in turn would lead to increased transparency in public resource management (reductions in misallocations, a more level playing field in procurement) and better service delivery; and
  • Creation of data-related private sector jobs, especially suited for the tech savvy young generation.

We proposed a joint IFC/World Bank approach (From open data to development impact – the crucial role of private sector) that envisages providing financing to data-driven companies through a dedicated investment fund, as well as loans and grants to governments to create a favorable enabling environment. The concept was received enthusiastically for the most part by a wide group of peers at the Bank, the IFC, as well as NGOs, foundations, DFIs and private sector investors.
Thanks also in part to a McKinsey report last fall stating that open data could help unlock more than $3 trillion in value every year, the potential value of open data is now better understood. The acquisition of Climate Corporation (whose business model holds enormous potential for agriculture and food security, if governments open up the right data) for close to a billion dollars last November and the findings of the Open Data 500 project led by the GovLab at NYU further substantiated the hypothesis. These days no one asks whether open data has economic value; the focus has shifted to finding ways for companies, both startups and large corporations, and governments to unlock it. The first question, though, is: is it still too early to plan a significant intervention to spur open data-driven economic growth in emerging markets?”

Conceptualizing Open Data ecosystems: A timeline analysis of Open Data development in the UK


New paper by Tom Heath et al: “In this paper, we conceptualize Open Data ecosystems by analysing the major stakeholders in the UK. The conceptualization is based on a review of popular Open Data definitions and business ecosystem theories, which we applied to empirical data using a timeline analysis. Our work is informed by a combination of discourse analysis and in-depth interviews, undertaken during the summer of 2013. Drawing on the UK as a best practice example, we identify a set of structural business ecosystem properties: circular flow of resources, sustainability, demand that encourages supply, and dependence developing between suppliers, intermediaries, and users. However, significant gaps and shortcomings are found to remain. Most prominently, demand is not yet fully encouraging supply and actors have yet to experience fully mutual interdependence.”