HHS releases new data and tools to increase transparency on hospital utilization and other trends


Press release: “With more than 2,000 entrepreneurs, investors, data scientists, researchers, policy experts, government employees and more in attendance, the Department of Health and Human Services (HHS) is releasing new data and launching new initiatives at the annual Health Datapalooza conference in Washington, D.C.
Today, the Centers for Medicare & Medicaid Services (CMS) is releasing its first annual update to the Medicare hospital charge data, or information comparing the average amount a hospital bills for services that may be provided in connection with a similar inpatient stay or outpatient visit. CMS is also releasing a suite of other data products and tools aimed at increasing transparency about Medicare payments. The data trove on CMS’s website now includes inpatient and outpatient hospital charge data for 2012, and new interactive dashboards for the CMS Chronic Conditions Data Warehouse and geographic variation data. Also today, the Food and Drug Administration (FDA) will launch a new open data initiative. And before the end of the conference, the Office of the National Coordinator for Health Information Technology (ONC) will announce the winners of two data challenges.
“The release of these data sets furthers the administration’s efforts to increase transparency and support data-driven decision making which is essential for health care transformation,” said HHS Secretary Kathleen Sebelius.
“These public data resources provide a better understanding of Medicare utilization, the burden of chronic conditions among beneficiaries and the implications for our health care system and how this varies by where beneficiaries are located,” said Bryan Sivak, HHS chief technology officer. “This information can be used to improve care coordination and health outcomes for Medicare beneficiaries nationwide, and we are looking forward to seeing what the community will do with these releases. Additionally, the openFDA initiative being launched today will for the first time enable a new generation of consumer facing and research applications to embed relevant and timely data in machine-readable, API-based formats.”
2012 Inpatient and Outpatient Hospital Charge Data
The data posted today on the CMS website provide the first annual update of the hospital inpatient and outpatient data released by the agency last spring. The data include information comparing the average charges for services that may be provided in connection with the 100 most common Medicare inpatient stays at over 3,000 hospitals in all 50 states and Washington, D.C. Hospitals determine what they will charge for items and services provided to patients, and these “charges” are the amount the hospital generally bills for those items or services.
With two years of data now available, researchers can begin to look at trends in hospital charges. For example, average charges for medical back problems increased nine percent from $23,000 to $25,000, but the total number of discharges decreased by nearly 7,000 from 2011 to 2012.
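As a rough illustration of the year-over-year comparison described above, here is a minimal pandas sketch. The file names and column labels are assumptions about the downloaded inpatient charge CSVs, not the official layout, so adjust them to match the files actually published on cms.gov.

```python
# Sketch: comparing charge trends across the 2011 and 2012 CMS inpatient
# charge files. File names and column labels are assumptions.
import pandas as pd

cols = ["DRG Definition", "Total Discharges", "Average Covered Charges"]
ipps_2011 = pd.read_csv("inpatient_charges_2011.csv", usecols=cols)
ipps_2012 = pd.read_csv("inpatient_charges_2012.csv", usecols=cols)

def summarize(df):
    # Aggregate each DRG across all hospitals: total discharges and the
    # discharge-weighted average charge.
    grouped = df.groupby("DRG Definition")
    return pd.DataFrame({
        "discharges": grouped["Total Discharges"].sum(),
        "avg_charge": grouped.apply(
            lambda g: (g["Average Covered Charges"] * g["Total Discharges"]).sum()
            / g["Total Discharges"].sum()
        ),
    })

trend = summarize(ipps_2011).join(summarize(ipps_2012),
                                  lsuffix="_2011", rsuffix="_2012")
trend["charge_change_pct"] = (trend["avg_charge_2012"] / trend["avg_charge_2011"] - 1) * 100
print(trend.sort_values("charge_change_pct", ascending=False).head(10))
```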
In April, ONC launched a challenge – the Code-a-Palooza challenge – calling on developers to create tools that will help patients use the Medicare data to make health care choices. Fifty-six innovators submitted proposals and 10 finalists are presenting their applications during Datapalooza. The winning products will be announced before the end of the conference.
Chronic Conditions Warehouse and Dashboard
CMS recently released new and updated information on chronic conditions among Medicare fee-for-service beneficiaries, including:

  • Geographic data summarized to national, state, county, and hospital referral region levels for the years 2008-2012;
  • Data for examining disparities among specific Medicare populations, such as beneficiaries with disabilities, dual-eligible beneficiaries, and race/ethnic groups;
  • Data on prevalence, utilization of select Medicare services, and Medicare spending;
  • Interactive dashboards that provide customizable information about Medicare beneficiaries with chronic conditions at state, county, and hospital referral region levels for 2012; and
  • Chartbooks and maps.

These public data resources support the HHS Initiative on Multiple Chronic Conditions by providing researchers and policymakers a better understanding of the burden of chronic conditions among beneficiaries and the implications for our health care system.
Geographic Variation Dashboard
The Geographic Variation Dashboards present Medicare fee-for-service per-capita spending at the state and county levels in interactive formats. CMS calculated the spending figures in these dashboards using standardized dollars that remove the effects of the geographic adjustments that Medicare makes for many of its payment rates. The dashboards include total standardized per capita spending, as well as standardized per capita spending by type of service. Users can select the indicator and year they want to display. Users can also compare data for a given state or county to the national average. All of the information presented in the dashboards is also available for download from the Geographic Variation Public Use File.
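As a small illustration, the county-versus-national comparison the dashboards offer can also be reproduced from the downloadable file. The sketch below assumes a county-level CSV with the column names shown, which are hypothetical placeholders; check them against the actual Geographic Variation Public Use File layout before use.

```python
# Sketch: comparing one county's standardized per capita spending to a
# national figure using the Geographic Variation Public Use File.
# The file name and column names are hypothetical placeholders.
import pandas as pd

puf = pd.read_csv("geo_variation_puf_county.csv")

# Weight each county by its beneficiary count to approximate the national
# per capita figure from county-level rows.
national = (
    (puf["Standardized Per Capita Costs"] * puf["Beneficiaries"]).sum()
    / puf["Beneficiaries"].sum()
)

county = puf[(puf["State"] == "MN") & (puf["County"] == "Hennepin")].iloc[0]
ratio = county["Standardized Per Capita Costs"] / national
print(f"Hennepin County spends {ratio:.2f}x the national standardized per capita average")
```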
Research Cohort Estimate Tool
CMS also released a new tool that will help researchers and other stakeholders estimate the number of Medicare beneficiaries with certain demographic profiles or health conditions. This tool can assist a variety of stakeholders interested in specific figures on Medicare enrollment. Researchers can also use this tool to estimate the size of their proposed research cohort and the cost of requesting CMS data to support their study.
Digital Privacy Notice Challenge
ONC, with the HHS Office for Civil Rights, will be awarding the winner of the Digital Privacy Notice Challenge during the conference. The winning products will help consumers get notices of privacy practices from their health care providers or health plans directly in their personal health records or from their providers’ patient portals.
OpenFDA
The FDA’s new initiative, openFDA, is designed to facilitate easier access to large, important public health datasets collected by the agency. OpenFDA will make FDA’s publicly available data accessible in a structured, computer-readable format, making it possible for technology specialists, such as mobile application creators, web developers, data visualization artists and researchers, to quickly search, query, or pull massive amounts of information on an as-needed basis. The initiative is the result of extensive research to identify FDA’s publicly available datasets that are often in demand but traditionally difficult to use. Based on this research, openFDA is beginning with a pilot program involving millions of reports of drug adverse events and medication errors submitted to the FDA from 2004 to 2013. The pilot will later be expanded to include the FDA’s databases on product recalls and product labeling.
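For developers, a minimal sketch of a query against the openFDA pilot is below. It follows the REST pattern documented at launch (api.fda.gov/drug/event.json with search, count, and limit parameters); the specific field counted here is an illustrative assumption and should be verified against the reference at open.fda.gov.

```python
# Sketch: pulling adverse event report counts from the openFDA pilot API.
# Endpoint and parameters follow the documentation at open.fda.gov;
# treat the field name below as an assumption to verify.
import requests

resp = requests.get(
    "https://api.fda.gov/drug/event.json",
    params={
        "search": "receivedate:[20040101 TO 20131231]",  # pilot covers 2004-2013
        "count": "patient.drug.drugindication.exact",    # tally most common indications
        "limit": 10,
    },
    timeout=30,
)
resp.raise_for_status()
for row in resp.json()["results"]:
    print(f'{row["term"]:40s} {row["count"]}')
```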
For more information about CMS data products, please visit http://www.cms.gov/Research-Statistics-Data-and-Systems/Research-Statistics-Data-and-Systems.html.
For more information about today’s FDA announcement visit: http://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/UCM399335 or http://open.fda.gov/

Estonian plan for 'data embassies' overseas to back up government databases


Graeme Burton in Computing: “Estonia is planning to open ‘data embassies’ overseas to back up government databases and to operate government ‘in the cloud’.
The aim is partly to improve efficiency, but driven largely by fear of invasion and occupation, Jaan Priisalu, the director general of Estonian Information System Authority, told Sky News.
He said: “We are planning to actually operate our government in the cloud. It’s clear also how it helps to protect the country, the territory. Usually when you are the military planner and you are planning the occupation of the territory, then one of the rules is suppress the existing institutions.
“And if you are not able to do it, it means that this political price of occupying the country will simply rise for planners.”
Part of the rationale for the plan, he continued, was fear of attack from Russia in particular, which has been heightened following the occupation of Crimea, formerly in Ukraine.
“It’s quite clear that you can have problems with your neighbours. And our biggest neighbour is Russia, and nowadays it’s quite aggressive. This is clear.”
The plan is to back up critical government databases outside of Estonia so that affairs of state can be conducted in the cloud, even if the country is invaded. It would also have the benefit of keeping government information out of invaders’ hands – provided it can keep its government cloud secure.
According to Sky News, the UK is already in advanced talks about hosting Estonian government databases, which could make it the first of Estonia’s data embassies.
Having wrested independence from the Soviet Union in 1991, Estonia has experienced frequent tension with its much bigger neighbour. In April 2007, for example, after the relocation of the “Bronze Soldier of Tallinn” and the exhumation of the soldiers buried in a square in the centre of the capital to a military cemetery, the country was subjected to a prolonged cyber-attack traced to Russia.
Russian hacker “Sp0Raw” said that the most efficient of the online attacks on Estonia could not have been carried out without the approval of Russian authorities and added that the hackers seemed to act under “recommendations” from parties in government. However, claims by Estonia that the Russian government was directly involved in the attacks were “empty words, not supported by technical data”.
Mike Witt, deputy director of the US Computer Emergency Response Team (CERT), suggested that the distributed denial-of-service (DDOS) attacks, while crippling to the Estonian government at the time, were not significant in scale from a technical standpoint. However, the Estonian government was forced to shut down many of its online operations in response.
At the same time, the Estonian government has been accused of implementing anti-Russian laws and discriminating against its large ethnic Russian population.
Last week, the Estonian government unveiled a plan to allow anyone in the world to apply for “digital citizenship” of the country, enabling them to use Estonian online services, open bank accounts, and start companies without having to physically reside in the country.”

How to treat government like an open source project


Ben Balter in OpenSource.com: “Open government is great. At least, it was a few election cycles ago. FOIA requests, open data, seeing how your government works—it’s arguably brought to light a lot of not-so-great practices, and in many cases, has spurred citizen-centric innovation not otherwise imagined before the information’s release.
It used to be that sharing information was really, really hard. Open government wasn’t even a possibility a few hundred years ago. Throughout the history of communication tools—be it the printing press, fax machine, or floppy disks—new tools have generally done three things: lowered the cost to transmit information, increased who that information could be made available to, and increased how quickly that information could be distributed. But printing presses and fax machines have two limitations: they are one-way and asynchronous. They let you more easily request, and eventually see, how the sausage was made, but they don’t let you actually take part in the sausage-making. You may be able to see what’s wrong, but you don’t have the chance to make it better. By the time you find out, it’s already too late.
As technology allows us to communicate with greater frequency and greater fidelity, we have the chance to make our government not only transparent, but truly collaborative.

So, how do we encourage policy makers and bureaucrats to move from open government to collaborative government, to learn open source’s lessons about openness and collaboration at scale?
For one, we geeks can help to create a culture of transparency and openness within government by driving up the demand side of the equation. Be vocal, demand data, expect to see process, and once released, help build lightweight apps. Show potential change agents in government that their efforts will be rewarded.
Second, it’s a matter of tooling. We’ve got great tools out there—things like Git that can track who made what change when, and open standards like CSV or JSON that don’t require proprietary software—but by and large they’re a foreign concept in government, at least among those empowered to make change. Command-line interfaces with black backgrounds and green text can be intimidating to government bureaucrats used to desktop publishing tools. Make it easier for government to do the right thing and choose open standards over proprietary tooling.”
Last, be a good open source ambassador. Help your home city or state get involved with open source. Encourage them to take their first step (be it consuming open source, publishing, or collaborating with the public), teach them what it means to do things in the open. And when they do push code outside the firewall, above all, be supportive. We’re in this together.
As technology makes it easier to work together, geeks can help make our government not just open, but in fact collaborative. Government is the world’s largest and longest running open source project (bugs, trolls, and all). It’s time we start treating it like one.
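The “tooling” point above lends itself to a concrete illustration. The minimal sketch below, with a placeholder file name, shows the open-standards half of the argument: a dataset kept as plain CSV can be read without proprietary software, tracked change-by-change under Git, and converted to JSON for app developers using nothing beyond the standard library.

```python
# Sketch: a dataset published as plain CSV needs no proprietary software,
# diffs cleanly under version control, and converts to JSON for developers.
# The file name is a placeholder.
import csv
import json

with open("capital-budget.csv", newline="") as f:
    rows = list(csv.DictReader(f))

with open("capital-budget.json", "w") as f:
    json.dump(rows, f, indent=2)

print(f"Converted {len(rows)} rows to JSON")
```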

What happened to the idea of the Great Society?


John Micklethwait and Adrian Wooldridge in the Financial Times: “Most of the interesting experiments in government are taking place far from Washington: in Singapore, which delivers much better public services at a fraction of the cost; in Brazil, with its “conditional” welfare payments, dependent on behaviour; in Scandinavia, where “socialist” Sweden has cut state spending from 67 per cent of GDP in 1993 to 49 per cent, introduced school vouchers and brought entitlements into balance by raising the retirement age. In the US, the dynamic bits of government are in its cities, where pragmatic mayors are experimenting with technology.
What will replace the Great Society? For Republicans, the answer looks easy: just shrink government. But this gut instinct runs up against two big problems. The assumption that government is evil means they never take it seriously (Singapore has a tiny state but pays its best civil servants $2m a year). And, in practice, American conservatives are addicted to Big Government: hence the $1.3tn of exemptions in the US tax code, most of which are in effect a welfare state for the rich.
For Democrats, the problem is even worse. Having become used to promising ever more entitlements to voters, they face a series of unedifying choices: whether to serve society at large (by making schools better) or to protect public sector unions (teachers account for many of their activists); and whether to offer ever less generous universal benefits to the entire population or to target spending on the disadvantaged.
This is where the politics of the future will be fought, on both sides of the Atlantic. It will not be as inspiring as the Great Society. It will be about slimming and modernising government, tying pensions to life expectancy and unleashing technology on the public sector.
But what the US – and Europe – needs is cool-headed pragmatism. Government is neither a monster nor a saviour but an indispensable part of a decent society that, like most organisations, works best when it focuses on doing a few things well.”

Data.gov Turns Five


NextGov: “When government technology leaders first described a public repository for government data sets more than five years ago, the vision wasn’t totally clear.
“I just didn’t understand what they were talking about,” said Marion Royal of the General Services Administration, describing his first introduction to the project. “I was thinking, ‘this is not going to work for a number of reasons.’”
A few minutes later, he was the project’s program director. He caught on to, and helped clarify, that vision, and has since worked with a small team to shepherd online and aggregate more than 100,000 data sets compiled and hosted by agencies across federal, state and local governments.
Many Americans still don’t know what Data.gov is, but chances are good they’ve benefited from the site, perhaps from information such as climate or consumer complaint data. Maybe they downloaded the Red Cross’ Hurricane App after Superstorm Sandy or researched their new neighborhood through a real estate website that drew from government information.
Hundreds of companies pull data they find on the site, which has seen 4.5 million unique visitors from 195 countries, according to GSA. Data.gov has proven a key part of President Obama’s open data policies, which aim to make government more efficient and open as well as to stimulate economic activity by providing private companies, organizations and individuals machine-readable ingredients for new apps, digital tools and programs.”
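As an illustration of what “machine-readable ingredients” means in practice, the sketch below searches the Data.gov catalog programmatically. It assumes the catalog’s CKAN action API at catalog.data.gov (package_search); verify the endpoint and parameters against the current Data.gov developer documentation.

```python
# Sketch: searching the Data.gov catalog via the assumed CKAN action API.
import requests

resp = requests.get(
    "https://catalog.data.gov/api/3/action/package_search",
    params={"q": "consumer complaints", "rows": 5},
    timeout=30,
)
resp.raise_for_status()
result = resp.json()["result"]
print(f"{result['count']} matching data sets")
for dataset in result["results"]:
    print("-", dataset["title"])
```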

The Secret Science of Retweets


Emerging Technology From the arXiv: “If you send a tweet to a stranger asking them to retweet it, you probably wouldn’t be surprised if they ignored you entirely. But if you sent out lots of tweets like this, perhaps a few might end up being passed on.

How come? What makes somebody retweet information from a stranger? That’s the question addressed today by Kyumin Lee from Utah State University in Logan and a few pals from IBM’s Almaden Research Center in San Jose…. By studying the characteristics of Twitter users, it is possible to identify strangers who are more likely to pass on your message than others. And in doing this, the researchers say they’ve been able to improve the retweet rate of messages sent to strangers by up to 680 percent.
So how did they do it? The new technique is based on the idea that some people are more likely to retweet than others, particularly on certain topics and at certain times of the day. So the trick is to find these individuals and target them when they are likely to be most effective.
The approach was straightforward: study individuals on Twitter, looking at their profiles and their past tweeting behavior for clues that they might be more likely to retweet certain types of information. Having found these individuals, send your tweets to them.
That’s the theory. In practice, it’s a little more involved. Lee and co wanted to test people’s response to two types of information: local news (in San Francisco) and tweets about bird flu, a significant issue at the time of their research. They then created several Twitter accounts with a few followers, specifically to broadcast information of this kind.
Next, they selected people to receive their tweets. For the local news broadcasts, they searched for Twitter users geolocated in the Bay area, finding over 34,000 of them and choosing 1,900 at random.
They then sent a single message to each user in the format:
“@SFtargetuser ‘A man was killed and three others were wounded in a shooting … http://bit.ly/KOl2sC’ Plz RT this safety news”
So the tweet included the user’s name, a short headline, a link to the story and a request to retweet.
Of these 1,900 people, 52 retweeted the message they received. That’s 2.8 percent.
For the bird flu information, Lee and co hunted for people who had already tweeted about bird flu, finding 13,000 of them and choosing 1,900 at random. Of these, 155 retweeted the message they received, a retweet rate of 8.4 percent.
But Lee and co found a way to significantly improve these retweet rates. They went back to the original lists of Twitter users and collected publicly available information about each of them, such as their personal profile, their number of followers, the people they followed, their 200 most recent tweets, and whether they retweeted the message they had received.
Next, the team used a machine learning algorithm to search for correlations in this data that might predict whether somebody was more likely to retweet. For example, they looked at whether people with older accounts were more likely to retweet, how the ratio of friends to followers influenced retweet likelihood, and whether the types of negative or positive words used in previous tweets showed any link. They also looked at the time of day that people were most active in tweeting.
The result was a machine learning algorithm capable of picking users who were most likely to retweet on a particular topic.
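The paper’s exact feature set and model are not reproduced here, but a minimal sketch of this style of pipeline, using placeholder data and hypothetical features (account age, follower count, friend/follower ratio, past-tweet sentiment, activity during the target hour), might look like the following with scikit-learn.

```python
# Sketch of the kind of model described above: profile and behaviour features
# for candidate users, with a binary label for whether each one retweeted a
# probe message. Features and sample data are illustrative, not the authors'.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1900  # one row per targeted user, as in the experiments described above

# Hypothetical features: account age (days), follower count,
# friends/followers ratio, mean sentiment of last 200 tweets,
# and share of past tweets posted during the hour we plan to post.
X = np.column_stack([
    rng.integers(30, 3000, n),
    rng.lognormal(5, 1.5, n),
    rng.lognormal(0, 0.5, n),
    rng.normal(0.0, 0.3, n),
    rng.random(n),
])
y = rng.integers(0, 2, n)  # placeholder labels: did the user retweet?

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Rank unseen users by predicted retweet probability and target the top ones.
scores = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, scores))
print("top candidates:", np.argsort(scores)[::-1][:10])
```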
And the results show that it is surprisingly effective. When the team sent local information tweets to individuals identified by the algorithm, 13.3 percent retweeted it, compared to just 2.6 percent of people chosen at random.
And they got even better results when they timed the request to match the periods when people had been most active in the past. In that case, the retweet rate rose to 19.3 percent. That’s an improvement of over 600 percent.
Similarly, the rate for bird flu information rose from 8.3 percent for users chosen at random to 19.7 percent for users chosen by the algorithm.
That’s a significant result that marketers, politicians, and news organizations will be eyeing with envy.
An interesting question is how they can make this technique more generally applicable. It raises the prospect of an app that allows anybody to enter a topic of interest and which then creates a list of people most likely to retweet on that topic in the next few hours.
Lee and co do not mention any plans of this kind. But if they don’t exploit it, then there will surely be others who will.
Ref: arxiv.org/abs/1405.3750 : Who Will Retweet This? Automatically Identifying and Engaging Strangers on Twitter to Spread Information”

Learning from The Wealth of the Commons


Paper by Mae Shaw in a special issue of the Community Development Journal on “Commons Sense: New thinking about an old idea”: “‘We are poised between an old world that no longer works and a new one struggling to be born. Surrounded by centralized hierarchies on the one hand and predatory markets on the other, people around the world are searching for alternatives.’

This is the starting point for what David Bollier and Silke Helfrich, the editors of The Wealth of the Commons: A World Beyond Market and State (2012), describe as ‘an extended global exercise in commoning’ – Peter Linebaugh’s term for ‘the self-determination of commoners in managing their shared resources’ (p. 396). In other words, the book itself is offered as an active process of ‘making the path’ by presenting ‘some of the most promising new paths now being developed’. It is intended to be ‘rigorous enough for academic readers yet accessible enough for the layperson’. In this, it more than achieves its ambitions. The Wealth of the Commons is an edited collection of seventy-three short papers from thirty countries: ‘a collective venture of sharing, collaboration, negotiation and creative production among some of the most diverse commons scholars, activists and projects leaders imaginable’. This rich and diverse source of knowledge and inspiration could be described as ‘polyvocal’ in the sense that it presents a multiplicity of voices improvising around a single theme – sometimes in harmony, sometimes discordant, but always interesting.

The book brings together an impressive collection of contributors from different places, backgrounds and interests to explore the meaning of the commons and to advocate for it ‘as a new paradigm’ for the organization of public and private life. In this sense, it represents a project rather than an analysis: essentially espousing a cause with imperative urgency. This is not necessarily a weakness, but it does raise specific questions about what is included and what is absent or marginalized in this particular selection of accounts, and what might be lost along the way. What counts as ‘commons’ or ‘the commons’ or ‘the common’ (all used in the text) is a subject of discussion and contestation here, as elsewhere. The effort to ‘name and claim’ is an integral aspect of the project. As Jeffrey et al. (2012, p. 10) comment, ‘the struggle for the commons has never been without its own politics of separation and division’, raising valid questions about the prospects for a coherent paradigm at this stage. At the very least, however, this rich resource may prove seminal in countering those dominant paradigms of growth and development in which structural and cultural adjustments ‘serve as a justifying rhetoric for continuity in plunder’ of common resources (Mattei, p. 41).

The contributions fall into three general categories: those offering a critique of existing ‘increasingly dysfunctional’ market/state relations; those that ‘enlarge theoretical understandings of the commons as a way to change the world’; and those that ‘describe innovative working projects which demonstrate the feasibility’ of the commons.

What counts as the commons?

As acknowledged in many of the chapters, defining the commons in any consistent and convincing way can be deeply problematic. Like ‘community’ itself, it can be regarded to some degree as an ideological portmanteau which contains a variety of meanings. Nonetheless, there is a general commitment to confront such difficulties in an open way, and to be as clear as possible about what the commons might represent, what it might replace, and what it should not be confused with. Put most simply, the commons refers to what human beings share in nature and society that should be cherished for all now and for the future: ‘the term … provides the binding element between the natural and the social or cultural worlds’ (Weber p.11). Its profound challenge to the logic of competitive capitalist relations, therefore, is to ‘validate new schemes of human relations, production and governance … commonance’ (Bollier and Helfrich, p. xiv) that penetrate all levels of public and private life. This idea is explored in detail in many of the contributions.

The commons, then, claims to represent a philosophical stance, an intellectual framework, a moral and economic imperative, a set of organizing principles and commitments, a movement, and an emerging ‘global community of practice’ (O’Connell, 2012). It has also developed an increasingly shared discourse, which is designed to unsettle institutionalized norms and values and to reclaim or remake the language of co-operation, fairness and social justice. As the editorial points out, the language of capitalism is one that becomes ‘encoded into the epistemology of our language and internalized by people’. In community development, and elsewhere, we have become sensitized to the way in which progressive language can be appropriated to support individualistic market values. When empowerment can mean facilitated asset-stripping of local communities, and solidarity targets can be set by government (e.g. Scottish Government, 2007), then we must be wary about assuming proprietorial closure on the term ‘commons’ itself.

As Federici, in a particularly persuasive chapter, warns: ‘… capital is learning about the virtues of the common good’ (p. 46). She argues that, ‘since at least the 1990s, the language of the commons has been appropriated … by the World Bank and put at the service of privatization’. For this reason, it is important to think of the commons as a ‘quality of relations, a principle of co-operation and of responsibility to each other and to the earth, the forests, the seas, the animals’ (p. 50). This produces a different operational logic, which is explored in depth across the collection.

Deficiencies in the commons framework

To advance the commons as ‘a new paradigm’, it is necessary to locate it historically and to show the ways in which it has been colonized and compromised, as some of these pieces do. It may seem ironic that the meaning of ‘the commons’ to many people in the UK, for example, is that bear pit of parliamentary business, the House of Commons, in which adversarial rather than consensual politics is the order of the day. Reclaiming such foundational ideas is a lengthy and demanding process, as David Graeber shows in The Democracy Project, his recent account of the Occupy Movement, which for a time commanded considerable international interest. Drawing on Linebaugh, Federici contends that ‘commons have been the thread that has connected the history of the class struggle into our time’.

It is unfortunate, therefore, that the volume fails to address the relationship between organized labour and the commons, as highlighted in the introduction, because there is a distinctive contribution to be made here. As Harvey (2012) argues, decentralization and autonomy are also primary vehicles for reinforcing neoliberal class strategies of social reproduction and producing greater inequality. For example, in urban environments in particular, ‘the better the common qualities a social group creates, the more likely it is to be raided and appropriated by private profit-maximising interests’ leading inexorably to economic cleansing of whole areas. Gentrification and tourism are the clearest examples. The salience of class in general is an underdeveloped line of argument. If this authoritative collection is anything to go by, this may be a significant deficiency in the commons framework.

Without historical continuity – honouring the contribution of those ‘commoners’ who came before in various guises and places – there is a danger of falling into the contemporary trap of regarding ‘innovation’ as a way of separating us from our past. History in the past as well as in the making is as essential a part of our commons as is the present and the future – material, temporal and spiritual….”

New Research Suggests Collaborative Approaches Produce Better Plans


JPER: “In a previous blog post (see http://goo.gl/pAjyWE), we discussed how many of the most influential articles in the Journal of Planning Education and Research (and in peer publications, like JAPA) over the last two decades have focused on communicative or collaborative planning. Proponents of these approaches, most notably Judith Innes, Patsy Healey, Larry Susskind, and John Forester, developed the idea that the collaborative and communicative structures that planners use impact the quality, legitimacy, and equity of planning outcomes. In practice, communicative theory has led to participatory initiatives, such as those observed in New Orleans (post-Katrina, http://goo.gl/A5J5wk), Chattanooga (to revitalize its downtown and riverfront, http://goo.gl/zlQfKB), and in many other smaller efforts to foment wider involvement in decision making. Collaboration has also impacted regional governance structures, leading to more consensus-based forms of decision making, notably CALFED (SF Bay estuary governance, http://goo.gl/EcXx9Q) and transportation planning with Metropolitan Planning Organizations (MPOs)….
Most studies testing the implementation of collaborative planning have been case studies. Previous work by authors such as Innes and Booher has provided valuable qualitative data about collaboration in planning, but few studies have attempted to empirically test the hypothesis that consensus building and participatory practices lead to better planning outcomes.
Robert Deyle (Florida State) and Ryan Wiedenman (Atkins Global) build on previous case study research by surveying officials involved in developing long-range transportation plans in 88 U.S. MPOs about the process and outcomes of those plans. The study tests the hypothesis that collaborative processes provide better outcomes and enhanced long-term relationships in situations where “many stakeholders with different needs” have “shared interests in common resources or challenges” and where “no actor can meet his/her interests without the cooperation of many others” (Innes and Booher 2010, 7; Innes and Gruber 2005, 1985–2186). Current theory posits that consensus-based collaboration requires 1) the presence of all relevant interests, 2) mutual interdependence for goal achievement, and 3) honest and authentic dialog between participants (Innes and Booher 2010, 35–36; Deyle and Wiedenman 2014).

[Figure 2 in Deyle and Wiedenman (2014): conditions posited in the collaborative planning literature]
By surveying planning authorities, the authors found that most of the conditions (see Figure 2, above) posited in the collaborative planning literature had statistically significant impacts on planning outcomes. These included perceptions of plan quality and participant satisfaction with the plan, as well as intangible outcomes that benefit both the participants and their ongoing collaboration efforts. However, having a planning process in which all or most decisions were made by consensus did not improve outcomes. ….
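For readers curious what such a test looks like mechanically, the sketch below regresses a perceived plan-quality score on indicators for the posited conditions. The survey file and column names are hypothetical placeholders, not the authors’ actual variables or data.

```python
# Sketch of this style of test: regress a perceived plan-quality score on
# indicators for the collaborative-process conditions, using survey responses
# from MPO officials. File and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

survey = pd.read_csv("mpo_survey.csv")  # one row per MPO respondent (hypothetical)

predictors = [
    "all_interests_present",      # condition 1: all relevant interests at the table
    "mutual_interdependence",     # condition 2: actors need each other to meet goals
    "authentic_dialogue",         # condition 3: honest, authentic dialog
    "consensus_decision_making",  # were most decisions made by consensus?
]
X = sm.add_constant(survey[predictors])
y = survey["plan_quality_score"]

model = sm.OLS(y, X, missing="drop").fit()
print(model.summary())  # which conditions have statistically significant coefficients?
```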
Deyle, Robert E., and Ryan E. Wiedenman. “Collaborative Planning by Metropolitan Planning Organizations: A Test of Causal Theory.” Journal of Planning Education and Research (2014): 0739456X14527621.
To access this article FREE until May 31 click the following links: Online, http://goo.gl/GU9inf, PDF, http://goo.gl/jehAf1.”

#BringBackOurGirls: Can Hashtag Activism Spur Social Change?


Nancy Ngo at TechChange: “In our modern times of media cycles fighting for our short attention spans, it is easy to ride the momentum of a highly visible campaign that can quickly fizzle out once another competing story emerges. Since the kidnapping of approximately 300 Nigerian girls by the militant Islamist group Boko Haram last month, the international community has embraced the hashtag “#BringBackOurGirls” in a very vocal and visible social media campaign demanding action to rescue the Chibok girls. But one month after the mass kidnapping, with the girls still not rescued, do we need to take a different approach? Will #BringBackOurGirls be just another campaign we forget about once the next celebrity scandal becomes breaking news?

#BringBackOurGirls goes global starting in Nigeria

Most of the #BringBackOurGirls campaign activity has been highly visible on Twitter, Facebook, and international media outlets. In a fascinating Twitter heat map created with the tool CartoDB and featured in TIME magazine, we can see a time-lapsed digital map of how the hashtag #BringBackOurGirls spread globally, starting organically from within Nigeria in mid-April.

The #BringBackOurGirls hashtag has been embraced widely by many public figures and has garnered wide support across the world. Michelle Obama, David Cameron, and Malala Yousafzai have posted images with the hashtag, along with celebrities such as Ellen DeGeneres, Angelina Jolie, and Dwayne Johnson. To date, nearly 1 million people have signed the Change.org petition. Countries including the USA, UK, China, and Israel have pledged to join the rescue efforts, and other human rights campaigns have joined the #BringBackOurGirls Twitter momentum, as seen on this Hashtagify map.

Is #BringBackOurGirls repeating the mistakes of #KONY2012?


A great example of a past campaign where this happened was the KONY2012 campaign, which brought some albeit short-lived urgency to addressing the child soldiers recruited by Joseph Kony, leader of the Lord’s Resistance Army (LRA). Michael Poffenberger, who worked on that campaign, will join us as a guest expert in the TC110: Social Media for Social Change online course in June 2014 and compare it to the current #BringBackOurGirls campaign. Many have drawn parallels between the two campaigns and warned of the false optimism that hyped social media messages can bring when context is not fully considered and understood.

According to Lauren Wolfe of Foreign Policy magazine, “Understanding what has happened to the Nigerian girls and how to rescue them means beginning to face what has happened to hundreds of thousands, if not millions, of girls over years in global armed conflict.” To some critics, this hashtag trivializes the weaknesses of Nigerian democracy that have been exposed. Critics of using social media in advocacy campaigns have used the term “slacktivism” to describe the passive, minimal effort needed to participate in these movements. Others have cited such media waves being exploited for individual gain, as opposed to genuinely benefiting the girls. Florida State University political science professor Will H. Moore argues that this hashtag activism is not only hurting the larger cause of rescuing the kidnapped girls, but actually helping Boko Haram. Jumoke Balogun, Co-Founder of CompareAfrique, also highlights the limits of the #BringBackOurGirls hashtag’s impact.

Hashtag activism, alone, is not enough

With all this social media activity and international press, what actual progress has been made in rescuing the kidnapped girls? If the objective is raising awareness of the issue, yes, the hashtag has been successful. If the objective is to rescue the girls, we still have a long way to go, even if the hashtag campaign has been part of a multi-pronged approach to galvanize resources into action.

The bottom line: social media can be a powerful tool to bring visibility and awareness to a cause, but a hashtag alone is not enough to bring about social change. There are a myriad of resources that must be coordinated to effectively implement this rescue mission, which will only become more difficult as more time passes. However, prioritizing and shining a sustained light on the problem, instead of getting distracted by competing media cycles on celebrities getting into petty fights, is the first step toward a solution…”

Health plan giants to make payment data accessible to public


Paul Demko in Modern Healthcare: “A new initiative by three of the country’s largest health plans has the potential to transform the accessibility of claims payment data, according to healthcare finance experts. UnitedHealthcare, Aetna and Humana announced a partnership on Wednesday with the Health Care Cost Institute to create a payment database that will be available to the public for free. …The database will be created by HCCI, a not-for-profit group established in 2011, from information provided by the insurers. HCCI expects it to be available in 2015 and anticipates that more health plans will join the initiative prior to its launch.
UnitedHealthcare is the largest insurer in the country in terms of the number of individuals covered through its products. All three participating plans are publicly traded, for-profit companies.
Stephen Parente, chair of HCCI’s board, said the organization was approached by the insurance companies about the initiative. “I’m not quite sure what the magic trigger was,” said Parente, who is a professor at the University of Minnesota and advised John McCain’s 2008 presidential campaign on healthcare issues. “We’ve kind of proven as a nonprofit and an independent group that we can be trustworthy in working with their data.”
Experts say cost transparency is being spurred by a number of developments in the healthcare sector. The trend towards high-deductible plans is giving consumers a greater incentive to understand how much healthcare costs and to utilize it more efficiently. In addition, the launch of the exchanges under the Patient Protection and Affordable Care Act has brought unprecedented attention to the difficulties faced by individuals in shopping for insurance coverage.
“There’s so many things that are kind of pushing the industry toward this more transparent state,” Hempstead said. “There’s just this drumbeat that people want to have this information.”
Insurers may also be realizing they aren’t likely to have a choice about sharing payment information. In recent years, more and more states have passed laws requiring the creation of claims databases. Currently, 11 states have all payer claims databases, and six other states are in the process of creating such a resource, according to the All-Payer Claims Database Council….”