CollaborativeScience.org: Sustaining Ecological Communities Through Citizen Science and Online Collaboration


David Mellor at CommonsLab: “In any endeavor, there can be a tradeoff between intimacy and impact. The same is true for science in general and citizen science in particular. Large projects with thousands of collaborators can have incredible impact and robust, global implications. On the other hand, locally based projects can foster close-knit ties that encourage collaboration and learning, but face an uphill battle when it comes to creating rigorous and broadly relevant investigations. Online collaboration has the potential to harness the strengths of both of these strategies if a space can be created that allows for the easy sharing of complex ideas and conservation strategies.
CollaborativeScience.org was created by researchers from five different universities to train Master Naturalists in ecology, scientific modeling and adaptive management, and then give these capable volunteers a space to put their training to work and create conservation plans in collaboration with researchers and land managers.
We are focusing on scientific modeling throughout this process because environmental managers and ecologists have been trained to intuitively create explanations based on a very large number of related observations. As new data are collected, these explanations are revised and are put to use in generating new, testable hypotheses. The modeling tools that we are providing to our volunteers allow them to formalize this scientific reasoning by adding information, sources and connections, then making predictions based on possible changes to the system. We integrate their projects into the well-established citizen science tools at CitSci.org and guide them through the creation of an adaptive management plan, a proven conservation project framework…”
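The modeling approach lends itself to a concrete illustration. Below is a minimal, hypothetical Python sketch of the kind of qualitative influence model described above: a directed graph whose edges promote (+1) or suppress (-1) other components, with predictions made by propagating the sign of a change. It illustrates the idea only and is not CollaborativeScience.org's actual tooling; all node names are invented.

    # Hypothetical sketch of a qualitative ecological influence model,
    # NOT the actual CollaborativeScience.org implementation.
    # Nodes are system components; each edge carries +1 (promotes) or
    # -1 (suppresses). A prediction propagates the sign of a change
    # along every outgoing edge, breadth-first.
    from collections import deque

    edges = {
        "invasive_grass": [("native_forbs", -1), ("fire_frequency", +1)],
        "fire_frequency": [("native_forbs", -1)],
        "native_forbs":   [("pollinators", +1)],
    }

    def predict(start, direction):
        """Return the predicted direction of change (+1/-1) for each reachable node."""
        effects, queue = {start: direction}, deque([start])
        while queue:
            node = queue.popleft()
            for neighbor, sign in edges.get(node, []):
                if neighbor not in effects:   # first influence wins in this toy model
                    effects[neighbor] = effects[node] * sign
                    queue.append(neighbor)
        return effects

    # e.g. what happens if invasive grass cover increases?
    print(predict("invasive_grass", +1))
    # {'invasive_grass': 1, 'native_forbs': -1, 'fire_frequency': 1, 'pollinators': -1}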

The Trend towards “Smart Cities”


Chien-Chu Chen in the International Journal of Automation and Smart Technology (AUSMT): “Looking back over the past century, the steady pace of development in many of the world’s cities has resulted in a situation where a high percentage of these cities are now faced with the problem of aging, decrepit urban infrastructure; a considerable number of cities are having to undertake large-scale infrastructure renewal projects. While creating new opportunities in the area of infrastructure, ongoing urbanization is also creating problems, such as excessive consumption of water, electric power and heat energy, environmental pollution, increased greenhouse gas emissions, traffic jams, and the aging of the existing residential housing stock, etc. All of these problems present a challenge to cities’ ability to achieve sustainable development. In response to these issues, the concept of the “smart city” has grown in popularity throughout the world. The aim of smart city initiatives is to make the city a vehicle for “smartification” through the integration of different industries and sectors. As initiatives of this kind move beyond basic automation into the realm of real “smartification,” the smart city concept is beginning to take concrete form….”

Choice, Rules and Collective Action


New book on “The Ostroms on the Study of Institutions and Governance”: “This volume brings a set of key works by Elinor Ostrom, co-recipient of the Nobel Prize in Economic Sciences, together with those of Vincent Ostrom, one of the originators of Public Choice political economy. The two scholars introduce and expound their approaches and analytical perspectives on the study of institutions and governance.
The book puts together works representing the main analytical and conceptual vehicles articulated by the Ostroms to create the Bloomington School of public choice and institutional theory. Their endeavours sought to ‘re-establish the priority of theory over data collection and analysis’, and to better integrate theory and practice.
These efforts are illustrated via selected texts, organised around three themes: the political economy and public choice roots of their work in creating a distinct branch of political economy; the evolutionary nature of their work that led them to go beyond mainstream public choice, thereby enriching the public choice tradition itself; and, finally, the foundational and epistemological dimensions and implications of their work.”

HHS releases new data and tools to increase transparency on hospital utilization and other trends


Press release: “With more than 2,000 entrepreneurs, investors, data scientists, researchers, policy experts, government employees and more in attendance, the Department of Health and Human Services (HHS) is releasing new data and launching new initiatives at the annual Health Datapalooza conference in Washington, D.C.
Today, the Centers for Medicare & Medicaid Services (CMS) is releasing its first annual update to the Medicare hospital charge data, or information comparing the average amount a hospital bills for services that may be provided in connection with a similar inpatient stay or outpatient visit. CMS is also releasing a suite of other data products and tools aimed at increasing transparency about Medicare payments. The data trove on CMS’s website now includes inpatient and outpatient hospital charge data for 2012, and new interactive dashboards for the CMS Chronic Conditions Data Warehouse and geographic variation data. Also today, the Food and Drug Administration (FDA) will launch a new open data initiative. And before the end of the conference, the Office of the National Coordinator for Health Information Technology (ONC) will announce the winners of two data challenges.
“The release of these data sets furthers the administration’s efforts to increase transparency and support data-driven decision making which is essential for health care transformation,” said HHS Secretary Kathleen Sebelius.
“These public data resources provide a better understanding of Medicare utilization, the burden of chronic conditions among beneficiaries and the implications for our health care system and how this varies by where beneficiaries are located,” said Bryan Sivak, HHS chief technology officer. “This information can be used to improve care coordination and health outcomes for Medicare beneficiaries nationwide, and we are looking forward to seeing what the community will do with these releases. Additionally, the openFDA initiative being launched today will for the first time enable a new generation of consumer facing and research applications to embed relevant and timely data in machine-readable, API-based formats.”
2012 Inpatient and Outpatient Hospital Charge Data
The data posted today on the CMS website provide the first annual update of the hospital inpatient and outpatient data released by the agency last spring. The data include information comparing the average charges for services that may be provided in connection with the 100 most common Medicare inpatient stays at over 3,000 hospitals in all 50 states and Washington, D.C. Hospitals determine what they will charge for items and services provided to patients, and these “charges” are the amount the hospital generally bills for those items or services.
With two years of data now available, researchers can begin to look at trends in hospital charges. For example, average charges for medical back problems increased nine percent from $23,000 to $25,000, but the total number of discharges decreased by nearly 7,000 from 2011 to 2012.
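For researchers working with the downloadable files, a year-over-year comparison like the one above is a short script. The sketch below (Python with pandas) is hypothetical: the file names and column labels are assumptions about the CSV layout and should be checked against the actual CMS downloads.

    # Hypothetical sketch: computing year-over-year charge trends from the
    # CMS inpatient charge CSVs. File names and column labels are
    # assumptions, not the documented CMS schema.
    import pandas as pd

    inp_2011 = pd.read_csv("inpatient_charges_2011.csv")
    inp_2012 = pd.read_csv("inpatient_charges_2012.csv")

    def avg_charge(df, drg_keyword):
        """Mean of average covered charges across all rows matching a DRG keyword."""
        rows = df[df["DRG Definition"].str.contains(drg_keyword, case=False)]
        return rows["Average Covered Charges"].mean()

    old = avg_charge(inp_2011, "medical back problems")
    new = avg_charge(inp_2012, "medical back problems")
    # ($25,000 - $23,000) / $23,000 is roughly 8.7%, i.e. the "nine percent" cited
    print(f"Change: {100 * (new - old) / old:.1f}%")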
In April, ONC launched a challenge – the Code-a-Palooza challenge – calling on developers to create tools that will help patients use the Medicare data to make health care choices. Fifty-six innovators submitted proposals and 10 finalists are presenting their applications during Datapalooza. The winning products will be announced before the end of the conference.
Chronic Conditions Warehouse and Dashboard
CMS recently released new and updated information on chronic conditions among Medicare fee-for-service beneficiaries, including:

  • Geographic data summarized at national, state, county, and hospital referral region levels for the years 2008-2012;
  • Data for examining disparities among specific Medicare populations, such as beneficiaries with disabilities, dual-eligible beneficiaries, and racial/ethnic groups;
  • Data on prevalence, utilization of select Medicare services, and Medicare spending;
  • Interactive dashboards that provide customizable information about Medicare beneficiaries with chronic conditions at state, county, and hospital referral region levels for 2012; and
  • Chartbooks and maps.

These public data resources support the HHS Initiative on Multiple Chronic Conditions by providing researchers and policymakers a better understanding of the burden of chronic conditions among beneficiaries and the implications for our health care system.
Geographic Variation Dashboard
The Geographic Variation Dashboards present Medicare fee-for-service per-capita spending at the state and county levels in interactive formats. CMS calculated the spending figures in these dashboards using standardized dollars that remove the effects of the geographic adjustments that Medicare makes for many of its payment rates. The dashboards include total standardized per capita spending, as well as standardized per capita spending by type of service. Users can select the indicator and year they want to display. Users can also compare data for a given state or county to the national average. All of the information presented in the dashboards is also available for download from the Geographic Variation Public Use File.
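Because the same figures are downloadable from the Public Use File, the state- or county-versus-national comparison the dashboards offer can also be scripted. A hypothetical sketch follows; the file name and column labels are assumptions about the spreadsheet layout, not the documented CMS schema.

    # Hypothetical sketch: comparing a county's standardized per capita
    # Medicare spending to the national average using the Geographic
    # Variation Public Use File. All file and column names are assumptions.
    import pandas as pd

    puf = pd.read_csv("geo_variation_puf_county.csv")

    national = puf.loc[puf["Level"] == "National",
                       "Standardized Per Capita Costs"].iloc[0]
    county = puf.loc[puf["County"] == "Travis, TX",
                     "Standardized Per Capita Costs"].iloc[0]

    print(f"County spending is {100 * (county / national - 1):+.1f}% "
          "relative to the national average")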
Research Cohort Estimate Tool
CMS also released a new tool that will help researchers and other stakeholders estimate the number of Medicare beneficiaries with certain demographic profiles or health conditions. This tool can assist a variety of stakeholders interested in specific figures on Medicare enrollment. Researchers can also use this tool to estimate the size of their proposed research cohort and the cost of requesting CMS data to support their study.
Digital Privacy Notice Challenge
ONC, with the HHS Office for Civil Rights, will be awarding the winner of the Digital Privacy Notice Challenge during the conference. The winning products will help consumers get notices of privacy practices from their health care providers or health plans directly in their personal health records or from their providers’ patient portals.
OpenFDA
The FDA’s new initiative, openFDA, is designed to facilitate easier access to large, important public health datasets collected by the agency. OpenFDA will make FDA’s publicly available data accessible in a structured, computer-readable format, making it possible for technology specialists, such as mobile application creators, web developers, data visualization artists and researchers, to quickly search, query, or pull massive amounts of information on an as-needed basis. The initiative is the result of extensive research to identify the FDA’s publicly available datasets that are often in demand but traditionally difficult to use. Based on this research, openFDA is beginning with a pilot program involving millions of reports of drug adverse events and medication errors submitted to the FDA from 2004 to 2013. The pilot will later be expanded to include the FDA’s databases on product recalls and product labeling.
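The API-based format means the pilot dataset can be queried in a few lines. A minimal sketch: the endpoint corresponds to the drug adverse event pilot described above, but the query syntax and result field names are assumptions to verify against the documentation at open.fda.gov.

    # Minimal sketch of an openFDA query against the drug adverse event
    # pilot. Query parameters and result field names should be checked
    # against open.fda.gov; treat them as assumptions here.
    import json
    import urllib.request

    url = ("https://api.fda.gov/drug/event.json"
           "?search=receivedate:[20040101+TO+20131231]&limit=3")

    with urllib.request.urlopen(url) as response:
        payload = json.load(response)

    # Print a couple of fields from each returned report.
    for report in payload.get("results", []):
        print(report.get("receiptdate"), report.get("serious"))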
For more information about CMS data products, please visit http://www.cms.gov/Research-Statistics-Data-and-Systems/Research-Statistics-Data-and-Systems.html.
For more information about today’s FDA announcement visit: http://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/UCM399335 or http://open.fda.gov/

Closing the Feedback Loop: Can Technology Bridge the Accountability Gap?


(WorldBank) Book edited by Björn-Sören Gigler and Savita Bailur: “This book is a collection of articles, written by both academics and practitioners, as an evidence base for citizen engagement through information and communication technologies (ICTs). In it, the authors ask: how do ICTs empower through participation, transparency and accountability? Specifically, the authors examine two principal questions: Are technologies an accelerator to closing the “accountability gap” – the space between the supply (governments, service providers) and demand (citizens, communities, civil society organizations or CSOs) that requires bridging for open and collaborative governance? And under what conditions does this occur?
The introductory chapters lay the theoretical groundwork for understanding the potential of technologies to achieve intended goals. Chapter 1 takes us through the theoretical linkages between empowerment, participation, transparency and accountability. In Chapter 2, the authors devise an informational capability framework, relating human abilities and well-being to the use of ICTs.
The chapters that follow highlight practical examples that operationalize ICT-led initiatives. Chapter 3 reviews a sample of projects targeting the goals of transparency and accountability in governance to draw preliminary conclusions about what evidence exists to date, and where to go from here. In Chapter 4, the author reviews the process of interactive community mapping (ICM), with examples that support general local development and others that mitigate natural disasters. Chapter 5 examines crowdsourcing in fragile states to track aid flows, report on incitement or organize grassroots movements. In Chapter 6, the author reviews Check My School (CMS), a community monitoring project in the Philippines designed to track the provision of services in public schools. Chapter 7 introduces four key ICT-led, citizen-governance initiatives in primary health care in Karnataka, India. Chapter 8 analyzes the World Bank Institute’s use of ICTs in expanding citizen project input to understand the extent to which technologies can either engender a new “feedback loop” or ameliorate a “broken loop”. The authors’ analysis of the evidence signals ICTs as an accelerator to closing the “accountability gap”. In Chapter 9, the authors conclude with the Loch Ness model to illustrate how technologies contribute to shrinking the gap, why the gap remains open in many cases, and what can be done to help close it.
This collection is a critical addition to existing literature on ICTs and citizen engagement for two main reasons: first, it is expansive, covering initiatives that leverage a wide range of technology tools, from mobile phone reporting to crowdsourcing to interactive mapping; second, it is the first of its kind to offer concrete recommendations on how to close feedback loops.”

User motivation and knowledge sharing in idea crowdsourcing


Miia Kosonen et al. in the International Journal of Innovation Management: “We investigate how the propensity to trust, intrinsic motivation, and extrinsic motivation drive the intentions of individuals to share knowledge in idea crowdsourcing. Building on motivation theories and the Uses & Gratifications (U&G) approach, we conducted a web-based survey within IdeasProject, an open innovation and brainstorming community dedicated to harvesting ideas. Based on a sample of 244 users, our research shows that the key driver of knowledge-sharing intentions is made up of two intrinsic motivations — social benefits and learning benefits. We also found that recognition from the host company affects the intention to share knowledge. From the management point of view, the relative importance of social integrative benefits calls for better facilities for users to help each other in formulating and developing their ideas. Learning and creativity could be inspired by feedback from professionals and experts, while providing insight into technological advances and features dealing with the current tasks.”

Estonian plan for 'data embassies' overseas to back up government databases


Graeme Burton in Computing: “Estonia is planning to open “data embassies” overseas to back up government databases and to operate government “in the cloud”.
The aim is partly to improve efficiency, but the plan is driven largely by fear of invasion and occupation, Jaan Priisalu, the director general of the Estonian Information System Authority, told Sky News.
He said: “We are planning to actually operate our government in the cloud. It’s clear also how it helps to protect the country, the territory. Usually when you are the military planner and you are planning the occupation of the territory, then one of the rules is suppress the existing institutions.
“And if you are not able to do it, it means that this political price of occupying the country will simply rise for planners.”
Part of the rationale for the plan, he continued, was fear of attack from Russia in particular, which has been heightened following the occupation of Crimea, formerly in Ukraine.
“It’s quite clear that you can have problems with your neighbours. And our biggest neighbour is Russia, and nowadays it’s quite aggressive. This is clear.”
The plan is to back up critical government databases outside of Estonia so that affairs of state can be conducted in the cloud, even if the country is invaded. It would also have the benefit of keeping government information out of invaders’ hands – provided it can keep its government cloud secure.
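None of the article’s sources describe the technical architecture, but the general shape such a scheme implies (dump, encrypt before the data leave the country, verify, replicate) can be sketched. A hypothetical illustration follows; the hostnames, key ID and paths are placeholders, not Estonia’s actual setup.

    # Hypothetical sketch of the off-site, encrypted replication a
    # "data embassy" implies. Hostnames, key IDs and paths are
    # placeholders; this illustrates the concept, not a real deployment.
    import hashlib
    import subprocess

    DUMP = "registry_backup.sql"                 # placeholder database dump
    RECIPIENT = "backup@data-embassy.example"    # hypothetical GPG key

    # 1. Encrypt locally, so the hosting country only ever stores ciphertext.
    subprocess.run(["gpg", "--encrypt", "--recipient", RECIPIENT, DUMP], check=True)

    # 2. Record a checksum so tampering in transit or at rest is detectable.
    with open(DUMP + ".gpg", "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    with open(DUMP + ".gpg.sha256", "w") as f:
        f.write(digest + "\n")

    # 3. Replicate to the overseas host (placeholder address).
    subprocess.run(["rsync", "-av", DUMP + ".gpg", DUMP + ".gpg.sha256",
                    "embassy.example.uk:/srv/backups/"], check=True)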
According to Sky News, the UK is already in advanced talks about hosting the Estonian government databases and may make the UK the first of Estonia’s data embassies.
Having wrested independence from the Soviet Union in 1991, Estonia has experienced frequent tension with its much bigger neighbour. In April 2007, for example, after the relocation of the “Bronze Soldier of Tallinn” and the exhumation of the soldiers buried in a square in the centre of the capital to a military cemetery, the country was subject to a prolonged cyber-attack sourced to Russia.
Russian hacker “Sp0Raw” said that the most efficient of the online attacks on Estonia could not have been carried out without the approval of Russian authorities and added that the hackers seemed to act under “recommendations” from parties in government. However, claims by Estonia that the Russian government was directly involved in the attacks were “empty words, not supported by technical data”.
Mike Witt, deputy director of the US Computer Emergency Response Team (US-CERT), suggested that the distributed denial-of-service (DDoS) attacks, while crippling to the Estonian government at the time, were not significant in scale from a technical standpoint. However, the Estonian government was forced to shut down many of its online operations in response.
At the same time, the Estonian government has been accused of implementing anti-Russian laws and discriminating against its large ethnic Russian population.
Last week, the Estonian government unveiled a plan to allow anyone in the world to apply for “digital citizenship” of the country, enabling them to use Estonian online services, open bank accounts, and start companies without having to physically reside in the country.”

How open data can help shape the way we analyse electoral behaviour


Harvey Lewis (Deloitte), Ulrich Atz, Gianfranco Cecconi, Tom Heath (ODI) in The Guardian: “Even after the local council elections in England and Northern Ireland on 22 May, which coincided with polling for the European Parliament, the next 12 months remain a busy time for the democratic process in the UK.
In September, the people of Scotland make their choice in a referendum on the future of the Union. Finally, the first fixed-term parliament in Westminster comes to an end with a general election in all areas of Great Britain and Northern Ireland in May 2015.
To ensure that as many people as possible are eligible and able to vote, the government is launching an ambitious programme of Individual Electoral Registration (IER) this summer. This will mean that the traditional, paper-based approach to household registration will shift to a tailored and largely digital process more in keeping with the data-driven demands of the twenty-first century.
Under IER, citizens will need to provide ‘identifying information’, such as date of birth or national insurance number, when applying to register.

Ballots: stuck in the past?

However, despite the government’s attempts through IER to improve the veracity of information captured prior to ballots being posted, little has changed in terms of the vision for capturing, distributing and analysing digital data from election day itself.
Indeed, paper is still the chosen medium for data collection.
Digitising elections is fraught with difficulty, though. In the US, for example, the introduction of new voting machines created much controversy even though they are capable of providing ‘near-perfect’ ballot data.
The UK’s democratic process is not completely blind, though. Numerous opinion surveys are conducted both before and after polling, including the long-running British Election Study, to understand the shifting attitudes of a representative cross-section of the electorate.
But if the government does not retain digital information on the number of people who vote in sufficient geographic detail, then how can it learn what is necessary to reverse the long-running decline in turnout?

The effects of lack of data

To add to the debate around democratic engagement, a joint research team, with data scientists from Deloitte and the Open Data Institute (ODI), has been attempting to understand what makes voters tick.
Our research has been hampered by a significant lack of relevant data describing voter behaviour at electoral ward level, as well as difficulties in matching what little data is available to other open data sources, such as demographic data from the 2011 Census.
Even though individual ballot papers are collected and verified for counting the number of votes per candidate – the primary aim of elections, after all – the only recent elections for which aggregate turnout statistics have been published at ward level are the 2012 local council elections in England and Wales. In these elections, approximately 3,000 of the more than 8,000 wards in total went to the polls.
Data published by the Electoral Commission for the 2013 local council elections in England and Wales purports to be at ward level but is, in fact, for ‘county electoral divisions’, as explained by the Office for National Statistics.
Moreover, important factors related to the accessibility of polling stations – such as the distance from main population centres – could not be assessed because the location of polling stations remains the responsibility of individual local authorities – and only eight of these have so far published their data as open data.
Given these fundamental limitations, drawing any robust conclusions is difficult. Nevertheless, our research shows the potential for forecasting electoral turnout with relatively few census variables, the most significant of which are age and the size of the electorate in each ward.
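To make that modelling concrete: with ward-level turnout joined to census variables, a first-pass forecast is only a few lines of code. The sketch below is a hypothetical illustration of the approach, not the team’s actual analysis; the file and column names are invented.

    # Hypothetical sketch: forecasting ward-level turnout from a handful
    # of census variables. The merged file and its column names are
    # placeholders, not the actual Deloitte/ODI dataset.
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    wards = pd.read_csv("ward_turnout_census_2012.csv")

    X = wards[["median_age", "electorate_size"]]   # the two strongest predictors cited above
    y = wards["turnout_pct"]

    model = LinearRegression().fit(X, y)
    print(dict(zip(X.columns, model.coef_)))       # per-variable effect on turnout
    print(f"R^2 = {model.score(X, y):.2f}")        # in-sample fit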

What role can open data play?

The limited results described above provide a tantalising glimpse into a possible future scenario: where open data provides a deeper and more granular understanding of electoral behaviour.
On the back of more sophisticated analyses, policies for improving democratic engagement – particularly among young people – have the potential to become focused and evidence-driven.
And, although the data captured on election day will always serve primarily to elect the public’s preferred candidate, an important secondary consideration is aggregating and publishing data that can be used more widely.
This may have been prohibitively expensive or too complex in the past, but as storage and processing costs continue to fall, and the appetite for such knowledge grows, there is a compelling business case.
The benefits of this future scenario potentially include:

  • tailoring awareness and marketing campaigns to wards and other segments of the electorate most likely to respond positively and subsequently turn out to vote
  • increasing the efficiency with which European, general and local elections are held in the UK
  • improving transparency around the electoral process and stimulating increased democratic engagement
  • enhancing links to the Government’s other significant data collection activities, including the Census.

Achieving these benefits requires commitment to electoral data being collected and published in a systematic fashion at least at ward level. This would link work currently undertaken by the Electoral Commission, the ONS, Plymouth University’s Election Centre, the British Election Study and the more than 400 local authorities across the UK.”

How to treat government like an open source project


Ben Balter in OpenSource.com: “Open government is great. At least, it was a few election cycles ago. FOIA requests, open data, seeing how your government works—it’s arguably brought to light a lot of not-so-great practices, and in many cases, has spurred citizen-centric innovation not otherwise imagined before the information’s release.
It used to be that sharing information was really, really hard. Open government wasn’t even a possibility a few hundred years ago. Throughout the history of communication tools—be it the printing press, fax machine, or floppy disks—new tools have generally done three things: lowered the cost to transmit information, increased who that information could be made available to, and increased how quickly that information could be distributed. But printing presses and fax machines have two limitations: they are one-way and asynchronous. They let you more easily request, and eventually see, how the sausage was made, but they don’t let you actually take part in the sausage-making. You may be able to see what’s wrong, but you don’t have the chance to make it better. By the time you find out, it’s already too late.
As technology allows us to communicate with greater frequency and greater fidelity, we have the chance to make our government not only transparent, but truly collaborative.

So, how do we encourage policy makers and bureaucrats to move from open government to collaborative government, to learn open source’s lessons about openness and collaboration at scale?
For one, we geeks can help to create a culture of transparency and openness within government by driving up the demand side of the equation. Be vocal, demand data, expect to see process, and once released, help build lightweight apps. Show potential change agents in government that their efforts will be rewarded.
Second, it’s a matter of tooling. We’ve got great tools out there—things like Git that can track who made what change when, and open standards like CSV or JSON that don’t require proprietary software—but by and large they’re a foreign concept in government, at least among those empowered to make change. Command-line interfaces with black backgrounds and green text can be intimidating to government bureaucrats used to desktop publishing tools. Make it easier for government to do the right thing and choose open standards over proprietary tooling.
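What does “choosing open standards” look like in practice? A minimal, hypothetical sketch in Python, using only the standard library (the file names are invented):

    # Hypothetical sketch of "choosing open standards": republishing a
    # spreadsheet export as JSON that anyone can consume without
    # proprietary software. File names are placeholders.
    import csv
    import json

    with open("permit_applications.csv", newline="") as f:
        records = list(csv.DictReader(f))

    with open("permit_applications.json", "w") as f:
        json.dump(records, f, indent=2)

    # Committing the result to Git then provides the who-changed-what-when
    # audit trail mentioned above:
    #   git add permit_applications.json
    #   git commit -m "Publish permit application data"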
Last, be a good open source ambassador. Help your home city or state get involved with open source. Encourage them to take their first step (be it consuming open source, publishing, or collaborating with the public), teach them what it means to do things in the open, and when they do push code outside the firewall, above all, be supportive. We’re in this together.
As technology makes it easier to work together, geeks can help make our government not just open, but in fact collaborative. Government is the world’s largest and longest-running open source project (bugs, trolls, and all). It’s time we start treating it like one.”

Harnessing the Power of Data, Technology, and Innovation for a Clean Energy Economy


The White House: “Today, the White House, the Energy Department, and the General Services Administration are teaming up to host an Energy Datapalooza, highlighting important new steps in the public and private sectors to leverage data and innovation in ways that promote a clean energy economy in America.
Advances in technology are making it easier for consumers and businesses across the nation to better understand how they are using and saving energy. Empowering citizens with information about their energy usage can help them make smart choices that cut energy waste, cut down energy bills, and preserve our environment.
The federal government has an important role to play in unleashing energy-related data and catalyzing innovation to support these savings. That is why the Obama Administration has taken unprecedented steps to make open government data more available to citizens, companies, and innovators — including by launching both an Energy Data Initiative and a Climate Data Initiative.
In addition, in 2011, the Administration launched the Green Button Initiative to provide families and businesses with easy and secure access to their own energy-usage information. And today, the Obama Administration is announcing a number of new steps to continue this momentum, including: a successful federal pilot applying the Green Button to help building managers achieve greater efficiencies; and new or expanded data resources and tools in the areas of geothermal, solar, hydropower, bioenergy, and buildings.
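Green Button’s value comes from being machine-readable. As a rough illustration, summing the interval readings in an export takes only a few lines; the sketch below hedges on the exact schema, since the ESPI namespace and element names used here are assumptions to check against a real utility export.

    # Hypothetical sketch of reading a Green Button export. Green Button
    # data is an XML (Atom/ESPI) format; the namespace and element names
    # below are assumptions to verify against an actual export.
    import xml.etree.ElementTree as ET

    ESPI = "{http://naesb.org/espi}"
    tree = ET.parse("green_button_export.xml")

    # Sum every interval reading value in the file.
    total_wh = sum(int(v.text) for v in tree.iter(ESPI + "value"))
    print(f"Total recorded usage: {total_wh} Wh")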
Private-sector entrepreneurs and innovators are important partners in this effort. They are continually finding new ways to use groundbreaking software and technologies to analyze data about energy usage, building efficiency, renewable energy sources, and more, and providing those data to consumers in ways that help them achieve energy savings and help advance America’s clean energy goals.
At today’s Energy Datapalooza, companies, utilities, and innovators who are leading the charge in this important domain are announcing new commitments to make energy data available to their customers, provide consumers and first-responders with information about power outages, publish open data about building energy performance, and more. These innovators — and dozens more students, researchers, and technologists — will demonstrate exciting tools, apps, and services at a live Innovation Showcase, highlighting just some of the cutting-edge advances already underway in the energy-data space….”
FACT SHEET: Harnessing the Power of Data for a Clean, Secure, and Reliable Energy Future