How’s the Weather There? Crowdsourcing App Promises Better Forecasts


Rachel Metz at MIT Technology Review: “An app called Sunshine wants you to help it create more accurate, localized weather forecasts.
The app, currently in a private beta test, combines data from the National Oceanic and Atmospheric Administration (NOAA) with atmospheric pressure readings captured by a smartphone. The latest iPhones, and some Android smartphones, include barometers for measuring atmospheric pressure. These sensors are generally used to determine elevation for navigation, but changes in air pressure can also signal changes in the weather.
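To make the mechanism concrete, here is a minimal sketch of how a phone’s barometer trace could flag a coming change in weather. The thresholds and function names are our illustrative assumptions, not Sunshine’s actual algorithm:

```python
# Illustrative sketch only -- not Sunshine's actual algorithm.
# A sustained pressure drop of roughly 1 hPa/hour or more often
# precedes deteriorating weather; the thresholds are assumptions.

def pressure_trend(readings_hpa, hours):
    """Least-squares slope of barometer readings, in hPa per hour."""
    n = len(readings_hpa)
    if n < 2 or hours <= 0:
        return 0.0
    xs = [i * hours / (n - 1) for i in range(n)]
    mean_x, mean_y = sum(xs) / n, sum(readings_hpa) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings_hpa))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def weather_signal(readings_hpa, hours=3.0):
    """Map a pressure trend to a rough, human-readable forecast hint."""
    slope = pressure_trend(readings_hpa, hours)
    if slope <= -1.0:
        return "falling fast: storm likely"
    if slope <= -0.3:
        return "falling: possible deterioration"
    if slope >= 0.3:
        return "rising: clearing likely"
    return "steady: little change expected"

# Three hours of readings from a phone barometer, one every 30 minutes:
print(weather_signal([1015.2, 1014.6, 1013.9, 1013.1, 1012.2, 1011.4, 1010.7]))
```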
Sunshine will also rely on users to report sudden weather hazards like fog, cofounder Katerina Stroponiati says. About 250 people spread out among the Bay Area, New York, and Dallas are now using Sunshine, she says, and the team behind it plans to release the app publicly at the end of March for the iPhone. It will be free, though some features may eventually cost extra.
While weather predictions have gotten more accurate over the years, they’re far from perfect. Weather information usually isn’t localized, either. The goal of Sunshine is to better serve places like its home base of San Francisco, where weather can be markedly different over just a few blocks.
Stroponiati aims for Sunshine to get enough people sending in data—three per square mile would be needed, according to experiments the team has conducted—that the app can be used to make weather prediction more accurate than it tends to be today. Some other apps, like PressureNet and WeatherSignal, already gather data entered manually by users, but they don’t yet offer crowdsourced forecasts….(More)
 

The Data Disclosure Decision


“The CIO Council Innovation Committee has released its first Open Data case study, The Data Disclosure Decision, showcasing the Department of Education (Education) Disclosure Review Board.
The Department of Education is a national warehouse for open data across a decentralized educational system, managing and exchanging education-related data from across the country. Education collects large amounts of aggregate data at the state, district, and school level, disaggregated by a number of demographic variables. A majority of the data Education collects is considered personally identifiable information (PII), making data disclosure avoidance plans a mandatory component of Education’s data releases. With its expansive data sets and a need to protect sensitive information, Education quickly recognized the need to organize and standardize its data disclosure protocol.
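The workhorse of most disclosure avoidance plans is cell suppression: masking aggregate counts small enough to single out a student. Below is a minimal sketch assuming a threshold of 10; the threshold, table layout, and groups are illustrative, not Education’s actual rules:

```python
# Illustrative primary-suppression sketch -- not the Department of
# Education's actual disclosure avoidance rules. Counts below a minimum
# cell size are masked before release; real plans also apply
# complementary suppression so masked cells can't be recovered from totals.

MIN_CELL_SIZE = 10  # assumed threshold; actual thresholds vary by release

def suppress(table):
    """Replace small counts so individual students can't be singled out."""
    return {group: (count if count >= MIN_CELL_SIZE else f"<{MIN_CELL_SIZE}")
            for group, count in table.items()}

school_counts = {  # hypothetical school-level counts by demographic group
    "Grade 4 / economically disadvantaged": 142,
    "Grade 4 / English learners": 7,          # identifiable -- must be masked
    "Grade 4 / students with disabilities": 23,
}
print(suppress(school_counts))
```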
Education formally established the Disclosure Review Board when Secretary of Education Arne Duncan signed its charter in August 2013. Since its inception, the Disclosure Review Board has achieved substantial successes and has greatly increased the volume and quality of data being released. Education’s Disclosure Review Board continues to learn through its open data journey, improving its approach through cultural change and leadership buy-in.
Learn more about Education’s Disclosure Review Board’s story by reading The Data Disclosure Decision, where you will find the full account of their experience and what they learned along the way.”

Index: Prizes and Challenges


The Living Library Index – inspired by the Harper’s Index – provides important statistics and highlights global trends in governance innovation. This installment focuses on prizes and challenges and was originally published in 2015.

This index highlights recent findings about two key techniques in shifting innovation from institutions to the general public:

  • Prize-Induced Contests – using monetary rewards to incentivize individuals and other entities to develop solutions to public problems; and
  • Grand Challenges – posing large, audacious goals to the public to spur collaborative, non-governmental efforts to solve them.

You can read more about Governing through Prizes and Challenges here. You can also watch Alph Bingham, co-founder of InnoCentive, answer the GovLab’s questions about challenge authoring and defining the problem here.

Previous installments of the Index include Measuring Impact with Evidence, The Data Universe, Participation and Civic Engagement, and Trust in Institutions. Please share any additional statistics and research findings on the intersection of technology and governance with us by emailing shruti at thegovlab.org.

Prize-Induced Contests

  • Year the British Government introduced the Longitude Prize, one of the first government prizes intended to spur innovation: 1714
  • President Obama calls on “all agencies to increase their use of prizes to address some of our Nation’s most pressing challenges” in his Strategy for American Innovation: September 2009
  • The US Office of Management and Budget issues “a policy framework to guide agencies in using prizes to mobilize American ingenuity and advance their respective core missions”: March 2010
  • Launch of Challenge.gov, “a one-stop shop where entrepreneurs and citizen solvers can find public-sector prize competitions”: September 2010
    • Number of competitions live on Challenge.gov as of February 2015: 22 (of 399 total)
    • How many competitions on Challenge.gov are for $1 million or above: 23
  • The America COMPETES Reauthorization Act is introduced, which grants “all Federal agencies authority to conduct prize competitions to spur innovation, solve tough problems, and advance their core missions”: 2010
  • Value of prizes authorized by COMPETES: up to $50 million
  • Fact Sheet and Frequently Asked Questions memorandum issued by the Office of Science and Technology Policy and the Office of Management and Budget to help agencies take advantage of the prize authorities in COMPETES: August 2011
  • Number of prize competitions run by the Federal government from 2010 to April 2012: 150
  • How many Federal agencies have run prize competitions by 2012: 40
  • Percentage of prize money awarded prior to 1991 that recognized prior achievements, according to an analysis by McKinsey and Company: 97%
    • Since 1991, percentage of new prize money that “has been dedicated to inducement-style prizes that focus on achieving a specific, future goal”: 78%
  • Value of the prize sector as estimated by McKinsey in 2009: $1-2 billion
  • Growth rate of the total value of new prizes: 18% annually
  • Growth rate in charitable giving in the US: 2.5% annually (for what these two rates imply, see the doubling-time check after this list)
  • Value of the first Horizon Prize awarded in 2014 by the European Commission to German biopharmaceutical company CureVac GmbH “for progress towards a novel technology to bring life-saving vaccines to people across the planet in safe and affordable ways”: €2 million
  • Number of solvers registered on InnoCentive, a crowdsourcing company: 355,000+ from nearly 200 countries
    • Total challenges posted: 2,000+ external challenges
    • Total solution submissions: 40,000+
    • Value of the awards: $5,000 to $1+ million
    • Success rate for premium challenges: 85%
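To put the two growth rates above in perspective: the doubling time of a quantity growing at annual rate r is ln 2 / ln(1 + r). The quick check below is our arithmetic, not McKinsey’s:

```python
# Quick doubling-time check for the two growth rates cited above
# (our arithmetic, not McKinsey's).
import math

def doubling_time(annual_rate):
    """Years for a quantity growing at annual_rate to double in value."""
    return math.log(2) / math.log(1 + annual_rate)

print(f"New prize money (18%/yr):       doubles every {doubling_time(0.18):.1f} years")
print(f"US charitable giving (2.5%/yr): doubles every {doubling_time(0.025):.1f} years")
```

At those rates, new prize money doubles roughly every four years, while charitable giving takes nearly three decades to do the same.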

Grand Challenges

  • Value of the Progressive Insurance Automotive X Prize, sponsored in part by DOE to develop production-capable super fuel-efficient vehicles: $10 million
    • Number of teams around the world who took part in the challenge “to develop a new generation of technologies” for production-capable super fuel-efficient vehicles: 111 teams
  • Time it took for the Air Force Research Laboratory to receive a workable solution on “a problem that had vexed military security forces and civilian police for years” by opening the challenge to the world: 60 days
  • Value of the HHS Investing in Innovation initiative to spur innovation in Health IT, launched under the new COMPETES act: $5 million
  • Number of responses received by NASA for its Asteroid Grand Challenge RFI, which seeks to identify and address all asteroid threats to the human population: over 400
  • Cost of sequencing a single human genome as a result of the Human Genome Project Grand Challenge: $7,000, down from $100 million
  • Amount the Human Genome Project Grand Challenge has contributed to the US economy for every $1 invested by the US federal government: $141
  • The amount of funding for research available for the “Brain Initiative,” a collaboration between the National Institutes of Health, DARPA and the National Science Foundation, which seeks to uncover new prevention and treatment methods for brain disorders like Alzheimer’s, autism and schizophrenia: $100 million
  • Total amount offered in cash awards by the Department of Energy’s “SunShot Grand Challenge,” which seeks to eliminate the cost disparity between solar energy and coal by the end of the decade: $10 million


Civic Media Project


Site and Book edited by Eric Gordon and Paul Mihailidis: “Civic life comprises the attention and actions an individual devotes to a common good. Participating in a human rights rally, creating and sharing a video online about unfair labor practices, connecting with neighbors after a natural disaster: these are all civic actions wherein the actor seeks to benefit a perceived common good. But where and how civic life takes place is an open question. The lines between the private and the public, the self-interested and the civic are blurring as digital cultures transform means and patterns of communication around the world.

As the definition of civic life is in flux, there is urgency in defining and questioning the mediated practices that compose it. Civic media are the mediated practices of designing, building, implementing or using digital tools to intervene in or participate in civic life. The Civic Media Project (CMP) is a collection of short case studies by scholars and practitioners from all over the world that range from the descriptive to the analytical, from the single tool to the national program, from the enthusiastic to the critical. What binds them together is not a particular technology or domain (e.g. government or social movements), but rather the intentionality of achieving a common good. Each of the case studies collected in this project reflects the practices associated with the intentional effort of one or many individuals to benefit or disrupt a community or institution outside of one’s intimate and professional spheres.

As the examples of civic media continue to grow every day, the Civic Media Project is intended as a living resource. New cases will be added on a regular basis after they have gone through an editorial process. Most importantly, the CMP is meant to be a place for conversation and debate about what counts as civic, what makes a citizen, what practices are novel, and what are the political, social and cultural implications of the integration of technology into civic lives.

How to Use the Site

Case studies are divided into four sections: Play + Creativity, Systems + Design, Learning + Engagement, and Community + Action. Each section contains about 25 case studies that address the themes of the section. But there is considerable crossover and thematic overlap between sections as well. For those adventurous readers, the Tag Cloud provides a more granular entry point to the material and a more diverse set of connections.

We have also developed a curriculum that provides some suggestions for educators interested in using the Civic Media Project and other resources to explore the conceptual and practical implications of civic media examples.

One of the most valuable elements of this project is the dialogue about the case studies. We have asked all of the project’s contributors to write in-depth reviews of others’ contributions, and we also invite all readers to comment on cases and reviews. Do not be intimidated by the long “featured comments” in the Disqus section—these formal reviews should be understood as part of the critical commentary that makes each of these cases come alive through discussion and debate.

The Book

Civic Media: Technology, Design, Practice is forthcoming from MIT Press and will serve as the print companion to the Civic Media Project. The book identifies the emerging field of civic media by bringing together leading scholars and practitioners from a diversity of disciplines to shape theory, identify problems and articulate opportunities. The book includes 19 chapters (and 25 case studies) from fields as diverse as philosophy, communications, education, sociology, media studies, art, policy and philanthropy, and attempts to find common language and common purpose through the investigation of civic media….(More)”

New portal to crowdsource captions, transcripts of old photos, national archives


Irene Tham at The Straits Times: “Wanted: history enthusiasts to caption old photographs and transcribe handwritten manuscripts that contain a piece of Singapore’s history.

They are invited to contribute to an upcoming portal that will carry some 3,000 unidentified photographs dating back to the late 1800s, and 3,000 pages of Straits Settlements Records, including letters written during Sir Stamford Raffles’ administration of Singapore.

These are collections from the Government and individuals waiting to be “tagged” on the new portal – The Citizen Archivist Project at www.nas.gov.sg/citizenarchivist….

Without tagging – such as by photo captioning and digital transcription – these records cannot be searched. There are over 140,000 photos and about one million pages of Straits Settlements Records in total that cannot be searched today.
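Captioning unlocks search because search engines index text, not pixels or cursive script. A toy inverted index makes the point; it is our illustration, not the portal’s implementation:

```python
# Toy inverted index -- our illustration of why captions make photos
# searchable, not the Citizen Archivist portal's implementation.
from collections import defaultdict

def build_index(captions):
    """Map each caption word to the photo IDs whose captions contain it."""
    index = defaultdict(set)
    for photo_id, caption in captions.items():
        for word in caption.lower().split():
            index[word.strip(".,")].add(photo_id)
    return index

captions = {  # hypothetical records
    "photo_0001": "Raffles Place in the late 1800s",
    "photo_0002": "Shophouses along the Singapore River",
}
index = build_index(captions)
print(index["singapore"])  # {'photo_0002'} -- findable only once captioned
```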


“The key challenge is that they were written in elaborate cursive penmanship, which is not machine-readable,” said Dr Yaacob, adding that the knowledge and wisdom of the public can be tapped to make these documents more accessible.

Mr Arthur Fong (West Coast GRC) had asked how the Government could get young people interested in history, and Dr Yaacob said this initiative was something they would enjoy.

Portal users must first log in using their existing Facebook, Google or National Library Board accounts. Contributions will be saved in users’ profiles, automatically created upon signing in.

Transcript contributions work much like Wikipedia: contributed text is published on the portal immediately.

However, the National Archives will take up to three days to review photo caption contributions. Approved captions will be uploaded on its website at www.nas.gov.sg/archivesonline….(More)”

Tweets Can Predict Health Insurance Exchange Enrollment


PennMedicine: “An increase in Twitter sentiment (the positivity or negativity of tweets) is associated with an increase in state-level enrollment in the Affordable Care Act’s (ACA) health insurance marketplaces — a phenomenon that points to use of the social media platform as a real-time gauge of public opinion and provides a way for marketplaces to quickly identify enrollment changes and emerging issues. Although Twitter has been previously used to measure public perception on a range of health topics, this study, led by researchers at the Perelman School of Medicine at the University of Pennsylvania and published online in the Journal of Medical Internet Research, is the first to examine its relationship with enrollment in the new national health insurance marketplaces.

The study examined 977,303 ACA and “Obamacare”-related tweets — along with those directed toward the Twitter handle for HealthCare.gov and the 17 state-based marketplace Twitter accounts — in March 2014, then tested the correlation of Twitter sentiment with marketplace enrollment by state. Tweet sentiment was determined using the National Research Council (NRC) sentiment lexicon, which contains more than 54,000 words with corresponding sentiment weights ranging from positive to negative. For example, the word “excellent” has a positive sentiment weight, and is more positive than the word “good,” while the word “awful” is negative. Using this lexicon, researchers found that a 0.10 increase in the sentiment of tweets was associated with a nine percent increase in health insurance marketplace enrollment at the state level. While a 0.10 increase may seem small, it indicates a significant correlation between Twitter sentiment and enrollment across the continuum of sentiment scores observed in nearly a million tweets.
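For readers who want the mechanics, below is a minimal sketch of lexicon-based scoring and a state-level correlation in the spirit of the study. The three-word lexicon and the figures are stand-ins, not the NRC lexicon or the study’s data:

```python
# Minimal sketch of lexicon-based sentiment scoring and a state-level
# correlation, in the spirit of the study. The three-word lexicon and the
# figures below are stand-ins, not the NRC lexicon or Penn's data.
from statistics import correlation  # Python 3.10+

LEXICON = {"excellent": 0.9, "good": 0.6, "awful": -0.8}  # toy weights

def tweet_sentiment(text):
    """Mean sentiment weight of the lexicon words found in a tweet."""
    hits = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

print(tweet_sentiment("Signed up today and the coverage options are excellent"))

# Hypothetical per-state mean tweet sentiment vs. marketplace enrollment:
sentiment_by_state = [0.05, 0.12, 0.18, 0.25, 0.31]
enrollment_by_state = [41_000, 55_000, 63_000, 78_000, 90_000]
print(f"r = {correlation(sentiment_by_state, enrollment_by_state):.2f}")
```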

“The correlation between Twitter sentiment and the number of eligible individuals who enrolled in a marketplace plan highlights the potential for Twitter to be a real-time monitoring strategy for future enrollment periods,” said first author Charlene A. Wong, MD, a Robert Wood Johnson Foundation Clinical Scholar and Fellow in Penn’s Leonard Davis Institute of Health Economics. “This would be especially valuable for quickly identifying emerging issues and making adjustments, instead of having to wait weeks or months for that information to be released in enrollment reports, for example.”…(More)”

Crowdsourcing America’s cybersecurity is an idea so crazy it might just work


At the Washington Post: “One idea that’s starting to bubble up from Silicon Valley is the concept of crowdsourcing cybersecurity. As Silicon Valley venture capitalist Robert R. Ackerman, Jr. has pointed out, due to “the interconnectedness of our society in cyberspace,” cyber networks are best viewed as an asset that we all have a shared responsibility to protect. Push on that concept hard enough and you can see how many of the core ideas from Silicon Valley – crowdsourcing, open source software, social networking, and the creative commons – can all be applied to cybersecurity.

Silicon Valley venture capitalists are already starting to fund companies that describe themselves as crowdsourcing cybersecurity. For example, take Synack, a “crowd security intelligence” company that received $7.5 million in funding from Kleiner Perkins (one of Silicon Valley’s heavyweight venture capital firms), Allegis Ventures, and Google Ventures in 2014. Synack’s two founders are ex-NSA employees, and they are using that experience to inform an entirely new type of business model. Synack recruits and vets a global network of “white hat hackers,” and then offers their services to companies worried about their cyber networks. For a fee, these hackers are able to find and repair any security risks.

So how would crowdsourced national cybersecurity work in practice?

For one, there would be free and transparent sharing of computer code used to detect cyber threats between the government and private sector. In December, the U.S. Army Research Lab added a bit of free source code, a “network forensic analysis framework” known as Dshell, to the mega-popular code-sharing site GitHub. Already, there have been 100 downloads and more than 2,000 unique visitors. The goal, says William Glodek of the U.S. Army Research Laboratory, is for this shared code to “help facilitate the transition of knowledge and understanding to our partners in academia and industry who face the same problems.”

This open sourcing of cyber defense would be enhanced with a scaled-up program of recruiting “white hat hackers” to become officially part of the government’s cybersecurity efforts. Popular annual events such as the DEF CON hacking conference could be used to recruit talented cyber sleuths to work alongside the government.

There have already been examples of communities where people facing a common cyber threat gather together to share intelligence. Perhaps the best-known example is the Conficker Working Group, a security coalition that was formed in late 2008 to share intelligence about malicious Conficker malware. Another example is the Financial Services Information Sharing and Analysis Center, which was created by presidential mandate in 1998 to share intelligence about cyber threats to the nation’s financial system.

Of course, there are some drawbacks to this crowdsourcing idea. For one, such a collaborative approach to cybersecurity might open the door to government cyber defenses being infiltrated by the enemy. Ackerman makes the point that you never really know who’s contributing to any community. Even on a site such as GitHub, it’s theoretically possible that an ISIS hacker or someone like Edward Snowden could download the code, reverse engineer it, and then use it to insert “Trojan Horses” intended for military targets into the code….(More)

US government and private sector developing ‘precrime’ system to anticipate cyber-attacks


Martin Anderson at The Stack: “The USA’s Office of the Director of National Intelligence (ODNI) is soliciting the involvement of the private and academic sectors in developing a new ‘precrime’ computer system capable of predicting cyber-incursions before they happen, based on the processing of ‘massive data streams from diverse data sets’ – including social media and possibly deanonymised Bitcoin transactions….
At its core, the predictive technologies to be developed in association with the private sector and academia over 3-5 years are charged with the mission ‘to invest in high-risk/high-payoff research that has the potential to provide the U.S. with an overwhelming intelligence advantage over our future adversaries’.
The R&D program is intended to generate completely automated, human-free prediction systems for four categories of event: unauthorised access, Denial of Service (DoS), malicious code, and scans and probes seeking access to systems.
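CAUSE’s actual methods are not public, but a toy sketch conveys the flavour of automated warning from data streams: smooth the weekly volume of category-relevant chatter and flag a category when the latest week jumps well above its baseline. Everything below, from the smoothing constant to the threshold, is our assumption:

```python
# Toy warning sketch in the spirit of CAUSE -- entirely our illustration;
# the program's actual methods are not public. Weekly counts of
# category-relevant chatter are smoothed, and a category is flagged when
# the latest week jumps well above its smoothed baseline.

def smooth(counts, alpha=0.5):
    """Exponentially weighted average of weekly mention counts."""
    level = counts[0]
    for c in counts[1:]:
        level = alpha * c + (1 - alpha) * level
    return level

def warn(weekly_counts, threshold=2.0):
    """Flag event categories whose latest week far exceeds the baseline."""
    flagged = []
    for category, counts in weekly_counts.items():
        baseline = smooth(counts[:-1])
        if baseline > 0 and counts[-1] / baseline >= threshold:
            flagged.append(category)
    return flagged

chatter = {  # hypothetical weekly forum-mention counts per event category
    "denial of service": [4, 5, 3, 6, 18],  # sudden spike
    "malicious code":    [7, 6, 8, 7, 8],   # steady
}
print(warn(chatter))  # ['denial of service']
```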
The CAUSE project is an unclassified program, and participating companies and organisations will not be granted access to NSA intercepts. The scope of the project, in any case, seems focused on the analysis of publicly available Big Data, including web searches, social media exchanges and trawling ungovernable avalanches of information in which clues to future maleficent actions are believed to be discernible.
Program manager Robert Rahmer says: “It is anticipated that teams will be multidisciplinary and might include computer scientists, data scientists, social and behavioral scientists, mathematicians, statisticians, content extraction experts, information theorists, and cyber-security subject matter experts having applied experience with cyber capabilities.”
Battelle, one of the organisations interested in participating in CAUSE, proposes employing Hadoop and Apache Spark as an approach to the data mountain, and includes in its preliminary proposal an intent to ‘de-anonymize Bitcoin sale/purchase activity to capture communication exchanges more accurately within threat-actor forums…’.
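Battelle’s pipeline is not public either, but the matching step it hints at can be sketched in a few lines: legacy Bitcoin addresses are base58 strings beginning with 1 or 3, so a regular expression can link the addresses a forum handle posts. The handle and posts below are hypothetical, and at CAUSE scale this mapping would run as a distributed Spark job rather than a Python loop:

```python
# Plain-Python sketch of the matching step -- Battelle's actual Spark
# pipeline is not public, and the handle and posts here are hypothetical.
# Legacy Bitcoin addresses are base58 strings starting with 1 or 3.
import re
from collections import defaultdict

BTC_ADDRESS = re.compile(r"\b[13][a-km-zA-HJ-NP-Z1-9]{25,34}\b")

def addresses_by_handle(posts):
    """Map each forum handle to the Bitcoin addresses found in its posts."""
    found = defaultdict(set)
    for handle, text in posts:
        found[handle].update(BTC_ADDRESS.findall(text))
    return found

posts = [
    ("actor99", "send payment to 1BvBMSEYstWetqTFn5Au4m4GFg7xJaNVN2"),
    ("actor99", "new drop, same address 1BvBMSEYstWetqTFn5Au4m4GFg7xJaNVN2"),
]
print(addresses_by_handle(posts))
```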
Identifying and categorising quality signal in the ‘white noise’ of Big Data is a central plank in CAUSE, and IARPA maintains several offices to deal with different aspects of it. Its pointedly named ‘Office for Anticipating Surprise’ frames the CAUSE project best, since it initiated it. The OAS is occupied with ‘Detecting and forecasting the emergence of new technical capabilities’, ‘Early warning of social and economic crises, disease outbreaks, insider threats, and cyber attacks’ and ‘Probabilistic forecasts of major geopolitical trends and rare events’.
Another department involved is the Office of Incisive Analysis, which is attempting to break down the ‘data static’ problem into manageable mission stages:
1) Large data volumes and varieties – “Providing powerful new sources of information from massive, noisy data that currently overwhelm analysts”
2) Social-Cultural and Linguistic Factors – “Analyzing language and speech to produce insights into groups and organizations”
3) Improving Analytic Processes – “Dramatic enhancements to the analytic process at the individual and group level”
The Office of Smart Collection develops ‘new sensor and transmission technologies’, with the pursuit of ‘Innovative approaches to gain access to denied environments’ as part of its core mission, while the Office of Safe and Secure Operations concerns itself with ‘Revolutionary advances in science and engineering to solve problems intractable with today’s computers’.
The CAUSE program, which attracted 150 developers, organisations, academics and private companies to the initial event, will announce specific figures about funding later in the year, and practice ‘predictions’ from participants will begin in the summer, in an accelerating and stage-managed program over five years….(More)”

Reclaiming Accountability: Transparency, Executive Power, and the U.S. Constitution


New book by Heidi Kitrosser: “Americans tend to believe in government that is transparent and accountable. Those who govern us work for us, and therefore they must also answer to us. But how do we reconcile calls for greater accountability with the competing need for secrecy, especially in matters of national security? Those two imperatives are usually taken to be antithetical, but Heidi Kitrosser argues convincingly that this is not the case—and that our concern ought to lie not with secrecy, but with the sort of unchecked secrecy that can result from “presidentialism,” or constitutional arguments for broad executive control of information.
In Reclaiming Accountability, Kitrosser traces presidentialism from its start as part of a decades-old legal movement through its appearance during the Bush and Obama administrations, demonstrating its effects on secrecy throughout. Taking readers through the key presidentialist arguments—including “supremacy” and “unitary executive theory”—she explains how these arguments misread the Constitution in a way that is profoundly at odds with democratic principles. Kitrosser’s own reading offers a powerful corrective, showing how the Constitution provides myriad tools, including the power of Congress and the courts to enforce checks on presidential power, through which we could reclaim government accountability….(More)”

Measuring government impact in a social media world


Arthur Mickoleit & Ryan Androsoff at OECD Insights: “There is hardly a government around the world that has not yet felt the impact of social media on how it communicates and engages with citizens. And while the most prominent early adopters in the public sector have tended to be politicians (think of US President Barack Obama’s impressive use of social media during his 2008 campaign), government offices are also increasingly jumping on the bandwagon. Yes, we are talking about those – mostly bricks-and-mortar – institutions that often toil away from the public gaze, managing the public administration in our countries. As the world changes, they too are increasingly engaging in a very public way through social media.
Research from our recent OECD working paper “Social Media Use by Governments” shows that as of November 2014, out of 34 OECD countries, 28 have a Twitter account for the office representing the top executive institution (head of state, head of government, or government as a whole), and 21 have a Facebook account….
 
But what is the impact governments can or should expect from social media? Is it all just vanity and peer pressure? Surely not.
Take the Spanish national police force (e.g. on Twitter, Facebook & YouTube), a great example of using social media to build long-term engagement, trust and a better public service. That is what so many governments yearn for, and the Spanish police seem to have managed it well.
Or take the Danish “tax daddy” on Twitter – @Skattefar. It started out as the national tax administration’s quest to make it easier for everyone to submit correct tax filings; it is now one of the best examples around of a tax agency gone social.
Government administrations can use social media for internal purposes too. The Government of Canada used public platforms like Twitter and internal platforms like GCpedia and GCconnex to conduct a major employee engagement exercise (Blueprint 2020) to develop a vision for the future of the Canadian federal public service.
And when it comes to raising efficiency in the public sector, read this account of a Dutch research facility’s Director who decided to stop email. Not reduce it, but stop it altogether and replace it with social media.
There are so many other examples that could be cited. But the major question is how can we even begin to appraise the impact of these different initiatives? Because as we’ve known since the 19th century, “if you cannot measure it, you cannot improve it” (quote usually attributed to Lord Kelvin). Some aspects of impact measurement for social media can be borrowed from the private sector with regards to presence, popularity, penetration, and perception. But it’s around purpose that impact measurement agendas will split between the private sector and government. Virtually all companies will want to calculate the return on social media investments based on whether it helps them improve their financial returns. That’s different in the public sector where purpose is rarely defined in commercial terms.
A good impact assessment for social media in the public sector therefore needs to be built around its unique purpose-orientation. This is much more difficult to measure and it will involve a mix of quantitative data (e.g. reach of target audience) and qualitative data (e.g. case studies describing tangible impact). Social Media Use by Governments proposes a framework to start looking at social media measurement in gradual steps – from measuring presence, to popularity, to penetration, to perception, and finally, to purpose-orientation. The aim of this framework is to help governments develop truly relevant metrics and start treating social media activity by governments with the same public management rigour that is applied to other government activities.
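To make that ladder concrete, here is a sketch of what its rungs might look like as metrics for a single government account. The formulas and figures are our illustrative assumptions, not the OECD toolkit’s definitions:

```python
# Illustrative metrics for the presence -> popularity -> penetration ->
# perception -> purpose ladder. The formulas and figures are our
# assumptions, not the OECD toolkit's definitions.

def social_media_metrics(followers, target_population, engaged_citizens,
                         positive_mentions, total_mentions,
                         service_requests_resolved):
    """Compute one toy metric per rung above mere presence."""
    return {
        "popularity":  followers,                                    # raw audience
        "penetration": followers / target_population,                # share of target reached
        "perception":  positive_mentions / total_mentions,           # sentiment share
        "purpose":     service_requests_resolved / engaged_citizens, # mission outcome per engagement
    }

# Hypothetical figures for a national tax agency's account:
print(social_media_metrics(
    followers=120_000, target_population=4_000_000,
    engaged_citizens=15_000, positive_mentions=1_800,
    total_mentions=2_400, service_requests_resolved=9_000))
```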
This is far from an exact science, but we are beginning the work collaborating with member and partner governments to develop a toolkit that will help decision-makers implement the OECD Recommendation on Digital Government Strategies, including on the issue of social media metrics…(More)”.