White House Blog by Leigh Heyman: “The White House petitions platform, We the People, just became more accessible and open than ever before. We are very excited to announce the launch of the “write” version of the Petitions Application Programming Interface, or “API.”
Starting today, people can sign We the People petitions even when they’re not on WhiteHouse.gov, using third-party platforms, including other petitions services, or even their own websites or blogs. All of those signatures, once validated, will count toward the 100,000-signature threshold needed for an official White House response.
We the People started with a simple goal: to give more Americans a way to reach their government. To date, the platform has been more successful than we could have imagined, with more than 16 million users creating and signing more than 360,000 petitions.
We launched our Write API beta test last year, and since then we’ve been hard at work, both internally and in collaboration with our beta test participants. Last spring, as part of the National Day of Civic Hacking, we hosted a hackathon right here at the White House, where our engineers spent a day sitting side-by-side with our beta testers to help get our code and theirs ready for the big day.
That big day has finally come.
Click here if you want to get started right away, or read on to learn more about the Petitions Write API….”
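The post links to the API documentation rather than spelling out the request format. As a rough illustration of what a third-party integration could look like, here is a minimal Python sketch of assembling a signature-submission request; the endpoint path, field names, and `api_key` query parameter are assumptions for illustration only, so consult the official Petitions Write API documentation for the real contract.

```python
import json

# Hypothetical base URL and field names; the actual Write API contract
# is defined in the official Petitions API documentation.
API_BASE = "https://api.whitehouse.gov/v1"

def build_signature_request(petition_id, api_key, first_name, last_name,
                            email, zip_code):
    """Assemble the URL and JSON body for submitting one signature."""
    url = f"{API_BASE}/petitions/{petition_id}/signatures.json?api_key={api_key}"
    body = {
        "petition_id": petition_id,
        "first_name": first_name,
        "last_name": last_name,
        "email": email,
        "zip": zip_code,
    }
    return url, json.dumps(body)

# A third-party site would POST this body to the URL, then let the
# White House validate the signature before it counts toward the total.
url, body = build_signature_request("50cb6d2ba9a0b1c52e000017", "DEMO_KEY",
                                    "Jane", "Doe", "jane@example.com", "20500")
print(url)
```

The validation step mentioned in the post happens server-side: a signature only counts toward the 100,000 threshold after the platform confirms it.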
The government wants to study ‘social pollution’ on Twitter
Washington Post: “If you take to Twitter to express your views on a hot-button issue, does the government have an interest in deciding whether you are spreading “misinformation’’? If you tweet your support for a candidate in the November elections, should taxpayer money be used to monitor your speech and evaluate your “partisanship’’?
My guess is that most Americans would answer those questions with a resounding no. But the federal government seems to disagree. The National Science Foundation, a federal agency whose mission is to “promote the progress of science; to advance the national health, prosperity and welfare; and to secure the national defense,” is funding a project to collect and analyze your Twitter data.
The project is being developed by researchers at Indiana University, and its purported aim is to detect what they deem “social pollution” and to study what they call “social epidemics,” including how memes — ideas that spread throughout pop culture — propagate. What types of social pollution are they targeting? “Political smears,” so-called “astroturfing” and other forms of “misinformation.”
Named “Truthy,” after a term coined by TV host Stephen Colbert, the project claims to use a “sophisticated combination of text and data mining, social network analysis, and complex network models” to distinguish between memes that arise in an “organic manner” and those that are manipulated into being.
But there’s much more to the story. Focusing in particular on political speech, Truthy keeps track of which Twitter accounts are using hashtags such as #teaparty and #dems. It estimates users’ “partisanship.” It invites feedback on whether specific Twitter users, such as the Drudge Report, are “truthy” or “spamming.” And it evaluates whether accounts are expressing “positive” or “negative” sentiments toward other users or memes…”
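To make the article’s description concrete, here is a toy version of the kind of hashtag and sentiment tallying it attributes to the project. This is a deliberately crude sketch: the hashtag-to-leaning mapping and word lists are invented for illustration, and Truthy’s actual methods involve far more sophisticated text mining and network models.

```python
from collections import Counter

# Illustrative assumption: a fixed mapping from partisan hashtags to a
# leaning, plus tiny sentiment word lists. Real systems learn these.
PARTISAN_TAGS = {"#teaparty": "right", "#dems": "left"}
POSITIVE = {"great", "support", "win"}
NEGATIVE = {"smear", "fraud", "lies"}

def profile_account(tweets):
    """Return crude partisanship and sentiment tallies for one account."""
    leaning = Counter()
    sentiment = Counter()
    for text in tweets:
        for word in text.lower().split():
            if word in PARTISAN_TAGS:
                leaning[PARTISAN_TAGS[word]] += 1
            elif word in POSITIVE:
                sentiment["positive"] += 1
            elif word in NEGATIVE:
                sentiment["negative"] += 1
    return leaning, sentiment

leaning, sentiment = profile_account([
    "Great rally tonight #teaparty",
    "Another smear from the media #teaparty",
])
print(dict(leaning), dict(sentiment))  # {'right': 2} {'positive': 1, 'negative': 1}
```

Even this toy version shows why the approach is contentious: labeling accounts by the hashtags they use necessarily encodes someone’s judgment about which tags are “partisan.”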
Open data for open lands
Radar: “President Obama’s well-publicized national open data policy (pdf) makes it clear that government data is a valuable public resource for which the government should be making efforts to maximize access and use. This policy was based on lessons from previous government open data success stories, such as weather data and GPS, which form the basis for countless commercial services that we take for granted today and that deliver enormous value to society. (You can see an impressive list of companies reliant on open government data via GovLab’s Open Data 500 project.)
Based on this open data policy, I’ve been encouraging entrepreneurs to invest their time and ingenuity to explore entrepreneurial opportunities based on government data. I’ve even invested (through O’Reilly AlphaTech Ventures) in one such start-up, Hipcamp, which provides user-friendly interfaces to making reservations at national and state parks.
A better system is sorely needed. The current reservation system, managed by Active Network / Reserve America, is clunky and almost unusable. Hipcamp changes all that, making it a breeze to reserve camping spots.
But now this is under threat. Active Network / Reserve America’s 10-year contract is up for renewal, and the Department of the Interior had promised an RFP for a new contract that conformed with the open data mandate. Ideally, that RFP would require an API so that independent companies could provide alternate interfaces, just like travel sites provide booking interfaces for air travel, hotels, and more. That explosion of consumer convenience should be happening for customers of our nation’s parks as well, don’t you think?…”
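What would it mean for independent companies to build on such an API? A minimal sketch, assuming a hypothetical availability feed: a third party pulls structured campsite data and layers friendlier search on top, exactly as travel sites do for flights and hotels. No such public API existed at the time of writing; the field names here are invented.

```python
# Hypothetical: each record is one campsite from a park's availability
# feed. An open API would let any developer filter and present this.

def available_sites(campsites, date, max_fee):
    """Filter an availability feed by date and nightly fee."""
    return [s["name"] for s in campsites
            if date in s["open_dates"] and s["fee"] <= max_fee]

feed = [
    {"name": "Upper Pines 12", "open_dates": ["2014-07-04"], "fee": 26},
    {"name": "North Rim 3", "open_dates": ["2014-07-05"], "fee": 18},
]
print(available_sites(feed, "2014-07-04", 30))  # ['Upper Pines 12']
```

The point of the open data mandate is that the hard part, the authoritative availability data, would come from the government feed, while competition happens in the interface layer.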
The Role of Open Data in Choosing a Neighborhood
PlaceILive Blog: “To what extent is it important to get familiar with our environment?
If we think about how the world around us has changed over the years, it is not surprising that, while walking to work, we might encounter new little shops, restaurants, or gas stations we had never noticed before. Likewise, how many times have we wandered for hours just to find a green space for a run, only to discover that the one we found was more polluted than other urban areas?
Citizens are not always properly informed about the evolution of the places they live in. That is why it is crucial for people to be constantly up to date with accurate information about the neighborhood they have chosen or are going to choose.
London is clear evidence of how transparency in providing data is fundamental to success as a Smart City.
The GLA’s London Datastore, for instance, is a public platform of datasets with up-to-date figures on the city’s main services, as well as on residents’ lifestyles and environmental risks. These data are then made more easily accessible to the community through the London Dashboard.
The importance of providing free information is also demonstrated by the integration of maps, which are an efficient means of geolocation. A map that makes it easy to find the services you need nearby can be decisive in the search for a location.
(source: Smart London Plan)
The Open Data Index, published by the Open Knowledge Foundation in 2013, is another useful tool for data retrieval: it ranks countries by the openness and availability of data such as transport timetables and national statistics.
Here it is possible to check the UK Open Data Census and the US City Open Data Census.
As noted, making open data available and easily findable online has not only been a success for US cities but has also benefited app makers and civic hackers. Lauren Reid, a spokesperson for Code for America, told Government Technology: “The more data we have, the better picture we have of the open data landscape.”
That is, on the whole, what Place I Live puts the most effort into: fostering a new awareness of the environment by providing free information, in order to help citizens choose the best place they can live.
The result is straightforward: the website’s homepage lets visitors type in an address of interest and displays an overview of neighborhood metrics and a Life Quality Index calculated for every point on the map.
Finding the nearest medical institutions, schools or ATMs thus becomes quick and clear, as does getting an overview of general community information. Moreover, the data’s reliability and accessibility are continually reviewed by a strong team of professionals with expertise in data analysis, mapping, IT architecture and global markets.
For the moment the company’s work is focused on London, Berlin, Chicago, San Francisco and New York, with a longer-term goal of covering more than 200 cities.
In the US City Open Data Census, San Francisco achieved the highest score, proof of the city’s labour in putting technological expertise at everyone’s disposal and in meeting users’ needs through meticulous selection of datasets. Building on this success, San Francisco is partnering with the University of Chicago on a data analytics dashboard of sustainability performance statistics, named the Sustainable Systems Framework, which is expected to be released in beta by the end of the first quarter of 2015.
Another remarkable contribution to the spread of Open Data comes from the Bartlett Centre for Advanced Spatial Analysis (CASA) at University College London (UCL); Oliver O’Brien, researcher at the UCL Department of Geography and software developer at CASA, is one of the contributors to this cause.
Among his projects, one notable accomplishment is London’s CityDashboard, a real-time control panel of spatial data for the city. The web page also lets users view the data on a simplified map and look at dashboards for other UK cities.
In addition, his Bike Share Map is a live global view of bike-sharing systems in over a hundred cities around the world; bike sharing has recently drawn greater public attention as a novel form of transportation, above all in Europe and China….”
Why Are Political Scientists Studying Ice Bucket Challenges?
Fawn Johnson at the National Journal: “Who is more civically engaged—the person who votes in every election or the nonvoter who volunteers as a crossing guard at the local elementary school? What about the person who comments on an online news story? Does it count more if he posts the article on his Facebook page and urges his friends to act? What about the retired couple who takes care of the next-door neighbor’s kid after school until her single mom gets home from work?
The concept of civic engagement is mutating so fast that researchers are having a hard time keeping up with it. The Bureau of Labor Statistics has been collecting data on volunteering—defined as doing unpaid work through or for an organization—only since 2002. But even in that relatively short time period, that definition of “volunteering” has become far too limiting to cover the vast array of civic activity sprouting up online and in communities across the country.
Here’s just one example: Based on the BLS data alone, you would think that whites who graduated from college are far more likely to volunteer than African Americans or Hispanics with only high school degrees. But the BLS’s data doesn’t take into account the retired couple mentioned above, who, based on cultural norms, are more likely to be black or Hispanic. It doesn’t capture the young adults in poor neighborhoods who tell researchers that they consider being a role model to younger kids their most important contribution to their communities. Researchers say those informal forms of altruism are more common among minority communities, while BLS-type “volunteering”—say, being a tutor to a disadvantaged child—is more common among middle-class whites. Moreover, the BLS’s data only scratches the surface of political involvement…”
Training Students to Extract Value from Big Data
The nation’s ability to make use of data depends heavily on the availability of a workforce that is properly trained and ready to tackle high-need areas. Training students to be capable in exploiting big data requires experience with statistical analysis, machine learning, and computational infrastructure that permits the real problems associated with massive data to be revealed and, ultimately, addressed. Analysis of big data requires cross-disciplinary skills, including the ability to make modeling decisions while balancing trade-offs between optimization and approximation, all while being attentive to useful metrics and system robustness. To develop those skills in students, it is important to identify whom to teach, that is, the educational background, experience, and characteristics of a prospective data-science student; what to teach, that is, the technical and practical content that should be taught to the student; and how to teach, that is, the structure and organization of a data-science program.
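One concrete instance of the optimization-versus-approximation trade-off the passage mentions is estimating a statistic over data too large to hold in memory. As a sketch (my example, not one from the workshop), reservoir sampling keeps a fixed-size uniform sample of an arbitrarily long stream, trading exactness for bounded memory:

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Keep a uniform random sample of k items from a stream of any length."""
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            sample.append(item)
        else:
            # Replace an existing item with probability k / (i + 1),
            # which keeps every item equally likely to survive.
            j = rng.randint(0, i)
            if j < k:
                sample[j] = item
    return sample

# Estimate the mean of a million numbers from a 1,000-item sample,
# using O(k) memory instead of holding all the data.
sample = reservoir_sample(range(1_000_000), 1_000)
estimate = sum(sample) / len(sample)
print(round(estimate))  # close to the true mean of 499999.5
```

Teaching students when such an approximation is good enough, and how to bound its error, is exactly the kind of modeling judgment the paragraph describes.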
Training Students to Extract Value from Big Data summarizes a workshop convened in April 2014 by the National Research Council’s Committee on Applied and Theoretical Statistics to explore how best to train students to use big data. The workshop explored the need for training and curricula and coursework that should be included. One impetus for the workshop was the current fragmented view of what is meant by analysis of big data, data analytics, or data science. New graduate programs are introduced regularly, and they have their own notions of what is meant by those terms and, most important, of what students need to know to be proficient in data-intensive work. This report provides a variety of perspectives about those elements and about their integration into courses and curricula…”
The Data Manifesto
Development Initiatives: “Staging a Data Revolution
Accessible, useable, timely and complete data is core to sustainable development and social progress. Access to information provides people with a base to make better choices and have more control over their lives. Too often attempts to deliver sustainable economic, social and environmental results are hindered by the failure to get the right information, in the right format, to the right people, at the right time. Worse still, the most acute data deficits often affect the people and countries facing the most acute problems.
The Data Revolution should be about data grounded in real life. Data and information that gets to the people who need it at national and sub-national levels to help with the decisions they face – hospital directors, school managers, city councillors, parliamentarians. Data that goes beyond averages – that is disaggregated to show the different impacts of decisions, policies and investments on gender, social groups and people living in different places and over time.
We need a Data Revolution that sets a new political agenda, that puts existing data to work, that improves the way data is gathered and ensures that information can be used. To deliver this vision, we need the following steps.
12 steps to a Data Revolution
1. Implement a national ‘Data Pledge’ to citizens that is supported by governments, private and non-governmental sectors
2. Address real world questions with joined up and disaggregated data
3. Empower and up-skill data users of the future through education
4. Examine existing frameworks and publish existing data
5. Build an information bank of data assets
6. Allocate funding available for better data according to national and sub-national priorities
7. Strengthen national statistical systems’ capacity to collect data
8. Implement a policy that data is ‘open by default’
9. Improve data quality by subjecting it to public scrutiny
10. Put information users’ needs first
11. Recognise technology cannot solve all barriers to information
12. Invest in infomediaries’ capacity to translate data into information that policymakers, civil society and the media can actually use…”
Things Fall Apart: How Social Media Leads to a Less Stable World
Commentary by Curtis Hougland at Knowledge@Wharton: “James Foley. David Haines. Steven Sotloff. The list of people beheaded by followers of the Islamic State of Iraq and Syria (ISIS) keeps growing. The filming of these acts on video and distribution via social media platforms such as Twitter represent a geopolitical trend in which social media has become the new frontline for proxy wars across the globe. While social media does indeed advance connectivity and wealth among people, its proliferation at the same time results in a markedly less stable world.
That social media benefits mankind is irrefutable. I have been an evangelist for the power of new media for 20 years. However, technology in the form of globalized communication, transportation and supply chains conspires to make today’s world more complex. Events in any corner of the world now impact the rest of the globe quickly and sharply. Nations are being pulled apart along sectarian seams in Iraq, tribal divisions in Afghanistan, national interests in Ukraine and territorial fences in Gaza. These conflicts portend a quickening of global unrest, confirmed by Foreign Policy magazine’s map of civil protest. The ISIS videos are simply the exposed wire. I believe that over the next century, even great nations will Balkanize — break into smaller nations. One of the principal drivers of this Balkanization is social media.
Social media is a behavior, an expression of the innate human need to socialize and share experiences. Social media is not simply a set of technology channels and networks. Both the public and private sectors have underestimated the human imperative to behave socially. The evidence is now clear with more than 52% of the population living in cities and approximately 2 billion people active in social media globally. Some 96% of content emanates from individuals, not brands, media or governments — a volume that far exceeds participation in democratic elections.
Social media is not egalitarian, though. Despite the exponential growth of user-generated content, people prefer to congregate online around like-minded individuals. Rather than seek out new beliefs, people choose to reinforce their existing political opinions through their actions online. This is illustrated in Pew Internet’s 2014 study, “Mapping Twitter Topic Networks from Polarized Crowds to Community Clusters.” Individuals self-organize by affinity, and within affinity, by sensibility and personality. The ecosystem of social media is predicated on delivering more of what the user already likes. This, precisely, is the function of a Follow or Like. In this way, media coagulates rather than fragments online….”
Driving Innovation With Open Data
Research Article by The GovLab’s Joel Gurin (Chapter 6 in the report, “The Future of Data-Driven Innovation.”): The chapters in this report provide ample evidence of the power of data and its business potential. But like any business resource, data is only valuable if the benefit of using it outweighs its cost. Data collection, management, distribution, quality control, and application all come at a price—a potential obstacle for companies of any size, though especially for small and medium-sized enterprises.
Over the last several years, however, the “I” of data’s return on investment (ROI) has become less of a hurdle, and new data-driven companies are developing rapidly as a result. One major reason is that governments at the federal, state, and local level are making more data available at little or no charge for the private sector and the public to use. Governments collect data of all kinds—including scientific, demographic, and financial data—at taxpayer expense.
Now, public sector agencies and departments are increasingly repaying that public investment by making their data available to all for free or at a low cost. This is Open Data. While there are still costs in putting the data to use, the growing availability of this national resource is becoming a significant driver for hundreds of new businesses. This chapter describes the growing potential of Open Data and the data-driven innovation it supports, the types of data and applications that are most promising, and the policies that will encourage innovation going forward. Read and download this article in PDF format.”
Codifying Collegiality: Recent Developments in Data Sharing Policy in the Life Sciences
New paper in PLoS ONE: “Over the last decade, there have been significant changes in data sharing policies and in the data sharing environment faced by life science researchers. Using data from a 2013 survey of over 1600 life science researchers, we analyze the effects of sharing policies of funding agencies and journals. We also examine the effects of new sharing infrastructure and tools (i.e., third party repositories and online supplements). We find that recently enacted data sharing policies and new sharing infrastructure and tools have had a sizable effect on encouraging data sharing. In particular, third party repositories and online supplements as well as data sharing requirements of funding agencies, particularly the NIH and the National Human Genome Research Institute, were perceived by scientists to have had a large effect on facilitating data sharing. In addition, we found a high degree of compliance with these new policies, although noncompliance resulted in few formal or informal sanctions. Despite the overall effectiveness of data sharing policies, some significant gaps remain: about one third of grant reviewers placed no weight on data sharing plans in their reviews, and a similar percentage ignored the requirements of material transfer agreements. These patterns suggest that although most of these new policies have been effective, there is still room for policy improvement.”