The Participatory Approach to Open Data


at the Smart Chicago Collaborative: “…Having vast stores of government data is great, but to make this data useful – powerful – takes a different type of approach. The next step in the open data movement will be about participatory data.

Systems that talk back

One of the great advantages of Chicago’s 311 ServiceTracker is that when you submit something to the system, the system has the capacity to talk back, giving you a tracking number and an option to get email updates about your request. What also happens is that as soon as you enter your request, the data get automatically uploaded into the city’s data portal, giving other 311 apps like SeeClickFix access to the information as well…
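Chicago’s portal exposes its datasets through a Socrata-style JSON API, which is how third-party apps can pick up 311 requests as they appear. Here is a minimal Python sketch of how such a query might be formed; the dataset identifier and column name below are placeholders, not the city’s actual values.

```python
from urllib.parse import urlencode

# Base URL of a Socrata-style open data portal (Chicago's portal follows
# this pattern). The dataset identifier passed in below is a placeholder.
PORTAL = "https://data.cityofchicago.org/resource"

def build_311_query(dataset_id, service_type=None, limit=50):
    """Build a SODA-style query URL for a 311 service-request dataset.

    SODA filters are passed as column=value query parameters; the
    'service_request_type' column name here is hypothetical.
    """
    params = {"$limit": limit}
    if service_type:
        params["service_request_type"] = service_type
    return f"{PORTAL}/{dataset_id}.json?{urlencode(params)}"

# Example: fetch up to 50 pothole requests (an app would then GET this URL).
url = build_311_query("xxxx-xxxx", service_type="Pothole in Street")
```

An app like SeeClickFix would issue an HTTP GET against such a URL and receive the matching requests as JSON.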

Participatory Legislative Apps

We already see a number of apps that allow users to actively participate using legislative data.
At the federal level, apps like PopVox allow users to find and track legislation that’s making its way through Congress. The app then allows users to vote on whether they approve or disapprove of a particular bill. You can then explain your reasoning in a message that will be sent to all of your elected officials. The app makes it easier for residents to send feedback on legislation by providing a user interface that cuts through the somewhat difficult process of keeping tabs on bills.
At the state level, New York’s OpenLegislation site allows users to search for state legislation and provide commentary on each resolution.
At the local level, apps like Councilmatic allow users to post comments on city legislation – but these comments aren’t mailed or sent to aldermen the way PopVox messages are. The interaction only works if the aldermen are also using Councilmatic to receive feedback…

Crowdsourced Data

Chicago has hardwired several datasets into its computer systems, meaning that this data is automatically updated as the city does the people’s business.
But city governments can’t be everywhere at once. There are a number of apps designed to gather information from residents to better understand what’s going on in their cities.
In Gary, Ind., the city partnered with the University of Chicago and LocalData to collect information on the state of its buildings. LocalData is also being used in Chicago, Houston, and Detroit by both city governments and non-profit organizations.
Another method the City of Chicago has been using to crowdsource data has been to put several of their datasets on GitHub and accept pull requests on that data. (A pull request is when one developer makes a change to a code repository and asks the original owner to merge the new changes into the original repository.) An example of this is bikers adding private bike rack locations to the city’s own bike rack dataset.
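As a toy illustration of the kind of change such a pull request might contain, here is a Python sketch that appends a privately installed bike rack to a CSV like the city’s; the column names are hypothetical, not the actual schema of Chicago’s dataset.

```python
import csv
import io

# A miniature stand-in for the city's bike rack CSV (hypothetical columns).
original_csv = """address,latitude,longitude,racks
123 N State St,41.8838,-87.6278,2
"""

def add_bike_rack(csv_text, address, lat, lon, racks):
    """Return the CSV with one new row appended.

    This new row is essentially the diff a pull request would show:
    the contributor forks the repository, makes this change, and asks
    the city to merge it.
    """
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    rows.append({"address": address, "latitude": str(lat),
                 "longitude": str(lon), "racks": str(racks)})
    out = io.StringIO()
    writer = csv.DictWriter(
        out, fieldnames=["address", "latitude", "longitude", "racks"])
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

updated = add_bike_rack(original_csv, "456 W Madison St", 41.8819, -87.6398, 4)
```

The city, as the repository owner, reviews the proposed row and merges it into the canonical dataset if it checks out.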

Going from crowdsourced to participatory

Shareabouts is a mapping platform by OpenPlans that gives cities the ability to collect resident input on city infrastructure. Chicago’s Divvy Bikeshare program is using the tool to collect resident feedback on where new Divvy stations should go. The app allows users to comment on suggested locations and share the discussion on social media.
But perhaps the most distinctive participatory app has been piloted by the City of South Bend, Indiana. CityVoice is a Code for America fellowship project designed to get resident feedback on abandoned buildings in South Bend…. (More)”

Democratizing Inequalities: Dilemmas of the New Public Participation


New book edited by Caroline W. Lee, Michael McQuarrie and Edward T. Walker: “Opportunities to “have your say,” “get involved,” and “join the conversation” are everywhere in public life. From crowdsourcing and town hall meetings to government experiments with social media, participatory politics increasingly seem like a revolutionary antidote to the decline of civic engagement and the thinning of the contemporary public sphere. Many argue that, with new technologies, flexible organizational cultures, and a supportive policymaking context, we now hold the keys to large-scale democratic revitalization.
Democratizing Inequalities shows that the equation may not be so simple. Modern societies face a variety of structural problems that limit potentials for true democratization, as well as vast inequalities in political action and voice that are not easily resolved by participatory solutions. Popular participation may even reinforce elite power in unexpected ways. Resisting an oversimplified account of participation as empowerment, this collection of essays brings together a diverse range of leading scholars to reveal surprising insights into how dilemmas of the new public participation play out in politics and organizations. Through investigations including fights over the authenticity of business-sponsored public participation, the surge of the Tea Party, the role of corporations in electoral campaigns, and participatory budgeting practices in Brazil, Democratizing Inequalities seeks to refresh our understanding of public participation and trace the reshaping of authority in today’s political environment.”

The Emerging Science of Human-Data Interaction


Emerging Technology From the arXiv: “The rapidly evolving ecosystem associated with personal data is creating an entirely new field of scientific study, say computer scientists. And this requires a much more powerful ethics-based infrastructure….
Now Richard Mortier at the University of Nottingham in the UK and a few pals say the increasingly complex, invasive and opaque use of data should be a call to arms to change the way we study data, interact with it and control its use. Today, they publish a manifesto describing how a new science of human-data interaction is emerging from this “data ecosystem” and say that it combines disciplines such as computer science, statistics, sociology, psychology and behavioural economics.
They start by pointing out that the long-standing discipline of human-computer interaction research has always focused on computers as devices to be interacted with. But our interaction with the cyber world has become more sophisticated as computing power has become ubiquitous, a phenomenon driven by the Internet but also through mobile devices such as smartphones. Consequently, humans are constantly producing and revealing data in all kinds of different ways.
Mortier and co say there is an important distinction between data that is consciously created and released such as a Facebook profile; observed data such as online shopping behaviour; and inferred data that is created by other organisations about us, such as preferences based on friends’ preferences.
This leads the team to identify three key themes associated with human-data interaction that they believe the communities involved with data should focus on.
The first of these is concerned with making data, and the analytics associated with it, both transparent and comprehensible to ordinary people. Mortier and co describe this as the legibility of data and say that the goal is to ensure that people are clearly aware of the data they are providing, the methods used to draw inferences about it and the implications of this.
Making people aware of the data being collected is straightforward but understanding the implications of this data collection process and the processing that follows is much harder. In particular, this could be in conflict with the intellectual property rights of the companies that do the analytics.
An even more significant factor is that the implications of this processing are not always clear at the time the data is collected. A good example is the way the New York Times tracked down an individual after her seemingly anonymized searches were published by AOL. It is hard to imagine that this individual had any idea that the searches she was making would later allow her identification.
The second theme is concerned with giving people the ability to control and interact with the data relating to them. Mortier and co describe this as “agency”. People must be allowed to opt in or opt out of data collection programs and to correct data if it turns out to be wrong or outdated, and so on. That will require simple-to-use data access mechanisms that have yet to be developed.
The final theme builds on this to allow people to change their data preferences in future, an idea the team call “negotiability”. Something like this is already coming into force in the European Union where the Court of Justice has recently begun to enforce the “right to be forgotten”, which allows people to remove information from search results under certain circumstances….”
Ref: http://arxiv.org/abs/1412.6159  Human-Data Interaction: The Human Face of the Data-Driven Society

HyperCities: Thick Mapping in the Digital Humanities


Book by Todd Presner, David Shepard, Yoh Kawano: “The prefix “hyper” refers to multiplicity and abundance. More than a physical space, a hypercity is a real city overlaid with information networks that document the past, catalyze the present, and project future possibilities. Hypercities are always under construction.
Todd Presner, David Shepard, and Yoh Kawano put digital humanities theory into practice to chart the proliferating cultural records of places around the world. A digital platform transmogrified into a book, it explains the ambitious online project of the same name that maps the historical layers of city spaces in an interactive, hypermedia environment. The authors examine the media archaeology of Google Earth and the cultural–historical meaning of map projections, and explore recent events—the “Arab Spring” and the Fukushima nuclear power plant disaster—through social media mapping that incorporates data visualizations, photographic documents, and Twitter streams. A collaboratively authored and designed work, HyperCities includes a “ghost map” of downtown Los Angeles, polyvocal memory maps of LA’s historic Filipinotown, avatar-based explorations of ancient Rome, and hour-by-hour mappings of the Tehran election protests of 2009.
Not a book about maps in the literal sense, HyperCities describes thick mapping: the humanist project of participating and listening that transforms mapping into an ethical undertaking. Ultimately, the digital humanities do not consist merely of computer-based methods for analyzing information. They are a means of integrating scholarship with the world of lived experience, making sense of the past in the layered spaces of the present for the sake of the open future.”

Launching Disasters.Data.Gov


Meredith Lee, Heather King, and Brian Forde at the OSTP Blog: “Strengthening our Nation’s resilience to disasters is a shared responsibility, with all community members contributing their unique skills and perspectives. Whether you’re a data steward who can unlock information and foster a culture of open data, an innovator who can help address disaster preparedness challenges, or a volunteer ready to join the “Innovation for Disasters” movement, we are excited for you to visit the new disasters.data.gov site, launching today.
First previewed at the White House Innovation for Disaster Response and Recovery Initiative Demo Day, disasters.data.gov is designed to be a public resource to foster collaboration and the continual improvement of disaster-related open data, free tools, and new ways to empower first responders, survivors, and government officials with the information needed in the wake of a disaster.
A screenshot from the new disasters.data.gov web portal.
Today, the Administration is unveiling the first in a series of Innovator Challenges that highlight pressing needs from the disaster preparedness community. The inaugural Innovator Challenge focuses on a need identified from firsthand experience of local emergency management, responders, survivors, and Federal departments and agencies. The challenge asks innovators across the nation: “How might we leverage real-time sensors, open data, social media, and other tools to help reduce the number of fatalities from flooding?”
In addition to this first Innovator Challenge, here are some highlights from disasters.data.gov:….(More)”

Google And Autism Speaks Team Up To Create Genomic Database On Autism


Emma Hutchings at PSFK: “Google and Autism Speaks have collaborated to launch ‘Mssng’, an awareness campaign to support the development of the world’s largest genomic database on autism. By sequencing the DNA of over 10,000 people with autism spectrum disorder (ASD) and their families, Mssng aims to answer many of the questions about the disorder that remain unanswered.
The Google Cloud will ensure that all of the stored sequenced data is accessible for free to researchers around the world. Scientists will be able to study trillions of data points in one single database. As a result of this open resource, many subtypes of autism could be identified, leading to more accurate and personalized treatments for individuals.
The huge amount of data being collected has created unique challenges for storage, analysis, and access. The Google Cloud Platform provides a solution with the engineering innovation needed to address the storage and analysis challenges. It also provides a portal for open source access by qualified researchers.
The campaign for Mssng includes visual components with striking images of crystalized DNA that tell the story of who we are as individuals. It will be supported on social media by a movement to try to raise both awareness and donations. Supporters are encouraged to remove vowels from their Twitter display name and post the following tweet: “We’re missing a lot of information on autism. Support @AutismSpeaks project #MSSNG by removing letters from your name: http://mss.ng.”

The Free 'Big Data' Sources Everyone Should Know


Bernard Marr at Linkedin Pulse: “…The moves by companies and governments to put large amounts of information into the public domain have made large volumes of data accessible to everyone….here’s my rundown of some of the best free big data sources available today.

Data.gov

The US Government pledged last year to make all government data available freely online. This site is the first stage and acts as a portal to all sorts of amazing information on everything from climate to crime. To check it out, click here.

US Census Bureau

A wealth of information on the lives of US citizens covering population data, geographic data and education. To check it out, click here.

European Union Open Data Portal

As the above, but based on data from European Union institutions. To check it out, click here.

Data.gov.uk

Data from the UK Government, including the British National Bibliography – metadata on all UK books and publications since 1950. To check it out, click here.

The CIA World Factbook

Information on history, population, economy, government, infrastructure and military of 267 countries. To check it out, click here.

Healthdata.gov

125 years of US healthcare data including claim-level Medicare data, epidemiology and population statistics. To check it out, click here.

NHS Health and Social Care Information Centre

Health data sets from the UK National Health Service. To check it out, click here.

Amazon Web Services public datasets

Huge resource of public data, including the 1000 Genomes Project, an attempt to build the most comprehensive database of human genetic information, and NASA’s database of satellite imagery of Earth. To check it out, click here.

Facebook Graph

Although much of the information on users’ Facebook profile is private, a lot isn’t – Facebook provide the Graph API as a way of querying the huge amount of information that its users are happy to share with the world (or can’t hide because they haven’t worked out how the privacy settings work). To check it out, click here.
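For a flavor of what querying the Graph API looks like, here is a Python sketch that only builds the request URL and makes no call; the field names and endpoint shape are illustrative, so check Facebook’s current Graph API documentation before relying on them.

```python
from urllib.parse import urlencode

# Graph API requests address a node (a user, page, etc.) plus query
# parameters, including an access token. Everything below is illustrative:
# the node name, fields, and token are placeholders.
GRAPH = "https://graph.facebook.com"

def graph_request_url(node, fields, access_token):
    """Build a Graph API request URL for the given node and fields."""
    params = {"fields": ",".join(fields), "access_token": access_token}
    return f"{GRAPH}/{node}?{urlencode(params)}"

url = graph_request_url("some_page", ["name", "likes"], "YOUR_ACCESS_TOKEN")
```

A real application would obtain an access token through Facebook’s login flow and issue an HTTP GET against the resulting URL to receive JSON.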

Gapminder

Compilation of data from sources including the World Health Organization and World Bank covering economic, medical and social statistics from around the world. To check it out, click here.

Google Trends

Statistics on search volume (as a proportion of total search) for any given term, since 2004. To check it out, click here.

Google Finance

40 years’ worth of stock market data, updated in real time. To check it out, click here.

Google Books Ngrams

Search and analyze the full text of any of the millions of books digitised as part of the Google Books project. To check it out, click here.

National Climatic Data Center

Huge collection of environmental, meteorological and climate data sets from the US National Climatic Data Center. The world’s largest archive of weather data. To check it out, click here.

DBpedia

Wikipedia comprises millions of pieces of data, structured and unstructured, on every subject under the sun. DBpedia is an ambitious project to catalogue this data and create a public, freely distributable database that allows anyone to analyze it. To check it out, click here.
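DBpedia is queried with SPARQL through its public endpoint (http://dbpedia.org/sparql). Here is a minimal Python sketch that builds, but does not send, such a request; the query assumes the endpoint’s predefined dbo: prefix.

```python
from urllib.parse import urlencode

# DBpedia's public SPARQL endpoint.
ENDPOINT = "http://dbpedia.org/sparql"

# Find cities with more than five million inhabitants. The dbo: prefix is
# assumed to be predefined on the endpoint.
query = """
SELECT ?city ?population WHERE {
  ?city a dbo:City ;
        dbo:populationTotal ?population .
  FILTER (?population > 5000000)
}
LIMIT 10
"""

def sparql_url(query, fmt="application/sparql-results+json"):
    """URL-encode a SPARQL query for an HTTP GET against the endpoint."""
    return ENDPOINT + "?" + urlencode({"query": query, "format": fmt})

url = sparql_url(query)
```

Issuing an HTTP GET against the resulting URL returns the matching cities as JSON result bindings.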

Topsy

Free, comprehensive social media data is hard to come by – after all, their data is what generates profits for the big players (Facebook, Twitter, etc.), so they don’t want to give it away. However, Topsy provides a searchable database of public tweets going back to 2006, as well as several tools to analyze the conversations. To check it out, click here.

Likebutton

Mines Facebook’s public data – globally and from your own network – to give an overview of what people “Like” at the moment. To check it out, click here.

New York Times

Searchable, indexed archive of news articles going back to 1851. To check it out, click here.

Freebase

A community-compiled database of structured data about people, places and things, with over 45 million entries. To check it out, click here.

Million Song Data Set

Metadata on over a million songs and pieces of music. Part of Amazon Web Services. To check it out, click here.”
See also Bernard Marr‘s blog at Big Data Guru

Lab Rats


Clare Dwyer Hogg at the Long and Short:  “Do you remember how you were feeling between 11 and 18 January, 2012? If you’re a Facebook user, you can scroll back and have a look. Your status updates might show you feeling a little bit down, or cheery. All perfectly natural, maybe. But if you were one of 689,003 unwitting users selected for an experiment to determine whether emotions are contagious, then maybe not. The report on its findings was published in March this year: “Experimental evidence of massive-scale emotional contagion through social networks”. How did Facebook do it? Very subtly, by adjusting the algorithm of selected users’ news feeds. One half had a reduced chance of being exposed to positive updates, the other had a more upbeat newsfeed. Would users be more inclined to feel positive or negative themselves, depending on which group they were in? Yes. The authors of the report found – by extracting the posts of the people they were experimenting on – that, indeed, emotional states can be transferred to others, “leading people to experience the same emotions without their awareness”.
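The mechanism described, a reduced chance that posts classified as positive appear in a user’s feed, can be pictured with a toy Python sketch; this is illustrative only, not Facebook’s actual code.

```python
import random

def filter_feed(posts, drop_probability, rng):
    """Drop each positive post with the given probability.

    Toy model of the experiment's treatment condition: the treatment
    group sees a feed where positive posts are probabilistically
    withheld; the control group would use drop_probability = 0.
    """
    feed = []
    for post in posts:
        if post["sentiment"] == "positive" and rng.random() < drop_probability:
            continue  # withheld from this user's feed
        feed.append(post)
    return feed

# A mock feed with alternating negative and positive posts.
posts = [{"id": i, "sentiment": "positive" if i % 2 else "negative"}
         for i in range(10)]
treatment_feed = filter_feed(posts, drop_probability=0.5, rng=random.Random(42))
```

The study’s question was then whether users served the filtered feed went on to post measurably less positive content themselves.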

It was legal (see Facebook’s Data Use Policy). Ethical? The answer to that lies in the shadows. A one-off? Not likely. When revealed last summer, the Facebook example created headlines around the world – and another story quickly followed. On 28 July, Christian Rudder, a Harvard math graduate and one of the founders of the internet dating site OkCupid, wrote a blog post titled “We Experiment on Human Beings!”. In it, he outlined a number of experiments they performed on their users, one of which was to tell people who were “bad matches” (only 30 per cent compatible, according to their algorithm) that they were actually “exceptionally good for each other” (which usually requires a 90 per cent match). OkCupid wanted to see if mere suggestion would inspire people to like each other (answer: yes). It was a technological placebo. The experiment found that the power of suggestion works – but so does the bona fide OkCupid algorithm. Outraged debates ensued, with Rudder defensive. “This is the only way to find this stuff out,” he said, in one heated radio interview. “If you guys have an alternative to the scientific method, I’m all ears.”…

The debate, says Mark Earls, should primarily be about civic responsibility, even before the ethical concerns. Earls is a towering figure in the world of advertising and communication, and his book Herd: How to Change Mass Behaviour By Harnessing our True Nature was a game-changer in how people in the industry thought about what drives us to make decisions. That was a decade ago, before Facebook, and it’s increasingly clear that his theories were prescient.

He kept an eye on the Facebook experiment furore, and was, he says, heavily against the whole concept. “They’re supporting the private space between people, their contacts and their social media life,” he says. “And then they abused it.”…”

4 Tech Trends Changing How Cities Operate


at Governing: “Louis Brandeis famously characterized states as laboratories for democracy, but cities could be called labs for innovation or new practices….When Government Technology magazine (produced by Governing’s parent company, e.Republic, Inc.) published its annual Digital Cities Survey, the results provided an interesting look at how local governments are using technology to improve how they deliver services, increase productivity and streamline operations…the survey also showed four technology trends changing how local government operates and serves its citizens:

1. Open Data

…Big cities were the first to open up their data and gained national attention for their transparency. New York City, which passed an open data law in 2012, leads all cities with more than 1,300 data sets open to the public; Chicago started opening up data to the public in 2010 following an executive order and is second among cities with more than 600; and San Francisco, which was the first major city to open the doors to transparency in 2009, had the highest score from the U.S. Open Data Census for the quality of its open data.
But the survey shows that a growing number of mid-sized jurisdictions are now getting involved, too. Tacoma, Wash., has a portal with 40 data sets that show how the city is spending tax dollars on public works, economic development, transportation and public safety. Ann Arbor, Mich., has a financial transparency tool that reveals what the city is spending on a daily basis, in some cases….

2. ‘Stat’ Programs and Data Analytics

…First, the so-called “stat” programs are proliferating. Started by the New York Police Department in the 1990s, CompStat was a management technique that merged data with staff feedback to drive better performance by police officers and precinct captains. Its success led to many imitations over the years and, as the digital survey shows, stat programs continue to grow in importance. For example, Louisville has used its “LouieStat” program to cut the city’s bill for unscheduled employee overtime by $23 million as well as to spot weaknesses in performance.
Second, cities are increasing their use of data analytics to measure and improve performance. Denver, Jacksonville, Fla., and Phoenix have launched programs that sift through data sets to find patterns that can lead to better governance decisions. Los Angeles has combined transparency with analytics to create an online system that tracks performance for the city’s economy, service delivery, public safety and government operations that the public can view. Robert J. O’Neill Jr., executive director of the International City/County Management Association, said that both of these tech-driven performance trends “enable real-time decision-making.” He argued that public leaders who grasp the significance of these new tools can deliver government services that today’s constituents expect.

3. Online Citizen Engagement

…Avondale, Ariz., population 78,822, is engaging citizens with a mobile app and an online forum that solicits ideas that other residents can vote up or down.
In Westminster, Colo., population 110,945, a similar forum allows citizens to vote online about community ideas and gives rewards to users who engage with the forum on a regular basis (free passes to a local driving range or fitness program). Cities are promoting more engagement activities to combat a decline in public trust in government. A public meeting alone is no longer enough to provide citizen engagement in today’s technology-dominated world. That’s why social media tools, online surveys and even e-commerce rewards programs are popping up in cities around the country to create high-value interaction with their citizens.

4. Geographic Information Systems

… Cities now use them to analyze financial decisions to increase performance, support public safety, improve public transit, run social service activities and, increasingly, engage citizens about their city’s governance.
Augusta, Ga., won an award for its well-designed and easy-to-use transit maps. Sugar Land, Texas, uses GIS to support economic development and, as part of its citizen engagement efforts, to highlight its capital improvement projects. GIS is now used citywide by 92 percent of the survey respondents. That’s significant because GIS has long been considered a specialized (and expensive) technology primarily for city planning and environmental projects….”

Introducing Hatch: Tell Stories With Purpose


Jay Geneske at the Rockefeller Foundation: “Stories with purpose don’t just materialize—they’re strategically planned, creatively crafted, and designed to achieve measurable outcomes.
Using the landscape report Digital Storytelling for Social Impact as our guide, we’ve rolled up our sleeves with our lead grantee, Hattaway Communications, and dozens of experts and leaders to come up with a tool that we think will be game-changing for the social impact sector.
We’ve named it Hatch.
Hatch is a concierge that connects you to a suite of tools and a growing community of storytellers to help you leverage your stories to drive social impact.
Each of Hatch’s five sections is designed to help you craft, curate and share impactful stories. As you build your storytelling profile, Hatch will suggest tools, case studies and resources customized to your needs. These recommendations will always be saved to your profile so you can access them later.

Here’s just a sampling of what you’ll find:

How to Make the Case to Invest in Story
Taming the Measurement Monster
Your CEO as Master Storyteller
The 40/60 Content Rule: Less Time Writing, More Time Sharing
What Makes a Story Great
Case studies from UNICEF, The Gates Foundation, charity: water, and Greenpeace.
Tips like Nonprofit Photography Ethics, Recruiting Volunteers on LinkedIn, and Using Tumblr to Collect and Share Stories.
Guides for using and measuring the impact of platforms like Facebook, Medium, Twitter, and Instagram….”