Infoglut: How Too Much Information Is Changing the Way We Think and Know


New book by Mark Andrejevic: “Today, more mediated information is available to more people than at any other time in human history. New and revitalized sense-making strategies multiply in response to the challenges of “cutting through the clutter” of competing narratives and taming the avalanche of information. Data miners, “sentiment analysts,” and decision markets offer to help bodies of data “speak for themselves”—making sense of their own patterns so we don’t have to. Neuromarketers and body language experts promise to peer behind people’s words to see what their brains are really thinking and feeling. New forms of information processing promise to displace the need for expertise and even comprehension—at least for those with access to the data.
Infoglut explores the connections between these wide-ranging sense-making strategies for an era of information overload and “big data,” and the new forms of control they enable. Andrejevic critiques the popular embrace of deconstructive debunkery, calling into question the post-truth, post-narrative, and post-comprehension politics it underwrites, and tracing a way beyond them.”

Infographics: Winds of change


Book Review in the Economist:

  • Data Points: Visualisation That Means Something. By Nathan Yau. Wiley; 300 pages; $32 and £26.99.
  • Facts are Sacred. By Simon Rogers. Faber and Faber; 311 pages; £20.
  • The Infographic History of the World. By James Ball and Valentina D’Efilippo. Collins; 224 pages; £20.

“IN THE late 1700s William Playfair, a Scottish engineer, created the bar chart, pie chart and line graph. These amounted to visual breakthroughs, innovations that allowed people to see patterns in data that they would otherwise have missed if they just stared at long tables of numbers.
Big data, the idea that the world is replete with more information than ever, is now all the rage. And the search for fresh and enlightened ways to help people absorb it is causing a revolution. A new generation of statisticians and designers—often the same person—are working on computer technologies and visual techniques that will depict data at scales and in forms previously unimaginable. The simple line graph and pie chart are being supplemented by things like colourful, animated bubble charts, which can present more variables. Three-dimensional network diagrams show ratios and relationships that were impossible to depict before.
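As an illustration of the point about bubble charts packing in extra variables, here is a minimal sketch (in Python with matplotlib) of a chart that encodes four variables at once: x position, y position, bubble size and bubble colour. The data and labels are invented for illustration and are not drawn from the books reviewed above.

```python
import matplotlib.pyplot as plt

# Invented data: each point carries four variables at once.
gdp_per_capita = [5_000, 15_000, 30_000, 45_000, 60_000]   # x position
life_expectancy = [62, 70, 76, 80, 83]                      # y position
population_m = [40, 120, 15, 65, 330]                       # bubble size
region_code = [0, 1, 2, 2, 3]                               # bubble colour

plt.scatter(gdp_per_capita, life_expectancy,
            s=[p * 3 for p in population_m],  # scale population into marker area
            c=region_code, cmap="viridis", alpha=0.6)
plt.xlabel("GDP per capita ($)")
plt.ylabel("Life expectancy (years)")
plt.title("A bubble chart encodes four variables on one plot")
plt.show()
```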

The 20 Basics of Open Government


About The 20 Basics of Open Government: “The 20 Basics of Open Government was created with digital love and sweat by the Open Forum Foundation. We did this primarily because it didn’t exist, but really needed to. As we started looking around, we also realized that the terminology of open government is used by a lot of different people to mean a lot of different things. For example, there are multiple groupings of transparency advocates, each with its own perspective; there’s the participation community; and then more generally there are techies and govies, each of whom normally uses a different language anyway.

Watching what is going on around the world in national, state, and local governments, we think opengov is maturing and that the time has come for a basics resource for newbies. Our goal was to include the full expanse of open government and show how it all ties together so that when you, the astute reader, meet up with one of the various opengov cliques that uses the terminology in a narrowly defined way, you can see how they fit into the bigger picture. You should also be able to determine how opengov can best be applied to benefit whatever you’re up to, while keeping in mind the need to provide both access for citizens to engage with government and access to information.
Have a read through it, and let us know what you think! When you find a typo – or something you disagree with – or something we missed, let us know that as well. The easiest way to do it is right there in the comments (we’re not afraid to be called out in public!), but we’re open to email and Twitter as well. We’re looking forward to hearing what you think!”

Governing Gets Social


Government Executive: “More than 4 million people joined together online in December 2011 to express outrage over the Stop Online Piracy Act, a bill Congress was considering that would have made content-sharing websites legally responsible for their users’ copyright violations, with punishments including prison time.
Experts called the campaign a victory for digital democracy: The people had spoken—the ones who don’t have lobbyists or make large campaign donations. And just as important, their representatives had listened.
There was a problem, though. Through social media, ordinary citizens told Congress and the president what they didn’t want. But the filmmakers, recording artists and others concerned about protecting intellectual property rights, many of whom supported SOPA, had a legitimate beef. And there was no good way to gauge what measures the public would support to address that.
A handful of staffers in the office of Rep. Darrell Issa, R-Calif., thought they might have a solution. As the debate over SOPA rose to a boil, they launched the Madison Project, an online forum where users could comment on proposed legislation, suggest alternative text and vote those suggestions up or down. It was a cross between Microsoft Word’s track changes function and crowdsourced book reviews on Amazon.
Not all examples of this new breed of interactive social media happen at the macro level of legislation and presidential directives. Agencies across government have been turning to the platform IdeaScale, for instance, to gather feedback on more granular policy questions.
Once an agency poses a question on IdeaScale, anyone can offer a response or suggestion and other discussion participants can vote those suggestions up or down. That typically means the wisdom of the masses will drive the best ideas from the most qualified participants to the top of the queue without officials having to sift through every suggestion….
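The ranking mechanic described here is essentially a sort by net votes. As a rough, illustrative sketch only (the names and data model below are invented and do not reflect IdeaScale’s actual API), it might look like this in Python:

```python
# Hypothetical sketch of up/down-vote ranking; not IdeaScale's real data model.
from dataclasses import dataclass

@dataclass
class Suggestion:
    text: str
    upvotes: int = 0
    downvotes: int = 0

    @property
    def score(self) -> int:
        # Net votes: the crowd's aggregate judgment of the idea.
        return self.upvotes - self.downvotes

suggestions = [
    Suggestion("Publish the dataset as CSV", upvotes=42, downvotes=3),
    Suggestion("Hold a monthly public webinar", upvotes=10, downvotes=8),
    Suggestion("Add an open API", upvotes=55, downvotes=2),
]

# The highest-scoring suggestions surface first, without manual triage.
for s in sorted(suggestions, key=lambda s: s.score, reverse=True):
    print(f"{s.score:+d}  {s.text}")
```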
What many people see as the endgame for projects like Madison and Textizen is a vibrant civic culture in which people report potholes, sign petitions and even vote online or through mobile devices.
The Internet is great at gathering and processing information, but it’s not as good at verifying who that information is coming from, says Alan Shark, a Rutgers University professor and executive director of the Public Technology Institute, a nonprofit that focuses on technology issues affecting local governments.
“Star Trek is here,” Shark says. “We have these personal communicators, their use is continuing to grow dramatically and we’re going to have broader civic participation because of it. The missing piece is trusted identities.”

The Real-Time City? Big Data and Smart Urbanism


New paper by Rob Kitchin from the National University of Ireland, Maynooth (NUI Maynooth) – NIRSA: “‘Smart cities’ is a term that has gained traction in academia, business and government to describe cities that, on the one hand, are increasingly composed of and monitored by pervasive and ubiquitous computing and, on the other, whose economy and governance is being driven by innovation, creativity and entrepreneurship, enacted by smart people. This paper focuses on the former and how cities are being instrumented with digital devices and infrastructure that produce ‘big data’, which enable real-time analysis of city life, new modes of technocratic urban governance, and a re-imagining of cities. The paper details a number of projects that seek to produce a real-time analysis of the city and provides a critical reflection on the implications of big data and smart urbanism.”

Open Government is an Open Conversation


Lisa Ellman and Hollie Russon Gilman at the White House Blog: “President Obama launched the first U.S. Open Government National Action Plan in September 2011, as part of the Nation’s commitment to the principles of the global Open Government Partnership. The Plan laid out twenty-six concrete steps the United States would take to promote public participation in government, increase transparency in government, and manage public resources more effectively.
A year and a half later, we have fulfilled twenty-four of the Plan’s prescribed commitments—including launching the online We the People petition platform, which has been used by more than 9.6 million people, and unleashing thousands of government data resources as part of the Administration’s Open Data Initiatives.
We are proud of this progress, but recognize that there is always more work to be done to build a more efficient, effective, and transparent government. In that spirit, as part of our ongoing commitment to the international Open Government Partnership, the Obama Administration has committed to develop a second National Action Plan on Open Government.
To accomplish this task effectively, we’ll need all hands on deck. That’s why we plan to solicit and incorporate your input as we develop the National Action Plan “2.0.”…
Over the next few months, we will continue to gather your thoughts. We will leverage online platforms such as Quora, Google+, and Twitter to communicate with the public and collect feedback. We will meet with members of open government civil society organizations and other experts, to ensure all voices are brought to the table. We will solicit input from Federal agencies on lessons learned from their unique experiences, and gather information about successful initiatives that could potentially be scaled across government. And finally, we will canvass the international community for their diverse insights and innovative ideas.”

Frontiers in Massive Data Analysis


New Report from the National Research Council: “From Facebook to Google searches to bookmarking a webpage in our browsers, today’s society has become one with an enormous amount of data. Some internet-based companies such as Yahoo! are even storing exabytes (10^18 bytes) of data. Like these companies and the rest of the world, scientific communities are also generating large amounts of data—mostly terabytes and in some cases near petabytes—from experiments, observations, and numerical simulation. However, the scientific community, along with the defense enterprise, has been a leader in generating and using large data sets for many years. The issue that arises with this new type of large data is how to handle it—this includes sharing the data, enabling data security, working with different data formats and structures, dealing with the highly distributed data sources, and more.
Frontiers in Massive Data Analysis presents the Committee on the Analysis of Massive Data’s work to make sense of the current state of data analysis for mining of massive sets of data, to identify gaps in the current practice and to develop methods to fill these gaps. The committee thus examines the frontiers of research that is enabling the analysis of massive data which includes data representation and methods for including humans in the data-analysis loop. The report includes the committee’s recommendations, details concerning types of data that build into massive data, and information on the seven computational giants of massive data analysis.”

City Data: Big, Open and Linked


Working Paper by Mark S. Fox (University of Toronto): “Cities are moving towards policymaking based on data. They are publishing data using Open Data standards, linking data from disparate sources, allowing the crowd to update their data with Smart Phone Apps that use Open APIs, and applying “Big Data” techniques to discover relationships that lead to greater efficiencies.
One Big City Data example is from New York City (Schönberger & Cukier, 2013). Building owners were illegally converting their buildings into rooming houses that contained 10 times the number of people they were designed for. These buildings posed a number of problems, including fire hazards, drugs, crime, disease and pest infestations. There are over 900,000 properties in New York City and only 200 inspectors, who received over 25,000 illegal conversion complaints per year. The challenge was to distinguish nuisance complaints from those worth investigating; current methods resulted in only 13% of inspections ending in vacate orders.
New York’s Analytics team created a dataset that combined data from 19 agencies including buildings, preservation, police, fire, tax, and building permits. By combining data analysis with expertise gleaned from inspectors (e.g., buildings that recently received a building permit were less likely to be a problem as they were being well maintained), the team was able to develop a rating system for complaints. Based on their analysis of this data, they were able to rate complaints such that in 70% of their visits, inspectors issued vacate orders, a fivefold increase in efficiency…
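The paper does not reproduce the city’s actual model, but the general idea of fusing signals from several agency datasets into a single priority score can be sketched roughly as follows; the field names, weights, and records below are invented for illustration.

```python
# Hypothetical sketch of prioritising illegal-conversion complaints by combining
# signals from several agency datasets. Fields, weights, and data are illustrative;
# they are not the NYC team's actual model.

def complaint_priority(building):
    score = 0.0
    if building.get("tax_delinquent"):           # tax data: financial distress
        score += 2.0
    if building.get("recent_building_permit"):   # permit data: owner is maintaining it
        score -= 1.5
    score += 0.5 * building.get("prior_fire_incidents", 0)   # fire department data
    score += 0.3 * building.get("open_violations", 0)        # buildings department data
    return score

complaints = [
    {"id": "A-101", "tax_delinquent": True, "prior_fire_incidents": 2, "open_violations": 5},
    {"id": "B-202", "recent_building_permit": True, "open_violations": 1},
]

# Send inspectors to the highest-scoring complaints first.
for c in sorted(complaints, key=complaint_priority, reverse=True):
    print(c["id"], round(complaint_priority(c), 2))
```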
This paper provides an introduction to the concepts that underlie Big City Data. It explains the concepts of Open, Unified, Linked and Grounded data that lie at the heart of the Semantic Web. It then builds on this by discussing Data Analytics, which includes Statistics, Pattern Recognition and Machine Learning. Finally we discuss Big Data as the extension of Data Analytics to the Cloud where massive amounts of computing power and storage are available for processing large data sets. We use city data to illustrate each.”

Microsensors help map crowdsourced pollution data


Elena Craft in GreenBiz: Michael Heimbinder, a Brooklyn entrepreneur, hopes to empower individuals with his small-scale air quality monitoring system, AirCasting. The AirCasting system uses a mobile, Bluetooth-enabled air monitor not much larger than a smartphone to measure carbon dioxide, carbon monoxide, nitrogen dioxide, particulate matter and other pollutants. An accompanying Android app records and formats the information to an emissions map.
Alternatively, another instrument, the Air Quality Egg, comes pre-assembled ready to use. Innovative air monitoring systems, such as AirCasting or the Air Quality Egg, empower ordinary citizens to monitor the pollution they encounter daily and proactively address problematic sources of pollution.
This technology is part of a growing movement to enable the use of small sensors. In response to inquiries about small-sensor data, the EPA is researching the next generation of air measuring technologies. EPA experts are working with sensor developers to evaluate data quality and understand useful sensor applications. Through this ongoing collaboration, the EPA hopes to bolster measurements from conventional, stationary air-monitoring systems with data collected from individuals’ air quality microsensors….
Like many technologies emerging from the big data revolution and innovations in the energy sector, microsensing technology provides a wealth of high-quality data at a relatively low cost. It allows us to track previously undetected air pollution from traditional sources of urban smog, such as highways, and unconventional sources of pollution. Microsensing technology not only educates the public, but also helps to enlighten regulators so that policymakers can work from the facts to protect citizens’ health and welfare.

Capitol Words


About Capitol Words: “For every day Congress is in session, Capitol Words visualizes the most frequently used words in the Congressional Record, giving you an at-a-glance view of which issues lawmakers address on a daily, weekly, monthly and yearly basis. Capitol Words lets you see the most popular words spoken by lawmakers on the House and Senate floor.

Methodology

The contents of the Congressional Record are downloaded daily from the website of the Government Printing Office. The GPO distributes the Congressional Record in ZIP files containing the contents of the record in plain-text format.

Each text file is parsed and turned into an XML document, with things like the title and speaker marked up. The contents of each file are then split up into words and phrases — from one word to five.

The resulting data is saved to a search engine. Capitol Words has data from 1996 to the present.”
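As a rough sketch of the phrase-splitting step described above (not Capitol Words’ actual code), extracting every one- to five-word phrase from a parsed speech and counting them might look like this in Python:

```python
# Illustrative sketch only: split plain text into 1- to 5-word phrases and count them.
import re
from collections import Counter

def extract_ngrams(text, max_n=5):
    """Yield every word sequence of length 1 to max_n from plain text."""
    words = re.findall(r"[a-z']+", text.lower())
    for n in range(1, max_n + 1):
        for i in range(len(words) - n + 1):
            yield " ".join(words[i:i + n])

# Toy example standing in for one day's Congressional Record text.
record_text = "The Senate will resume consideration of the highway bill."
counts = Counter(extract_ngrams(record_text))
print(counts.most_common(5))
```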