Five public participation books from 2014 you should take the time to read


at Bang The Table: “Every year dozens of books are published on the subject of community engagement, civic engagement, public engagement or public participation (depending on your fancy). None of us has time to read them all, so how to choose?
I’ve compiled a short and eclectic list here that spans the breadth of issues that public participation practitioners and their public sector managers are likely to be thinking about: legal questions, organisational culture, bringing joy back into citizen engagement, thoughtful living and thoughtful engagement, and DIY citizenship (and what that means for the public sector).
Blocking Public Participation: The use of strategic litigation to silence political expression
written by Byron M. Sheldrick, published by Wilfrid Laurier University Press
The blurb…

Strategic litigation against public participation (SLAPP) involves lawsuits brought by individuals, corporations, groups, or politicians to curtail political activism and expression. An increasingly large part of the political landscape in Canada, they are often launched against those protesting, boycotting, or participating in some form of political activism. A common feature of SLAPPs is that their intention is rarely to win the case or secure a remedy; rather, the suit is brought to create a chill on political expression….
Making Policy Public: Participatory Bureaucracy in American Democracy
written by Susan L. Moffitt, published by Cambridge University Press
The blurb…
This book challenges the conventional wisdom that government bureaucrats inevitably seek secrecy and demonstrates how and when participatory bureaucracy manages the enduring tension between bureaucratic administration and democratic accountability….
Making Democracy Fun: How Game Design Can Empower Citizens and Transform Politics
written by Josh A. Lerner, published by MIT Press
The blurb…

Anyone who has ever been to a public hearing or community meeting would agree that participatory democracy can be boring. Hours of repetitive presentations, alternatingly alarmist or complacent, for or against, accompanied by constant heckling, often with no clear outcome or decision….

What Would Socrates Do?: Self-Examination, Civic Engagement, and the Politics of Philosophy

written by Joel Alden Schlosser, published by Cambridge University Press
The blurb…
Socrates continues to be an extremely influential force to this day; his work is featured prominently in the work of contemporary thinkers ranging from Hannah Arendt and Leo Strauss, to Michel Foucault and Jacques Rancière….
DIY Citizenship: Critical Making and Social Media
edited by Matt Ratto & Megan Boler, published by MIT Press
The blurb…
Today, DIY — do-it-yourself — describes more than self-taught carpentry. Social media enables DIY citizens to organize and protest in new ways (as in Egypt’s “Twitter revolution” of 2011) and to re-purpose corporate content (or create new user-generated content) in order to offer political counter-narratives….”

Macon Money: A serious game for civic engagement


Wilson Center Commons Lab: “In 2011, residents of Macon, Georgia received over $65,000 in free local currency—with a catch.
This money was locked in bonds redeemable for an unknown value between $10 and $100. Prior to circulation, each bond was cut in half. Residents of Macon wishing to “cash” their bonds were required to first find the missing half, held by an unknown community member.
These were the rules for Macon Money, a real-world game created by Area/Code Inc. in collaboration with several community partners. Benjamin Stokes was brought on board by the Knight Foundation as an advisor and researcher for the game. Stokes describes real-world games as activities where “playing the game is congruent with making impact in the world; making progress in the game also does something in the real world.” Macon Money was designed to foster civic engagement through a number of means.
First, the two halves of each bond were intentionally distributed in neighborhoods on opposite ends of Macon, or in neighborhoods characterized by different socio-economic status. This “game mechanic” forced residents who would not normally interact to collaborate towards a common goal.  Bond holders found each other through a designated website, social media platforms including Facebook and Twitter, and even serendipitous face-to-face interaction.
Bonds were redeemable for Macon Money, a currency that could only be spent at local businesses (which were reimbursed with U.S. currency). This ensured continuing engagement with the Macon community, and in some cases continuing engagement between players. Macon Money was also designed to foster community identity through the visual design of the currency itself. Macon dollars depicted symbols of communal value, such as a picture of Otis Redding, a native of the town.
While the game Macon Money is over, researchers continue to analyze how the game helped foster civic engagement within a local community. Most recently, Stokes described these impacts during a talk at American University co-sponsored by the American University Game Lab, the Serious Games Initiative at the Woodrow Wilson International Center for Scholars, the AU Library, and the American University Center for Media and Social Impact. A video of the talk was recently posted here:…”

Using Twitter to get ground truth on floods


Interview with Floodtags founder Jurjen Wagemaker at Global Pulse: “…Twitter has proved to be a fantastic flood monitoring tool and we encourage people to share even more of their flood experiences on Twitter. Now the difficult part is to create the right flood filters and enrichments, so that disaster managers only need to look at a fraction of the hundreds of thousands of observations coming in.

So we enrich and analyse all flood data in real time and present it in an understandable format through our web service. A good example is the water depth of a flood. It turns out that a large number of people mention both the flood depth and the location where they observed it. Take for instance January 29th, 2014: out of the 360,000 tweets we collected on floods, 15,000 included water depth observations (see picture). Together with the Dutch water management institute Deltares (@arnejanvl) we are working to develop a sound interpretation framework for these observations to create real-time floodmaps. For reference, making a reliable floodmap of the January 2013 flood took a total of nine days. This was thanks to the hard work of the disaster management office and the HOT team (Humanitarian OpenStreetMap Team)….
We will launch the site at the upcoming Data Innovation for Policy Makers conference in Bali. And from that date onwards you can use Floodtags to get realtime flood information in Indonesia. Just go to Floodtags.com and sign-up. Especially when it rains it can become quite interesting: you can search for different neighbourhoods and see what people tweeted and how deep the water is. There is also a realtime tweet density map and you can request tweet statistics (e.g. figure 5, where we compare flood tweets with flood response tweets) – and we have got so much more to come. “
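The filtering step Wagemaker describes, reducing hundreds of thousands of tweets to the fraction carrying water-depth observations, can be sketched roughly as follows. The keywords, regular expression and sample tweets are illustrative assumptions, not Floodtags’ actual pipeline:

```python
import re

# Hypothetical depth pattern: figures such as "50 cm" or "1.5 m".
DEPTH_RE = re.compile(r"(\d+(?:[.,]\d+)?)\s*(cm|m)\b", re.IGNORECASE)
FLOOD_WORDS = ("flood", "banjir")  # "banjir" is Indonesian for flood

def extract_depth_cm(tweet: str):
    """Return the water depth in centimetres if the tweet mentions a
    flood and a depth figure, else None."""
    text = tweet.lower()
    if not any(word in text for word in FLOOD_WORDS):
        return None
    match = DEPTH_RE.search(text)
    if not match:
        return None
    value = float(match.group(1).replace(",", "."))
    return value * 100 if match.group(2).lower() == "m" else value

# Illustrative tweets, not real Floodtags data.
tweets = [
    "Banjir di Kampung Melayu, air 80 cm!",
    "Flood near the station, about 1.5 m deep",
    "Beautiful sunny day",
]
depths = [d for d in (extract_depth_cm(t) for t in tweets) if d is not None]
print(depths)  # [80.0, 150.0]
```

A real system would add geolocation, deduplication and outlier checks before feeding observations into a flood map.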

Big Data: The Key Vocabulary Everyone Should Understand


Bernard Marr at LinkedIn Pulse: “The field of Big Data requires more clarity and I am a big fan of simple explanations. This is why I have attempted to provide simple explanations for some of the most important technologies and terms you will come across if you’re looking at getting into big data.

Here they are:
Algorithm: A mathematical formula or statistical process run by software to perform an analysis of data. It usually consists of multiple calculation steps and can be used to automatically process data or solve problems.
Amazon Web Services: A collection of cloud computing services offered by Amazon to help businesses carry out large-scale computing operations (such as big data projects) without having to invest in their own server farms and data storage warehouses. Essentially, storage space, processing power and software operations are rented rather than bought and installed from scratch.
Analytics: The process of collecting, processing and analyzing data to generate insights that inform fact-based decision-making. In many cases it involves software-based analysis using algorithms. For more, have a look at my post: What the Heck is… Analytics
Big Table: Google’s proprietary data storage system, which it uses to host, among other things, its Gmail, Google Earth and YouTube services. It is also made available for public use through the Google App Engine.
Biometrics: Using technology and analytics to identify people by one or more of their physical traits, such as face recognition, iris recognition, fingerprint recognition, etc. For more, see my post: Big Data and Biometrics
Cassandra: A popular open source database management system managed by The Apache Software Foundation that has been designed to handle large volumes of data across distributed servers.
Cloud: Cloud computing, or computing “in the cloud”, simply means software or data running on remote servers, rather than locally. Data stored “in the cloud” is typically accessible over the internet, wherever in the world the owner of that data might be. For more, check out my post: What The Heck is… The Cloud?
Distributed File System: Data storage system designed to store large volumes of data across multiple storage devices (often cloud based commodity servers), to decrease the cost and complexity of storing large amounts of data.
….”
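As a toy illustration of the “Algorithm” entry above (the example is mine, not Marr’s): a moving average is a small algorithm whose multiple calculation steps are applied mechanically to data.

```python
def moving_average(values, window):
    """Average each run of `window` consecutive values: a simple
    multi-step algorithm applied mechanically to a data series."""
    if window <= 0 or window > len(values):
        raise ValueError("window must be between 1 and len(values)")
    return [
        sum(values[i:i + window]) / window
        for i in range(len(values) - window + 1)
    ]

# Smooth a series of daily counts with a 3-day window.
daily_views = [10, 12, 11, 15, 20, 18]
print([round(x, 2) for x in moving_average(daily_views, 3)])
# [11.0, 12.67, 15.33, 17.67]
```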
See also: Big Data: Using SMART Big Data, Analytics and Metrics To Make Better Decisions and Improve Performance

Understanding "New Power"


Article by Jeremy Heimans and Henry Timms in Harvard Business Review: “We all sense that power is shifting in the world. We see increasing political protest, a crisis in representation and governance, and upstart businesses upending traditional industries. But the nature of this shift tends to be either wildly romanticized or dangerously underestimated.
There are those who cherish giddy visions of a new techno-utopia in which increased connectivity yields instant democratization and prosperity. The corporate and bureaucratic giants will be felled and the crowds coronated, each of us wearing our own 3D-printed crown. There are also those who have seen this all before. Things aren’t really changing that much, they say. Twitter supposedly toppled a dictator in Egypt, but another simply popped up in his place. We gush over the latest sharing-economy start-up, but the most powerful companies and people seem only to get more powerful.
Both views are wrong. They confine us to a narrow debate about technology in which either everything is changing or nothing is. In reality, a much more interesting and complex transformation is just beginning, one driven by a growing tension between two distinct forces: old power and new power.
Old power works like a currency. It is held by few. Once gained, it is jealously guarded, and the powerful have a substantial store of it to spend. It is closed, inaccessible, and leader-driven. It downloads, and it captures.
New power operates differently, like a current. It is made by many. It is open, participatory, and peer-driven. It uploads, and it distributes. Like water or electricity, it’s most forceful when it surges. The goal with new power is not to hoard it but to channel it.

The battle and the balancing between old and new power will be a defining feature of society and business in the coming years. In this article, we lay out a simple framework for understanding the underlying dynamics at work and how power is really shifting: who has it, how it is distributed, and where it is heading….”

Linguistic Mapping Reveals How Word Meanings Sometimes Change Overnight


Emerging Technology From the arXiv: “In October 2012, Hurricane Sandy approached the eastern coast of the United States. At the same time, the English language was undergoing a small earthquake of its own. Just months before, the word “sandy” was an adjective meaning “covered in or consisting mostly of sand” or “having light yellowish brown color.” Almost overnight, this word gained an additional meaning as a proper noun for one of the costliest storms in U.S. history.
A similar change occurred to the word “mouse” in the early 1970s when it gained the new meaning of “computer input device.” In the 1980s, the word “apple” became a proper noun synonymous with the computer company. And later, the word “windows” followed a similar course after the release of the Microsoft operating system.
All this serves to show how language constantly evolves, often slowly but at other times almost overnight. Keeping track of these new senses and meanings has always been hard. But not anymore.
Today, Vivek Kulkarni at Stony Brook University in New York and a few pals show how they have tracked these linguistic changes by mining the corpus of words stored in databases such as Google Books, movie reviews from Amazon, and of course the microblogging site Twitter.
These guys have developed three ways to spot changes in the language. The first is a simple count of how often words are used, using tools such as Google Trends. For example, in October 2012, the frequencies of the words “Sandy” and “hurricane” both spiked in the run-up to the storm. However, only one of these words changed its meaning, something that a frequency count cannot spot.
So Kulkarni and co have a second method in which they label all of the words in the databases according to their parts of speech, whether a noun, a proper noun, a verb, an adjective and so on. This clearly reveals a change in the way the word “Sandy” was used, from adjective to proper noun, while also showing that the word “hurricane” had not changed.
The parts-of-speech technique is useful but not infallible. It cannot pick up the change in meaning of the word “mouse,” since both senses are nouns. So the team have a third approach.
This maps the linguistic vector space in which words are embedded. The idea is that words in this space are close to other words that appear in similar contexts. For example, the word “big” is close to words such as “large,” “huge,” “enormous,” and so on.
By examining the linguistic space at different points in history, it is possible to see how meanings have changed. For example, in the 1950s, the word “gay” was close to words such as “cheerful” and “dapper.” Today, however, it has moved significantly to be closer to words such as “lesbian,” “homosexual,” and so on.
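Closeness in such a vector space is typically measured with cosine similarity between word vectors. A minimal sketch with hand-made toy vectors (real embeddings are learned from corpora and have hundreds of dimensions; these three-dimensional values are invented for illustration):

```python
import math

# Toy 3-dimensional "embeddings"; real models learn these from text.
vectors = {
    "big":      [0.90, 0.10, 0.00],
    "large":    [0.85, 0.15, 0.05],
    "enormous": [0.80, 0.20, 0.10],
    "mouse":    [0.10, 0.90, 0.30],
}

def cosine(u, v):
    """Cosine similarity: 1.0 for identical directions, 0 for orthogonal."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def nearest(word):
    """Rank all other words by similarity to `word`, closest first."""
    return sorted(
        (w for w in vectors if w != word),
        key=lambda w: cosine(vectors[word], vectors[w]),
        reverse=True,
    )

print(nearest("big"))  # "large" and "enormous" rank above "mouse"
```

Tracking how a word’s nearest neighbours shift between corpora from different periods is what reveals a change of meaning.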
Kulkarni and co examine three different databases to see how words have changed: the set of five-word sequences that appear in the Google Books corpus, Amazon movie reviews since 2000, and messages posted on Twitter between September 2011 and October 2013.
Their results reveal not only which words have changed in meaning, but when the change occurred and how quickly. For example, before the 1970s, the word “tape” was used almost exclusively to describe adhesive tape but then gained an additional meaning of “cassette tape.”…”

A micro-democratic perspective on crowd-work


New paper by Karin Hansson: “Social media has provided governments with new means to improve efficiency and innovation, by engaging a crowd in the gathering and development of data. These collaborative processes are also described as a way to improve democracy by enabling a more transparent and deliberative democracy where citizens participate more directly in decision processes on different levels. However, the dominant research in the e-democratic field takes a government perspective rather than a citizen perspective. E-democracy from the perspective of the individual actor, in a global context, is less developed.
In this paper I therefore develop a model for a democratic process outside the realm of the nation state, in a performative state where inequality is the norm and the state is unclear and fluid. In this process, e-participation means an ICT-supported method to get a diversity of opinions and perspectives rather than a single one. This micro perspective on democratic participation online might be useful for the development of tools for more democratic online crowds…”

Building a complete Tweet index


Yi Zhuang (@yz) at Twitter: “Since that first simple Tweet over eight years ago, hundreds of billions of Tweets have captured everyday human experiences and major historical events. Our search engine excelled at surfacing breaking news and events in real time, and our search index infrastructure reflected this strong emphasis on recency. But our long-standing goal has been to let people search through every Tweet ever published.
This new infrastructure enables many use cases, providing comprehensive results for entire TV and sports seasons, conferences (#TEDGlobal), industry discussions (#MobilePayments), places, businesses and long-lived hashtag conversations across topics, such as #JapanEarthquake, #Election2012, #ScotlandDecides, #HongKong, #Ferguson and many more. This change will be rolling out to users over the next few days.
In this post, we describe how we built a search service that efficiently indexes roughly half a trillion documents and serves queries with an average latency of under 100ms….”
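At its core, a search service like this rests on an inverted index: a mapping from each term to the documents that contain it. A minimal in-memory sketch (Twitter’s production system shards and compresses this across many machines; the code below is only illustrative):

```python
from collections import defaultdict

class InvertedIndex:
    """Map each term to the set of document ids containing it."""

    def __init__(self):
        self.postings = defaultdict(set)

    def add(self, doc_id, text):
        """Index a document: record doc_id under every term it contains."""
        for term in text.lower().split():
            self.postings[term].add(doc_id)

    def search(self, query):
        """Return ids of documents containing every query term
        (intersection of the terms' posting sets)."""
        terms = query.lower().split()
        if not terms:
            return set()
        result = self.postings[terms[0]].copy()
        for term in terms[1:]:
            result &= self.postings[term]
        return result

# Illustrative documents, not real tweets.
index = InvertedIndex()
index.add(1, "first simple Tweet")
index.add(2, "breaking news about the storm")
index.add(3, "simple breaking change")
print(index.search("simple"))           # {1, 3}
print(index.search("simple breaking"))  # {3}
```

Serving queries in under 100ms over half a trillion documents then becomes a matter of partitioning these postings lists and merging results across shards.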

Innovating Practice in a Culture of Expertise


Aleem Walji at SSI Review: “When I joined the World Bank five years ago to lead a new innovation practice, the organization asked me to help expand the space for experimentation and learning with an emphasis on emergent technologies. But that mandate was intimidating and counter-intuitive in an “expert-driven” culture. Experts want detailed plans, budgets, clear success indicators, and minimal risk. But innovation is about managing risk and navigating uncertainty intelligently. You fail fast and fail forward. It has been a step-by-step process, and the journey is far from over, but the World Bank today sees innovation as essential to achieving its mission.
It’s taught me a lot about seeding innovation in a culture of expertise, including phasing change across approaches to technology, teaming, problem solving, and ultimately leadership.
Innovating technologies: As a newcomer, my goal was not to try to change the World Bank’s culture. I was content to carve out a space where my team could try new things we couldn’t do elsewhere in the institution, learn fast, and create impact. Our initial focus was leveraging technologies with approaches that, if they took root, could be very powerful.
Over the first 18 to 24 months, we served as an incubator for ideas and had a number of successes that built on senior management’s support for increased access to information. The Open Data Initiative, for example, made our trove of information on countries, people, projects, and programs widely available and searchable. To our surprise, people came in droves to access it. We also launched the Mapping for Results initiative, which mapped project results and poverty data to show the relationship between where we lend and where the poor live, and the results of our work. These programs are now mainstream at the World Bank and have penetrated other development institutions….
Innovating teams: The lab idea—phase two—would require collaboration and experimentation in an unprecedented way. For example, we worked with other parts of the World Bank and a number of outside organizations to incubate the Open Development Technology Alliance, now part of the digital engagement unit of the World Bank. It worked to enhance accountability, and improve the delivery and quality of public services through technology-enabled citizen engagement such as using mobile phones, interactive mapping, and social media to draw citizens into collective problem mapping and problem solving….
Innovating problem solving: At the same time, we recognized that we face some really complex problems that the World Bank’s traditional approach of lending to governments and supervising development projects is not solving. For this, we needed another type of lab that innovated the very way we solve problems. We needed a deliberate process for experimenting, learning, iterating, and adapting. But that’s easier said than done. At our core, we are an expert-driven organization with know-how in disciplines ranging from agricultural economics and civil engineering to maternal health and early childhood development. Our problem-solving architecture is rooted in designing technical solutions to complicated problems. Yet the hardest problems in the world defy technical fixes. We work in contexts where political environments shift, leaders change, and conditions on the ground constantly evolve. Problems like climate change, financial inclusion, food security, and youth unemployment demand new ways of solving old problems.
The innovation we most needed was innovation in the leadership architecture of how we confront complex challenges. We share knowledge and expertise on the “what” of reform, but the “how” is what we need most. We need to marry know-how with do-how. We need multiyear, multi-stakeholder, and systems approaches to solving problems. We need to get better at framing and reframing problems, integrative thinking, and testing a range of solutions. We need to iterate and course-correct as we learn what works and doesn’t work in which context. That’s where we are right now with what we call “integrated leadership learning innovation”—phase four. It’s all about shaping an innovative process to address complex problems….”

Can Government Mine Tweets to Assess Public Opinion?


at Government Technology: “What if instead of going to a city meeting, you could go on Twitter, tweet your opinion, and still be heard by those in government? New research suggests this is a possibility.
The Urban Attitudes Lab at Tufts University has conducted research on accessing “big data” on social networking sites for civic purposes, according to Justin Hollander, associate professor in the Department of Urban and Environmental Policy and Planning at Tufts.
About six months ago, Hollander began researching new ways of assessing how people think about the places they live, work and play. “We’re looking to see how tapping into social media data to understand attitudes and opinions can benefit both urban planning and public policy,” he said.
Harnessing natural comments — there are about one billion tweets per day — could help governments learn what people are saying and feeling, said Hollander. And while formal types of data can be used as proxies for how happy people are, people openly share their sentiments on social networking sites.
Twitter and other social media sites can also provide information in an unobtrusive way. “The idea is that we can capture a potentially more valid and reliable view [of people’s] opinions about the world,” he said. As an inexact science, social science relies on a wide range of data sources to inform research, including surveys, interviews and focus groups; but people respond to being the subject of study, possibly affecting outcomes, Hollander said.
Hollander is also interested in extracting data from social sites because it can be done on a 24/7 basis, which means not having to wait for government to administer surveys, like the Decennial Census. Information from Twitter can also be connected to place; Hollander estimates that about 10 percent of all tweets are geotagged to a location.
In its first study earlier this year, the lab looked at using big data to learn about people’s sentiments and civic interests in New Bedford, Mass., comparing Twitter messages with the city’s published meeting minutes.
To extract tweets over a six-week period from February to April, researchers used the lab’s own software to capture 122,186 tweets geotagged within the city that also had words pertaining to the New Bedford area. Hollander said anyone can use Twitter’s API to mine data from an area as small as a neighborhood containing a couple hundred houses.
Researchers used IBM’s SPSS Modeler software, comparing this to custom-designed software, to leverage a sentiment dictionary of nearly 3,000 words, assigning each phrase a sentiment score ranging from -5 for awful feelings to +5 for feelings of elation. The lab did this for the Twitter messages and found that about 7 percent were positive versus 5.5 percent negative; correspondingly in the minutes, 1.7 percent were positive and 0.7 percent negative. In total, about 11,000 messages contained sentiments.
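The dictionary-based scoring described here can be sketched as follows. The mini-lexicon and sample messages below are illustrative stand-ins for the lab’s nearly 3,000-word dictionary and its Twitter corpus:

```python
# Tiny stand-in for a sentiment lexicon scored from -5 to +5.
LEXICON = {
    "awful": -5, "terrible": -4, "bad": -3,
    "good": 3, "great": 4, "elated": 5,
}

def sentiment(text):
    """Sum lexicon scores for all matched words; 0 means neutral or no match."""
    words = text.lower().replace(",", " ").replace(".", " ").split()
    return sum(LEXICON.get(word, 0) for word in words)

messages = [
    "The new park is great, good work",
    "Traffic downtown is awful.",
    "Meeting at 7pm in the library.",
]
scores = [sentiment(m) for m in messages]
print(scores)  # [7, -5, 0]

# Share of messages with any positive or negative sentiment.
positive = sum(s > 0 for s in scores)
negative = sum(s < 0 for s in scores)
print(positive, negative)  # 1 1
```

Running the same scorer over both tweets and meeting minutes is what lets the lab compare sentiment rates across the two mediums.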
The lab also used NVivo qualitative software to analyze 24 key words in a one-year sample of the city’s meeting minutes. By searching for the same words in Twitter posts, the researchers found that “school,” “health,” “safety,” “parks,” “field” and “children” were used frequently across both mediums.
….
Next up for the lab is a new study contrasting Twitter posts from four Massachusetts cities with the recent election results.