Citizen Lobbying: How Your Skills Can Fix Democracy


TEDxBrussels Presentation: “The more society professionalises, the less it takes advantage of its own skills. Indeed, each of us has much more to give to society than our job descriptions allow. How, then, can we mobilize our skills for the greater good? Alberto Alemanno, an engaged academic and civic advocate, argues that besides voting and running for office there is also a third, less known yet more promising, way to make society progress: lobbying. Lobbying is no longer the prerogative of well-funded groups with huge memberships and countless political connections. This talk offers you a guide on how to become an effective citizen lobbyist in your daily life by tapping into your own talents, skills and experience….(More)” See also http://www.thegoodlobby.eu/

 

Digital Keywords: A Vocabulary of Information Society and Culture


Book edited by Benjamin Peters: “In the age of search, keywords increasingly organize research, teaching, and even thought itself. Inspired by Raymond Williams’s 1976 classic Keywords, the timely collection Digital Keywords gathers pointed, provocative short essays on more than two dozen keywords by leading and rising digital media scholars from the areas of anthropology, digital humanities, history, political science, philosophy, religious studies, rhetoric, science and technology studies, and sociology. Digital Keywords examines and critiques the rich lexicon animating the emerging field of digital studies.

This collection broadens our understanding of how we talk about the modern world, particularly of the vocabulary at work in information technologies. Contributors scrutinize each keyword independently: for example, the recent pairing of digital and analog is separated, while classic terms such as community, culture, event, memory, and democracy are treated in light of their historical and intellectual importance. Metaphors of the cloud in cloud computing and the mirror in data mirroring combine with recent and radical uses of terms such as information, sharing, gaming, algorithm, and internet to reveal previously hidden insights into contemporary life. Bookended by a critical introduction and a list of over two hundred other digital keywords, these essays provide concise, compelling arguments about our current mediated condition.

Digital Keywords delves into what language does in today’s information revolution and why it matters…(More)”.

Searching for Someone: From the “Small World Experiment” to the “Red Balloon Challenge,” and beyond


Essay by Manuel Cebrian, Iyad Rahwan, Victoriano Izquierdo, Alex Rutherford, Esteban Moro and Alex (Sandy) Pentland: “Our ability to search social networks for people and information is fundamental to our success. We use our personal connections to look for new job opportunities, to seek advice about what products to buy, to match with romantic partners, to find a good physician, to identify business partners, and so on.

Despite living in a world populated by seven billion people, we are able to navigate our contacts efficiently, only needing a handful of personal introductions before finding the answer to our question, or the person we are seeking. How does this come to be? In folk culture, the answer to this question is that we live in a “small world.” The catch-phrase was coined in 1929 by the visionary author Frigyes Karinthy in his Chain-Links essay, where these ideas are put forward for the first time.

Let me put it this way: Planet Earth has never been as tiny as it is now. It shrunk — relatively speaking of course — due to the quickening pulse of both physical and verbal communication. We never talked about the fact that anyone on Earth, at my or anyone’s will, can now learn in just a few minutes what I think or do, and what I want or what I would like to do. Now we live in fairyland. The only slightly disappointing thing about this land is that it is smaller than the real world has ever been. — Frigyes Karinthy, Chain-Links, 1929

At the time, this was just a dystopian idea reflecting the anxiety of living in an increasingly connected world. There was no empirical evidence that it was actually the case, and it took almost 30 years to find any.

Six Degrees of Separation

In 1967, legendary psychologist Stanley Milgram conducted a ground-breaking experiment to test this “small world” hypothesis. He started with random individuals in the U.S. Midwest and asked them to send packages to people in Boston, Massachusetts, whose addresses were not given. Participants could advance this “search” only by forwarding the package to individuals they knew on a first-name basis. Milgram expected that successful searches (if any!) would require hundreds of individuals along the chain from the initial sender to the final recipient.

Surprisingly, however, Milgram found that the average path length was somewhere between 5.5 and 6 individuals, which made social search look astonishingly efficient. Although the experiment drew some methodological criticism, its findings were profound. What it did not answer, however, is why social networks have such short paths in the first place. The answer was not obvious. In fact, there were reasons to suspect that short paths were just a myth: social networks are very cliquish. Your friends’ friends are likely to also be your friends, and thus most social paths circle back within the same community. This “cliquishness” suggests that our search through the social network can easily get “trapped” within our close social community, making social search highly inefficient.

Architectures for Social Search

Again, it took a long time — more than 40 years — before this riddle was solved. In a seminal 1998 paper in Nature, Duncan Watts and Steven Strogatz came up with an elegant mathematical model to explain the existence of these short paths. They started from a social network that is very cliquish, i.e., most of your friends are also friends of one another. In this model, the world is “large,” since the social distance among individuals is very long. However, if we take only a tiny fraction of these connections (say one out of every hundred links) and rewire them to random individuals in the network, that same world suddenly becomes “small.” These random connections allow individuals to jump to faraway communities very quickly — using them as social network highways — thus reducing the average path length dramatically.
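
A minimal sketch (not from the paper, and using networkx's built-in Watts-Strogatz generator) illustrates the effect: rewiring roughly one link in a hundred collapses the average path length while the network remains almost as cliquish as before.

```python
import networkx as nx

n, k = 1000, 10  # 1,000 people, each tied to their 10 nearest neighbours on a ring

# p = 0.0 : a purely cliquish ring lattice, a "large" world
# p = 0.01: rewire ~1% of ties to random nodes, and the same world becomes "small"
for label, p in [("ring lattice (p=0.00)", 0.0), ("1% rewired  (p=0.01)", 0.01)]:
    G = nx.connected_watts_strogatz_graph(n, k, p, seed=42)
    print(label,
          "| avg path length:", round(nx.average_shortest_path_length(G), 1),
          "| clustering:", round(nx.average_clustering(G), 2))
```

On a typical run, the ring lattice needs on the order of 50 hops between two random people, while the barely rewired network needs only around a dozen, even though the clustering coefficient hardly moves.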

While this theoretical insight suggests that social networks are searchable due to the existence of short paths, it does not yet say much about the “procedure” that people use to find these paths. There is no a priori reason that we should know how to find these short chains, especially since there are many possible chains and no individual has knowledge of the network structure beyond their immediate community. People do not know how the friends of their friends are connected among themselves, and therefore it is not obvious that they would have a good way of navigating their social network while searching.

Soon after Watts and Strogatz came up with this model at Cornell University, a computer scientist across campus, Jon Kleinberg, set out to investigate whether such “small world” networks are searchable. In a landmark Nature article, “Navigation in a Small World,” published in 2000, he showed that social search is easy without global knowledge of the network, but only for a very specific value of the probability of long-range connectivity (i.e., the probability that we know somebody socially far removed from us in the network). With the advent of publicly available social media datasets such as LiveJournal, David Liben-Nowell and colleagues showed that real-world social networks do indeed have these particular long-range ties. It appears the social architecture of the world we inhabit is remarkably fine-tuned for searchability….
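
To make Kleinberg's point concrete, here is a small illustrative sketch (again, not taken from the original paper). networkx's navigable_small_world_graph builds a grid whose long-range contacts are drawn with probability proportional to d**-r, and a purely local greedy rule (each node forwards the message to whichever of its contacts is closest to the target) finds a route without any global knowledge; r = 2, matching the grid dimension, is the regime Kleinberg identified as searchable.

```python
import networkx as nx

def lattice_distance(u, v):
    """Manhattan distance between two grid coordinates."""
    return sum(abs(a - b) for a, b in zip(u, v))

def greedy_route(G, source, target):
    """Greedy routing with local knowledge only: each node passes the
    message to its contact that is closest to the target."""
    path = [source]
    while path[-1] != target:
        current = path[-1]
        nxt = min(G.successors(current), key=lambda n: lattice_distance(n, target))
        if lattice_distance(nxt, target) >= lattice_distance(current, target):
            return None  # no contact is closer to the target: routing is stuck
        path.append(nxt)
    return path

# 20x20 grid; long-range contacts drawn with probability proportional to d**-r.
# r = 2 (the grid dimension) is the "searchable" regime in Kleinberg's model.
G = nx.navigable_small_world_graph(20, p=1, q=1, r=2, dim=2, seed=7)
route = greedy_route(G, source=(0, 0), target=(19, 19))
print("greedy route length:", len(route) - 1 if route else "failed")
```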

The Tragedy of the Crowdsourcers

Some recent efforts have been made to try to disincentivize sabotage. If verification is also rewarded along the recruitment tree, then the individuals who recruited the saboteurs would have a clear incentive to verify, halt, and punish them. This theoretical solution is yet to be tested in practice, and it is conjectured that a coalition of saboteurs, in which saboteurs recruit other saboteurs and pretend to “vet” them, would make recursive verification futile.

If theory is to be believed, it does not shed a promising light on reducing sabotage in social search. We recently proposed the “Crowdsourcing Dilemma,” a game-theoretic analysis of the fundamental tradeoff between the increased productivity of social search and the possibility of being set back by malicious behavior, including misinformation. Our results show that, in competitive scenarios, such as those with multiple social searches competing for the same information, malicious behavior is the norm, not an anomaly — a result contrary to conventional wisdom. Even worse: counterintuitively, making sabotage more costly does not deter saboteurs, but leads all the competing teams to a less desirable outcome, with more aggression and a less efficient collective search for talent.

These empirical and theoretical findings have cautionary implications for the future of social search, and for crowdsourcing in general. Social search is surprisingly efficient, cheap, easy to implement, and functional across multiple applications. But there are also surprises in the amount of evildoing that social searchers will encounter while recruiting. The deeper we get into the recruitment tree, the more we stumble upon the evil lurking on the dark side of the network.

Evil mutates and regenerates in the crowd in new forms that are impossible for the designers or participants themselves to anticipate. Crowdsourcing and its enemies will always be engaged in a co-evolutionary arms race.

Talent is there to be searched for and recruited. But so are evil and malice. Ultimately, crowdsourcing experts need to figure out how to recruit more of the former, while deterring more of the latter. We might be living in a small world, but the cost and fragility of navigating it could undermine any strategy that seeks to leverage the power of social networks….

Being searchable is a way of being closely connected to everyone else, which is conducive to contagion and group-think and, most crucially, makes it hard for individuals to differentiate themselves from one another. Evolutionarily, for better or worse, our brains make us mimic others, and whether this copying of others ends up being part of the “wisdom of the crowds” or the “stupidity of many” is highly sensitive to the scenario at hand.

Katabasis, the myth of the hero who descends to the underworld and comes back stronger, is as old as time and pervasive across ancient cultures. Creative people seem to need to “get lost.” Grigori Perelman, Shinichi Mochizuki, and Bob Dylan all disappeared for a few years, only to reemerge later as more creative versions of themselves. Others, like J. D. Salinger and Bobby Fischer, vanished and never returned to the public sphere. If others cannot search for and find us, we gain some slack, some room to escape from what others know us for. Searching for our true creative selves may rest on the difficulty of others finding us….(More)”

Fan Favorites


Erin Reilly at Strategy + Business: “…In theory, new technological advances such as big data and machine learning, combined with more direct access to audience sentiment, behaviors, and preferences via social media and over-the-top delivery channels, give the entertainment and media industry unprecedented insight into what the audience actually wants. But as a professional in the television industry put it, “We’re drowning in data and starving for insights.” Just as my data trail didn’t trace an accurate picture of my true interest in soccer, no data set can quantify all that consumers are as humans. At USC’s Annenberg Innovation Lab, our research has led us to an approach that blends data collection with a deep understanding of the social and cultural context in which the data is created. This can be a powerful practice for helping researchers understand the behavior of fans — fans of sports, brands, celebrities, and shows.

A Model for Understanding Fans

Marketers and creatives often see audiences and customers as passive assemblies of listeners or spectators. But we believe it’s more useful to view them as active participants. The best analogy may be fans. Broadly characterized, fans have a continued connection with the property they are passionate about. Some are willing to declare their affinity through engagement, some have an eagerness to learn more about their passion, and some want to connect with others who share their interests. Fans are emotionally linked to the object of their passion, and experience their passion through their own subjective lenses. We all start out as audience members. But sometimes, when the combination of factors aligns in just the right way, we become engaged as fans.

For businesses, the key to building this engagement and solidifying the relationship is understanding the different types of fan motivations in different contexts, and learning how to turn the data gathered about them into actionable insights. Even if Jane Smith and her best friend are fans of the same show, the same team, or the same brand, they’re likely passionate for different reasons. For example, some viewers may watch the ABC melodrama Scandal because they’re fashionistas and can’t wait to see the newest wardrobe of star Kerry Washington; others may do so because they’re obsessed with politics and want to see how the newly introduced Donald Trump–like character will behave. And those differences mean fans will respond in varied ways to different situations and content.

Though traditional demographics may give us basic information about who fans are and where they’re located, current methods of understanding and measuring engagement are missing the answers to two essential questions: (1) Why is a fan motivated? and (2) What triggers the fan’s behavior? Our Innovation Lab research group is developing a new model called Leveraging Engagement, which can be used as a framework when designing media strategy….(More)”

Big Data Quality: a Roadmap for Open Data


Paper by Paolo Ciancarini, Francesco Poggi and Daniel Russo: “Open Data (OD) is one of the most discussed issues of Big Data, and has attracted the joint interest of public institutions, citizens and private companies since 2009. In addition to transparency in public administrations, another key objective of these initiatives is to allow the development of innovative services for solving real-world problems, creating value in some positive and constructive way. However, the massive amount of freely available data has not yet brought the expected effects: as of today, there is no application that has exploited the potential provided by large and distributed information sources in a non-trivial way, nor has any service substantially changed people’s lives for the better. The era of a new generation of applications based on open data is yet to come. In this context, we observe that OD quality is one of the major threats to achieving the goals of the OD movement. The starting point of this study is the quality of the OD released by the five Constitutional offices of Italy. W3C standards on OD are widely known and accepted in Italy through the Italian Digital Agency (AgID). According to the most recent Italian laws, public administrations may release OD according to the AgID standards. Our exploratory study aims to assess the quality of such releases and the real implementations of OD. The outcome suggests the need for a drastic improvement in OD quality. Finally, we highlight some key quality principles for OD, and propose a roadmap for further research….(more)”
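
The abstract does not spell out which quality dimensions the authors measure, but as a rough, hypothetical illustration of the kind of automated check such an assessment could involve, the sketch below probes three simple dimensions of a CSV release: availability, machine-readability, and completeness. The dataset URL is made up for the example.

```python
import csv
import io
import urllib.request

def basic_quality_report(url):
    """Rough, illustrative checks on a CSV open-data release:
    availability, machine-readability, and completeness (share of non-empty cells)."""
    report = {"available": False, "machine_readable": False, "completeness": None}
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            raw = resp.read().decode("utf-8", errors="replace")
        report["available"] = True
    except OSError:
        return report  # release not reachable
    try:
        rows = list(csv.reader(io.StringIO(raw)))
        report["machine_readable"] = len(rows) > 1  # a header plus at least one record
    except csv.Error:
        return report
    cells = [cell for row in rows[1:] for cell in row]
    if cells:
        report["completeness"] = sum(1 for c in cells if c.strip()) / len(cells)
    return report

# Hypothetical dataset URL, for illustration only
print(basic_quality_report("https://example.org/opendata/spending.csv"))
```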

The Perils of Experimentation


Paper by Michael A. Livermore: “More than eighty years after Justice Brandeis coined the phrase “laboratories of democracy,” the concept of policy experimentation retains its currency as a leading justification for decentralized governance. This Article examines the downsides of experimentation, and in particular the potential for decentralization to lead to the production of information that exacerbates public choice failures. Standard accounts of experimentation and policy learning focus on information concerning the social welfare effects of alternative policies. But learning can also occur along a political dimension as information about ideological preferences, campaign techniques, and electoral incentives is revealed. Both types of information can be put to use in the policy arena by a host of individual and institutional actors that have a wide range of motives, from public-spirited concern for the general welfare to a desire to maximize personal financial returns. In this complex environment, there is no guarantee that the information that is generated by experimentation will lead to social benefits. This Article applies this insight to prior models of federalism developed in the legal and political science literature to show that decentralization can lead to the over-production of socially harmful information. As a consequence, policy makers undertaking a decentralization calculation should seek a level of decentralization that best balances the costs and benefits of information production. To illustrate the legal and policy implications of the arguments developed here, this Article examines two contemporary environmental rulemakings of substantial political, legal, and economic significance: a rule to define the jurisdictional reach of the Clean Water Act; and a rule to limit greenhouse gas emissions from the electricity generating sector….(More)”.

 

Why Didn’t E-Gov Live Up To Its Promise?


Excerpt from the report “Delivering on Digital: The Innovators and Technologies that are Transforming Government” by William Eggers: “Digital is becoming the new normal. Digital technologies have quietly and quickly pervaded every facet of our daily lives, transforming how we eat, shop, work, play and think.

An aging population, millennials assuming managerial positions, budget shortfalls and ballooning entitlement spending all will significantly impact the way government delivers services in the coming decade, but no single factor will alter citizens’ experience of government more than the pure power of digital technologies.

Ultimately, digital transformation means reimagining virtually every facet of what government does, from headquarters to the field, from health and human services to transportation and defense.

By now, some of you readers with long memories can’t be blamed for feeling a sense of déjà vu.

After all, technology was supposed to transform government 15 years ago; an “era of electronic government” was poised to make government faster, smaller, digitized and increasingly transparent.

Many analysts (including yours truly, in a book called “Government 2.0”) predicted that by 2016, digital government would already long be a reality. In practice, the “e-gov revolution” has been an exceedingly slow-moving one. Sure, technology has improved some processes, and scores of public services have moved online, but the public sector has hardly been transformed.

What initial e-gov efforts managed was to construct pretty storefronts—in the form of websites—as the entrance to government systems stubbornly built for the industrial age. Few fundamental changes altered the structures, systems and processes of government behind those websites.

With such halfhearted implementation, the promise of cost savings from information technology failed to materialize, instead disappearing into the black hole of individual agency and division budgets. Government websites mirrored departments’ short-term orientation rather than citizens’ long-term needs. In short, government became wired—but not transformed.

So why did the reality of e-gov fail to live up to the promise?

For one thing, we weren’t yet living in a digitized economy—our homes, cars and workplaces were still mostly analog—and the technology wasn’t as far along as we thought; without the innovations of cloud computing and open-source software, for instance, the process of upgrading giant, decades-old legacy systems proved costly, time-consuming and incredibly complex.

And not surprisingly, most governments—and private firms, for that matter—lacked deep expertise in managing digital services. What we now call “agile development”—an iterative development model that allows for constant evolution through recurrent testing and evaluation—was not yet mainstreamed.

Finally, most governments explicitly decided to focus first on the Hollywood storefront and postpone the bigger and tougher issues of reengineering underlying processes and systems. When budgets nosedived—even before the recession—staying solvent and providing basic services took precedence over digital transformation.

The result: Agencies automated some processes but failed to transform them; services were put online, but rarely were they focused logically and intelligently around the citizen.

Given this history, it’s natural to be skeptical after years of hype about government’s amazing digital future. But conditions on the ground (and in the cloud) are finally in place for change, and citizens are not only ready for digital government—many are demanding it.

Digital-native millennials are now consumers of public services, and millions of them work in and around government; they won’t tolerate balky and poorly designed systems, and they’ll let the world know through social media. Gen Xers and baby boomers, too, have become far more savvy consumers of digital products and services….(More)”

Soon Your City Will Know Everything About You


Currently, the biggest users of these sensor arrays are in cities, where city governments use them to collect large amounts of policy-relevant data. In Los Angeles, the crowdsourced traffic and navigation app Waze collects data that helps residents navigate the city’s choked highway networks. In Chicago, an ambitious program makes public data available to startups eager to build apps for residents. The city’s 49th ward has been experimenting with participatory budgeting and online voting to take the pulse of the community on policy issues. Chicago has also been developing the “Array of Things,” a network of sensors that track, among other things, the urban conditions that affect bronchitis.

Edmonton uses the cloud to track the condition of playground equipment. And a growing number of countries have purpose-built smart cities, like South Korea’s high-tech utopian city of Songdo, where pervasive sensor networks and ubiquitous computing generate immense amounts of civic data for public services.

The drive for smart cities isn’t restricted to the developed world. Rio de Janeiro coordinates the information flows of 30 different city agencies. In Beijing and Da Nang (Vietnam), mobile phone data is actively tracked in the name of real-time traffic management. Urban sensor networks, in other words, are also developing in countries with few legal protections governing the usage of data.

These services are promising and useful. But you don’t have to look far to see why the Internet of Things has serious privacy implications. Public data is used for “predictive policing” in at least 75 cities across the U.S., including New York City, where critics maintain that using social media or traffic data to help officers evaluate probable cause is a form of digital stop-and-frisk. In Los Angeles, the security firm Palantir scoops up publicly generated data on car movements, merges it with license plate information collected by the city’s traffic cameras, and sells analytics back to the city so that police officers can decide whether or not to search a car. In Chicago, concern is growing about discriminatory profiling because so much information is collected and managed by the police department — an agency with a poor reputation for handling data in consistent and sensitive ways. In 2015, video surveillance of the police shooting Laquan McDonald outside a Burger King was erased by a police employee who ironically did not know his activities were being digitally recorded by cameras inside the restaurant.

Since most national governments have bungled privacy policy, cities — which have a reputation for being better with administrative innovations — will need to fill this gap. A few countries, such as Canada and the U.K., have independent “privacy commissioners” who are responsible for advocating for the public when bureaucracies must decide how to use or give out data. It is pretty clear that cities need such advocates too.

What would Urban Privacy Commissioners do? They would teach the public — and other government staff — about how policy algorithms work. They would evaluate the political context in which city agencies make big data investments. They would help a city negotiate contracts that protect residents’ privacy while providing effective analysis to policy makers and ensuring that open data is consistently serving the public good….(more)”.

The Values of Public Library in Promoting an Open Government Environment


Djoko Sigit Sayogo et al in the Proceedings of the 17th International Digital Government Research Conference on Digital Government Research: “Public participation has been less than ideal in many government-implemented ICT initiatives. Extant studies highlight the importance of public libraries as an intermediary between citizens and government. This study evaluates the role of public libraries in mediating the relationship between citizens and government in support of an open government environment. Using data from a national survey of “Library and Technology Use” conducted by PEW Internet in 2015, we test whether a citizen’s perception of the public values provided by public libraries influences the likelihood of the citizen’s engagement within open-government environment contexts. The results signify a significant relationship between certain public values provided by public libraries and the propensity of citizens to engage with government in an online environment. Our findings further indicate that varying public values generate different results in regard to the way citizens are stimulated to use public libraries to engage with government online. These findings imply that programs designed and developed to take into account a variety of values are more likely to effectively induce citizen engagement in an open government environment through the mediation of public libraries….(More)”

Big Crisis Data: Social Media in Disasters and Time-Critical Situations


Book by Carlos Castillo: “Social media is an invaluable source of time-critical information during a crisis. However, emergency response and humanitarian relief organizations that would like to use this information struggle with an avalanche of social media messages that exceeds human capacity to process. Emergency managers, decision makers, and affected communities can make sense of social media through a combination of machine computation and human compassion – expressed by thousands of digital volunteers who publish, process, and summarize potentially life-saving information. This book brings together computational methods from many disciplines: natural language processing, semantic technologies, data mining, machine learning, network analysis, human-computer interaction, and information visualization, focusing on methods that are commonly used for processing social media messages under time-critical constraints, and offering more than 500 references to in-depth information…(More)”