Systematic Thinking for Social Action


Re-issued book by Alice M. Rivlin: “In January 1970 Alice M. Rivlin spoke to an audience at the University of California–Berkeley. The topic was developing a more rational approach to decision-making in government. If digital video, YouTube, and TED Talks had been inventions of the 1960s, Rivlin’s talk would have been a viral hit. As it was, the resulting book, Systematic Thinking for Social Action, spent years on the Brookings Press bestseller list. It is a very personal and conversational volume about the dawn of new ways of thinking about government.

As a deputy assistant secretary for program coordination, and later as assistant secretary for planning and evaluation, at the Department of Health, Education, and Welfare from 1966 to 1969, Rivlin was an early advocate of systems analysis, which had been introduced by Robert McNamara at the Department of Defense as PPBS (the planning-programming-budgeting system).

While Rivlin brushes aside the jargon, she digs into the substance of systematic analysis and a “quiet revolution in government.” In an evaluation of the evaluators, she issues mixed grades, pointing out where analysts had been helpful in finding solutions and where—because of inadequate data or methods—they had been no help at all.

Systematic Thinking for Social Action offers important insights for anyone interested in working to find the smartest ways to allocate scarce funds to promote the maximum well-being of all citizens.

This reissue is part of the Brookings Classics, a series of republished books that allows readers to revisit or discover previous notable works from the Brookings Institution Press.

Met Office warns of big data floods on the horizon


At V3: “The amount of data being collected by departments and agencies means government services will not be able to implement truly open data strategies, according to Met Office CIO Charles Ewen.

Ewen said the rapidly increasing amount of data being stored by companies and government departments means it will not be technologically possible to share all their data in the near future.

During a talk at the Cloud World Forum on Wednesday, he said: “The future will be bigger and bigger data. Right now we’re talking about petabytes, in the near future it will be tens of petabytes, then soon after it’ll be hundreds of petabytes and then we’ll be off into imaginary figure titles.

“We see a future where data has gotten so big the notion of open data and the idea ‘let’s share our data with everybody and anybody’ just won’t work. We’re struggling to make it work already and by 2020 the national infrastructure will not exist to shift this stuff [data] around in the way anybody could access and make use of it.”

Ewen added that to deal with the shift he expects many departments and agencies will adapt their processes to become digital curators that are more selective about the data they share, to try to ensure it is useful.

“This isn’t us wrapping our arms around our data and saying you can’t see it. We just don’t see how we can share all this big data in the way you would want it,” he said.

“We see a future where a select number of high-capacity nodes become information brokers and are used to curate and manage data. These curators will be where people bring their problems. That’s the future we see.”

Ewen added that the current expectations around open data are based on misguided views about the capabilities of cloud technology to host and provide access to huge amounts of data.

“The trendy stuff out there claims to be great at everything, but don’t get carried away. We don’t see cloud as anything but capability. We’ve been using appropriate IT and what’s available to deliver our mission services for over 50 to 60 years, and cloud is playing an increasing part of that, but purely for increased capability,” he said.

“It’s just another tool. The important thing is having the skill and knowledge to not just believe vendors but to look and identify the problem and say ‘we have to solve this’.”

The Met Office CIO’s comments follow reports from other government service providers that people’s desire for open data is growing exponentially….(More)”

Hacking the streets: ‘Smart’ writing in the smart city


Spencer Jordan at First Monday: “Cities have always been intimately bound up with technology. As important nodes within commercial and communication networks, cities became centres of sweeping industrialisation that affected all facets of life (Mumford, 1973). Alienation and estrangement became key characteristics of modernity, Mumford famously noting the “destruction and disorder within great cities” during the long nineteenth century. The increasing use of digital technology is yet another chapter in this process, exemplified by the rise of the ‘smart city’. Although there is no agreed definition, smart cities are understood to be those in which digital technology helps regulate, run and manage the city (Caragliu et al., 2009). This article argues that McQuire’s definition of ‘relational space’, what he understands as the reconfiguration of urban space by digital technology, is critical here. Although some see the impact of digital technology on the urban environment as deepening social exclusion and isolation (Virilio, 1991), others, such as de Waal, perceive digital technology in a more positive light. What is certainly clear, however, is that the city is once again undergoing rapid change. As Varnelis and Friedberg note, “place … is in a process of a deep and contested transformation”.

If the potential benefits from digital technology are to be maximised it is necessary that the relationship between the individual and the city is understood. This paper examines how digital technology can support and augment what de Certeau calls spatial practice, specifically in terms of constructions of ‘home’ and ‘belonging’ (de Certeau, 1984). The very act of walking is itself an act of enunciation, a process by which the city is instantiated; yet, as de Certeau and Bachelard remind us, the city is also wrought from the stories we tell, the narratives we construct about that space (de Certeau, 1984; Bachelard, 1994). The city is thus envisioned through both physical exploration and language. As Turchi has shown, the creative stories we make on these voyages can be understood as maps of that world and those we meet (Turchi, 2004). If, as the situationists Kotányi and Vaneigem stated, “Urbanism is comparable to the advertising propagated around Coca-Cola — pure spectacular ideology”, there needs to be a way by which the hegemony of the market, Benjamin’s phantasmagoria, can be challenged. This would wrest control from the market forces that are seen to have overwhelmed the high street, and allow a refocusing on the needs of both the individual and the community.

This article argues that, though anachronistic, some of the situationists’ ideas persist within hacking, what Himanen (2001) identified as the ‘hacker ethic’. As Taylor argues, although hacking is intimately connected to the world of computers, it can refer to the unorthodox use of any ‘artefact’, including social ‘systems’. In this way, de Certeau’s urban itineraries, the spatial practice of each citizen through the city, can be understood as a form of hacking. As Wark states, “We do not lack communication. On the contrary, we have too much of it. We lack creation. We lack resistance to the present.” If the city itself is called into being through our physical journeys, in what de Certeau called ‘spaces of enunciation’, then new configurations and possibilities abound. The walker becomes hacker, Wark’s “abstractors of new worlds”, and the itinerary a deliberate subversion of an urban system, the dream houses of Benjamin’s arcades. This paper examines one small research project, Waterways and Walkways, in its investigation of a digitally mediated exploration across Cardiff, the Welsh capital. The article concludes by showing just one small way in which digital technology can play a role in facilitating the re-conceptualisation of our cities….(More)”

Algorithmic Life: Calculative Devices in the Age of Big Data


Book edited by Louise Amoore and Volha Piotukh: “This book critically explores forms and techniques of calculation that emerge with digital computation, and their implications. The contributors demonstrate that digital calculative devices matter beyond their specific functions as they progressively shape, transform and govern all areas of our life. In particular, it addresses such questions as:

  • How does the drive to make sense of, and productively use, large amounts of diverse data, inform the development of new calculative devices, logics and techniques?
  • How do these devices, logics and techniques affect our capacity to decide and to act?
  • How do mundane elements of our physical and virtual existence become data to be analysed and rearranged in complex ensembles of people and things?
  • In what ways are conventional notions of public and private, individual and population, certainty and probability, rule and exception transformed and what are the consequences?
  • How does the search for ‘hidden’ connections and patterns change our understanding of social relations and associative life?
  • Do contemporary modes of calculation produce new thresholds of calculability and computability, allowing for the improbable or the merely possible to be embraced and acted upon?
  • As contemporary approaches to governing uncertain futures seek to anticipate future events, how are calculation and decision engaged anew?

Drawing together different strands of cutting-edge research that is both theoretically sophisticated and empirically rich, this book makes an important contribution to several areas of scholarship, including the emerging social science field of software studies, and will be a vital resource for students and scholars alike….(More)”

Can crowdsourcing decipher the roots of armed conflict?


Stephanie Kanowitz at GCN: “Researchers at Pennsylvania State University and the University of Texas at Dallas are proving that there’s accuracy, not just safety, in numbers. The Correlates of War project, a long-standing effort that studies the history of warfare, is now experimenting with crowdsourcing as a way to more quickly and inexpensively create a global conflict database that could help explain when and why countries go to war.

The goal is to facilitate the collection, dissemination and use of reliable data in international relations, but a byproduct has emerged: the development of technology that uses machine learning and natural language processing to efficiently, cost-effectively and accurately create databases from news articles that detail militarized interstate disputes.

The project is in its fifth iteration, having released the fourth set of Militarized Interstate Dispute (MID) data in 2014. To create those earlier versions, researchers paid subject-matter experts such as political scientists to read and hand-code newswire articles about disputes, identifying features of possible militarized incidents. Now, however, they’re soliciting help from anyone and everyone — and finding the results are much the same as what the experts produced, except the results come in faster and with significantly less expense.

As news articles come across the wire, the researchers pull them and formulate questions about them that help evaluate the military events. Next, the articles and questions are loaded onto the Amazon Mechanical Turk, a marketplace for crowdsourcing. The project assigns articles to readers, who typically spend about 10 minutes reading an article and responding to the questions. The readers submit the answers to the project researchers, who review them. The project assigns the same article to multiple workers and uses computer algorithms to combine the data into one annotation.
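
A minimal sketch of that aggregation step, in Python (the answer labels, threshold and function names here are hypothetical; the article does not detail the project’s actual algorithms): several workers answer the same question about an article, the answers are combined by majority vote, and low-agreement items are flagged for closer expert review.

    from collections import Counter

    def aggregate_annotations(responses, agreement_threshold=0.6):
        """Combine several crowd answers to one question into a single annotation.

        responses: answers (e.g., ["yes", "yes", "no"]) from different workers
        who read the same article. Returns the majority answer, the share of
        workers who agreed with it, and a flag marking weak agreement.
        """
        counts = Counter(responses)
        answer, votes = counts.most_common(1)[0]
        agreement = votes / len(responses)
        return answer, agreement, agreement < agreement_threshold

    # Example: five workers answer "Did state A threaten to use force?"
    label, agreement, needs_expert = aggregate_annotations(["yes", "yes", "no", "yes", "yes"])
    print(label, round(agreement, 2), needs_expert)  # yes 0.8 False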

A systematic comparison of the crowdsourced responses with those of trained subject-matter experts showed that the crowdsourced work was accurate for 68 percent of the news reports coded. More important, the aggregation of answers for each article showed that common answers from multiple readers strongly correlated with correct coding. This allowed researchers to easily flag the articles that required deeper expert involvement and process the majority of the news items in near-real time and at limited cost….(more)”

Big Data in U.S. Agriculture


Megan Stubbs at the Congressional Research Service: “Recent media and industry reports have employed the term big data as a key to the future of increased food production and sustainable agriculture. A recent hearing on the private elements of big data in agriculture suggests that Congress too is interested in potential opportunities and challenges big data may hold. While there appears to be great interest, the subject of big data is complex and often misunderstood, especially within the context of agriculture.

There is no commonly accepted definition of the term big data. It is often used to describe a modern trend in which the combination of technology and advanced analytics creates a new way of processing information that is more useful and timely. In other words, big data is just as much about new methods for processing data as about the data themselves. It is dynamic, and when analyzed can provide a useful tool in a decisionmaking process. Most see big data in agriculture at the end use point, where farmers use precision tools to potentially create positive results like increased yields, reduced inputs, or greater sustainability. While this is certainly the more intriguing part of the discussion, it is but one aspect and does not necessarily represent a complete picture.

Both private and public big data play a key role in the use of technology and analytics that drive a producer’s evidence-based decisions. Public-level big data represent records collected, maintained, and analyzed through publicly funded sources, specifically by federal agencies (e.g., farm program participant records and weather data). Private big data represent records generated at the production level and originate with the farmer or rancher (e.g., yield, soil analysis, irrigation levels, livestock movement, and grazing rates). While discussed separately in this report, public and private big data are typically combined to create a more complete picture of an agricultural operation and therefore better decisionmaking tools.
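
As a rough illustration of that pairing (the column names and values below are invented, not drawn from the report), a public, agency-published record can be joined to a producer’s own records to build a fuller, decision-ready view of an operation:

    import pandas as pd

    # Public big data: weather observations published by a federal agency (values are made up).
    public_weather = pd.DataFrame({
        "county": ["Story", "Polk"],
        "growing_season_rain_in": [22.1, 18.4],
    })

    # Private big data: yields and inputs recorded at the farm level (values are made up).
    private_records = pd.DataFrame({
        "county": ["Story", "Polk"],
        "corn_yield_bu_per_acre": [198, 172],
        "nitrogen_lb_per_acre": [160, 175],
    })

    # Combining the two gives a more complete picture to support evidence-based decisions.
    combined = public_weather.merge(private_records, on="county")
    print(combined)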

Big data may significantly affect many aspects of the agricultural industry, although the full extent and nature of its eventual impacts remain uncertain. Many observers predict that the growth of big data will bring positive benefits through enhanced production, resource efficiency, and improved adaptation to climate change. While lauded for its potentially revolutionary applications, big data is not without issues. From a policy perspective, issues related to big data involve nearly every stage of its existence, including its collection (how it is captured), management (how it is stored and managed), and use (how it is analyzed and used). It is still unclear how big data will progress within agriculture due to technical and policy challenges, such as privacy and security, for producers and policymakers. As Congress follows the issue a number of questions may arise, including a principal one—what is the federal role?…(More)”

Predictive Analytics


Revised book by Eric Siegel: “Prediction is powered by the world’s most potent, flourishing unnatural resource: data. Accumulated in large part as the by-product of routine tasks, data is the unsalted, flavorless residue deposited en masse as organizations churn away. Surprise! This heap of refuse is a gold mine. Big data embodies an extraordinary wealth of experience from which to learn.

Predictive analytics unleashes the power of data. With this technology, the computer literally learns from data how to predict the future behavior of individuals. Perfect prediction is not possible, but putting odds on the future drives millions of decisions more effectively, determining whom to call, mail, investigate, incarcerate, set up on a date, or medicate.
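
The basic mechanic can be sketched in a few lines (an illustration only, not an example from the book; the data and feature names are invented): fit a model to past individuals whose outcomes are known, then put odds on a new individual’s future behavior.

    from sklearn.linear_model import LogisticRegression

    # Hypothetical training data: [months_as_customer, support_calls_last_90_days]
    # and whether each past customer later cancelled (1) or stayed (0).
    X = [[24, 0], [3, 5], [36, 1], [2, 7], [18, 2], [1, 6], [30, 0], [4, 4]]
    y = [0, 1, 0, 1, 0, 1, 0, 1]

    model = LogisticRegression()
    model.fit(X, y)

    # Score a new individual: not a certain prediction, just odds on the future.
    print(model.predict_proba([[6, 3]])[0][1])  # estimated probability of cancelling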

In this lucid, captivating introduction — now in its Revised and Updated edition — former Columbia University professor and Predictive Analytics World founder Eric Siegel reveals the power and perils of prediction:

    • What type of mortgage risk Chase Bank predicted before the recession.
    • Predicting which people will drop out of school, cancel a subscription, or get divorced before they even know it themselves.
    • Why early retirement predicts a shorter life expectancy and vegetarians miss fewer flights.
    • Five reasons why organizations predict death — including one health insurance company.
    • How U.S. Bank and Obama for America calculated — and Hillary for America 2016 plans to calculate — the way to most strongly persuade each individual.
    • Why the NSA wants all your data: machine learning supercomputers to fight terrorism.
    • How IBM’s Watson computer used predictive modeling to answer questions and beat the human champs on TV’s Jeopardy!
    • How companies ascertain untold, private truths — how Target figures out you’re pregnant and Hewlett-Packard deduces you’re about to quit your job.
    • How judges and parole boards rely on crime-predicting computers to decide how long convicts remain in prison.
    • 183 examples from Airbnb, the BBC, Citibank, ConEd, Facebook, Ford, Google, the IRS, LinkedIn, Match.com, MTV, Netflix, PayPal, Pfizer, Spotify, Uber, UPS, Wikipedia, and more….(More)”

 

Humanity 360: World Humanitarian Data and Trends 2015


OCHA: “World Humanitarian Data and Trends highlights major trends, challenges and opportunities in the nature of humanitarian crises, showing how the humanitarian landscape is evolving in a rapidly changing world.

Leaving No One Behind: Humanitarian Effectiveness in the Age of the Sustainable Development Goals
Exploring what humanitarian effectiveness means in today’s world: better meeting the needs of people in crisis, better moving people out of crisis.

Tools for Data Coordination and Collection

 

HereHere


HereHere NYC generates weekly cartoons for NYC neighborhoods based on public data. We sum up how your neighborhood, or other NYC neighborhoods you care about, are doing via weekly email digest, neighborhood-specific Twitter & Instagram feeds, and with deeper data and context.

HereHere is a research project from FUSE Labs, Microsoft Research that explores:

  • Creating compelling stories with data to engage larger communities
  • Inventing new habits for connecting to the hyperlocal
  • Using cartoons as a tool to drive data engagement

HereHere does not use sentiment analysis; rather, it uses a research platform intended to surface the most pertinent information with a human perspective. …

How It Works

Several times a day we grab the freshest NYC 311 data. The data comes in as a long list of categorized concerns issued by people in NYC (via phone, email, or text message), ranging from heating complaints to compliments to concerns about harboring bees and everything in between.

We separate the data by neighborhood for each of the 42 neighborhoods throughout the 5 boroughs of NYC, and count the total of each concern per neighborhood.

Next, we process the data through the Sentient Data Server. SDS equips each neighborhood with a personality (like a character in a movie or videogame) and we calculate the character’s response to the latest data based on pace, position and trend. For example, a neighborhood might be delighted if after several days of more than 30 heating complaints, heating complaints drops down to 0; or a neighborhood might be ashamed to see a sudden rise in homeless person assistance requests.
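
The Sentient Data Server itself is a Microsoft Research platform whose internals are not described here, so the sketch below is only an assumed illustration of the pipeline: tally concerns per neighborhood, then turn a simple pace-and-trend comparison into a character reaction (the field names, threshold and reaction labels are invented).

    from collections import Counter, defaultdict

    def count_concerns(records):
        """Tally 311 concerns per neighborhood.

        records: dicts like {"neighborhood": "Astoria", "complaint_type": "HEAT/HOT WATER"}
        (field names are assumed; the real 311 feed has its own schema).
        """
        totals = defaultdict(Counter)
        for r in records:
            totals[r["neighborhood"]][r["complaint_type"]] += 1
        return totals

    def reaction(recent_daily_counts, todays_count, heavy=30):
        """A toy stand-in for the character response, driven by pace and trend."""
        if recent_daily_counts and all(c > heavy for c in recent_daily_counts) and todays_count == 0:
            return "delighted"   # e.g., days of 30+ heating complaints drop to zero
        if todays_count > 2 * max(recent_daily_counts, default=0):
            return "ashamed"     # e.g., a sudden spike in assistance requests
        return "neutral"

    print(reaction([34, 41, 38], 0))  # delighted
    print(reaction([2, 3, 1], 9))     # ashamed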

 

HereHere determines the most critical 311 issues for each neighborhood each week and uses that to procedurally generate a weekly cartoon for each neighborhood.

 HereHere summarizes the 311 concerns into categories for a quick sense of what’s happening in each neighborhood…(More)

How Facebook Makes Us Dumber


In BloombergView: “Why does misinformation spread so quickly on social media? Why doesn’t it get corrected? When the truth is so easy to find, why do people accept falsehoods?

A new study focusing on Facebook users provides strong evidence that the explanation is confirmation bias: people’s tendency to seek out information that confirms their beliefs, and to ignore contrary information.

Confirmation bias turns out to play a pivotal role in the creation of online echo chambers. This finding bears on a wide range of issues, including the current presidential campaign, the acceptance of conspiracy theories and competing positions in international disputes.

The new study, led by Michela Del Vicario of Italy’s Laboratory of Computational Social Science, explores the behavior of Facebook users from 2010 to 2014. One of the study’s goals was to test a question that continues to be sharply disputed: When people are online, do they encounter opposing views, or do they create the virtual equivalent of gated communities?

Del Vicario and her coauthors explored how Facebook users spread conspiracy theories (using 32 public web pages); science news (using 35 such pages); and “trolls,” which intentionally spread false information (using two web pages). Their data set is massive: It covers all Facebook posts during the five-year period. They explored which Facebook users linked to one or more of the 69 web pages, and whether they learned about those links from their Facebook friends.

In sum, the researchers find a lot of communities of like-minded people. Even if they are baseless, conspiracy theories spread rapidly within such communities.

More generally, Facebook users tended to choose and share stories containing messages they accept, and to neglect those they reject. If a story fits with what people already believe, they are far more likely to be interested in it and thus to spread it….(More)”