The six types of Twitter conversations


Lee Rainie: “Have you ever wondered what a Twitter conversation looks like from 10,000 feet? A new report from the Pew Research Center, in association with the Social Media Research Foundation, provides an aerial view of the social media network. By analyzing many thousands of Twitter conversations, we identified six different conversational archetypes. Our infographic describes each type of conversation network and explains how it is shaped by the topic being discussed and the people driving the conversation.
Read the full report: Mapping the Twitter Conversation”

Crowdsourced transit app shows what time the bus will really come


Springwise: “The problem with most transport apps is that they rely on fixed data from transport company schedules and don’t truly reflect exactly what’s going on with the city’s trains and buses at any given moment. Operating like a Waze for public transport, Israel’s Ototo app crowdsources real-time information from passengers to give users the best suggestions for their commute.
The app relies on a community of ‘Riders’, who allow anonymous location data to be sent from their smartphone whenever they’re using public transport. By collating this data, Ototo offers more realistic information about bus and train routes. While a bus may be due in five minutes, a Rider currently on that bus might be located more than five minutes away, indicating that the bus isn’t on time. Ototo can then suggest a quicker route for users. According to Fast Company, the service currently has a 12,000-strong global Riders community that powers its travel recommendations. On top of this, the app is designed in an easy-to-use infographic format that quickly and efficiently tells users where they need to be going and how long it will take. The app is free to download from the App Store.


Ototo faces competition from similar services such as New York City’s Moovit, which also details how crowded buses are.”
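
The delay check described above (comparing a Rider's current distance from a stop against the scheduled arrival) can be sketched roughly as follows. All names and numbers here are hypothetical, for illustration only, and not Ototo's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class RiderPing:
    bus_id: str
    minutes_from_stop: float  # estimated travel time from the Rider's position to the stop

def bus_is_on_time(scheduled_eta_min: float, pings: list[RiderPing],
                   slack_min: float = 1.0) -> bool:
    """A bus counts as on time if a Rider aboard it is close enough to arrive as scheduled."""
    if not pings:
        return True  # no crowdsourced evidence; fall back to trusting the schedule
    nearest = min(p.minutes_from_stop for p in pings)
    return nearest <= scheduled_eta_min + slack_min

# Schedule says 5 minutes, but a Rider on the bus is 8 minutes away:
pings = [RiderPing("bus_42", 8.0)]
on_time = bus_is_on_time(5.0, pings)  # False: time to suggest a quicker route
```

When the crowdsourced estimate contradicts the timetable, the app can then re-run its route suggestions with the corrected arrival time.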

Sinkhole of bureaucracy


First article in a Washington Post series “examining the failures at the heart of troubled federal systems” by David A. Fahrenthold: “The trucks full of paperwork come every day, turning off a country road north of Pittsburgh and descending through a gateway into the earth. Underground, they stop at a metal door decorated with an American flag.

Behind the door, a room opens up as big as a supermarket, full of five-drawer file cabinets and people in business casual. About 230 feet below the surface, there is easy-listening music playing at somebody’s desk.
This is one of the weirdest workplaces in the U.S. government — both for where it is and for what it does.
Here, inside the caverns of an old Pennsylvania limestone mine, there are 600 employees of the Office of Personnel Management. Their task is nothing top-secret. It is to process the retirement papers of the government’s own workers.
But that system has a spectacular flaw. It still must be done entirely by hand, and almost entirely on paper.

The employees here pass thousands of case files from cavern to cavern and then key in retirees’ personal data, one line at a time. They work underground not for secrecy but for space. The old mine’s tunnels have room for more than 28,000 file cabinets of paper records.
This odd place is an example of how hard it is to get a time-wasting bug out of a big bureaucratic system.
Held up by all that paper, work in the mine runs as slowly now as it did in 1977….”
See also Data mining. The old-fashioned way: View the full graphic.

Exploration, Extraction and ‘Rawification’. The Shaping of Transparency in the Back Rooms of Open Data


Paper by Denis, Jerome and Goëta, Samuel: “With the advent of open data initiatives, raw data has been staged as a crucial element of government transparency. While the consequences of such data-driven transparency have already been discussed, we still don’t know much about its back rooms. What does it mean for an administration to open its data? Following information infrastructure studies, this communication aims to question the modes of existence of raw data in administrations. Drawing on an ethnography of open government data projects in several French administrations, it shows that data are not ready-at-hand resources. Indeed, three kinds of operations are conducted that progressively instantiate open data. The first is exploration. Where the data are within the institution, and what they are, are tough questions whose answers entail organizational and technical inquiries. The second is extraction. Data are encapsulated in databases, and their release implies a sometimes complex disarticulation process. The third is ‘rawification’: a series of tasks that transforms what used to be indexical professional data into raw data. To be opened, data are (re)formatted, cleaned, ungrounded. Though largely invisible, these operations foreground specific ‘frictions’ that emerge during the sociotechnical shaping of transparency, even before data publication and reuse.”

Government Surveillance and Internet Search Behavior


New paper by Marthews, Alex and Tucker, Catherine: “This paper uses data from Google Trends on search terms from before and after the surveillance revelations of June 2013 to analyze whether Google users’ search behavior shifted as a result of an exogenous shock in information about how closely their internet searches were being monitored by the U.S. government. We use data from Google Trends on search volume for 282 search terms across eleven different countries. These search terms were independently rated for their degree of privacy-sensitivity along multiple dimensions. Using panel data, our results suggest that, cross-nationally, users were less likely to search using search terms that they believed might get them in trouble with the U.S. government. In the U.S., this was the main subset of search terms that was affected. However, internationally there was also a drop in traffic for search terms that were rated as personally sensitive. These results have implications for policy makers in terms of understanding the actual effects on search behavior of disclosures relating to the scale of government surveillance on the Internet and their potential effects on international competitiveness.”

Charities Try New Ways to Test Ideas Quickly and Polish Them Later


Ben Gose in the Chronicle of Philanthropy: “A year ago, a division of TechSoup Global began working on an app to allow donors to buy a hotel room for victims of domestic violence when no other shelter is available. Now that app is a finalist in a competition run by a foundation that combats human trafficking—and a win could mean a grant worth several hundred thousand dollars. The app’s evolution—adding a focus on sex slaves to the initial emphasis on domestic violence—was hardly accidental.
Caravan Studios, the TechSoup division that created the app, has embraced a new management approach popular in Silicon Valley known as “lean start-up.”
The principles, which are increasingly popular among nonprofits, emphasize experimentation over long-term planning and urge groups to get products and services out to clients as early as possible so the organizations can learn from feedback and make changes.
When the app, known as SafeNight, was still early in the design phase, Caravan posted details about the project on its website, including applications for grants that Caravan had not yet received. In lean-start-up lingo, Caravan put out a “minimum viable product” and hoped for feedback that would lead to a better app.
Caravan soon heard from antitrafficking organizations, which were interested in the same kind of service. Caravan eventually teamed up with the Polaris Project and the State of New Jersey, which were working on a similar app, to jointly create an app for the final round of the antitrafficking contest. Humanity United, the foundation sponsoring the contest, plans to award $1.8-million to as many as three winners later this month.
Marnie Webb, CEO of Caravan, which is building an array of apps designed to curb social problems, says lean-start-up principles help Caravan work faster and meet real needs.
“The central idea is that any product that we develop will get better if it lives as much of its life as possible outside of our office,” Ms. Webb says. “If we had kept SafeNight inside and polished it and polished it, it would have been super hard to bring on a partner because we would have invested too much.”….
Nonprofits developing new tech tools are among the biggest users of lean-start-up ideas.
Upwell, an ocean-conservation organization founded in 2011, scans the web for lively ocean-related discussions and then pushes to turn them into full-fledged movements through social-media campaigns.
Lean principles urge groups to steer clear of “vanity metrics,” such as site visits, that may sound impressive but don’t reveal much. Upwell tracks only one number—“social mentions”—a count of the much smaller group of people who actually say something about an issue online.
After identifying a hot topic, Upwell tries to assemble a social-media strategy within 24 hours—what it calls a “minimum viable campaign.”
“We do the least amount of work to get something out the door that will get results and information,” says Rachel Dearborn, Upwell’s campaign director.
Campaigns that don’t catch on are quickly scrapped. But campaigns that do catch on get more time, energy, and money from Upwell.
After Hurricane Sandy, in 2012, a prominent writer on ocean issues and others began pushing the idea that revitalizing the oyster beds near New York City could help protect the shore from future storm surges. Upwell’s “I (Oyster) New York” campaign featured a catchy logo and led to an even bigger spike in attention.

‘Build-Measure-Learn’

Some organizations that could hardly be called start-ups are also using lean principles. GuideStar, the 20-year-old aggregator of financial information about charities, is using the lean approach to develop tools more quickly that meet the needs of its users.
The lean process promotes short “build-measure-learn” cycles, in which a group frequently updates a product or service based on what it hears from its customers.
GuideStar and the Nonprofit Finance Fund have developed a tool called Financial Scan that allows charities to see how they compare with similar groups on various financial measures, such as their mix of earned revenue and grant funds.
When it analyzed who was using the tool, GuideStar found heavy interest from both foundations and accounting firms, says Evan Paul, GuideStar’s senior director of products and marketing.
In the future, he says, GuideStar may create three versions of Financial Scan to meet the distinct interests of charities, foundations, and accountants.
“We want to get more specific about how people are using our data to make decisions so that we can help make those decisions better and faster,” Mr. Paul says….


Lean Start-Up: a Glossary of Terms for a Hot New Management Approach

Build-Measure-Learn

Instead of spending considerable time developing a product or service for a big rollout, organizations should consider using a continuous feedback loop: “build” a program or service, even if it is not fully fleshed out; “measure” how clients are affected; and “learn” by improving the program or going in a new direction. Repeat the cycle.

Minimum Viable Product

An early version of a product or service that may be lacking some features. This approach allows an organization to obtain feedback from clients and quickly determine the usefulness of a product or service and how to improve it.

Get Out of the Building

To determine whether a product or service is needed, talk to clients and share your ideas with them before investing heavily.

A/B Testing

Create two versions of a product or service, show them to different groups, and see which performs better.
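
As a hypothetical illustration (not drawn from the article), the mechanics of an A/B test can be as simple as a deterministic user split plus a per-variant conversion rate:

```python
import hashlib

def assign_variant(user_id: str, variants=("A", "B")) -> str:
    """Hash the user id so each user always sees the same version across visits."""
    digest = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return variants[digest % len(variants)]

def conversion_rate(shown: int, converted: int) -> float:
    """Fraction of users shown a variant who went on to convert."""
    return converted / shown if shown else 0.0

# Compare the two groups: the variant with the higher rate performs better.
rate_a = conversion_rate(shown=1000, converted=120)  # 12.0%
rate_b = conversion_rate(shown=1000, converted=145)  # 14.5%
winner = "A" if rate_a > rate_b else "B"
```

In practice a significance test would be run on the two rates before declaring a winner, so that small differences aren't mistaken for real effects.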

Failing Fast

By quickly realizing that a product or service isn’t viable, organizations save time and money and gain valuable information for their next effort.

Pivot

A significant change in strategy made when early testing of a minimum viable product shows that the product or service isn’t working or isn’t needed.

Vanity Metrics

Measures that seem to provide a favorable picture but don’t accurately capture the impact of a product. An example might be a tally of website page views. A more meaningful measure—or an “actionable metric,” in the lean lexicon—might be the number of active users of an online service.
Sources: The Lean Startup, by Eric Ries; The Ultimate Dictionary of Lean for Social Good, a publication by Lean Impact”

Behavioural economics and public policy


Tim Harford in the Financial Times: “The past decade has been a triumph for behavioural economics, the fashionable cross-breed of psychology and economics. First there was the award in 2002 of the Nobel Memorial Prize in economics to a psychologist, Daniel Kahneman – the man who did as much as anything to create the field of behavioural economics. Bestselling books were launched, most notably by Kahneman himself (Thinking, Fast and Slow, 2011) and by his friend Richard Thaler, co-author of Nudge (2008). Behavioural economics seems far sexier than the ordinary sort, too: when last year’s Nobel was shared three ways, it was the behavioural economist Robert Shiller who grabbed all the headlines.

Behavioural economics is one of the hottest ideas in public policy. The UK government’s Behavioural Insights Team (BIT) uses the discipline to craft better policies, and in February was part-privatised with a mission to advise governments around the world. The White House announced its own behavioural insights team last summer.

So popular is the field that behavioural economics is now often misapplied as a catch-all term to refer to almost anything that’s cool in popular social science, from the storycraft of Malcolm Gladwell, author of The Tipping Point (2000), to the empirical investigations of Steven Levitt, co-author of Freakonomics (2005).
Yet, as with any success story, the backlash has begun. Critics argue that the field is overhyped, trivial, unreliable, a smokescreen for bad policy, an intellectual dead-end – or possibly all of the above. Is behavioural economics doomed to reflect the limitations of its intellectual parents, psychology and economics? Or can it build on their strengths and offer a powerful set of tools for policy makers and academics alike?…”

The Unwisdom of Crowds


Anne Applebaum on why people-powered revolutions are overrated in the New Republic: “…Yet a successful street revolution, like any revolution, is never guaranteed to leave anything positive in its aftermath—or anything at all. In the West, we often now associate protests with progress, or at least we assume that big crowds—the March on Washington, Paris in 1968—are the benign face of social change. But street revolutions are not always progressive, positive, or even important. Some replace a corrupt tyranny with violence and a political vacuum, which is what happened in Libya. Ukraine’s own Orange Revolution of 2004–2005 produced a new group of leaders who turned out to be just as incompetent as their predecessors. Crowds can be bullying, they can become violent, and they can give rise to extremists: Think Tehran 1979, or indeed Petrograd 1917.
The crowd may not even represent the majority. Because a street revolution makes good copy, and because it provides great photographs, we often mistakenly confuse “people power” with democracy itself. In fact, the creation of democratic institutions—courts, legal systems, bills of rights—is a long and tedious process that often doesn’t interest foreign journalists at all. Tunisia’s ratification of a new constitution earlier this year represented the most significant achievement of the Arab Spring to date, but the agonizing negotiations that led up to that moment were hard for outsiders to understand—and not remotely telegenic.
Equally, it is a dangerous mistake to imagine that “people power” can ever be a substitute for actual elections. On television, a demonstration can loom larger than it should. In both Thailand and Turkey, an educated middle class has recently taken to the streets to protest against democratically elected leaders who have grown increasingly corrupt and autocratic, but who might well be voted back into office tomorrow. In Venezuela, elections are not fair and the media is not free, but the president is supported by many Venezuelans who still have faith in his far-left rhetoric, however much his policies may be damaging the country. Demonstrations might help bring change in some of these countries, but if the change is to be legitimate—and permanent—the electorate will eventually have to endorse it.
As we often forget, some of the most successful transitions to democracy did not involve crowds at all. Chile became a democracy because its dictator, Augusto Pinochet, decided it would become one. In early 1989, well before mass demonstrations in Prague or Berlin, the leaders of the Polish opposition sat down at a large round table with their former jailers and negotiated their way out of communism. There are no spectacular photographs of these transitions, and many people found them unsatisfying, even unjust. But Chile and Poland remain democracies today, not least because their new leaders came to power without any overt opposition from the old regime.
It would be nice if these kinds of transitions were more common, but not every dictator is willing to smooth the path toward change. For that reason, the post-revolutionary moment is often more important than the revolution itself, for this is when the emotion of the mob has to be channeled rapidly—immediately—into legitimate institutions. Not everybody finds this easy. In the wake of the Egyptian revolution, demonstrators found it difficult to abandon Tahrir Square. “We won’t leave because we have to make sure this country is set on the right path,” one protester said at the time. In fact, he should already have been at home, back in his neighborhood, perhaps creating the grassroots political party that might have given Egyptians a real alternative to the Muslim Brotherhood…”

Statistics and Open Data: Harvesting unused knowledge, empowering citizens and improving public services


House of Commons Public Administration Committee (Tenth Report):
“1. Open data is playing an increasingly important role in Government and society. It is data that is accessible to all, free of restrictions on use or redistribution and also digital and machine-readable so that it can be combined with other data, and thereby made more useful. This report looks at how the vast amounts of data generated by central and local Government can be used in open ways to improve accountability, make Government work better and strengthen the economy.

2. In this inquiry, we examined progress against a series of major government policy announcements on open data in recent years, and considered the prospects for further development. We heard of government open data initiatives going back some years, including the decision in 2009 to release some Ordnance Survey (OS) data as open data, and the Public Sector Mapping Agreement (PSMA) which makes OS data available for free to the public sector. The 2012 Open Data White Paper ‘Unleashing the Potential’ says that transparency through open data is “at the heart” of the Government’s agenda and that opening up would “foster innovation and reform public services”. In 2013 the report of the independently chaired review by Stephan Shakespeare, Chief Executive of the market research and polling company YouGov, of the use, re-use, funding and regulation of Public Sector Information urged Government to move fast to make use of data. He criticised traditional public service attitudes to data before setting out his vision:

    • To paraphrase the great retailer Sir Terry Leahy, to run an enterprise without data is like driving by night with no headlights. And yet that is what Government often does. It has a strong institutional tendency to proceed by hunch, or prejudice, or by the easy option. So the new world of data is good for government, good for business, and above all good for citizens. Imagine if we could combine all the data we produce on education and health, tax and spending, work and productivity, and use that to enhance the myriad decisions which define our future; well, we can, right now. And Britain can be first to make it happen for real.

3. This was followed by publication in October 2013 of a National Action Plan which sets out the Government’s view of the economic potential of open data as well as its aspirations for greater transparency.

4. This inquiry is part of our wider programme of work on statistics and their use in Government. A full description of the studies is set out under the heading “Statistics” in the inquiries section of our website, which can be found at www.parliament.uk/pasc. For this inquiry we received 30 pieces of written evidence and took oral evidence from 12 witnesses. We are grateful to all those who have provided evidence and to our Specialist Adviser on statistics, Simon Briscoe, for his assistance with this inquiry.”

Table of Contents:

Summary
1 Introduction
2 Improving accountability through open data
3 Open Data and Economic Growth
4 Improving Government through open data
5 Moving faster to make a reality of open data
6 A strategic approach to open data?
Conclusion
Conclusions and recommendations

How Twitter Could Help Police Departments Predict Crime


Eric Jaffe in Atlantic Cities: “Initially, Matthew Gerber didn’t believe Twitter could help predict where crimes might occur. For one thing, Twitter’s 140-character limit leads to slang and abbreviations and neologisms that are hard to analyze from a linguistic perspective. Beyond that, while criminals occasionally taunt law enforcement via Twitter, few are dumb or bold enough to tweet their plans ahead of time. “My hypothesis was there was nothing there,” says Gerber.
But then, that’s why you run the data. Gerber, a systems engineer at the University of Virginia’s Predictive Technology Lab, did indeed find something there. He reports in a new research paper that public Twitter data improved the predictions for 19 of 25 crimes that occurred early last year in metropolitan Chicago, compared with predictions based on historical crime patterns alone. Predictions for stalking, criminal damage, and gambling saw the biggest bump…..
Of course, the method says nothing about why Twitter data improved the predictions. Gerber speculates that people are tweeting about plans that correlate highly with illegal activity, as opposed to tweeting about crimes themselves.
Let’s use criminal damage as an example. The algorithm identified 700 Twitter topics related to criminal damage; of these, one topic involved the words “united center blackhawks bulls” and so on. Gather enough sports fans with similar tweets and some are bound to get drunk enough to damage public property after the game. Again, this scenario extrapolates far more than the data tell, but it offers a possible window into the algorithm’s predictive power.
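
The combination described in the article, a historical hot-spot baseline plus tweet-topic features feeding a classifier, can be sketched on synthetic data. Everything below (the grid size, the three topics, the labels) is illustrative only, not the paper's actual model or data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_topics = 200, 3  # grid cells covering the city; the paper used 700 topics

hotspot = rng.random((n_cells, 1))        # historical crime density per grid cell
topics = rng.random((n_cells, n_topics))  # tweet-topic proportions per cell
X = np.hstack([np.ones((n_cells, 1)), hotspot, topics])  # intercept + features
y = (hotspot[:, 0] + topics[:, 0] > 1.1).astype(float)   # synthetic "crime occurred" labels

# Fit a logistic regression by gradient descent on the combined features
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted crime probability per cell
    w -= 0.1 * X.T @ (p - y) / n_cells    # gradient step on the log-loss

threat = 1.0 / (1.0 + np.exp(-X @ w))     # per-cell threat surface, like the paper's maps
```

Comparing a model fit on the hot-spot column alone against one fit on all columns is the essence of the paper's test: if the Twitter features carry signal, the combined threat surface ranks crime cells better than history alone.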

The map on the left shows predicted crime threat based on historical patterns; the one on the right includes Twitter data. (Via Decision Support Systems)
From a logistical standpoint, it wouldn’t be too difficult for police departments to use this method in their own predictions; both the Twitter data and modeling software Gerber used are freely available. The big question, he says, is whether a department uses the same historical crime “hot spot” data as a baseline for comparison. If not, a new round of tests would have to be done to show that the addition of Twitter data still offered a predictive upgrade.
There’s also the matter of public acceptance. Data-driven crime prediction tends to raise any number of civil rights concerns. In 2012, privacy advocates criticized the FBI for a similar plan to use Twitter for crime predictions. In recent months the Chicago Police Department’s own methods have been knocked as a high-tech means of racial profiling. Gerber says his algorithms don’t target any individuals and only cull data posted voluntarily to a public account.”