Revolution in the Age of Social Media


Book by Linda Herrera on the “Egyptian Popular Insurrection and the Internet”: “Egypt’s January 25 revolution was triggered by a Facebook page and played out both in virtual spaces and the streets. Social media serves as a space of liberation, but it also functions as an arena where competing forces vie over the minds of the young as they battle over ideas as important as the nature of freedom and the place of the rising generation in the political order. This book provides piercing insights into the ongoing struggles between people and power in the digital age.”

The Golden Record 2.0 Will Crowdsource A Selfie of Human Culture


Helen Thompson in the Smithsonian: “In 1977, the Voyager 1 and 2 spacecraft launched from Earth, each carrying a “Golden Record”—a gold-plated phonograph record containing analogue images, greetings, and music from Earth. It was meant to be a snapshot of humanity. On the small chance that an alien lifeform encountered Voyager, they could get a sense of who made it.
“This record represents our hope and our determination and our goodwill in a vast and awesome universe,” said Carl Sagan who led the six-member team that created the Golden Record.
No spacecraft has left our solar system since Voyager, but in the next few years, NASA’s New Horizons probe, launched in 2006, will reach Pluto and then pass into the far edges of the solar system and beyond. A new project aims to create a “Golden Record 2.0”. Just like the original record, this new version will represent a sampling of human culture for NASA to transmit to New Horizons just before it soars off into the rest of the universe.
The genesis of the project came from Jon Lomberg, a scientific artist and the designer of the original Golden Record. Over the last year he has recruited experts in a variety of fields to back the project. To demonstrate public support to NASA, he launched a website and put together a petition signed by over 10,000 people in 140 countries. When Lomberg presented the idea to NASA earlier this year, the agency was receptive and will be releasing a statement with further details on the project on August 25. In the meantime, he and his colleague Albert Yu-Min Lin, a research scientist at the University of California, San Diego, gave a preview of their plan at the Smithsonian’s Future Is Here event in Washington, DC, today.
New Horizons will likely only have a small amount of memory available for the content, so what should make the cut? Photos of landscapes and animals (including humans), sound bites of great speakers, popular music, or even videos could end up on the digital record. Lin is developing a platform where people will be able to explore and critique the submissions on the site. “We wanted to make this a democratic discussion,” says Lin. “How do we make this not a conversation about cute cats and Justin Bieber?” One can only guess what aliens might make of the Earth’s YouTube video fodder.
What sets this new effort apart from the original is that the content will be crowdsourced. “We thought this time why not let the people of Earth speak for themselves,” says Lomberg. “Why not figure out a way to crowdsource this message so that people would be able to decide what they wanted to say?” Lomberg has teamed up with Lin, who specializes in crowdsourcing technology, to create a platform where people from all over the world can submit content to be included on the record…”

Data.gov Turns Five


NextGov: “When government technology leaders first described a public repository for government data sets more than five years ago, the vision wasn’t totally clear.
“I just didn’t understand what they were talking about,” said Marion Royal of the General Services Administration, describing his first introduction to the project. “I was thinking, ‘this is not going to work for a number of reasons.’”
A few minutes later, he was the project’s program director. He caught on to that vision, helped clarify it, and has since worked with a small team to shepherd online more than 100,000 data sets compiled and hosted by agencies across federal, state, and local governments.
Many Americans still don’t know what Data.gov is, but chances are good they’ve benefited from the site, perhaps from information such as climate or consumer complaint data. Maybe they downloaded the Red Cross’ Hurricane App after Superstorm Sandy or researched their new neighborhood through a real estate website that drew from government information.
Hundreds of companies pull data they find on the site, which has seen 4.5 million unique visitors from 195 countries, according to GSA. Data.gov has proven a key part of President Obama’s open data policies, which aim to make government more efficient and open as well as to stimulate economic activity by providing private companies, organizations and individuals machine-readable ingredients for new apps, digital tools and programs.”
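
Those machine-readable ingredients are straightforward to sample: Data.gov’s catalog is built on the open-source CKAN platform, so its holdings can be searched programmatically. A minimal Python sketch, assuming CKAN’s documented action API (the query term is illustrative):

```python
# Query Data.gov's CKAN-based catalog for datasets matching a search term.
# Endpoint and response fields follow CKAN's documented action API.
import json
import urllib.parse
import urllib.request

CATALOG = "https://catalog.data.gov/api/3/action/package_search"

def search_datasets(query: str, rows: int = 5):
    """Yield (title, resource formats) for catalog entries matching `query`."""
    url = f"{CATALOG}?q={urllib.parse.quote(query)}&rows={rows}"
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    for dataset in payload["result"]["results"]:
        formats = {r.get("format", "?") for r in dataset.get("resources", [])}
        yield dataset["title"], formats

if __name__ == "__main__":
    for title, formats in search_datasets("consumer complaints"):
        print(f"{title}  [{', '.join(sorted(formats))}]")
```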

How Big Data Could Undo Our Civil-Rights Laws


Virginia Eubanks in the American Prospect: “From “reverse redlining” to selling out a pregnant teenager to her parents, the advance of technology could render obsolete our landmark civil-rights and anti-discrimination laws.
Big Data will eradicate extreme world poverty by 2028, according to Bono, front man for the band U2. But it also allows unscrupulous marketers and financial institutions to prey on the poor. Big Data, collected from the neonatal monitors of premature babies, can detect subtle warning signs of infection, allowing doctors to intervene earlier and save lives. But it can also help a big-box store identify a pregnant teenager—and carelessly inform her parents by sending coupons for baby items to her home. News-mining algorithms might have been able to predict the Arab Spring. But Big Data was certainly used to spy on American Muslims when the New York City Police Department collected license plate numbers of cars parked near mosques, and aimed surveillance cameras at Arab-American community and religious institutions.
Until recently, debate about the role of metadata and algorithms in American politics focused narrowly on consumer privacy protections and Edward Snowden’s revelations about the National Security Agency (NSA). That Big Data might have disproportionate impacts on the poor, women, or racial and religious minorities was rarely raised. But, as Wade Henderson, president and CEO of the Leadership Conference on Civil and Human Rights, and Rashad Robinson, executive director of ColorOfChange, a civil rights organization that seeks to empower black Americans and their allies, point out in a commentary at TPM Cafe, while big data can change business and government for the better, “it is also supercharging the potential for discrimination.”
In his January 17 speech on signals intelligence, President Barack Obama acknowledged as much, seeking to strike a balance between defending “legitimate” intelligence gathering on American citizens and admitting that our country has a history of spying on dissidents and activists, including, famously, Dr. Martin Luther King, Jr. If this balance seems precarious, it’s because the links between historical surveillance of social movements and today’s uses of Big Data are not lost on the new generation of activists.
“Surveillance, big data and privacy have a historical legacy,” says Amalia Deloney, policy director at the Center for Media Justice, an Oakland-based organization dedicated to strengthening the communication effectiveness of grassroots racial justice groups. “In the early 1960s, in-depth, comprehensive, orchestrated, purposeful spying was used to disrupt political movements in communities of color—the Yellow Peril, the American Indian Movement, the Brown Berets, or the Black Panthers—to create fear and chaos, and to spread bias and stereotypes.”
In the era of Big Data, the danger of reviving that legacy is real, especially as metadata collection renders legal protection of civil rights and liberties less enforceable….
Big Data and surveillance are unevenly distributed. In response, a coalition of 14 progressive organizations, including the ACLU, ColorOfChange, the Leadership Conference on Civil and Human Rights, the NAACP, National Council of La Raza, and the NOW Foundation, recently released five “Civil Rights Principles for the Era of Big Data.” In their statement, they demand:

  • An end to high-tech profiling;
  • Fairness in automated decisions;
  • The preservation of constitutional principles;
  • Individual control of personal information; and
  • Protection of people from inaccurate data.

This historic coalition aims to start a national conversation about the role of big data in social and political inequality. “We’re beginning to ask the right questions,” says O’Neill. “It’s not just about what can we do with this data. How are communities of color impacted? How are women within those communities impacted? We need to fold these concerns into the national conversation.”

Open Data at Core of New Governance Paradigm


GovExec: “Rarely are federal agencies compared favorably with Facebook, Instagram, or other modern models of innovation, but there is every reason to believe they can harness innovation to improve mission effectiveness. After all, Aneesh Chopra, former U.S. Chief Technology Officer, reminded the Excellence in Government 2014 audience that government has a long history of innovation. From nuclear fusion to the Internet, the federal government has been at the forefront of technological development.
According to Chopra, the key to fueling innovation and economic prosperity today is open data. But to make the most of open data, government needs to adapt its culture. Chopra outlined three essential elements of doing so:

  1. Involve external experts – integrating outside ideas is second to none as a source of innovation.
  2. Leverage the experience of those on the front lines – federal employees who directly execute their agency’s mission often have the best sense of what does and does not work, and what can be done to improve effectiveness.
  3. Look to the public as a value multiplier – just as Facebook provides a platform on which tens of thousands of developers create greater value, federal agencies can provide the raw material for many more to generate better citizen services.

In addition to these three broad elements, Chopra offered four specific levers government can use to help enact this paradigm shift:

  1. Democratize government data – opening government data to the public facilitates innovation. For example, the National Oceanic and Atmospheric Administration helps generate a $5 billion industry by maintaining almost no intellectual property constraints on its weather data (a small example of querying that data follows this list).
  2. Collaborate on technical standards – government can act as a convener of industry members to standardize technological development, and thereby increase the value of data shared.
  3. Issue challenges and prizes – incentivizing the public to get involved and participate in efforts to create value from government data enhances the government’s ability to serve the public.
  4. Launch government startups – programs like the Presidential Innovation Fellows initiative help challenge rigid bureaucratic structures and spread a culture of innovation.

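As a concrete illustration of the first lever: NOAA’s National Weather Service exposes its forecast data through a free public API at api.weather.gov, with no key required. A minimal Python sketch (the coordinates, for Washington, DC, are illustrative):

```python
# Fetch a NOAA/National Weather Service point forecast from api.weather.gov.
# The API asks clients to identify themselves via a User-Agent header.
import json
import urllib.request

def nws_get(url: str) -> dict:
    req = urllib.request.Request(url, headers={"User-Agent": "open-data-demo"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Step 1: resolve latitude/longitude to the responsible forecast grid.
point = nws_get("https://api.weather.gov/points/38.8894,-77.0352")

# Step 2: fetch the human-readable forecast for that grid.
forecast = nws_get(point["properties"]["forecast"])
for period in forecast["properties"]["periods"][:3]:
    print(period["name"], "-", period["shortForecast"])
```
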
Federal leaders will need a strong political platform to sustain this shift. Fortunately, this blueprint is also bipartisan, says Chopra. Political leaders on both sides of the aisle are already getting behind the movement to bring innovation to the core of government.”

The Secret Science of Retweets


Emerging Technology From the arXiv: “If you send a tweet to a stranger asking them to retweet it, you probably wouldn’t be surprised if they ignored you entirely. But if you sent out lots of tweets like this, perhaps a few might end up being passed on.

How come? What makes somebody retweet information from a stranger? That’s the question addressed today by Kyumin Lee from Utah State University in Logan and a few pals from IBM’s Almaden research center in San Jose….by studying the characteristics of Twitter users, it is possible to identify strangers who are more likely to pass on your message than others. And in doing this, the researchers say they’ve been able to improve the retweet rate of messages sent to strangers by up to 680 percent.
So how did they do it? The new technique is based on the idea that some people are more likely to retweet than others, particularly on certain topics and at certain times of the day. So the trick is to find these individuals and target them when they are likely to be most effective.
The approach was straightforward: study individuals on Twitter, looking at their profiles and past tweeting behavior for clues that they might be more likely to retweet certain types of information. Having found these individuals, send your tweets to them.
That’s the theory. In practice, it’s a little more involved. Lee and co wanted to test people’s response to two types of information: local news (in San Francisco) and tweets about bird flu, a significant issue at the time of their research. They then created several Twitter accounts with a few followers, specifically to broadcast information of this kind.
Next, they selected people to receive their tweets. For the local news broadcasts, they searched for Twitter users geolocated in the Bay area, finding over 34,000 of them and choosing 1,900 at random.
They then sent a single message to each user in the following format:
“@ SFtargetuser “A man was killed and three others were wounded in a shooting … http://bit.ly/KOl2sC” Plz RT this safety news”
So the tweet included the user’s name, a short headline, a link to the story and a request to retweet.
Of these 1,900 people, 52 retweeted the message they received. That’s 2.8 percent.
For the bird flu information, Lee and co hunted for people who had already tweeted about bird flu, finding 13,000 of them and choosing 1,900 at random. Of these, 155 retweeted the message they received, a retweet rate of 8.4 percent.
But Lee and co found a way to significantly improve these retweet rates. They went back to the original lists of Twitter users and collected publicly available information about each of them, such as their personal profile, the number of followers, the people they followed, their 200 most recent tweets, and whether they retweeted the message they had received.
Next, the team used a machine learning algorithm to search for correlations in this data that might predict whether somebody was more likely to retweet. For example, they looked at whether people with older accounts were more likely to retweet or how the ratio of friends to followers influenced the retweet likelihood, or even how the types of negative or positive words they used in previous tweets showed any link. They also looked at the time of day that people were most active in tweeting.
The result was a machine learning algorithm capable of picking users who were most likely to retweet on a particular topic.
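
The paper’s code is not reproduced here, but the pipeline it describes maps onto a standard supervised-learning setup: per-user features feed a classifier trained on who actually retweeted in a first wave of messages. A hedged Python sketch, with feature names and data layout invented for illustration rather than taken from the paper:

```python
# Illustrative sketch (not the authors' published code): hand-built features
# from a user's profile and recent tweets feed a standard classifier that
# scores how likely a stranger is to retweet. Feature choices mirror those
# the article mentions: account age, follower/friend ratio, sentiment of
# past tweets, and activity hours.
from dataclasses import dataclass
from sklearn.linear_model import LogisticRegression

@dataclass
class TwitterUser:
    account_age_days: int
    followers: int
    friends: int
    positive_word_rate: float  # share of positive words in the last 200 tweets
    negative_word_rate: float  # share of negative words in the last 200 tweets
    active_hour_match: float   # overlap of the user's active hours with send time

def features(u: TwitterUser) -> list[float]:
    return [
        float(u.account_age_days),
        u.followers / max(u.friends, 1),  # follower-to-friend ratio
        u.positive_word_rate,
        u.negative_word_rate,
        u.active_hour_match,
    ]

def train(users: list[TwitterUser], retweeted: list[int]) -> LogisticRegression:
    """Fit on users observed in a first wave; label 1 = they retweeted."""
    model = LogisticRegression(max_iter=1000)
    model.fit([features(u) for u in users], retweeted)
    return model

def rank_targets(model: LogisticRegression,
                 candidates: list[TwitterUser], top_k: int = 100):
    """Return the candidates the model scores as most likely to retweet."""
    probs = model.predict_proba([features(u) for u in candidates])[:, 1]
    ranked = sorted(zip(probs, range(len(candidates))), reverse=True)
    return [candidates[i] for _, i in ranked[:top_k]]
```

Candidates ranked this way, and messaged during their historically most active hours, correspond to the targeting strategy whose results follow.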
And the results show that it is surprisingly effective. When the team sent local information tweets to individuals identified by the algorithm, 13.3 percent retweeted it, compared to just 2.6 percent of people chosen at random.
And they got even better results when they timed the request to match the periods when people had been most active in the past. In that case, the retweet rate rose to 19.3 percent. That’s an improvement of over 600 percent.
Similarly, the rate for bird flu information rose from 8.3 percent for users chosen at random to 19.7 percent for users chosen by the algorithm.
That’s a significant result that marketers, politicians, and news organizations will be eyeing with envy.
An interesting question is how they can make this technique more generally applicable. It raises the prospect of an app that allows anybody to enter a topic of interest and then creates a list of people most likely to retweet on that topic in the next few hours.
Lee and co do not mention any plans of this kind. But if they don’t exploit it, then there will surely be others who will.
Ref: arxiv.org/abs/1405.3750 : Who Will Retweet This? Automatically Identifying and Engaging Strangers on Twitter to Spread Information”

The Collective Intelligence Handbook: an open experiment


Michael Bernstein: “Is there really a wisdom of the crowd? How do we get at it and understand it, utilize it, empower it?
You probably have some ideas about this. I certainly do. But I represent just one perspective. What would an economist say? A biologist? A cognitive or social psychologist? An artificial intelligence or human-computer interaction researcher? A communications scholar?
For the last two years, Tom Malone (MIT Sloan) and I (Stanford CS) have worked to bring together all these perspectives into one book. We are nearing completion, and the Collective Intelligence Handbook will be published by the MIT Press later this year. I’m still relatively dumbfounded by the rockstar lineup we have managed to convince to join up.

It’s live.

Today we went live with the authors’ current drafts of the chapters. All the current preprints are here: http://cci.mit.edu/CIchapterlinks.html

And now is when you come in.

But we’re not done. We’d love for you — the crowd — to help us make this book better. We envisioned this as an open process, and we’re excited that all the chapters are now at a point where we’re ready for critique, feedback, and your contributions.
There are two ways you can help:

  • Read the current drafts and leave comments inline in the Google Docs to help us make them better.
  • Drop suggestions in the separate recommended reading list for each chapter. We (the editors) will be using that material to help us write an introduction to each chapter.

We have one month. The authors’ final chapters are due to us in mid-June. So off we go!”

Here’s what’s in the book:

Chapter 1. Introduction
Thomas W. Malone (MIT) and Michael S. Bernstein (Stanford University)
What is collective intelligence, anyway?
Chapter 2. Human-Computer Interaction and Collective Intelligence
Jeffrey P. Bigham (Carnegie Mellon University), Michael S. Bernstein (Stanford University), and Eytan Adar (University of Michigan)
How computation can help gather groups of people to tackle tough problems together.
Chapter 3. Artificial Intelligence and Collective Intelligence
Daniel S. Weld (University of Washington), Mausam (IIT Delhi), Christopher H. Lin (University of Washington), and Jonathan Bragg (University of Washington)
Mixing machine intelligence with human intelligence could enable a synthesized intelligent actor that brings together the best of both worlds.
Chapter 4. Collective Behavior in Animals: An Ecological Perspective
Deborah M. Gordon (Stanford University)
How do groups of animals work together in distributed ways to solve difficult problems?
Chapter 5. The Wisdom of Crowds vs. the Madness of Mobs
Andrew W. Lo (MIT)
Economics has studied a collectively intelligent forum — the market — for a long time. But are we as smart as we think we are?
Chapter 6. Collective Intelligence in Teams and Organizations
Anita Williams Woolley (Carnegie Mellon University), Ishani Aggarwal (Georgia Tech), Thomas W. Malone (MIT)
How do the interactions between groups of people impact how intelligently that group acts?
Chapter 7. Cognition and Collective Intelligence
Mark Steyvers (University of California, Irvine), Brent Miller (University of California, Irvine)
Understanding the conditions under which people are smart individually can help us predict when they might be smart collectively.

Chapter 8. Peer Production: A Modality of Collective Intelligence
Yochai Benkler (Harvard University), Aaron Shaw (Northwestern University), Benjamin Mako Hill (University of Washington)
What have collective efforts such as Wikipedia taught us about how large groups come together to create knowledge and creative artifacts?

The rise of open data driven businesses in emerging markets


Alla Morrison at the World Bank blog:

Key findings —

  • Many new data companies have emerged around the world in the last few years. Of these companies, the majority use some form of government data.
  • There are a large number of data companies in sectors with high social impact and tremendous development opportunities.
  • An actionable pipeline of data-driven companies exists in Latin America and in Asia. The most desired type of financing is equity, followed by quasi-equity in the amounts ranging from $100,000 to $5 million, with averages of between $2 and $3 million depending on the region. The total estimated need for financing may exceed $400 million.

“The economic value of open data is no longer a hypothesis
How can one make money with open data, which is akin to air – free and open to everyone? Should the World Bank Group play a catalyzing role for a sector that is just emerging? And if so, what set of interventions would be the most effective? Can promoting open data-driven businesses contribute to the World Bank Group’s twin goals of fighting poverty and boosting shared prosperity?
These questions have been top of mind since the World Bank Open Finances team convened a group of open data entrepreneurs from across Latin America to share their business models, success stories and challenges at the Open Data Business Models workshop in Uruguay in June 2013. We were in Uruguay to find out whether open data could lead to the creation of sustainable new businesses and jobs. To do so, we tested a couple of hypotheses: that open data has economic value beyond the benefits of increased transparency and accountability; and that open data companies with sustainable business models already exist in emerging economies.
Encouraged by our findings in Uruguay we set out to further explore the economic development potential of open data, with a focus on:

  • Contribution of open data to countries’ GDP;
  • Innovative solutions to tackle social problems in key sectors like agriculture, health, education, transportation, climate change, financial services, especially those benefiting low income populations;
  • Economic benefits of governments’ buy-in into the commercial value of open data and resulting release of new datasets, which in turn would lead to increased transparency in public resource management (reductions in misallocations, a more level playing field in procurement) and better service delivery; and
  • Creation of data-related private sector jobs, especially suited to the tech-savvy young generation.

We proposed a joint IFC/World Bank approach (From open data to development impact – the crucial role of private sector) that envisages providing financing to data-driven companies through a dedicated investment fund, as well as loans and grants to governments to create a favorable enabling environment. The concept was, for the most part, received enthusiastically by a wide group of peers at the Bank and the IFC, as well as by NGOs, foundations, DFIs and private sector investors.
Thanks also in part to a McKinsey report last fall stating that open data could help unlock more than $3 trillion in value every year, the potential value of open data is now better understood. The acquisition of Climate Corporation (whose business model holds enormous potential for agriculture and food security, if governments open up the right data) for close to a billion dollars last November and the findings of the Open Data 500 project, led by the GovLab at NYU, further substantiated the hypothesis. These days no one asks whether open data has economic value; the focus has shifted to finding ways for companies, both startups and large corporations, and governments to unlock it. The first question, though, is: is it still too early to plan a significant intervention to spur open data-driven economic growth in emerging markets?”

The Social Machine


New book by Judith Donath: “Computers were first conceived as “thinking machines,” but in the twenty-first century they have become social machines, online places where people meet friends, play games, and collaborate on projects. In this book, Judith Donath argues persuasively that for social media to become truly sociable media, we must design interfaces that reflect how we understand and respond to the social world. People and their actions are still harder to perceive online than face to face: interfaces are clunky, and we have less sense of other people’s character and intentions, where they congregate, and what they do.
Donath presents new approaches to creating interfaces for social interaction. She addresses such topics as visualizing social landscapes, conversations, and networks; depicting identity with knowledge markers and interaction history; delineating public and private space; and bringing the online world’s open sociability into the physical world. Donath asks fundamental questions about how we want to live online and offers thought-provoking designs that explore radically new ways of interacting and communicating.”

Public service workers will have to become Jacks and Jills of all trades


Catherine Needham in the Guardian: “When Kent county council was looking to save money a couple of years ago, it hit upon the idea of merging the roles of library manager and registrar. Library managers were expected to register births and deaths on top of their existing duties, and registrars took on roles in libraries. One former library manager chose to leave the service as a result. It wasn’t, he said, what he signed up for: “I don’t associate the skills in running a library with those of a registrar. I don’t have the emotional skill to do it.”
Since the council was looking to cut staff numbers, it was probably not too troubled by his departure. But this does raise questions about how to support staff who are being asked to work well beyond their professional boundaries.
In our 21st Century Public Servant project at the University of Birmingham, we have found that this trend is evident across public services. We interviewed local government managers who said staff needed to think differently about their skills. As one put it: “We need to use people’s latent talent – if you are a librarian, for example, a key skill will be working with people from the local community. It’s about a different background mindset: ‘I am not just here to do a specific job, but to help the people of this town.'”

The skills of this generic public service worker include interpersonal skills (facilitation, empathy, political skills), analysing skills (sorting evidence, making judgements, offering critique and being creative), organisation (particularly for group work and collaboration) and communication skills (such as using social media and multimedia resources).
The growing interest in genericism seems to have two main drivers. The first, of course, is austerity. Cost cutting on an unprecedented scale in local authorities requires those staff that survive the waves of redundancies to be willing to take on new roles and work in multi-purpose settings. The second is the drive for whole-person approaches in which proper engagement with the public might require staff to cross traditional sector boundaries.
It is good that public service workers are being granted greater flexibility. But there are two main limitations to this move to greater genericism. The first is that multi-tasking in an era of cost cutting can look a lot like deprofessionalisation. Within social work, for example, concerns have been expressed about the downgrading of social work posts (by appointing brokers in their place, say) and the resulting loss of professional skills and knowledge.
A second limitation is that skills training continues to be sectoral, failing to catch up with the move to genericism….”