Free Online Lawmaking Platform for Washington, D.C.


OpenGov Foundation: “At-Large Councilmember David Grosso and The OpenGov Foundation today launched the beta version of MadisonDC, a free online lawmaking tool that empowers citizens to engage directly with their elected officials – and the policymaking process itself – by commenting on, proposing changes to, and debating real D.C. Council legislation.  Grosso is the first-ever District elected official to give citizens the opportunity to log on and legislate, putting him at the forefront of a nation-wide movement reinventing local legislatures with technology.  Three bills are now open for crowdsourcing on MadisonDC: a plan to fully legalize marijuana, a proposal to make zoning laws more friendly to urban farmers, and legislation to create open primary elections….
MadisonDC is the District of Columbia’s version of the free Madison software that reinvents government for the Internet Age.  Madison 1.0 powered the American people’s successful defense of Internet freedom from Congressional threats.  It delivered the first crowdsourced bill in the history of the U.S. Congress.  And now, the non-partisan, non-profit OpenGov Foundation has released Madison 2.0, empowering you to participate in your government, efficiently access your elected officials, and hold them accountable.”

How Big Data Could Undo Our Civil-Rights Laws


Virginia Eubanks in the American Prospect: “From ‘reverse redlining’ to selling out a pregnant teenager to her parents, the advance of technology could render obsolete our landmark civil-rights and anti-discrimination laws.
Big Data will eradicate extreme world poverty by 2028, according to Bono, front man for the band U2. But it also allows unscrupulous marketers and financial institutions to prey on the poor. Big Data, collected from the neonatal monitors of premature babies, can detect subtle warning signs of infection, allowing doctors to intervene earlier and save lives. But it can also help a big-box store identify a pregnant teenager—and carelessly inform her parents by sending coupons for baby items to her home. News-mining algorithms might have been able to predict the Arab Spring. But Big Data was certainly used to spy on American Muslims when the New York City Police Department collected license plate numbers of cars parked near mosques, and aimed surveillance cameras at Arab-American community and religious institutions.
Until recently, debate about the role of metadata and algorithms in American politics focused narrowly on consumer privacy protections and Edward Snowden’s revelations about the National Security Agency (NSA). That Big Data might have disproportionate impacts on the poor, women, or racial and religious minorities was rarely raised. But, as Wade Henderson, president and CEO of the Leadership Conference on Civil and Human Rights, and Rashad Robinson, executive director of ColorOfChange, a civil rights organization that seeks to empower black Americans and their allies, point out in a commentary at TPM Cafe, while big data can change business and government for the better, “it is also supercharging the potential for discrimination.”
In his January 17 speech on signals intelligence, President Barack Obama acknowledged as much, seeking to strike a balance between defending “legitimate” intelligence gathering on American citizens and admitting that our country has a history of spying on dissidents and activists, including, famously, Dr. Martin Luther King, Jr. If this balance seems precarious, it’s because the links between historical surveillance of social movements and today’s uses of Big Data are not lost on the new generation of activists.
“Surveillance, big data and privacy have a historical legacy,” says Amalia Deloney, policy director at the Center for Media Justice, an Oakland-based organization dedicated to strengthening the communication effectiveness of grassroots racial justice groups. “In the early 1960s, in-depth, comprehensive, orchestrated, purposeful spying was used to disrupt political movements in communities of color—the Yellow Peril, the American Indian Movement, the Brown Berets, or the Black Panthers—to create fear and chaos, and to spread bias and stereotypes.”
In the era of Big Data, the danger of reviving that legacy is real, especially as metadata collection renders legal protection of civil rights and liberties less enforceable….
Big Data and surveillance are unevenly distributed. In response, a coalition of 14 progressive organizations, including the ACLU, ColorOfChange, the Leadership Conference on Civil and Human Rights, the NAACP, National Council of La Raza, and the NOW Foundation, recently released five “Civil Rights Principles for the Era of Big Data.” In their statement, they demand:

  • An end to high-tech profiling;
  • Fairness in automated decisions;
  • The preservation of constitutional principles;
  • Individual control of personal information; and
  • Protection of people from inaccurate data.

This historic coalition aims to start a national conversation about the role of big data in social and political inequality. “We’re beginning to ask the right questions,” says O’Neill. “It’s not just about what can we do with this data. How are communities of color impacted? How are women within those communities impacted? We need to fold these concerns into the national conversation.”

Open Data at Core of New Governance Paradigm


GovExec: “Rarely are federal agencies compared favorably with Facebook, Instagram, or other modern models of innovation, but there is every reason to believe they can harness innovation to improve mission effectiveness. After all, Aneesh Chopra, former U.S. Chief Technology Officer, reminded the Excellence in Government 2014 audience that government has a long history of innovation. From nuclear fusion to the Internet, the federal government has been at the forefront of technological development.
According to Chopra, the key to fueling innovation and economic prosperity today is open data. But to make the most of open data, government needs to adapt its culture. Chopra outlined three essential elements of doing so:

  1. Involve external experts – integrating outside ideas is second to none as a source of innovation.
  2. Leverage the experience of those on the front lines – federal employees who directly execute their agency’s mission often have the best sense of what does and does not work, and what can be done to improve effectiveness.
  3. Look to the public as a value multiplier – just as Facebook provides a platform for tens of thousands of developers to provide greater value, federal agencies can provide the raw material for many more to generate better citizen services.

In addition to these three broad elements, Chopra offered four specific levers government can use to help enact this paradigm shift:

  1. Democratize government data – opening government data to the public facilitates innovation. For example, the National Oceanic and Atmospheric Administration helps sustain a $5 billion industry by placing almost no intellectual property constraints on its weather data.
  2. Collaborate on technical standards – government can act as a convener of industry members to standardize technological development, and thereby increase the value of data shared.
  3. Issue challenges and prizes – incentivizing the public to get involved and participate in efforts to create value from government data enhances the government’s ability to serve the public.
  4. Launch government startups – programs like the Presidential Innovation Fellows initiative help challenge rigid bureaucratic structures and spread a culture of innovation.

Federal leaders will need a strong political platform to sustain this shift. Fortunately, this blueprint is also bipartisan, says Chopra. Political leaders on both sides of the aisle are already getting behind the movement to bring innovation to the core of government.

Three projects meet the European Job Challenge and receive the Social Innovation Prize


EU Press Release: “Social innovation can be a tool to create new or better jobs, while giving an answer to pressing challenges faced by Europe. Today, Michel Barnier, European Commissioner, has awarded three European Social Innovation prizes to ground-breaking ideas to create new types of work and address social needs. The winning projects aim to help disadvantaged women by employing them to create affordable and limited fashion collections, create jobs in the sector of urban farming, and convert abandoned social housing into learning spaces and entrepreneurship labs.

After the success of the first edition in 2013, the European Commission launched a second round of the Social Innovation Competition in memory of Diogo Vasconcelos. Its main goal was to invite Europeans to propose new solutions to answer The Job Challenge. The Commission received 1,254 ideas out of which three were awarded with a prize of €30,000 each.

Commissioner Michel Barnier said: “We believe that the winning projects can take advantage of unmet social needs and create sustainable jobs. I want these projects to be scaled up and replicated and inspire more social innovations in Europe. We need to tap into this potential to bring innovative solutions to the needs of our citizens and create new types of work.”

More information on the Competition page

More jobs for Europe – three outstanding ideas

The following new and exceptional ideas are the winners of the second edition of the European Social Innovation Competition:

  • ‘From waste to wow! QUID project’ (Italy): the fashion business demands perfection, and slightly damaged textiles cannot be used for top brands. The project intends to recycle this first-quality waste into limited collections and thereby provide jobs to disadvantaged women. This is about creating highly marketable products and social value through recycling.

  • ‘Urban Farm Lease’ (Belgium): urban agriculture could provide 6,000 direct jobs in Brussels, and an additional 1,500 jobs considering indirect employment (distribution, waste management, training or events). The project aims at providing training, connection and consultancy so that unemployed people can take advantage of the large surfaces available for agriculture in the city (e.g. 908 hectares of land or 394 hectares of suitable flat roofs).

  • ‘Voidstarter’ (Ireland): all major cities in Europe have “voids”, units of social housing which are empty because city councils have insufficient budgets to make them into viable homes. At the same time these cities also experience pressure with social housing provision and homelessness. Voidstarter will provide unemployed people with learning opportunities alongside skilled tradespersons in the refurbishing of the voids.”

The Secret Science of Retweets


Emerging Technology From the arXiv: “If you send a tweet to a stranger asking them to retweet it, you probably wouldn’t be surprised if they ignored you entirely. But if you sent out lots of tweets like this, perhaps a few might end up being passed on.

How come? What makes somebody retweet information from a stranger? That’s the question addressed today by Kyumin Lee from Utah State University in Logan and a few pals from IBM’s Almaden research center in San Jose…. By studying the characteristics of Twitter users, it is possible to identify strangers who are more likely to pass on your message than others. And in doing this, the researchers say they’ve been able to improve the retweet rate of messages sent to strangers by up to 680 percent.
So how did they do it? The new technique is based on the idea that some people are more likely to tweet than others, particularly on certain topics and at certain times of the day. So the trick is to find these individuals and target them when they are likely to be most effective.
The approach was straightforward: study individuals on Twitter, looking at their profiles and their past tweeting behavior for clues that they might be more likely to retweet certain types of information. Having found these individuals, send your tweets to them.
That’s the theory. In practice, it’s a little more involved. Lee and co wanted to test people’s response to two types of information: local news (in San Francisco) and tweets about bird flu, a significant issue at the time of their research. They then created several Twitter accounts with a few followers, specifically to broadcast information of this kind.
Next, they selected people to receive their tweets. For the local news broadcasts, they searched for Twitter users geolocated in the Bay area, finding over 34,000 of them and choosing 1,900 at random.
They then sent a single message to each user in the format:
“@ SFtargetuser “A man was killed and three others were wounded in a shooting … http://bit.ly/KOl2sC” Plz RT this safety news”
So the tweet included the user’s name, a short headline, a link to the story and a request to retweet.
Of these 1,900 people, 52 retweeted the message they received. That’s 2.8 percent.
For the bird flu information, Lee and co hunted for people who had already tweeted about bird flu, finding 13,000 of them and choosing 1,900 at random. Of these, 155 retweeted the message they received, a retweet rate of 8.4 percent.
But Lee and co found a way to significantly improve these retweet rates. They went back to the original lists of Twitter users and collected publicly available information about each of them, such as their personal profile, the number of followers, the people they followed, their 200 most recent tweets and whether they retweeted the message they had received.
Next, the team used a machine learning algorithm to search for correlations in this data that might predict whether somebody was more likely to retweet. For example, they looked at whether people with older accounts were more likely to retweet, how the ratio of friends to followers influenced retweet likelihood, and whether the negative or positive words used in previous tweets showed any link. They also looked at the time of day that people were most active in tweeting.
The result was a machine learning algorithm capable of picking users who were most likely to retweet on a particular topic.
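The pipeline described above — extract features from each user’s profile and past behavior, fit a model on observed retweets, then rank strangers by predicted likelihood — can be sketched in a few lines. Everything here (the feature set, the synthetic users and labels, and the plain logistic regression trained by gradient descent) is an illustrative assumption, not the authors’ actual implementation:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=500):
    """Plain stochastic-gradient-descent logistic regression (no external deps)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log-loss for this example
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def retweet_score(w, b, x):
    """Predicted probability that a user with features x retweets."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Hypothetical per-user features, each normalized to [0, 1]:
# [account age, followers/friends ratio, topical tweets, activity at send hour]
random.seed(0)
def make_user(engaged):
    base = 0.8 if engaged else 0.2
    return [min(1.0, max(0.0, base + random.uniform(-0.15, 0.15))) for _ in range(4)]

# Synthetic training set standing in for the users who did / did not retweet.
X = [make_user(True) for _ in range(50)] + [make_user(False) for _ in range(50)]
y = [1] * 50 + [0] * 50

w, b = train_logistic(X, y)

# Rank unseen candidates and target the most promising one first.
candidates = {"engaged_user": make_user(True), "quiet_user": make_user(False)}
ranked = sorted(candidates, key=lambda u: retweet_score(w, b, candidates[u]),
                reverse=True)
print(ranked[0])  # the engaged user should rank first
```

A real deployment would replace the synthetic users with features mined from actual profiles and tweet histories, and send requests only to the top of the ranking, which is essentially what Lee and co did.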
And the results show that it is surprisingly effective. When the team sent local information tweets to individuals identified by the algorithm, 13.3 percent retweeted it, compared to just 2.6 percent of people chosen at random.
And they got even better results when they timed the request to match the periods when people had been most active in the past. In that case, the retweet rate rose to 19.3 percent. That’s an improvement of over 600 percent.
Similarly, the rate for bird flu information rose from 8.3 percent for users chosen at random to 19.7 percent for users chosen by the algorithm.
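As a quick sanity check, the claimed gains follow from simple relative-change arithmetic on the reported rates (the “up to 680 percent” headline figure presumably comes from a slightly different comparison in the paper):

```python
# Retweet rates reported above, in percent.
random_local, targeted_timed_local = 2.6, 19.3  # local news: random vs. algorithm + timing
random_flu, targeted_flu = 8.3, 19.7            # bird flu: random vs. algorithm

# Relative improvement: (new - old) / old, expressed as a percentage.
improvement_local = (targeted_timed_local - random_local) / random_local * 100
improvement_flu = (targeted_flu - random_flu) / random_flu * 100

print(round(improvement_local))  # ≈ 642, i.e. "over 600 percent"
print(round(improvement_flu))    # ≈ 137
```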
That’s a significant result that marketers, politicians, and news organizations will be eyeing with envy.
An interesting question is how they can make this technique more generally applicable. It raises the prospect of an app that allows anybody to enter a topic of interest and which then creates a list of people most likely to retweet on that topic in the next few hours.
Lee and co do not mention any plans of this kind. But if they don’t exploit it, then there will surely be others who will.
Ref: arxiv.org/abs/1405.3750 : Who Will Retweet This? Automatically Identifying and Engaging Strangers on Twitter to Spread Information”

The Collective Intelligence Handbook: an open experiment


Michael Bernstein: “Is there really a wisdom of the crowd? How do we get at it and understand it, utilize it, empower it?
You probably have some ideas about this. I certainly do. But I represent just one perspective. What would an economist say? A biologist? A cognitive or social psychologist? An artificial intelligence or human-computer interaction researcher? A communications scholar?
For the last two years, Tom Malone (MIT Sloan) and I (Stanford CS) have worked to bring together all these perspectives into one book. We are nearing completion, and the Collective Intelligence Handbook will be published by the MIT Press later this year. I’m still relatively dumbfounded by the rockstar lineup we have managed to convince to join up.

It’s live.

Today we went live with the authors’ current drafts of the chapters. All the current preprints are here: http://cci.mit.edu/CIchapterlinks.html

And now is when you come in.

But we’re not done. We’d love for you — the crowd — to help us make this book better. We envisioned this as an open process, and we’re excited that all the chapters are now at a point where we’re ready for critique, feedback, and your contributions.
There are two ways you can help:

  • Read the current drafts and leave comments inline in the Google Docs to help us make them better.
  • Drop suggestions in the separate recommended reading list for each chapter. We (the editors) will be using that material to help us write an introduction to each chapter.

We have one month. The authors’ final chapters are due to us in mid-June. So off we go!”

Here’s what’s in the book:

Chapter 1. Introduction
Thomas W. Malone (MIT) and Michael S. Bernstein (Stanford University)
What is collective intelligence, anyway?
Chapter 2. Human-Computer Interaction and Collective Intelligence
Jeffrey P. Bigham (Carnegie Mellon University), Michael S. Bernstein (Stanford University), and Eytan Adar (University of Michigan)
How computation can help gather groups of people to tackle tough problems together.
Chapter 3. Artificial Intelligence and Collective Intelligence
Daniel S. Weld (University of Washington), Mausam (IIT Delhi), Christopher H. Lin (University of Washington), and Jonathan Bragg (University of Washington)
Mixing machine intelligence with human intelligence could enable a synthesized intelligent actor that brings together the best of both worlds.
Chapter 4. Collective Behavior in Animals: An Ecological Perspective
Deborah M. Gordon (Stanford University)
How do groups of animals work together in distributed ways to solve difficult problems?
Chapter 5. The Wisdom of Crowds vs. the Madness of Mobs
Andrew W. Lo (MIT)
Economics has studied a collectively intelligent forum — the market — for a long time. But are we as smart as we think we are?
Chapter 6. Collective Intelligence in Teams and Organizations
Anita Williams Woolley (Carnegie Mellon University), Ishani Aggarwal (Georgia Tech), Thomas W. Malone (MIT)
How do the interactions between groups of people impact how intelligently that group acts?
Chapter 7. Cognition and Collective Intelligence
Mark Steyvers (University of California, Irvine), Brent Miller (University of California, Irvine)
Understanding the conditions under which people are smart individually can help us predict when they might be smart collectively.

Chapter 8. Peer Production: A Modality of Collective Intelligence
Yochai Benkler (Harvard University), Aaron Shaw (Northwestern University), Benjamin Mako Hill (University of Washington)
What have collective efforts such as Wikipedia taught us about how large groups come together to create knowledge and creative artifacts?

CrowdOut: A mobile crowdsourcing service for road safety in digital cities


New paper by Elian Aubry: “Nowadays cities invest more in their public services, particularly digital ones, to improve their residents’ quality of life and attract more people. Thus, new crowdsourcing services are appearing, based on contributions from mobile users equipped with smartphones. For example, respect for the traffic code is essential to ensure citizens’ security and welfare in their city. In this paper, we present CrowdOut, a new mobile crowdsourcing service for improving road safety in cities. CrowdOut allows users to report traffic offences they witness in real time and to map them on a city plan. The CrowdOut service has been implemented, and experiments and demonstrations have been performed in the urban environment of Grand Nancy, France. The service lets users appropriate their urban environment through active participation in the collectivity. It also gives city administrators a tool to support decision-making, to improve their urbanization policy, or to check the impact of that policy on the city environment.”

The rise of open data driven businesses in emerging markets


Alla Morrison at the World Bank blog:

Key findings —

  • Many new data companies have emerged around the world in the last few years. Of these companies, the majority use some form of government data.
  • There are a large number of data companies in sectors with high social impact and tremendous development opportunities.
  • An actionable pipeline of data-driven companies exists in Latin America and in Asia. The most desired type of financing is equity, followed by quasi-equity, in amounts ranging from $100,000 to $5 million, with averages between $2 million and $3 million depending on the region. The total estimated need for financing may exceed $400 million.

“The economic value of open data is no longer a hypothesis
How can one make money with open data, which is akin to air – free and open to everyone? Should the World Bank Group be in the catalyzer role for a sector that is just emerging?  And if so, what set of interventions would be the most effective? Can promoting open data-driven businesses contribute to the World Bank Group’s twin goals of fighting poverty and boosting shared prosperity?
These questions have been top of mind since the World Bank Open Finances team convened a group of open data entrepreneurs from across Latin America to share their business models, success stories and challenges at the Open Data Business Models workshop in Uruguay in June 2013. We were in Uruguay to find out whether open data could lead to the creation of sustainable new businesses and jobs. To do so, we tested a couple of hypotheses: open data has economic value, beyond the benefits of increased transparency and accountability; and open data companies with sustainable business models already exist in emerging economies.
Encouraged by our findings in Uruguay we set out to further explore the economic development potential of open data, with a focus on:

  • Contribution of open data to countries’ GDP;
  • Innovative solutions to tackle social problems in key sectors like agriculture, health, education, transportation, climate change, financial services, especially those benefiting low income populations;
  • Economic benefits of governments’ buy-in into the commercial value of open data and resulting release of new datasets, which in turn would lead to increased transparency in public resource management (reductions in misallocations, a more level playing field in procurement) and better service delivery; and
  • Creation of data-related private sector jobs, especially suited for the tech savvy young generation.

We proposed a joint IFC/World Bank approach (From open data to development impact – the crucial role of private sector) that envisages providing financing to data-driven companies through a dedicated investment fund, as well as loans and grants to governments to create a favorable enabling environment. The concept was received enthusiastically for the most part by a wide group of peers at the Bank, the IFC, as well as NGOs, foundations, DFIs and private sector investors.
Thanks also in part to a McKinsey report last fall stating that open data could help unlock more than $3 trillion in value every year, the potential value of open data is now better understood. The acquisition of Climate Corporation (whose business model holds enormous potential for agriculture and food security, if governments open up the right data) for close to a billion dollars last November and the findings of the Open Data 500 project led by the GovLab at NYU further substantiated the hypothesis. These days no one asks whether open data has economic value; the focus has shifted to finding ways for companies, both startups and large corporations, and governments to unlock it. The first question though is – is it still too early to plan a significant intervention to spur open data driven economic growth in emerging markets?”

After Sustainable Cities?


New book edited by Mike Hodson and Simon Marvin: “A sustainable city has been defined in many ways. Yet, the most common understanding is a vision of the city that is able to meet the needs of the present without compromising the ability of future generations to meet their own needs. Central to this vision are two ideas: cities should meet social needs, especially of the poor, and not exceed the ability of the global environment to meet needs.
After Sustainable Cities critically reviews what has happened to these priorities and asks whether these social commitments have been abandoned in a period of austerity governance and climate change and replaced by a darker and unfair city. This book provides the first comprehensive and comparative analysis of the new eco-logics reshaping conventional sustainable cities discourse and environmental priorities of cities in both the global north and south. The dominant discourse on sustainable cities, with a commitment to intergenerational equity, social justice and global responsibility, has come under increasing pressure. Under conditions of global ecological change, international financial and economic crisis and austerity governance new eco-logics are entering the urban sustainability lexicon – climate change, green growth, smart growth, resilience and vulnerability, ecological security. This book explores how these new eco-logics reshape our understanding of equity, justice and global responsibility, and how these more technologically and economically driven themes resonate and dissonate with conventional sustainable cities discourse. This book provides a warning that a more technologically driven and narrowly constructed economic agenda is driving ecological policy and weakening previous commitment to social justice and equity.
After Sustainable Cities brings together leading researchers to provide a critical examination of these new logics and identify what sort of city is now emerging, as well as consider the longer-term implications for sustainable cities research and policy.”

The Social Machine


New book by Judith Donath: “Computers were first conceived as “thinking machines,” but in the twenty-first century they have become social machines, online places where people meet friends, play games, and collaborate on projects. In this book, Judith Donath argues persuasively that for social media to become truly sociable media, we must design interfaces that reflect how we understand and respond to the social world. People and their actions are still harder to perceive online than face to face: interfaces are clunky, and we have less sense of other people’s character and intentions, where they congregate, and what they do.
Donath presents new approaches to creating interfaces for social interaction. She addresses such topics as visualizing social landscapes, conversations, and networks; depicting identity with knowledge markers and interaction history; delineating public and private space; and bringing the online world’s open sociability into the physical world. Donath asks fundamental questions about how we want to live online and offers thought-provoking designs that explore radically new ways of interacting and communicating.”