New Technology and the Prevention of Violence and Conflict


Report edited by Francesco Mancini for the International Peace Institute: “In an era of unprecedented interconnectivity, this report explores the ways in which new technologies can assist international actors, governments, and civil society organizations to more effectively prevent violence and conflict. It examines the contributions that cell phones, social media, crowdsourcing, crisis mapping, blogging, and big data analytics can make to short-term efforts to forestall crises and to long-term initiatives to address the root causes of violence.
Five case studies assess the use of such tools in a variety of regions (Africa, Asia, Latin America) experiencing different types of violence (criminal violence, election-related violence, armed conflict, short-term crisis) in different political contexts (restrictive and collaborative governments).
Drawing on lessons and insights from across the cases, the authors outline a how-to guide for leveraging new technology in conflict-prevention efforts:
1. Examine all tools.
2. Consider the context.
3. Do no harm.
4. Integrate local input.
5. Help information flow horizontally.
6. Establish consensus regarding data use.
7. Foster partnerships for better results.”

Things Fall Apart: How Social Media Leads to a Less Stable World


Commentary by Curtis Hougland at Knowledge@Wharton: “James Foley. David Haines. Steven Sotloff. The list of people beheaded by followers of the Islamic State of Iraq and Syria (ISIS) keeps growing. The filming of these acts on video and distribution via social media platforms such as Twitter represent a geopolitical trend in which social media has become the new frontline for proxy wars across the globe. While social media does indeed advance connectivity and wealth among people, its proliferation at the same time results in a markedly less stable world.
That social media benefits mankind is irrefutable. I have been an evangelist for the power of new media for 20 years. However, technology in the form of globalized communication, transportation and supply chains conspires to make today’s world more complex. Events in any corner of the world now impact the rest of the globe quickly and sharply. Nations are being pulled apart along sectarian seams in Iraq, tribal divisions in Afghanistan, national interests in Ukraine and territorial fences in Gaza. These conflicts portend a quickening of global unrest, confirmed by Foreign Policy magazine’s map of civil protest. The ISIS videos are simply the exposed wire. I believe that over the next century, even great nations will Balkanize — break into smaller nations. One of the principal drivers of this Balkanization is social media.
Social media is a behavior, an expression of the innate human need to socialize and share experiences. Social media is not simply a set of technology channels and networks. Both the public and private sectors have underestimated the human imperative to behave socially. The evidence is now clear: more than 52% of the world’s population lives in cities, and approximately 2 billion people are active in social media globally. Some 96% of content emanates from individuals, not brands, media or governments — a volume that far exceeds participation in democratic elections.
Social media is not egalitarian, though. Despite the exponential growth of user-generated content, people prefer to congregate online around like-minded individuals. Rather than seek out new beliefs, people choose to reinforce their existing political opinions through their actions online. This is illustrated in Pew Internet’s 2014 study, “Mapping Twitter Topic Networks from Polarized Crowds to Community Clusters.” Individuals self-organize by affinity, and within affinity, by sensibility and personality. The ecosystem of social media is predicated on delivering more of what the user already likes. This, precisely, is the function of a Follow or Like. In this way, media coagulates rather than fragments online….”

New Data for a New Energy Future


(This post originally appeared on the blog of the U.S. Chamber of Commerce Foundation.)

Two growing concerns—climate change and U.S. energy self-sufficiency—have accelerated the search for affordable, sustainable approaches to energy production and use. In this area, as in many others, data-driven innovation is a key to progress. Data scientists are working to help improve energy efficiency and make new forms of energy more economically viable, and are building new, profitable businesses in the process.
In the same way that government data has been used by other kinds of new businesses, the Department of Energy is releasing data that can help energy innovators. At a recent “Energy Datapalooza” held by the department, John Podesta, counselor to the President, summed up the rationale: “Just as climate data will be central to helping communities prepare for climate change, energy data can help us reduce the harmful emissions that are driving climate change.” With electric power accounting for one-third of greenhouse gas emissions in the United States, the opportunities for improvement are great.
The GovLab has been studying the business applications of public government data, or “open data,” for the past year. The resulting study, the Open Data 500, now provides structured, searchable information on more than 500 companies that use open government data as a key business driver. A review of those results shows four major areas where open data is creating new business opportunities in energy and is likely to build many more in the near future.

Commercial building efficiency
Commercial buildings are major energy consumers, and energy costs are a significant business expense. Despite programs like LEED Certification, many commercial buildings waste large amounts of energy. Now a company called FirstFuel, based in Boston, is using open data to drive energy efficiency in these buildings. At the Energy Datapalooza, Swap Shah, the company’s CEO, described how analyzing energy data together with geospatial, weather, and other open data can give a very accurate view of a building’s energy consumption and ways to reduce it. (Sometimes the solution is startlingly simple: According to Shah, the largest source of waste is running heating and cooling systems at the same time.) Other companies are taking on the same kind of task – like Lucid, which provides an operating system that can track a building’s energy use in an integrated way.
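The waste pattern Shah describes can be spotted directly in interval meter data. A minimal sketch of that idea, with hypothetical readings and a hypothetical threshold (not FirstFuel's actual method):

```python
# Flag intervals where a building draws significant heating AND
# cooling load at the same time -- per Shah, a top source of waste.

def simultaneous_hvac_waste(readings, threshold_kw=5.0):
    """readings: list of (timestamp, heating_kw, cooling_kw) tuples.
    Returns the timestamps where both loads exceed the threshold."""
    return [ts for ts, heat, cool in readings
            if heat > threshold_kw and cool > threshold_kw]

# Hypothetical 15-minute interval data for one morning
data = [
    ("08:00", 12.0, 0.5),   # heating only -- fine
    ("08:15", 10.5, 7.2),   # both running -- wasteful
    ("08:30", 9.8, 6.1),    # both running -- wasteful
    ("08:45", 0.3, 8.0),    # cooling only -- fine
]
print(simultaneous_hvac_waste(data))  # ['08:15', '08:30']
```

In practice a tool like FirstFuel's would join such readings with weather and geospatial data rather than rely on fixed thresholds.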

Home energy use
A number of companies are finding data-driven solutions for homeowners who want to save money by reducing their energy usage. A key to success is putting together measurements of energy use in the home with public data on energy efficiency solutions. PlotWatt, for example, promises to help consumers “save money with real-time energy tracking” through the data it provides. One of the best-known companies in this area, Opower, uses a psychological strategy: it simultaneously gives people access to their own energy data and lets them compare their energy use to their neighbors’ as an incentive to save. Opower partners with utilities to provide this information, and the Virginia-based company has been successful enough to open offices in San Francisco, London, and Singapore. Soon more and more people will have access to data on their home energy use: Green Button, a government-promoted program implemented by utilities, now gives about 100 million Americans data about their energy consumption.
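The neighbor-comparison incentive Opower popularized reduces to a simple percentile calculation. A toy sketch with hypothetical usage figures:

```python
# Rank a household's monthly usage against comparable nearby homes,
# the psychological nudge described above. All numbers hypothetical.

def neighbor_comparison(my_kwh, neighbor_kwh):
    """Return the share of comparable neighbors using MORE energy."""
    higher = sum(1 for kwh in neighbor_kwh if kwh > my_kwh)
    return higher / len(neighbor_kwh)

neighbors = [620, 540, 710, 480, 905, 660, 730, 580]
share = neighbor_comparison(550, neighbors)
print(f"You used less energy than {share:.0%} of similar homes")
# -> You used less energy than 75% of similar homes
```

A real deployment would draw the usage figures from utility data such as the Green Button feeds mentioned above.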

Solar power and renewable energy
As solar power becomes more efficient and affordable, a number of companies are emerging to support this energy technology. Clean Power Finance, for example, uses its database to connect solar entrepreneurs with sources of capital. In a different way, a company called Solar Census is analyzing publicly available data to find exactly where solar power can be produced most efficiently. The kind of analysis that used to require an on-site survey over several days can now be done in less than a minute with their algorithms.
Other kinds of geospatial and weather data can support other forms of renewable energy. The data will make it easier to find good sites for wind power stations, water sources for small-scale hydroelectric projects, and the best opportunities to tap geothermal energy.

Supporting new energy-efficient vehicles
The Tesla and other electric vehicles are becoming commercially viable, and we will soon see even more efficient vehicles on the road. Toyota has announced that its first fuel-cell cars, which run on hydrogen, will be commercially available by mid-2015, and other auto manufacturers have announced plans to develop fuel-cell vehicles as well. But these vehicles can’t operate without a network to supply power, be it electricity for a Tesla battery or hydrogen for a fuel cell.
It’s a chicken-and-egg problem: People won’t buy large numbers of electric or fuel-cell cars unless they know they can power them, and power stations will be scarce until there are enough vehicles to support their business. Now some new companies are facilitating this transition by giving drivers data-driven tools to find and use the power sources they need. Recargo, for example, provides tools to help electric car owners find charging stations and operate their vehicles.
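The core of a station-finding tool like Recargo's is a nearest-point lookup over an open dataset of charging locations. A minimal sketch (station names and coordinates are hypothetical):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearest_station(car_lat, car_lon, stations):
    """stations: list of (name, lat, lon); returns the closest one."""
    return min(stations,
               key=lambda s: haversine_km(car_lat, car_lon, s[1], s[2]))

stations = [
    ("Downtown Garage", 34.05, -118.25),
    ("Airport Lot", 33.94, -118.40),
]
print(nearest_station(34.04, -118.26, stations)[0])  # Downtown Garage
```

A production service would add filters for connector type and real-time availability, which is where the data-driven value lies.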
The development of new energy sources will involve solving social, political, economic, and technological issues. Data science can help develop solutions and bring us more quickly to a new kind of energy future.
Joel Gurin is senior advisor at the GovLab and project director of the Open Data 500. He also serves as a fellow of the U.S. Chamber of Commerce Foundation.

Google’s Waze announces government data exchange program with 10 initial partners


Josh Ong at TheNextWeb blog: “Waze today announced “Connected Citizens,” a new government partnership program that will see both parties exchange data in order to improve traffic conditions.

For the program, Waze will provide real-time anonymized crowdsourced traffic data to government departments in exchange for information on public projects like construction, road sensors, and pre-planned road closures.
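The anonymization Waze describes amounts to publishing only public incident types and stripping user-identifying fields. A hypothetical sketch of such a feed transform (field names are illustrative, not Waze's actual schema):

```python
# Keep only public alert types; drop user identity and GPS traces,
# consistent with sharing "only public alerts" as described below.

PUBLIC_ALERT_TYPES = {"ACCIDENT", "ROAD_CLOSED", "HAZARD"}

def to_public_feed(raw_reports):
    """Strip identifying fields and drop non-public report types."""
    feed = []
    for report in raw_reports:
        if report["type"] not in PUBLIC_ALERT_TYPES:
            continue
        feed.append({
            "type": report["type"],
            "street": report["street"],
            "reported_at": report["reported_at"],
        })  # user id and exact GPS trace deliberately omitted
    return feed

raw = [
    {"type": "ACCIDENT", "street": "Av. Atlantica", "reported_at": "14:02",
     "user_id": "u123", "gps_trace": [(-22.97, -43.18)]},
    {"type": "CHIT_CHAT", "street": "Av. Atlantica", "reported_at": "14:03",
     "user_id": "u456", "gps_trace": []},
]
print(to_public_feed(raw))
```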

The first 10 partners include:

  • Rio de Janeiro, Brazil
  • Barcelona, Spain and the Government of Catalonia
  • Jakarta, Indonesia
  • Tel Aviv, Israel
  • San Jose, Costa Rica
  • Boston, USA
  • State of Florida, USA
  • State of Utah, USA
  • Los Angeles County
  • The New York Police Department (NYPD)

Waze has also signed on five other government partners and has received applications from more than 80 municipal groups. The company ran an initial pilot program in Rio de Janeiro where it partnered with the city’s traffic control center to supplement the department’s sensor data with reports from Waze users.

At an event celebrating the launch, Di-Ann Eisnor, head of Growth at Waze, noted that the data exchange will only include public alerts, such as accidents and closures.

“We don’t share anything beyond that, such as where individuals are located and who they are,” she said.

Eisnor also made it clear that Waze isn’t selling the data. GPS maker TomTom came under fire several years ago after customers learned that the company had sold their data to police departments to help find the best places to put speed traps.

“We keep [the data] clean by making sure we don’t have a business model around it,” Eisnor added.

Waze requires that new Connected Citizens partners “prove their dedication to citizen engagement and commit to use Waze data to improve city efficiency.”…”

A Vision for Happier Cities


Post at the Huffington Post: “…Governments such as those of Bhutan and Venezuela are creating departments of happiness, and in both the US and UK, ‘nudge’ teams have been set up to focus on behavioral psychology. This gets more interesting when we bring in urban planning and neuroscience research, which shows that community aesthetics are a key contributor to our happiness, while at the same time positive emotions can change our thoughts and lead to changes in our behaviors.
It was only after moving to New York City that I realized all my experiences… painting, advising executive boards, creative workshops, statistics and writing books about organizational change… gave me a unique set of tools to create the Dept. of Well Being and start a global social impact initiative, which is powered by public art installations entitled Happy Street Signs™.
New York City got the first Happy Street Signs last November. I used my paintings containing positive phrases like “Honk Less Love More” and “New York Loves You” to manufacture 200 government-specification street signs. They were then installed by a team of fifty volunteers around Manhattan and Brooklyn in 90 minutes. Whilst it was unofficial, the objective was to generate smiles for New Yorkers and then survey reactions. We got clipboards out and asked over 600 New Yorkers if they liked the Happy Street Signs and if they wanted more: 92.5 percent of those people said yes!…”

CityBeat: Visualizing the Social Media Pulse of the City


CityBeat is an academic research project that set out to develop an application that sources, monitors, and analyzes hyper-local information from multiple social media platforms such as Instagram and Twitter in real time.

This project was led by researchers at the Jacobs Institute at Cornell Tech, in collaboration with The New York World (Columbia Journalism School), Rutgers University, NYU, and Columbia University….

If you are interested in the technical details, we have published several papers detailing the process of building CityBeat. Enjoy your read!

Xia C., Schwartz, R., Xie K., Krebs A., Langdon A., Ting J. and Naaman M., CityBeat: Real-time Social Media Visualization of Hyper-local City Data. In Proceedings, WWW 2014, Seoul, Korea, April 2014. [PDF]

Xie K., Xia C., Grinberg N., Schwartz R., and Naaman M., Robust detection of hyper-local events from geotagged social media data. In Proceedings of the 13th Workshop on Multimedia Data Mining in KDD, 2013. [PDF]

Schwartz, R., Naaman M., Matni, Z. (2013) Making Sense of Cities Using Social Media: Requirements for Hyper-Local Data Aggregation Tools. In Proceedings, WCMCW at ICWSM 2013, Boston, USA, July 2013. [PDF]

#OpenGovNow: Open Government and how it benefits you


#OpenGovNow: “Open Governments are built on two things: information and participation. A government that is open actively discloses information about what it does with its money and resources in a way that all citizens can understand. Equally important, an Open Government is one that actively involves all citizens as participants in government decision-making. This two-way relationship between citizens and governments, in which governments and citizens share information with one another and work together, is the foundation of Open Government….

Why should I care?

The water that you drink, the public schools that children go to, the roads that you use every day: governments make those a reality. Governments and what they do affect each and every one of us. How governments operate and how they spend scarce public resources have a direct impact on our everyday lives and the future of our communities. For instance, it is estimated that governments all over the world spend $9.5 trillion through contracts — therefore you have a role to play in making sure that your share of that public money is not lost, stolen, or misused….
The Global Opening Government Survey was conducted in response to the growing demand to better understand citizens’ views on the current state and the potential impact of openness. Using the innovative “random domain intercept technology,” the survey was based on a brief questionnaire and collected complete responses from over 65,000 web-enabled individuals in the first 61 member countries of the Open Government Partnership (OGP) plus India. The survey methodology, like any other, has its advantages and its limitations. More details about the methodology can be found in the Additional Resources section of this page….”

Mapping the Next Frontier of Open Data: Corporate Data Sharing


Stefaan Verhulst at the GovLab (cross-posted at the UN Global Pulse Blog): “When it comes to data, we are living in the Cambrian Age. About ninety percent of the data that exists today has been generated within the last two years. We create 2.5 quintillion bytes of data on a daily basis—equivalent to a “new Google every four days.”
All of this means that we are certain to witness a rapid intensification in the process of “datafication”– already well underway. Use of data will grow increasingly critical. Data will confer strategic advantages; it will become essential to addressing many of our most important social, economic and political challenges.
This explains–at least in large part–why the Open Data movement has grown so rapidly in recent years. More and more, it has become evident that questions surrounding data access and use are emerging as one of the transformational opportunities of our time.
Today, it is estimated that over one million datasets have been made open or public. The vast majority of this open data is government data—information collected by agencies and departments in countries as varied as India, Uganda and the United States. But what of the terabyte after terabyte of data that is collected and stored by corporations? This data is also quite valuable, but it has been harder to access.
The topic of private sector data sharing was the focus of a recent conference organized by the Responsible Data Forum, Data and Society Research Institute and Global Pulse (see event summary). Participants at the conference, which was hosted by The Rockefeller Foundation in New York City, included representatives from a variety of sectors who converged to discuss ways to improve access to private data: the data held by private entities and corporations. The push for such access was rooted in a broad recognition that private data has the potential to foster much public good. At the same time, a variety of constraints—notably privacy and security, but also proprietary interests and data protectionism on the part of some companies—hold back this potential.
The framing for issues surrounding sharing private data has been broadly referred to under the rubric of “corporate data philanthropy.” The term refers to an emerging trend whereby companies have started sharing anonymized and aggregated data with third-party users who can then look for patterns or otherwise analyze the data in ways that lead to policy insights and other public good. The term was coined at the World Economic Forum meeting in Davos, in 2011, and has gained wider currency through Global Pulse, a United Nations data project that has popularized the notion of a global “data commons.”
Although still far from prevalent, some examples of corporate data sharing exist….

Help us map the field

A more comprehensive mapping of the field of corporate data sharing would draw on a wide range of case studies and examples to identify opportunities and gaps, and to inspire more corporations to allow access to their data (consider, for instance, the GovLab Open Data 500 mapping for open government data). From a research point of view, the following questions would be important to ask:

  • What types of data sharing have proven most successful, and which ones least?
  • Who are the users of corporate shared data, and for what purposes?
  • What conditions encourage companies to share, and what are the concerns that prevent sharing?
  • What incentives can be created (economic, regulatory, etc.) to encourage corporate data philanthropy?
  • What differences (if any) exist between shared government data and shared private sector data?
  • What steps need to be taken to minimize potential harms (e.g., to privacy and security) when sharing data?
  • What’s the value created from using shared private data?

We (the GovLab; Global Pulse; and Data & Society) welcome your input to add to this list of questions, or to help us answer them by providing case studies and examples of corporate data philanthropy. Please add your examples below, use our Google Form or email them to us at corporatedata@thegovlab.org”

Bridging the Knowledge Gap: In Search of Expertise


New paper by Beth Simone Noveck, The GovLab, for Democracy: “In the early 2000s, the Air Force struggled with a problem: Pilots and civilians were dying because of unusual soil and dirt conditions in Afghanistan. The soil was getting into the rotors of the Sikorsky UH-60 helicopters and obscuring the view of its pilots—what the military calls a “brownout.” According to the Air Force’s senior design scientist, the manager tasked with solving the problem didn’t know where to turn quickly to get help. As it turns out, the man practically sitting across from him had nine years of experience flying these Black Hawk helicopters in the field, but the manager had no way of knowing that. Civil service titles such as director and assistant director reveal little about skills or experience.
In the fall of 2008, the Air Force sought to fill in these kinds of knowledge gaps. The Air Force Research Laboratory unveiled Aristotle, a searchable internal directory that integrated people’s credentials and experience from existing personnel systems, public databases, and users themselves, thus making it easy to discover quickly who knew and had done what. Near-term budgetary constraints killed Aristotle in 2013, but the project underscored a glaring need in the bureaucracy.
Aristotle was an attempt to solve a challenge faced by every agency and organization: quickly locating expertise to solve a problem. Prior to Aristotle, the DOD had no coordinated mechanism for identifying expertise across 200,000 of its employees. Dr. Alok Das, the senior scientist for design innovation tasked with implementing the system, explained, “We don’t know what we know.”
This is a common situation. The government currently has no systematic way of getting help from all those with relevant expertise, experience, and passion. For every success on Challenge.gov—the federal government’s platform where agencies post open calls to solve problems for a prize—there are a dozen open-call projects that never get seen by those who might have the insight or experience to help. This kind of crowdsourcing is still too ad hoc, infrequent, and unpredictable—in short, too unreliable—for the purposes of policy-making.
Which is why technologies like Aristotle are so exciting. Smart, searchable expert networks offer the potential to lower the costs and speed up the process of finding relevant expertise. Aristotle never reached this stage, but an ideal expert network is a directory capable of including not just experts within the government, but also outside citizens with specialized knowledge. This leads to a dual benefit: accelerating the path to innovative and effective solutions to hard problems while at the same time fostering greater citizen engagement.
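At bottom, an expert network like Aristotle is a searchable index from skills and experience to people. A toy sketch of that core idea, with hypothetical names and skill labels:

```python
from collections import defaultdict

def build_index(people):
    """people: list of (name, set_of_skills) -> skill -> names index."""
    index = defaultdict(set)
    for name, skills in people:
        for skill in skills:
            index[skill.lower()].add(name)
    return index

def find_experts(index, *skills):
    """Return the people matching ALL requested skills."""
    sets = [index.get(s.lower(), set()) for s in skills]
    return set.intersection(*sets) if sets else set()

# Civil service titles reveal little; self-reported experience does more.
staff = [
    ("A. Director", {"program management"}),
    ("B. Pilot", {"UH-60 flight operations", "desert conditions"}),
]
idx = build_index(staff)
print(find_experts(idx, "UH-60 flight operations", "desert conditions"))
# {'B. Pilot'}
```

The hard parts Aristotle tackled lie upstream of this lookup: pulling credentials from personnel systems, public databases, and the users themselves so the index actually reflects who knows and has done what.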
Could such an expert-network platform revitalize the regulatory-review process? We might find out soon enough, thanks to the Food and Drug Administration…”

Data Mining Reveals How Social Coding Succeeds (And Fails)


Emerging Technology From the arXiv: “Collaborative software development can be hugely successful or fail spectacularly. An analysis of the metadata associated with these projects is teasing apart the difference….
The process of developing software has undergone huge transformation in the last decade or so. One of the key changes has been the evolution of social coding websites, such as GitHub and BitBucket.
These allow anyone to start a collaborative software project that other developers can contribute to on a voluntary basis. Millions of people have used these sites to build software, sometimes with extraordinary success.
Of course, some projects are more successful than others. And that raises an interesting question: what are the differences between successful and unsuccessful projects on these sites?
Today, we get an answer from Yuya Yoshikawa at the Nara Institute of Science and Technology in Japan and a couple of pals at the NTT Laboratories, also in Japan.  These guys have analysed the characteristics of over 300,000 collaborative software projects on GitHub to tease apart the factors that contribute to success. Their results provide the first insights into social coding success from this kind of data mining.
A social coding project begins when a group of developers outline a project and begin work on it. These are the “internal developers” and have the power to update the software in a process known as a “commit”. The number of commits is a measure of the activity on the project.
External developers can follow the progress of the project by “starring” it, a form of bookmarking on GitHub. The number of stars is a measure of the project’s popularity. These external developers can also request changes, such as additional features and so on, in a process known as a pull request.
Yoshikawa and co begin by downloading the data associated with over 300,000 projects from the GitHub website. This includes the number of internal developers, the number of stars a project receives over time and the number of pull requests it gets.
The team then analyse the effectiveness of the project by calculating factors such as the number of commits per internal team member, the popularity of the project over time, the number of pull requests that are fulfilled and so on.
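The per-project measures described above reduce to simple ratios over the downloaded metadata. A sketch with hypothetical numbers (not drawn from the paper's dataset):

```python
# Effectiveness measures of the kind Yoshikawa and co compute:
# activity (commits per internal developer), popularity (stars),
# and the share of pull requests that get fulfilled.

def project_metrics(n_internal, n_commits, n_stars,
                    prs_opened, prs_merged):
    return {
        "commits_per_member": n_commits / n_internal,
        "stars": n_stars,
        "pr_fulfillment": prs_merged / prs_opened if prs_opened else 0.0,
    }

m = project_metrics(n_internal=4, n_commits=220, n_stars=150,
                    prs_opened=40, prs_merged=30)
print(m)
# {'commits_per_member': 55.0, 'stars': 150, 'pr_fulfillment': 0.75}
```

Tracking these ratios over time, rather than as single snapshots, is what lets the analysis separate projects that sustain momentum from those that stall.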
The results provide a fascinating insight into the nature of social coding. Yoshikawa and co say the number of internal developers on a project plays a significant role in its success. “Projects with larger numbers of internal members have higher activity, popularity and sociality,” they say….
Ref: arxiv.org/abs/1408.6012 : Collaboration on Social Media: Analyzing Successful Projects on Social Coding”