Why Are Political Scientists Studying Ice Bucket Challenges?


From the National Journal: “Who is more civically engaged—the person who votes in every election or the nonvoter who volunteers as a crossing guard at the local elementary school? What about the person who comments on an online news story? Does it count more if he posts the article on his Facebook page and urges his friends to act? What about the retired couple who takes care of the next-door neighbor’s kid after school until her single mom gets home from work?
The concept of civic engagement is mutating so fast that researchers are having a hard time keeping up with it. The Bureau of Labor Statistics has been collecting data on volunteering—defined as doing unpaid work through or for an organization—only since 2002. But even in that relatively short time period, that definition of “volunteering” has become far too limiting to cover the vast array of civic activity sprouting up online and in communities across the country.


Here’s just one example: Based on the BLS data alone, you would think that whites who graduated from college are far more likely to volunteer than African Americans or Hispanics with only high school degrees. But the BLS’s data doesn’t take into account the retired couple mentioned above, who, based on cultural norms, is more likely to be black or Hispanic. It doesn’t capture the young adults in poor neighborhoods who tell researchers that they consider being a role model to younger kids their most important contribution to their communities. Researchers say those informal forms of altruism are more common among minority communities, while BLS-type “volunteering”—say, being a tutor to a disadvantaged child—is more common among middle-class whites. Moreover, the BLS’s data only scratches the surface of political involvement…”

Training Students to Extract Value from Big Data


New report by the National Research Council: “As the availability of high-throughput data-collection technologies, such as information-sensing mobile devices, remote sensing, internet log records, and wireless sensor networks, has grown, science, engineering, and business have rapidly transitioned from striving to develop information from scant data to a situation in which the challenge is now that the amount of information exceeds a human’s ability to examine, let alone absorb, it. Data sets are increasingly complex, and this potentially increases the problems associated with missing information and other quality concerns, data heterogeneity, and differing data formats.
The nation’s ability to make use of data depends heavily on the availability of a workforce that is properly trained and ready to tackle high-need areas. Training students to be capable of exploiting big data requires experience with statistical analysis, machine learning, and computational infrastructure that permits the real problems associated with massive data to be revealed and, ultimately, addressed. Analysis of big data requires cross-disciplinary skills, including the ability to make modeling decisions while balancing trade-offs between optimization and approximation, all while being attentive to useful metrics and system robustness. To develop those skills in students, it is important to identify whom to teach, that is, the educational background, experience, and characteristics of a prospective data-science student; what to teach, that is, the technical and practical content that should be taught to the student; and how to teach, that is, the structure and organization of a data-science program.
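The report’s point about balancing optimization against approximation is easiest to see in miniature. The sketch below is ours, not the report’s: it estimates a statistic from random samples instead of the full data set, showing how error shrinks as the sample grows. The data and sample sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
data = rng.exponential(scale=10.0, size=10_000_000)  # stand-in for a massive data set

# Exact answer: touches every record (expensive at true "big data" scale).
exact_mean = data.mean()

# Approximate answer: a uniform random sample trades accuracy for speed.
for sample_size in (1_000, 10_000, 100_000):
    sample = rng.choice(data, size=sample_size, replace=False)
    approx_mean = sample.mean()
    # Standard error shrinks like 1/sqrt(n): the optimization-vs-approximation trade-off.
    print(f"n={sample_size:>7}: estimate={approx_mean:8.4f}, "
          f"error={abs(approx_mean - exact_mean):.4f}")

print(f"exact mean: {exact_mean:.4f}")
```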
Training Students to Extract Value from Big Data summarizes a workshop convened in April 2014 by the National Research Council’s Committee on Applied and Theoretical Statistics to explore how best to train students to use big data. The workshop explored the need for such training and the curricula and coursework it should include. One impetus for the workshop was the current fragmented view of what is meant by analysis of big data, data analytics, or data science. New graduate programs are introduced regularly, and each has its own notion of what is meant by those terms and, most important, of what students need to know to be proficient in data-intensive work. This report provides a variety of perspectives about those elements and about their integration into courses and curricula…”

3D printed maps could help the blind navigate their city


Springwise: “Modern technology has turned many of the things we consume from physical objects into pixels on a screen. While this has benefited the majority of us, those with sight difficulties don’t get along well with visual stimuli or touchscreen devices. In the past, we’ve seen Yahoo! Japan develop Hands On Search, a project that lets blind kids carry out web searches with 3D printed results. Now the country’s governmental department GSI is creating software that will enable those with visual impairments to print out 3D versions of online maps.
The official mapping body for Japan — much like the US Geological Survey — GSI already has paper maps for the blind, using embossed surfaces to mark out roads. It’s now developing a program that is able to do the same thing for digital maps.
The software first differentiates the highways, railway lines and walkways from the rest of the landscape. It then creates a 3D relief model that uses different textures to distinguish the features so that anyone running their finger along them will be able to determine what it is. The program also takes into account contour lines, creating accurate topographical representations of a particular area…
Website: www.gsi.go.jp
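For readers curious how such a pipeline might look in code, here is a deliberately simplified sketch. It is not GSI’s software: the feature classes, relief heights, and texture names below are assumptions made up for illustration.

```python
# Hypothetical sketch of the pipeline the article describes: classify map features,
# assign each class a distinct tactile texture and height, and rasterize the result
# into a height map that a 3D printer could extrude.
import numpy as np

# Tactile encoding: (relief height in mm, texture id) per feature class (assumed values).
TACTILE_STYLES = {
    "highway": (1.2, "smooth_ridge"),
    "railway": (1.2, "cross_hatched"),
    "walkway": (0.8, "dotted"),
    "contour": (0.4, "thin_line"),
}

def rasterize(features, grid_shape=(200, 200)):
    """Turn classified line features into a printable height grid.

    `features` is a list of (feature_class, list_of_(row, col)_cells) pairs,
    i.e. map geometry already classified and snapped to the grid.
    """
    heights = np.zeros(grid_shape)
    textures = np.empty(grid_shape, dtype=object)
    for feature_class, cells in features:
        height_mm, texture = TACTILE_STYLES[feature_class]
        for row, col in cells:
            # Taller features win where classes overlap, so highways stay legible.
            if height_mm > heights[row, col]:
                heights[row, col] = height_mm
                textures[row, col] = texture
    return heights, textures

# Example: a short stretch of highway crossing a walkway.
heights, textures = rasterize([
    ("highway", [(100, c) for c in range(50, 150)]),
    ("walkway", [(r, 100) for r in range(80, 120)]),
])
print(heights.max(), textures[100, 100])  # 1.2 smooth_ridge
```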

Peer Academy


About: “Peer Academy inspires change through peer-to-peer learning. Our goal is simple: to bring together innovators and collaborators across Corporate, Government and Not For Profits who are passionate about accelerating change in their organisations. Focussed on the skills needed for the 21st century, Peer Academy provides a platform for professionals to increase their capabilities through sharing skills, tools and knowledge…

As we enter the collaborative economy, people will demand collaborative and flexible ways of learning. As people change jobs more frequently, the skills they need will change in variety and pace.
Currently, many education delivery models are not keeping up with the pace of job and career changes. Internal options are often compliance-based and lack inspiration for 21st-century skills. External options can be expensive or time-consuming, a difficult pitch when budgets and resources at many organisations are getting tighter. Finally, many of us want to move beyond the expert-versus-student paradigm and would rather learn from peers who have gone down the tricky path we are venturing on.
We need a new education paradigm for professional development. One where learning happens on-demand, is low-cost, practical and peer-led. This is where Peer Academy comes in…”

The Data Manifesto


Development Initiatives: “Staging a Data Revolution

Accessible, usable, timely and complete data is core to sustainable development and social progress. Access to information provides people with a basis for making better choices and having more control over their lives. Too often attempts to deliver sustainable economic, social and environmental results are hindered by the failure to get the right information, in the right format, to the right people, at the right time. Worse still, the most acute data deficits often affect the people and countries facing the most acute problems.

The Data Revolution should be about data grounded in real life. Data and information that gets to the people who need it at national and sub-national levels to help with the decisions they face – hospital directors, school managers, city councillors, parliamentarians. Data that goes beyond averages – that is disaggregated to show the different impacts of decisions, policies and investments on gender, social groups and people living in different places and over time.
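To make “data that goes beyond averages” concrete, here is a small illustration (ours, not Development Initiatives’; the indicator, column names and numbers are invented): the same figure reported as a single national average and then disaggregated by region and gender.

```python
import pandas as pd

survey = pd.DataFrame({
    "region":            ["North", "North", "South", "South"],
    "gender":            ["female", "male", "female", "male"],
    "school_completion": [0.91, 0.93, 0.58, 0.81],  # completion rate per group
})

# The headline average hides the gap entirely...
print("national average:", round(survey["school_completion"].mean(), 2))

# ...while disaggregation shows who is actually being left behind.
print(survey.groupby(["region", "gender"])["school_completion"].mean())
```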

We need a Data Revolution that sets a new political agenda, that puts existing data to work, that improves the way data is gathered and ensures that information can be used. To deliver this vision, we need the following steps.


12 steps to a Data Revolution

1. Implement a national ‘Data Pledge’ to citizens that is supported by governments, private and non-governmental sectors
2. Address real-world questions with joined-up and disaggregated data
3. Empower and up-skill data users of the future through education
4. Examine existing frameworks and publish existing data
5. Build an information bank of data assets
6. Allocate funding available for better data according to national and sub-national priorities
7. Strengthen national statistical systems’ capacity to collect data
8. Implement a policy that data is ‘open by default’
9. Improve data quality by subjecting it to public scrutiny
10. Put information users’ needs first
11. Recognise technology cannot solve all barriers to information
12. Invest in infomediaries’ capacity to translate data into information that policymakers, civil society and the media can actually use…”

Welcome to The Open Standard



From the beginning, Mozilla has dedicated itself to advocating for an open Web, in the wholehearted belief that open systems create more opportunity for everyone.
From its advocacy work to web literacy programs, to the creation of the Firefox browser, Mozilla has exemplified the journalism adage, “show, don’t tell.” It’s in that tradition that we’re excited to bring you The Open Standard, an original news site dedicated to covering the ideas and opinions that support the open, transparent and collaborative systems at work in our daily lives.
We believe that open systems create healthier communities and more successful societies overall. We will cover everything from open source to open government and the need for transparency; privacy and security; the “Internet of Things” versus “pervasive computing”; and education and whether it is keeping up with technological change. The bottom line? Open is better.
This is just the beginning. Over the next few months, The Open Standard will open itself to collaboration with you, our readers; everything from contributing to the site, to drawing our attention to uncovered issues, to crowdsourcing the news…”

Ebola: Can big data analytics help contain its spread?


From BBC News: “While emergency response teams, medical charities and non-governmental organisations struggle to contain the virus, could big data analytics help?
A growing number of data scientists believe so….
Mobile phones, widely owned in even the poorest countries in Africa, are proving to be a rich source of data in a region where other reliable sources are sorely lacking.
Orange Telecom in Senegal handed over anonymised voice and text data from 150,000 mobile phones to Flowminder, a Swedish non-profit organisation, which was then able to draw up detailed maps of typical population movements in the region.
Authorities could then see where the best places were to set up treatment centres, and more controversially, the most effective ways to restrict travel in an attempt to contain the disease.
The drawback with this data was that it was historical, whereas authorities really need to map movements in real time: people’s movements tend to change during an epidemic.
This is why the US Centers for Disease Control and Prevention (CDC) is also collecting mobile phone mast activity data from mobile operators and mapping where calls to helplines are mostly coming from.

Image: population movement map of West Africa. Mobile phone data from West Africa is being used to map population movements and predict how the Ebola virus might spread.

A sharp increase in calls to a helpline from one particular area would suggest an outbreak and alert authorities to direct more resources there.
Mapping software company Esri is helping CDC to visualise this data and overlay other existing sources of data from censuses to build up a richer picture.
The level of activity at each mobile phone mast also gives a kind of heatmap of where people are and crucially, where and how far they are moving.
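A toy sketch of the alerting logic the article describes might look like the following. The call counts, area names and threshold are invented; a real system would also account for seasonality, population size and reporting lags.

```python
# Flag areas where helpline call volume spikes well above that area's own baseline.
import statistics

# Daily helpline calls per mobile-mast catchment area (last value = today); invented data.
calls_by_area = {
    "area_a": [12, 15, 11, 14, 13, 12, 41],   # sudden spike
    "area_b": [30, 28, 33, 31, 29, 32, 30],   # steady
}

for area, history in calls_by_area.items():
    baseline, today = history[:-1], history[-1]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    z_score = (today - mean) / stdev if stdev else 0.0
    if z_score > 3:  # crude alert threshold, assumed for illustration
        print(f"ALERT {area}: {today} calls today vs baseline ~{mean:.0f} (z={z_score:.1f})")
```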

“We’ve never had this large-scale, anonymised mobile phone data before as a species,” says Nuria Oliver, a scientific director at mobile phone company Telefonica.

“The most positive impact we can have is to help emergency relief organisations and governments anticipate how a disease is likely to spread.
“Until now they had to rely on anecdotal information, on-the-ground surveys, police and hospital reports.”…

This Headline Is One of Many Experiments on You


Will Knight at MIT Technology Review: “On your way to this article, you probably took part in several experiments. You may have helped a search engine test a new way of displaying its results or an online retailer fine-tune an algorithm for recommending products. You may even have helped a news website decide which of two headlines readers are most likely to click on.
In other words, whether you realize it or not, the Web is already a gigantic, nonstop user-testing laboratory. Experimentation offers companies a powerful way to understand what customers want and how they are likely to behave, but it also seems that few people realize quite how much of it is going on.

This became clear in June, when Facebook experienced a backlash after publishing a study on the way negative emotions can spread across its network. The study, conducted by a team of internal researchers and academics, involved showing some people more negative posts than they would otherwise have seen, and then measuring how this affected their behavior. They in fact tended to post more negative content themselves, revealing a kind of “emotional contagion” (see “Facebook’s Emotion Study Is Just Its Latest Effort to Prod Users”).
Businesses have performed market research and other small experiments for years, but the practice has reached new levels of sophistication and complexity, largely because it is so easy to control the user experience on the Web, and then track how people’s behavior changes (see “What Facebook Knows”).
So companies with large numbers of users routinely tweak the information some of them see, and measure the resulting effect on their behavior—a practice known in the industry as A/B testing. Next time you see a credit card offer, for example, you might be one of a small group of users selected at random to see a new design. Or when you log onto Gmail, you may be one of a chosen subset that gets to use a new feature developed by Google’s engineers.
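For the curious, here is a minimal sketch of how such an experiment is typically wired up: users are deterministically bucketed by hashing their ID, one bucket sees the variant, and the click-through rates are compared afterwards. Everything here is illustrative, not any particular company’s system.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, treatment_share: float = 0.5) -> str:
    """Stable assignment: the same user always lands in the same bucket."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000 / 10_000  # roughly uniform in [0, 1)
    return "B" if bucket < treatment_share else "A"

print(assign_variant("user_42", "headline_test"))  # same answer on every visit

# Hypothetical results after the experiment has run:
clicks = {"A": 830, "B": 910}
impressions = {"A": 10_000, "B": 10_000}

rate_a = clicks["A"] / impressions["A"]
rate_b = clicks["B"] / impressions["B"]
print(f"headline A: {rate_a:.1%} CTR, headline B: {rate_b:.1%} CTR")
print("winner:", "B" if rate_b > rate_a else "A")
# In practice a significance test (e.g. a two-proportion z-test) decides whether
# the gap is real before the winning variant ships to everyone.
```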
“When doing things online, there’s a very large probability you’re going to be involved in multiple experiments every day,” Sinan Aral, a professor at MIT’s Sloan School of Management, said during a break at a conference for practitioners of large-scale user experiments last weekend in Cambridge, Massachusetts. “Look at Google, Amazon, eBay, Airbnb, Facebook—all of these businesses run hundreds of experiments, and they also account for a large proportion of Web traffic.”
At the Sloan conference, Ron Kohavi, general manager of the analysis and experimentation team at Microsoft, said each time someone uses the company’s search engine, Bing, he or she is probably involved in around 300 experiments. The insights that designers, engineers, and product managers can glean from these experiments can be worth millions of dollars in advertising revenue, Kohavi said…”

Re-imagining Cities


In cities around the world, digital platforms are bringing together citizens and service providers in innovative ways. In a recent post on Medium, Stefaan Verhulst, Co-Founder and Chief of R&D, and Julia Root, Adjunct Fellow at the GovLab, write about the ways in which cities are re-imagining themselves. They point to four distinct ways that cities are redefining how they plan, build and invest in their futures. Each deploys a different set of technologies and tools that, when combined with urban thinking and design, is changing not just our urban environments but the pace of change as well.
Read the full article here.

Putting Government Data to Work


U.S. Department of Commerce Press Release: “The Governance Lab (GovLab) at New York University today released “Realizing The Potential of Open Government Data: A Roundtable with the U.S. Department of Commerce,” a report on findings and recommendations for ways the U.S. Commerce Department can improve its data management, dissemination and use. The report summarizes a June 2014 Open Data Roundtable, co-hosted by The GovLab and the White House Office of Science and Technology Policy with the Commerce Department, which brought together Commerce data providers and 25 representatives from the private sector and nonprofit organizations for an action-oriented dialogue on data issues and potential solutions. The GovLab is convening a series of other Open Data Roundtables in its mission to help make government more effective and connected to the public through technology.

“We were honored to work with the White House and the Department of Commerce to convene this event,” said Joel Gurin, senior advisor at The GovLab and project director of the Open Data 500 and the Roundtable Series. “The Department’s commitment to engaging with its data customers opens up great opportunities for public-private collaboration.”
Under Secretary of Commerce for Economic Affairs Mark Doms said, “At the Commerce Department, we are only at the beginning of our open data effort. We share the goals and objectives embodied by the call of the Open Data 500: to deliver data that is valuable to industry and that provides greater economic opportunity for millions of Americans.” …”