5 cool ways connected data is being used


 at Wareable: “The real news behind the rise of wearable tech isn’t so much the gadgetry as the gigantic amount of personal data that it harnesses.

Concerns have already been raised over what companies may choose to do with such valuable information, with one US life insurance company already using Fitbits to track customers’ exercise and offer them discounts when they hit their activity goals.

Despite a mildly worrying potential dystopia in which our own data could be used against us, there are plenty of positive ways in which companies are using vast amounts of connected data to make the world a better place…

Parkinson’s disease research

Apple’s ResearchKit was recently unveiled as a platform for collecting collaborative data for medical studies, but Apple isn’t the first company to rely on crowdsourced data for medical research.

The Michael J. Fox Foundation for Parkinson’s Research recently unveiled a partnership with Intel to improve research and treatment for the neurodegenerative brain disease. Wearables are being used to unobtrusively gather real-time data from sufferers, which is then analysed by medical experts….

Saving the rhino

Connected data and wearable tech aren’t just for humans. In South Africa, the Madikwe Conservation Project is using wearable-based data to protect endangered rhinos from callous poachers.

A combination of ultra-strong Kevlar ankle collars, each powered by an Intel Galileo chip, and an RFID chip implanted in each rhino’s horn allows the animals to be monitored. Any break in proximity between the anklet and horn results in anti-poaching teams being deployed to catch the bad guys….

Making public transport smart

A company called Snips is collecting huge amounts of urban data in order to improve infrastructure. In partnership with French national rail operator SNCF, Snips produced an app called Tranquilien to utilise location data from commuters’ phones and smartwatches to track which parts of the rail network were busy at which times.

Combining big data with crowdsourcing, the information helps passengers to pick a train where they can find a seat during peak times, while the data can also be useful to local businesses when serving the needs of commuters who are passing through.

Improving the sports fan experience

We’ve already written about how wearable tech is changing the NFL, but the collection of personal data is also set to benefit the fans.

Levi’s Stadium – the new home of the San Francisco 49ers – opened in 2014 and is one of the most technically advanced sports venues in the world. As well as a strong Wi-Fi signal throughout the stadium, fans also benefit from a dedicated app. This not only offers instant replays and real-time game information, but it also helps them find a parking space, order food and drinks directly to their seat and even check the lines at the toilets. As fans use the app, all of the data is collated to enhance the fan experience in future….

Creating interactive art

Don’t be put off by the words ‘interactive installation’. On Broadway is a cool work of art that “represents life in the 21st Century city through a compilation of images and data collected along the 13 miles of Broadway that span Manhattan”….(More)”

How to use mobile phone data for good without invading anyone’s privacy


Leo Mirani in Quartz: “In 2014, when the West African Ebola outbreak was at its peak, some academics argued that the epidemic could have been slowed by using mobile phone data.

Their premise was simple: call-data records show the true nature of social networks and human movement. Understanding social networks and how people really move, as seen from phone movements and calls, could give health officials the ability to predict how a disease will spread and where it will strike next, and to prepare accordingly.
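To make that premise concrete, the core computation can be sketched in a few lines. This is a toy illustration with invented records and tower names, not an actual epidemiological pipeline: consecutive calls by the same subscriber logged at different cell towers are counted as a trip between those areas, which yields the kind of origin-destination matrix that mobility and disease-spread models consume.

```python
from collections import defaultdict

# Toy call records: (subscriber_id, timestamp, tower_id).
# All identifiers and values are invented for illustration.
records = [
    ("A", 1, "tower_1"), ("A", 2, "tower_2"), ("A", 3, "tower_2"),
    ("B", 1, "tower_1"), ("B", 2, "tower_3"),
    ("C", 1, "tower_2"), ("C", 2, "tower_1"),
]

def mobility_matrix(records):
    """Count tower-to-tower transitions per subscriber.

    Consecutive calls by the same subscriber at different towers are
    treated as one trip between the areas those towers cover.
    """
    by_user = defaultdict(list)
    for user, _ts, tower in sorted(records):  # sorts by (user, timestamp)
        by_user[user].append(tower)

    trips = defaultdict(int)
    for towers in by_user.values():
        for origin, destination in zip(towers, towers[1:]):
            if origin != destination:
                trips[(origin, destination)] += 1
    return dict(trips)

print(mobility_matrix(records))
# {('tower_1', 'tower_2'): 1, ('tower_1', 'tower_3'): 1, ('tower_2', 'tower_1'): 1}
```

In practice researchers work with billions of such records and must correct for sampling bias (phone owners are not a random sample of the population), but the aggregation step is essentially this.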

The problem is that call-data records are hard to get hold of. The files themselves are huge, there are enormous privacy risks, and the process of making the records safe for distribution is long.

First, the technical basics

Every time you make a phone call from your mobile phone to another mobile phone, the network records the following information (note: this is not a complete list):

  • The number from which the call originated
  • The number at which the call terminated
  • Start time of the call
  • Duration of the call
  • The ID number of the phone making the call
  • The ID number of the SIM card used to make the call
  • The code for the antenna used to make the call

On their own, these records are not creepy. Indeed, without them, networks would be unable to connect calls or bill customers. But it is easy to see why operators aren’t rushing to share this information. Even though the data includes none of the actual content of a phone call, simply knowing which number is calling which, and from where and when, is usually more than enough to identify people.
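A common first step before sharing such records, described here as a generic technique rather than any particular operator’s pipeline, is pseudonymization: replacing phone numbers with keyed hashes so that records about the same subscriber can still be linked to each other without exposing the real number. A minimal sketch, assuming a secret key that never leaves the operator:

```python
import hashlib
import hmac

SECRET_KEY = b"operator-held-secret"  # hypothetical; never shipped with the data

def pseudonymize(phone_number: str, key: bytes = SECRET_KEY) -> str:
    """Replace a phone number with a keyed hash (HMAC-SHA256).

    The same number always maps to the same token, so the call graph
    survives, but the mapping cannot be reversed without the key.
    Note: this alone does NOT anonymize the data; movement and calling
    patterns can still re-identify individuals.
    """
    return hmac.new(key, phone_number.encode(), hashlib.sha256).hexdigest()[:16]

a = pseudonymize("+221700000001")
b = pseudonymize("+221700000001")  # same number, same token
c = pseudonymize("+221700000002")  # different number, different token
```

Consistent tokens keep the data useful to researchers, but since metadata alone can identify people, pseudonymization has to be combined with aggregation, noise, or strict access controls before release.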

So how can network operators use this valuable data for good while also protecting their own interests and those of their customers? A good example can be found in Africa, where Orange, a French mobile phone network with interests across several African countries, has for the second year run its “Data for Development” (D4D) program, which offers researchers a chance to mine call data for clues on development problems.

Steps to safe sharing

After a successful first year in Ivory Coast, Orange this year ran the D4D program in Senegal. The aim of the program is to give researchers and scientists at universities and other research labs access to data in order to find novel ways to aid development in health, agriculture, transport or urban planning, energy, and national statistics….(More)”

Enhancing Social Accountability Through ICT: Success Factors and Challenges


Wakabi, Wairagala and Grönlund, Åke for the International Conference for E-Democracy and Open Government 2015: “This paper examines the state of citizen participation in public accountability processes via Information and Communication Technologies (ICT). It draws on three projects that use ICT to report public service delivery failures in Uganda, mainly in the education, public health and roads sectors. While presenting common factors hampering meaningful use of ICT for citizens’ monitoring of public services and eParticipation in general, the paper studies the factors that enabled successful whistle-blowing using toll-free calling, blogging, radio talk shows, SMS texting, and e-mailing. The paper presents examples of the positive impacts of whistle-blowing mechanisms and draws up a list of success factors applicable to these projects. It also outlines common challenges and drawbacks to initiatives that use ICT to enable citizen participation in social accountability. The paper provides pathways that could give ICT-for-participation and -accountability initiatives in countries with characteristics similar to Uganda’s a good chance of achieving success. While focusing on Uganda, the paper may be of practical value to policy makers, development practitioners and academics in countries with similar socio-economic standings….(More)”

A new approach to measuring the impact of open data


at Sunlight Foundation: “Strong evidence on the long-term impact of open data initiatives is incredibly scarce. The lack of compelling proof is partly due to the relative novelty of the open government field, but also to the inherent difficulties in measuring good governance and social change. We know that much of the impact of policy advocacy, for instance, occurs even before a new law or policy is introduced, and is thus incredibly difficult to evaluate. At the same time, it is also very hard to detect the causality between a direct change in the legal environment and the specific activities of a policy advocacy group. Attribution is equally challenging when it comes to assessing behavioral changes – who gets to take credit for increased political engagement and greater participation in democratic processes?

Open government projects tend to operate in an environment where the contribution of other stakeholders and initiatives is essential to achieving sustainable change, making it even more difficult to show the causality between a project’s activities and the impact it strives to achieve. Therefore, these initiatives cannot be described through simple “cause and effect” relationships, as they mostly achieve changes through their contribution to outcomes produced by a complex ecosystem of stakeholders — including journalists, think tanks, civil society organizations, public officials and many more — making it even more challenging to measure their direct impact.

We at the Sunlight Foundation wanted to tackle some of the methodological challenges of the field through building an evidence base that can empower further generalizations and advocacy efforts, as well as developing a methodological framework to unpack theories of change and to evaluate the impact of open data and digital transparency initiatives. A few weeks ago, we presented our research at the Cartagena Data Festival, and today we are happy to launch the first edition of our paper, which you can read below or on Scribd.

The outputs of this research include:

  • A searchable repository of more than 100 examples on the outputs, outcomes and impacts of open data and digital technology projects;
  • Three distinctive theories of change for open data and digital transparency initiatives from the Global South;
  • A methodological framework to help develop more robust indicators of social and political change for the ecosystem of open data initiatives, by applying and revising the Outcome Mapping approach of IDRC to the field…(You can read the study at: The Social Impact of Open Data by juliakeseru)

New surveys reveal dynamism, challenges of open data-driven businesses in developing countries


Alla Morrison at World Bank Open Data blog: “Was there a class of entrepreneurs emerging to take advantage of the economic possibilities offered by open data, were investors keen to back such companies, were governments tuned to and responsive to the demands of such companies, and what were some of the key financing challenges and opportunities in emerging markets? As we began our work on the concept of an Open Fund, we partnered with Ennovent (India), MDIF (East Asia and Latin America) and Digital Data Divide (Africa) to conduct short market surveys to answer these questions, with a focus on trying to understand whether a financing gap truly existed in these markets. The studies were fairly quick (4-6 weeks) and reached only a small number of companies (193 in India, 70 in Latin America, 63 in South East Asia, and 41 in Africa – and not everybody responded) but the findings were fairly consistent.

  • Open data is still a very nascent concept in emerging markets, and there’s only a small class of entrepreneurs/investors that is aware of the economic possibilities; there’s a lot of work to do in the ‘enabling environment’
    • In many regions the distinction between open data, big data, and private sector generated/scraped/collected data was blurry at best among entrepreneurs and investors (some of our findings consequently are better indicators of  data-driven rather than open data-driven businesses)
  • There’s a small but growing number of open data-driven companies in all the markets we surveyed and these companies target a wide range of consumers/users and are active in multiple sectors
    • A large percentage of identified companies operate in sectors with high social impact – health and wellness, environment, agriculture, transport. For instance, in India, after excluding business analytics companies, a third of data companies seeking financing are in healthcare and a fifth in food and agriculture, and some of them have the low-income population or the rural segment of India as an intended beneficiary segment. In Latin America, the number of companies in business services, research and analytics was closely followed by health, environment and agriculture. In Southeast Asia, business, consumer services, and transport came out in the lead.
    • We found the highest number of companies in Latin America and Asia with the following countries leading the way – Mexico, Chile, and Brazil, with Colombia and Argentina closely behind in Latin America; and India, Indonesia, Philippines, and Malaysia in Asia
  • An actionable pipeline of data-driven companies exists in Latin America and in Asia
    • We heard demand for different kinds of financing (equity, debt, working capital) but the majority of the need was for equity and quasi-equity in amounts ranging from $100,000 to $5 million USD, with averages of between $2 and $3 million USD depending on the region.
  • There’s a significant financing gap in all the markets
    • The investment sizes required, while they range up to several million dollars, are generally small. Analysis of more than 300 data companies in Latin America and Asia indicates a total estimated need for financing of more than $400 million
  • Venture capital firms generally don’t recognize data as a separate sector and club data-driven companies with their standard information and communication technology (ICT) investments
    • Interviews with founders suggest that moving beyond seed stage is particularly difficult for data-driven startups. While many companies are able to cobble together an initial seed round augmented by bootstrapping to get their idea off the ground, they face a great deal of difficulty when trying to raise a second, larger seed round or Series A investment.
    • From the perspective of startups, investors favor banal e-commerce (e.g., according to Tech in Asia, out of the $645 million in technology investments made public across the region in 2013, 92% were related to fashion and online retail) or consumer service startups and ignore open data-focused startups even if they have a strong business model and solid key performance indicators. The space is ripe for a long-term investor with a generous risk appetite and multiple bottom line goals.
  • Poor data quality was the number one issue these companies reported.
    • Companies reported significant waste and inefficiency in accessing/scraping/cleaning data.

The analysis below borrows heavily from the work done by the partners. We should of course mention that the findings are provisional and should not be considered authoritative (please see the section on methodology for more details)….(More).”

The International Handbook Of Public Administration And Governance


New book edited by Andrew Massey and Karen Johnston: “…Handbook explores key questions around the ways in which public administration and governance challenges can be addressed by governments in an increasingly globalized world. World-leading experts explore contemporary issues of government and governance, as well as the relationship between civil society and the political class. The insights offered will allow policy makers and officials to explore options for policy making in a new and informed way.

Adopting global perspectives of governance and public sector management, the Handbook includes scrutiny of current issues such as: public policy capacity, wicked policy problems, public sector reforms, the challenges of globalization and complexity management. Practitioners and scholars of public administration deliver a range of perspectives on the abiding wicked issues and challenges to delivering public services, and the way that delivery is structured. The Handbook uniquely provides international coverage of perspectives from Africa, Asia, North and South America, Europe and Australia.

Practitioners and scholars of public administration, public policy, public sector management and international relations will learn a great deal from this Handbook about the issues and structures of government and governance in an increasingly complex world. (Full table of contents)… (More).”

Open-Data Project Adds Transparency to African Elections


Jessica Weiss at the International Center for Journalists: “An innovative tool developed to help people register to vote in Kenya is proving to be a valuable asset to voters across the African continent.

GotToVote was created in 2012 by two software developers under the guidance of ICFJ’s Knight International Journalism Fellow Justin Arenstein for use during Kenya’s general elections. In just 24 hours, the developers took voter registration information in a government PDF and turned it into a simple website with usable data that helped people locate the nearest voting center where they could register for elections. Kenyan media drove a large audience to the site, which resulted in a major boost in voter registrations.

Since then, GotToVote has helped people register to vote in Malawi and Zimbabwe. Now, it is being adapted for use in national elections in Ghana and Uganda in 2016.

Ugandan civic groups led by The African Freedom of Information Centre are planning to use it to help people register, to verify registrations and for SMS registration drives. They are also proposing new features—including digital applications to help citizens post issues of concern and compare political positions between parties and candidates so voters better understand the choices they are being offered.

In Ghana, GotToVote is helping citizens find their nearest registration center to make sure they are eligible to vote in that country’s 2016 national elections. The tool, which is optimized for mobile devices, makes voter information easily accessible to the public. It explains who is eligible to register for the 2016 general elections and gives a simple overview of the voter registration process. It also tells users what documentation to take with them to register….
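The heart of such a tool is a simple lookup over structured registration data. A hypothetical sketch, with invented districts, wards, and centre names rather than GotToVote’s actual data or code:

```python
# Hypothetical registration-centre lookup, loosely in the spirit of
# GotToVote. All districts, wards, and centre names are invented.
CENTRES = [
    {"district": "Accra", "ward": "Osu", "centre": "Osu Community Hall"},
    {"district": "Accra", "ward": "La", "centre": "La Primary School"},
    {"district": "Kumasi", "ward": "Bantama", "centre": "Bantama Market Office"},
]

def find_centres(district, ward=None):
    """Return registration centres in a district, optionally narrowed to a ward."""
    return [
        c["centre"]
        for c in CENTRES
        if c["district"].lower() == district.lower()
        and (ward is None or c["ward"].lower() == ward.lower())
    ]

print(find_centres("accra", "osu"))
# ['Osu Community Hall']
```

As the Kenya story above shows, the hard part in practice was extracting usable data from a government PDF in the first place; once the data is structured, the lookup itself is trivial, which is why the original site could be built in 24 hours.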

Last year, Malawi’s national government used GotToVote to check whether voters were correctly registered. As a result, more than 20,000 were found to be incorrectly registered, because they were not qualified voters or were registered in the wrong constituency. In 2013, thousands used GotToVote via their mobile and tablet devices to find their polling places in Zimbabwe.

The successful experiment provides a number of lessons about the power and feasibility of open data projects, showing that they don’t require large teams, big budgets or a lot of time to build…(More)

New Desktop Application Has Potential to Increase Asteroid Detection, Now Available to Public


NASA Press Release: “A software application based on an algorithm created by a NASA challenge has the potential to increase the number of new asteroid discoveries by amateur astronomers.

Analysis of images taken of our solar system’s main belt asteroids between Mars and Jupiter using the algorithm showed a 15 percent increase in positive identification of new asteroids.

During a panel Sunday at the South by Southwest Festival in Austin, Texas, NASA representatives discussed how citizen scientists have made a difference in asteroid hunting. They also announced the release of a desktop software application developed by NASA in partnership with Planetary Resources, Inc., of Redmond, Washington. The application is based on an Asteroid Data Hunter-derived algorithm that analyzes images for potential asteroids. It’s a tool that can be used by amateur astronomers and citizen scientists.

The Asteroid Data Hunter challenge was part of NASA’s Asteroid Grand Challenge. The data hunter contest series, which was conducted in partnership with Planetary Resources under a Space Act Agreement, was announced at the 2014 South by Southwest Festival and concluded in December. The series offered a total of $55,000 in awards for participants to develop significantly improved algorithms to identify asteroids in images captured by ground-based telescopes. The winning solutions from each piece of the contest were combined into a single application built on the best algorithm, one that increased detection sensitivity, minimized false positives, ignored imperfections in the data, and ran effectively on all computer systems.

“The Asteroid Grand Challenge is seeking non-traditional partnerships to bring the citizen science and space enthusiast community into NASA’s work,” said Jason Kessler, program executive for NASA’s Asteroid Grand Challenge. “The Asteroid Data Hunter challenge has been successful beyond our hopes, creating something that makes a tangible difference to asteroid hunting astronomers and highlights the possibility for more people to play a role in protecting our planet.”…

The new asteroid hunting application can be downloaded at:

http://topcoder.com/asteroids

For information about NASA’s Asteroid Grand Challenge, visit:

http://www.nasa.gov/asteroidinitiative

Why governments need guinea pigs for policies


Jonathan Breckon in the Guardian: “People are unlikely to react positively to the idea of using citizens as guinea pigs; many will be downright disgusted. But there are times when government must experiment on us in the search for knowledge and better policy….

Though history calls into question the ethics of experimentation, unless we try things out, we will never learn. The National Audit Office says that £66bn worth of government projects have no plans to evaluate their impact. It is unethical to roll out policies in this arbitrary way. We have to experiment on a small scale to gain a better understanding of how things work before rolling out policies across the UK. This is just as relevant to social policy as it is to science and medicine, as set out in a new report by the Alliance for Useful Evidence.

Whether it’s the best ways to teach our kids to read, designing programmes to get unemployed people back to work, or encouraging organ donation – if the old ways don’t work, we have to test new ones. And that testing can’t always be done by a committee in Whitehall or in a university lab.

Experimentation can’t happen in isolation. What works in Lewisham or Londonderry might not work in Lincoln – or indeed across the UK. For instance, there is a huge amount of debate around the current practice of teaching children to read and spell using phonics, which was based on a small-scale study in Clackmannanshire, as well as evidence from the US. A government-commissioned review of the evidence for phonics led Professor Carole Torgerson, then at York University, to warn against making national policy off the back of just one small Scottish trial.

One way round this problem is to do larger experiments. The increasing use of the internet in public services allows for more and faster experimentation, on a larger scale and at lower cost – for example, the randomised controlled trial on voter mobilisation that reached 61 million users in the 2010 US midterm elections. However, the use of the internet doesn’t get us off the ethical hook. Facebook had to apologise after a global backlash to secret psychological tests on 689,000 of its users.

Contentious experiments should be approved by ethics committees – normal practice for trials in hospitals and universities.

We are also not interested in freewheeling trial-and-error; robust and appropriate research techniques to learn from experiments are vital. It’s best to see experimentation as a continuum, ranging from the messiness of attempts to try something new to experiments using the best available social science, such as randomised controlled trials.

Experimental government means avoiding an approach where everything is fixed from the outset. What we need is “a spirit of experimentation, unburdened by promises of success”, as recommended by the late professor Roger Jowell, author of the 2003 Cabinet Office report, Trying it out [pdf]….(More)”

Big Data for Social Good


Introduction to a Special Issue of the Journal “Big Data” by Charlie Catlett and Rayid Ghani: “…organizations focused on social good are realizing the potential as well but face several challenges as they seek to become more data-driven. The biggest challenge they face is a paucity of examples and case studies on how data can be used for social good. This special issue of Big Data is targeted at tackling that challenge and focuses on highlighting some exciting and impactful examples of work that uses data for social good. The special issue is just one example of the recent surge in such efforts by the data science community. …

This special issue solicited case studies and problem statements that would either highlight (1) the use of data to solve a social problem or (2) social challenges that need data-driven solutions. From roughly 20 submissions, we selected 5 articles that exemplify this type of work. These cover five broad application areas: international development, healthcare, democracy and government, human rights, and crime prevention.

“Understanding Democracy and Development Traps Using a Data-Driven Approach” (Ranganathan et al.) details a data-driven model linking democracy, cultural values, and socioeconomic indicators, identifying two types of “traps” that hinder the development of democracy. The authors use historical data to detect causal factors and make predictions about the time expected for a given country to overcome these traps.

“Targeting Villages for Rural Development Using Satellite Image Analysis” (Varshney et al.) discusses two case studies that use data and machine learning techniques for international economic development—solar-powered microgrids in rural India and targeting financial aid to villages in sub-Saharan Africa. In the process, the authors stress the importance of understanding the characteristics and provenance of the data and the criticality of incorporating local “on the ground” expertise.

In “Human Rights Event Detection from Heterogeneous Social Media Graphs,” Chen and Neil describe efficient and scalable techniques to use social media in order to detect emerging patterns in human rights events. They test their approach on recent events in Mexico and show that they can accurately detect relevant human rights–related tweets prior to international news sources, and in some cases, prior to local news reports, which could potentially lead to more timely, targeted, and effective advocacy by relevant human rights groups.

“Finding Patterns with a Rotten Core: Data Mining for Crime Series with Core Sets” (Wang et al.) describes a case study with the Cambridge Police Department, using a subspace clustering method to analyze the department’s full housebreak database, which contains detailed information from thousands of crimes from over a decade. They find that the method allows human crime analysts to handle vast amounts of data and provides new insights into true patterns of crime committed in Cambridge…..(More)