What Can Satellite Imagery Tell Us About Obesity in Cities?


Emily Matchar at Smithsonian: “About 40 percent of American adults are obese, defined as having a body mass index (BMI) over 30. But obesity is not evenly distributed around the country. Some cities and states have far more obese residents than others. Why? Genetics, stress, income levels and access to healthy foods all play a role. But increasingly researchers are looking at the built environment—our cities—to understand why people are fatter in some places than in others.

New research from the University of Washington attempts to take this approach one step further by using satellite data to examine cityscapes. By pairing the satellite images with obesity data, the researchers hope to uncover which urban features might influence a city’s obesity rate.

The researchers used a deep learning network to analyze about 150,000 high-resolution satellite images of four cities: Los Angeles, Memphis, San Antonio and Seattle. The cities were selected for being from states with both high obesity rates (Texas and Tennessee) and low obesity rates (California and Washington). The network extracted features of the built environment: crosswalks, parks, gyms, bus stops, fast food restaurants—anything that might be relevant to health.

“If there’s no sidewalk you’re less likely to go out walking,” says Elaine Nsoesie, a professor of global health at the University of Washington who led the research.

The team’s algorithm could then determine which features were more or less common in areas with higher and lower obesity rates. Some findings were predictable: more parks, gyms and green spaces were correlated with lower obesity rates. Others were surprising: more pet stores equaled thinner residents (“a high density of pet stores could indicate high pet ownership, which could influence how often people go to parks and take walks around the neighborhood,” the team hypothesized).

A paper on the results was recently published in the journal JAMA Network Open….(More)”.
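
The article does not include the team’s code, but a minimal sketch of the two-stage approach it describes (extract built-environment features from satellite tiles with a pretrained convolutional network, then relate area-level features to obesity prevalence with a regularised linear model) might look like the following. The file names, columns, and the choice of ResNet-50 and an elastic net are illustrative assumptions, not the study’s actual setup.

```python
# A minimal sketch, not the study's actual pipeline: use a pretrained CNN as a fixed
# feature extractor for satellite tiles, then fit a regularised linear model relating
# area-level image features to obesity prevalence. "tracts.csv" and its columns are
# hypothetical placeholders.
import numpy as np
import pandas as pd
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.linear_model import ElasticNetCV

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()        # keep the 2048-d features, drop the classifier
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def tile_features(path: str) -> np.ndarray:
    """Return a 2048-d feature vector for one satellite image tile."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return backbone(img).squeeze(0).numpy()

# One row per area (e.g. census tract): a representative tile and an obesity rate.
tracts = pd.read_csv("tracts.csv")       # columns: tract_id, tile_path, obesity_rate
X = np.stack([tile_features(p) for p in tracts["tile_path"]])
y = tracts["obesity_rate"].to_numpy()

# Which image-derived features move with obesity prevalence?
model = ElasticNetCV(cv=5).fit(X, y)
print("in-sample R^2:", model.score(X, y))
```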

Emerging Labour Market Data Sources towards Digital Technical and Vocational Education and Training (TVET)


Paper by Nikos Askitas, Rafik Mahjoubi, Pedro S. Martins, Koffi Zougbede for Paris21/OECD: “Experience from both technology and policy making shows that solutions for labour market improvements are simply choices of new, more tolerable problems. All data solutions supporting digital Technical and Vocational Education and Training (TVET) will have to incorporate a roadmap of changes rather than an unrealistic super-solution. The ideal situation is a world in which labour market participants engage in intelligent strategic behavior in an informed, fair and sophisticated manner.

Labour market data captures transactions within labour market processes. In order to successfully capture such data, we need to understand the specifics of these market processes. Designing an ecosystem of labour market matching facilitators and rules of engagement for contributing to a lean and streamlined Labour Market Information System (LMIS) is the best way to create Big Data with context relevance. This is in contrast with pre-existing Big Data captured by global job boards or social media, for which relevance is limited by the technology access gap and its variations across the developing world.

Network effects occur in technology and job facilitation, as seen in the developed world. Managing and instigating the right network effects might be crucial to avoid fragmented stagnation and inefficiency. This is key to avoid throwing money behind wrong choices that do not gain traction.

A mixed mode approach is possibly the ideal approach for developing countries. Mixing offline and online elements correctly will be crucial in bridging the technology access gap and reaping the benefits of digitisation at the same time.

Properly incentivising the various entities is critical for progress, especially the private sector: it is significantly more agile and inventive, has “skin in the game” and a long-term commitment to conditions in the field, has intimate knowledge of how to close the technology gap, and brings a better understanding of the particular ambient context it operates in. To summarise: Big Data starts small.

Managing expectations and creating incentives for the various stakeholders will be crucial in establishing digitally supported TVET. Developing the right business models will be crucial in the short term and beyond, and it will be the result of creating the right mix of technological and policy expertise with good knowledge of the situation on the ground….(More)”.

Crowdsourced social media data for disaster management: Lessons from the PetaJakarta.org project


R.I. Ogie, R.J. Clarke, H. Forehead and P. Perez in Computers, Environment and Urban Systems: “The application of crowdsourced social media data in flood mapping and other disaster management initiatives is a burgeoning field of research, but not one that is without challenges. In identifying these challenges and in making appropriate recommendations for future direction, it is vital that we learn from the past by taking a constructively critical appraisal of highly-praised projects in this field, which through real-world implementations have pioneered the use of crowdsourced geospatial data in modern disaster management. These real-world applications represent natural experiments, each with myriads of lessons that cannot be easily gained from computer-confined simulations.

This paper reports on lessons learnt from a 3-year implementation of a highly-praised project, the PetaJakarta.org project. The lessons presented derive from the key success factors and the challenges associated with the PetaJakarta.org project. To contribute to addressing some of the identified challenges, desirable characteristics of future social media-based disaster mapping systems are discussed. It is envisaged that the lessons and insights shared in this study will prove invaluable within the broader context of designing socio-technical systems for crowdsourcing and harnessing disaster-related information….(More)”.

The Stoplight Battling to End Poverty


Nick Dall at OZY: “Over midafternoon coffees and Fantas, Robyn-Lee Abrahams and Joyce Paulse — employees at my local supermarket in Cape Town, South Africa — tell me how their lives have changed in the past 18 months. “I never dreamed my daughter would go to college,” says Paulse. “But yesterday we went online together and started filling in the forms.”

Abrahams notes how she used to live hand to mouth. “But now I’ve got a savings account, which I haven’t ever touched.” The sacrifice? “I eat less chocolate now.”

Paulse and Abrahams are just two of thousands of beneficiaries of the Poverty Stoplight, a self-evaluation tool that’s now redefining poverty in countries as diverse as Argentina and the U.K.; Mexico and Tanzania; Chile and Papua New Guinea. By getting families to rank their own economic condition red, yellow or green based upon 50 indicators, the Poverty Stoplight gives families the agency to pull themselves out of poverty and offers organizations insight into whether their programs are working.

Social entrepreneur Martín Burt, who founded Fundación Paraguaya 33 years ago to promote entrepreneurship and economic empowerment in Paraguay, developed the first, paper-based prototype of the Poverty Stoplight in 2010 to help the organization’s microfinance clients escape the poverty cycle….Because poverty is multidimensional, “you can have a family with a proper toilet but no savings,” points out Burt. The questionnaires span six different aspects of people’s lives, including softer indicators such as community involvement, self-confidence and family violence. The survey, a series of 50 multiple-choice questions with visual cues, is aimed at households, not individuals, because “you cannot get a 10-year-old girl out of poverty in isolation,” says Burt. Confidentiality is another critical component….(More)”.
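
The mechanics are simple enough to sketch in a few lines. The snippet below is an illustration of how a household’s Stoplight self-assessment could be stored and summarised, not Fundación Paraguaya’s actual survey software; the dimension labels are placeholders.

```python
# A minimal sketch, not Fundación Paraguaya's software: each of the 50 indicators is
# scored red/yellow/green and grouped under one of six dimensions, per household.
# Dimension names here are illustrative placeholders.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    dimension: str   # e.g. "Income", "Housing", "Participation" (placeholders)
    colour: str      # "red", "yellow" or "green"

def summarise(household: list[Indicator]) -> dict[str, Counter]:
    """Count colours per dimension so a family (or a programme) can see where it stands."""
    summary: dict[str, Counter] = {}
    for ind in household:
        summary.setdefault(ind.dimension, Counter())[ind.colour] += 1
    return summary

# Three of the 50 indicators, scored by a hypothetical household.
survey = [
    Indicator("Savings", "Income", "green"),
    Indicator("Proper toilet", "Housing", "green"),
    Indicator("Community involvement", "Participation", "yellow"),
]
print(summarise(survey))
```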

Walmart wants to track lettuce on the blockchain


Matthew Beedham at TNW: “Walmart is asking all of its leafy greens suppliers to get on blockchain by this time next year.

With instances of E. coli on the rise, particularly in romaine lettuce, Walmart is insisting that its suppliers use blockchain to track and trace products from source to the customer.

Walmart notes that, while health officials at the Centers for Disease Control have already warned Americans to avoid eating lettuce grown in Yuma, Arizona, it’s near impossible for consumers to know where their greens are coming from.

On one hand this could be a great system for reducing waste. Earlier this year, green grocers had to throw away produce thought to be infected with E. coli.

The announcement states, “[h]ealth officials at the Centers for Disease Control told Americans to avoid eating lettuce that was grown in Yuma, Arizona.”

However, it’s near impossible for consumers to know where their lettuce was grown.

It would seem that most producers and suppliers still rely on paper-based ledgers. As a result, tracking down vital information about where a product came from can be very time-consuming.

By then, it might be too late: many customers may already have purchased and consumed infected produce.

If Walmart’s plans come to fruition, it would allow customers to view the entire supply chain of a product at the point of purchase… (More)”
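
The announcement doesn’t describe the underlying system, but the core traceability idea can be sketched briefly: each handoff in a product’s journey is appended as a record whose hash commits to the previous one, so the history is tamper-evident and can be walked back from the shelf to the farm. The sketch below is a toy illustration with made-up field names, not Walmart’s actual implementation.

```python
# A minimal sketch of blockchain-style traceability: each supply-chain event is linked
# to the hash of the previous event, so any later edit to the history is detectable.
# All field names and values below are hypothetical.
import hashlib
import json
import time

def add_event(chain: list[dict], event: dict) -> None:
    """Append a supply-chain event linked to the hash of the previous one."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "prev_hash": prev_hash, "timestamp": time.time()}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; any edited record breaks the chain."""
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {k: v for k, v in block.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev_hash"] != expected_prev or block["hash"] != recomputed:
            return False
    return True

lettuce = []
add_event(lettuce, {"step": "harvested", "farm": "Yuma Farm #12", "lot": "A-42"})
add_event(lettuce, {"step": "processed", "facility": "Packing Plant 3", "lot": "A-42"})
add_event(lettuce, {"step": "shipped", "store": "Store 1158", "lot": "A-42"})
print(verify(lettuce))   # True; tampering with any record makes this False
```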

Ethics & Algorithms Toolkit


Toolkit: “Government leaders and staff who leverage algorithms are facing increasing pressure from the public, the media, and academic institutions to be more transparent and accountable about their use. Every day, stories come out describing the unintended or undesirable consequences of algorithms. Governments have not had the tools they need to understand and manage this new class of risk.

GovEx, the City and County of San Francisco, Harvard DataSmart, and Data Community DC have collaborated on a practical toolkit that helps cities understand the implications of using an algorithm, clearly articulate the potential risks, and identify ways to mitigate them….We developed this because:

  • We saw a gap. There are many calls to arms and lots of policy papers, one of which was a DataSF research paper, but nothing practitioner-facing with a repeatable, manageable process.
  • We wanted an approach which governments are already familiar with: risk management. By identifying and quantifying levels of risk, we can recommend specific mitigations….(More)”.
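
As a toy illustration of that risk-management framing (an assumed sketch, not the toolkit’s actual worksheets), an algorithm’s risks could be scored by likelihood and impact and paired with mitigations:

```python
# A minimal sketch of the risk-matrix idea: score each risk on 1-3 scales for
# likelihood and impact, and surface anything above a threshold together with
# its mitigation. The example risks below are hypothetical.
def assess(risks: list[dict]) -> list[dict]:
    """Quantify each risk (likelihood x impact) and keep the medium/high ones."""
    flagged = []
    for r in risks:
        score = r["likelihood"] * r["impact"]                 # 1..9
        level = "high" if score >= 6 else "medium" if score >= 3 else "low"
        if level != "low":
            flagged.append({**r, "score": score, "level": level})
    return sorted(flagged, key=lambda r: r["score"], reverse=True)

risks = [
    {"name": "biased training data", "likelihood": 3, "impact": 3,
     "mitigation": "audit outcomes by demographic group"},
    {"name": "stale input data", "likelihood": 2, "impact": 2,
     "mitigation": "schedule regular data refreshes"},
    {"name": "vendor lock-in", "likelihood": 1, "impact": 2,
     "mitigation": "require exportable models and data"},
]
for r in assess(risks):
    print(f'{r["level"].upper()} ({r["score"]}): {r["name"]} -> {r["mitigation"]}')
```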

Making Wage Data Work: Creating a Federal Resource for Evidence and Transparency


Christina Pena at the National Skills Coalition: “Administrative data on employment and earnings, commonly referred to as wage data or wage records, can be used to assess the labor market outcomes of workforce, education, and other programs, providing policymakers, administrators, researchers, and the public with valuable information. However, there is no single readily accessible federal source of wage data which covers all workers. Noting the importance of employment and earnings data to decision makers, the Commission on Evidence-Based Policymaking called for the creation of a single federal source of wage data for statistical purposes and evaluation. They recommended three options for further exploration: expanding access to systems that already exist at the U.S. Census Bureau or the U.S. Department of Health and Human Services (HHS), or creating a new database at the U.S. Department of Labor (DOL).

This paper reviews current coverage and allowable uses, as well as federal and state actions required to make each option viable as a single federal source of wage data that can be accessed by government agencies and authorized researchers. Congress and the President, in conjunction with relevant federal and state agencies, should develop one or more of those options to improve wage information for multiple purposes. Although not assessed in the following review, financial as well as privacy and security considerations would influence the viability of each scenario. Moreover, if a system like the Commission-recommended National Secure Data Service for sharing data between agencies comes to fruition, then a wage system might require additional changes to work with the new service….(More)”

Causal mechanisms and institutionalisation of open government data in Kenya


Paper by Paul W. Mungai: “Open data—including open government data (OGD)—has become a topic of prominence during the last decade. However, most governments have not realised the desired value streams or outcomes from OGD. The Kenya Open Data Initiative (KODI), a Government of Kenya initiative, is no exception with some moments of success but also sustainability struggles. Therefore, the focus for this paper is to understand the causal mechanisms that either enable or constrain institutionalisation of OGD initiatives. Critical realism is ideally suited as a paradigm to identify such mechanisms, but guides to its operationalisation are few. This study uses the operational approach of Bygstad, Munkvold & Volkoff’s six‐step framework, a hybrid approach that melds concepts from existing critical realism models with the idea of affordances. The findings suggest that data demand and supply mechanisms are critical in institutionalising KODI and that, underpinning basic data‐related affordances, are mechanisms engaging with institutional capacity, formal policy, and political support. It is the absence of such elements in the Kenya case which explains why it has experienced significant delays…(More)”.

The role of corporations in addressing AI’s ethical dilemmas


Darrell M. West at Brookings: “In this paper, I examine five AI ethical dilemmas: weapons and military-related applications, law and border enforcement, government surveillance, issues of racial bias, and social credit systems. I discuss how technology companies are handling these issues and the importance of having principles and processes for addressing these concerns. I close by noting ways to strengthen ethics in AI-related corporate decisions.

Briefly, I argue it is important for firms to undertake several steps in order to ensure that AI ethics are taken seriously:

  1. Hire ethicists who work with corporate decisionmakers and software developers
  2. Develop a code of AI ethics that lays out how various issues will be handled
  3. Have an AI review board that regularly addresses corporate ethical questions
  4. Develop AI audit trails that show how various coding decisions have been made
  5. Implement AI training programs so staff operationalizes ethical considerations in their daily work, and
  6. Provide a means for remediation when AI solutions inflict harm or damages on people or organizations….(More)”.
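
Most of these steps are organisational, but the fourth lends itself to a concrete sketch. The snippet below (an assumed illustration, not anything from the Brookings paper) shows the kind of append-only record an AI audit trail might keep so that coding decisions can be reconstructed later.

```python
# A minimal sketch of an audit trail for model-development decisions: an append-only
# CSV recording who changed what and why. File name, fields, and the example entry
# are hypothetical.
import csv
import datetime
import os

AUDIT_LOG = "ai_audit_trail.csv"
FIELDS = ["timestamp", "author", "component", "change", "rationale"]

def record_decision(author: str, component: str, change: str, rationale: str) -> None:
    """Append one decision to the audit-trail CSV, writing a header if the file is new."""
    new_file = not os.path.exists(AUDIT_LOG)
    with open(AUDIT_LOG, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "author": author,
            "component": component,
            "change": change,
            "rationale": rationale,
        })

record_decision("a.analyst", "credit-scoring model",
                "removed ZIP code as an input feature",
                "proxy for protected attributes; flagged by the ethics review board")
```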

Illuminating GDP


Money and Banking: “GDP figures are ‘man-made’ and therefore unreliable,” reported remarks of Li Keqiang (then Communist Party secretary of the northeastern Chinese province of Liaoning), March 12, 2007.

Satellites are great. It is hard to imagine living without them. GPS navigation is just the tip of the iceberg. Taking advantage of the immense amounts of information collected over decades, scientists have been using satellite imagery to study a broad array of questions, ranging from agricultural land use to the impact of climate change to the geographic constraints on cities (see here for a recent survey).

One of the most well-known economic applications of satellite imagery is to use night-time illumination to enhance the accuracy of various reported measures of economic activity. For example, national statisticians in countries with poor information collection systems can employ information from satellites to improve the quality of their nationwide economic data (see here). Even where governments have relatively high-quality statistics at a national level, it remains difficult and costly to determine local or regional levels of activity. For example, while production may occur in one jurisdiction, the income generated may be reported in another. At a sufficiently high resolution, satellite tracking of night-time light emissions can help address this question (see here).

But satellite imagery is not just an additional source of information on economic activity; it is also a neutral one that is less prone to manipulation than standard accounting data. This makes it possible to use information on night-time light to monitor the accuracy of official statistics. And, as we suggest later, the willingness of observers to apply a “satellite correction” could nudge countries to improve their own data reporting systems in line with recognized international standards.

As Luis Martínez inquires in his recent paper, should we trust autocrats’ estimates of GDP? Even in relatively democratic countries, there are prominent examples of statistical manipulation (recall the cases of Greek sovereign debt in 2009 and Argentine inflation in 2014). In the absence of democratic checks on the authorities, Martínez finds even greater tendencies to distort the numbers….(More)”.
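
A stylised version of the “satellite correction” suggested above (a sketch in the spirit of this literature, not Martínez’s actual specification) would regress reported GDP growth on growth in night-time light emissions and flag country-years whose reported growth far outpaces what the lights predict. The input file and its columns are hypothetical.

```python
# A minimal sketch, assuming a panel of country-year observations with reported
# GDP growth and growth in night-light intensity (both hypothetical columns).
import pandas as pd
import statsmodels.api as sm

panel = pd.read_csv("lights_gdp.csv")   # columns: country, year, gdp_growth, lights_growth

X = sm.add_constant(panel["lights_growth"])
fit = sm.OLS(panel["gdp_growth"], X).fit()

panel["predicted_growth"] = fit.predict(X)
panel["excess_growth"] = panel["gdp_growth"] - panel["predicted_growth"]

# Country-years where reported growth exceeds the lights-based prediction by > 2 points.
flagged = panel[panel["excess_growth"] > 2].sort_values("excess_growth", ascending=False)
print(flagged[["country", "year", "excess_growth"]])
```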