Blockchain systems are tracking food safety and origins


Nir Kshetri at The Conversation: “When a Chinese consumer buys a package labeled “Australian beef,” there’s only a 50-50 chance the meat inside is, in fact, Australian beef. It could just as easily contain rat, dog, horse or camel meat – or a mixture of them all. It’s gross and dangerous, but also costly.

Fraud in the global food industry is a multi-billion-dollar problem that has lingered for years, duping consumers and even making them ill. Food manufacturers around the world are concerned – as many as 39 percent of them are worried that their products could be easily counterfeited, and 40 percent say food fraud is hard to detect.

In researching blockchain for more than three years, I have become convinced that this technology’s potential to prevent fraud and strengthen security could fight agricultural fraud and improve food safety. Many companies agree, and are already running various tests, including tracking wine from grape to bottle and even following individual coffee beans through international trade.

Tracing food items

An early trial of a blockchain system to track food from farm to consumer was in 2016, when Walmart collected information about pork being raised in China, where consumers are rightly skeptical about sellers’ claims of what their food is and where it’s from. Employees at a pork farm scanned images of farm inspection reports and livestock health certificates, storing them in a secure online database where the records could not be deleted or modified – only added to.

As the animals moved from farm to slaughter to processing, packaging and then to stores, the drivers of the freight trucks played a key role. At each step, they would collect documents detailing the shipment, storage temperature and other inspections and safety reports, and official stamps as authorities reviewed them – just as they did normally. In Walmart’s test, however, the drivers would photograph those documents and upload them to the blockchain-based database. The company controlled the computers running the database, but government agencies’ systems could also be involved, to further ensure data integrity.

As the pork was packaged for sale, a sticker was put on each container, displaying a smartphone-readable code that would link to that meat’s record on the blockchain. Consumers could scan the code right in the store and assure themselves that they were buying exactly what they thought they were. More recent advances in the technology of the stickers themselves have made them more secure and counterfeit-resistant.
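
The append-only design at the heart of these pilots can be illustrated with a simple hash chain: each new entry embeds the hash of the previous one, so any retroactive edit breaks verification. Below is a minimal Python sketch, with hypothetical record fields rather than Walmart’s actual schema:

```python
import hashlib
import json
import time

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 hash of a record's canonical JSON form."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class ShipmentLedger:
    """Append-only ledger: each entry stores the previous entry's hash."""

    def __init__(self):
        self.entries = []

    def append(self, document: dict) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"timestamp": time.time(), "document": document, "prev_hash": prev}
        entry["hash"] = record_hash(entry)  # hash covers all fields added so far
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; editing any earlier entry fails this check."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev or record_hash(body) != e["hash"]:
                return False
            prev = e["hash"]
        return True

# Each actor along the supply chain appends a document; nothing is overwritten.
ledger = ShipmentLedger()
ledger.append({"stage": "farm", "doc": "livestock_health_certificate.jpg"})
ledger.append({"stage": "transport", "doc": "temperature_log.jpg", "temp_c": 3.5})
print(ledger.verify())  # True; alter any earlier field and this returns False
```

In a real deployment the chain is replicated across independent parties (retailer, suppliers, regulators), which is what stops any single actor from quietly rewriting records.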

Walmart did similar tests on mangoes imported to the U.S. from Latin America. The company found that it took only 2.2 seconds for consumers to find out an individual fruit’s weight, variety, growing location, time it was harvested, date it passed through U.S. customs, when and where it was sliced, which cold-storage facility the sliced mango was held in and for how long it waited before being delivered to a store….(More)”.

Better “nowcasting” can reveal what weather is about to hit within 500 meters


MIT Technology Review: “Weather forecasting is impressively accurate given how changeable and chaotic Earth’s climate can be. It’s not unusual to get 10-day forecasts with a reasonable level of accuracy.

But there is still much to be done.  One challenge for meteorologists is to improve their “nowcasting,” the ability to forecast weather in the next six hours or so at a spatial resolution of a square kilometer or less.

In areas where the weather can change rapidly, that is difficult. And there is much at stake. Agricultural activity is increasingly dependent on nowcasting, and the safety of many sporting events depends on it too. Then there is the risk that sudden rainfall could lead to flash flooding, a growing problem in many areas because of climate change and urbanization. That has implications for infrastructure, such as sewage management, and for safety, since this kind of flooding can kill.

So meteorologists would dearly love to have a better way to make their nowcasts.

Enter Blandine Bianchi from EPFL in Lausanne, Switzerland, and a few colleagues, who have developed a method for combining meteorological data from several sources to produce nowcasts with improved accuracy. Their work has the potential to change the utility of this kind of forecasting for everyone from farmers and gardeners to emergency services and sewage engineers.

Current forecasting is limited by the data and the scale on which it is gathered and processed. For example, satellite data has a spatial resolution of 50 to 100 km and allows the tracking and forecasting of large cloud cells over a time scale of six to nine hours. By contrast, radar data is updated every five minutes, with a spatial resolution of about a kilometer, and leads to predictions on the time scale of one to three hours. Another source of data is the microwave links used by telecommunications companies, which are degraded by rainfall….(More)”
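
Bianchi and colleagues’ method is more sophisticated, but the flavour of multi-source fusion can be conveyed with a toy inverse-variance weighting, in which sharper sources such as radar count for more than coarse satellite estimates. All numbers below are invented for illustration:

```python
import numpy as np

def fuse_rain_estimates(estimates, variances):
    """Inverse-variance weighted fusion of co-located rain-rate estimates (mm/h).

    Sources with lower error variance (e.g. 1 km radar) dominate the result;
    coarse sources (e.g. 50-100 km satellite) contribute less.
    """
    w = 1.0 / np.asarray(variances, dtype=float)
    return float(np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w))

# Toy example for a single grid cell: satellite is coarse (high variance),
# radar is sharp, and a microwave-link attenuation estimate sits in between.
satellite, radar, mw_link = 4.0, 6.5, 5.8          # rain rates in mm/h
fused = fuse_rain_estimates([satellite, radar, mw_link], [4.0, 0.5, 1.0])
print(f"fused nowcast: {fused:.2f} mm/h")          # ~6.1, pulled towards radar
```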

Governments fail to capitalise on swaths of open data


Valentina Romei in the Financial Times: “…Behind the push for open data is a desire to make governments more transparent, accountable and efficient — but also to allow businesses to create products and services that spark economic development. The global annual opportunity cost of failing to do this effectively is about $5tn, according to one estimate from McKinsey, the consultancy.

The UK is not the only country falling short, says the Open Data Barometer, which monitors the status of government data across the world. Among the 30 leading governments — those that have championed the open data movement and have made progress over five years — “less than a quarter of the data with the biggest potential for social and economic impact” is truly open. This goal of transparency, it seems, has not proved sufficient for “creating value” — the movement’s latest focus. In 2015, nearly a decade after advocates first discussed the principles of open government data, 62 countries adopted the six Open Data Charter principles — which called for data to be open by default, usable and comparable….

The use of open data has already borne fruit for some countries. In 2015, Japan’s ministry of land, infrastructure and transport set up an open data site aimed at disabled and elderly people. The 7,000 data points published are downloadable and the service can be used to generate a map that shows which passenger terminals on train, bus and ferry networks provide barrier-free access.

In the US, The Climate Corporation, a digital agriculture company, combined 30 years of weather data and 60 years of crop yield data to help farmers increase their productivity. And in the UK, subscription service Land Insight merges different sources of land data to help individuals and developers compare property information, forecast selling prices, contact land owners and track planning applications…

Open Data 500, an international network of organisations that studies the use and impact of open data, reveals that private companies in South Korea are using government agency data, with technology, advertising and business services among the biggest users. It shows, for example, that Archidraw, a four-year-old Seoul-based company that provides 3D visualisation tools for interior design and property remodelling, has used mapping data from the Ministry of Land, Infrastructure and Transport…(More)”.

Positive deviance, big data, and development: A systematic literature review


Paper by Basma Albanna and Richard Heeks: “Positive deviance is a growing approach in international development that identifies those within a population who are outperforming their peers in some way, e.g., children in low‐income families who are well nourished when those around them are not. Analysing and then disseminating the behaviours and other factors underpinning positive deviance are demonstrably effective in delivering development results.

However, positive deviance faces a number of challenges that are restricting its diffusion. In this paper, using a systematic literature review, we analyse the current state of positive deviance and the potential for big data to address the challenges facing positive deviance. From this, we evaluate the promise of “big data‐based positive deviance”: This would analyse typical sources of big data in developing countries—mobile phone records, social media, remote sensing data, etc.—to identify both positive deviants and the factors underpinning their superior performance.

While big data cannot solve all the challenges facing positive deviance as a development tool, they could reduce time, cost, and effort; identify positive deviants in new or better ways; and enable positive deviance to break out of its current preoccupation with public health into domains such as agriculture, education, and urban planning. In turn, positive deviance could provide a new and systematic basis for extracting real‐world development impacts from big data…(More)”.
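
Computationally, the first step, finding the deviants, reduces to flagging units whose outcome sits far above that of comparable peers. A minimal sketch with hypothetical data and column names:

```python
import pandas as pd

def find_positive_deviants(df, outcome, peer_cols, z_threshold=2.0):
    """Flag rows whose outcome is well above their peer-group mean.

    df: one row per unit (household, farm, district, ...).
    outcome: column holding the performance measure (e.g. crop yield).
    peer_cols: columns defining comparable peers (e.g. region, rainfall band).
    """
    grouped = df.groupby(peer_cols)[outcome]
    z = (df[outcome] - grouped.transform("mean")) / grouped.transform("std")
    return df[z > z_threshold]

# Hypothetical example: per-farm yields estimated from remote sensing.
farms = pd.DataFrame({
    "region":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "yield_t_ha": [1.1, 1.2, 1.0, 2.9, 2.0, 2.1, 1.9, 2.2],
})
# Flags the fourth farm: its yield is far above the rest of region A.
print(find_positive_deviants(farms, "yield_t_ha", ["region"], z_threshold=1.4))
```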

Study: Crowdsourced Hospital Ratings May Not Be Fair


Samantha Horton at WFYI: “Though many websites offer non-scientific ratings on a number of services, two Indiana University scientists say judging hospitals that way likely isn’t fair.

Their recently-released study compares the federal government’s Hospital Compare and crowdsourced sites such as Facebook, Yelp and Google. The research finds it’s difficult for people to accurately understand everything a hospital does, and that leads to biased ratings.

Patient experiences with food, amenities and bedside manner often align with federal government ratings. But IU professor Victoria Perez says judging quality of care and safety is much more nuanced and people often get it wrong.

“About 20 percent of the hospitals rated best within a local market on social media were rated worst in that market by Hospital Compare in terms of patient health outcomes,” she says.

For the crowdsourced ratings to be more useful, Perez says people would have to know how to cross-reference them with a more reliable data source, such as Hospital Compare. But even that site can be challenging to navigate depending on what the consumer is looking for.
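
That cross-referencing is straightforward to automate once both ratings are in hand. A sketch with hypothetical scores, reproducing the best-on-social-media versus worst-on-outcomes comparison from the study:

```python
import pandas as pd

# Hypothetical ratings: a crowdsourced score (e.g. averaged Google/Yelp stars)
# and a Hospital Compare-style outcome score (higher = better outcomes).
hospitals = pd.DataFrame({
    "market":        ["Indy"] * 3 + ["FortWayne"] * 3,
    "hospital":      ["H1", "H2", "H3", "H4", "H5", "H6"],
    "crowd_stars":   [4.8, 3.9, 4.1, 4.6, 4.0, 3.2],
    "compare_score": [2.0, 4.5, 3.8, 4.2, 3.0, 3.9],
})

def crowd_favourite_outcome_rank(df):
    """For each market, take the crowdsourced favourite and report where it
    ranks on the outcome measure (1 = best outcomes in that market)."""
    rows = []
    for market, grp in df.groupby("market"):
        fav = grp.loc[grp["crowd_stars"].idxmax()]
        rank = int(grp["compare_score"].rank(ascending=False)[fav.name])
        rows.append({"market": market, "crowd_favourite": fav["hospital"],
                     "outcome_rank": rank, "hospitals_in_market": len(grp)})
    return pd.DataFrame(rows)

# In the "Indy" market, H1 tops the crowd ratings but ranks last on outcomes,
# the mismatch Perez describes for roughly 20 percent of hospitals.
print(crowd_favourite_outcome_rank(hospitals))
```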

“If you have a condition-specific concern and you can see the clinical measure for a hospital that may be helpful,” says Perez. “But if your particular medical concern is not listed there, it might be hard to extrapolate from the ones that are listed or to know which ones you should be looking at.”

She says consumers would need more information about patient outcomes and other quality metrics to be able to reliably crowdsource a hospital on a site such as Google…(More)”.

Statistics and data science degrees: Overhyped or the real deal?


At The Conversation: “‘Data science’ is hot right now. The number of undergraduate degrees in statistics has tripled in the past decade, and as a statistics professor, I can tell you that it isn’t because freshmen love statistics.

Way back in 2009, economist Hal Varian of Google dubbed statistician the “next sexy job.” Since then, statistician, data scientist and actuary have topped various “best jobs” lists. Not to mention the enthusiastic press coverage of industry applications: Machine learning! Big data! AI! Deep learning!

But is it good advice? I’m going to voice an unpopular opinion for the sake of starting a conversation. Stats is indeed useful, but not in the way that the popular media – and all those online data science degree programs – seem to suggest….

While all the press tends to go to the sensationalist applications – computers that watch cat videos, anyone? – the data science boom reflects a broad increase in demand for data literacy, as a baseline requirement for modern jobs.

The “big data era” doesn’t just mean large amounts of data; it also means increased ease and ability to collect data of all types, in all walks of life. Although the big five tech companies – Google, Apple, Amazon, Facebook and Microsoft – represent about 10 percent of the U.S. market cap and dominate the public imagination, they employ only one-half of one percent of all employees.

Therefore, to be a true revolution, data science will need to infiltrate nontech industries. And it is. The U.S. has seen its impact on political campaigns. I myself have consulted in the medical devices sector. A few years back, Walmart held a data analysis competition as a recruiting tool. The need for people who can dig into the data and parse it is everywhere.

In a speech at the National Academy of Sciences in 2015, Steven “Freakonomics” Levitt related his insights about the need for data-savvy workers, based on his experience as a sought-after consultant in fields ranging from the airline industry to fast food….(More)”.

Translating science into business innovation: The case of open food and nutrition data hackathons


Paper by Christopher Tucci, Gianluigi Viscusi and Heidi Gautschi: “In this article, we explore the use of hackathons and open data in corporations’ open innovation portfolios, addressing a new way for companies to tap into the creativity and innovation of early-stage startup culture, in this case applied to the food and nutrition sector. We study the first Open Food Data Hackdays, held on 10-11 February 2017 in Lausanne and Zurich. The aim of the overall project that the Hackdays event was part of was to use open food and nutrition data as a driver for business innovation. We see hackathons as a new tool in the innovation manager’s toolkit, a kind of live crowdsourcing exercise that goes beyond traditional ideation and develops a variety of prototypes and new ideas for business innovation. Companies then have the option of working with entrepreneurs and taking some of the ideas forward….(More)”.

What Can Satellite Imagery Tell Us About Obesity in Cities?


Emily Matchar at Smithsonian: “About 40 percent of American adults are obese, defined as having a body mass index (BMI) over 30. But obesity is not evenly distributed around the country. Some cities and states have far more obese residents than others. Why? Genetics, stress, income levels and access to healthy foods all play a role. But increasingly researchers are looking at the built environment—our cities—to understand why people are fatter in some places than in others.

New research from the University of Washington attempts to take this approach one step further by using satellite data to examine cityscapes. By using the satellite images in conjunction with obesity data, they hope to uncover which urban features might influence a city’s obesity rate.

The researchers used a deep learning network to analyze about 150,000 high-resolution satellite images of four cities: Los Angeles, Memphis, San Antonio and Seattle. The cities were selected for being from states with both high obesity rates (Texas and Tennessee) and low obesity rates (California and Washington). The network extracted features of the built environment: crosswalks, parks, gyms, bus stops, fast food restaurants—anything that might be relevant to health.

“If there’s no sidewalk you’re less likely to go out walking,” says Elaine Nsoesie, a professor of global health at the University of Washington who led the research.

The team’s algorithm could then see what features were more or less common in areas with greater and lesser rates of obesity. Some findings were predictable: more parks, gyms and green spaces were correlated with lower obesity rates. Others were surprising: more pet stores equaled thinner residents (“a high density of pet stores could indicate high pet ownership, which could influence how often people go to parks and take walks around the neighborhood,” the team hypothesized).
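
The second stage of such a study, relating extracted feature counts to obesity prevalence, can be sketched as a regression on synthetic data (the paper’s actual pipeline and coefficients differ):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in: rows are neighbourhoods, columns are counts of
# built-environment features a vision model extracted from imagery.
rng = np.random.default_rng(0)
n = 200
parks      = rng.poisson(3, n)
fast_food  = rng.poisson(5, n)
crosswalks = rng.poisson(8, n)
X = np.column_stack([parks, fast_food, crosswalks])

# Simulated obesity rates: more parks lower the rate, more fast food raises it.
obesity = 0.35 - 0.02 * parks + 0.015 * fast_food + rng.normal(0, 0.02, n)

model = LinearRegression().fit(X, obesity)
for name, coef in zip(["parks", "fast_food", "crosswalks"], model.coef_):
    print(f"{name:>10}: {coef:+.4f}")  # sign shows direction of association
```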

A paper on the results was recently published in the journal JAMA Network Open….(More)”.

The rush for data risks growing the North-South divide


Laura Mann and Gianluca Lazzolino at SciDevNet: “Across the world, tech firms and software developers are embedding digital platforms into humanitarian and commercial infrastructures. There’s Jembi and Hello Doctor for the healthcare sector, for example; SASSA and Tamween for social policy; and M-farm, i-Cow and Esoko, among many others, for agriculture.

While such systems proliferate, it is time we asked some tough questions about who is controlling this data, and for whose benefit. There is a danger that ‘platformisation’ widens the knowledge gap between firms and scientists in poorer countries and those in more advanced economies.

Digital platforms serve three purposes. They improve interactions between service providers and users; gather transactional data about those users; and nudge them towards behaviours, activities and products considered ‘virtuous’, profitable, or valued — often because they generate more data. This data can be extremely valuable to policy-makers interested in developing interventions, to researchers exploring socio-economic trends and to businesses seeking new markets.

But the development and use of these platforms are not always benign.

Knowledge and power

Digital technologies are knowledge technologies because they record the personal information, assets, behaviour and networks of the people that use them.

Knowledge has a somewhat gentle image of a global good shared openly and evenly across the world. But in reality, it is competitive.

Simply put, knowledge shapes economic rivalry between rich and poor countries. It influences who has power over the rules of the economic game, and it does this in three key ways.

First, firms can use knowledge and technology to become more efficient and competitive in what they do. For example, a farmer can choose to buy technologically enhanced seeds, inputs such as fertilisers, and tools to process their crop.

This technology transfer is not automatic — the farmer must first invest time to learn how to use these tools.  In this sense, economic competition between nations is partly about how well-equipped their people are in using technology effectively.

The second key way in which knowledge impacts global economic competition depends on looking at development as a shift from cut-throat commodity production towards activities that bring higher profits and wages.

In farming, for example, development means moving out of crop production alone into a position of having more control over agricultural inputs, and more involvement in distributing or marketing agricultural goods and services….(More)”.