How Big Data is Helping to Tackle Climate Change


Bernard Marr at DataInformed: “Climate scientists have been gathering a great deal of data for a long time, but analytics technology has only recently caught up. Now that cloud computing, distributed storage, and massive amounts of processing power are affordable for almost everyone, those data sets are being put to use. On top of that, the growing number of Internet of Things devices we carry around is adding to the amount of data we are collecting. And the rise of social media means more and more people are reporting environmental data and uploading photos and videos of their environment, which also can be analyzed for clues.

Perhaps one of the most ambitious projects that employ big data to study the environment is Microsoft’s Madingley, which is being developed with the intention of creating a simulation of all life on Earth. The project already provides a working simulation of the global carbon cycle, and it is hoped that, eventually, everything from deforestation to animal migration, pollution, and overfishing will be modeled in a real-time “virtual biosphere.” Just a few years ago, the idea of a simulation of the entire planet’s ecosphere would have seemed like ridiculous, pie-in-the-sky thinking. But today it’s something into which one of the world’s biggest companies is pouring serious money. Microsoft is doing this because it believes that analytical technology has finally caught up with the ability to collect and store data.

Another data giant that is developing tools to facilitate analysis of climate and ecological data is EMC. Working with scientists at Acadia National Park in Maine, the company has developed platforms to pull in crowd-sourced data from citizen science portals such as eBird and iNaturalist. This allows park administrators to monitor the impact of climate change on wildlife populations as well as to plan and implement conservation strategies.

Last year, the United Nations, under its Global Pulse data analytics initiative, launched the Big Data Climate Challenge, a competition aimed at promoting innovative data-driven climate change projects. Among the first to receive recognition under the program is Global Forest Watch, which combines satellite imagery, crowd-sourced witness accounts, and public datasets to track deforestation, believed to be a leading man-made cause of climate change, around the world. The project has been promoted as a way for ethical businesses to ensure that their supply chain is not complicit in deforestation.

Other initiatives target a more personal level: for example, analyzing the transit routes available for an individual journey on Google Maps and making recommendations based on the carbon emissions of each route.

The idea of “smart cities” is central to the concept of the Internet of Things – the idea that everyday objects and tools are becoming increasingly connected, interactive, and intelligent, and capable of communicating with each other independently of humans. Many of the ideas put forward by smart-city pioneers are grounded in climate awareness, such as reducing carbon dioxide emissions and energy waste across urban areas. Smart metering allows utility companies to increase or restrict the flow of electricity, gas, or water to reduce waste and ensure adequate supply at peak periods. Public transport can be efficiently planned to avoid wasted journeys and provide a reliable service that will encourage citizens to leave their cars at home.

These examples raise an important point: It’s apparent that data – big or small – can tell us if, how, and why climate change is happening. But, of course, this is only really valuable to us if it also can tell us what we can do about it. Some projects, such as Weathersafe, which helps coffee growers adapt to changing weather patterns and soil conditions, are designed to help humans deal with climate change. Others are designed to tackle the problem at the root, by highlighting the factors that cause it in the first place and showing us how we can change our behavior to minimize damage….(More)”

Public Participation Organizations and Open Policy


Paper by Helen Pallett at Science Communication: “This article builds on work in Science and Technology Studies and cognate disciplines concerning the institutionalization of public engagement and participation practices. It describes and analyses ethnographic qualitative research into one “organization of participation,” the UK government–funded Sciencewise program. Sciencewise’s interactions with broader political developments are explored, including the emergence of “open policy” as a key policy object in the UK context. The article considers what the new imaginary of openness means for institutionalized forms of public participation in science policymaking, asking whether this is illustrative of a “constitutional moment” in relations between society and science policymaking….(More)

Looking for Open Data from a different country? Try the European Data portal


Wendy Carrara in DAE blog: “The Open Data movement is reaching all countries in Europe. Data Portals give you access to re-usable government information. But have you ever tried to find Open Data from another country whose language you do not speak? Or have you tried to see whether data from one country exist also in a similar way in another? The European Data Portal that we just launched can help you….

The European Data Portal project’s main work stream is the development of a new pan-European open data infrastructure. Its goal is to be a gateway offering access to data published by administrations in countries across Europe, from the EU and beyond.
The portal was launched during the European Data Forum in Luxembourg.

Additionally, we will support public administrations in publishing more data as open data and have targeted actions to stimulate re-use. By taking a look at the data released by other countries and made available on the European Data Portal, governments can also be inspired to publish new data sets they had not thought about in the first place.

The re-use of Open Data will further boost the economy. The benefits of Open Data are diverse and range from improved performance of public administrations and economic growth in the private sector to wider social welfare. The economic study conducted by the European Data Portal team estimates that between 2016 and 2020 the market size of Open Data will increase by 36.9%, to a value of 75.7 bn EUR in 2020.
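Those figures imply a 2016 baseline that the excerpt does not state; a quick back-of-the-envelope check (illustrative only, not taken from the study itself):

```python
# Derive the implied 2016 Open Data market size from the quoted 2020
# projection. The baseline below is computed here, not from the study.
value_2020_bn = 75.7  # projected market size in 2020, bn EUR (quoted)
growth = 0.369        # quoted growth between 2016 and 2020

value_2016_bn = value_2020_bn / (1 + growth)
print(round(value_2016_bn, 1))  # implied 2016 baseline, bn EUR
```

Working backwards, a 36.9% rise to 75.7 bn EUR puts the implied 2016 market at roughly 55 bn EUR.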

For data to be re-used, it has to be accessible

Currently, the portal includes over 240,000 datasets from 34 European countries. Information about the data available is structured into thirteen different categories ranging from agriculture to transport, including science, justice, health and so on. This enables you to quickly browse through categories and feel inspired by the data made accessible….(More)”

The promise and perils of predictive policing based on big data


H. V. Jagadish in the Conversation: “Police departments, like everyone else, would like to be more effective while spending less. Given the tremendous attention to big data in recent years, and the value it has provided in fields ranging from astronomy to medicine, it should be no surprise that police departments are using data analysis to inform deployment of scarce resources. Enter the era of what is called “predictive policing.”

Some form of predictive policing is likely now in force in a city near you. Memphis was an early adopter. Cities from Minneapolis to Miami have embraced predictive policing. Time magazine named predictive policing (with particular reference to the city of Santa Cruz) one of the 50 best inventions of 2011. New York City Police Commissioner William Bratton recently said that predictive policing is “the wave of the future.”

The term “predictive policing” suggests that the police can anticipate a crime and be there to stop it before it happens and/or apprehend the culprits right away. As the Los Angeles Times points out, it depends on “sophisticated computer analysis of information about previous crimes, to predict where and when crimes will occur.”

At a very basic level, it’s easy for anyone to read a crime map and identify neighborhoods with higher crime rates. It’s also easy to recognize that burglars tend to target businesses at night, when they are unoccupied, and to target homes during the day, when residents are away at work. The challenge is to take a combination of dozens of such factors to determine where crimes are more likely to happen and who is more likely to commit them. Predictive policing algorithms are getting increasingly good at such analysis. Indeed, such was the premise of the movie Minority Report, in which the police can arrest and convict murderers before they commit their crime.
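How “dozens of such factors” get combined is typically some form of statistical scoring. A minimal hypothetical sketch, with invented factor names and weights rather than any real department’s model:

```python
import math

# Hypothetical sketch of combining several crime-related factors into a
# burglary-risk probability for a location at a given time. The factor
# names and weights are invented for illustration; real systems fit such
# parameters to historical crime records.
def risk_probability(features, weights, bias=-3.0):
    """Logistic combination of weighted factor values into a probability."""
    score = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-score))

weights = {"past_burglaries_90d": 0.8, "is_night": 1.2, "is_commercial": 0.9}
quiet_block = {"past_burglaries_90d": 0, "is_night": 0, "is_commercial": 0}
hot_block = {"past_burglaries_90d": 3, "is_night": 1, "is_commercial": 1}

print(risk_probability(quiet_block, weights))  # low baseline risk
print(risk_probability(hot_block, weights))    # substantially higher risk
```

A commercial block at night with recent burglaries scores far above the baseline, which is exactly the kind of relative ranking these systems output: a probability, not a prophecy.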

Predicting a crime with certainty is something that science fiction can have a field day with. But as a data scientist, I can assure you that in reality we can come nowhere close to certainty, even with advanced technology. To begin with, predictions can be only as good as the input data, and quite often these input data have errors.

But even with perfect, error-free input data and unbiased processing, ultimately what the algorithms are determining are correlations. Even if we have perfect knowledge of your troubled childhood, your socializing with gang members, your lack of steady employment, your wacko posts on social media and your recent gun purchases, all that the best algorithm can do is to say it is likely, but not certain, that you will commit a violent crime. After all, to believe such predictions as guaranteed is to deny free will….

What data can do is give us probabilities, rather than certainty. Good data coupled with good analysis can give us very good estimates of probability. If you sum probabilities over many instances, you can usually get a robust estimate of the total.

For example, data analysis can provide a probability that a particular house will be broken into on a particular day based on historical records for similar houses in that neighborhood on similar days. An insurance company may add this up over all days in a year to decide how much to charge for insuring that house….(More)”
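The insurance example can be sketched in a few lines; all the probabilities and cost figures below are invented for illustration:

```python
# Illustrative sketch: per-day break-in probabilities for one house,
# summed over a year to get the expected number of break-ins, then
# turned into an actuarially fair premium. All numbers are invented.
weekday_prob = 0.0002  # assumed break-in probability on a weekday
weekend_prob = 0.0005  # assumed probability on a weekend day

# 261 weekdays + 104 weekend days = 365 days
expected_break_ins = 261 * weekday_prob + 104 * weekend_prob

average_loss = 5000.0  # assumed average loss per break-in, in dollars
fair_annual_premium = expected_break_ins * average_loss

print(expected_break_ins)    # expected break-ins per year
print(fair_annual_premium)   # fair premium before overhead and profit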

Build digital democracy


Dirk Helbing & Evangelos Pournaras in Nature: “Fridges, coffee machines, toothbrushes, phones and smart devices are all now equipped with communicating sensors. In ten years, 150 billion ‘things’ will connect with each other and with billions of people. The ‘Internet of Things’ will generate data volumes that double every 12 hours rather than every 12 months, as is the case now.

Blinded by information, we need ‘digital sunglasses’. Whoever builds the filters to monetize this information determines what we see — Google and Facebook, for example. Many choices that people consider their own are already determined by algorithms. Such remote control weakens responsible, self-determined decision-making and thus society too.

The European Court of Justice’s ruling on 6 October that countries and companies must comply with European data-protection laws when transferring data outside the European Union demonstrates that a new digital paradigm is overdue. To ensure that no government, company or person with sole control of digital filters can manipulate our decisions, we need information systems that are transparent, trustworthy and user-controlled. Each of us must be able to choose, modify and build our own tools for winnowing information.

With this in mind, our research team at the Swiss Federal Institute of Technology in Zurich (ETH Zurich), alongside international partners, has started to create a distributed, privacy-preserving ‘digital nervous system’ called Nervousnet. Nervousnet uses the sensor networks that make up the Internet of Things, including those in smartphones, to measure the world around us and to build a collective ‘data commons’. The many challenges ahead will be best solved using an open, participatory platform, an approach that has proved successful for projects such as Wikipedia and the open-source operating system Linux.

A wise king?

The science of human decision-making is far from understood. Yet our habits, routines and social interactions are surprisingly predictable. Our behaviour is increasingly steered by personalized advertisements and search results, recommendation systems and emotion-tracking technologies. Thousands of pieces of metadata have been collected about every one of us (see go.nature.com/stoqsu). Companies and governments can increasingly manipulate our decisions, behaviour and feelings [1].

Many policymakers believe that personal data may be used to ‘nudge’ people to make healthier and environmentally friendly decisions. Yet the same technology may also promote nationalism, fuel hate against minorities or skew election outcomes [2] if ethical scrutiny, transparency and democratic control are lacking — as they are in most private companies and institutions that use ‘big data’. The combination of nudging with big data about everyone’s behaviour, feelings and interests (‘big nudging’, if you will) could eventually create close to totalitarian power.

Countries have long experimented with using data to run their societies. In the 1970s, Chilean President Salvador Allende created computer networks to optimize industrial productivity [3]. Today, Singapore considers itself a data-driven ‘social laboratory’ [4] and other countries seem keen to copy this model.

The Chinese government has begun rating the behaviour of its citizens [5]. Loans, jobs and travel visas will depend on an individual’s ‘citizen score’, their web history and political opinion. Meanwhile, Baidu — the Chinese equivalent of Google — is joining forces with the military for the ‘China brain project’, using ‘deep learning’ artificial-intelligence algorithms to predict the behaviour of people on the basis of their Internet activity [6].

The intentions may be good: it is hoped that big data can improve governance by overcoming irrationality and partisan interests. But the situation also evokes the warning of the eighteenth-century philosopher Immanuel Kant, that the “sovereign acting … to make the people happy according to his notions … becomes a despot”. It is for this reason that the US Declaration of Independence emphasizes the pursuit of happiness of individuals.

Ruling like a ‘benevolent dictator’ or ‘wise king’ cannot work because there is no way to determine a single metric or goal that a leader should maximize. Should it be gross domestic product per capita or sustainability, power or peace, average life span or happiness, or something else?

Better is pluralism. It hedges risks, promotes innovation, collective intelligence and well-being. Approaching complex problems from varied perspectives also helps people to cope with rare and extreme events that are costly for society — such as natural disasters, blackouts or financial meltdowns.

Centralized, top-down control of data has various flaws. First, it will inevitably become corrupted or hacked by extremists or criminals. Second, owing to limitations in data-transmission rates and processing power, top-down solutions often fail to address local needs. Third, manipulating the search for information and intervening in individual choices undermines ‘collective intelligence’ [7]. Fourth, personalized information creates ‘filter bubbles’ [8]. People are exposed less to other opinions, which can increase polarization and conflict [9].

Fifth, reducing pluralism is as bad as losing biodiversity, because our economies and societies are like ecosystems with millions of interdependencies. Historically, a reduction in diversity has often led to political instability, collapse or war. Finally, by altering the cultural cues that guide people’s decisions, everyday decision-making is disrupted, which undermines rather than bolsters social stability and order.

Big data should be used to solve the world’s problems, not for illegitimate manipulation. But the assumption that ‘more data equals more knowledge, power and success’ does not hold. Although we have never had so much information, we face ever more global threats, including climate change, unstable peace and socio-economic fragility, and political satisfaction is low worldwide. About 50% of today’s jobs will be lost in the next two decades as computers and robots take over tasks. But will we see the macroeconomic benefits that would justify such large-scale ‘creative destruction’? And how can we reinvent half of our economy?

The digital revolution will mainly benefit countries that achieve a ‘win–win–win’ situation for business, politics and citizens alike [10]. To mobilize the ideas, skills and resources of all, we must build information systems capable of bringing diverse knowledge and ideas together. Online deliberation platforms and reconfigurable networks of smart human minds and artificially intelligent systems can now be used to produce collective intelligence that can cope with the diverse and complex challenges surrounding us….(More)” See Nervousnet project

The Power of Nudges, for Good and Bad


Richard H. Thaler in the New York Times: “Nudges, small design changes that can markedly affect individual behavior, have been catching on. These techniques rely on insights from behavioral science, and when used ethically, they can be very helpful. But we need to be sure that they aren’t being employed to sway people to make bad decisions that they will later regret.

Whenever I’m asked to autograph a copy of “Nudge,” the book I wrote with Cass Sunstein, the Harvard law professor, I sign it, “Nudge for good.” Unfortunately, that is meant as a plea, not an expectation.

Three principles should guide the use of nudges:

■ All nudging should be transparent and never misleading.

■ It should be as easy as possible to opt out of the nudge, preferably with as little as one mouse click.

■ There should be good reason to believe that the behavior being encouraged will improve the welfare of those being nudged.

As far as I know, the government teams in Britain and the United States that have focused on nudging have followed these guidelines scrupulously. But the private sector is another matter. In this domain, I see much more troubling behavior.

For example, last spring I received an email telling me that the first prominent review of a new book of mine had appeared: It was in The Times of London. Eager to read the review, I clicked on a hyperlink, only to run into a pay wall. Still, I was tempted by an offer to take out a one-month trial subscription for the price of just £1. As both a consumer and producer of newspaper articles, I have no beef with pay walls. But before signing up, I read the fine print. As expected, I would have to provide credit card information and would be automatically enrolled as a subscriber when the trial period expired. The subscription rate would then be £26 (about $40) a month. That wasn’t a concern because I did not intend to become a paying subscriber. I just wanted to read that one article.

But the details turned me off. To cancel, I had to give 15 days’ notice, so the one-month trial offer actually was good for just two weeks. What’s more, I would have to call London, during British business hours, and not on a toll-free number. That was both annoying and worrying. As an absent-minded American professor, I figured there was a good chance I would end up subscribing for several months, and that reading the article would end up costing me at least £100….

These examples are not unusual. Many companies are nudging purely for their own profit and not in customers’ best interests. In a recent column in The New York Times, Robert Shiller called such behavior “phishing.” Mr. Shiller and George Akerlof, both Nobel-winning economists, have written a book on the subject, “Phishing for Phools.”

Some argue that phishing — or evil nudging — is more dangerous in government than in the private sector. The argument is that government is a monopoly with coercive power, while we have more choice in the private sector over which newspapers we read and which airlines we fly.

I think this distinction is overstated. In a democracy, if a government creates bad policies, it can be voted out of office. Competition in the private sector, however, can easily work to encourage phishing rather than stifle it.

One example is the mortgage industry in the early 2000s. Borrowers were encouraged to take out loans that they could not repay when real estate prices fell. Competition did not eliminate this practice, because it was hard for anyone to make money selling the advice “Don’t take that loan.”

As customers, we can help one another by resisting these come-ons. The more we turn down questionable offers like trip insurance and scrutinize “one month” trials, the less incentive companies will have to use such schemes. Conversely, if customers reward firms that act in our best interests, more such outfits will survive and flourish, and the options available to us will improve….(More)

Building Trust and Protecting Privacy: Progress on the President’s Precision Medicine Initiative


The White House: “Today, the White House is releasing the Privacy and Trust Principles for the President’s Precision Medicine Initiative (PMI). These principles are a foundation for protecting participant privacy and building trust in activities within PMI.

PMI is a bold new research effort to transform how we characterize health and treat disease. PMI will pioneer a new model of patient-powered research that promises to accelerate biomedical discoveries and provide clinicians with new tools, knowledge, and therapies to select which treatments will work best for which patients. The initiative includes development of a new voluntary research cohort by the National Institutes of Health (NIH), a novel regulatory approach to genomic technologies by the Food and Drug Administration, and new cancer clinical trials by the National Cancer Institute at NIH.  In addition, PMI includes aligned efforts by the Federal government and private sector collaborators to pioneer a new approach for health research and healthcare delivery that prioritizes patient empowerment through access to information and policies that enable safe, effective, and innovative technologies to be tested and made available to the public.

Following President Obama’s launch of PMI in January 2015, the White House Office of Science and Technology Policy worked with an interagency group to develop the Privacy and Trust Principles that will guide the Precision Medicine effort. The White House convened experts from within and outside of government over the course of many months to discuss their individual viewpoints on the unique privacy challenges associated with large-scale health data collection, analysis, and sharing. This group reviewed the bioethics literature, analyzed privacy policies for large biobanks and research cohorts, and released a draft set of Principles for public comment in July 2015…..

The Privacy and Trust Principles are organized into 6 broad categories:

  1. Governance that is inclusive, collaborative, and adaptable;
  2. Transparency to participants and the public;
  3. Respecting participant preferences;
  4. Empowering participants through access to information;
  5. Ensuring appropriate data sharing, access, and use;
  6. Maintaining data quality and integrity….(More)”

Peer review in 2015: A global view


A white paper by Taylor & Francis: “Within the academic community, peer review is widely recognized as being at the heart of scholarly research. However, faith in peer review’s integrity is of ongoing and increasing concern to many. It is imperative that publishers (and academic editors) of peer-reviewed scholarly research learn from each other, working together to improve practices in areas such as ethical issues, training, and data transparency….Key findings:

  • Authors, editors and reviewers all agreed that the most important motivation to publish in peer reviewed journals is making a contribution to the field and sharing research with others.
  • Playing a part in the academic process and improving papers are the most important motivations for reviewers. Similarly, 90% of SAS study respondents said that playing a role in the academic community was a motivation to review.
  • Most researchers, across the humanities and social sciences (HSS) and science, technology and medicine (STM), rate the benefit of the peer review process towards improving their article as 8 or above out of 10. This was found to be the most important aspect of peer review in both the ideal and the real world, echoing the earlier large-scale peer review studies.
  • In an ideal world, there is agreement that peer review should detect plagiarism (with mean ratings of 7.1 for HSS and 7.5 for STM out of 10), but agreement that peer review is currently achieving this in the real world is only 5.7 HSS / 6.3 STM out of 10.
  • Researchers thought there was a low prevalence of gender bias but higher prevalence of regional and seniority bias – and suggest that double blind peer review is most capable of preventing reviewer discrimination where it is based on an author’s identity.
  • Most researchers wait between one and six months for an article they’ve written to undergo peer review, yet authors (not reviewers / editors) think up to two months is reasonable.
  • HSS authors say they are kept less well informed than STM authors about the progress of their article through peer review….(More)”

Politics and the New Machine


Jill Lepore in the New Yorker on “What the turn from polls to data science means for democracy”: “…The modern public-opinion poll has been around since the Great Depression, when the response rate—the number of people who take a survey as a percentage of those who were asked—was more than ninety per cent. The participation rate—the number of people who take a survey as a percentage of the population—is far lower. Election pollsters sample only a minuscule portion of the electorate, not uncommonly something on the order of a couple of thousand people out of the more than two hundred million Americans who are eligible to vote. The promise of this work is that the sample is exquisitely representative. But the lower the response rate the harder and more expensive it becomes to realize that promise, which requires both calling many more people and trying to correct for “non-response bias” by giving greater weight to the answers of people from demographic groups that are less likely to respond. Pollster.com’s Mark Blumenthal has recalled how, in the nineteen-eighties, when the response rate at the firm where he was working had fallen to about sixty per cent, people in his office said, “What will happen when it’s only twenty? We won’t be able to be in business!” A typical response rate is now in the single digits.
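The correction for “non-response bias” described here is essentially post-stratification: each demographic group’s answers are reweighted to match its share of the population rather than its share of the respondents. A toy sketch, with the groups, shares, and support rates all invented:

```python
# Toy post-stratification sketch: correct for non-response bias by
# weighting each group's answers by its population share instead of
# its share of respondents. All numbers are invented for illustration.
population_share = {"under_40": 0.40, "over_40": 0.60}
respondents = {"under_40": 100, "over_40": 400}          # younger people answered less
support_in_sample = {"under_40": 0.30, "over_40": 0.60}  # fraction backing a candidate

# Unweighted estimate over-represents whoever answered the phone.
total = sum(respondents.values())
raw = sum(respondents[g] * support_in_sample[g] for g in respondents) / total

# Weighted estimate restores each group to its population share.
weighted = sum(population_share[g] * support_in_sample[g] for g in population_share)

print(raw)       # biased toward heavy responders
print(weighted)  # corrected estimate
```

In this toy case the raw sample says 54% support while the reweighted estimate is 48%: with single-digit response rates, nearly everything a modern poll reports depends on corrections of this kind.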

Meanwhile, polls are wielding greater influence over American elections than ever….

Still, data science can’t solve the biggest problem with polling, because that problem is neither methodological nor technological. It’s political. Pollsters rose to prominence by claiming that measuring public opinion is good for democracy. But what if it’s bad?

A “poll” used to mean the top of your head. Ophelia says of Polonius, “His beard as white as snow: All flaxen was his poll.” When voting involved assembling (all in favor of Smith stand here, all in favor of Jones over there), counting votes required counting heads; that is, counting polls. Eventually, a “poll” came to mean the count itself. By the nineteenth century, to vote was to go “to the polls,” where, more and more, voting was done on paper. Ballots were often printed in newspapers: you’d cut one out and bring it with you. With the turn to the secret ballot, beginning in the eighteen-eighties, the government began supplying the ballots, but newspapers kept printing them; they’d use them to conduct their own polls, called “straw polls.” Before the election, you’d cut out your ballot and mail it to the newspaper, which would make a prediction. Political parties conducted straw polls, too. That’s one of the ways the political machine worked….

Ever since Gallup, two things have been called polls: surveys of opinions and forecasts of election results. (Plenty of other surveys, of course, don’t measure opinions but instead concern status and behavior: Do you own a house? Have you seen a doctor in the past month?) It’s not a bad idea to reserve the term “polls” for the kind meant to produce election forecasts. When Gallup started out, he was skeptical about using a survey to forecast an election: “Such a test is by no means perfect, because a preelection survey must not only measure public opinion in respect to candidates but must also predict just what groups of people will actually take the trouble to cast their ballots.” Also, he didn’t think that predicting elections constituted a public good: “While such forecasts provide an interesting and legitimate activity, they probably serve no great social purpose.” Then why do it? Gallup conducted polls only to prove the accuracy of his surveys, there being no other way to demonstrate it. The polls themselves, he thought, were pointless…

If public-opinion polling is the child of a strained marriage between the press and the academy, data science is the child of a rocky marriage between the academy and Silicon Valley. The term “data science” was coined in 1960, one year after the Democratic National Committee hired Simulmatics Corporation, a company founded by Ithiel de Sola Pool, a political scientist from M.I.T., to provide strategic analysis in advance of the upcoming Presidential election. Pool and his team collected punch cards from pollsters who had archived more than sixty polls from the elections of 1952, 1954, 1956, 1958, and 1960, representing more than a hundred thousand interviews, and fed them into a UNIVAC. They then sorted voters into four hundred and eighty possible types (for example, “Eastern, metropolitan, lower-income, white, Catholic, female Democrat”) and sorted issues into fifty-two clusters (for example, foreign aid). Simulmatics’ first task, completed just before the Democratic National Convention, was a study of “the Negro vote in the North.” Its report, which is thought to have influenced the civil-rights paragraphs added to the Party’s platform, concluded that between 1954 and 1956 “a small but significant shift to the Republicans occurred among Northern Negroes, which cost the Democrats about 1 per cent of the total votes in 8 key states.” After the nominating convention, the D.N.C. commissioned Simulmatics to prepare three more reports, including one that involved running simulations about different ways in which Kennedy might discuss his Catholicism….

Data science may well turn out to be as flawed as public-opinion polling. But a stage in the development of any new tool is to imagine that you’ve perfected it, in order to ponder its consequences. I asked Hilton to suppose that there existed a flawless tool for measuring public opinion, accurately and instantly, a tool available to voters and politicians alike. Imagine that you’re a member of Congress, I said, and you’re about to head into the House to vote on an act—let’s call it the Smeadwell-Nutley Act. As you do, you use an app called iThePublic to learn the opinions of your constituents. You oppose Smeadwell-Nutley; your constituents are seventy-nine per cent in favor of it. Your constituents will instantly know how you’ve voted, and many have set up an account with Crowdpac to make automatic campaign donations. If you vote against the proposed legislation, your constituents will stop giving money to your reëlection campaign. If, contrary to your convictions but in line with your iThePublic, you vote for Smeadwell-Nutley, would that be democracy? …(More)”

Push, Pull, and Spill: A Transdisciplinary Case Study in Municipal Open Government


Paper by Jan Whittington et al: “Cities hold considerable information, including details about the daily lives of residents and employees, maps of critical infrastructure, and records of the officials’ internal deliberations. Cities are beginning to realize that this data has economic and other value: If done wisely, the responsible release of city information can also release greater efficiency and innovation in the public and private sector. New services are cropping up that leverage open city data to great effect.

Meanwhile, activist groups and individual residents are placing increasing pressure on state and local government to be more transparent and accountable, even as others sound an alarm over the privacy issues that inevitably attend greater data promiscuity. This takes the form of political pressure to release more information, as well as increased requests for information under the many public records acts across the country.

The result of these forces is that cities are beginning to open their data as never before. It turns out there is surprisingly little research to date into the important and growing area of municipal open data. This article is among the first sustained, cross-disciplinary assessments of an open municipal government system. We are a team of researchers in law, computer science, information science, and urban studies. We have worked hand-in-hand with the City of Seattle, Washington for the better part of a year to understand its current procedures from each disciplinary perspective. Based on this empirical work, we generate a set of recommendations to help the city manage risk latent in opening its data….(More)”