China will now officially try to extend its Great Firewall to blockchains


Mike Orcutt at MIT Technology Review: “China’s crackdown on blockchain technology has taken another step: the country’s internet censorship agency has just approved new regulations aimed at blockchain companies.

Hand over the data: The Cyberspace Administration of China (CAC) will require any “entities or nodes” that provide “blockchain information services” to collect users’ real names and national ID or telephone numbers, and allow government officials to access that data.

It will ban companies from using blockchain technology to “produce, duplicate, publish, or disseminate” any content that Chinese law prohibits. Last year, internet users evaded censors by recording the content of two banned articles on the Ethereum blockchain. The rules, first proposed in October, will go into effect next month.
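
How that evasion works is worth unpacking: an Ethereum transaction’s data field can carry arbitrary bytes, and once the transaction is mined those bytes are replicated across every node and cannot be quietly deleted. A minimal encode/decode sketch (illustrative only; actually broadcasting such a transaction would require a library like web3.py and a funded account):

```python
# The banned articles were embedded as UTF-8 text in the data field
# of ordinary Ethereum transactions. This sketch shows only the
# encoding step; the article text here is hypothetical.
article = "Text of a censored article..."

# Encode: UTF-8 bytes, hex-prefixed the way Ethereum calldata expects.
calldata = "0x" + article.encode("utf-8").hex()

# Decode: anyone inspecting the transaction on-chain can reverse it.
recovered = bytes.fromhex(calldata[2:]).decode("utf-8")
assert recovered == article
print(calldata)
```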

Defeating the purpose? For more than a year, China has been cracking down on cryptocurrency trading and its surrounding industry while also singing the praises of blockchain. It appears its goal is to take advantage of the resiliency and tamper-proof nature of blockchains while canceling out their most radical attribute: censorship resistance….(More)”.

IBM aims to use crowdsourced sensor data to improve local weather forecasting globally


Larry Dignan at ZDNet: “IBM is hoping that mobile barometric sensors from individuals opting in, supercomputing, and the Internet of Things can make weather forecasting more local globally.

Big Blue, which owns The Weather Company, will outline the IBM Global High-Resolution Atmospheric Forecasting System (GRAF). GRAF incorporates IoT data in its weather models via crowdsourcing.

While hyperlocal weather forecasts are available in the US, Japan, and some parts of Western Europe, many regions in the world lack an accurate picture of the weather.

Mary Glackin, senior vice president of The Weather Company, said the company is “trying to fill in the blanks.” She added, “In a place like India, weather stations are kilometers away. We think this can be as significant as bringing satellite data into models.”

For instance, the developing world gets forecasts based on global data that are updated every six hours at resolutions of 10 km to 15 km. By using GRAF, IBM said it can offer forecasts for the day ahead that are updated hourly on average and have a 3 km resolution….(More)”.
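
GRAF’s ingestion pipeline is not public, but the crowdsourcing idea can be sketched: bin opted-in smartphone barometer readings into roughly 3 km grid cells and average them, producing the kind of dense surface-pressure field a high-resolution model can assimilate. A purely hypothetical illustration:

```python
# Hypothetical sketch of crowdsourced pressure ingestion: average
# phone barometer readings into ~3 km grid cells. All readings and
# the cell size are illustrative; GRAF's real pipeline is proprietary.
from collections import defaultdict

# (latitude, longitude, pressure in hPa) from opted-in phones
readings = [
    (28.61, 77.20, 1009.8),
    (28.62, 77.21, 1010.1),
    (28.70, 77.10, 1008.9),
]

CELL_DEG = 0.027  # about 3 km of latitude per grid cell

cells = defaultdict(list)
for lat, lon, hpa in readings:
    key = (round(lat / CELL_DEG), round(lon / CELL_DEG))
    cells[key].append(hpa)

# One averaged pressure value per cell, ready for model assimilation.
for key, values in sorted(cells.items()):
    print(key, round(sum(values) / len(values), 1))
```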

Los Angeles Accuses Weather Channel App of Covertly Mining User Data


Jennifer Valentino-DeVries and Natasha Singer in The New York Times: “The Weather Channel app deceptively collected, shared and profited from the location information of millions of American consumers, the city attorney of Los Angeles said in a lawsuit filed on Thursday.

One of the most popular online weather services in the United States, the Weather Channel app has been downloaded more than 100 million times and has 45 million active users monthly.

The government said the Weather Company, the business behind the app, unfairly manipulated users into turning on location tracking by implying that the information would be used only to localize weather reports. Yet the company, which is owned by IBM, also used the data for unrelated commercial purposes, like targeted marketing and analysis for hedge funds, according to the lawsuit.

In the complaint, the city attorney excoriated the Weather Company, saying it unfairly took advantage of its app’s popularity and the fact that consumers were likely to give their location data to get local weather alerts. The city said that the company failed to sufficiently disclose its data practices when it got users’ permission to track their location and that it obscured other tracking details in its privacy policy.

“These issues certainly aren’t limited to our state,” the city attorney, Mike Feuer, said. “Ideally this litigation will be the catalyst for other action — either litigation or legislative activity — to protect consumers’ ability to assure their private information remains just that, unless they speak clearly in advance.”…(More)”.

Can a set of equations keep U.S. census data private?


Jeffrey Mervis at Science: “The U.S. Census Bureau is making waves among social scientists with what it calls a “sea change” in how it plans to safeguard the confidentiality of data it releases from the decennial census.

The agency announced in September 2018 that it would apply a mathematical concept called differential privacy to its release of 2020 census data, after conducting experiments that suggest current approaches can’t assure confidentiality. But critics of the new policy believe the Census Bureau is moving too quickly to fix a system that isn’t broken. They also fear the changes will degrade the quality of the information used by thousands of researchers, businesses, and government agencies.

The move has implications that extend far beyond the research community. Proponents of differential privacy say a fierce, ongoing legal battle over plans to add a citizenship question to the 2020 census has only underscored the need to assure people that the government will protect their privacy....

Differential privacy, first described in 2006, isn’t a substitute for swapping and other ways to perturb the data. Rather, it allows someone—in this case, the Census Bureau—to measure the likelihood that enough information will “leak” from a public data set to open the door to reconstruction.

“Any time you release a statistic, you’re leaking something,” explains Jerry Reiter, a professor of statistics at Duke University in Durham, North Carolina, who has worked on differential privacy as a consultant with the Census Bureau. “The only way to absolutely ensure confidentiality is to release no data. So the question is, how much risk is OK? Differential privacy allows you to put a boundary” on that risk....

In the case of census data, however, the agency has already decided what information it will release, and the number of queries is unlimited. So its challenge is to calculate how much the data must be perturbed to prevent reconstruction....
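
The core idea is easy to illustrate with the textbook Laplace mechanism: add noise calibrated to how much one person can change a query’s answer (its sensitivity) and to the privacy budget epsilon. The bureau’s production system is far more elaborate; this is only a minimal sketch of the principle:

```python
# Minimal sketch of the Laplace mechanism for a count query.
# A count changes by at most 1 when one person is added or removed
# (sensitivity = 1), so Laplace noise with scale 1/epsilon bounds
# how much any individual's presence can leak.
import numpy as np

def dp_count(true_count: int, epsilon: float) -> float:
    """Return a noisy count satisfying epsilon-differential privacy."""
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Smaller epsilon -> more noise -> stronger privacy, less accuracy.
for eps in (0.1, 1.0, 10.0):
    print(eps, round(dp_count(1000, eps), 1))
```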

A professor of labor economics at Cornell University, John Abowd first learned that traditional procedures to limit disclosure were vulnerable—and that algorithms existed to quantify the risk—at a 2005 conference on privacy attended mainly by cryptographers and computer scientists. “We were speaking different languages, and there was no Rosetta Stone,” he says.

He took on the challenge of finding common ground. In 2008, building on a long relationship with the Census Bureau, he and a team at Cornell created the first application of differential privacy to a census product. It is a web-based tool, called OnTheMap, that shows where people work and live….

The three-step process required substantial computing power. First, the researchers reconstructed records for individuals—say, a 55-year-old Hispanic woman—by mining the aggregated census tables. Then, they tried to match the reconstructed individuals to even more detailed census block records (that still lacked names or addresses); they found “putative matches” about half the time.

Finally, they compared the putative matches to commercially available credit databases in hopes of attaching a name to a particular record. Even if they could, however, the team didn’t know whether they had actually found the right person.
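
A toy version of the first, reconstruction step makes the risk concrete: given only a few published aggregates for a hypothetical block, brute force recovers every set of individual records consistent with them. All the numbers below are invented:

```python
# Toy reconstruction attack on a hypothetical 3-person census block.
# Published aggregates (all invented for illustration):
from itertools import combinations_with_replacement

N = 3
MEAN_AGE = 36.67        # published to two decimal places
HISPANIC_TOTAL = 2
HISPANIC_OVER_50 = 1
HISPANIC_UNDER_30 = 1

AGES = [25, 30, 55]
ETHNICITIES = ["Hispanic", "White"]
candidates = [(a, e) for a in AGES for e in ETHNICITIES]

solutions = []
for combo in combinations_with_replacement(candidates, N):
    ages = [a for a, _ in combo]
    hispanic_ages = [a for a, e in combo if e == "Hispanic"]
    if abs(sum(ages) / N - MEAN_AGE) > 0.005:
        continue
    if len(hispanic_ages) != HISPANIC_TOTAL:
        continue
    if sum(1 for a in hispanic_ages if a > 50) != HISPANIC_OVER_50:
        continue
    if sum(1 for a in hispanic_ages if a < 30) != HISPANIC_UNDER_30:
        continue
    solutions.append(combo)

# A single surviving candidate means the block's individual records
# -- e.g., a 55-year-old Hispanic resident -- have been fully
# reconstructed from "anonymous" aggregate tables.
print(solutions)
```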

Abowd won’t say what proportion of the putative matches appeared to be correct. (He says a forthcoming paper will contain the ratio, which he calls “the amount of uncertainty an attacker would have once they claim to have reidentified a person from the public data.”) Although one of Abowd’s recent papers notes that “the risk of re-identification is small,” he believes the experiment proved reidentification “can be done.” And that, he says, “is a strong motivation for moving to differential privacy.”…

Such arguments haven’t convinced Steven Ruggles and other social scientists opposed to applying differential privacy to the 2020 census. They are circulating manuscripts that question the significance of the census reconstruction exercise and that call on the agency to delay and change its plan....

Ruggles, meanwhile, has spent a lot of time thinking about the kinds of problems differential privacy might create. His Minnesota institute, for instance, disseminates data from the Census Bureau and 105 other national statistical agencies to 176,000 users. And he fears differential privacy will put a serious crimp in that flow of information…

There are also questions of capacity and accessibility. The bureau’s secure research data centers require users to do all their work onsite, so researchers would have to travel, and the centers offer fewer than 300 workstations in total....

Abowd has said, “The deployment of differential privacy within the Census Bureau marks a sea change for the way that official statistics are produced and published.” And Ruggles agrees. But he says the agency hasn’t done enough to equip researchers with the maps and tools needed to navigate the uncharted waters….(More)”.

In High-Tech Cities, No More Potholes, but What About Privacy?


Timothy Williams in The New York Times: “Hundreds of cities, large and small, have adopted or begun planning smart cities projects. But the risks are daunting. Experts say cities frequently lack the expertise to understand privacy, security and financial implications of such arrangements. Some mayors acknowledge that they have yet to master the responsibilities that go along with collecting billions of bits of data from residents….

Supporters of “smart cities” say that the potential is enormous and that some projects could go beyond creating efficiencies and actually save lives. Among the plans under development are augmented reality programs that could help firefighters find people trapped in burning buildings and the collection of sewer samples by robots to determine opioid use so that city services could be aimed at neighborhoods most in need.

The hazards are also clear.

“Cities don’t know enough about data, privacy or security,” said Lee Tien, a lawyer at the Electronic Frontier Foundation, a nonprofit organization focused on digital rights. “Local governments bear the brunt of so many duties — and in a lot of these cases, they are often too stupid or too lazy to talk to people who know.”

Cities habitually feel compelled to outdo each other, but the competition has now been intensified by lobbying from tech companies and federal inducements to modernize.

“There is incredible pressure on an unenlightened city to be a ‘smart city,’” said Ben Levine, executive director at MetroLab Network, a nonprofit organization that helps cities adapt to technology change.

That has left Washington, D.C., and dozens of other cities testing self-driving cars and Orlando trying to harness its sunshine to power electric vehicles. San Francisco has a system that tracks bicycle traffic, while Palm Beach, Fla., uses cycling data to decide where to send street sweepers. Boise, Idaho, monitors its trash dumps with drones. Arlington, Tex., is looking at creating a transit system based on data from ride-sharing apps….(More)”.

A Research Roadmap to Advance Data Collaboratives Practice as a Novel Research Direction


Iryna Susha, Theresa A. Pardo, Marijn Janssen, Natalia Adler, Stefaan G. Verhulst and Todd Harbour in the International Journal of Electronic Government Research (IJEGR): “An increasing number of initiatives have emerged around the world to help facilitate data sharing and collaborations to leverage different sources of data to address societal problems. They are called “data collaboratives”. Data collaboratives are seen as a novel way to match real-life problems with relevant expertise and data from across sectors. Despite their significance and growing experimentation by practitioners, there has been limited research in this field. In this article, the authors report on the outcomes of a panel discussing critical issues facing data collaboratives and develop a research and development agenda. The panel included government officials, academics, and practitioners and was held in June 2017 during the 18th International Conference on Digital Government Research at City University of New York (Staten Island, New York, USA). The article begins by discussing the concept of data collaboratives. The authors then formulate research questions and topics for the research roadmap based on the panel discussions. The research roadmap poses questions across nine topics: conceptualizing data collaboratives, the value of data, matching data to problems, impact analysis, incentives, capabilities, governance, data management, and interoperability. Finally, the authors discuss how digital government research can contribute to answering some of the identified research questions….(More)”. See also: http://datacollaboratives.org/

Firm Led by Google Veterans Uses A.I. to ‘Nudge’ Workers Toward Happiness


Daisuke Wakabayashi in The New York Times: “Technology companies like to promote artificial intelligence’s potential for solving some of the world’s toughest problems, like reducing automobile deaths and helping doctors diagnose diseases. A company started by three former Google employees is pitching A.I. as the answer to a more common problem: being happier at work.

The start-up, Humu, is based in Google’s hometown, and it builds on some of the so-called people-analytics programs pioneered by the internet giant, which has studied things like the traits that define great managers and how to foster better teamwork.

Humu wants to bring similar data-driven insights to other companies. It digs through employee surveys using artificial intelligence to identify one or two behavioral changes that are likely to make the biggest impact on elevating a work force’s happiness. Then it uses emails and text messages to “nudge” individual employees into small actions that advance the larger goal.
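
Humu’s engine is proprietary, but the survey-mining step described here can be sketched as ranking survey items by how strongly they correlate with overall happiness and nudging on the top one or two. A hypothetical illustration with synthetic data:

```python
# Hypothetical sketch of the survey-mining step: rank survey items
# by |correlation| with overall happiness and pick nudge targets.
# Item names and all data are synthetic; Humu's system is proprietary.
import numpy as np

rng = np.random.default_rng(1)
items = ["decisions_are_transparent", "manager_asks_for_input",
         "workload_is_sustainable", "team_gives_feedback"]

# Synthetic 1-5 survey responses, one row per employee.
responses = rng.integers(1, 6, size=(500, len(items)))
happiness = rng.integers(1, 6, size=500)

scores = [abs(np.corrcoef(responses[:, i], happiness)[0, 1])
          for i in range(len(items))]
ranked = sorted(zip(scores, items), reverse=True)

# The one or two highest-impact items become the nudge targets.
for score, item in ranked[:2]:
    print(f"nudge target: {item} (|r| = {score:.2f})")
```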

At a company where workers feel that the way decisions are made is opaque, Humu might nudge a manager before a meeting to ask the members of her team for input and to be prepared to change her mind. Humu might ask a different employee to come up with questions involving her team that she would like to have answered.

At the heart of Humu’s efforts is the company’s “nudge engine” (yes, it’s trademarked). It is based on the economist Richard Thaler’s Nobel Prize-winning research into how people often make decisions because of what is easier rather than what is in their best interest, and how a well-timed nudge can prompt them to make better choices.

Google has used this approach to coax employees into the corporate equivalent of eating their vegetables, prodding them to save more for retirement, waste less food at the cafeteria and opt for healthier snacks….

But will workers consider the nudges useful or manipulative?

Todd Haugh, an assistant professor of business law and ethics at Indiana University’s Kelley School of Business, said nudges could push workers into behaving in ways that benefited their employers’ interests over their own.

“The companies are the only ones who know what the purpose of the nudge is,” Professor Haugh said. “The individual who is designing the nudge is the one whose interests are going to be put in the forefront.”…(More)”.


Google Searches Could Predict Heroin Overdoses


Rod McCullom at Scientific American: “About 115 people nationwide die every day from opioid overdoses, according to the U.S. Centers for Disease Control and Prevention. A lack of timely, granular data exacerbates the crisis; one study showed opioid deaths were undercounted by as many as 70,000 between 1999 and 2015, making it difficult for governments to respond. But now Internet searches have emerged as a data source to predict overdose clusters in cities or even specific neighborhoods—information that could aid local interventions that save lives. 

The working hypothesis was that some people searching for information on heroin and other opioids might overdose in the near future. To test this, a researcher at the University of California Institute for Prediction Technology (UCIPT) and his colleagues developed several statistical models to forecast overdoses based on opioid-related keywords, metropolitan income inequality and total number of emergency room visits. They discovered regional differences in where and how people searched for such information and found that more overdoses were associated with a greater number of searches per keyword. The best-fitting model, the researchers say, explained about 72 percent of the relation between the most popular search terms and heroin-related E.R. visits. The authors say their study, published in the September issue of Drug and Alcohol Dependence, is the first report of using Google searches in this way.

To develop their models, the researchers obtained search data for 12 prescription and nonprescription opioids between 2005 and 2011 in nine U.S. metropolitan areas. They compared these with Substance Abuse and Mental Health Services Administration records of heroin-related E.R. admissions during the same period. The models can be modified to predict overdoses of other opioids or narrow searches to specific zip codes, says lead study author Sean D. Young, a behavioral psychologist and UCIPT executive director. That could provide early warnings of overdose clusters and help to decide where to distribute the overdose reversal medication naloxone….(More)”.
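
The authors’ models are not published as code, but the basic setup — regressing heroin-related E.R. visits on opioid keyword search volumes and reporting the share of variance explained — can be sketched with synthetic data:

```python
# Illustrative sketch only (not the study's actual model or data):
# fit a linear model of E.R. visits on keyword search volumes and
# report R^2, the kind of variance-explained figure quoted above.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic stand-in data: rows = metro-area observations, columns =
# normalized search volume for 12 opioid-related keywords.
n_obs, n_keywords = 200, 12
searches = rng.random((n_obs, n_keywords))
true_weights = rng.random(n_keywords)
er_visits = searches @ true_weights + rng.normal(0, 0.5, n_obs)

model = LinearRegression().fit(searches, er_visits)
print(f"R^2 (share of variance explained): "
      f"{model.score(searches, er_visits):.2f}")
```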

Democracy and Digital Technology


Article by Ted Piccone in the International Journal on Human Rights: “Democratic governments are facing unique challenges in maximising the upside of digital technology while minimising its threats to their more open societies. Protecting fair elections, fundamental rights online, and multi-stakeholder approaches to internet governance are three interrelated priorities central to defending strong democracies in an era of rising insecurity, increasing restrictions, and geopolitical competition.

The growing challenges democracies face in managing the complex dimensions of digital technology have become a defining domestic and foreign policy issue with direct implications for human rights and the democratic health of nations. The progressive digitisation of nearly all facets of society and the inherent trans-border nature of the internet raise a host of difficult problems when public and private information online is subject to manipulation, hacking, and theft.

This article addresses digital technology as it relates to three distinct but interrelated subtopics: free and fair elections, human rights, and internet governance. In all three areas, governments and the private sector are struggling to keep up with the positive and negative aspects of the rapid diffusion of digital technology. To address these challenges, democratic governments and legislators, in partnership with civil society and media and technology companies, should urgently lead the way toward devising and implementing rules and best practices for protecting free and fair electoral processes from external manipulation, defending human rights online, and protecting internet governance from restrictive, lowest common denominator approaches. The article concludes by setting out what some of these rules and best practices should be…(More)”.

Selling Smartness: Corporate Narratives and the Smart City as a Sociotechnical Imaginary


Jathan Sadowski and Roy Bendor in Science, Technology and Human Values: “This article argues for engaging with the smart city as a sociotechnical imaginary. By conducting a close reading of primary source material produced by the companies IBM and Cisco over a decade of work on smart urbanism, we argue that the smart city imaginary is premised in a particular narrative about urban crises and technological salvation. This narrative serves three main purposes: (1) it fits different ideas and initiatives into a coherent view of smart urbanism, (2) it sells and disseminates this version of smartness, and (3) it crowds out alternative visions and corresponding arrangements of smart urbanism.

Furthermore, we argue that IBM and Cisco construct smart urbanism as both a reactionary and visionary force, plotting a model of the near future, but one that largely reflects and reinforces existing sociopolitical systems. We conclude by suggesting that breaking IBM’s and Cisco’s discursive dominance over the smart city imaginary requires us to reimagine what smart urbanism means and create counter-narratives that open up space for alternative values, designs, and models….(More)”.