The free flow of non-personal data


Joint statement by Vice-President Ansip and Commissioner Gabriel on the European Parliament’s vote on the new EU rules facilitating the free flow of non-personal data: “The European Parliament adopted today a Regulation on the free flow of non-personal data proposed by the European Commission in September 2017. …

We welcome today’s vote at the European Parliament. A digital economy and society cannot exist without data and this Regulation completes another key pillar of the Digital Single Market. Only if data flows freely can Europe get the best from the opportunities offered by digital progress and technologies such as artificial intelligence and supercomputers.

This Regulation does for non-personal data what the General Data Protection Regulation has already done for personal data: free and safe movement across the European Union. 

With its vote, the European Parliament has sent a clear signal to all businesses of Europe: it makes no difference where in the EU you store and process your data – data localisation requirements within the Member States are a thing of the past. 

The new rules will provide a major boost to the European data economy, as they open up potential for European start-ups and SMEs to create new services through cross-border data innovation. This could lead to a 4% – or €739 billion – higher EU GDP by 2020 alone.

Together with the General Data Protection Regulation, the Regulation on the free flow of non-personal data will allow the EU to fully benefit from today’s and tomorrow’s data-based global economy.” 

Background

Since the Communication on the European Data Economy was adopted in January 2017 as part of the Digital Single Market strategy, the Commission has run a public online consultation, organised structured dialogues with Member States and held several workshops with different stakeholders. These evidence-gathering initiatives have led to the publication of an impact assessment…. The Regulation on the free flow of non-personal data has no impact on the application of the General Data Protection Regulation (GDPR), as it does not cover personal data. However, the two Regulations will function together to enable the free flow of any data – personal and non-personal – thus creating a single European space for data. In the case of a mixed dataset, the GDPR provision guaranteeing free flow of personal data will apply to the personal data part of the set, and the free flow of non-personal data principle will apply to the non-personal part. …(More)”.

Text Analysis Systems Mine Workplace Emails to Measure Staff Sentiments


Alan Rothman at LLRX: “…For all of these good, bad or indifferent workplaces, a key question is whether any of the actions of management to engage the staff and listen to their concerns ever resulted in improved working conditions and higher levels of job satisfaction.

The answer is most often “yes”. Just having a say in, and some sense of control over, our jobs and workflows can indeed have a demonstrable impact on morale, camaraderie and the bottom line. This phenomenon, known as the Hawthorne Effect (also termed the “observer effect”), was first documented during studies in the 1920s and 1930s, when the management of a factory made improvements to the lighting and work schedules. In turn, worker satisfaction and productivity temporarily increased. This was not so much because there was more light, but rather that the workers sensed that management was paying attention to, and then acting upon, their concerns. The workers perceived they were no longer just cogs in a machine.

Perhaps, too, the Hawthorne Effect is in some ways the workplace equivalent of Heisenberg’s uncertainty principle in physics. To vastly oversimplify this slippery concept, the mere act of observing a subatomic particle can change its position.¹

Giving the processes of observation, analysis and change at the enterprise level a modern (but non-quantum) spin is a fascinating new article in the September 2018 issue of The Atlantic entitled What Your Boss Could Learn by Reading the Whole Company’s Emails, by Frank Partnoy. I highly recommend a click-through and full read if you have an opportunity. I will summarize and annotate it, and then, considering my own thorough lack of understanding of the basics of y=f(x), pose some of my own physics-free questions….

Today the text analytics business, like the work done by KeenCorp, is thriving. The technology has long been established as the processing behind email spam filters. Now it is finding other applications, including monitoring corporate reputations on social media and other sites.²

The finance industry is another growth sector, as investment banks and hedge funds scan a wide variety of information sources to locate “slight changes in language” that may point towards pending increases or decreases in share prices. Financial research providers are using artificial intelligence to mine “insights” from their own selections of news and analytical sources.

But is this technology effective?

In a draft paper entitled Lazy Prices, dated February 22, 2018, Lauren Cohen (Harvard Business School and NBER), Christopher Malloy (Harvard Business School and NBER), and Quoc Nguyen (University of Illinois at Chicago) found that a company’s share price can measurably decline after the firm “subtly changes” the “descriptions of certain risks” in its reporting; their example was NetApp’s 2010 annual report. Algorithms can detect such changes more quickly and effectively than humans. The company subsequently clarified in its 2011 annual report its “failure to comply” with reporting requirements in 2010. A highly skilled stock analyst “might have missed that phrase”, but once again it was captured by the “researchers’ algorithms”.
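
The mechanism is easy to picture: compute how similar this year’s filing is to last year’s, and flag the rare filings whose language shifts. Below is a minimal sketch of that idea using TF-IDF vectors and cosine similarity; the sample risk-factor text and the cutoff value are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of the year-over-year similarity idea behind Lazy Prices.
# The sample text and the cutoff value are invented, not from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

risk_2009 = "We face risks related to competition and demand for our products."
risk_2010 = ("We face risks related to competition and demand for our products. "
             "We may fail to comply with regulations governing certain exports.")

# Represent each filing's risk section as a TF-IDF vector over 1- and 2-grams.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
vectors = vectorizer.fit_transform([risk_2009, risk_2010])

# Year-over-year filings are usually near-identical boilerplate (similarity
# close to 1.0); a drop below a chosen threshold flags the filing for review.
similarity = cosine_similarity(vectors[0], vectors[1])[0, 0]
THRESHOLD = 0.9  # illustrative cutoff

if similarity < THRESHOLD:
    print(f"Language changed (similarity={similarity:.2f}): review this filing.")
else:
    print(f"Mostly boilerplate (similarity={similarity:.2f}).")
```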

In the hands of a “skeptical investor”, this information might well have prompted questions about the differences between the 2010 and 2011 annual reports and, in turn, saved that investor a great deal of money. This detection was an early signal of a looming decline in NetApp’s stock. Half a year after the 2011 report’s publication, it was reported that the Syrian government had bought the company’s equipment and “used that equipment to spy on its citizens”, causing further declines.

Now text analytics is being deployed on a new target: the content of employees’ communications. Although it has been found that workers have little expectation of privacy in their workplaces, some companies have remained reluctant to mine employee communications because of privacy concerns. Even so, companies are finding it ever more challenging to resist the “urge to mine employee information”, especially as text analysis systems continue to improve.

Among the evolving enterprise applications is the use of such systems by human resources departments to assess overall employee morale. Vibe, for example, is an app that scans through communications on Slack, a widely used enterprise messaging platform. Vibe’s algorithm measures the positive and negative emotions of a work team and reports them in real time….(More)”.
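
Vibe’s own scoring is proprietary, but the general shape of such mood tracking is simple: score each message for sentiment, then aggregate across the team and time window. Here is a minimal sketch using the open-source VADER model from NLTK; the messages are invented, and fetching real ones from Slack’s API is out of scope.

```python
# Minimal sketch of team-mood scoring over chat messages, in the spirit of
# tools like Vibe. Vibe's actual algorithm is proprietary; this sketch uses
# the open-source VADER model from NLTK, and the messages are invented.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch
analyzer = SentimentIntensityAnalyzer()

messages = [
    "Great job shipping the release, everyone!",
    "I'm frustrated that the deadline slipped again.",
    "Anyone up for lunch?",
]

# VADER's compound score runs from -1 (most negative) to +1 (most positive).
scores = [analyzer.polarity_scores(m)["compound"] for m in messages]
team_mood = sum(scores) / len(scores)
print(f"Average team mood: {team_mood:+.2f}")
```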

Open Government Data Report: Enhancing Policy Maturity for Sustainable Impact


Report by the OECD: “This report provides an overview of the state of open data policies across OECD member and partner countries, based on data collected through the OECD Open Government Data survey (2013, 2014, 2016/17), country reviews and comparative analysis. The report analyses open data policies using an analytical framework that is in line with the OECD OURdata Index and the International Open Data Charter. It assesses governments’ efforts to enhance the availability, accessibility and re-use of open government data. It makes the case that beyond countries’ commitment to open up good quality government data, the creation of public value requires engaging user communities from the entire ecosystem, such as journalists, civil society organisations, entrepreneurs, major tech private companies and academia. The report also underlines how open data policies are elements of broader digital transformations, and how public sector data policies require interaction with other public sector agendas such as open government, innovation, employment, integrity, public budgeting, sustainable development, urban mobility and transport. It stresses the relevance of measuring open data impacts in order to support the business case for open government data….(More)”.

Renovating democracy from the bottom up


Nathan Gardels at the Washington Post: “The participatory power of social media is a game changer for governance. It levels the playing field among amateurs and experts, peers and authorities and even challenges the legitimacy of representative government. Its arrival coincides with and reinforces the widespread distrust of elites across the Western world, ripening the historical moment for direct democracy.

For the first time, an Internet-based movement has come to power in a major country, Italy, under the slogan “Participate, don’t delegate!” All of the Five Star Movement’s parliamentarians, who rule the country in a coalition with the far-right League party, were nominated and elected to stand for office online. And they have appointed the world’s first minister for direct democracy, Riccardo Fraccaro.

In Rome this week, he explained the participatory agenda of Italy’s ruling coalition government to The WorldPost at a meeting of the Global Forum on Modern Direct Democracy. “Citizens must be granted the same possibility to actively intervene in the process of managing and administrating public goods as normally carried out by their elected representatives,” he enthused. “What we have witnessed in our democracy is a drift toward ‘partyocracy,’ in which a restricted circle of policymakers have been so fully empowered with decision-making capacity that they could virtually ignore and bypass the public will. The mere election of a representative every so many years is no longer sufficient to prevent this from happening. That is why our government will take the next step forward in order to innovate and enhance our democracy.”

Fraccaro went on: “Referenda, public petitions and the citizens’ ballot initiative are nothing other than the direct means available for the citizenry to submit laws that political parties are not willing to propose or to reject rules approved by political parties that are not welcome by the people. Our aim, therefore, is to establish the principles and practices of direct democracy alongside the system of representative government in order to give real, authentic sovereignty to the citizens.”

At the Rome forum, Deputy Prime Minister Luigi di Maio, a Five Star member, railed against the technocrats and banks he says are trying to frustrate the will of the people. He promised forthcoming changes in the Italian constitution to follow through on Fraccaro’s call for citizen-initiated propositions that will go to the public ballot if the legislature does not act on them.

The program that has so far emerged out of the government’s participatory agenda is a mixed bag. It includes everything from anti-immigrant and anti-vaccine policies to the expansion of digital networks and planting more trees. In a move that has unsettled the European Union authorities as well as Italy’s non-partisan, indirectly-elected president, the governing coalition last week proposed both a tax cut and the provision of a universal basic income — despite the fact that Italy’s long-term debt is already 130 percent of GDP.

The Italian experiment warrants close attention as a harbinger of things to come elsewhere. It reveals a paradox for governance in this digital age: the more participation there is, the greater the need for the counterbalance of impartial mediating practices and institutions that can process the cacophony of voices, sort out the deluge of contested information, dispense with magical thinking and negotiate fair trade-offs among the welter of conflicting interests….(More)”.

What is machine learning?


Chris Meserole at Brookings: “In the summer of 1955, while planning a now famous workshop at Dartmouth College, John McCarthy coined the term “artificial intelligence” to describe a new field of computer science. Rather than writing programs that tell a computer how to carry out a specific task, McCarthy pledged that he and his colleagues would instead pursue algorithms that could teach themselves how to do so. The goal was to create computers that could observe the world and then make decisions based on those observations—to demonstrate, that is, an innate intelligence.

The question was how to achieve that goal. Early efforts focused primarily on what’s known as symbolic AI, which tried to teach computers how to reason abstractly. But today the dominant approach by far is machine learning, which relies on statistics instead. Although the approach dates back to the 1950s—one of the attendees at Dartmouth, Arthur Samuel, was the first to describe his work as “machine learning”—it wasn’t until the past few decades that computers had enough storage and processing power for the approach to work well. The rise of cloud computing and customized chips has powered breakthrough after breakthrough, with research centers like OpenAI or DeepMind announcing stunning new advances seemingly every week.
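
To make the contrast with explicit programming concrete, here is a minimal, hypothetical sketch of the statistical approach: rather than hand-writing the rule that maps inputs to outputs, we hand a model labelled examples and let it estimate the rule itself. The toy data and the scikit-learn library choice are illustrative additions, not from the article.

```python
# Minimal sketch of "learning from examples" rather than hand-coded rules.
# The toy data and library choice are illustrative, not from the article.
from sklearn.linear_model import LogisticRegression

# Each example is (hours studied, hours slept); label 1 means passed the exam.
X = [[8, 7], [7, 8], [6, 7], [2, 4], [1, 3], [3, 5]]
y = [1, 1, 1, 0, 0, 0]

# "Learning": the model estimates its parameters from the labelled data,
# instead of a programmer writing an explicit pass/fail rule.
model = LogisticRegression()
model.fit(X, y)

# The fitted model generalizes to inputs it has never seen.
print(model.predict([[5, 6], [2, 3]]))  # expected output: [1 0]
```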

The extraordinary success of machine learning has made it the default method of choice for AI researchers and experts. Indeed, machine learning is now so popular that it has effectively become synonymous with artificial intelligence itself. As a result, it’s not possible to tease out the implications of AI without understanding how machine learning works—as well as how it doesn’t….(More)”.

The law and ethics of big data analytics: A new role for international human rights in the search for global standards


David Nersessian at Business Horizons: “The Economist recently declared that digital information has overtaken oil as the world’s most valuable commodity. Big data technology is inherently global and borderless, yet little international consensus exists over what standards should govern its use. One source of global standards benefitting from considerable international consensus might be used to fill the gap: international human rights law.

This article considers the extent to which international human rights law operates as a legal or ethical constraint on global commercial use of big data technologies. By providing clear baseline standards that apply worldwide, human rights can help shape cultural norms—implemented as ethical practices and global policies and procedures—about what businesses should do with their information technologies. In this way, human rights could play a broad and important role in shaping business thinking about the proper handling of this increasingly valuable commodity in the modern global society…(More)”.

Translating science into business innovation: The case of open food and nutrition data hackathons


Paper by Christopher Tucci, Gianluigi Viscusi and Heidi Gautschi: “In this article, we explore the use of hackathons and open data in corporations’ open innovation portfolios, addressing a new way for companies to tap into the creativity and innovation of early-stage startup culture, in this case applied to the food and nutrition sector. We study the first Open Food Data Hackdays, held on 10-11 February 2017 in Lausanne and Zurich. The aim of the overall project, of which the Hackdays event was a part, was to use open food and nutrition data as a driver for business innovation. We see hackathons as a new tool in the innovation manager’s toolkit, a kind of live crowdsourcing exercise that goes beyond traditional ideation and develops a variety of prototypes and new ideas for business innovation. Companies then have the option of working with entrepreneurs and taking some of the ideas forward….(More)”.

Privacy and Interoperability Challenges Could Limit the Benefits of Education Technology


Report by Katharina Ley Best and John F. Pane: “The expansion of education technology is transforming the learning environment in classrooms, schools, school systems, online, and at home. The rise of education technology brings with it an increased opportunity for the collection and application of data, which are valuable resources for educators, schools, policymakers, researchers, and software developers.

RAND researchers examine some of the possible implications of growing data collection and availability related to education technology. Specifically, this Perspective discusses potential data infrastructure challenges that could limit data usefulness, considers data privacy implications in an education technology context, and reviews privacy principles that could help educators and policymakers evaluate the changing education data privacy landscape in anticipation of potential future changes to regulations and best practices….(More)”.

Open Data, Grey Data, and Stewardship: Universities at the Privacy Frontier


Paper by Christine L. Borgman: “As universities recognize the inherent value in the data they collect and hold, they encounter unforeseen challenges in stewarding those data in ways that balance accountability, transparency, and protection of privacy, academic freedom, and intellectual property. Two parallel developments in academic data collection are converging: (1) open access requirements, whereby researchers must provide access to their data as a condition of obtaining grant funding or publishing results in journals; and (2) the vast accumulation of “grey data” about individuals in their daily activities of research, teaching, learning, services, and administration.

The boundaries between research and grey data are blurring, making it more difficult to assess the risks and responsibilities associated with any data collection. Many sets of data, both research and grey, fall outside privacy regulations such as HIPAA and FERPA, or outside definitions of personally identifiable information (PII). Universities are exploiting these data for research, learning analytics, faculty evaluation, strategic decisions, and other sensitive matters. Commercial entities are besieging universities with requests for access to data or for partnerships to mine them. The privacy frontier facing research universities spans open access practices, uses and misuses of data, public records requests, cyber risk, and curating data for privacy protection. This Article explores the competing values inherent in data stewardship and makes recommendations for practice by drawing on the pioneering work of the University of California in privacy and information security, data governance, and cyber risk….(More)”.

Senators introduce the ‘Artificial Intelligence in Government Act’


Tajha Chappellet-Lanier at FedScoop: “A cadre of senators is looking to prompt the federal government to be a bit more proactive in utilizing artificial intelligence technologies.

To this end, the bipartisan group including Sens. Brian Schatz, D-Hawaii, Cory Gardner, R-Colo., Rob Portman, R-Ohio, and Kamala Harris, D-Calif., introduced the Artificial Intelligence in Government Act on Wednesday. Per a news release, the bill would seek to “improve the use of AI across the federal government by providing resources and directing federal agencies to include AI in data-related planning.”

The bill aims to do a number of things, including establishing an AI in government advisory board, directing the White House Office of Management and Budget to look into AI as part of the federal data strategy, getting the Office of Personnel Management to look at what kinds of employee skills are necessary for AI competence in government and expanding “an office” at the General Services Administration that will provide expertise, do research and “promote U.S. competitiveness.”

“Artificial intelligence has the potential to benefit society in ways we cannot imagine today,” Harris said in a statement. “We already see its immense value in applications ranging from diagnosing cancer to routing vehicles. The AI in Government Act gives the federal government the tools and resources it needs to build its expertise, in partnership with industry and academia. The bill will help develop the policies to ensure that society reaps the benefits of these emerging technologies, while protecting people from potential risks, such as biases in AI.”

The proposed legislation is supported by a bunch of companies and advocacy groups in the tech space including BSA, the Center for Democracy and Technology, the Information Technology and Innovation Foundation, Intel, the Internet Association, the Lincoln Network, Microsoft, the Niskanen Center, and the R Street Institute.

The senators are hardly alone in their conviction that AI will be a powerful tool for government. At a summit in May, the White House Office of Science and Technology Policy created a Select Committee on Artificial Intelligence, composed of senior research and development officials from across the government….(More)”.