Human Rights in the Big Data World


Paper by Francis Kuriakose and Deepa Iyer: “An ethical approach to human rights conceives and evaluates law through the underlying value concerns. This paper examines human rights after the introduction of big data using an ethical approach to rights. First, the central value concerns such as equity, equality, sustainability and security are derived from the history of the digital technological revolution. Then, the properties and characteristics of big data are analyzed to understand emerging value concerns such as accountability, transparency, traceability, explainability and disprovability.

Using these value points, this paper argues that big data calls for two types of evaluations regarding human rights. The first is the reassessment of existing human rights in the digital sphere, predominantly through the right to equality and the right to work. The second is the conceptualization of new digital rights such as the right to privacy and the right against propensity-based discrimination. The paper concludes that as we increasingly share the world with intelligent systems, these new values expand and modify the existing human rights paradigm….(More)”.

Text Analysis Systems Mine Workplace Emails to Measure Staff Sentiments


Alan Rothman at LLRX: “…For all of these good, bad or indifferent workplaces, a key question is whether any of the actions of management to engage the staff and listen to their concerns ever resulted in improved working conditions and higher levels of job satisfaction.

The answer is most often “yes”. Just having a say in, and some sense of control over, our jobs and workflows can indeed have a demonstrable impact on morale, camaraderie and the bottom line. This phenomenon, known as the Hawthorne Effect and also termed the “Observer Effect”, was first documented during studies in the 1920s and 1930s, when the management of a factory made improvements to the lighting and work schedules. In turn, worker satisfaction and productivity temporarily increased. This was not so much because there was more light, but rather because the workers sensed that management was paying attention to, and then acting upon, their concerns. The workers perceived they were no longer just cogs in a machine.

Perhaps, too, the Hawthorne Effect is in some ways the workplace equivalent of Heisenberg’s Uncertainty Principle in physics. To vastly oversimplify this slippery concept, the mere act of observing a subatomic particle can change its position.¹

Giving the processes of observation, analysis and change at the enterprise level a modern (but non-quantum) spin is a fascinating new article in the September 2018 issue of The Atlantic entitled What Your Boss Could Learn by Reading the Whole Company’s Emails, by Frank Partnoy. I highly recommend a click-through and full read if you have an opportunity. I will summarize and annotate it, and then, considering my own thorough lack of understanding of the basics of y=f(x), pose some of my own physics-free questions….

Today the text analytics business, like the work done by KeenCorp, is thriving. It has long served as the processing behind email spam filters. Now it is finding other applications, including monitoring corporate reputations on social media and other sites.²

The finance industry is another growth sector, as investment banks and hedge funds scan a wide variety of information sources to locate “slight changes in language” that may point towards pending increases or decreases in share prices. Financial research providers are using artificial intelligence to mine “insights” from their own selections of news and analytical sources.

But is this technology effective?

In a paper entitled Lazy Prices, by Lauren Cohen (Harvard Business School and NBER), Christopher Malloy (Harvard Business School and NBER), and Quoc Nguyen (University of Illinois at Chicago), in a draft dated February 22, 2018, the researchers found that a company’s share price can measurably decline after the firm “subtly changes” the “descriptions of certain risks” in its reporting. Their example is NetApp, which made such a change in its 2010 annual report. Algorithms can detect such changes more quickly and effectively than humans. The company subsequently clarified in its 2011 annual report its “failure to comply” with reporting requirements in 2010. A highly skilled stock analyst “might have missed that phrase”, but once again it was captured by the “researchers’ algorithms”.
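The paper’s exact methodology aside, the core idea — flagging filings whose language changed year over year — can be sketched in a few lines. The snippet below is purely illustrative: the texts and the threshold are invented, and the authors use document-level similarity measures (such as cosine and Jaccard similarity) over full filing sections, not Python’s difflib, which merely stands in here for simplicity.

```python
from difflib import SequenceMatcher

def disclosure_change(prev_text: str, curr_text: str) -> float:
    """Return the fraction of text that changed between two filings.

    Illustrative stand-in for the document-similarity measures used
    in the actual research.
    """
    similarity = SequenceMatcher(None, prev_text, curr_text).ratio()
    return 1.0 - similarity

# Invented example disclosures for two consecutive annual reports.
risk_2009 = "We rely on compliance with export regulations in all markets."
risk_2010 = ("We rely on compliance with export regulations in all markets. "
             "Failure to comply may result in penalties.")

# A year-over-year change above a chosen threshold would flag the
# filing for closer human review.
if disclosure_change(risk_2009, risk_2010) > 0.1:
    print("flag: risk disclosure changed materially")
```

An unchanged disclosure scores 0.0; the larger the edit, the higher the score, which is the signal the researchers found the market slow to price in.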

In the hands of a “skeptical investor”, this information might well have prompted questions about the differences between the 2010 and 2011 annual reports and, in turn, saved that investor a great deal of money. This detection was an early signal of a looming decline in NetApp’s stock. Half a year after the 2011 report’s publication, it was reported that the Syrian government had bought equipment from the company and “used that equipment to spy on its citizens”, causing further declines.

Now text analytics is being deployed on a new target: the content of employees’ communications. Although it has been found that workers have no expectation of privacy in their workplaces, some companies remain reluctant to mine employee communications because of privacy concerns. Still, companies are finding it ever more challenging to resist the “urge to mine employee information”, especially as text analysis systems continue to improve.

Among the evolving enterprise applications is the use of these tools by human resources departments to assess overall employee morale. For example, Vibe is an app that scans through communications on Slack, a widely used enterprise platform. Vibe’s algorithm measures the positive and negative emotions of a work team and reports them in real time….(More)”.
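Vibe’s lexicon and model are proprietary, so any concrete code here is guesswork; still, a toy lexicon-based scorer conveys the general shape of mining team chat for a morale signal. The word lists below are invented for illustration and bear no relation to Vibe’s actual algorithm.

```python
import re

# Invented sentiment lexicons; a real system would use a trained model
# rather than hand-picked word lists.
POSITIVE = {"great", "thanks", "love", "happy", "shipped"}
NEGATIVE = {"blocked", "frustrated", "broken", "overworked", "sorry"}

def message_score(message: str) -> int:
    """Count positive minus negative lexicon hits in one message."""
    words = set(re.findall(r"[a-z']+", message.lower()))
    return len(words & POSITIVE) - len(words & NEGATIVE)

def team_vibe(messages: list[str]) -> float:
    """Average sentiment across a batch of messages; >0 leans positive."""
    if not messages:
        return 0.0
    return sum(message_score(m) for m in messages) / len(messages)

print(team_vibe(["great work, thanks!", "deploy is broken and I'm blocked"]))
```

Aggregating per-message scores into a rolling team-level average is what turns individual chatter into the kind of real-time morale dashboard the article describes — and is also precisely what raises the privacy questions discussed above.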

Craft metrics to value co-production


Liz Richardson and Beth Perry at Nature: “Advocates of co-production encourage collaboration between professional researchers and those affected by that research, to ensure that the resulting science is relevant and useful. Opening up science beyond scientists is essential, particularly where problems are complex, solutions are uncertain and values are salient. For example, patients should have input into research on their conditions, and first-hand experience of local residents should shape research on environmental-health issues.

But what constitutes success on these terms? Without a better understanding of this, it is harder to incentivize co-production in research. A key way to support co-production is reconfiguring that much-derided feature of academic careers: metrics.

Current indicators of research output (such as paper counts or the h-index) conceptualize the value of research narrowly. They are already roundly criticized as poor measures of quality or usefulness. Less appreciated is the fact that these metrics also leave out the societal relevance of research and omit diverse approaches to creating knowledge about social problems.

Peer review also has trouble assessing the value of research that sits at disciplinary boundaries or that addresses complex social challenges. It denies broader social accountability by giving scientists a monopoly on determining what is legitimate knowledge¹. Relying on academic peer review as a means of valuing research can discourage broader engagement.

This privileges abstract and theoretical research over work that is localized and applied. For example, research on climate-change adaptation, conducted in the global south by researchers embedded in affected communities, can make real differences to people’s lives. Yet it is likely to be valued less highly by conventional evaluation than research that is generalized from afar and then published in a high-impact English-language journal….(More)”.

Desire paths: the illicit trails that defy the urban planners


So goes the logic of “desire paths” – described by Robert Macfarlane as “paths & tracks made over time by the wishes & feet of walkers, especially those paths that run contrary to design or planning”; he calls them “free-will ways”. The New Yorker offers other names: “cow paths, pirate paths, social trails, kemonomichi (beast trails), chemins de l’âne (donkey paths), and Olifantenpad (elephant trails)”. JM Barrie described them as “Paths that have Made Themselves”….

Desire paths have been described as illustrating “the tension between the native and the built environment and our relationship to them”. Because they often form in areas where there are no pavements, they can be seen to “indicate [the] yearning” of those wishing to walk, a way for “city dwellers to ‘write back’ to city planners, giving feedback with their feet”.

But as well as revealing the path of least resistance, they can also reveal where people refuse to tread. If you’ve been walking the same route for years, an itchy-footed urge to go off-piste, even just a few metres, is probably something you’ll identify with. It’s this idea that led one academic journal to describe them as a record of “civil disobedience”.

Rather than dismiss or even chastise the naughty pedestrian by placing fences or railings to block off “illicit” wanderings, some planners work to incorporate them into urban environments. This chimes with the thinking of Jane Jacobs, an advocate of configuring cities around desire lines, who said: “There is no logic that can be superimposed on the city; people make it, and it is to them … that we must fit our plans.”…(More)”.

Open Data, Grey Data, and Stewardship: Universities at the Privacy Frontier.


Paper by Christine L. Borgman: “As universities recognize the inherent value in the data they collect and hold, they encounter unforeseen challenges in stewarding those data in ways that balance accountability, transparency, and protection of privacy, academic freedom, and intellectual property. Two parallel developments in academic data collection are converging: (1) open access requirements, whereby researchers must provide access to their data as a condition of obtaining grant funding or publishing results in journals; and (2) the vast accumulation of “grey data” about individuals in their daily activities of research, teaching, learning, services, and administration.

The boundaries between research and grey data are blurring, making it more difficult to assess the risks and responsibilities associated with any data collection. Many sets of data, both research and grey, fall outside privacy regulations such as HIPAA, FERPA, and PII. Universities are exploiting these data for research, learning analytics, faculty evaluation, strategic decisions, and other sensitive matters. Commercial entities are besieging universities with requests for access to data or for partnerships to mine them. The privacy frontier facing research universities spans open access practices, uses and misuses of data, public records requests, cyber risk, and curating data for privacy protection. This Article explores the competing values inherent in data stewardship and makes recommendations for practice by drawing on the pioneering work of the University of California in privacy and information security, data governance, and cyber risk….(More)”.

Social Media Use in Crisis and Risk Communication: Emergencies, Concerns and Awareness


Open Access Book edited by Harald Hornmoen and Klas Backholm: “This book is about how different communicators – whether professionals, such as crisis managers, first responders and journalists, or private citizens and disaster victims – have used social media to communicate about risks and crises. It is also about how these very different actors can play a crucial role in mitigating or preventing crises. How can they use social media to strengthen their own and the public’s awareness and understanding of crises when they unfold? How can they use social media to promote resilience during crises and the ability to deal with the after-effects? Moreover, what can they do to avoid using social media in a manner that weakens the situation awareness of crisis workers and citizens, or obstructs effective emergency management?

The RESCUE (Researching Social Media and Collaborative Software Use in Emergency Situations) project, on which this book is based, has sought to enable a more efficient and appropriate use of social media among key communicators, such as journalists and government actors involved in crisis management. Through empirical studies, and by drawing on relevant theory, the collection aims to improve our understanding of how social media have been used in different types of risks and crises. Building on our empirical work, we provide research-based input into how social media can be used efficiently by different communicators in a way appropriate to the specific crisis and to the concerns of the public.

We address our questions by presenting new research-based knowledge on social media use during different crises: the terrorist attacks in Norway on 22 July 2011; the central European floods in Austria in 2013; and the West African Ebola outbreak in 2014. The social media platforms analysed include the most popular ones in the affected areas at the time of the crises: Twitter and Facebook. By addressing such different cases, the book will move the field of crisis communication in social media beyond individual studies towards providing knowledge which is valid across situations….(More)”.

Future Politics: Living Together in a World Transformed by Tech


Book by Jamie Susskind: “Future Politics confronts one of the most important questions of our time: how will digital technology transform politics and society? The great political debate of the last century was about how much of our collective life should be determined by the state and what should be left to the market and civil society. In the future, the question will be how far our lives should be directed and controlled by powerful digital systems — and on what terms?

Jamie Susskind argues that rapid and relentless innovation in a range of technologies — from artificial intelligence to virtual reality — will transform the way we live together. Calling for a fundamental change in the way we think about politics, he describes a world in which certain technologies and platforms, and those who control them, come to hold great power over us. Some will gather data about our lives, causing us to avoid conduct perceived as shameful, sinful, or wrong. Others will filter our perception of the world, choosing what we know, shaping what we think, affecting how we feel, and guiding how we act. Still others will force us to behave certain ways, like self-driving cars that refuse to drive over the speed limit.

Those who control these technologies — usually big tech firms and the state — will increasingly control us. They will set the limits of our liberty, decreeing what we may do and what is forbidden. Their algorithms will resolve vital questions of social justice, allocating social goods and sorting us into hierarchies of status and esteem. They will decide the future of democracy, causing it to flourish or decay.

A groundbreaking work of political analysis, Future Politics challenges readers to rethink what it means to be free or equal, what it means to have power or property, what it means for a political system to be just or democratic, and proposes ways in which we can — and must — regain control….(More)”.

Google is using AI to predict floods in India and warn users


James Vincent at The Verge: “For years Google has warned users about natural disasters by incorporating alerts from government agencies like FEMA into apps like Maps and Search. Now, the company is making predictions of its own. As part of a partnership with the Central Water Commission of India, Google will now alert users in the country about impending floods. The service is currently only available in the Patna region, with the first alert going out earlier this month.

As Google’s engineering VP Yossi Matias outlines in a blog post, these predictions are being made using a combination of machine learning, rainfall records, and flood simulations.

“A variety of elements — from historical events, to river level readings, to the terrain and elevation of a specific area — feed into our models,” writes Matias. “With this information, we’ve created river flood forecasting models that can more accurately predict not only when and where a flood might occur, but the severity of the event as well.”
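Matias’s post names the inputs but not the model, so the following is a deliberately crude, invented sketch of how features like his might combine into a single risk score. The field names, thresholds, and weights are all hypothetical; the real system uses trained machine-learning models and physics-based flood simulations, not hand-set weights.

```python
from dataclasses import dataclass

@dataclass
class GaugeReading:
    """Hypothetical inputs echoing the kinds of elements the post names."""
    river_level_m: float           # current river level reading
    flood_stage_m: float           # level at which flooding begins
    rainfall_24h_mm: float         # recent upstream rainfall
    basin_elevation_drop_m: float  # terrain/elevation proxy

def flood_risk(r: GaugeReading) -> float:
    """Crude 0-1 risk score; weights are invented for illustration."""
    level_ratio = r.river_level_m / r.flood_stage_m
    rain_factor = min(r.rainfall_24h_mm / 100.0, 1.0)
    terrain_factor = min(r.basin_elevation_drop_m / 50.0, 1.0)
    score = 0.6 * level_ratio + 0.3 * rain_factor + 0.1 * terrain_factor
    return min(score, 1.0)

reading = GaugeReading(river_level_m=4.5, flood_stage_m=5.0,
                       rainfall_24h_mm=120.0, basin_elevation_drop_m=20.0)
if flood_risk(reading) > 0.7:
    print("issue flood alert")
```

The point of the sketch is only the architecture: heterogeneous signals (gauge readings, rainfall, terrain) are fused into one per-area score, and an alert fires when that score crosses a threshold — which is what lets the system predict not just when and where, but how severe, a flood may be.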

The US tech giant announced its partnership with the Central Water Commission back in June. The two organizations agreed to share technical expertise and data to work on the predictions, with the Commission calling the collaboration a “milestone in flood management and in mitigating the flood losses.” Such warnings are particularly important in India, where 20 percent of the world’s flood-related fatalities are estimated to occur….(More)”.

Mission Failure


Matthew Sawh at Stanford Social Innovation Review: “Exposing the problems of policy schools can ignite new ways to realize the mission of educating public servants in the 21st century….

Public policy schools were founded with the aim of educating public servants with academic insights that could be applied to government administration. And while these programs have adapted the tools and vocabularies of the Reagan Revolution, such as the use of privatization and the rhetoric of competition, they have not come to terms with Reagan’s philosophical legacy, which still describes our contemporary political culture. To do so, public policy schools need to acknowledge that the public perceives the government as the problem, not the solution, to society’s ills. Today, these programs need to ask how decision makers should improve the design of their organizations, their decision-making processes, and their curriculum in order to address the public’s skeptical mindset.

I recently attended a public policy school, Columbia University’s School of International and Public Affairs (SIPA), hoping to learn how to bridge the distrust between public servants and citizens, and to help forge bonds between bureaucracies and voters who feel ignored by their government officials. Instead of building bridges across these divides, the curriculum of my policy program reinforced them—training students to navigate bureaucratic silos in our democracy. Of course, public policy students go to work in the government we have, not the government we wish we had—but that’s the point. These schools should lead the national conversation and equip their graduates to think and act beyond the divides between the governing and the governed.

Most US public policy programs require a core set of courses, including macroeconomics, microeconomics, statistics, and organizational management. SIPA has broader requirements, including a financial management course, a client consulting workshop, and an internship. Both sets of core curricula undervalue the intrapersonal and interpersonal elements of leadership, particularly politics, which I define as persuasion, particularly within groups and institutions.

Public service is more than developing smart ideas; it entails the ability to marshal the financial, political, and organizational support to make those ideas resonate with the public and take effect in government policy. Unfortunately, by giving short shrift to the intrapersonal and institutional contexts of real changemaking, these programs aren’t adequately training early-career professionals to implement their ideas.

Within the core curriculum, the story of change is told as the product of processes wherein policymakers can know the rational expectations of the public. But the people themselves have concerns beyond those perceived by policymakers. As public servants, our success depends on our ability to meet people where they are, rather than where we suppose they should be.  …

Public policy schools must reach a consensus on core identity questions: Who is best placed to lead a policy school? What are their aims in crafting a professional class? What exactly should a policy degree mean in the wider world? The problem is that these programs are meant to teach students not only the science of good government, but also the human art of good governance.

Curricula based on an outdated sense of both the political process and advocacy are a predominant feature of policy programs. Instead, core courses should cover how to advocate effectively in the new political world of the 21st century. Students should learn how to raise money for a political campaign; how to lobby; how to make an advertising budget; and how to purchase airtime in the digital age…(More)”

Making Wage Data Work: Creating a Federal Resource for Evidence and Transparency


Christina Pena at the National Skills Coalition: “Administrative data on employment and earnings, commonly referred to as wage data or wage records, can be used to assess the labor market outcomes of workforce, education, and other programs, providing policymakers, administrators, researchers, and the public with valuable information. However, there is no single, readily accessible federal source of wage data that covers all workers. Noting the importance of employment and earnings data to decision makers, the Commission on Evidence-Based Policymaking called for the creation of a single federal source of wage data for statistical purposes and evaluation. It recommended three options for further exploration: expanding access to systems that already exist at the U.S. Census Bureau or the U.S. Department of Health and Human Services (HHS), or creating a new database at the U.S. Department of Labor (DOL).

This paper reviews current coverage and allowable uses, as well as federal and state actions required to make each option viable as a single federal source of wage data that can be accessed by government agencies and authorized researchers. Congress and the President, in conjunction with relevant federal and state agencies, should develop one or more of those options to improve wage information for multiple purposes. Although not assessed in the following review, financial as well as privacy and security considerations would influence the viability of each scenario. Moreover, if a system like the Commission-recommended National Secure Data Service for sharing data between agencies comes to fruition, then a wage system might require additional changes to work with the new service….(More)”