The Tyranny of Metrics


Book by Jerry Z. Muller on “How the obsession with quantifying human performance threatens our schools, medical care, businesses, and government…”

Today, organizations of all kinds are ruled by the belief that the path to success is quantifying human performance, publicizing the results, and dividing up the rewards based on the numbers. But in our zeal to instill the evaluation process with scientific rigor, we’ve gone from measuring performance to fixating on measuring itself. The result is a tyranny of metrics that threatens the quality of our lives and most important institutions. In this timely and powerful book, Jerry Muller uncovers the damage our obsession with metrics is causing–and shows how we can begin to fix the problem.

Filled with examples from education, medicine, business and finance, government, the police and military, and philanthropy and foreign aid, this brief and accessible book explains why the seemingly irresistible pressure to quantify performance distorts and distracts, whether by encouraging “gaming the stats” or “teaching to the test.” That’s because what can and does get measured is not always worth measuring, may not be what we really want to know, and may draw effort away from the things we care about. Along the way, we learn why paying for measured performance doesn’t work, why surgical scorecards may increase deaths, and much more. But metrics can be good when used as a complement to—rather than a replacement for—judgment based on personal experience, and Muller also gives examples of when metrics have been beneficial…(More)”.

Democracy and the Internet: A Retrospective


Essay by Charles Ess at Javnost: “In the late 1980s and early 1990s, the emerging internet and World Wide Web inspired both popular and scholarly optimism that these new communication technologies would inevitably “democratise”—in local organisations, larger civic and political institutions, and, indeed, the world itself. The notions of deliberative democracy in an electronic public sphere at work here, inspired especially by Habermas and by feminist theory, are subsequently challenged, however, by theoretical and empirical developments such as the Arab Winter and platform imperialism. Nonetheless, a range of other developments—from Edward Snowden to the emergence of virtue ethics and slow tech as increasingly central to the design of ICTs—argue that resistance in the name of democracy and emancipation is not futile….(More)”.

Mobile Devices as Stigmatizing Security Sensors: The GDPR and a Future of Crowdsourced ‘Broken Windows’


Paper by Oskar Josef Gstrein and Gerard Jan Ritsema van Eck: “Various smartphone apps and services are available which encourage users to report where and when they feel they are in an unsafe or threatening environment. This user-generated content may be used to build datasets, which can show areas that are considered ‘bad,’ and to map out ‘safe’ routes through such neighbourhoods.

Despite certain advantages, this data inherently carries the danger that streets or neighbourhoods become stigmatized and that existing prejudices are reinforced. Such stigmas might also result in negative consequences for property values and businesses, causing irreversible damage to certain parts of a municipality. Overcoming such an “evidence-based stigma” — even if based on biased, unreviewed, outdated, or inaccurate data — becomes nearly impossible and raises the question of how such data should be managed….(More)”.
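
To make the mechanism concrete, here is a minimal sketch of how such crowdsourced reports could be aggregated into a neighbourhood “heatmap.” This is not code from the paper: the grid size, the decay half-life, and the report format are all illustrative assumptions.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative parameters, not from the paper.
GRID = 0.005                    # cell size in degrees (~500 m at mid-latitudes)
HALF_LIFE = timedelta(days=30)  # older reports count for less

def cell(lat: float, lon: float) -> tuple:
    """Snap a coordinate to its grid cell."""
    return (round(lat / GRID), round(lon / GRID))

def heatmap(reports, now: datetime) -> dict:
    """Sum age-decayed report weights per grid cell.

    `reports` is an iterable of (lat, lon, timestamp) tuples,
    each one a user's "I feel unsafe here" report.
    """
    scores = defaultdict(float)
    for lat, lon, ts in reports:
        age = now - ts
        weight = 0.5 ** (age / HALF_LIFE)  # exponential decay with age
        scores[cell(lat, lon)] += weight
    return scores
```

Even with age decay, the resulting map reflects who chooses to report, not how dangerous a place actually is, which is exactly the feedback loop between perception and “evidence” that the authors warn about.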

Governments are not startups


Ramy Ghorayeb at Medium: “…Why can’t governments move fast like the private industry? We have seen the rise of big corporations and startups that adapt and evolve with the needs of their customers. Why can’t governments adapt to the needs of their populations? Why is it so slow?

The truth is, innovating in the public sector cannot and should not work like innovating in the private one. Startups and corporations are about maximizing their internalities, while public administrations are also about minimizing externalities.

A straightforward example: let’s imagine the US authorizes online voting, in addition to physical voting, for the next presidential election. Obviously, it is a good way to incentivize people to vote. You will be able to vote from anywhere at any time, and, more importantly, the cost to make one hundred people vote will be the same as for one thousand or one million.

But on the other side, you are favoring the population with easy access to the Internet, meaning the middle and upper classes. What’s more, you are also favoring the younger generations over the older ones.
These populations have different known political opinions. Ultimately, you are deliberately modifying the distribution of voting power in the country. It is not necessarily a bad or a good thing (keeping only physical voting also favors a specific demographic segment), but there are a lot of issues that need to be worked through thoroughly before making any change to the democratic balance. I’d like to call this the participatory bias.

This participatory bias is the reason why the public sector will always lag behind in adopting technology.

On the private side, when a business wants to work on a new product, it will focus only on its customers. The goal of a startup is precisely to find a specific segment of the population with its own needs and problems, a niche, and to test innovative solutions that improve that segment’s experience and optimize its acquisition and retention. In other words, it will maximize the internalities.
But the public side needs to look at the externalities that its new products can create. It cannot isolate one population; it has to consider the negative effects on everyone else. Moreover, like a big corporation, it cannot experiment and fail the way a startup does, because it has to preserve its reputation and legacy of trust.

The situation isn’t locked, though. Thanks to the civic tech ecosystem, governments have found a way to externalize innovation and to learn from experiments, failures, and successes without running them themselves. Startups and labs are handling the difficult role of inventor and are showing, iteration by iteration, how tech can serve citizens. More interestingly, they are also showing that they are not threatened when the public sector replicates their work. In fact, they are finding their complementarity….(More)”

Can scientists learn to make ‘nature forecasts’ just as we forecast the weather?


 at The Conversation: “We all take weather forecasts for granted, so why isn’t there a ‘nature forecast’ to answer these questions? Enter the new scientific field of ecological forecasting. Ecologists have long sought to understand the natural world, but only recently have they begun to think systematically about forecasting.

Much of the current research in ecological forecasting is focused on long-term projections. It considers questions that play out over decades to centuries, such as how species may shift their ranges in response to climate change, or whether forests will continue to take up carbon dioxide from the atmosphere.

However, in a new article that I co-authored with 18 other scientists from universities, private research institutes and the U.S. Geological Survey, we argue that focusing on near-term forecasts over spans of days, seasons and years will help us better understand, manage and conserve ecosystems. Developing this ability would be a win-win for both science and society….

Big data is driving many of the advances in ecological forecasting. Today ecologists have orders of magnitude more data than they did just a decade ago, thanks to sustained public funding for basic science and environmental monitoring. This investment has given us better sensors, satellites and organizations such as the National Ecological Observatory Network, which collects high-quality data from 81 field sites across the United States and Puerto Rico. At the same time, cultural shifts across funding agencies, research networks and journals have made that data more open and available.

Digital technologies make it possible to access this information more quickly than in the past. Field notebooks have given way to tablets and cell networks that can stream new data into supercomputers in real time. Computing advances allow us to build better models and use more sophisticated statistical methods to produce forecasts….(More)”.
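
The iterative cycle behind near-term forecasting (project the model forward, collect new observations, update the estimate) can be sketched in a few lines. This toy example is not from the article; the random-walk model and all numbers are illustrative assumptions.

```python
# Toy sketch of the "forecast, observe, update" loop behind
# near-term ecological forecasting.

def forecast_step(mean, var, process_var):
    """Project the state forward one step under a random-walk model:
    the mean carries over, the uncertainty grows."""
    return mean, var + process_var

def assimilate(mean, var, obs, obs_var):
    """Kalman-style update: blend the forecast with a new observation,
    weighting each by its (inverse) uncertainty."""
    gain = var / (var + obs_var)
    return mean + gain * (obs - mean), (1 - gain) * var

mean, var = 10.0, 4.0           # e.g., an initial leaf-area estimate
for obs in [10.4, 11.1, 10.8]:  # simulated daily sensor readings
    mean, var = forecast_step(mean, var, process_var=0.5)
    mean, var = assimilate(mean, var, obs, obs_var=1.0)
    print(f"updated estimate: {mean:.2f} ± {var ** 0.5:.2f}")
```

Real ecological forecasts assimilate many variables and track multiple sources of uncertainty, but the forecast-observe-update rhythm is the same.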

Crowdfunding site aims to get homeless back into work


Springwise: “Beam is a new crowdfunding website launched to help homeless people in London get back into work. The idea behind the website is that people donate money online to homeless individuals; the donations are then used to fund the qualifications and training those individuals need to become fully employed and, in turn, to secure their own accommodation.

A new member of Beam is allocated a Member Manager, who works with them to find the best employment avenue based on their interests and experience. The Member Manager helps them prepare their fundraising campaign, and even helps members with childcare costs while they’re training.

Beam was founded by Alex Stephany, who is also board advisor of the car parking app JustPark. It launched with a pilot scheme in September 2017 and has the full support of Sadiq Khan, the Mayor of London, and the innovation foundation Nesta.

There are many ways that technology is helping the homeless. Recently, blockchain tech has been used to connect the homeless in New York with valuable services. And new keycard-accessed vending machines are to be installed in safe spaces to provide the homeless with 24-hour access to essential items….(More)”.

The Modern Research Data Portal: a design pattern for networked, data-intensive science


 et al in PeerJ Computer Science: “We describe best practices for providing convenient, high-speed, secure access to large data via research data portals. We capture these best practices in a new design pattern, the Modern Research Data Portal, that disaggregates the traditional monolithic web-based data portal to achieve orders-of-magnitude increases in data transfer performance, support new deployment architectures that decouple control logic from data storage, and reduce development and operations costs.

We introduce the design pattern; explain how it leverages high-performance data enclaves and cloud-based data management services; review representative examples at research laboratories and universities, including both experimental facilities and supercomputer sites; describe how to leverage Python APIs for authentication, authorization, data transfer, and data sharing; and use coding examples to demonstrate how these APIs can be used to implement a range of research data portal capabilities. Sample code at a companion web site, https://docs.globus.org/mrdp, provides application skeletons that readers can adapt to realize their own research data portals….(More)”.
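
The companion site’s skeletons build on the Globus platform, and the flavor of the Python APIs the paper describes can be conveyed in a condensed sketch like the one below. The client ID, endpoint UUIDs, and paths are placeholders to be replaced with your own; see https://docs.globus.org/mrdp for the full application skeletons.

```python
import globus_sdk

# Placeholder credentials and endpoints; replace with real values.
CLIENT_ID = "YOUR-NATIVE-APP-CLIENT-ID"

# Authentication and authorization: interactive OAuth2 flow for a native app.
auth = globus_sdk.NativeAppAuthClient(CLIENT_ID)
auth.oauth2_start_flow()
print("Log in at:", auth.oauth2_get_authorize_url())
tokens = auth.oauth2_exchange_code_for_tokens(input("Auth code: ").strip())
transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

# Data transfer: move a dataset between two endpoints, with the portal's
# control logic decoupled from the storage that actually holds the data.
tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token)
)
tdata = globus_sdk.TransferData(
    tc, "SOURCE-ENDPOINT-UUID", "DEST-ENDPOINT-UUID", label="portal download"
)
tdata.add_item("/datasets/example/", "/~/example/", recursive=True)
task = tc.submit_transfer(tdata)
print("Submitted transfer, task id:", task["task_id"])
```

The point of the design pattern is visible even in this sketch: the portal code only orchestrates authentication and issues transfer requests, while the bulk data moves directly between high-performance endpoints.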

Studying Migrant Assimilation Through Facebook Interests


Antoine Dubois, Emilio Zagheni, Kiran Garimella, and Ingmar Weber at arXiv: “Migrants’ assimilation is a major challenge for European societies, in part because of the sudden surge of refugees in recent years and in part because of long-term demographic trends. In this paper, we use Facebook’s data for advertisers to study the levels of assimilation of Arabic-speaking migrants in Germany, as seen through the interests they express online. Our results indicate a gradient of assimilation along demographic lines, language spoken and country of origin. Given the difficulty of collecting timely migration data, in particular for traits related to cultural assimilation, the methods that we develop and the results that we provide open new lines of research that computational social scientists are well-positioned to address….(More)”.
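
The abstract does not spell out the measure used; as a loose illustration of the general approach, one could compare a migrant group’s interest profile against a reference group’s, using audience-size estimates from Facebook’s advertising tools. Everything below (the interest names, counts, and the cosine-similarity proxy) is an assumption made for illustration, not the paper’s actual methodology.

```python
from math import sqrt

def shares(audience: dict) -> dict:
    """Normalize raw audience counts into interest shares."""
    total = sum(audience.values())
    return {k: v / total for k, v in audience.items()}

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse interest-share vectors."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Made-up audience estimates for two groups in the same city.
migrants = shares({"football": 120, "ramadan": 90, "cooking": 60})
locals_ = shares({"football": 500, "oktoberfest": 300, "cooking": 200})
print(f"assimilation proxy: {cosine(migrants, locals_):.2f}")
```

A score near 1 would indicate closely aligned interest profiles; the paper itself should be consulted for the measure the authors actually use.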

Rights-Based and Tech-Driven: Open Data, Freedom of Information, and the Future of Government Transparency


Beth Noveck at the Yale Human Rights and Development Journal: “Open data policy mandates that government proactively publish its data online for the public to reuse. It is a radically different approach to transparency than traditional right-to-know strategies as embodied in Freedom of Information Act (FOIA) legislation in that it involves ex ante rather than ex post disclosure of whole datasets. Although both open data and FOIA deal with information sharing, the normative essence of open data is participation rather than litigation. By fostering public engagement, open data shifts the relationship between state and citizen from a monitorial to a collaborative one, centered around using information to solve problems together. This Essay explores the theory and practice of open data in comparison to FOIA and highlights its uses as a tool for advancing human rights, saving lives, and strengthening democracy. Although open data undoubtedly builds upon the fifty-year legal tradition of the right to know about the workings of one’s government, open data does more than advance government accountability. Rather, it is a distinctly twenty-first century governing practice borne out of the potential of big data to help solve society’s biggest problems. Thus, this Essay charts a thoughtful path toward a twenty-first century transparency regime that takes advantage of and blends the strengths of open data’s collaborative and innovation-centric approach and the adversarial and monitorial tactics of freedom of information regimes….(More)”.

How AI Could Help the Public Sector


Emma Martinho-Truswell in the Harvard Business Review: “A public school teacher grading papers faster is a small example of the wide-ranging benefits that artificial intelligence could bring to the public sector. AI could be used to make government agencies more efficient, to improve the job satisfaction of public servants, and to increase the quality of services offered. Talent and motivation are wasted on routine tasks when they could be applied to more creative ones.

Applications of artificial intelligence to the public sector are broad and growing, with early experiments taking place around the world. In addition to education, public servants are using AI to help them make welfare payments and immigration decisions, detect fraud, plan new infrastructure projects, answer citizen queries, adjudicate bail hearings, triage health care cases, and establish drone paths.  The decisions we are making now will shape the impact of artificial intelligence on these and other government functions. Which tasks will be handed over to machines? And how should governments spend the labor time saved by artificial intelligence?

So far, the most promising applications of artificial intelligence use machine learning, in which a computer program learns and improves its own answers to a question by creating and iterating algorithms from a collection of data. This data often comes in enormous quantities and from many sources, and a machine learning algorithm can find new connections among data that humans might not have expected. IBM’s Watson, for example, is a treatment-recommendation bot that sometimes finds treatments human doctors might not have considered or known about.

A machine learning program may be better, cheaper, faster, or more accurate than humans at tasks that involve lots of data, complicated calculations, or repetition governed by clear rules. Those in public service, and in many other big organizations, may recognize part of their job in that description. The very fact that government workers are often following a set of rules — a policy or set of procedures — already presents many opportunities for automation.

To be useful, a machine learning program does not need to be better than a human in every case. In my work, we expect that much of the “low hanging fruit” of government use of machine learning will be as a first line of analysis or decision-making. Human judgment will then be critical to interpret results, manage harder cases, or hear appeals.
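
As a hypothetical sketch of that division of labor, a program might decide only the cases it is confident about and route everything else to a human reviewer. The model choice, the features, and the 0.9 confidence threshold below are illustrative assumptions, not anything from the article.

```python
from dataclasses import dataclass

from sklearn.linear_model import LogisticRegression

@dataclass
class Case:
    case_id: str
    features: list  # e.g., numeric attributes of a benefits claim

def triage(model, cases, threshold=0.9):
    """Decide high-confidence cases automatically; escalate the rest to humans."""
    auto, escalate = [], []
    for case in cases:
        confidence = model.predict_proba([case.features])[0].max()
        (auto if confidence >= threshold else escalate).append(case)
    return auto, escalate

# Train on past human decisions (toy data), then triage new cases.
model = LogisticRegression().fit([[0, 1], [1, 0], [0, 2], [2, 0]], [0, 1, 0, 1])
auto, escalate = triage(model, [Case("A-17", [0, 3]), Case("B-42", [1, 1])])
print("decided automatically:", [c.case_id for c in auto])
print("sent to human review:", [c.case_id for c in escalate])
```

The threshold is a policy lever: raising it sends more cases to humans, trading efficiency for oversight.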

When the work of public servants can be done in less time, a government might reduce its staff numbers and return the money saved to taxpayers — and I am sure that some governments will pursue that option. But it’s not necessarily the one I would recommend. Governments could instead choose to invest in the quality of their services. They can redirect workers’ time toward more rewarding work that requires lateral thinking, empathy, and creativity — all things at which humans continue to outperform even the most sophisticated AI program….(More)”.