Understanding Institutions: The Science and Philosophy of Living Together


New book by Francesco Guala: “Understanding Institutions proposes a new unified theory of social institutions that combines the best insights of philosophers and social scientists who have written on this topic. Francesco Guala presents a theory that combines the features of three influential views of institutions: as equilibria of strategic games, as regulative rules, and as constitutive rules.

Guala explains key institutions like money, private property, and marriage, and develops a much-needed unification of equilibrium- and rules-based approaches. Although he uses game theory concepts, the theory is presented in a simple, clear style that is accessible to a wide audience of scholars working in different fields. Outlining and discussing various implications of the unified theory, Guala addresses venerable issues such as reflexivity, realism, Verstehen, and fallibilism in the social sciences. He also critically analyses the theory of “looping effects” and “interactive kinds” defended by Ian Hacking, and asks whether it is possible to draw a demarcation between social and natural science using the criteria of causal and ontological dependence. Focusing on current debates about the definition of marriage, Guala shows how these abstract philosophical issues have important practical and political consequences.

Moving beyond specific cases to general models and principles, Understanding Institutions offers new perspectives on what institutions are, how they work, and what they can do for us….(More)”

What Governments Can Learn From Airbnb And the Sharing Economy


In Fortune: “….Despite some regulators’ fears, the sharing economy may not result in the decline of regulation but rather in its opposite, providing a basis upon which society can develop more rational, ethical, and participatory models of regulation. But what regulation looks like, as well as who actually creates and enforces the regulation, is also bound to change.

There are three emerging models – peer regulation, self-regulatory organizations, and data-driven delegation – that promise a regulatory future for the sharing economy best aligned with society’s interests. In the adapted book excerpt that follows, I explain how the third of these approaches, of delegating enforcement of regulations to companies that store critical data on consumers, can help mitigate some of the biases Airbnb guests may face, and why this is a superior alternative to the “open data” approach of transferring consumer information to cities and state regulators.

Consider a different problem: collecting hotel occupancy taxes from hundreds of thousands of Airbnb hosts rather than from a handful of corporate hotel chains. The delegation of tax collection to Airbnb, something a growing number of cities are experimenting with, has a number of advantages. It is likely to yield higher tax revenues and greater compliance than a system where hosts are required to register directly with the government, which is something occasional hosts seem reluctant to do. It also sidesteps privacy concerns resulting from mandates that digital platforms like Airbnb turn over detailed user data to the government. There is also significant opportunity for the platform to build credibility as it starts to take on quasi-governmental roles like this.

There is yet another advantage, and the one I believe will be the most significant in the long run. It asks the platform to leverage its own data to ensure compliance with a set of laws, delegating enforcement responsibility to the platform itself. You might say that the task in question here — computing tax owed, collecting, and remitting it — is technologically trivial. True. But I like this structure because of the potential it represents. It could be a precursor for much more exciting delegated possibilities.
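
A minimal Python sketch of that “technologically trivial” task; the tax rate, booking fields, and city grouping below are hypothetical, not Airbnb’s actual remittance logic:

```python
from collections import defaultdict

# Hypothetical flat occupancy-tax rate; real rates vary by city and state.
OCCUPANCY_TAX_RATE = 0.14

def tax_owed(nightly_rate: float, nights: int) -> float:
    """Occupancy tax owed on a single booking."""
    return nightly_rate * nights * OCCUPANCY_TAX_RATE

def remittances(bookings: list[dict]) -> dict[str, float]:
    """Total tax collected per jurisdiction, for periodic remittance."""
    totals: dict[str, float] = defaultdict(float)
    for b in bookings:
        totals[b["city"]] += tax_owed(b["nightly_rate"], b["nights"])
    return {city: round(total, 2) for city, total in totals.items()}

bookings = [
    {"city": "San Francisco", "nightly_rate": 120.0, "nights": 3},
    {"city": "San Francisco", "nightly_rate": 90.0, "nights": 2},
    {"city": "Portland", "nightly_rate": 75.0, "nights": 4},
]
print(remittances(bookings))  # {'San Francisco': 75.6, 'Portland': 42.0}
```

The point, as the author notes, is not the arithmetic but the delegation pattern: the platform already holds the booking data, so it can compute and remit in aggregate without handing per-host records to the government.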

For a couple of decades now, companies of different kinds have been mining the large sets of “data trails” customers provide through their digital interactions. This generates insights of business and social importance. One such effort we are all familiar with is credit card fraud detection. When an unusual pattern of activity is detected, you get a call from your bank’s security team. Sometimes your card is blocked temporarily. The enthusiasm of these digital security systems is sometimes a nuisance, but it stems from your credit card company using sophisticated machine learning techniques to identify patterns that prior experience has told it are associated with a stolen card. It saves billions of dollars in consumer and corporate funds by detecting and blocking fraudulent activity swiftly.
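
Banks do not publish their fraud models, but the general technique described here (learning what normal activity looks like, then flagging deviations) can be sketched with an off-the-shelf anomaly detector. Every feature and value below is illustrative, not any bank’s actual system:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Illustrative transaction features: [amount ($), hour of day, km from home].
# Real systems use far richer signals (merchant codes, velocity, device, ...).
normal_activity = np.column_stack([
    rng.normal(60, 20, 500),   # typical purchase amounts
    rng.normal(14, 3, 500),    # mostly daytime activity
    rng.normal(5, 2, 500),     # close to home
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_activity)

# A large 3 a.m. purchase 800 km from home looks nothing like the history.
suspicious = np.array([[950.0, 3.0, 800.0]])
print(detector.predict(suspicious))  # [-1] = anomaly: flag, call the customer
```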

A more recent, visible example of the power of mining large data sets of customer interaction came in 2008, when Google engineers announced that they could predict flu outbreaks using data collected from Google searches, and track the spread of outbreaks in real time, providing information well ahead of what was available from the Centers for Disease Control and Prevention’s (CDC) own tracking systems. The Google system’s performance deteriorated after a couple of years, but its impact on public perception of what might be possible using “big data” was immense.
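
At its core, the system Google described was a regression from the frequencies of flu-related search queries to officially reported illness rates. Below is a schematic reconstruction on invented data; the real model screened tens of millions of candidate queries:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Invented data: weekly frequencies of 5 flu-related queries over two years,
# and the surveillance-reported influenza-like-illness (ILI) rate per week.
weeks = 104
query_freqs = rng.random((weeks, 5))   # e.g. "fever", "flu remedy", ...
ili_rate = (query_freqs @ np.array([0.5, 0.3, 0.1, 0.0, 0.0])
            + rng.normal(0, 0.02, weeks))   # two queries are pure noise

# Fit on historical weeks, then "nowcast" recent ones: search data is
# available immediately, while official surveillance lags by a week or two.
model = LinearRegression().fit(query_freqs[:90], ili_rate[:90])
print(model.predict(query_freqs[90:])[:3])
```

The system’s later deterioration is often attributed to exactly this fragility: the relationship between queries and illness drifted as search behavior changed, while the fitted weights stayed put.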

It seems highly unlikely that such a system would have emerged if Google had been asked to hand over anonymized search data to the CDC. In fact, there would probably have been widespread public backlash on privacy grounds. Besides, this capability emerged organically from within Google partly because Google has one of the highest concentrations of computer science and machine learning talent in the world.

Similar approaches hold great promise as a regulatory approach for sharing economy platforms. Consider the issue of discriminatory practices. There has long been anecdotal evidence that some yellow cabs in New York discriminate against some nonwhite passengers. There have been similar concerns that such behavior may start to manifest on ridesharing platforms and in other peer-to-peer markets for accommodation and labor services.

For example, a 2014 study by Benjamin Edelman and Michael Luca of Harvard suggested that African American hosts might have lower pricing power than white hosts on Airbnb. While the study did not conclusively establish that the difference is due to guests discriminating against African American hosts, a follow-up study suggested that guests with “distinctively African American names” were less likely to receive favorable responses for their requests to Airbnb hosts. This research raises a red flag about the need for vigilance as the lines between personal and professional blur.

One solution would be to apply machine-learning techniques to identify patterns associated with discriminatory behavior. No doubt, many platforms are already using such systems….(More)”
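
A first pass at detecting such patterns need not involve machine learning at all: a plain statistical audit comparing acceptance rates across guest groups can flag gaps too large to be chance. A hypothetical sketch (counts and groups invented; a real audit would also control for listing, price, and timing):

```python
from statsmodels.stats.proportion import proportions_ztest

# Invented aggregates: booking requests accepted vs. sent, per guest group.
accepted = [380, 310]   # group A, group B
requests = [500, 500]

z_stat, p_value = proportions_ztest(count=accepted, nobs=requests)
rate_a, rate_b = accepted[0] / requests[0], accepted[1] / requests[1]
print(f"acceptance: {rate_a:.0%} vs {rate_b:.0%} (p = {p_value:.2g})")

if p_value < 0.01:
    print("Gap unlikely to be chance alone; escalate for human review.")
```

Machine-learning versions of the same audit would condition on listing attributes to isolate the effect of the guest’s group, which is where platform-held data gives this kind of delegated regulation its real advantage.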

Solving All the Wrong Problems


Allison Arieff in the New York Times: “Every day, innovative companies promise to make the world a better place. Are they succeeding? Here is just a sampling of the products, apps and services that have come across my radar in the last few weeks:

A service that sends someone to fill your car with gas.

A service that sends a valet on a scooter to you, wherever you are, to park your car.

A service that will film anything you desire with a drone….

We are overloaded daily with new discoveries, patents and inventions all promising a better life, but that better life has not been forthcoming for most. In fact, the bulk of the above list targets a very specific (and tiny!) slice of the population. As one colleague in tech explained it to me recently, for most people working on such projects, the goal is basically to provide for themselves everything that their mothers no longer do….When everything is characterized as “world-changing,” is anything?

Clay Tarver, a writer and producer for the painfully on-point HBO comedy “Silicon Valley,” said in a recent New Yorker article: “I’ve been told that, at some of the big companies, the P.R. departments have ordered their employees to stop saying ‘We’re making the world a better place,’ specifically because we have made fun of that phrase so mercilessly. So I guess, at the very least, we’re making the world a better place by making these people stop saying they’re making the world a better place.”

O.K., that’s a start. But the impulse to conflate toothbrush delivery with Nobel Prize-worthy good works is not just a bit cultish, it’s currently a wildfire burning through the so-called innovation sector. Products and services are designed to “disrupt” market sectors (a.k.a. bringing to market things no one really needs) more than to solve actual problems, especially those problems experienced by what the writer C. Z. Nnaemeka has described as “the unexotic underclass” — single mothers, the white rural poor, veterans, out-of-work Americans over 50 — who, she explains, have the “misfortune of being insufficiently interesting.”

If the most fundamental definition of design is to solve problems, why are so many people devoting so much energy to solving problems that don’t really exist? How can we get more people to look beyond their own lived experience?

In “Design: The Invention of Desire,” a thoughtful and necessary new book by the designer and theorist Jessica Helfand, the author brings to light an amazing kernel: “hack,” a term so beloved in Silicon Valley that it’s painted on the courtyard of the Facebook campus and is visible from planes flying overhead, is also prison slang for “horse’s ass carrying keys.”

To “hack” is to cut, to gash, to break. It proceeds from the belief that nothing is worth saving, that everything needs fixing. But is that really the case? Are we fixing the right things? Are we breaking the wrong ones? Is it necessary to start from scratch every time?…

Ms. Helfand calls for a deeper embrace of personal vigilance: “Design may provide the map,” she writes, “but the moral compass that guides our personal choices resides permanently within us all.”

Can we reset that moral compass? Maybe we can start by not being a bunch of hacks….(More)”

Open Data in Southeast Asia


Book by Manuel Stagars: “This book explores the power of greater openness, accountability, and transparency in digital information and government data for the nations of Southeast Asia. The author demonstrates that, although the term “open data” seems to be self-explanatory, it involves an evolving ecosystem of complex domains. Through empirical case studies, this book explains how governments in the ASEAN may harvest the benefits of open data to maximize their productivity, efficiency and innovation. The book also investigates how increasing digital divides in the population, boundaries to civil society, and shortfalls in civil and political rights threaten to arrest open data in early development, which may hamper post-2015 development agendas in the region. With robust open data policies and clear roadmaps, member states of the ASEAN can harvest the promising opportunities of open data in their particular developmental, institutional and legal settings. Governments, policy makers, entrepreneurs and academics will gain a clearer understanding of the factors that enable open data from this timely research….(More)”

Due Diligence? We need an app for that


Ken Banks at kiwanja.net: “The ubiquity of mobile phones, the reach of the Internet, the sheer number of problems facing the planet, competitions and challenges galore, pots of money and strong media interest in tech-for-good projects have today created the perfect storm. Not a day goes by without the release of an app hoping to solve something, and the fact so many people are building so many apps to fix so many problems can only be a good thing. Right?

The only problem is this. It’s become impossible to tell good from bad, even real from fake. It’s something of a Wild West out there. So it was no surprise to see this happening recently. Quoting The Guardian:

An app which purported to offer aid to refugees lost in the Mediterranean has been pulled from Apple’s App Store after it was revealed as a fake. The I Sea app, which also won a Bronze medal at the Cannes Lions conference on Monday night, presented itself as a tool to help report refugees lost at sea, using real-time satellite footage to identify boats in trouble and highlighting their location to the Malta-based Migrant Offshore Aid Station (Moas), which would provide help.

In fact, the app did nothing of the sort. Rather than presenting real-time satellite footage – a difficult and expensive task – it instead simply shows a portion of a static, unchanging image. And while it claims to show the weather in the southern Mediterranean, that too isn’t that accurate: it’s for Western Libya.

The worry isn’t only that someone would decide to build a fake app which ‘tackles’ such an emotive subject, but the fact that this particular app won an award and received favourable press. Wired, Mashable, the Evening Standard and Reuters all spoke positively about it. Did no-one check that it did what it said it did?

This whole episode reminds me of something Joel Selanikio wrote in his contributing chapter to two books I’ve recently edited and published. In his chapters, which touch on his work on the Magpi data collection tool in addition to some of the challenges facing the tech-for-development community, Joel wrote:

In going over our user activity logs for the online Magpi app, I quickly realised that no-one from any of our funding organisations was listed. Apparently no-one who was paying us had ever seen our working software! This didn’t seem to make sense. Who would pay for software without ever looking at it? And if our funders hadn’t seen the software, what information were they using when they decided whether to fund us each year?

…The sheer number of apps available that claim to solve all manner of problems may seem encouraging on the surface – 1,500 (and counting) to help refugees might be a case in point – but how many are useful? How many are being used? How many solve a problem? And how many are real?

Due diligence? Maybe it’s time we had an app for that…(More)”

Blurring the Boundaries Through Digital Innovation


Book edited by D’Ascenzo, F., Magni, M., Lazazzara, A., and Za, S.: “This book examines the impact of digital innovation on organizations. It reveals how the digital revolution is redefining traditional levels of analysis while at the same time blurring the internal and external boundaries of the organizational environment. It presents a collection of research papers that examine the interaction between Information and Communication Technology (ICT) and behavior from a threefold perspective:

First, they analyze individual behavior in terms of specific organizational practices like learning, collaboration and knowledge transfer, as well as the use of ICT within the organization.

Second, they explore the dynamics at work on the border between the internal and the external environments by analyzing the organizational impact of ICT usage outside the company, as can be seen in employer branding, consumer behavior and organizational image.

Third, they investigate how ICT is being adopted to help face societal challenges outside the company like waste and pollution, smart cities, and e-government….(More)”

Big Data Challenges: Society, Security, Innovation and Ethics


Book edited by Bunnik, A., Cawley, A., Mulqueen, M., and Zwitter, A.: “This book brings together an impressive range of academic and intelligence professional perspectives to interrogate the social, ethical and security upheavals in a world increasingly driven by data. Written in a clear and accessible style, it offers fresh insights to the deep reaching implications of Big Data for communication, privacy and organisational decision-making. It seeks to demystify developments around Big Data before evaluating their current and likely future implications for areas as diverse as corporate innovation, law enforcement, data science, journalism, and food security. The contributors call for a rethinking of the legal, ethical and philosophical frameworks that inform the responsibilities and behaviours of state, corporate, institutional and individual actors in a more networked, data-centric society. In doing so, the book addresses the real world risks, opportunities and potentialities of Big Data….(More)”

The Perils of Using Technology to Solve Other People’s Problems


Ethan Zuckerman in The Atlantic: “I found Shane Snow’s essay on prison reform — “How Soylent and Oculus Could Fix the Prison System” — through hate-linking….

Some of my hate-linking friends began their eye-rolling about Snow’s article with the title, which references two of Silicon Valley’s most hyped technologies. With the current focus on the U.S. as an “innovation economy,” it’s common to read essays predicting the end of a major social problem due to a technical innovation. Bitcoin will end poverty in the developing world by enabling inexpensive money transfers. Wikipedia and One Laptop Per Child will educate the world’s poor without need for teachers or schools. Self-driving cars will obviate public transport and reshape American cities.

The writer Evgeny Morozov has offered a sharp and helpful critique to this mode of thinking, which he calls “solutionism.” Solutionism demands that we focus on problems that have “nice and clean technological solution at our disposal.” In his book, To Save Everything, Click Here, Morozov savages ideas like Snow’s, regardless of whether they are meant as thought experiments or serious policy proposals. (Indeed, one worry I have in writing this essay is taking Snow’s ideas too seriously, as Morozov does with many of the ideas he lambastes in his book.)

The problem with the solutionist critique, though, is that it tends to remove technological innovation from the problem-solver’s toolkit. In fact, technological development is often a key component in solving complex social and political problems, and new technologies can sometimes open up a previously intractable problem. The rise of inexpensive solar panels may be an opportunity to move nations away from a dependency on fossil fuels and begin lowering atmospheric levels of carbon dioxide, much as developments in natural gas extraction and transport technologies have lessened the use of dirtier fuels like coal.

But it’s rare that technology provides a robust solution to a social problem by itself. Successful technological approaches to solving social problems usually require changes in laws and norms, as well as market incentives to make change at scale….

Design philosophies like participatory design and codesign bring this concept to the world of technology, demanding that technologies designed for a group of people be designed and built, in part, by those people. Codesign challenges many of the assumptions of engineering, requiring people who are used to working in isolation to build broad teams and to understand that those most qualified to offer a technical solution may be least qualified to identify a need or articulate a design problem. This method is hard and frustrating, but it’s also one of the best ways to ensure that you’re solving the right problem, rather than imposing your preferred solution on a situation…(More)”

Post, Mine, Repeat: Social Media Data Mining Becomes Ordinary


Book by Helen Kennedy that “…argues that as social media data mining becomes more and more ordinary, as we post, mine and repeat, new data relations emerge. These new data relations are characterised by a widespread desire for numbers and the troubling consequences of this desire, and also by the possibility of doing good with data and resisting data power, by new and old concerns, and by instability and contradiction. Drawing on action research with public sector organisations, interviews with commercial social insights companies and their clients, focus groups with social media users and other research, Kennedy provides a fascinating and detailed account of living with social media data mining inside the organisations that make up the fabric of everyday life….(More)”

The Racist Algorithm?


Anupam Chander in the Michigan Law Review (2017 Forthcoming) : “Are we on the verge of an apartheid by algorithm? Will the age of big data lead to decisions that unfairly favor one race over others, or men over women? At the dawn of the Information Age, legal scholars are sounding warnings about the ubiquity of automated algorithms that increasingly govern our lives. In his new book, The Black Box Society: The Hidden Algorithms Behind Money and Information, Frank Pasquale forcefully argues that human beings are increasingly relying on computerized algorithms that make decisions about what information we receive, how much we can borrow, where we go for dinner, or even whom we date. Pasquale’s central claim is that these algorithms will mask invidious discrimination, undermining democracy and worsening inequality. In this review, I rebut this prominent claim. I argue that any fair assessment of algorithms must be made against their alternative. Algorithms are certainly obscure and mysterious, but often no more so than the committees or individuals they replace. The ultimate black box is the human mind. Relying on contemporary theories of unconscious discrimination, I show that the consciously racist or sexist algorithm is less likely than the consciously or unconsciously racist or sexist human decision-maker it replaces. The principal problem of algorithmic discrimination lies elsewhere, in a process I label viral discrimination: algorithms trained or operated on a world pervaded by discriminatory effects are likely to reproduce that discrimination.

I argue that the solution to this problem lies in a kind of algorithmic affirmative action. This would require training algorithms on data that includes diverse communities and continually assessing the results for disparate impacts. Instead of insisting on race or gender neutrality and blindness, this would require decision-makers to approach algorithmic design and assessment in a race- and gender-conscious manner….(More)”
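
What might “continually assessing the results for disparate impacts” look like in practice? A minimal sketch, assuming the auditor can see group labels (which is precisely Chander’s argument against blindness), applying the EEOC-style “four-fifths” screen to invented model outputs:

```python
import numpy as np

def disparate_impact_ratios(predictions: np.ndarray, groups: np.ndarray) -> dict:
    """Each group's favorable-outcome rate relative to the best-treated group.

    Ratios below 0.8 fail the 'four-fifths' rule of thumb, a common
    first-pass screen for disparate impact.
    """
    rates = {str(g): float(predictions[groups == g].mean())
             for g in np.unique(groups)}
    best = max(rates.values())
    return {g: round(r / best, 2) for g, r in rates.items()}

# Invented model outputs: 1 = favorable decision (e.g., loan approved).
preds  = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0])
grps   = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])
print(disparate_impact_ratios(preds, grps))  # {'A': 1.0, 'B': 0.5} -> investigate
```

A screen like this says nothing about why the gap exists; it simply flags model outputs for the kind of conscious, ongoing review the excerpt calls for.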