Invisible Algorithms, Invisible Politics


Laura Forlano at Public Books: “Over the past several decades, politicians and business leaders, technology pundits and the mainstream media, engineers and computer scientists—as well as science fiction and Hollywood films—have repeated a troubling refrain, championing the shift away from the material and toward the virtual, the networked, the digital, the online. It is as if all of life could be reduced to 1s and 0s, rendering it computable….

Today, it is in design criteria and engineering specifications—such as “invisibility” and “seamlessness,” which aim to improve the human experience with technology—that ethical decisions are negotiated….

Take this example. In late July 2017, the City of Chicago agreed to settle a $38.75 million class-action lawsuit related to its red-light-camera program. Under the settlement, the city will repay drivers who were unfairly ticketed a portion of the cost of their ticket. Over the past five years, the program, ostensibly implemented to make Chicago’s intersections safer, has been mired in corruption, bribery, mismanagement, malfunction, and moral wrongdoing. This confluence of factors has resulted in a great deal of negative press about the project.

The red-light-camera program is just one of many examples of such technologies being adopted by cities in their quest to become “smart” and, at the same time, increase revenue. Others include ticketless parking, intelligent traffic management, ride-sharing platforms, wireless networks, sensor-embedded devices, surveillance cameras, predictive policing software, driverless car testbeds, and digital-fabrication facilities.

The company that produced the red-light cameras, Redflex, claims on their website that their technology can “reliably and consistently address negative driving behaviors and effectively enforce traffic laws on roadways and intersections with a history of crashes and incidents.” Nothing could be further from the truth. Instead, the cameras were unnecessarily installed at some intersections without a history of problems; they malfunctioned; they issued illegal tickets due to short yellow lights that were not within federal limits; and they issued tickets after enforcement hours. And, due to existing structural inequalities, these difficulties were more likely to negatively impact poorer and less advantaged city residents.

The controversies surrounding red-light cameras in Chicago make visible the ways in which design criteria and engineering specifications—concepts including safety and efficiency, seamlessness and stickiness, convenience and security—are themselves ways of defining the ethics, values, and politics of our cities and citizens. To be sure, these qualities seem clean, comforting, and cuddly at first glance. They are difficult to argue against.

But, like wolves in sheep’s clothing, they gnash their political-economic teeth, and show their insatiable desire to further the goals of neoliberal capitalism. Rather than merely slick marketing, these mundane infrastructures (hardware, software, data, and services) negotiate ethical questions around what kinds of societies we aspire to, what kind of cities we want to live in, what kinds of citizens we can become, who will benefit from these tradeoffs, and who will be left out….(More)

Republics of Makers: From the Digital Commons to a Flat Marginal Cost Society


Mario Carpo at eFlux: “…as the costs of electronic computation have been steadily decreasing for the last forty years at least, many have recently come to the conclusion that, for most practical purposes, the cost of computation is asymptotically tending to zero. Indeed, the current notion of Big Data is based on the assumption that an almost unlimited amount of digital data will soon be available at almost no cost, and similar premises have further fueled the expectation of a forthcoming “zero marginal costs society”: a society where, except for some upfront and overhead costs (the costs of building and maintaining some facilities), many goods and services will be free for all. And indeed, against all odds, an almost zero marginal cost society is already a reality in the case of many services based on the production and delivery of electricity: from the recording, transmission, and processing of electrically encoded digital information (bits) to the production and consumption of electrical power itself. Using renewable energies (solar, wind, hydro), the generation of electrical power is free, except for the cost of building and maintaining installations and infrastructure. And given the recent progress in the micro-management of intelligent electrical grids, it is easy to imagine that in the near future the cost of servicing a network of very small, local hydro-electric generators, for example, could easily be devolved to local communities of prosumers who would take care of those installations as they tend to their living environment, on an almost voluntary, communal basis. This was already often the case during the early stages of electrification, before the rise of AC (alternating current, which, unlike DC, or direct current, could be carried over long distances): AC became the industry’s choice only after Galileo Ferraris’s and Nikola Tesla’s developments in AC technologies in the 1880s.

Likewise, at the micro-scale of the electronic production and processing of bits and bytes of information, the Open Source movement and the phenomenal surge of some crowdsourced digital media (including some so-called social media) in the first decade of the twenty-first century have already proven that a collaborative, zero cost business model can effectively compete with products priced for profit on a traditional marketplace. As the success of Wikipedia, Linux, or Firefox proves, many are happy to volunteer their time and labor for free when all can profit from the collective work of an entire community without having to pay for it. This is now technically possible precisely because the fixed costs of building, maintaining, and delivering these services are very small; hence, from the point of view of the end-user, negligible.

Yet, regardless of the fixed costs of the infrastructure, content—even user-generated content—has costs, albeit for the time being these are mostly hidden, voluntarily borne, or inadvertently absorbed by the prosumers themselves. For example, the wisdom of Wikipedia is not really a wisdom of crowds: most Wikipedia entries are de facto curated by fairly traditional scholarly communities, and these communities can contribute their expertise for free only because their work has already been paid for by others—often by universities. In this sense, Wikipedia is only piggybacking on someone else’s research investments (but multiplying their outreach, which is one reason for its success). Ditto for most Open Source software, as training a software engineer, coder, or hacker takes time and money—an investment for future returns that in many countries around the world is still borne, at least in part, by public institutions….(More)”.
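
Carpo’s “zero marginal cost” claim rests on a simple piece of arithmetic: when a service carries a large fixed cost but a near-zero marginal cost, the average cost per user falls toward that marginal cost as usage grows. A minimal sketch, using purely hypothetical numbers chosen for illustration:

```python
# Illustrative only: hypothetical fixed and marginal costs for a digital service.
FIXED_COST = 100_000.0   # upfront cost of building and maintaining the infrastructure
MARGINAL_COST = 0.001    # assumed cost of serving one additional user (near zero)

def average_cost(users: int) -> float:
    """Average cost per user = (fixed cost + marginal cost * users) / users."""
    return (FIXED_COST + MARGINAL_COST * users) / users

for n in (1_000, 100_000, 10_000_000):
    print(f"{n:>10,} users -> average cost per user: {average_cost(n):.4f}")

# As the number of users grows, the average cost per user tends toward the
# marginal cost (0.001 here), which is the sense in which a service becomes
# "almost free" once its fixed costs are absorbed.
```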

The Entrepreneurial Impact of Open Data


Sheena Iyengar and Patrick Bergemann at Opening Governance Research Network: “…To understand how open data is being used to spur innovation and create value, the Governance Lab (GovLab) at NYU Tandon School of Engineering conducted the first-ever census of companies that use open data. Using outreach campaigns, expert advice and other sources, they created a database of more than 500 companies founded in the United States, called the Open Data 500 (OD500). Among the small and medium enterprises identified that use government data, the most common industries are data and technology, followed by finance and investment, business and legal services, and healthcare.

In the context of our collaboration with the GovLab-chaired MacArthur Foundation Research Network on Opening Governance, we sought to dig deeper into the broader impact of open data on entrepreneurship. To do so we combined the OD500 with databases on startup activity from Crunchbase and AngelList. This allowed us to look at the trajectories of open data companies from their founding to the present day. In particular, we compared companies that use open data to similar companies with the same founding year, location and industry to see how well open data companies fare at securing funding along with other indicators of success.

We first looked at the extent to which open data companies have access to investor capital, wondering if open data companies have difficulty gaining funding because their use of public data may be perceived as insufficiently innovative or proprietary. If this is the case, the economic impact of open data may be limited. Instead, we found that open data companies obtain more investors than similar companies that do not use open data. Open data companies have, on average, 1.74 more investors than similar companies founded at the same time. Interestingly, investors in open data companies are not a specific group who specialize in open data startups. Instead, a wide variety of investors put money into these companies. Of the investors who funded open data companies, 59 percent had only invested in one open data company, while 81 percent had invested in one or two. Open data companies appear to be appealing to a wide range of investors….(More)”.
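
The matched comparison described above can be sketched roughly as follows. This is not the authors’ code; the data and column names (is_open_data, founding_year, location, industry, num_investors) are hypothetical stand-ins for fields one might derive from the OD500, Crunchbase and AngelList.

```python
import pandas as pd

# Hypothetical combined dataset; values and column names are illustrative only.
companies = pd.DataFrame({
    "name":          ["A", "B", "C", "D", "E", "F"],
    "is_open_data":  [True, False, True, False, True, False],
    "founding_year": [2012, 2012, 2014, 2014, 2012, 2012],
    "location":      ["NY", "NY", "SF", "SF", "NY", "NY"],
    "industry":      ["finance", "finance", "health", "health", "finance", "finance"],
    "num_investors": [5, 3, 4, 2, 6, 4],
})

# Compare open data companies with peers that share founding year, location and
# industry, then average the per-group difference in investor counts.
diffs = []
for _, group in companies.groupby(["founding_year", "location", "industry"]):
    open_cos = group[group["is_open_data"]]
    peers = group[~group["is_open_data"]]
    if len(open_cos) and len(peers):
        diffs.append(open_cos["num_investors"].mean() - peers["num_investors"].mean())

if diffs:
    print(f"Average extra investors for open data companies: {sum(diffs) / len(diffs):.2f}")
```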

Should We Treat Data as Labor? Moving Beyond ‘Free’


Paper by Imanol Arrieta Ibarra, Leonard Goff, Diego Jiménez Hernández and Jaron Lanier: “In the digital economy, user data is typically treated as capital created by corporations observing willing individuals. This neglects users’ role in creating data, reducing incentives for users, distributing the gains from the data economy unequally and stoking fears of automation. Instead treating data (at least partially) as labor could help resolve these issues and restore a functioning market for user contributions, but may run against the near-term interests of dominant data monopsonists who have benefited from data being treated as ‘free’. Countervailing power, in the form of competition, a data labor movement and/or thoughtful regulation could help restore balance….(More)”.

Views on Open Data Business from Software Development Companies


Antti Herala, Jussi Kasurinen, and Erno Vanhala in the Journal of Theoretical and Applied Electronic Commerce Research: “The main concept of open data and its application is simple; access to publicly funded data provides greater returns from the public investment and can generate wealth through the downstream use of outputs, such as traffic information or weather forecast services. However, even though open data and data sharing as concepts are forty years old, with the open data initiative reaching ten, practical actions and applications have tended to stay at a superficial level, and significant progress or success stories are hard to find. The current trend is that governments and municipalities are opening their data, but the impact and usefulness of raw open data repositories to citizens – and even to businesses – can be questioned. Besides the governments, a handful of private organizations are opening their data in an attempt to unlock the economic value of open data, but even they have difficulties finding innovative uses, let alone generating additional profit.

In a previous study it was found that companies are interested in open data and that this mindset spans different industries, from publicly available data to private business-to-business data access. Open data is not only a resource for software companies, but also for traditional engineering industries and even for small, nonfranchised local markets and shops. In our previous study, it was established that there is evidence that companies recognize the applicability of open data, and that opening data to clients by private organizations leads to business opportunities, creating new value.

However, while there is interest in open data across a wide variety of businesses, the question remains whether open data is actually used to generate income, or whether other sharing methods in use are more efficient and more profitable.

For this study, four research questions were formulated. The first three concentrate on the usage of open data and on the interest in opening or sharing data, while the fourth revolves around the different types of openness:

  • How do new clients express interest towards open data?
  • What kind of open data-based solutions is the existing clientele expecting?
  • How does the product portfolio of a software company respond to open data?
  • What are the current trends of open initiatives?…(More)”.

The Follower Factory


Nicholas Confessore, Gabriel J.X. Dance, Richard Harris and Mark Hansen in The New York Times: “…Fake accounts, deployed by governments, criminals and entrepreneurs, now infest social media networks. By some calculations, as many as 48 million of Twitter’s reported active users — nearly 15 percent — are automated accounts designed to simulate real people, though the company claims that number is far lower.

In November, Facebook disclosed to investors that it had at least twice as many fake users as it previously estimated, indicating that up to 60 million automated accounts may roam the world’s largest social media platform. These fake accounts, known as bots, can help sway advertising audiences and reshape political debates. They can defraud businesses and ruin reputations. Yet their creation and sale fall into a legal gray zone.

“The continued viability of fraudulent accounts and interactions on social media platforms — and the professionalization of these fraudulent services — is an indication that there’s still much work to do,” said Senator Mark Warner, the Virginia Democrat and ranking member of the Senate Intelligence Committee, which has been investigating the spread of fake accounts on Facebook, Twitter and other platforms.

Despite rising criticism of social media companies and growing scrutiny by elected officials, the trade in fake followers has remained largely opaque. While Twitter and other platforms prohibit buying followers, Devumi and dozens of other sites openly sell them. And social media companies, whose market value is closely tied to the number of people using their services, make their own rules about detecting and eliminating fake accounts.

Devumi’s founder, German Calas, denied that his company sold fake followers and said he knew nothing about social identities stolen from real users. “The allegations are false, and we do not have knowledge of any such activity,” Mr. Calas said in an email exchange in November.

The Times reviewed business and court records showing that Devumi has more than 200,000 customers, including reality television stars, professional athletes, comedians, TED speakers, pastors and models. In most cases, the records show, they purchased their own followers. In others, their employees, agents, public relations companies, family members or friends did the buying. For just pennies each — sometimes even less — Devumi offers Twitter followers, views on YouTube, plays on SoundCloud, the music-hosting site, and endorsements on LinkedIn, the professional-networking site….(More)”.

The Tyranny of Metrics


Book by Jerry Z. Muller on “How the obsession with quantifying human performance threatens our schools, medical care, businesses, and government…

Today, organizations of all kinds are ruled by the belief that the path to success is quantifying human performance, publicizing the results, and dividing up the rewards based on the numbers. But in our zeal to instill the evaluation process with scientific rigor, we’ve gone from measuring performance to fixating on measuring itself. The result is a tyranny of metrics that threatens the quality of our lives and most important institutions. In this timely and powerful book, Jerry Muller uncovers the damage our obsession with metrics is causing–and shows how we can begin to fix the problem.

Filled with examples from education, medicine, business and finance, government, the police and military, and philanthropy and foreign aid, this brief and accessible book explains why the seemingly irresistible pressure to quantify performance distorts and distracts, whether by encouraging “gaming the stats” or “teaching to the test.” That’s because what can and does get measured is not always worth measuring, may not be what we really want to know, and may draw effort away from the things we care about. Along the way, we learn why paying for measured performance doesn’t work, why surgical scorecards may increase deaths, and much more. But metrics can be good when used as a complement to—rather than a replacement for—judgment based on personal experience, and Muller also gives examples of when metrics have been beneficial…(More)”.

Governments are not startups


Ramy Ghorayeb at Medium: “…Why can’t governments move fast like the private industry? We have seen the rise of big corporations and startups adapting and evolving with the needs of their customers. Why can’t governments adapt to the needs of their population? Why is it so slow?

Truth is, innovating in the public sector cannot and should not be like innovating in the private one. Startups and corporations are about maximizing their internalities, while public administrations are also about minimizing externalities.

A straightforward example: let’s imagine the US authorizes online voting, in addition to physical voting, for the next presidential election. Obviously, it is a good way to incentivize people to vote. You will be able to vote from anywhere at any time, and more importantly, the cost of making one hundred people vote will be the same as for one thousand or one million.

But on the other side, you are favoring the population with easy access to the Internet, meaning the middle and upper classes. What’s more, you are also favoring the younger generations over the older ones.
These populations have known, differing political opinions. Ultimately, you are deliberately modifying the distribution of voting power in the country. It is not necessarily a bad or a good thing (keeping only physical voting also favors a specific demographic segment), but there are a lot of issues that need to be worked through thoroughly before making any change to the democratic balance. I’d like to call this the participatory bias.

This participatory bias is the reason why the public side will always have a latency to adopt technology.

On the private side, when a business wants to work on a new product, it will focus only on its customers. The goal of a startup is even to find a specific segment of the population with its own needs and problems, a niche, and to test innovative solutions in order to improve that segment’s experience and optimize its acquisition and retention. In other words, it will maximize the internalities.
But the public side needs to look at the externalities that its new products can create. It cannot isolate a population; it has to look at the negative effects on everyone else. And, like a big corporation, it cannot experiment and fail the way a startup does, because it has to preserve its reputation and legacy of trust.

Now the situation isn’t locked. Thanks to the civic tech ecosystem, governments have found a way to externalize innovation and learn from experimentation, failures and successes without doing it all themselves. Startups and labs are handling the difficult role of inventor, and are showing good ways to use tech for citizens, iteration by iteration. More interestingly, they are also showing that they are not threatened by public-side replications. In fact, they are finding their complementarity….(More)”

How AI Could Help the Public Sector


Emma Martinho-Truswell in the Harvard Business Review: “A public school teacher grading papers faster is a small example of the wide-ranging benefits that artificial intelligence could bring to the public sector. AI could be used to make government agencies more efficient, to improve the job satisfaction of public servants, and to increase the quality of services offered. Talent and motivation are wasted doing routine tasks when they could be doing more creative ones.

Applications of artificial intelligence to the public sector are broad and growing, with early experiments taking place around the world. In addition to education, public servants are using AI to help them make welfare payments and immigration decisions, detect fraud, plan new infrastructure projects, answer citizen queries, adjudicate bail hearings, triage health care cases, and establish drone paths.  The decisions we are making now will shape the impact of artificial intelligence on these and other government functions. Which tasks will be handed over to machines? And how should governments spend the labor time saved by artificial intelligence?

So far, the most promising applications of artificial intelligence use machine learning, in which a computer program learns and improves its own answers to a question by creating and iterating algorithms from a collection of data. This data is often in enormous quantities and from many sources, and a machine learning algorithm can find new connections among data that humans might not have expected. IBM’s Watson, for example, is a treatment-recommendation bot, sometimes finding treatments that human doctors might not have considered or known about.

A machine learning program may be better, cheaper, faster, or more accurate than humans at tasks that involve lots of data, complicated calculations, or repetitive tasks with clear rules. Those in public service, and in many other big organizations, may recognize part of their job in that description. The very fact that government workers are often following a set of rules — a policy or set of procedures — already presents many opportunities for automation.

To be useful, a machine learning program does not need to be better than a human in every case. In my work, we expect that much of the “low hanging fruit” of government use of machine learning will be as a first line of analysis or decision-making. Human judgment will then be critical to interpret results, manage harder cases, or hear appeals.

When the work of public servants can be done in less time, a government might reduce its staff numbers and return the money saved to taxpayers — and I am sure that some governments will pursue that option. But it’s not necessarily the one I would recommend. Governments could instead choose to invest in the quality of their services. They can redirect workers’ time towards more rewarding work that requires lateral thinking, empathy, and creativity — all things at which humans continue to outperform even the most sophisticated AI program….(More)”.
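
To make the “first line of analysis” idea concrete, here is a minimal, entirely hypothetical sketch of the pattern the article describes: a model decides only the cases it is confident about and refers the rest to a human reviewer. The data, feature names and 0.9 confidence threshold are invented for illustration; this is not any agency’s actual system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical triage sketch: train a simple model on stand-in historical cases.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))                         # invented case features
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)   # invented past decisions

model = LogisticRegression().fit(X_train, y_train)

def triage(case_features: np.ndarray, threshold: float = 0.9) -> str:
    """Auto-decide only when the model is confident; otherwise escalate to a human."""
    proba = model.predict_proba(case_features.reshape(1, -1))[0]
    if proba.max() >= threshold:
        return f"auto decision: class {int(proba.argmax())} (confidence {proba.max():.2f})"
    return f"refer to human reviewer (confidence only {proba.max():.2f})"

# Route a few new (synthetic) cases through the triage step.
for case in rng.normal(size=(3, 4)):
    print(triage(case))
```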

2018 Edelman Trust Barometer


Executive Summary: “Volatility brews beneath a stagnant surface. If a single theme captures the state of the world’s trust in 2018, it is this. Even as people’s trust in business, government, NGOs and media across 28 countries remained largely unchanged, experiencing virtually no recovery from 2017 (Fig. 1), dramatic shifts are taking place at the country level and within the institution of media.

Globally, 20 of 28 countries lie in distruster territory (Fig. 2), one more than in 2017. Trust among the informed public—those with higher levels of income and education— declined slightly on a global level, from 60 percent to 59 percent, thrusting this group into neutral territory from its once trusting status. A closer look, however, reveals a world moving apart (Fig. 3).

In 2018, two poles have emerged: a cluster of six nations where trust has dramatically increased, and six where trust has deeply declined. Whereas in previous years country-level trust has moved largely in lockstep, for the first time ever there is now a distinct split between extreme trust gainers and losers. No country saw steeper declines than the United States, with a 37-point aggregate drop in trust across all institutions.

The loss of trust was most severe among the informed public—a 23-point fall on the Trust Index—nearly erasing the “mass-class” divide that once stood between this segment of the U.S. population and the country’s far-less-trusting mass population. At the opposite end of the spectrum, China experienced a 27-point gain, more than any other country. Following behind in the trust gainer category are the UAE (24 points) and South Korea (23 points)….(More)”.