Even Imperfect Algorithms Can Improve the Criminal Justice System


Sam Corbett-Davies, Sharad Goel and Sandra González-Bailón in The New York Times: “In courtrooms across the country, judges turn to computer algorithms when deciding whether defendants awaiting trial must pay bail or can be released without payment. The increasing use of such algorithms has prompted warnings about the dangers of artificial intelligence. But research shows that algorithms are powerful tools for combating the capricious and biased nature of human decisions.

Bail decisions have traditionally been made by judges relying on intuition and personal preference, in a hasty process that often lasts just a few minutes. In New York City, the strictest judges are more than twice as likely to demand bail as the most lenient ones.

To combat such arbitrariness, judges in some cities now receive algorithmically generated scores that rate a defendant’s risk of skipping trial or committing a violent crime if released. Judges are free to exercise discretion, but algorithms bring a measure of consistency and evenhandedness to the process.
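Although the instruments vary by jurisdiction, risk scores of this kind are typically simple points-based checklists rather than opaque machine learning. The sketch below is purely illustrative: the factors, weights, and cut-offs are invented for this example and do not correspond to any deployed tool.

```python
# Hypothetical sketch of a points-based pretrial risk score.
# All factors, weights, and thresholds below are invented for
# illustration; real instruments use factors validated against
# historical outcome data.

def risk_score(defendant):
    """Sum weighted risk factors into a single raw score."""
    score = 0
    score += 2 * int(defendant.get("pending_charge", False))       # another charge pending
    score += 3 * int(defendant.get("prior_fta", False))            # prior failure to appear
    score += 1 * min(defendant.get("prior_convictions", 0), 3)     # prior convictions, capped
    score += 1 * int(defendant.get("unemployed", False))
    return score

def risk_level(score):
    """Map the raw score onto the low/moderate/high bands a judge would see."""
    if score <= 2:
        return "low"
    if score <= 5:
        return "moderate"
    return "high"

print(risk_level(risk_score({"pending_charge": True, "prior_convictions": 1})))  # moderate
```

The judge retains discretion; the score's contribution is simply that two defendants with identical histories receive identical ratings, which is precisely what unaided intuition fails to guarantee.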

The use of these algorithms often yields immediate and tangible benefits: Jail populations, for example, can decline without adversely affecting public safety.

In one recent experiment, agencies in Virginia were randomly selected to use an algorithm that rated both defendants’ likelihood of skipping trial and their likelihood of being arrested if released. Nearly twice as many defendants were released, and there was no increase in pretrial crime….(More)”.

Computational Propaganda and Political Big Data: Moving Toward a More Critical Research Agenda


Gillian Bolsover and Philip Howard in the journal Big Data: “Computational propaganda has recently exploded into public consciousness. The U.S. presidential campaign of 2016 was marred by evidence, which continues to emerge, of targeted political propaganda and the use of bots to distribute political messages on social media. This computational propaganda is both a social and technical phenomenon. Technical knowledge is necessary to work with the massive databases used for audience targeting; it is necessary to create the bots and algorithms that distribute propaganda; it is necessary to monitor and evaluate the results of these efforts in agile campaigning. Thus, technical knowledge comparable to that of those who create and distribute this propaganda is necessary to investigate the phenomenon.

However, viewing computational propaganda only from a technical perspective—as a set of variables, models, codes, and algorithms—plays into the hands of those who create it, the platforms that serve it, and the firms that profit from it. The very act of making something technical and impartial makes it seem inevitable and unbiased. This undermines the opportunities to argue for change in the social value and meaning of this content and the structures in which it exists. Big-data research is necessary to understand the socio-technical issue of computational propaganda and the influence of technology in politics. However, big data researchers must maintain a critical stance toward the data being used and analyzed so as to ensure that we are critiquing as we go about describing, predicting, or recommending changes. If research studies of computational propaganda and political big data do not engage with the forms of power and knowledge that produce it, then the very possibility for improving the role of social-media platforms in public life evaporates.

Definitionally, computational propaganda has two important parts: the technical and the social. Focusing on the technical, Woolley and Howard define computational propaganda as the assemblage of social-media platforms, autonomous agents, and big data tasked with the manipulation of public opinion. In contrast, the social definition of computational propaganda derives from the definition of propaganda—communications that deliberately misrepresent symbols, appealing to emotions and prejudices and bypassing rational thought, to achieve a specific goal of their creators—with computational propaganda understood as propaganda created or disseminated using computational (technical) means…(More)” (Full Text HTML / Full Text PDF).

Could Bitcoin technology help science?


Andy Extance at Nature: “…The much-hyped technology behind Bitcoin, known as blockchain, has intoxicated investors around the world and is now making tentative inroads into science, spurred by broad promises that it can transform key elements of the research enterprise. Supporters say that it could enhance reproducibility and the peer review process by creating incorruptible data trails and securely recording publication decisions. But some also argue that the buzz surrounding blockchain often exceeds reality and that introducing the approach into science could prove expensive and introduce ethical problems.

A few collaborations, including Scienceroot and Pluto, are already developing pilot projects for science. Scienceroot aims to raise US$20 million, which will help pay both peer reviewers and authors within its electronic journal and collaboration platform. It plans to raise the funds in early 2018 by exchanging some of the science tokens it uses for payment for another digital currency known as ether. And the makers of the Wolfram Mathematica algebra program — which is widely used by researchers — are currently working towards offering support for an open-source blockchain platform called Multichain. Scientists could use this, for example, to upload data to a shared, open workspace that isn’t controlled by any specific party, according to Multichain….

Claudia Pagliari, who researches digital health-tracking technologies at the University of Edinburgh, UK, says that she recognizes the potential of blockchain but that researchers have yet to properly explore its ethical issues. What happens if a patient withdraws consent for a trial that is immutably recorded on a blockchain? And unscrupulous researchers could still add fake data to a blockchain, even if the process is so open that everyone can see who adds it, says Pagliari. Once added, no one can change that information, although it’s possible they could label it as retracted….(More)”.
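Pagliari’s point about immutability follows directly from how a blockchain links records. A minimal hash-chain sketch (ordinary Python with no blockchain library; the trial records are invented) shows why a withdrawn or fake entry can only ever be labelled, not erased: altering any past record breaks every hash that follows it.

```python
import hashlib
import json

def block_hash(record, prev_hash):
    """Hash a record together with the previous block's hash."""
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain, record):
    """Add a record, chaining it to whatever came before."""
    prev = chain[-1]["hash"] if chain else "genesis"
    chain.append({"record": record, "prev": prev, "hash": block_hash(record, prev)})

def verify(chain):
    """Re-derive every hash; any altered record breaks all links after it."""
    prev = "genesis"
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block["record"], prev):
            return False
        prev = block["hash"]
    return True

chain = []
append(chain, {"trial": "T-01", "patient": "A", "consent": True})
# Withdrawal can only be recorded as a NEW entry, not by rewriting the old one:
append(chain, {"trial": "T-01", "patient": "A", "consent": "withdrawn"})
assert verify(chain)

chain[0]["record"]["consent"] = False   # tamper with history
assert not verify(chain)                # the tampering is immediately detectable
```

This is exactly the double edge Pagliari describes: the same property that makes data trails incorruptible also makes a consent withdrawal a matter of annotation rather than deletion.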

New York City moves to create accountability for algorithms


Lauren Kirchner at ArsTechnica: “The algorithms that play increasingly central roles in our lives often emanate from Silicon Valley, but the effort to hold them accountable may have another epicenter: New York City. Last week, the New York City Council unanimously passed a bill to tackle algorithmic discrimination—the first measure of its kind in the country.

The algorithmic accountability bill, waiting to be signed into law by Mayor Bill de Blasio, establishes a task force that will study how city agencies use algorithms to make decisions that affect New Yorkers’ lives, and whether any of the systems appear to discriminate against people based on age, race, religion, gender, sexual orientation, or citizenship status. The task force’s report will also explore how to make these decision-making processes understandable to the public.

The bill’s sponsor, Council Member James Vacca, said he was inspired by ProPublica’s investigation into racially biased algorithms used to assess the criminal risk of defendants….

A previous, more sweeping version of the bill had mandated that city agencies publish the source code of all algorithms being used for “targeting services” or “imposing penalties upon persons or policing” and to make them available for “self-testing” by the public. At a hearing at City Hall in October, representatives from the mayor’s office expressed concerns that this mandate would threaten New Yorkers’ privacy and the government’s cybersecurity.

The bill was one of two moves the City Council made last week concerning algorithms. On Thursday, the committees on health and public safety held a hearing on the city’s forensic methods, including controversial tools that the chief medical examiner’s office crime lab has used for difficult-to-analyze samples of DNA.

As a ProPublica/New York Times investigation detailed in September, an algorithm created by the lab for complex DNA samples has been called into question by scientific experts and former crime lab employees.

The software, called the Forensic Statistical Tool, or FST, has never been adopted by any other lab in the country….(More)”.

Algorithms can deliver public services, too


Diane Coyle in the Financial Times: “As economists have been pointing out for a while, the combination of always-online smartphones and matching algorithms (of the kind pioneered by Nobel Prize-winning economist Alvin Roth and others) reduces the transaction costs involved in economic exchanges. As Ronald Coase argued, transaction costs, because they limit the scope of exchanges in the market, help explain why companies exist. Where those costs are high, it is more efficient to keep the activities inside an organisation. The relevant costs are informational. But in dense urban markets the new technologies make it easy and fast to match up the two sides of a desired exchange, such as a would-be passenger and a would-be driver for a specific journey. This can expand the market (as well as substituting for existing forms of transport such as taxis and buses). It becomes viable to serve previously under-served areas, or for people to make journeys they previously could not have afforded. In principle all parties (customers, drivers and platform) can share the benefits.

Making more efficient use of assets such as cars is nice, but the economic gains come from matching demand with supply in contexts where there are no or few economies of scale — such as conveying a passenger or a patient from A to B, or providing a night’s accommodation in a specific place.
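The matching itself need not be sophisticated to collapse those transaction costs. A minimal sketch — invented names and coordinates, and a greedy nearest-neighbour assignment far simpler than anything a real platform would ship — illustrates the mechanism: the pairing that once required a human dispatcher happens instantly.

```python
# Toy demand-supply matching: pair each waiting passenger with the
# nearest free driver. Positions are hypothetical (x, y) coordinates;
# real platforms optimise richer objectives (ETA, pooling, pricing).

def nearest_match(passengers, drivers):
    """Greedily assign each passenger the closest remaining driver."""
    free = dict(drivers)  # driver name -> (x, y) position
    matches = {}
    for name, (px, py) in passengers.items():
        if not free:
            break  # demand exceeds supply; remaining passengers wait
        closest = min(free, key=lambda d: (free[d][0] - px) ** 2 + (free[d][1] - py) ** 2)
        matches[name] = closest
        del free[closest]  # a driver serves one passenger at a time
    return matches

passengers = {"Alice": (0, 0), "Bob": (5, 5)}
drivers = {"D1": (1, 0), "D2": (6, 5), "D3": (9, 9)}
print(nearest_match(passengers, drivers))  # {'Alice': 'D1', 'Bob': 'D2'}
```

The same loop applies unchanged whether the “driver” is a taxi, a home-care visit, or a spare room: it is matching without scale economies, which is Coyle’s point about where the gains come from.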

Rather than being seen as a threat to public services, the new technologies should be taken as a compelling opportunity to deliver inherently unscalable services efficiently, especially given the fiscal squeeze so many authorities are facing. Public transport is one opportunity. How else could cash-strapped transportation authorities even hope to provide a universal service on less busy routes? It is hard to see why they should not make contractual arrangements with private providers. Nor is there any good economic reason they could not adopt the algorithmic matching model themselves, although the organisational effort might defeat many.

However, in ageing societies the big prize will be organising other services such as adult social care this way. These are inherently person to person and there are few economies of scale. The financial pressures on governments in delivering care are only going to grow. Adopting a more efficient model for matching demand and supply is so obvious a possible solution that pilot schemes are under way in several cities — both public sector-led and private start-ups. In fact, if public authorities do not try new models, the private sector will certainly fill the gap….(More)”.

Normative Challenges of Identification in the Internet of Things: Privacy, Profiling, Discrimination, and the GDPR


Paper by Sandra Wachter: “In the Internet of Things (IoT), identification and access control technologies provide essential infrastructure to link data between a user’s devices with unique identities, and provide seamless and linked up services. At the same time, profiling methods based on linked records can reveal unexpected details about users’ identity and private life, which can conflict with privacy rights and lead to economic, social, and other forms of discriminatory treatment. A balance must be struck between identification and access control required for the IoT to function and user rights to privacy and identity. Striking this balance is not an easy task because of weaknesses in cybersecurity and anonymisation techniques.

The EU General Data Protection Regulation (GDPR), set to come into force in May 2018, may provide essential guidance to achieve a fair balance between the interests of IoT providers and users. Through a review of academic and policy literature, this paper maps the inherent tension between privacy and identifiability in the IoT.

It focuses on four challenges: (1) profiling, inference, and discrimination; (2) control and context-sensitive sharing of identity; (3) consent and uncertainty; and (4) honesty, trust, and transparency. The paper will then examine the extent to which several standards defined in the GDPR will provide meaningful protection for privacy and control over identity for users of IoT. The paper concludes that in order to minimise the privacy impact of the conflicts between data protection principles and identification in the IoT, GDPR standards urgently require further specification and implementation into the design and deployment of IoT technologies….(More)”.

How the Index Card Cataloged the World


Daniela Blei in the Atlantic: “…The index card was a product of the Enlightenment, conceived by one of its towering figures: Carl Linnaeus, the Swedish botanist, physician, and the father of modern taxonomy. But like all information systems, the index card had unexpected political implications, too: It helped set the stage for categorizing people, and for the prejudice and violence that comes along with such classification….

In 1780, two years after Linnaeus’s death, Vienna’s Court Library introduced a card catalog, the first of its kind. Describing all the books on the library’s shelves in one ordered system, it relied on a simple, flexible tool: paper slips. Around the same time that the library catalog appeared, says Krajewski, Europeans adopted banknotes as a universal medium of exchange. He believes this wasn’t a historical coincidence. Banknotes, like bibliographical slips of paper and the books they referred to, were material, representational, and mobile. Perhaps Linnaeus took the same mental leap from “free-floating banknotes” to “little paper slips” (or vice versa). Sweden’s great botanist was also a participant in an emerging capitalist economy.

Linnaeus never grasped the full potential of his paper technology. Born of necessity, his paper slips were “idiosyncratic,” say Charmantier and Müller-Wille. “There is no sign he ever tried to rationalize or advertise the new practice.” Like his taxonomical system, paper slips were both an idea and a method, designed to bring order to the chaos of the world.

The passion for classification, a hallmark of the Enlightenment, also had a dark side. From nature’s variety came an abiding preoccupation with the differences between people. As soon as anthropologists applied Linnaeus’s taxonomical system to humans, the category of race, together with the ideology of racism, was born.

It’s fitting, then, that the index card would have a checkered history. To take one example, the FBI’s J. Edgar Hoover used skills he burnished as a cataloger at the Library of Congress to assemble his notorious “Editorial Card Index.” By 1920, he had cataloged 200,000 subversive individuals and organizations in detailed, cross-referenced entries. Nazi ideologues compiled a deadlier index-card database to classify 500,000 Jewish Germans according to racial and genetic background. Other regimes have employed similar methods, relying on the index card’s simplicity and versatility to catalog enemies real and imagined.

The act of organizing information—even notes about plants—is never neutral or objective. Anyone who has used index cards to plan a project, plot a story, or study for an exam knows that hierarchies are inevitable. Forty years ago, Michel Foucault observed in a footnote that, curiously, historians had neglected the invention of the index card. The book was Discipline and Punish, which explores the relationship between knowledge and power. The index card was a turning point, Foucault believed, in the relationship between power and technology. Like the categories they cataloged, Linnaeus’s paper slips belong to the history of politics as much as the history of science….(More)”.

Behind the Screen: the Syrian Virtual Resistance


Billie Jeanne Brownlee at Cyber Orient: “Six years have gone by since the political upheaval that swept through many Middle East and North African (MENA) countries began. Syria was caught in the grip of this revolutionary moment, one that drove the country from a peaceful popular mobilisation to a deadly fratricidal civil war with no apparent way out.

This paper provides an alternative approach to the study of the root causes of the Syrian uprising by examining the impact that the development of new media had in reconstructing forms of collective action and social mobilisation in pre-revolutionary Syria.

By providing evidence of a number of significant initiatives, campaigns and acts of contentious politics that occurred between 2000 and 2011, this paper shows that scholarly work on Syria prior to 2011 did not give sufficient theoretical and empirical consideration to the expressions of dissent and resilience that developed in its cyberspace, or to the informal and hybrid civic engagement they produced….(More)”.

The nation state goes virtual


Tom Symons at Nesta’s Predictions for 2018: “As the world changes, people expect their governments and public services to do so too. When it’s easy to play computer games with someone on the other side of the world, or set up a company bank account in five minutes, there is an expectation that paying taxes, applying for services or voting should be just as easy….

To add to this, large political upheavals such as Brexit and the election of Donald Trump have left some people feeling alienated from their national identity. Since the UK voted to leave the EU, demand for Irish passports has increased by 50 per cent, a sign that people feel dissatisfied by the constraints of geographically determined citizenship when they can no longer relate to their national identity.

In response, some governments see these changes as an opportunity to reconceptualise what we mean by a nation state.

The e-Residency offer

The primary actor in this disruption is Estonia, which leads the world in digital government. In 2015 it introduced e-Residency, allowing anyone anywhere in the world to receive a government-issued digital identity. The e-Residency gives people access to digital public services and the ability to register and run online businesses from the country, in exactly the same way as someone born in Estonia. As of November 2017, over 27,000 people have applied to be Estonian e-Residents, and they have established over 4,200 companies. Estonia aims to have ten million virtual residents by 2025….

While Estonia is a sovereign nation using technology to redefine itself, there are movements taking advantage of decentralising technologies in a bid to do away with the nation state altogether. Bitnation is a blockchain-based technology which enables people to create and join virtual nations. This allows people to agree their own social contracts between one another, using smart contract technology, removing the need for governments as an administrator or mediator. Since it began in 2014, it has been offering traditional government services, such as notaries, dispute resolution, marriages and voting systems, without the need for a middleman.

As of November 2017, there are over 10,000 Bitnation citizens. …

As citizens, we may be able to educate our children in Finland, access healthcare from South Korea and run our businesses in New Zealand, all without having to leave the comfort of our homes. Governments may see this as a means of financial sustainability in the longer term, generating income by selling such services to a global population instead of centralised taxation systems levied on a geographic population.

Such a model has been described as ‘nation-as-a-service’, and could mean countries offering different tiers of citizenship, with taxes based on the number of services used, or tier of citizenship chosen. This could also mean multiple citizenships, including of city-states, as well as nations….

This is the moment for governments to start taking the full implications of the digital age seriously. From electronic IDs and data management through to seamless access to services, citizens will only demand better digital services. Countries such as Azerbaijan are already developing their own versions of the e-Residency. Large internet platforms such as Amazon are gearing up to replace entire government functions. If governments don’t grasp the nettle, they may find themselves left behind by technology and other countries which won’t wait around for them….(More)”.

Proliferation of Open Government Initiatives and Systems


Book edited by Ayse Kok: “As is true in most aspects of daily life, the expansion of government in the modern era has included a move to a technologically-based system. A method of evaluation for such online governing systems is necessary for effective political management worldwide.

Proliferation of Open Government Initiatives and Systems is an essential scholarly publication that analyzes open government data initiatives to evaluate the impact and value of such structures. Featuring coverage on a broad range of topics including collaborative governance, civic responsibility, and public financial management, this publication is geared toward academicians and researchers seeking current, relevant research on the evaluation of open government data initiatives….(More)”.