Catch Me Once, Catch Me 218 Times


Josh Kaplan at Slate: “…It was 2010, and the San Diego County Sheriff’s Department had recently rolled out a database called GraffitiTracker—software also used by police departments in Denver and Los Angeles County—and over the previous year, they had accumulated a massive set of images that included a couple hundred photos with his moniker. Painting over all Kyle’s handiwork, prosecutors claimed, had cost the county almost $100,000, and that sort of damage came with life-changing consequences. Ultimately, he made a plea deal: one year of incarceration, five years of probation, and more than $87,000 in restitution.

Criticism of police technology often gets mired in the complexities of the algorithms involved—the obscurity of machine learning, the feedback loops, the potential for racial bias and error. But GraffitiTracker can tell us a lot about data-driven policing in part because the concept is so simple. Whenever a public works crew goes to clean up graffiti, before they paint over it, they take a photo and put it in the county database. Since taggers tend to paint the same moniker over and over, whenever someone is caught for vandalism, police can now search the database for their pseudonym and get evidence of all the graffiti they’ve ever done.
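The power of the system comes less from algorithmic sophistication than from simple indexing. A minimal sketch of that logic is below; the class names, fields, and figures are hypothetical, not drawn from GraffitiTracker itself.

```python
# Hypothetical sketch of a moniker-indexed graffiti incident log.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Incident:
    moniker: str
    photo_path: str
    location: str
    cleanup_cost: float

class GraffitiDB:
    """Moniker-indexed log of graffiti cleanups."""

    def __init__(self):
        self._by_moniker = defaultdict(list)  # moniker -> [Incident, ...]

    def log(self, incident: Incident) -> None:
        # Called when a crew photographs a tag before painting over it.
        self._by_moniker[incident.moniker.lower()].append(incident)

    def search(self, moniker: str) -> list:
        # Once a tagger is caught, one query surfaces every logged tag.
        return self._by_moniker[moniker.lower()]

    def total_restitution(self, moniker: str) -> float:
        # Sum of cleanup costs across all incidents tied to the moniker.
        return sum(i.cleanup_cost for i in self.search(moniker))

db = GraffitiDB()
db.log(Incident("KYLE", "img_0417.jpg", "I-5 overpass", 450.0))
db.log(Incident("kyle", "img_0533.jpg", "park restroom wall", 310.0))

print(len(db.search("Kyle")))        # 2 incidents on file
print(db.total_restitution("Kyle"))  # 760.0: every tag ever logged counts
```

A single arrest thus converts one incident into the tagger's entire logged history, which is what drives the restitution figures described below.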

In San Diego County, this has radically changed the way that graffiti is prosecuted and has pumped up the punishment for taggers—many of whom are minors—to levels otherwise unthinkable. The results have been lucrative. In 2011, the first year San Diego started using GraffitiTracker countywide (a few San Diego jurisdictions already had it in place), the amount of restitution received for graffiti jumped from about $170,000 to almost $800,000. Roughly $300,000 of that came from juvenile cases. For the jurisdictions that weren’t already using GraffitiTracker, the jump was even more stark: The annual total went from $45,000 to nearly $400,000. In these cities, the average restitution ordered in adult cases went from $1,281 to $5,620, and at the same time, the number of cases resulting in restitution tripled. (San Diego has said it makes prosecuting vandalism easier.)

Almost a decade later, San Diego County and other jurisdictions are still using GraffitiTracker, yet it’s received very little media attention, despite the startling consequences for vandalism prosecution. But its implications extend far beyond tagging. GraffitiTracker presaged a deeper problem with law enforcement’s ability to use technology to connect people to crimes that, as Deputy District Attorney Melissa Ocampo put it to me, “they thought they got away with.”…(More)”.

The Bad Pupil


CCCBLab: “In recent years we have been witnessing a constant trickle of news on artificial intelligence, machine learning and computer vision. We are told that machines learn, see, create… and all this builds up a discourse based on novelty, on a possible future and on a series of worries and hopes. It is sometimes difficult, in this landscape, to figure out which are real developments and which are fantasies or warnings. And, undoubtedly, the fog that surrounds these tools forms part of the power that we grant them, both in the present and on credit, and of the negative and positive concerns that they arouse in us. Many of these discourses may fall into the field of false debates or, at least, of the return of old debates. Thus, in the classical artistic field, associated with the discourse on creation and authorship, there is discussion regarding the status to be awarded to images created with these tools. (Yet wasn’t the argument against photography in art that it was an image created automatically and without human participation? And wasn’t that also an argument in favour of taking it up and using it to put an end to a certain idea of art?)

Metaphors are essential in the discourse on all digital tools and the power that they have. Are expressions such as “intelligence”, “vision”, “learning”, “neural” and the entire range of similar words the most adequate for defining these types of tools? Probably not, especially if their metaphorical nature is sidestepped. We would not understand them in the same way if we called them tools of probabilistic classification, or if instead of saying that an artificial intelligence “has painted” a Rembrandt, we said that it has produced a statistical reproduction of his style (something which is still surprising, and to be celebrated, of course). These names construct an entity for these tools that endows them with a supposed autonomy and independence upon which their future authority is based.

Because that is what it’s about in many discourses: constructing a characterisation that legitimises an objective or non-human capacity in data analysis….

We now find ourselves at what is probably the moment of the first cultural reception of these tools. From their development in research fields, and from the applications already derived from them, we are moving on to their presence in public discourse. It is in this situation and context, where we do not fully know the breadth and characteristics of these technologies (meaning fears are more abstract and diffuse and, thus, more present and powerful), that it is especially important to understand what we are talking about, to appropriate the tools and to intervene in the discourses. Before their possibilities are restricted and solidified until they seem indisputable, it is necessary to experiment with them and reflect on them, taking advantage of the fact that we can still easily perceive them as in creation, malleable and open.

In our projects The Bad Pupil. Critical pedagogy for artificial intelligences and Latent Spaces. Machinic Imaginations we have tried to approach these tools and their imaginary. In the statement of intentions of the former, we expressed our desire, in the face of the regulatory context and the metaphor of machine learning, to defend the bad pupil as one who escapes the norm. We also argued that, faced with an artificial intelligence that seeks to replicate the human on inhuman scales, it is necessary to defend and construct a non-mimetic one that produces unexpected relations and images.

Fragment of De zeven werken van barmhartigheid, Meester van Alkmaar, 1504 (Rijksmuseum, Amsterdam), analysed with YOLO9000 | The Bad Pupil – Estampa

Both projects are also attempts to appropriate these tools, which means, first of all, escaping industrial barriers and their standards. In this field, in which mass data are an asset within reach of big companies, employing quantitatively poor datasets and non-industrial computing capacities is not just a necessity but a demand….(More)”.

Privacy’s not dead. It’s just not evenly distributed


Alex Pasternack in Fast Company: “In the face of all the data abuse, many of us have, quite reasonably, thrown up our hands. But privacy didn’t die. It’s just been beaten up, sold, obscured, diffused unevenly across society. What privacy is and why it matters increasingly depends upon who you are, your age, your income, gender, ethnicity, where you’re from, and where you live. To borrow William Gibson’s famous quote about the future and its unevenness and inequalities, privacy is alive—it’s just not evenly distributed. And while we don’t all care about it the same way—we’re even divided on what exactly privacy is—its harms are still real. Even when our own privacy isn’t violated, privacy violations can still hurt us.

Privacy is personal, from the creepy feeling that our phones are literally listening, to the endless parade of data breaches that test our ability to care anymore. It’s the unsettling feeling of giving “consent” without knowing what that means, “agreeing” to contracts we didn’t read with companies we don’t really trust. (Forget about understanding all the details; researchers have shown that most privacy policies surpass the reading level of the average person.)

It’s the data about us that’s harvested, bought, sold, and traded by an obscure army of data brokers without our knowledge, feeding marketers, landlords, employers, immigration officials, insurance companies, debt collectors, as well as stalkers and who knows who else. It’s the body camera or the sports arena or the social network capturing your face for who knows what kind of analysis. Don’t think of personal data as just “data.” As it gets more detailed and more correlated, increasingly, our data is us.

And “privacy” isn’t just privacy. It’s also tied up with security, freedom, social justice, free speech, and free thought. Privacy harms aren’t only personal, but societal. It’s not just the multibillion-dollar industry that aims to nab you and nudge you, but the multibillion-dollar spyware industry that helps governments nab dissidents and send them to prison or worse. It’s the supposedly fair and transparent algorithms that aren’t, turning our personal data into risk scores that can help perpetuate race, class, and gender divides, often without our knowing it.

Privacy is about dark ads bought with dark money and the micro-targeting of voters by overseas propagandists or by political campaigns at home. That kind of influence isn’t just the promise of a shadowy Cambridge Analytica or state-run misinformation campaigns, but also the premise of modern-day digital ad campaigns. (Note that Facebook’s research division later hired one of the researchers behind the Cambridge app.) And as the micro-targeting gets more micro, the tech giants that deal in ads are only getting more macro….(More)”

(This story is part of The Privacy Divide, a series that explores the fault lines and disparities–economic, cultural, philosophical–that have developed around digital privacy and its impact on society.)

How data collected from mobile phones can help electricity planning


Article by Eduardo Alejandro Martínez Ceseña, Joseph Mutale, Mathaios Panteli, and Pierluigi Mancarella in The Conversation: “Access to reliable and affordable electricity brings many benefits. It supports the growth of small businesses, allows students to study at night and protects health by offering an alternative to cooking with coal or wood.

Great efforts have been made to increase electrification in Africa, but rates remain low. In sub-Saharan Africa, only 42% of urban areas have access to electricity, and just 22% of rural areas.

This is mainly because there’s not enough sustained investment in electricity infrastructure, many systems can’t reliably support energy consumption, or the price of electricity is too high.

Innovation is often seen as the way forward. For instance, cheaper and cleaner technologies, like solar storage systems deployed through mini grids, can offer a more affordable and reliable option. But, on their own, these solutions aren’t enough.

To design the best systems, planners must know where on- or off-grid systems should be placed, how big they need to be and what type of energy should be used for the most effective impact.

The problem is that reliable data – like village size and energy demand – needed for rural energy planning is scarce or non-existent. Some of it can be estimated from records of human activities – like farming or access to schools and hospitals – which can indicate energy needs. But many developing countries have to rely on human activity data from incomplete and poorly maintained national censuses. This leads to inefficient planning.

In our research we found that data from mobile phones offer a solution. They provide a new source of information about what people are doing and where they’re located.

In sub-Saharan Africa, there are more people with mobile phones than with access to electricity, as people are willing to commute to get a signal or charge their phones.

This means that there’s an abundance of data – that’s constantly updated and available even in areas that haven’t been electrified – that could be used to optimise electrification planning….

We were able to use mobile data to develop a countrywide electrification strategy for Senegal. Although Senegal has one of the highest electricity access rates in sub-Saharan Africa, just 38% of people in rural areas have access.

By using mobile data we were able to identify the approximate size of rural villages and their access to education and health facilities. This information was then used to size and cost different electrification options and select the most economic one for each zone – whether villages should be connected to the grid, or whether off-grid systems – like solar battery systems – were a better option.

To collect the data, we randomly selected mobile phone data from 450,000 users of Senegal’s main telecoms provider, Sonatel, to understand exactly how information from mobile phones could be used. This included the locations of users and the characteristics of the places where they live….(More)”
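The selection logic the authors describe lends itself to a toy least-cost comparison. The sketch below is an assumption-laden stand-in for their actual model; every constant, name, and number is invented for the example.

```python
# Hypothetical least-cost comparison: grid extension vs. off-grid system,
# with mobile-phone activity as a proxy for village demand.
from dataclasses import dataclass

@dataclass
class Village:
    name: str
    active_phone_users: int  # demand proxy, e.g. from anonymised call records
    km_to_grid: float        # distance to the nearest existing line

KWH_PER_USER_YEAR = 150.0    # assumed annual demand per detected user
GRID_COST_PER_KM = 20_000.0  # assumed line-extension cost
GRID_COST_PER_KWH = 0.10     # assumed grid energy cost
OFFGRID_COST_PER_KWH = 0.35  # assumed solar-plus-battery cost, no line needed
YEARS = 20                   # planning horizon

def lifetime_demand_kwh(v: Village) -> float:
    return v.active_phone_users * KWH_PER_USER_YEAR * YEARS

def cheapest_option(v: Village) -> str:
    demand = lifetime_demand_kwh(v)
    grid = v.km_to_grid * GRID_COST_PER_KM + demand * GRID_COST_PER_KWH
    offgrid = demand * OFFGRID_COST_PER_KWH
    return "grid" if grid <= offgrid else "off-grid"

for v in [Village("A", 900, 4.0), Village("B", 120, 35.0)]:
    print(v.name, cheapest_option(v))  # A -> grid, B -> off-grid
```

In the real study the demand estimates came from the mobile data itself rather than a flat per-user constant, but the structure of the decision is the same: cost each option against estimated demand, then pick the cheapest per zone.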

When Patients Become Innovators


Article by Harold DeMonaco, Pedro Oliveira, Andrew Torrance, Christiana von Hippel, and Eric von Hippel: “Patients are increasingly able to conceive and develop sophisticated medical devices and services to meet their own needs — often without any help from companies that produce or sell medical products. This “free” patient-driven innovation process enables them to benefit from important advances that are not commercially available. Patient innovation also can provide benefits to companies that produce and sell medical devices and services. For them, patient do-it-yourself efforts can be free R&D that informs and amplifies in-house development efforts.

In this article, we will look at two examples of free innovation in the medical field — one for managing type 1 diabetes and the other for managing Crohn’s disease. We will set these cases within the context of the broader free innovation movement that has been gaining momentum in an array of industries1 and apply the general lessons of free innovation to the specific circumstances of medical innovation by patients….

What is striking about both of these cases is that neither commercial medical producers nor the clinical care system offered a solution that these patients urgently needed. Motivated patients stepped forward to develop solutions for themselves, entirely without commercial support.4

Free innovation in the medical field follows the general pattern seen in many other areas, including crafts, sporting goods, home and garden equipment, pet products, and apparel.5 Enabled by technology, social media, and a keen desire to find solutions aligned with their own needs, consumers of all kinds are designing new products for themselves….(More)”


Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems


Introduction by A.F. Winfield, K. Michael, J. Pitt, and V. Evers to Special Issue of Proceedings of the IEEE: “…The primary focus of this special issue is machine ethics, that is, the question of how autonomous systems can be imbued with ethical values. Ethical autonomous systems are needed because, inevitably, near-future systems are moral agents; consider driverless cars, or medical diagnosis AIs, both of which will need to make choices with ethical consequences. This special issue includes papers that describe both implicit ethical agents, that is, machines designed to avoid unethical outcomes, and explicit ethical agents: machines which either encode or learn ethics and determine actions based on those ethics. Of course, ethical machines are socio-technical systems; thus, as a secondary focus, this issue includes papers that explore the societal and regulatory implications of machine ethics, including the question of ethical governance. Ethical governance is needed in order to develop standards and processes that allow us to transparently and robustly assure the safety of ethical autonomous systems and hence build public trust and confidence….(More)”.
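To make the distinction concrete, here is one way an explicit ethical agent could be sketched in code: candidate actions are filtered through an encoded ethical constraint before the most useful permitted action is chosen. The scenario, threshold, and numbers are entirely hypothetical and are not drawn from the special issue.

```python
# Hypothetical "explicit ethical agent": an encoded constraint vetoes
# candidate actions before utility-based selection.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    utility: float       # task benefit toward the system's goal
    risk_of_harm: float  # estimated probability of harming a human

MAX_ACCEPTABLE_RISK = 0.01  # an encoded ethical rule, set by the designer

def permitted(action: Action) -> bool:
    # Encoded constraint: never accept actions above the harm threshold.
    return action.risk_of_harm <= MAX_ACCEPTABLE_RISK

def choose(actions: list[Action]):
    # Pick the most useful action among those the ethics layer permits;
    # returns None if nothing is permitted.
    allowed = [a for a in actions if permitted(a)]
    return max(allowed, key=lambda a: a.utility, default=None)

options = [
    Action("swerve_into_crowd", utility=0.9, risk_of_harm=0.8),
    Action("brake_hard", utility=0.6, risk_of_harm=0.005),
]
print(choose(options).name)  # brake_hard: high utility alone doesn't win
```

A learning-based explicit agent would replace the fixed threshold with learned judgments, which is precisely where the governance questions the issue raises become pressing.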

China, India and the rise of the ‘civilisation state’


Gideon Rachman at the Financial Times: “The 19th century popularised the idea of the “nation state”. The 21st could be the century of the “civilisation state”. A civilisation state is a country that claims to represent not just a historic territory or a particular language or ethnic group, but a distinctive civilisation.

It is an idea that is gaining ground in states as diverse as China, India, Russia, Turkey and even the US. The notion of the civilisation state has distinctly illiberal implications. It implies that attempts to define universal human rights or common democratic standards are wrong-headed, since each civilisation needs political institutions that reflect its own unique culture. The idea of a civilisation state is also exclusive. Minority groups and migrants may never fit in because they are not part of the core civilisation.

One reason that the idea of the civilisation state is likely to gain wider currency is the rise of China. In speeches to foreign audiences, President Xi Jinping likes to stress the unique history and civilisation of China. This idea has been promoted by pro-government intellectuals, such as Zhang Weiwei of Fudan University. In an influential book, The China Wave: Rise of a Civilisational State, Mr Zhang argues that modern China has succeeded because it has turned its back on western political ideas — and instead pursued a model rooted in its own Confucian culture and exam-based meritocratic traditions. Mr Zhang was adapting an idea first elaborated by Martin Jacques, a western writer, in a bestselling book, When China Rules The World. “China’s history of being a nation state”, Mr Jacques argues, “dates back only 120-150 years: its civilisational history dates back thousands of years.” He believes that the distinct character of Chinese civilisation leads to social and political norms that are very different from those prevalent in the west, including “the idea that the state should be based on familial relations [and] a very different view of the relationship between the individual and society, with the latter regarded as much more important”. …

Civilisational views of the state are also gaining ground in Russia. Some of the ideologues around Vladimir Putin now embrace the idea that Russia represents a distinct Eurasian civilisation, which should never have sought to integrate with the west. In a recent article Vladislav Surkov, a close adviser to the Russian president, argued that his country’s “repeated fruitless efforts to become a part of western civilisation are finally over”. Instead, Russia should embrace its identity as “a civilisation that has absorbed both east and west” with a “hybrid mentality, intercontinental territory and bipolar history. It is charismatic, talented, beautiful and lonely. Just as a half-breed should be.” In a global system moulded by the west, it is unsurprising that some intellectuals in countries such as China, India or Russia should want to stress the distinctiveness of their own civilisations.

What is more surprising is that rightwing thinkers in the US are also retreating from the idea of “universal values” — in favour of emphasising the unique and allegedly endangered nature of western civilisation….(More)”.

Systems change and philanthropy


Introduction by Julian Corner to Special Issue of Alliance: “This special feature explores a growing aspiration in philanthropy to achieve system-level change. It looks at the potential and pitfalls by profiling a number of approaches adopted by different foundations….

While the fortunes of systems thinking have ebbed and flowed over the decades, it has mainly been practised on the margins of organisations. This time something different seems to be happening, at least in terms of philanthropy. A number of major foundations are embracing systems approaches as a core methodology. How should we understand this?…

I detect at least four broad approaches or attitudes to systems in foundations’ work, all of which have been at play in Lankelly Chase’s work at different points:

1. The system as a unit of intervention
Many foundations are trying to take in a broader canvas, recognising that both problems and solutions are generated by the interplay of multiple variables. They hope to find leverage points among these variables, so that their investment can unlock so-called system-level change. Some of their strategies include: working for policy changes, scaling disruptive innovations, supporting advocacy for people’s rights, and improving the evidence base used by system actors. These approaches seem to work best when there is common agreement on an identifiable system, such as the criminal justice system, which can be mapped and acted on.

2. Messy contested systems
Some foundations find they are drawn deeper into complexity. They unearth conflicting perspectives on the nature of the problem, especially when there is a power inequality between those defining it and those experiencing it. As greater interconnection emerges, the frame put around the canvas is shown to be arbitrary and the hope of identifying leverage points begins to look reductive. One person’s solution turns out to be another’s problem. Unable to predict how change might occur, foundations shift towards more exploratory and inquiring approaches. Rather than funding programmes or institutions, they seek to influence the conditions of change, focusing on collaborations, place-based approaches, collective impact, amplifying lesser heard voices, building skills and capacities, and reframing the narratives people hold.

3. Seeing yourself in the system
As appreciation of interconnection deepens, the way foundations earn money, how they make decisions, the people they choose to include in (and exclude from) their work, how they specify success, all come into play as parts of the system that need to change. These foundations realise that they aren’t just looking at a canvas, they are part of it. At Lankelly Chase, we now view our position as fundamentally paradoxical, given that we are seeking to tackle inequality by holding accumulated wealth. We have sought to model the behaviours of healthier systems, including delegated decision-making, mutual accountability, trust-based relationships, promoting equality of voice. By aiming for congruence between means and ends, we and our peers contend that effective practice and ethical practice become the same.

4. Beyond systems
There comes a point when the idea of systems itself can feel reductive. Different values are invoked: those of kindness and solidarity. The basis on which humans relate to each other becomes the core concern. Inspiration is sought in other histories and forms of spirituality, as suppressed narratives are surfaced. The frame of philanthropy itself is no longer a given, with mutuality and even reparation becoming the basis of an alternative paradigm.

….Foundations can be viewed as both ‘of’ and ‘outside’ any system. This is a tension that isn’t resolvable, but one that, if handled with sufficient self-awareness, could make foundations powerful systems practitioners….(More)”.


Seeing and Being Seen


Russell C. Bogue in The Hedgehog Review: “On May 20, 2013, a pale, nervous American landed in Hong Kong and made his way to the Mira Hotel. Once there, he met with reporters from The Guardian and the Washington Post and turned over thousands of documents his high-level security clearance had enabled him to acquire while working as a contractor for the National Security Agency. Soon after this exchange, the world learned about PRISM, a top-secret NSA program that granted (court-ordered) direct access to Facebook, Apple, Google, and other US Internet giants, including users’ search histories, e-mails, file transfers, and live chats.1 Additionally, Verizon had been providing information to the NSA on an “ongoing, daily basis” about customers’ telephone calls, including location data and call duration (although not the content of conversations).2 Everyone, in short, was being monitored. Glenn Greenwald, one of the first journalists to meet with Edward Snowden, and one of his most vocal supporters, wrote later that “the NSA is collecting all forms of electronic communications between Americans…and thereby attempting by definition to destroy any remnants of privacy both in the US and globally.”3

According to a 2014 Pew Research Center poll, fully 91 percent of Americans believe they have lost control over their personal information.4 What is such a public to do? Anxious computer owners have taken to covering their devices’ built-in cameras with bits of tape.5 Messaging services tout their end-to-end encryption.6 Researchers from Harvard Business School have started investigating the effectiveness of those creepy online ads that seem to know a little too much about your preferences.7

For some, this pushback has come far too late to be of any use. In a recent article in The Atlantic depressingly titled “Welcome to the Age of Privacy Nihilism,” Ian Bogost observes that we have already become unduly reliant on services that ask us to relinquish personal data in exchange for convenience. To reassert control over one’s privacy, one would have to abstain from credit card activity and use the Internet only sparingly. The worst part? We don’t get the simple pleasure of blaming this state of affairs on Big Government or the tech giants. Instead, our enemy is, as Bogost intones, “a hazy murk, a chilling, Lovecraftian murmur that can’t be seen, let alone touched, let alone vanquished.”8

The enemy may be a bit closer to home, however. While we fear being surveilled, recorded, and watched, especially when we are unaware, we also compulsively expose ourselves to others….(More)”.

Is Ethical A.I. Even Possible?


Cade Metz at The New York Times: “When a news article revealed that Clarifai was working with the Pentagon and some employees questioned the ethics of building artificial intelligence that analyzed video captured by drones, the company said the project would save the lives of civilians and soldiers.

“Clarifai’s mission is to accelerate the progress of humanity with continually improving A.I.,” read a blog post from Matt Zeiler, the company’s founder and chief executive, and a prominent A.I. researcher. Later, in a news media interview, Mr. Zeiler announced a new management position that would ensure all company projects were ethically sound.

As activists, researchers, and journalists voice concerns over the rise of artificial intelligence, warning against biased, deceptive and malicious applications, the companies building this technology are responding. From tech giants like Google and Microsoft to scrappy A.I. start-ups, many are creating corporate principles meant to ensure their systems are designed and deployed in an ethical way. Some set up ethics officers or review boards to oversee these principles.

But tensions continue to rise as some question whether these promises will ultimately be kept. Companies can change course. Idealism can bow to financial pressure. Some activists — and even some companies — are beginning to argue that the only way to ensure ethical practices is through government regulation....

As companies and governments deploy these A.I. technologies, researchers are also realizing that some systems are woefully biased. Facial recognition services, for instance, can be significantly less accurate when trying to identify women or someone with darker skin. Other systems may include security holes unlike any seen in the past. Researchers have shown that driverless cars can be fooled into seeing things that are not really there.
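The driverless-car result rests on adversarial examples: tiny, targeted input perturbations that flip a model's output. The sketch below illustrates the core idea on a toy linear classifier; it is a simplified illustration of the general technique, not the attack used against real vision systems.

```python
# Toy adversarial example: flip a linear classifier's decision with a
# small, uniform-magnitude change to every input feature.
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=64)  # fixed "model" weights for a 64-pixel input
b = 0.0
x = rng.normal(size=64)  # the original "image"

def predict(v):
    # Class 1 if the linear score is positive, else class 0.
    return int(w @ v + b > 0)

score = w @ x + b
# Smallest per-pixel step, in the sign-of-gradient direction, that flips
# the sign of the score; for a linear model the gradient is just w.
eps = (abs(score) + 1e-3) / np.sum(np.abs(w))
x_adv = x - np.sign(score) * eps * np.sign(w)

print("original prediction:   ", predict(x))
print("adversarial prediction:", predict(x_adv))
print("max per-pixel change:   %.4f" % np.max(np.abs(x_adv - x)))
```

Deep vision models are attacked the same way, with the gradient computed through the network instead of read off directly, which is why perturbations invisible to a human can make a car "see" things that are not there.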

All this means that building ethical artificial intelligence is an enormously complex task. It gets even harder when stakeholders realize that ethics are in the eye of the beholder.

As some Microsoft employees protest the company’s military contracts, Microsoft’s president, Brad Smith, said that American tech companies had long supported the military and that they must continue to do so. “The U.S. military is charged with protecting the freedoms of this country,” he told the conference. “We have to stand by the people who are risking their lives.”

Though some Clarifai employees draw an ethical line at autonomous weapons, others do not. Mr. Zeiler argued that autonomous weapons will ultimately save lives because they would be more accurate than weapons controlled by human operators. “A.I. is an essential tool in helping weapons become more accurate, reducing collateral damage, minimizing civilian casualties and friendly fire incidents,” he said in a statement.

Google worked on the same Pentagon project as Clarifai, and after a protest from company employees, the tech giant ultimately ended its involvement. But like Clarifai, as many as 20 other companies have worked on the project without bowing to ethical concerns.

After the controversy over its Pentagon work, Google laid down a set of “A.I. principles” meant as a guide for future projects. But even with the corporate rules in place, some employees left the company in protest. The new principles are open to interpretation. And they are overseen by executives who must also protect the company’s financial interests….

In their open letter, the Clarifai employees said they were unsure whether regulation was the answer to the many ethical questions swirling around A.I. technology, arguing that the immediate responsibility rested with the company itself….(More)”.