The Smartness Mandate


Book by Orit Halpern and Robert Mitchell: “Smart phones. Smart cars. Smart homes. Smart cities. The imperative to make our world ever smarter in the face of increasingly complex challenges raises several questions: What is this “smartness mandate”? How has it emerged, and what does it say about our evolving way of understanding—and managing—reality? How have we come to see the planet and its denizens first and foremost as data-collecting instruments?

In The Smartness Mandate, Orit Halpern and Robert Mitchell radically suggest that “smartness” is not primarily a technology, but rather an epistemology. Through this lens, they offer a critical exploration of the practices, technologies, and subjects that such an understanding relies upon—above all, artificial intelligence and machine learning. The authors approach these not simply as techniques for solving problems of calculation, but rather as modes of managing life (human and other) in terms of neo-Darwinian evolution, distributed intelligences, and “resilience,” all of which have serious implications for society, politics, and the environment.

The smartness mandate constitutes a new form of planetary governance, and Halpern and Mitchell aim to map the logic of this seemingly inexorable and now naturalized demand to compute, illuminate the genealogy of how we arrived here, and point to alternative imaginaries of the possibilities and potentials of smart technologies and infrastructures…(More)”.

Data Is What Data Does: Regulating Use, Harm, and Risk Instead of Sensitive Data


Paper by Daniel J. Solove: “Heightened protection for sensitive data is becoming quite trendy in privacy laws around the world. Originating in European Union (EU) data protection law and included in the EU’s General Data Protection Regulation (GDPR), sensitive data singles out certain categories of personal data for extra protection. Commonly recognized special categories of sensitive data include racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, health, sexual orientation and sex life, biometric data, and genetic data.

Although heightened protection for sensitive data appropriately recognizes that not all situations involving personal data should be protected uniformly, the sensitive data approach is a dead end. The sensitive data categories are arbitrary and lack any coherent theory for identifying them. The borderlines of many categories are so blurry that they are useless. Moreover, it is easy to use non-sensitive data as a proxy for certain types of sensitive data.

Personal data is akin to a grand tapestry, with different types of data interwoven to a degree that makes it impossible to separate out the strands. With Big Data and powerful machine learning algorithms, most non-sensitive data can give rise to inferences about sensitive data. In many privacy laws, data that can give rise to inferences about sensitive data is also protected as sensitive data. Arguably, then, nearly all personal data can be sensitive, and the sensitive data categories can swallow up everything. As a result, most organizations are currently processing a vast amount of data in violation of the laws.

This Article argues that the problems with the sensitive data approach make it unworkable and counterproductive — as well as expose a deeper flaw at the root of many privacy laws. These laws make a fundamental conceptual mistake — they embrace the idea that the nature of personal data is a sufficiently useful focal point for the law. But nothing meaningful for regulation can be determined solely by looking at the data itself. Data is what data does. Personal data is harmful when its use causes harm or creates a risk of harm. It is not harmful if it is not used in a way to cause harm or risk of harm.

To be effective, privacy law must focus on use, harm, and risk rather than on the nature of personal data. The implications of this point extend far beyond sensitive data provisions. In many elements of privacy laws, protections should be based on the use of personal data and proportionate to the harm and risk involved with those uses…(More)”.

The Doctor Who Wasn’t There: Technology, History, and the Limits of Telehealth


Book by Jeremy A. Greene: “The Doctor Who Wasn’t There traces the long arc of enthusiasm for—and skepticism of—electronic media in health and medicine. Over the past century, a series of new technologies promised to democratize access to healthcare. From the humble telephone to the connected smartphone, from FM radio to wireless wearables, from cable television to the “electronic brains” of networked mainframe computers: each new platform has promised a radical reformation of the healthcare landscape. With equal attention to the history of technology, the history of medicine, and the politics and economies of American healthcare, physician and historian Jeremy A. Greene explores the role that electronic media play, for better and for worse, in the past, present, and future of our health.

Today’s telehealth devices are far more sophisticated than the hook-and-ringer telephones of the 1920s, the radios that broadcast health data in the 1940s, the closed-circuit televisions that enabled telemedicine in the 1950s, or the online systems that created electronic medical records in the 1960s. But the ethical, economic, and logistical concerns they raise are prefigured in the past, as are the gaps between what was promised and what was delivered. Each of these platforms also produced subtle transformations in health and healthcare that we have learned to forget, displaced by promises of ever newer forms of communication that took their place.

Illuminating the social and technical contexts in which electronic medicine has been conceived and put into practice, Greene’s history shows the urgent stakes, then and now, for those who would seek in new media the means to build a more equitable future for American healthcare…(More)”.

Why Do Innovations Fail? Lessons Learned from a Digital Democratic Innovation


Paper by Jenny Lindholm and Janne Berg: “Democratic innovations are brought forward by political scientists as a response to worrying democratic deficits. This paper aims to evaluate the design, process, and outcome of digital democratic innovations. We study a mobile application for following local politics. Data is collected using three online surveys with different groups, and a workshop with young citizens. The results show that the app did not fully meet the democratic ideal of inclusiveness at the process stage, especially in reaching young people. However, the user groups that had used the app reported positive democratic effects…(More)”.

Who owns the map? Data sovereignty and government spatial data collection, use, and dissemination


Paper by Peter A. Johnson and Teresa Scassa: “Maps, created through the collection, assembly, and analysis of spatial data, are used to support government planning and decision-making. Traditionally, spatial data used to create maps are collected, controlled, and disseminated by government, although over time this role has shifted. This shift has been driven by the availability of alternate sources of data collected by private sector companies and by data contributed by volunteers to open mapping platforms, such as OpenStreetMap. In theorizing this shift, we provide examples of how governments use data sovereignty as a tool to shape spatial data collection, use, and sharing. We frame four models of how governments may navigate shifting spatial data sovereignty regimes: first, with government retaining complete control over data collection; second, with government contracting a third party to provide specific data collection services, but with data ownership and dissemination responsibilities resting with government; third, with government purchasing data under terms of access set by third-party data collectors, who disseminate data to several parties; and finally, with government retreating from or relinquishing data sovereignty altogether. Within this rapidly changing landscape of data providers, we propose that governments must consider how to address data sovereignty concerns to retain their ability to control data use in the public interest…(More)”.

How Smart Are the Robots Getting?


Cade Metz at The New York Times: “…These are not systems that anyone can properly evaluate with the Turing test — or any other simple method. Their end goal is not conversation.

Researchers at Google and DeepMind, which is owned by Google’s parent company, are developing tests meant to evaluate chatbots and systems like DALL-E, to judge what they do well, where they lack reason and common sense, and more. One test shows videos to artificial intelligence systems and asks them to explain what has happened. After watching someone tinker with an electric shaver, for instance, the A.I. must explain why the shaver did not turn on.

These tests feel like academic exercises — much like the Turing test. We need something that is more practical, that can really tell us what these systems do well and what they cannot, how they will replace human labor in the near term and how they will not.

We could also use a change in attitude. “We need a paradigm shift — where we no longer judge intelligence by comparing machines to human behavior,” said Oren Etzioni, professor emeritus at the University of Washington and founding chief executive of the Allen Institute for AI, a prominent lab in Seattle…

At the same time, there are many ways these bots are superior to you and me. They do not get tired. They do not let emotion cloud what they are trying to do. They can instantly draw on far larger amounts of information. And they can generate text, images and other media at speeds and volumes we humans never could.

Their skills will also improve considerably in the coming years.

Researchers can rapidly hone these systems by feeding them more and more data. The most advanced systems, like ChatGPT, require months of training, but over those months, they can develop skills they did not exhibit in the past.

“We have found a set of techniques that scale effortlessly,” said Raia Hadsell, senior director of research and robotics at DeepMind. “We have a simple, powerful approach that continues to get better and better.”

The exponential improvement we have seen in these chatbots over the past few years will not last forever. The gains may soon level out. But even then, multimodal systems will continue to improve — and master increasingly complex skills involving images, sounds and computer code. And computer scientists will combine these bots with systems that can do things they cannot. ChatGPT failed Turing’s chess test. But we knew in 1997 that a computer could beat the best humans at chess. Plug ChatGPT into a chess program, and the hole is filled.

In the months and years to come, these bots will help you find information on the internet. They will explain concepts in ways you can understand. If you like, they will even write your tweets, blog posts and term papers.

They will tabulate your monthly expenses in your spreadsheets. They will visit real estate websites and find houses in your price range. They will produce online avatars that look and sound like humans. They will make mini-movies, complete with music and dialogue…

Certainly, these bots will change the world. But the onus is on you to be wary of what these systems say and do, to edit what they give you, to approach everything you see online with skepticism. Researchers know how to give these systems a wide range of skills, but they do not yet know how to give them reason or common sense or a sense of truth.

That still lies with you…(More)”.

Why Europe must embrace participatory policymaking


Article by Alberto Alemanno, Claire Davenport, and Laura Batalla: “Today, Europe faces many threats – from economic uncertainty and war on its eastern borders to the rise of illiberal democracies and popular reactionary politicians.

As Europe recovers from the pandemic and grapples with economic and social unrest, it is at an inflection point; it can either create new spaces to build trust and a sense of shared purpose between citizens and governments, or it can continue to let its democratic institutions erode and distrust grow. 

The scale of such problems requires novel problem-solving and new perspectives, including those from civil society and citizens. Increased opportunities for citizens to engage with policymakers can lend legitimacy and accountability to traditionally ‘opaque’ policymaking processes. The future of the bloc hinges on its ability to not only sustain democratic institutions but to do so with buy-in from constituents.

Yet policymaking in the EU is often understood as a technocratic process that the public finds difficult, if not impossible, to navigate. The Spring 2022 Eurobarometer found that just 53% of respondents believed their voice counts in the EU. The issue is compounded by a lack of political literacy coupled with a dearth of channels for participation or co-creation. 

In parallel, there is a strong desire from citizens to make their voices heard. A January 2022 Special Eurobarometer on the Future of Europe found that 90% of respondents agreed that EU citizens’ voices should be taken more into account during decision-making. The Russian war in Ukraine has strengthened public support for the EU as a whole. According to the Spring 2022 Eurobarometer, 65% of Europeans view EU membership as a good thing. 

This is not to say that the EU has no existing models for citizen engagement. The European Citizens’ Initiative – a mechanism for petitioning the Commission to propose new laws – is one example of existing infrastructure. There is also an opportunity to build on the success of the Conference on the Future of Europe, a gathering held this past spring that gave citizens the opportunity to contribute policy recommendations and justifications alongside traditional EU policymakers…(More)”.

The Autocrat in Your iPhone


Article by Ronald J. Deibert: “In the summer of 2020, a Rwandan plot to capture exiled opposition leader Paul Rusesabagina drew international headlines. Rusesabagina is best known as the human rights defender and U.S. Presidential Medal of Freedom recipient who sheltered more than 1,200 Hutus and Tutsis in a hotel during the 1994 Rwandan genocide. But in the decades after the genocide, he also became a prominent U.S.-based critic of Rwandan President Paul Kagame. In August 2020, during a layover in Dubai, Rusesabagina was lured under false pretenses into boarding a plane bound for Kigali, the Rwandan capital, where government authorities immediately arrested him for his affiliation with an opposition group. The following year, a Rwandan court sentenced him to 25 years in prison, drawing the condemnation of international human rights groups, the European Parliament, and the U.S. Congress. 

Less noted at the time, however, was that this brazen cross-border operation may also have employed highly sophisticated digital surveillance. After Rusesabagina’s sentencing, Amnesty International and the Citizen Lab at the University of Toronto, a digital security research group I founded and direct, discovered that smartphones belonging to several of Rusesabagina’s family members who also lived abroad had been hacked by an advanced spyware program called Pegasus. Produced by the Israel-based NSO Group, Pegasus gives an operator near-total access to a target’s personal data. Forensic analysis revealed that the phone belonging to Rusesabagina’s daughter Carine Kanimba had been infected by the spyware around the time her father was kidnapped and again when she was trying to secure his release and was meeting with high-level officials in Europe and the U.S. State Department, including the U.S. special envoy for hostage affairs. NSO Group does not publicly identify its government clients and the Rwandan government has denied using Pegasus, but strong circumstantial evidence points to the Kagame regime.

In fact, the incident is only one of dozens of cases in which Pegasus or other similar spyware technology has been found on the digital devices of prominent political opposition figures, journalists, and human rights activists in many countries. Providing the ability to clandestinely infiltrate even the most up-to-date smartphones—the latest “zero click” version of the spyware can penetrate a device without any action by the user—Pegasus has become the digital surveillance tool of choice for repressive regimes around the world. It has been used against government critics in the United Arab Emirates (UAE) and pro-democracy protesters in Thailand. It has been deployed by Mohammed bin Salman’s Saudi Arabia and Viktor Orban’s Hungary…(More)”.

Responding to societal challenges with data: Access, sharing, stewardship and control


OECD Report: “Data access, sharing and re-use (“data openness”) can generate significant social and economic benefits, including addressing public health emergencies such as the COVID-19 pandemic and achieving the Sustainable Development Goals. However, data openness also comes with risks to individuals and organisations – notably risks to privacy and data protection, intellectual property rights, digital and national security. It also raises ethical concerns where data access, sharing and re-use undermine ethical values and norms. This report demonstrates how approaches to data stewardship and control that are more balanced and differentiated can maximise the benefits of data, while protecting individuals’ and organisations’ rights and taking into account other legitimate interests and public policy objectives. It presents the mix of technical, organisational and legal approaches that characterises these more balanced and differentiated approaches, and how governments have implemented them…(More)”.

2023 Edelman Trust Barometer


Press Release: “The 2023 Edelman Trust Barometer reveals that business is now viewed as the only global institution to be both competent and ethical. Business now holds a staggering 53-point lead over government in competence and is 30 points ahead on ethics. Its treatment of workers during the pandemic and return to work, along with the swift and decisive action of over 1,000 businesses to exit Russia after its invasion of Ukraine, helped fuel a 20-point jump on ethics over the past three years. Business (62 percent) remains the most trusted institution globally, and the only one to qualify as trusted…

Other key findings from the 2023 Edelman Trust Barometer include:

  • Personal economic fears such as job loss (89 percent) and inflation (74 percent) are on par with urgent societal fears like climate change (76 percent), nuclear war (72 percent) and food shortages (67 percent).
  • CEOs are expected to use resources to hold divisive forces accountable: 72 percent believe CEOs are obligated to defend facts and expose questionable science being used to justify bad social policy; 71 percent believe CEOs are obligated to pull advertising money out of media platforms that spread misinformation; and 64 percent, on average, say companies can help increase civility and strengthen the social fabric by supporting politicians and media outlets that build consensus and cooperation.
  • Government (51 percent) is now distrusted in 16 of the 28 countries surveyed including the U.S. (42 percent), the UK (37 percent), Japan (33 percent), and Argentina (20 percent). Media (50 percent) is distrusted in 15 of 28 countries including Germany (47 percent), the U.S. (43 percent), Australia (38 percent), and South Korea (27 percent). ‘My employer’ (77 percent) is the most trusted institution and is trusted in every country surveyed aside from South Korea (54 percent).
  • Government leaders (41 percent), journalists (47 percent) and CEOs (48 percent) are the least trusted institutional leaders. Scientists (76 percent), my coworkers (73 percent among employees) and my CEO (64 percent among employees) are most trusted.
  • Technology (75 percent) was once again the most trusted sector trailed by education (71 percent), food & beverage (71 percent) and healthcare (70 percent). Social media (44 percent) remained the least trusted sector.
  • Canada (67 percent) and Germany (63 percent) remained the two most trusted foreign brands, followed by Japan (61 percent) and the UK (59 percent). India (34 percent) and China (32 percent) remained the least trusted…(More)”.