Why We Should End the Data Economy


Essay by Carissa Véliz: “…The data economy undermines equality and fairness. You and your neighbor are no longer treated as equal citizens. You aren’t given an equal opportunity because you are treated differently on the basis of your data. The ads and content you have access to, the prices you pay for the same services, and even how long you wait when you call customer service depend on your data.

We are much better at collecting personal data than we are at keeping it safe. But personal data is a serious threat, and we shouldn’t be collecting it in the first place if we are incapable of keeping it safe. Using smartphone location data acquired from a data broker, reporters from The New York Times were able to track military officials with security clearances, powerful lawyers and their guests, and even the president of the United States (through the phone of someone believed to be a Secret Service agent).

Our current data economy is based on collecting as much personal data as possible, storing it indefinitely, and selling it to the highest bidder. Having so much sensitive data circulating freely is reckless. By designing our economy around surveillance, we are building a dangerous structure for social control that is at odds with freedom. In the surveillance society we are constructing, there is no such thing as under the radar. It shouldn’t be up to us to constantly opt out of data collection. The default matters, and the default should be no data collection…(More)”.

Is there a role for consent in privacy?


Article by Robert Gellman: “After decades, we still talk about the role of notice and choice in privacy. Yet there seems to be broad recognition that notice and choice do nothing for the privacy of consumers. Some American businesses cling to notice and choice because they hate all the alternatives. Some legislators draft laws with elements of notice and choice, either because it’s easier to draft a law that way, because they don’t know any better or because they carry water for business.

For present purposes, I will talk about notice and choice generically as consent. Consent is a broader concept than choice, but the difference doesn’t matter for the point I want to make. How you frame consent is complex. There are many alternatives and many approaches. It’s not just a matter of opt-in or opt-out. While I’m discarding issues, I also want to acknowledge and set aside the eight basic Fair Information Practices. There is no notice and choice principle in FIPs, and FIPs are not specifically important here.

Until recently, my view was that consent in almost any form is pretty much death for consumer privacy. No matter how you structure it, websites and others will find a way to wheedle consent from consumers. Those who want to exploit consumer data will cajole, pressure, threaten, mystify, obscure, entice or otherwise coax consumers to agree.

Suddenly, I’m not as sure of my conclusion about consent. What changed my mind? There is a new data point from Apple’s App Tracking Transparency framework. Apple requires mobile application developers to obtain opt-in consent before serving targeted advertising via Apple’s Identifier for Advertisers. Early reports suggest consumers are saying “NO” in overwhelming numbers — overwhelming as in more than 90%.

It isn’t this strong consumer reaction that makes me think consent might possibly have a place. I want to highlight a different aspect of the Apple framework….(More)”.
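For readers who want to see the mechanics behind those numbers, here is a minimal sketch (ours, not from Gellman’s article) of how an iOS app requests that opt-in under Apple’s App Tracking Transparency framework; the function name and the handling in each branch are illustrative assumptions:

```swift
import AppTrackingTransparency
import AdSupport

// Ask the user for permission to track before touching the IDFA.
// On iOS 14.5 and later, this triggers the system-owned consent prompt.
func requestTrackingConsent() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // Only after an explicit opt-in is the IDFA available for ad targeting.
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("Tracking permitted, IDFA: \(idfa)")
        case .denied, .restricted, .notDetermined:
            // The outcome reported above for most users: without consent the
            // identifier is returned as all zeros and cross-app tracking is off.
            print("Tracking not permitted")
        @unknown default:
            print("Unknown authorization status")
        }
    }
}
```

Notably, the consent dialog is rendered by the operating system rather than by the app (developers can only supply a short usage-description string), which leaves little room for the cajoling and wheedling Gellman describes.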

Engaging with the public about algorithmic transparency in the public sector


Blog by the Centre for Data Ethics and Innovation (UK): “To take forward the recommendation we made in our review into bias in algorithmic decision-making, we have been working with the Central Digital and Data Office (CDDO) and BritainThinks to scope what a transparency obligation could look like in practice, and in particular, which transparency measures would be most effective at increasing public understanding about the use of algorithms in the public sector.

Due to the low levels of awareness about the use of algorithms in the public sector (CDEI polling in July 2020 found that 38% of the public were not aware that algorithmic systems were used to support decisions using personal data), we opted for a deliberative public engagement approach. This involved spending time gradually building up participants’ understanding and knowledge about algorithm use in the public sector, discussing their expectations for transparency, and co-designing solutions together.

For this project, we worked with a diverse group of 36 members of the UK public, spending over five hours engaging with them over a three-week period. We focused on three particular use cases to test a range of emotive responses: policing, parking and recruitment.

The final stage was an in-depth co-design session, where participants worked collaboratively to review and iterate prototypes in order to develop a practical approach to transparency that reflected their expectations and needs for greater openness in the public sector use of algorithms. 

What did we find? 

Our research confirmed that there was fairly low awareness and understanding of the use of algorithms in the public sector. Algorithmic transparency in the public sector was not a front-of-mind topic for most participants.

However, once participants were introduced to specific examples of potential public sector algorithms, they felt strongly that transparency information should be made available to the public, both citizens and experts. This included desires for: a description of the algorithm, why an algorithm was being used, contact details for more information, the data used, human oversight, potential risks, and the technicalities of the algorithm…(More)”.

To regulate AI, try playing in a sandbox


Article by Dan McCarthy: “For an increasing number of regulators, researchers, and tech developers, the word “sandbox” is just as likely to evoke rulemaking and compliance as it is to conjure images of children digging, playing, and building. Which is kinda the point.

That’s thanks to the rise of regulatory sandboxes, which allow organizations to develop and test new technologies in a low-stakes, monitored environment before rolling them out to the general public. 

Supporters, from both the regulatory and the business sides, say sandboxes can strike the right balance of reining in potentially harmful technologies without kneecapping technological progress. They can also help regulators build technological competency and clarify how they’ll enforce laws that apply to tech. And while regulatory sandboxes originated in financial services, there’s growing interest in using them to police artificial intelligence—an urgent task as AI is expanding its reach while remaining largely unregulated. 

For all of its promise, experts told us, the approach should be viewed not as a silver bullet for AI regulation, but instead as a potential step in the right direction.

Rashida Richardson, an AI researcher and visiting scholar at Rutgers Law School, is generally critical of AI regulatory sandboxes, but still said “it’s worth testing out ideas like this, because there is not going to be any universal model to AI regulation, and to figure out the right configuration of policy, you need to see theoretical ideas in practice.” 

But waiting for the theoretical to become concrete will take time. For example, in April, the European Union proposed AI regulation that would establish regulatory sandboxes to help the EU achieve its aim of responsible AI innovation, mentioning the word “sandbox” 38 times, compared to related terms like “impact assessment” (13 mentions) and “audit” (four). But it will likely take years for the EU’s proposal to become law. 

In the US, some well-known AI experts are working on an AI sandbox prototype, but regulators are not yet in the picture. However, the world’s first and (so far) only AI-specific regulatory sandbox did roll out in Norway this March, as a way to help companies comply with AI-specific provisions of the EU’s General Data Protection Regulation (GDPR). The project provides an early window into how the approach can work in practice.

“It’s a place for mutual learning—if you can learn earlier in the [product development] process, that is not only good for your compliance risk, but it’s really great for building a great product,” according to Erlend Andreas Gjære, CEO and cofounder of Secure Practice, an information security (“infosec”) startup that is one of four participants in Norway’s new AI regulatory sandbox….(More)”

How Does Artificial Intelligence Work?


BuiltIn: “Less than a decade after breaking the Nazi encryption machine Enigma and helping the Allied Forces win World War II, mathematician Alan Turing changed history a second time with a simple question: “Can machines think?” 

Turing’s paper “Computing Machinery and Intelligence” (1950), and its subsequent Turing Test, established the fundamental goal and vision of artificial intelligence.   

At its core, AI is the branch of computer science that aims to answer Turing’s question in the affirmative. It is the endeavor to replicate or simulate human intelligence in machines.

The expansive goal of artificial intelligence has given rise to many questions and debates. So much so that no singular definition of the field is universally accepted.

The major limitation in defining AI as simply “building machines that are intelligent” is that it doesn’t actually explain what artificial intelligence is. What makes a machine intelligent?

In their groundbreaking textbook Artificial Intelligence: A Modern Approach, authors Stuart Russell and Peter Norvig approach the question by unifying their work around the theme of intelligent agents in machines. With this in mind, AI is “the study of agents that receive percepts from the environment and perform actions.” (Russell and Norvig viii)

Norvig and Russell go on to explore four different approaches that have historically defined the field of AI: 

  1. Thinking humanly
  2. Thinking rationally
  3. Acting humanly 
  4. Acting rationally

The first two ideas concern thought processes and reasoning, while the others deal with behavior. Norvig and Russell focus particularly on rational agents that act to achieve the best outcome, noting “all the skills needed for the Turing Test also allow an agent to act rationally.” (Russell and Norvig 4).

Patrick Winston, the Ford professor of artificial intelligence and computer science at MIT, defines AI as “algorithms enabled by constraints, exposed by representations that support models targeted at loops that tie thinking, perception and action together.”…(More)”.
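To make Russell and Norvig’s percept-action framing concrete, here is a minimal sketch (our illustration, not from the textbook or the BuiltIn piece) of an agent that receives percepts from its environment and performs actions; the protocol and the thermostat example are hypothetical:

```swift
// Russell and Norvig's abstraction: an agent maps percepts to actions.
protocol Agent {
    associatedtype Percept
    associatedtype Action
    func act(on percept: Percept) -> Action
}

// A trivial reflex agent: given a temperature reading (its percept),
// it chooses the action that best serves its narrow goal of keeping
// the room at a target temperature.
struct ThermostatAgent: Agent {
    let targetTemperature: Double

    func act(on percept: Double) -> String {
        percept < targetTemperature ? "heat" : "idle"
    }
}

let agent = ThermostatAgent(targetTemperature: 20.0)
print(agent.act(on: 18.5)) // "heat"
print(agent.act(on: 21.0)) // "idle"
```

Even this toy agent fits the definition in the narrowest sense, perceiving and acting rationally for its goal, while obviously falling short of the “thinking humanly” and “acting humanly” approaches on the list above.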

Citizens ‘on mute’ in digital public service delivery


Blog by Sarah Giest at Data and Policy: “Various countries are digitalizing their welfare system in the larger context of austerity considerations and fraud detection goals, but these changes are increasingly under scrutiny. In short, digitalization of the welfare system means that with the help of mathematical models, data and/or the combination of different administrative datasets, algorithms issue a decision on, for example, an application for social benefits (Dencik and Kaun 2020).

Several examples exist where such systems have led to unfair treatment of welfare recipients. In Europe, the Dutch SyRI system has been banned by a court, due to human rights violations in the profiling of welfare recipients, and the UK has found errors in the automated processes leading to financial hardship among citizens. In the United States and Canada, automated systems led to false underpayment or denial of benefits. A recent UN report (2019) even warns that countries are ‘stumbling zombie-like into a digital welfare dystopia’. Further, studies raise alarm that this process of digitalization is done in a way that not only creates excessive information asymmetry between government and citizens, but also disadvantages certain groups more than others.

A closer look at the Dutch Childcare Allowance case highlights this. In this example, low-income parents were regarded as fraudsters by the Tax Authorities if they had incorrectly filled out any documents. An automated and algorithm-based procedure then also singled out dual-nationality families. The victims lost their allowance without having been given any reasons. Even worse, benefits already received were reclaimed. This led to individual hardship, where financial troubles and being categorized as a fraudster by the government set off a chain of events for citizens: unpaid health insurance and the inability to visit a doctor, job loss, potential home loss and mental health concerns (Volkskrant 2020)….(More)”.

Citizen science allows people to ‘really know’ their communities


UGAResearch: “Local populations understand their communities best. They’re familiar both with points of pride and with areas that could be improved. But determining the nature of those improvements from best practices, as well as achieving community consensus on implementation, can present a different set of challenges.

Jerry Shannon, associate professor of geography in the Franklin College of Arts & Sciences, worked with a team of researchers to introduce a citizen science approach in 11 communities across Georgia, from Rockmart to Monroe to Millen. This work combines local knowledge with emerging digital technologies to bolster community-driven efforts in rural Georgia. His research was detailed in a paper, “‘Really Knowing’ the Community: Citizen Science, VGI and Community Housing Assessments,” published in December in the Journal of Planning Education and Research.

Shannon worked with the Georgia Initiative for Community Housing (GICH), managed out of the College of Family and Consumer Sciences (FACS), to create tools for communities to evaluate and launch plans to address their housing needs and revitalization. This citizen science effort resulted in a more diverse and inclusive body of data that incorporated local perspectives.

“Through this project, we hope to further support and extend these community-driven efforts to assure affordable, quality housing,” said Shannon. “Rural communities don’t have the resources internally to do this work themselves. We provide training and tools to these communities.”

As part of their participation in the GICH program, each Georgia community assembled a housing team consisting of elected officials, members of community organizations and housing professionals such as real estate agents. The team recruited volunteers from student groups and religious organizations to conduct so-called “windshield surveys,” where participants work from their vehicle or walk the neighborhoods….(More)”

Process Mapping: a Tool with Many Uses


Essay by Jessica Brandt: “Traditionally, process maps are used when one is working on improving a process, but a good process map can serve many purposes. What is a process map used for, and why is this a tool worth learning about? A process map uses a flowchart to illustrate, in a clear and detailed way, the flow of a process: the people involved and the inputs, actions, and outputs along the way. A good process map will reflect the work that is actually done within a given process, not what the intended or imagined workflow might entail. This means that in order to build a good process map you should be talking to and learning from the folks who use the process every day, not just the people who oversee the process. Because I see the value behind having a good process map and the many ways you can utilize one to make your work more efficient, I want to share with you some of the different ways you can use this versatile tool….(More)”.

Are Repeat Nudges Effective? For Tardy Tax Filers, It Seems So


Paper by Nicole Robitaille, Nina Mažar, and Julian House: “While behavioral scientists sometimes aim to nudge one-time actions, such as registering as an organ donor or signing up for a 401(k), there are many other behaviors—making healthy food choices, paying bills, filing taxes, getting a flu shot—that are repeated on a daily, monthly, or annual basis. If you want to target these recurrent behaviors, can introducing a nudge once lead to consistent changes in behavior? What if you presented the same nudge several times—would seeing it over and over make its effects stronger, or just the opposite?

Decades of research from behavioral science has taught us a lot about nudges, but the field as a whole still doesn’t have a great understanding of the temporal dimensions of most interventions, including how long nudge effects last and whether or not they remain effective when repeated.

If you want an intervention to lead to lasting behavior change, prior research argues that it should target people’s beliefs, habits or the future costs of engaging in the behavior. Many nudges, however, focus instead on manipulating relatively small factors in the immediate choice environment to influence behavior, such as changing the order in which options are presented. In addition, relatively few field experiments have been able to administer and measure an intervention’s effects more than once, making it hard to know how long the effects of nudges are likely to persist.

While there is some research on what to expect when repeating nudges, the results are mixed. On the one hand, there is an extensive body of research in psychology on habituation, finding that, over time, people show decreased responses to the same stimuli. It wouldn’t be a giant leap to presume that seeing the same nudge again might decrease how much attention we pay to it, and thus hinder its ability to change our behavior. On the other hand, being exposed to the same nudge multiple times might help strengthen desired associations. Research on the mere exposure effect, for example, illustrates how the more times we see something, the more easily it is processed and the more we like it. It is also possible that being nudged multiple times could help foster enduring change, such as through new habit formation. Behavioral nudges aren’t going away, and their use will likely grow among policymakers and practitioners. It is critical to understand the temporal dimensions of these interventions, including how long one-off effects will last and if they will continue to be effective when seen multiple times….(More)”

Data-driven environmental decision-making and action in armed conflict


Essay by Wim Zwijnenburg: “Our understanding of how severely armed conflicts have impacted natural resources, ecosystems and biodiversity, and of their long-term implications for the climate, has massively improved over the last decade. Without a doubt, cataclysmic events such as the 1991 Gulf War oil fires contributed to raising awareness on the conflict-environment nexus, and the images of burning wells are engraved into our collective mind. But another more recent, under-examined yet major contributor to this growing cognizance is the digital revolution, which has provided us with a wealth of data and information from conflict-affected countries quickly made available through the internet. With just a few clicks, anyone with a computer or smartphone and a Wi-Fi connection can follow, often in near-real time, events shared through social media in warzones or satellite imagery showing what is unfolding on the ground.

These developments have significantly deepened our understanding of how military activities, both historically and in current conflicts, contribute to environmental damage and can impact the lives and livelihoods of civilians. Geospatial analysis through earth observation (EO) is now widely used to document international humanitarian law (IHL) violations, improve humanitarian response and inform post-conflict assessments.

These new insights on conflict-environment dynamics have driven humanitarian, military and political responses. The latter are essential for the protection of the environment in armed conflict: with knowledge and understanding also comes a responsibility to prevent, mitigate and minimize environmental damage, in line with existing international obligations. Of particular relevance, under international humanitarian law, militaries must take into account incidental environmental damage that is reasonably foreseeable based on an assessment of information from all sources available to them at the relevant time (ICRC Guidelines on the Protection of the Environment, Rule 7; Customary IHL Rule 43). Excessive harm is prohibited, and all feasible precautions must be taken to reduce incidental damage (Guidelines Rule 8; Customary IHL Rule 44).

How do we ensure that the data-driven strides forward in understanding conflict-driven environmental damage translate into proper military training and decision-making, humanitarian response and reconstruction efforts? How can this influence behaviour change and improve accountability for military actions and targeting decisions?…(More)”.