One Nation Tracked: An investigation into the smartphone tracking industry


Stuart A. Thompson and Charlie Warzel at the New York Times: “…For brands, following someone’s precise movements is key to understanding the ‘customer journey’ — every step of the process from seeing an ad to buying a product. It’s the Holy Grail of advertising, one marketer said, the complete picture that connects all of our interests and online activity with our real-world actions.

Pointillist location data also has some clear benefits to society. Researchers can use the raw data to provide key insights for transportation studies and government planners. The City Council of Portland, Ore., unanimously approved a deal to study traffic and transit by monitoring millions of cellphones. Unicef announced a plan to use aggregated mobile location data to study epidemics, natural disasters and demographics.

For individual consumers, the value of constant tracking is less tangible. And the lack of transparency from the advertising and tech industries raises still more concerns.

Does a coupon app need to sell second-by-second location data to other companies to be profitable? Does that really justify allowing companies to track millions of people and potentially expose our private lives?

Data companies say users consent to tracking when they agree to share their location. But those consent screens rarely make clear how the data is being packaged and sold. If companies were clearer about what they were doing with the data, would anyone agree to share it?

What about data collected years ago, before hacks and leaks made privacy a forefront issue? Should it still be used, or should it be deleted for good?

If it’s possible that data stored securely today can easily be hacked, leaked or stolen, is this kind of data worth that risk?

Is all of this surveillance and risk worth it merely so that we can be served slightly more relevant ads? Or so that hedge fund managers can get richer?

The companies profiting from our every move can’t be expected to voluntarily limit their practices. Congress has to step in to protect Americans’ needs as consumers and rights as citizens.

Until then, one thing is certain: We are living in the world’s most advanced surveillance system. This system wasn’t created deliberately. It was built through the interplay of technological advance and the profit motive. It was built to make money. The greatest trick technology companies ever played was persuading society to surveil itself….(More)”.

Why the Global South should nationalise its data


Ulises Ali Mejias at AlJazeera: “The recent coup in Bolivia reminds us that poor countries rich in resources continue to be plagued by the legacy of colonialism. Anything that stands in the way of a foreign corporation’s ability to extract cheap resources must be removed.

Today, apart from minerals and fossil fuels, corporations are after another precious resource: personal data. As with natural resources, data too has become the target of extractive corporate practices.

As sociologist Nick Couldry and I argue in our book, The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism, there is a new form of colonialism emerging in the world: data colonialism. By this, we mean a new resource-grab whereby human life itself has become a direct input into economic production in the form of extracted data.

We acknowledge that this term is controversial, given the extreme physical violence and structures of racism that historical colonialism employed. However, our point is not to say that data colonialism is the same as historical colonialism, but rather to suggest that it shares the same core functions: extraction, exploitation, and dispossession.

Like classical colonialism, data colonialism violently reconfigures human relations to economic production. Things like land, water, and other natural resources were valued by native people in the precolonial era, but not in the same way that colonisers (and later, capitalists) came to value them: as private property. Likewise, we are experiencing a situation in which things that were once primarily outside the economic realm – things like our most intimate social interactions with friends and family, or our medical records – have now been commodified and made part of an economic cycle of data extraction that benefits a few corporations.

So what could countries in the Global South do to avoid the dangers of data colonialism?…(More)”.

Assessing employer intent when AI hiring tools are biased


Report by Caitlin Chin at Brookings: “When it comes to gender stereotypes in occupational roles, artificial intelligence (AI) has the potential to either mitigate historical bias or heighten it. In the case of the Word2vec model, AI appears to do both.

Word2vec is a publicly available algorithmic model built on millions of words scraped from online Google News articles, which computer scientists commonly use to analyze word associations. In 2016, Microsoft and Boston University researchers revealed that the model picked up gender stereotypes existing in online news sources—and furthermore, that these biased word associations were overwhelmingly job related. Upon discovering this problem, the researchers neutralized the biased word correlations in their specific algorithm, writing that “in a small way debiased word embeddings can hopefully contribute to reducing gender bias in society.”
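
To make the kind of association test the researchers ran concrete, here is a minimal sketch using the open-source gensim library and the publicly available Google News vectors (the model identifier below is gensim’s published name for them; exact similarity scores will vary by version). It reproduces only the diagnostic step, not the debiasing itself, which projected a learned “gender direction” out of occupation vectors:

```python
# Probing gender associations in pretrained word2vec vectors, in the
# spirit of the 2016 study. Downloads ~1.6 GB on first use.
import gensim.downloader as api

model = api.load("word2vec-google-news-300")  # 300-dim Google News vectors

# Cosine similarity between occupation words and gendered anchor words:
# a positive gap means the occupation sits closer to "she" than to "he".
for job in ["nurse", "programmer", "homemaker", "architect"]:
    gap = model.similarity(job, "she") - model.similarity(job, "he")
    print(f"{job:12s} she-vs-he gap: {gap:+.3f}")

# Analogy completion of the kind the paper made famous:
# "man is to computer_programmer as woman is to ?"
print(model.most_similar(positive=["computer_programmer", "woman"],
                         negative=["man"], topn=3))
```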

Their study draws attention to a broader issue with artificial intelligence: because algorithms often emulate the training datasets they are built upon, biased input data could generate flawed outputs. And because many contemporary employers use predictive algorithms to scan resumes, direct targeted advertising, or even conduct face- or voice-recognition-based interviews, it is crucial to consider whether popular hiring tools might be susceptible to the same cultural biases that the researchers discovered in Word2vec.

In this paper, I discuss how hiring is a multi-layered and opaque process and how it will become more difficult to assess employer intent as recruitment processes move online. Because intent is a critical aspect of employment discrimination law, I ultimately suggest four ways to include it in the discussion surrounding algorithmic bias….(More)”

This report from The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative is part of “AI and Bias,” a series that explores ways to mitigate possible biases and create a pathway toward greater fairness in AI and emerging technologies.

Responsible Operations: Data Science, Machine Learning, and AI in Libraries


OCLC Research Position Paper by Thomas Padilla: “Despite greater awareness, significant gaps persist between concept and operationalization in libraries at the level of workflows (managing bias in probabilistic description), policies (community engagement vis-à-vis the development of machine-actionable collections), positions (developing staff who can utilize, develop, critique, and/or promote services influenced by data science, machine learning, and AI), collections (development of “gold standard” training data), and infrastructure (development of systems that make use of these technologies and methods). Shifting from awareness to operationalization will require holistic organizational commitment to responsible operations. The viability of responsible operations depends on organizational incentives and protections that promote constructive dissent…(More)”.

Facial recognition needs a wider policy debate


Editorial Team of the Financial Times: “In his dystopian novel 1984, George Orwell warned of a future under the ever-vigilant gaze of Big Brother. Developments in surveillance technology, in particular facial recognition, mean the prospect is no longer the stuff of science fiction.

In China, the government was this year found to have used facial recognition to track the Uighurs, a largely Muslim minority. In Hong Kong, protesters took down smart lamp posts for fear of their actions being monitored by the authorities. In London, the consortium behind the King’s Cross development was forced to halt the use of two cameras with facial recognition capabilities after regulators intervened. All over the world, companies are pouring money into the technology.

At the same time, governments and law enforcement agencies of all hues are proving willing buyers of a technology that is still evolving — and doing so despite concerns over the erosion of people’s privacy and human rights in the digital age. Flaws in the technology have, in certain cases, led to inaccuracies, in particular when identifying women and minorities.

The news this week that Chinese companies are shaping new standards at the UN is the latest sign that it is time for a wider policy debate. Documents seen by this newspaper revealed Chinese companies have proposed new international standards at the International Telecommunication Union, or ITU, a Geneva-based organisation of industry and official representatives, for technologies such as facial recognition. Setting standards for what is a revolutionary technology — one recently described as the “plutonium of artificial intelligence” — before a wider debate about its merits and what limits should be imposed on its use can only lead to unintended consequences.

Crucially, standards ratified in the ITU are commonly adopted as policy by developing nations in Africa and elsewhere — regions where China has long wanted to expand its influence. A case in point is Zimbabwe, where the government has partnered with Chinese facial recognition company CloudWalk Technology. The investment, part of Beijing’s Belt and Road investment in the country, will see CloudWalk technology monitor major transport hubs. It will give the Chinese company access to valuable data on African faces, helping to improve the accuracy of its algorithms….

Progress is needed on regulation. Proposals by the European Commission for laws to give EU citizens explicit rights over the use of their facial recognition data as part of a wider overhaul of regulation governing artificial intelligence are welcome. The move would bolster citizens’ protection above existing restrictions laid out under its general data protection regulation. Above all, policymakers should be mindful that if the technology’s unrestrained rollout continues, it could hold implications for other, potentially more insidious, innovations. Western governments should step up to the mark — or risk having control of the technology’s future direction taken from them….(More)”.

Access My Info (AMI)


About: “What do companies know about you? How do they handle your data? And who do they share it with?

Access My Info (AMI) is a project that can help answer these questions by assisting you in making data access requests to companies. AMI includes a web application that helps users send companies data access requests, and a research methodology designed to understand how companies respond to these requests. Past AMI projects have shed light on how companies treat user data and have contributed to digital privacy reforms around the world.

What are data access requests?

A data access request is a letter you can send to any company whose products or services you use. The request asks that the company disclose all the information it has about you and whether or not it has shared your data with any third parties. If the place where you live has data protection laws that include the right to data access, then companies may be legally obligated to respond…
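
As a concrete illustration, here is a minimal sketch of the sort of letter generator that AMI’s web application automates. The wording and structure below are illustrative assumptions, not AMI’s actual template; the cited legal basis, GDPR Article 15, is the EU’s right of access and applies only where the GDPR does:

```python
# Minimal sketch of a data access request generator, loosely in the
# spirit of AMI's web app. Letter text is an illustrative assumption.
from datetime import date

TEMPLATE = """\
To: {company} Privacy Office
Subject: Personal data access request ({law})

I use your products/services and am writing to request, under {law}:
  1. a copy of all personal data you hold about me;
  2. the names of any third parties with whom that data has been shared;
  3. the purposes for which my data was collected and used.

Account identifier: {account}
Date: {today}
"""

def access_request(company: str, account: str, law: str) -> str:
    """Render a data access request letter for one company."""
    return TEMPLATE.format(company=company, account=account,
                           law=law, today=date.today().isoformat())

print(access_request("ExampleCo", "user@example.com", "GDPR Article 15"))
```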

AMI has made personal data requests in jurisdictions around the world and found common patterns.

  1. There are significant gaps between data access laws on paper and the law in practice;
  2. People have consistently encountered barriers to accessing their data.

Together with our partners in each jurisdiction, we have used Access My Info to set off a dialogue between users, civil society, regulators, and companies…(More)”

Seeing Like a Finite State Machine


Henry Farrell at Crooked Timber: “…So what might a similar analysis say about the marriage of authoritarianism and machine learning? Something like the following, I think. There are two notable problems with machine learning. One is that, while it can do many extraordinary things, it is not nearly as universally effective as the mythology suggests. The other is that it can serve as a magnifier for already existing biases in the data. The patterns that it identifies may be the product of the problematic data that goes in, which is (to the extent that it is accurate) often the product of biased social processes. When this data is then used to make decisions that may plausibly reinforce those processes (by singling out, for example, particular groups that are regarded as problematic for police attention, leading them to be more liable to be arrested, and so on), the bias may feed upon itself.
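
Farrell’s parenthetical example, predictive policing, shows how such a loop can run. The following toy simulation is a minimal sketch of the mechanism only; the neighborhoods, rates, and patrol numbers are all invented for illustration:

```python
# Toy model of the self-reinforcing bias loop described above: two
# neighborhoods with the SAME underlying offense rate, and a "predictive"
# policy that concentrates patrols wherever the arrest records point.
# Every number here is invented for illustration.
import random

random.seed(1)
TRUE_RATE = 0.05                 # identical in both neighborhoods
arrests = [10, 12]               # a small, arbitrary gap in the records

for _ in range(20):
    hot = 0 if arrests[0] > arrests[1] else 1   # predicted "hotspot"
    patrols = [20, 20]
    patrols[hot] += 60                          # concentrate on the hotspot
    for i in (0, 1):
        # Arrests are only recorded where patrols are looking, so the
        # hotspot's lead grows regardless of the true offense rates.
        arrests[i] += sum(random.random() < TRUE_RATE
                          for _ in range(patrols[i]))

print(f"recorded arrests: A={arrests[0]}, B={arrests[1]}")
```

After twenty rounds the records “confirm” that the second neighborhood is the problem area, even though both offend at exactly the same rate; the data now justifies the very allocation that produced it.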

This is a substantial problem in democratic societies, but it is a problem where there are at least some counteracting tendencies. The great advantage of democracy is its openness to contrary opinions and divergent perspectives. This opens up democracy to a specific set of destabilizing attacks, but it also means that there are countervailing tendencies to self-reinforcing biases. When there are groups that are victimized by such biases, they may mobilize against them (although they will find it harder to mobilize against algorithms than against overt discrimination). When there are obvious inefficiencies or social, political or economic problems that result from biases, there will be ways for people to point out these inefficiencies or problems.

These corrective tendencies will be weaker in authoritarian societies; in extreme versions of authoritarianism, they may barely even exist. Groups that are discriminated against will have no obvious recourse. Major mistakes may go uncorrected: they may be nearly invisible to a state whose data is polluted both by the means employed to observe and classify it, and by the policies implemented on the basis of this data. A plausible feedback loop would see bias leading to error leading to further bias, with no ready way to correct it. This, of course, is likely to be reinforced by the ordinary politics of authoritarianism, and the typical reluctance to correct leaders, even when their policies are leading to disaster. The flawed ideology of the leader (We must all study Comrade Xi thought to discover the truth!) and of the algorithm (machine learning is magic!) may reinforce each other in highly unfortunate ways.

In short, there is a very plausible set of mechanisms under which machine learning and related techniques may turn out to be a disaster for authoritarianism, reinforcing its weaknesses rather than its strengths, by increasing its tendency toward bad decision making and further reducing the possibility of the negative feedback that could help correct errors. This disaster would unfold in two ways. The first will involve enormous human costs: self-reinforcing bias will likely increase discrimination against out-groups, of the sort we are seeing against the Uighurs today. The second will involve more ordinary self-ramifying errors that may lead to widespread planning disasters, which will differ from those described in Scott’s account of High Modernism in that they are not as immediately visible, but which may also be more pernicious, and more damaging to the political health and viability of the regime, for just that reason….(More)”

“Mind the Five”: Guidelines for Data Privacy and Security in Humanitarian Work With Undocumented Migrants and Other Vulnerable Populations


Paper by Sara Vannini, Ricardo Gomez and Bryce Clayton Newell: “The forced displacement and transnational migration of millions of people around the world is a growing phenomenon that has been met with increased surveillance and datafication by a variety of actors. Small humanitarian organizations that help irregular migrants in the United States frequently do not have the resources or expertise to fully address the implications of collecting, storing, and using data about the vulnerable populations they serve. As a result, there is a risk that their work could exacerbate the vulnerabilities of the very same migrants they are trying to help. In this study, we propose a conceptual framework for protecting privacy in the context of humanitarian information activities (HIA) with irregular migrants. We draw from a review of the academic literature as well as interviews with individuals affiliated with several US‐based humanitarian organizations, higher education institutions, and nonprofit organizations that provide support to undocumented migrants. We discuss 3 primary issues: (i) HIA present both technological and human risks; (ii) the expectation of privacy self‐management by vulnerable populations is problematic; and (iii) there is a need for robust, actionable, privacy‐related guidelines for HIA. We suggest 5 recommendations to strengthen the privacy protection offered to undocumented migrants and other vulnerable populations….(More)”.

Principles alone cannot guarantee ethical AI


Paper by Brent Mittelstadt: “Artificial intelligence (AI) ethics is now a global topic of discussion in academic and policy circles. At least 84 public–private initiatives have produced statements describing high-level principles, values and other tenets to guide the ethical development, deployment and governance of AI. According to recent meta-analyses, AI ethics has seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite the initial credibility granted to a principled approach to AI ethics by the connection to principles in medical ethics, there are reasons to be concerned about its future impact on AI development and governance. Significant differences exist between medicine and AI development that suggest a principled approach for the latter may not enjoy success comparable to the former. Compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. These differences suggest we should not yet celebrate consensus around high-level principles that hide deep political and normative disagreement….(More)”.

Surveillance giants: how the business model of Google and Facebook threatens human rights


Report by Amnesty International: “Google and Facebook help connect the world and provide crucial services to billions. To participate meaningfully in today’s economy and society, and to realize their human rights, people rely on access to the internet—and to the tools Google and Facebook offer. But Google and Facebook’s platforms come at a systemic cost. The companies’ surveillance-based business model is inherently incompatible with the right to privacy and poses a threat to a range of other rights including freedom of opinion and expression, freedom of thought, and the right to equality and non-discrimination….(More)”.