AI Global Surveillance Technology


Carnegie Endowment: “Artificial intelligence (AI) technology is rapidly proliferating around the world. A growing number of states are deploying advanced AI surveillance tools to monitor, track, and surveil citizens to accomplish a range of policy objectives—some lawful, others that violate human rights, and many of which fall into a murky middle ground.

In order to appropriately address the effects of this technology, it is important to first understand where these tools are being deployed and how they are being used.

To provide greater clarity, Carnegie presents an AI Global Surveillance (AIGS) Index—representing one of the first research efforts of its kind. The index compiles empirical data on AI surveillance use for 176 countries around the world. It does not distinguish between legitimate and unlawful uses of AI surveillance. Rather, the purpose of the research is to show how new surveillance capabilities are transforming the ability of governments to monitor and track individuals or systems. It specifically asks:

  • Which countries are adopting AI surveillance technology?
  • What specific types of AI surveillance are governments deploying?
  • Which countries and companies are supplying this technology?

Learn more about our findings and how AI surveillance technology is spreading rapidly around the globe….(More)”.

Algorithmic Censorship on Social Platforms: Power, Legitimacy, and Resistance


Paper by Jennifer Cobbe: “Effective content moderation by social platforms has long been recognised as both important and difficult, with numerous issues arising from the volume of information to be dealt with, the culturally sensitive and contextual nature of that information, and the nuances of human communication. Attempting to scale moderation efforts, various platforms have adopted, or signalled their intention to adopt, increasingly automated approaches to identifying and suppressing content and communications that they deem undesirable. However, algorithmic forms of online censorship by social platforms bring their own concerns, including the extensive surveillance of communications and the use of machine learning systems with the distinct possibility of errors and biases. This paper adopts a governmentality lens to examine algorithmic censorship by social platforms in order to assist in the development of a more comprehensive understanding of the risks of such approaches to content moderation. This analysis shows that algorithmic censorship is distinctive for two reasons: (1) it would potentially bring all communications carried out on social platforms within reach, and (2) it would potentially allow those platforms to take a much more active, interventionist approach to moderating those communications. Consequently, algorithmic censorship could allow social platforms to exercise an unprecedented degree of control over both public and private communications, with poor transparency, weak or non-existent accountability mechanisms, and little legitimacy. Moreover, commercial considerations would be inserted further into the everyday communications of billions of people. Due to the dominance of the web by a small number of social platforms, this control may be difficult or impractical to escape for many people, although opportunities for resistance do exist.

While automating content moderation may seem like an attractive proposition for both governments and platforms themselves, the issues identified in this paper are cause for concern and should be given serious consideration….(More)”.
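For a concrete, deliberately crude sense of what context-blind automated suppression looks like, here is a toy sketch; the rule and the messages are invented, and real platform classifiers are machine-learning systems rather than regexes, but they inherit the same failure modes the paper flags: benign speech suppressed, coded speech missed.

```python
import re

# Toy "algorithmic censorship" pass applied to every message on a platform.
# The blocklist rule and the messages are invented for illustration only.
BLOCK_PATTERNS = [re.compile(r"\battack\b", re.IGNORECASE)]

messages = [
    "We should attack this problem together.",  # benign figurative use
    "Meet at dawn. You know what to do.",       # coded intent, slips through
]

for msg in messages:
    suppressed = any(p.search(msg) for p in BLOCK_PATTERNS)
    label = "SUPPRESSED" if suppressed else "allowed"
    print(f"{label:10s} | {msg}")
```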

How to Build Artificial Intelligence We Can Trust


Gary Marcus and Ernest Davis at the New York Times: “Artificial intelligence has a trust problem. We are relying on A.I. more and more, but it hasn’t yet earned our confidence.

Tesla cars driving in Autopilot mode, for example, have a troubling history of crashing into stopped vehicles. Amazon’s facial recognition system works great much of the time, but when asked to compare the faces of all 535 members of Congress with 25,000 public arrest photos, it found 28 matches, when in reality there were none. A computer program designed to vet job applicants for Amazon was discovered to systematically discriminate against women. Every month new weaknesses in A.I. are uncovered.
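The false-match arithmetic is worth making concrete. The simulation below is not Amazon Rekognition's pipeline; it only assumes the standard design, in which faces become embedding vectors compared against a similarity threshold, and shows how a lax threshold manufactures spurious matches across millions of comparisons even when no two photos show the same person.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative embeddings only: 535 "legislators" and 25,000 unrelated
# "arrest photos", all distinct people, modeled as random unit vectors.
def unit_rows(n, dim):
    v = rng.normal(size=(n, dim)).astype(np.float32)
    return v / np.linalg.norm(v, axis=1, keepdims=True)

legislators = unit_rows(535, 32)
mugshots = unit_rows(25_000, 32)

# Cosine similarity of every legislator against every mugshot.
sims = legislators @ mugshots.T  # shape (535, 25000)

# Every hit below is a false positive by construction, since no person
# appears in both sets; laxer thresholds manufacture more of them.
for threshold in (0.80, 0.90, 0.95):
    false_matches = int((sims >= threshold).sum())
    print(f"threshold {threshold:.2f}: {false_matches} false matches "
          f"out of {sims.size:,} comparisons")
```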

The problem is not that today’s A.I. needs to get better at what it does. The problem is that today’s A.I. needs to try to do something completely different.

In particular, we need to stop building computer systems that merely get better and better at detecting statistical patterns in data sets — often using an approach known as deep learning — and start building computer systems that from the moment of their assembly innately grasp three basic concepts: time, space and causality….

We face a choice. We can stick with today’s approach to A.I. and greatly restrict what the machines are allowed to do (lest we end up with autonomous-vehicle crashes and machines that perpetuate bias rather than reduce it). Or we can shift our approach to A.I. in the hope of developing machines that have a rich enough conceptual understanding of the world that we need not fear their operation. Anything else would be too risky….(More)”.

The Ethics of Hiding Your Data From the Machines


Molly Wood at Wired: “…But now, that data is being used to train artificial intelligence, and the insights those future algorithms create could quite literally save lives.

So while targeted advertising is an easy villain, data-hogging artificial intelligence is a dangerously nuanced and highly sympathetic bad guy, like Erik Killmonger in Black Panther. And it won’t be easy to hate.

I recently met with a company that wants to do a sincerely good thing. They’ve created a sensor that pregnant women can wear, and it measures their contractions. It can reliably predict when women are going into labor, which can help reduce preterm births and C-sections. It can get women into care sooner, which can reduce both maternal and infant mortality.

All of this is an unquestionable good.

And this little device is also collecting a treasure trove of information about pregnancy and labor that is feeding into clinical research that could upend maternal care as we know it. Did you know that the way most obstetricians learn to track a woman’s progress through labor is based on a single study from the 1950s, involving 500 women, all of whom were white?…

To save the lives of pregnant women and their babies, researchers and doctors, and yes, startup CEOs and even artificial intelligence algorithms need data. To cure cancer, or at least offer personalized treatments that have a much higher possibility of saving lives, those same entities will need data….

And for us consumers, well, a blanket refusal to offer up our data to the AI gods isn’t necessarily the good choice either. I don’t want to be the person who refuses to contribute my genetic data via 23andMe to a massive research study that could, and I actually believe this is possible, lead to cures and treatments for diseases like Parkinson’s and Alzheimer’s and who knows what else.

I also think I deserve a realistic assessment of the potential for harm to find its way back to me, because I didn’t think through or wasn’t told all the potential implications of that choice—like how, let’s be honest, we all felt a little stung when we realized the 23andMe research would be through a partnership with drugmaker (and reliable drug price-hiker) GlaxoSmithKline. Drug companies, like targeted ads, are easy villains—even though this partnership actually could produce a Parkinson’s drug. But do we know what GSK’s privacy policy looks like? That deal was a level of sharing we didn’t necessarily expect….(More)”.

Data versus Democracy


Book by Kris Shaffer: “Human attention is in the highest demand it has ever been. The drastic increase in available information has compelled individuals to find a way to sift through the media that is literally at their fingertips. Content recommendation systems have emerged as the technological solution to this social and informational problem, but they’ve also created a bigger crisis by confirming our biases, showing us only, and exactly, what they predict we want to see. Data versus Democracy investigates and explores how, in the era of social media, human cognition, algorithmic recommendation systems, and human psychology are all working together to reinforce (and exaggerate) human bias. The dangerous confluence of these factors is driving media narratives, influencing opinions, and possibly changing election results.


In this book, algorithmic recommendations, clickbait, familiarity bias, propaganda, and other pivotal concepts are analyzed and then expanded upon via fascinating and timely case studies: the 2016 US presidential election, Ferguson, GamerGate, international political movements, and other events that affect every one of us. What are the implications of how we engage with information in the digital age? Data versus Democracy explores this topic and an abundance of related crucial questions. We live in a culture vastly different from any that has come before. In a society where engagement is currency, we are the product. Understanding the value of our attention, how organizations operate based on this concept, and how engagement can be used against our best interests is essential in responsibly equipping ourselves against the perils of disinformation….(More)”.
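A minimal simulation of the feedback loop Shaffer describes, with invented numbers: a recommender that serves whatever best matches a user's current stance ends up showing only a sliver of the catalog, and consumption of that sliver keeps the prediction self-confirming.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: 5,000 items, each with a "stance" in [-1, 1]; the user leans +0.1.
items = rng.uniform(-1, 1, size=5_000)
user_stance = 0.1
print(f"catalog spans [{items.min():+.3f}, {items.max():+.3f}]")

for step in range(1, 11):
    # Predicted engagement: items nearest the user's current stance score
    # highest (a stand-in for "people click what they already agree with").
    engagement = -np.abs(items - user_stance)
    served = items[np.argsort(engagement)[-20:]]  # top-20 recommendations

    # Consuming agreeable content nudges the user toward what was served.
    user_stance = 0.9 * user_stance + 0.1 * served.mean()

    if step % 5 == 0:
        print(f"step {step:2d}: stance={user_stance:+.3f}, "
              f"served range=[{served.min():+.3f}, {served.max():+.3f}]")
```

Out of a catalog spanning the whole spectrum, the user is only ever shown items within a few thousandths of their own position: the "only, and exactly" dynamic in miniature.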

This High-Tech Solution to Disaster Response May Be Too Good to Be True


Sheri Fink in The New York Times: “The company called One Concern has all the characteristics of a buzzy and promising Silicon Valley start-up: young founders from Stanford, tens of millions of dollars in venture capital and a board with prominent names.

Its particular niche is disaster response. And it markets a way to use artificial intelligence to address one of the most vexing issues facing emergency responders in disasters: figuring out where people need help in time to save them.

That promise to bring new smarts and resources to an anachronistic field has generated excitement. Arizona, Pennsylvania and the World Bank have entered into contracts with One Concern over the past year. New York City and San Jose, Calif., are in talks with the company. And a Japanese city recently became One Concern’s first overseas client.

But when T.J. McDonald, who works for Seattle’s office of emergency management, reviewed a simulated earthquake on the company’s damage prediction platform, he spotted problems. A popular big-box store was grayed out on the web-based map, meaning there was no analysis of the conditions there, and shoppers and workers who might be in danger would not receive immediate help if rescuers relied on One Concern’s results.

“If that Costco collapses in the middle of the day, there’s going to be a lot of people who are hurt,” he said.

The error? The simulation, the company acknowledged, missed many commercial areas because damage calculations relied largely on residential census data.
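That failure mode is easy to reproduce in miniature. The sketch below is hypothetical, not One Concern's methodology (which, per the article, has not been opened to outside review); it only shows how a damage model whose exposure input is residential census counts silently drops blocks where people work or shop but do not live.

```python
# Hypothetical numbers, not One Concern's model: a damage estimate whose
# exposure input is residential census counts produces no estimate at all
# for blocks where people shop or work but do not live.
blocks = [
    {"id": "residential-1", "residents": 1200, "daytime_occupants": 900},
    {"id": "residential-2", "residents": 800,  "daytime_occupants": 600},
    {"id": "big-box-store", "residents": 0,    "daytime_occupants": 1500},
]

CASUALTY_RATE = 0.02  # assumed casualties per exposed person in the scenario

for block in blocks:
    exposed = block["residents"]  # the flawed step: counts residents only
    if exposed == 0:
        print(f"{block['id']}: no data, grayed out on the map")
    else:
        print(f"{block['id']}: estimated casualties = "
              f"{exposed * CASUALTY_RATE:.0f}")
```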

One Concern has marketed its products as lifesaving tools for emergency responders after earthquakes, floods and, soon, wildfires. But interviews and documents show the company has often exaggerated its tools’ abilities and has kept outside experts from reviewing its methodology. In addition, some product features are available elsewhere at no charge, and data-hungry insurance companies — whose interests can diverge from those of emergency workers — are among One Concern’s biggest investors and customers.

Some critics even suggest that shortcomings in One Concern’s approach could jeopardize lives….(More)”.

Tackling Climate Change with Machine Learning


Paper by David Rolnick et al: “Climate change is one of the greatest challenges facing humanity, and we, as machine learning experts, may wonder how we can help. Here we describe how machine learning can be a powerful tool in reducing greenhouse gas emissions and helping society adapt to a changing climate. From smart grids to disaster management, we identify high impact problems where existing gaps can be filled by machine learning, in collaboration with other fields. Our recommendations encompass exciting research questions as well as promising business opportunities. We call on the machine learning community to join the global effort against climate change….(More)”.

Bringing machine learning to the masses


Matthew Hutson at Science: “Artificial intelligence (AI) used to be the specialized domain of data scientists and computer programmers. But companies such as Wolfram Research, which makes Mathematica, are trying to democratize the field, so scientists without AI skills can harness the technology for recognizing patterns in big data. In some cases, they don’t need to code at all. Insights are just a drag-and-drop away. One of the latest systems is software called Ludwig, first made open-source by Uber in February and updated last week. Uber used Ludwig for projects such as predicting food delivery times before releasing it publicly. At least a dozen startups are using it, plus big companies such as Apple, IBM, and Nvidia. And scientists: Tobias Boothe, a biologist at the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden, Germany, uses it to visually distinguish thousands of species of flatworms, a difficult task even for experts. To train Ludwig, he just uploads images and labels….(More)”.
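For readers curious what "democratized" looks like in practice, here is a minimal sketch in Ludwig's declarative style, patterned on the flatworm example; the field names follow Ludwig's documented configuration pattern but may vary by version, and the CSV files are hypothetical.

```python
from ludwig.api import LudwigModel

# Declarative model spec: name the input and output columns and their types,
# and Ludwig chooses and wires the encoders/decoders itself. (Exact field
# names vary across Ludwig versions; check the docs for your release.)
config = {
    "input_features": [{"name": "image_path", "type": "image"}],
    "output_features": [{"name": "species", "type": "category"}],
}

model = LudwigModel(config)

# flatworms.csv is a hypothetical file: one column of image paths and one
# column of species labels, mirroring the flatworm task in the article.
results = model.train(dataset="flatworms.csv")
predictions = model.predict(dataset="new_specimens.csv")
```

The appeal the article describes is visible here: the researcher states what the columns are, not how the network is built.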

Value in the Age of AI


Project Syndicate: “Much has been written about Big Data, artificial intelligence, and automation. The Fourth Industrial Revolution will have far-reaching implications for jobs, ethics, privacy, and equality. But more than that, it will also transform how we think about value – where it comes from, how it is captured, and by whom.

In “Value in the Age of AI,” Project Syndicate, with support from the Dubai Future Foundation, GovLab (New York University), and the Centre for Data & Society (Brussels), will host an ongoing debate about the changing nature of value in the twenty-first century. In the commentaries below, leading thinkers at the intersection of technology, economics, culture, and politics discuss how new technologies are changing our societies, businesses, and individual lived experiences, and what that might mean for our collective future….(More)”.

The effective and ethical development of Artificial Intelligence: an opportunity to improve our wellbeing


Paper by Toby Walsh, Neil Levy, Genevieve Bell, Anthony Elliott, James Maclaurin, Iven Mareels, Fiona Woods: “As Artificial Intelligence (AI) becomes more advanced, its applications will become increasingly complex and will find their place in homes, workplaces and cities.

AI offers broad-reaching opportunities, but its uptake also carries serious implications for human capital, social inclusion, privacy and cultural values, to name a few. These implications must be considered in advance to ensure responsible deployment.

This project examined the potential that Artificial Intelligence (AI) technologies have in enhancing Australia’s wellbeing, lifting the economy, improving environmental sustainability and creating a more equitable, inclusive and fair society. Placing society at the core of AI development, the report analyses the opportunities, challenges and prospects that AI technologies present, and explores considerations such as workforce, education, human rights and our regulatory environment.

Key findings:

  1. AI offers major opportunities to improve our economic, societal and environmental wellbeing, while also presenting potentially significant global risks, including technological unemployment and the use of lethal autonomous weapons. Further development of AI must be directed to allow well-considered implementation that supports our society in becoming what we would like it to be – one centred on improving prosperity, reducing inequity and achieving continued betterment.
  2. Proactive engagement, consultation and ongoing communication with the public about the changes and effects of AI will be essential for building community awareness. Earning public trust will be critical to enable acceptance and uptake of the technology.
  3. The application of AI is growing rapidly. Ensuring its continued safe and appropriate development will be dependent on strong governance and a responsive regulatory system that encourages innovation. It will also be important to engender public confidence that the goods and services driven by AI are at, or above, benchmark standards and preserve the values that society seeks.
  4. AI is enabled by access to data. To support successful implementation of AI, there is a need for effective digital infrastructure, including data centres and structures for data sharing, that makes AI secure, trusted and accessible, particularly for rural and remote populations. If such essential infrastructure is not carefully and appropriately developed, the advancement of AI and the immense benefits it offers will be diminished.
  5. Successful development and implementation of AI will require a broad range of new skills and enhanced capabilities that span the humanities, arts and social sciences (HASS) and science, technology, engineering and mathematics (STEM) disciplines. Building a talent base and establishing an adaptable and skilled workforce for the future will need education programs that start in early childhood and continue throughout working life, as well as a supportive immigration policy.
  6. An independently led AI body that brings stakeholders together from government, academia and the public and private sectors would provide a critical mass of skills and institutional leadership to develop AI technologies, promote engagement with international initiatives and develop appropriate ethical frameworks….(More)”.