The Inevitability of AI Law & Policy: Preparing Government for the Era of Autonomous Machines


Public Knowledge: “Today, we’re happy to announce our newest white paper, “The Inevitability of AI Law & Policy: Preparing Government for the Era of Autonomous Machines,” by Public Knowledge General Counsel Ryan Clough. The paper argues that the rapid and pervasive rise of artificial intelligence risks exploiting the most marginalized and vulnerable in our society. To mitigate these harms, Clough advocates for a new federal authority to help the U.S. government implement fair and equitable AI. Such an authority should provide the rest of the government with the expertise and experience needed to achieve five goals crucial to building ethical AI systems:

  • Boosting sector-specific regulators and confronting overarching policy challenges raised by AI;
  • Protecting public values in government procurement and implementation of AI;
  • Attracting AI practitioners to civil service, and building durable and centralized AI expertise within government;
  • Identifying major gaps in the laws and regulatory frameworks that govern AI; and
  • Coordinating strategies and priorities for international AI governance.

“Any individual can be misjudged and mistreated by artificial intelligence,” Clough explains, “but the record to date indicates that it is significantly more likely to happen to the less powerful, who also have less recourse to do anything about it.” The paper argues that a new federal authority is the best way to meet the profound and novel challenges AI poses for us all….(More)”.

Algorithmic Government: Automating Public Services and Supporting Civil Servants in using Data Science Technologies


Zeynep Engin and Philip Treleaven in the Computer Journal: “The data science technologies of artificial intelligence (AI), Internet of Things (IoT), big data and behavioral/predictive analytics, and blockchain are poised to revolutionize government and create a new generation of GovTech start-ups. The impact from the ‘smartification’ of public services and the national infrastructure will be far more significant than in any other sector, given government’s function and importance to every institution and individual.

Potential GovTech systems include Chatbots and intelligent assistants for public engagement, Robo-advisors to support civil servants, real-time management of the national infrastructure using IoT and blockchain, automated compliance/regulation, public records securely stored in blockchain distributed ledgers, online judicial and dispute resolution systems, and laws/statutes encoded as blockchain smart contracts. Government is potentially the major ‘client’ and also ‘public champion’ for these new data technologies. This review paper uses our simple taxonomy of government services to provide an overview of data science automation being deployed by governments world-wide. The goal of this review paper is to encourage the Computer Science community to engage with government to develop these new systems to transform public services and support the work of civil servants….(More)”.

Declaration on Ethics and Data Protection in Artificial Intelligence


Declaration: “…The 40th International Conference of Data Protection and Privacy Commissioners considers that any creation, development and use of artificial intelligence systems shall fully respect human rights, particularly the rights to the protection of personal data and to privacy, as well as human dignity, non-discrimination and fundamental values, and shall provide solutions to allow individuals to maintain control and understanding of artificial intelligence systems.

The Conference therefore endorses the following guiding principles, as its core values to preserve human rights in the development of artificial intelligence:

  1. Artificial intelligence and machine learning technologies should be designed, developed and used in respect of fundamental human rights and in accordance with the fairness principle, in particular by:
      • considering individuals’ reasonable expectations by ensuring that the use of artificial intelligence systems remains consistent with their original purposes, and that the data are used in a way that is not incompatible with the original purpose of their collection,
      • taking into consideration not only the impact that the use of artificial intelligence may have on the individual, but also the collective impact on groups and on society at large,
      • ensuring that artificial intelligence systems are developed in a way that facilitates human development and does not obstruct or endanger it, thus recognizing the need for delineation and boundaries on certain uses,…(More)

When AI Misjudgment Is Not an Accident


Douglas Yeung at Scientific American: “The conversation about unconscious bias in artificial intelligence often focuses on algorithms that unintentionally cause disproportionate harm to entire swaths of society—those that wrongly predict black defendants will commit future crimes, for example, or facial-recognition technologies developed mainly by using photos of white men that do a poor job of identifying women and people with darker skin.

But the problem could run much deeper than that. Society should be on guard for another twist: the possibility that nefarious actors could seek to attack artificial intelligence systems by deliberately introducing bias into them, smuggled inside the data that helps those systems learn. This could introduce a worrisome new dimension to cyberattacks, disinformation campaigns or the proliferation of fake news.
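
To make the mechanics concrete, here is a minimal sketch of our own (not drawn from Yeung’s article) showing how flipping a small share of training labels, one crude form of data poisoning, can quietly degrade a classifier trained on the tainted data; the dataset and model are illustrative stand-ins.

```python
# Hypothetical illustration of label-flipping data poisoning (not from the article).
# A small fraction of deliberately mislabeled training examples measurably
# degrades a simple classifier's accuracy on clean test data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison 15% of the training labels by flipping them.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.15 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```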

According to a U.S. government study on big data and privacy, biased algorithms could make it easier to mask discriminatory lending, hiring or other unsavory business practices. Algorithms could be designed to take advantage of seemingly innocuous factors that can be discriminatory. Employing existing techniques, but with biased data or algorithms, could make it easier to hide nefarious intent. Commercial data brokers collect and hold onto all kinds of information, such as online browsing or shopping habits, that could be used in this way.

Biased data could also serve as bait. Corporations could release biased data in the hope that competitors would use it to train artificial intelligence algorithms, causing those competitors to diminish the quality of their own products and consumer confidence in them.

Algorithmic bias attacks could also be used to more easily advance ideological agendas. If hate groups or political advocacy organizations want to target or exclude people on the basis of race, gender, religion or other characteristics, biased algorithms could give them either the justification or more advanced means to directly do so. Biased data also could come into play in redistricting efforts that entrench racial segregation (“redlining”) or restrict voting rights.

Finally, foreign actors could mount deliberate bias attacks to destabilize societies by undermining government legitimacy or sharpening public polarization, posing a national security threat. This would fit naturally with tactics that reportedly seek to exploit ideological divides by creating social media posts and buying online ads designed to inflame racial tensions….(More)”.

This is how computers “predict the future”


Dan Kopf at Quartz: “The poetically named “random forest” is one of data science’s most-loved prediction algorithms. Developed primarily by statistician Leo Breiman in the 1990s, the random forest is cherished for its simplicity. Though it is not always the most accurate prediction method for a given problem, it holds a special place in machine learning because even those new to data science can implement and understand this powerful algorithm.

This was the algorithm used in an exciting 2017 study on suicide predictions, conducted by biomedical-informatics specialist Colin Walsh of Vanderbilt University and psychologists Jessica Ribeiro and Joseph Franklin of Florida State University. Their goal was to take what they knew about a set of 5,000 patients with a history of self-injury, and see if they could use those data to predict the likelihood that those patients would commit suicide. The study was done retrospectively. Sadly, almost 2,000 of these patients had killed themselves by the time the research was underway.

Altogether, the researchers had over 1,300 different characteristics they could use to make their predictions, including age, gender, and various aspects of the individuals’ medical histories. If the predictions from the algorithm proved to be accurate, the algorithm could theoretically be used in the future to identify people at high risk of suicide, and deliver targeted programs to them. That would be a very good thing.
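
As a rough sketch of how such a model is typically fit (a generic illustration on made-up data, not the Vanderbilt/Florida State team’s code or patient records), a random forest can be trained on a table of characteristics and scored on held-out cases:

```python
# Generic random-forest sketch on synthetic data -- not the study's code or data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in for a table of patient characteristics and a rare outcome label.
X, y = make_classification(n_samples=5000, n_features=100, n_informative=15,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

forest = RandomForestClassifier(n_estimators=500, random_state=0)
forest.fit(X_train, y_train)

# Predicted risk scores for unseen records, summarized with AUC.
risk = forest.predict_proba(X_test)[:, 1]
print("held-out AUC:", roc_auc_score(y_test, risk))
```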

Predictive algorithms are everywhere. In an age when data are plentiful and computing power is mighty and cheap, data scientists increasingly take information on people, companies, and markets—whether given willingly or harvested surreptitiously—and use it to guess the future. Algorithms predict what movie we might want to watch next, which stocks will increase in value, and which advertisement we’re most likely to respond to on social media. Artificial-intelligence tools, like those used for self-driving cars, often rely on predictive algorithms for decision making….(More)”.

The Moral Machine experiment


Jean-François Bonnefon, Iyad Rahwan et al in Nature:  “With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour. To address this challenge, we deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles.

This platform gathered 40 million decisions in ten languages from millions of people in 233 countries and territories. Here we describe the results of this experiment. First, we summarize global moral preferences. Second, we document individual variations in preferences, based on respondents’ demographics. Third, we report cross-cultural ethical variation, and uncover three major clusters of countries. Fourth, we show that these differences correlate with modern institutions and deep cultural traits. We discuss how these preferences can contribute to developing global, socially acceptable principles for machine ethics. All data used in this article are publicly available….(More)”.
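
For readers curious what the cross-cultural clustering step might look like in code, here is a toy sketch on invented country-level preference vectors (placeholder names and columns, not the Moral Machine data):

```python
# Toy sketch: hierarchically cluster countries by average moral preferences.
# The data here are random placeholders, not the Moral Machine responses.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(2)
countries = [f"country_{i}" for i in range(30)]
# Columns could represent preferences such as "spare the young" or
# "spare pedestrians"; here they are purely random stand-ins.
prefs = rng.random((len(countries), 3))

tree = linkage(prefs, method="ward")
labels = fcluster(tree, t=3, criterion="maxclust")  # cut the tree into three clusters

for cluster_id in sorted(set(labels)):
    members = [c for c, lab in zip(countries, labels) if lab == cluster_id]
    print(cluster_id, members[:5], "...")
```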

Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security


Paper by Robert Chesney and Danielle Keats Citron: “Harmful lies are nothing new. But the ability to distort reality has taken an exponential leap forward with “deep fake” technology. This capability makes it possible to create audio and video of real people saying and doing things they never said or did. Machine learning techniques are escalating the technology’s sophistication, making deep fakes ever more realistic and increasingly resistant to detection.

Deep-fake technology has characteristics that enable rapid and widespread diffusion, putting it into the hands of both sophisticated and unsophisticated actors. While deep-fake technology will bring with it certain benefits, it also will introduce many harms. The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases. Deep fakes will exacerbate this problem significantly. Individuals and businesses will face novel forms of exploitation, intimidation, and personal sabotage. The risks to our democracy and to national security are profound as well.

Our aim is to provide the first in-depth assessment of the causes and consequences of this disruptive technological change, and to explore the existing and potential tools for responding to it. We survey a broad array of responses, including: the role of technological solutions; criminal penalties, civil liability, and regulatory action; military and covert-action responses; economic sanctions; and market developments. We cover the waterfront from immunities to immutable authentication trails, offering recommendations to improve law and policy and anticipating the pitfalls embedded in various solutions….(More)”.

Privacy and Synthetic Datasets


Paper by Steven M. Bellovin, Preetam K. Dutta and Nathan Reitinger: “Sharing is a virtue, instilled in us from childhood. Unfortunately, when it comes to big data — i.e., databases possessing the potential to usher in a whole new world of scientific progress — the legal landscape prefers a hoggish motif. The historic approach to the resulting database–privacy problem has been anonymization, a subtractive technique incurring not only poor privacy results, but also lackluster utility. In anonymization’s stead, differential privacy arose; it provides better, near-perfect privacy, but is nonetheless subtractive in terms of utility.

Today, another solution is coming to the fore: synthetic data. Using the magic of machine learning, synthetic data offers a generative, additive approach — the creation of almost-but-not-quite replica data. In fact, as we recommend, synthetic data may be combined with differential privacy to achieve a best-of-both-worlds scenario. After unpacking the technical nuances of synthetic data, we analyze its legal implications, finding both over- and under-inclusive applications. Privacy statutes either overweigh or downplay the potential for synthetic data to leak secrets, inviting ambiguity. We conclude by finding that synthetic data is a valid, privacy-conscious alternative to raw data, but is not a cure-all for every situation. In the end, computer science progress must be met with proper policy in order to move the area of useful data dissemination forward….(More)”.
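
As a toy illustration of the combination the authors recommend (our own sketch, with assumptions the paper does not spell out: a single bounded numeric column, the Laplace mechanism, and a simple normal generator), noisy summary statistics can be released under differential privacy and then used to sample a synthetic column:

```python
# Minimal sketch of differentially private synthetic data (illustrative assumptions).
# Release noisy summaries of a bounded column, then sample replica records from them.
import numpy as np

rng = np.random.default_rng(0)
real = rng.beta(2, 5, size=10_000)              # stand-in for a sensitive column in [0, 1]

epsilon = 1.0                                   # total privacy budget, split across two queries
sens = 1.0 / len(real)                          # sensitivity of a mean over [0, 1] data
noisy_mean = real.mean() + rng.laplace(scale=sens / (epsilon / 2))
noisy_sq = np.mean(real**2) + rng.laplace(scale=sens / (epsilon / 2))
noisy_std = np.sqrt(max(noisy_sq - noisy_mean**2, 1e-12))

# Sample an "almost-but-not-quite replica" column from the noisy summaries.
synthetic = np.clip(rng.normal(noisy_mean, noisy_std, size=10_000), 0, 1)
print("real mean / std:     ", real.mean(), real.std())
print("synthetic mean / std:", synthetic.mean(), synthetic.std())
```

Real synthetic-data systems rely on far richer generative models; the point of the sketch is only that the generator sees noisy summaries rather than the raw records.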

Wearable device data and AI can reduce health care costs and paperwork


Darrell West at Brookings: “Though digital technology has transformed nearly every corner of the economy in recent years, the health care industry seems stubbornly immune to these trends. That may soon change if more wearable devices record medical information that physicians can use to diagnose and treat illnesses at earlier stages. Last month, Apple announced that an FDA-approved electrocardiograph (EKG) will be included in the latest generation Apple Watch to check the heart’s electrical activity for signs of arrhythmia. However, the availability of this data does not guarantee that health care providers are currently equipped to process all of it. To cope with growing amounts of medical data from wearable devices, health care providers may need to adopt artificial intelligence that can identify data trends and spot any deviations that indicate illness. Greater medical data, accompanied by artificial intelligence to analyze it, could expand the capabilities of human health care providers and offer better outcomes at lower costs for patients….

By 2016, American health care spending had already ballooned to 17.9 percent of GDP. The rise in spending saw a parallel rise in health care employment. Patients still need doctors, nurses, and health aides to administer care, yet these health care professionals might not yet be able to make sense of the massive quantities of data coming from wearable devices. Doctors already spend much of their time filling out paperwork, which leaves less time to interact with patients. The opportunity may arise for artificial intelligence to analyze the coming flood of data from wearable devices. Tracking small changes as they happen could make a large difference in diagnosis and treatment: AI could detect abnormal heartbeat, respiration, or other signs that indicate worsening health. Catching symptoms before they worsen may be key to improving health outcomes and lowering costs….(More)”.
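
As a hedged illustration of the kind of analysis the article envisions (a toy example of our own, not any vendor’s algorithm), a wearable heart-rate stream can be screened for sudden deviations with a simple rolling z-score:

```python
# Toy anomaly screen for a wearable heart-rate stream (illustrative only).
# Flags readings that deviate sharply from the recent rolling baseline.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
hr = pd.Series(70 + rng.normal(0, 3, size=600))   # simulated resting heart rate (bpm)
hr.iloc[400:420] += 45                            # injected arrhythmia-like spike

baseline = hr.rolling(window=60, min_periods=30)
zscore = (hr - baseline.mean()) / baseline.std()
alerts = hr[zscore.abs() > 4]

print(f"{len(alerts)} readings flagged for review, first at index {alerts.index[0]}")
```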

How pro-trust initiatives are taking over the Internet


Sara Fisher at Axios: “Dozens of new initiatives have launched over the past few years to address fake news and the erosion of faith in the media, creating a measurement problem of its own.

Why it matters: So many new efforts are launching simultaneously to solve the same problem that it’s become difficult to track which ones do what and which ones are partnering with each other….

To name a few:

  • The Trust Project, which is made up of dozens of global news companies, announced this morning that the number of journalism organizations using the global network’s “Trust Indicators” now totals 120, making it one of the larger global initiatives to combat fake news. Some of these groups (like NewsGuard) work with the Trust Project and are a part of it.
  • News Integrity Initiative (Facebook, Craig Newmark Philanthropic Fund, Ford Foundation, Democracy Fund, John S. and James L. Knight Foundation, Tow Foundation, AppNexus, Mozilla and Betaworks)
  • NewsGuard (Longtime journalists and media entrepreneurs Steven Brill and Gordon Crovitz)
  • The Journalism Trust Initiative (Reporters Without Borders, Agence France-Presse, the European Broadcasting Union and the Global Editors Network)
  • Internews (Longtime international non-profit)
  • Accountability Journalism Program (American Press Institute)
  • Trusting News (Reynolds Journalism Institute)
  • Media Manipulation Initiative (Data & Society)
  • Deepnews.ai (Frédéric Filloux)
  • Trust & News Initiative (Knight Foundation, Facebook and Craig Newmark in affiliation with Duke University)
  • Our.News (Independently run)
  • WikiTribune (Wikipedia founder Jimmy Wales)

There are also dozens of fact-checking efforts being championed by different third-parties, as well as efforts being built around blockchain and artificial intelligence.

Between the lines: Most of these efforts include some mechanism for allowing readers to visually distinguish real journalism from fake news, such as a badge or watermark, but that presents problems as well.

  • Attempts to flag or call out news as real and valid have in the past been rejected all the more strongly by those who wish to discredit vetted media.
  • For example, Facebook said in December that it will no longer use “Disputed Flags” — red flags next to fake news articles — to identify fake news for users, because it found that “putting a strong image, like a red flag, next to an article may actually entrench deeply held beliefs – the opposite effect to what we intended.”…(More)”.