Malicious Uses and Abuses of Artificial Intelligence


Report by Europol, the United Nations Interregional Crime and Justice Research Institute (UNICRI) and Trend Micro: “… looking into current and predicted criminal uses of artificial intelligence (AI)… The report provides law enforcers, policy makers and other organizations with information on existing and potential attacks leveraging AI and recommendations on how to mitigate these risks.

“AI promises the world greater efficiency, automation and autonomy. At a time where the public is getting increasingly concerned about the possible misuse of AI, we have to be transparent about the threats, but also look into the potential benefits from AI technology,” said Edvardas Šileris, Head of Europol’s Cybercrime Centre. “This report will help us not only to anticipate possible malicious uses and abuses of AI, but also to prevent and mitigate those threats proactively. This is how we can unlock the potential AI holds and benefit from the positive use of AI systems.”

The report concludes that cybercriminals will leverage AI both as an attack vector and an attack surface. Deepfakes are currently the best-known use of AI as an attack vector. However, the report warns that new screening technology will be needed in the future to mitigate the risk of disinformation campaigns and extortion, as well as threats that target AI data sets.

For example, AI could be used to support:

  • Convincing social engineering attacks at scale
  • Document-scraping malware to make attacks more efficient
  • Evasion of image recognition and voice biometrics
  • Ransomware attacks, through intelligent targeting and evasion
  • Data pollution, by identifying blind spots in detection rules.

The three organizations conclude the report with several recommendations….(More)”.

Interoperability as a tool for competition regulation


Paper by Ian Brown: “Interoperability is a technical mechanism for computing systems to work together – even if they are from competing firms. An interoperability requirement for large online platforms has been suggested by the European Commission as one ex ante (up-front rule) mechanism in its proposed Digital Markets Act (DMA), as a way to encourage competition. The policy goal is to increase choice and quality for users, and the ability of competitors to succeed with better services. The application would be to the largest online platforms, such as Facebook, Google, Amazon, smartphone operating systems (e.g. Android/iOS), and their ancillary services, such as payment and app stores.

This report analyses up-front interoperability requirements as a pro-competition policy tool for regulating large online platforms, exploring the economic and social rationales and possible regulatory mechanisms. It is based on a synthesis of recent comprehensive policy reviews of digital competition in major industrialised economies, and related academic literature, focusing on areas of emerging consensus while noting important disagreements. It draws particularly on the Vestager, Furman and Stigler reviews, and the UK Competition and Markets Authority’s study on online platforms and digital advertising. It also draws on interviews with software developers, platform operators, government officials, and civil society experts working in this field….(More)”.

‘It gave me hope in democracy’: how French citizens are embracing people power


Peter Yeung at The Guardian: “Angela Brito was driving back to her home in the Parisian suburb of Seine-et-Marne one day in September 2019 when the phone rang. The 47-year-old caregiver, accustomed to emergency calls, pulled over in her old Renault Megane to answer. The voice on the other end of the line informed her she had been randomly selected to take part in a French citizens’ convention on climate. Would she, the caller asked, be interested?

“I thought it was a real prank,” says Brito, a single mother of four who was born in the south of Portugal. “I’d never heard anything about it before. But I said yes, without asking any details. I didn’t believe it.”

Brito received a letter confirming her participation but she still didn’t really take it seriously. On 4 October, the official launch day, she got up at 7am as usual and, while driving to meet her first patient of the day, heard a radio news item on how 150 ordinary citizens had been randomly chosen for this new climate convention. “I said to myself, ah, maybe it was true,” she recalls.

At the home of her second patient, a good-humoured old man in a wheelchair, the TV news was on. Images of the grand Art Déco-style Palais d’Iéna, home of the citizens’ gathering, filled the screen. “I looked at him and said, ‘I’m supposed to be one of those 150,’” says Brito. “He told me, ‘What are you doing here then? Leave, get out, go there!’”

Brito had two hours to get to the Palais d’Iéna. “I arrived a little late, but I arrived!” she says.

Over the next nine months, Brito would take part in the French citizens’ convention for the climate, touted by Emmanuel Macron as an “unprecedented democratic experiment”, which would bring together 150 people aged 16 upwards, from all over France and all walks of French life – to learn, debate and then propose measures to reduce greenhouse gas emissions by at least 40% by 2030. By the end of the process, Brito and her fellow participants had convinced Macron to pledge an additional €15bn (£13.4bn) to the climate cause and to accept all but three of the group’s 149 recommendations….(More)”.

The Case for Digital Activism: Refuting the Fallacies of Slacktivism


Paper by Nora Madison and Mathias Klang: “This paper argues for the importance and value of digital activism. We first outline the arguments against digitally mediated activism and then offer counter-arguments to those derogatory criticisms. The low threshold for participating in technologically mediated activism seems to irk its detractors. Indeed, the term used to downplay digital activism is slacktivism, a portmanteau of slacker and activism. The use of slacker is intended to stress the inaction, low effort, and laziness of the person and thereby question their dedication to the cause. In this work we argue that digital activism plays a vital role in the arsenal of the activist and needs to be studied on its own terms in order to be more fully understood….(More)”

Don’t Fear the Robots, and Other Lessons From a Study of the Digital Economy


Steve Lohr at the New York Times: “L. Rafael Reif, the president of the Massachusetts Institute of Technology, delivered an intellectual call to arms to the university’s faculty in November 2017: Help generate insights into how advancing technology has changed and will change the work force, and what policies would create opportunity for more Americans in the digital economy.

That issue, he wrote, is the “defining challenge of our time.”

Three years later, the task force assembled to address it is publishing its wide-ranging conclusions. The 92-page report, “The Work of the Future: Building Better Jobs in an Age of Intelligent Machines,” was released on Tuesday….

Here are four of the key findings in the report:

Most American workers have fared poorly.

It’s well known that those on the top rungs of the job ladder have prospered for decades while wages for average American workers have stagnated. But the M.I.T. analysis goes further. It found, for example, that real wages for men without four-year college degrees have declined 10 to 20 percent since their peak in 1980….

Robots and A.I. are not about to deliver a jobless future.

…The M.I.T. researchers concluded that the change would be more evolutionary than revolutionary. In fact, they wrote, “we anticipate that in the next two decades, industrialized countries will have more job openings than workers to fill them.”…

Worker training in America needs to match the market.

“The key ingredient for success is public-private partnerships,” said Annette Parker, president of South Central College, a community college in Minnesota, and a member of the advisory board to the M.I.T. project.

The schools, nonprofits and corporate-sponsored programs that have succeeded in lifting people into middle-class jobs all echo her point: the need to link skills training to business demand….

Workers need more power, voice and representation.

The report calls for raising the minimum wage, broadening unemployment insurance, and modifying labor laws to enable collective bargaining in occupations such as domestic work, home care, and freelancing. Such representation, the report notes, could come from traditional unions or worker advocacy groups like the National Domestic Workers Alliance, Jobs With Justice and the Freelancers Union….(More)”

A nudge helps doctors bring up end-of-life issues with their dying cancer patients


Article by Ravi Parikh et al: “When conversations about goals and end-of-life wishes happen early, they can improve patients’ quality of life and decrease their chances of dying on a ventilator or in an intensive care unit. Yet doctors treating cancer focus so much of their attention on treating the disease that these conversations tend to get put off until it’s too late. This leads to costly and often unwanted care for the patient.

This can be fixed, but it requires addressing two key challenges. The first is that it is often difficult for doctors to know how long patients have left to live. Even among patients in hospice care, doctors get it wrong nearly 70% of the time. Hospitals and private companies have invested millions of dollars to try to predict these outcomes, often using artificial intelligence and machine learning, although most of these algorithms have not been vetted in real-world settings.

In a recent set of studies, our team used data from real-time electronic medical records to develop a machine learning algorithm that identified which cancer patients had a high risk of dying in the next six months. We then tested the algorithm on 25,000 patients who were seen at our health system’s cancer practices and found it performed better than relying only on doctors to identify high-risk patients.
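To make the approach concrete, here is a minimal sketch of the kind of mortality-risk classifier described above, trained on simulated data. It is illustrative only: the features, model class, and labels are assumptions for the sketch, not the team’s published algorithm.

```python
# Illustrative sketch only: a simplified stand-in for the kind of
# mortality-risk model described above, using simulated data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical structured EHR features (e.g., age, lab values,
# comorbidity counts, recent utilization), simulated here.
X = rng.normal(size=(25_000, 20))
# Hypothetical label: died within 6 months of the index visit.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=25_000)) > 2.0

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

model = GradientBoostingClassifier().fit(X_train, y_train)

# Rank patients by predicted 6-month mortality risk; a "high-risk"
# flag for clinicians would be drawn from the top of this ranking.
risk_scores = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, risk_scores), 3))
```

In a real deployment, such a model would be trained on structured fields from the electronic medical record and validated against observed outcomes, as the study above did, before being allowed to influence any clinical workflow.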

But just because such a tool exists doesn’t mean doctors will use it to prompt more conversations. The second challenge — which is even harder to overcome — is using machine learning to motivate clinicians to have difficult conversations with patients about the end of life.

We wondered if implementing a timely “nudge” that doctors received before seeing their high-risk patients could help them start the conversation.

To test this idea, we used our prediction tool in a clinical trial involving nine cancer practices. Doctors in the nudge group received a weekly report on how many end-of-life conversations they had compared to their peers, along with a list of patients they were scheduled to see the following week whom the algorithm deemed at high risk of dying in the next six months. They could review the list and uncheck any patients they thought were not appropriate for end-of-life conversations. For the patients who remained checked, doctors received a text message on the day of the appointment reminding them to discuss the patient’s goals at the end of life. Doctors in the control group did not receive the email or text message intervention.
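The selection step behind that weekly list can be sketched in a few lines. This is a hypothetical reconstruction of the workflow described above, not the trial’s actual code; the risk threshold and field names are assumptions.

```python
# Hypothetical sketch of assembling the weekly nudge list; the
# threshold and data model are assumptions, not the trial's code.
from dataclasses import dataclass

RISK_THRESHOLD = 0.30  # assumed cutoff for "high risk of dying in 6 months"

@dataclass
class Appointment:
    patient_id: str
    doctor_id: str
    risk_score: float          # output of the mortality prediction model
    unchecked_by_doctor: bool  # doctor reviewed the list and opted out

def weekly_nudge_list(appointments: list[Appointment]) -> list[Appointment]:
    """Patients who remain flagged for an end-of-life conversation reminder."""
    return [
        a for a in appointments
        if a.risk_score >= RISK_THRESHOLD and not a.unchecked_by_doctor
    ]
```

On the day of each remaining appointment, the doctor in the nudge group would then receive the text reminder described above.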

As we reported in JAMA Oncology, 15% of doctors who received the nudge text had end-of-life conversations with their patients, compared to just 4% of the control doctors….(More)”.

Remaking the Commons: How Digital Tools Facilitate and Subvert the Common Good


Paper by Jessica Feldman: “This scoping paper considers how digital tools, such as ICTs and AI, have failed to contribute to the “common good” in any sustained or scalable way. This is attributed to a problem that is at once political-economic and technical.

Many digital tools’ business models are predicated on advertising: framing the user as an individual consumer-to-be-targeted, not as an organization, movement, or any sort of commons. At the level of infrastructure and hardware, the increased privatization and centralization of transmission and production leads to a dangerous bottlenecking of communication power, and to labor and production practices that are undemocratic and damaging to common resources.

These practices escalate collective action problems, pose a threat to democratic decision making, aggravate issues of economic and labor inequality, and harm the environment and health. At the same time, the growth of both AI and online community formation raises questions around the very definition of human subjectivity and modes of relationality. Based on an operational definition of the common good grounded in ethics of care, sustainability, and redistributive justice, suggestions are made for solutions and further research in the areas of participatory design, digital democracy, digital labor, and environmental sustainability….(More)”

Leveraging Open Data with a National Open Computing Strategy


Policy Brief by Lara Mangravite and John Wilbanks: “Open data mandates and investments in public data resources, such as the Human Genome Project or the U.S. National Oceanic and Atmospheric Administration Data Discovery Portal, have provided essential data sets at a scale not possible without government support. By responsibly sharing data for wide reuse, federal policy can spur innovation inside the academy and in citizen science communities. These approaches are enabled by private-sector advances in cloud computing services and the government has benefited from innovation in this domain. However, the use of commercial products to manage the storage of and access to public data resources poses several challenges.

First, too many cloud computing systems fail to properly secure data against breaches, improperly share copies of data with other vendors, or use data to add to their own secretive and proprietary models. As a result, the public does not trust technology companies to responsibly manage public data—particularly private data of individual citizens. These fears are exacerbated by the market power of the major cloud computing providers, which may limit the ability of individuals or institutions to negotiate appropriate terms. This impacts the willingness of U.S. citizens to have their personal information included within these databases.

Second, open data solutions are springing up across multiple sectors without coordination. The federal government is funding a series of independent programs that are working to solve the same problem, leading to a costly duplication of effort across programs.

Third and most importantly, the high costs of data storage, transfer, and analysis preclude many academics, scientists, and researchers from taking advantage of governmental open data resources. Cloud computing has radically lowered the costs of high-performance computing, but it is still not free. The cost of building the wrong model at the wrong time can quickly run into tens of thousands of dollars.

Scarce resources mean that many academic data scientists are unable or unwilling to spend their limited funds to reuse data in exploratory analyses outside their narrow projects. And citizen scientists must use personal funds, which are especially scarce in communities traditionally underrepresented in research. The vast majority of public data made available through existing open science policy is therefore left unused, either as reference material or as “foreground” for new hypotheses and discoveries….The Solution: Public Cloud Computing…(More)”.

Evaluating Identity Disclosure Risk in Fully Synthetic Health Data: Model Development and Validation


Paper by Khaled El Emam et al: “There has been growing interest in data synthesis for enabling the sharing of data for secondary analysis; however, there is a need for a comprehensive privacy risk model for fully synthetic data: If the generative models have been overfit, then it is possible to identify individuals from synthetic data and learn something new about them.

Objective: The purpose of this study is to develop and apply a methodology for evaluating the identity disclosure risks of fully synthetic data.

Methods: A full risk model is presented, which evaluates both identity disclosure and the ability of an adversary to learn something new if there is a match between a synthetic record and a real person. We term this “meaningful identity disclosure risk.” The model is applied on samples from the Washington State Hospital discharge database (2007) and the Canadian COVID-19 cases database. Both of these datasets were synthesized using a sequential decision tree process commonly used to synthesize health and social science data.
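As a rough intuition for what such a risk model measures, the sketch below computes a naive matching-rate proxy on toy data: the fraction of synthetic records that match a real person on shared quasi-identifiers and also reveal the correct sensitive attribute. The paper’s full model is considerably more sophisticated (it accounts for population uniqueness and adjudicates what counts as “learning something new”); the column names and data here are invented for illustration.

```python
# Naive disclosure-risk proxy on toy data; far simpler than the paper's
# full "meaningful identity disclosure risk" model. All data invented.
import pandas as pd

# Hypothetical quasi-identifiers shared by the real and synthetic data.
QUASI_IDS = ["age_group", "sex", "region"]

real = pd.DataFrame({
    "age_group": ["30-39", "40-49", "30-39", "50-59"],
    "sex":       ["F", "M", "F", "M"],
    "region":    ["east", "west", "east", "north"],
    "diagnosis": ["flu", "copd", "asthma", "flu"],  # sensitive attribute
})
synthetic = pd.DataFrame({
    "age_group": ["30-39", "50-59"],
    "sex":       ["F", "M"],
    "region":    ["east", "south"],
    "diagnosis": ["flu", "copd"],
})

# A synthetic record "matches" a real person when it is identical on
# every quasi-identifier.
matches = synthetic.merge(real, on=QUASI_IDS, suffixes=("_syn", "_real"))

# Count synthetic records that both match someone and disclose the
# correct sensitive attribute ("learning something new").
hits = matches[matches["diagnosis_syn"] == matches["diagnosis_real"]]
risk = hits.drop_duplicates(subset=QUASI_IDS).shape[0] / len(synthetic)
print(f"naive disclosure-risk proxy: {risk:.3f}")  # compare against 0.09
```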

Results: The meaningful identity disclosure risk for both synthesized samples was below the commonly used 0.09 risk threshold (0.0198 and 0.0086, respectively), and 4 and 5 times lower than the risk values for the corresponding original datasets.

Conclusions: We have presented a comprehensive identity disclosure risk model for fully synthetic data. The results for this synthesis method on 2 datasets demonstrate that synthesis can reduce meaningful identity disclosure risks considerably. The risk model can be applied in the future to evaluate the privacy of fully synthetic data….(More)”.

Federal Regulators Increase Focus on Patient Risks From Electronic Health Records


Ben Moscovitch at Pew: “…The Office of the National Coordinator for Health Information Technology (ONC) will collect clinicians’ feedback through a survey developed by the Urban Institute under a contract with the agency. ONC will release aggregated results as part of its EHR reporting program. Congress required the program’s creation in the 21st Century Cures Act, the wide-ranging federal health legislation enacted in 2016. The act directs ONC to determine which data to gather from health information technology vendors. That information can then be used to illuminate the strengths and weaknesses of EHR products, as well as industry trends.

The Pew Charitable Trusts, major medical organizations and hospital groups, and health information technology experts have urged that the reporting program examine usability-related patient risks. Confusing, cumbersome, and poorly customized EHR systems can cause health care providers to order the wrong drug or miss test results and other information critical to safe, effective treatment. Usability challenges also can increase providers’ frustration and, in turn, their likelihood of making mistakes.

The data collected from clinicians will shed light on these problems, encourage developers to improve the safety of their products, and help hospitals and doctor’s offices make better-informed decisions about the purchase, implementation, and use of these tools. Research shows that aggregated data about EHRs can generate product-specific insights about safety deficiencies, even when health care facilities implement the same system in distinct ways….(More)”.