TLDR: this AI sums up research papers in a sentence


Jeffrey M. Perkel & Richard Van Noorden at Nature: “The creators of a scientific search engine have unveiled software that automatically generates one-sentence summaries of research papers, which they say could help scientists to skim-read papers faster.

The free tool, which creates what the team calls TLDRs (the common Internet acronym for ‘Too long, didn’t read’), was activated this week for search results at Semantic Scholar, a search engine created by the non-profit Allen Institute for Artificial Intelligence (AI2) in Seattle, Washington. For the moment, the software generates sentences only for the ten million computer-science papers covered by Semantic Scholar, but papers from other disciplines should be getting summaries in the next month or so, once the software has been fine-tuned, says Dan Weld, who manages the Semantic Scholar group at AI2…

Weld was inspired to create the TLDR software in part by the snappy sentences his colleagues share on Twitter to flag up articles. Like other language-generation software, the tool uses deep neural networks trained on vast amounts of text. The team first trained the network on tens of thousands of research papers matched with their titles, so that it could learn to generate concise sentences. The researchers then fine-tuned the software to summarize content by training it on a new data set of a few thousand computer-science papers with matching summaries, some written by the papers’ authors and some by a class of undergraduate students. The team has gathered training examples to improve the software’s performance in 16 other fields, with biomedicine likely to come first.

The TLDR software is not the only scientific summarizing tool: since 2018, the website Paper Digest has offered summaries of papers, but it seems to extract key sentences from text, rather than generate new ones, Weld notes. TLDR can generate a sentence from a paper’s abstract, introduction and conclusion. Its summaries tend to be built from key phrases in the article’s text, so are aimed squarely at experts who already understand a paper’s jargon. But Weld says the team is working on generating summaries for non-expert audiences….(More)”.
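The contrast Weld draws — extracting key sentences versus generating new ones — can be sketched in a few lines. The toy function below scores each sentence by how frequent its words are in the text overall and returns the top scorer; this is a minimal illustration of the extractive approach (it is not Paper Digest's actual method, and it is a far cry from TLDR's neural generation):

```python
from collections import Counter
import re

def extractive_tldr(text: str) -> str:
    """Return the sentence whose words are most frequent in the text.

    A toy illustration of extractive summarization: no new sentence is
    generated; one is merely selected from the source. (Summing raw
    frequencies favours longer sentences; real systems normalise and
    use far richer features.)
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> int:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens)

    return max(sentences, key=score)

abstract = (
    "We present a model for extreme summarization of scientific papers. "
    "The model is trained on papers paired with short summaries. "
    "Experiments show the model produces concise, informative summaries."
)
print(extractive_tldr(abstract))
```

Because the selected sentence is lifted verbatim from the source, extractive summaries inherit the paper's jargon just as TLDR's key-phrase-heavy outputs do.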

Interoperability as a tool for competition regulation


Paper by Ian Brown: “Interoperability is a technical mechanism for computing systems to work together – even if they are from competing firms. An interoperability requirement for large online platforms has been suggested by the European Commission as one ex ante (up-front rule) mechanism in its proposed Digital Markets Act (DMA), as a way to encourage competition. The policy goal is to increase choice and quality for users, and the ability of competitors to succeed with better services. The application would be to the largest online platforms, such as Facebook, Google, Amazon, smartphone operating systems (e.g. Android/iOS), and their ancillary services, such as payment and app stores.

This report analyses up-front interoperability requirements as a pro-competition policy tool for regulating large online platforms, exploring the economic and social rationales and possible regulatory mechanisms. It is based on a synthesis of recent comprehensive policy reviews of digital competition in major industrialised economies, and related academic literature, focusing on areas of emerging consensus while noting important disagreements. It draws particularly on the Vestager, Furman and Stigler reviews, and the UK Competition and Markets Authority’s study on online platforms and digital advertising. It also draws on interviews with software developers, platform operators, government officials, and civil society experts working in this field….(More)”.

Curating citizen engagement: Food solutions for future generations


EIT Food: “The Curating Citizen Engagement project will revolutionise our way of solving grand societal challenges by creating a platform for massive public involvement and knowledge generation, specifically targeting food-related issues. …Through a university course developed by partners representing different aspects of the food ecosystem (from sensory perception to nutrition to food policy), we will educate the next generation of students to be able to engage and involve the public in tackling food-related societal challenges. The students will learn iterative prototyping skills in order to create museum installations with built-in data-collection points that will engage the public and assist in shaping future food solutions. Thus, citizens are not only provided with knowledge on food-related topics, but are empowered and encouraged to actively use it, leading to more trust in the food sector in general….(More)”.

For America’s New Mayors, a Chance to Lead with Data


Article by Zachary Markovits and Molly Daniell: “While the presidential race drew much of the nation’s attention this year, voters also chose leaders in 346 mayoral elections, as well as many more city and county commission and council races, reshaping the character of government leadership from coast to coast.

These newly elected and re-elected leaders will enter office facing an unprecedented set of challenges: a worsening pandemic, weakened local economies, budget shortfalls and a reckoning over how government policies have contributed to racial injustice. To help their communities “build back better”—in the words of the new President-elect—these leaders will need not just more federal support, but also a strategy that is data-driven in order to protect their residents and ensure that resources are invested where they are needed most.

For America’s new mayors, it’s a chance to show the public what effective leadership looks like after a chaotic federal response to Covid-19—and no response can be fully effective without putting data at the center of how leaders make decisions.

Throughout 2020, we’ve been documenting the key steps that local leaders can take to advance a culture of data-informed decision-making. Here are five lessons that can help guide these new leaders as they seek to meet this moment of national crisis:

1. Articulate a vision

The voice of the chief executive is galvanizing and unlike any other in city hall. That’s why the vision for data-driven government must be articulated from the top. From the moment they are sworn in, mayors have the opportunity to lean forward and use their authority to communicate to the whole administration, council members and city employees about the shift to using data to drive policymaking.

Consider Los Angeles Mayor Eric Garcetti, who, upon coming into office, spearheaded an internal review process culminating in this memo to all general managers stressing the need for a culture of both continuous learning and performance. In this memo, he creates urgency and articulates precisely what will change, how the change will affect the success of the organization, and how it will build a data-driven culture….(More)”.

Crowdfunding during COVID-19: An international comparison of online fundraising


Paper by Greg Elmer, Sabrina Ward-Kimola and Anthony Glyn Burton: “This article performs a digital methods analysis on a sample of online crowdfunding campaigns seeking financial support for COVID-related financial challenges. Building upon the crowdfunding literature, this paper performs an international comparison of the goals of COVID-related campaigns during the early spread of the pandemic. The paper seeks to determine the extent to which crowdfunding campaigns reflect current failures of governments to suppress the COVID pandemic and support the financial challenges of families, communities and small businesses….(More)”.

Using artificial intelligence to make decisions: Addressing the problem of algorithmic bias (2020)


Foreword of a Report by the Australian Human Rights Commission: “Artificial intelligence (AI) promises better, smarter decision making.

Governments are starting to use AI to make decisions in welfare, policing and law enforcement, immigration, and many other areas. Meanwhile, the private sector is already using AI to make decisions about pricing and risk, to determine what sorts of people make the ‘best’ customers… In fact, the use cases for AI are limited only by our imagination.

However, using AI carries with it the risk of algorithmic bias. Unless we fully understand and address this risk, the promise of AI will be hollow.

Algorithmic bias is a kind of error associated with the use of AI in decision making, and often results in unfairness. Algorithmic bias can arise in many ways. Sometimes the problem is with the design of the AI-powered decision-making tool itself. Sometimes the problem lies with the data set that was used to train the AI tool, which could replicate or even make worse existing problems, including societal inequality.

Algorithmic bias can cause real harm. It can lead to a person being unfairly treated, or even suffering unlawful discrimination, on the basis of characteristics such as their race, age, sex or disability.

This project started by simulating a typical decision-making process. In this technical paper, we explore how algorithmic bias can ‘creep in’ to AI systems and, most importantly, how this problem can be addressed.

To ground our discussion, we chose a hypothetical scenario: an electricity retailer uses an AI-powered tool to decide how to offer its products to customers, and on what terms. The general principles and solutions for mitigating the problem, however, will be relevant far beyond this specific situation.

Because algorithmic bias can result in unlawful activity, there is a legal imperative to address this risk. However, good businesses go further than the bare minimum legal requirements, to ensure they always act ethically and do not jeopardise their good name.

Rigorous design, testing and monitoring can avoid algorithmic bias. This technical paper offers some guidance for companies to ensure that when they use AI, their decisions are fair, accurate and comply with human rights….(More)”
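The way bias “creeps in” through a training data set, as the foreword describes, can be illustrated with a deliberately artificial sketch (this is not the Commission's simulation; all data here are invented): a “model” that simply learns historical approval rates per postcode will reproduce past discrimination against a postcode that happens to be a proxy for a protected group, even for otherwise identical applicants.

```python
# Invented historical decisions: (postcode, income_band, approved).
# Past decision-makers approved applicants from postcode "2999" far
# less often, and that postcode correlates with a protected group.
history = [
    ("2000", "medium", True), ("2000", "medium", True),
    ("2000", "low", True), ("2000", "low", False),
    ("2999", "medium", False), ("2999", "medium", False),
    ("2999", "low", False), ("2999", "low", True),
]

def learned_rate(postcode: str) -> float:
    """Approval rate for a postcode in the historical data.

    A naive model fitted to these labels inherits exactly this bias:
    the data set, not the algorithm's design, carries the unfairness.
    """
    decisions = [approved for (pc, _, approved) in history if pc == postcode]
    return sum(decisions) / len(decisions)

# Two applicants identical in every attribute except postcode:
for postcode in ("2000", "2999"):
    print(postcode, learned_rate(postcode))
```

This is the report's second failure mode in miniature: the tool's design is unremarkable, but the training data replicate, and entrench, an existing societal inequality.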

‘It gave me hope in democracy’: how French citizens are embracing people power


Peter Yeung at The Guardian: “Angela Brito was driving back to her home in the Parisian suburb of Seine-et-Marne one day in September 2019 when the phone rang. The 47-year-old caregiver, accustomed to emergency calls, pulled over in her old Renault Megane to answer. The voice on the other end of the line informed her she had been randomly selected to take part in a French citizens’ convention on climate. Would she, the caller asked, be interested?

“I thought it was a real prank,” says Brito, a single mother of four who was born in the south of Portugal. “I’d never heard anything about it before. But I said yes, without asking any details. I didn’t believe it.”

Brito received a letter confirming her participation but she still didn’t really take it seriously. On 4 October, the official launch day, she got up at 7am as usual and, while driving to meet her first patient of the day, heard a radio news item on how 150 ordinary citizens had been randomly chosen for this new climate convention. “I said to myself, ah, maybe it was true,” she recalls.

At the home of her second patient, a good-humoured old man in a wheelchair, the TV news was on. Images of the grand Art Déco-style Palais d’Iéna, home of the citizens’ gathering, filled the screen. “I looked at him and said, ‘I’m supposed to be one of those 150,’” says Brito. “He told me, ‘What are you doing here then? Leave, get out, go there!’”

Brito had two hours to get to the Palais d’Iéna. “I arrived a little late, but I arrived!” she says.

Over the next nine months, Brito would take part in the French citizens’ convention for the climate, touted by Emmanuel Macron as an “unprecedented democratic experiment”, which would bring together 150 people aged 16 upwards, from all over France and all walks of French life – to learn, debate and then propose measures to reduce greenhouse gas emissions by at least 40% by 2030. By the end of the process, Brito and her fellow participants had convinced Macron to pledge an additional €15bn (£13.4bn) to the climate cause and to accept all but three of the group’s 149 recommendations….(More)”.

The Case for Digital Activism: Refuting the Fallacies of Slacktivism


Paper by Nora Madison and Mathias Klang: “This paper argues for the importance and value of digital activism. We first outline the arguments against digitally mediated activism and then address the counter-arguments against its derogatory criticisms. The low threshold for participating in technologically mediated activism seems to irk its detractors. Indeed, the term used to downplay digital activism is slacktivism, a portmanteau of slacker and activism. The use of slacker is intended to stress the inaction, low effort, and laziness of the person and thereby question their dedication to the cause. In this work we argue that digital activism plays a vital role in the arsenal of the activist and needs to be studied on its own terms in order to be more fully understood….(More)”

Facial-recognition research needs an ethical reckoning


Editorial in Nature: “…As Nature reports in a series of Features on facial recognition this week, many in the field are rightly worried about how the technology is being used. They know that their work enables people to be easily identified, and therefore targeted, on an unprecedented scale. Some scientists are analysing the inaccuracies and biases inherent in facial-recognition technology, warning of discrimination, and joining the campaigners calling for stronger regulation, greater transparency, consultation with the communities that are being monitored by cameras — and for use of the technology to be suspended while lawmakers reconsider where and how it should be used. The technology might well have benefits, but these need to be assessed against the risks, which is why it needs to be properly and carefully regulated.

Responsible studies

Some scientists are urging a rethink of ethics in the field of facial-recognition research, too. They are arguing, for example, that scientists should not be doing certain types of research. Many are angry about academic studies that sought to study the faces of people from vulnerable groups, such as the Uyghur population in China, whom the government has subjected to surveillance and detained on a mass scale.

Others have condemned papers that sought to classify faces by scientifically and ethically dubious measures such as criminality….One problem is that AI guidance tends to consist of principles that aren’t easily translated into practice. Last year, the philosopher Brent Mittelstadt at the University of Oxford, UK, noted that at least 84 AI ethics initiatives had produced high-level principles on both the ethical development and deployment of AI (B. Mittelstadt Nature Mach. Intell. 1, 501–507; 2019). These tended to converge around classical medical-ethics concepts, such as respect for human autonomy, the prevention of harm, fairness and explicability (or transparency). But Mittelstadt pointed out that different cultures disagree fundamentally on what principles such as ‘fairness’ or ‘respect for autonomy’ actually mean in practice. Medicine has internationally agreed norms for preventing harm to patients, and robust accountability mechanisms. AI lacks these, Mittelstadt noted. Specific case studies and worked examples would be much more helpful to prevent ethics guidance becoming little more than window-dressing….(More)”.

Technologies of Speculation: The limits of knowledge in a data-driven society


Book by Sun-ha Hong: “What counts as knowledge in the age of big data and smart machines? In its pursuit of better knowledge, technology is reshaping what counts as knowledge in its own image – and demanding that the rest of us catch up to new machinic standards for what counts as suspicious, informed, employable. In the process, datafication often generates speculation as much as it does information. The push for algorithmic certainty sets loose an expansive array of incomplete archives, speculative judgments and simulated futures where technology meets enduring social and political problems.

Technologies of Speculation traces this technological manufacturing of speculation as knowledge. It shows how unprovable predictions, uncertain data and black-boxed systems are upgraded into the status of fact – with lasting consequences for criminal justice, public opinion, employability, and more. It tells the story of vast dragnet systems constructed to predict the next terrorist, and how familiar forms of prejudice seep into the data by the back door. In software placeholders like ‘Mohammed Badguy’, the fantasy of pure data collides with the old spectre of national purity. It tells the story of smart machines for ubiquitous and automated self-tracking, manufacturing knowledge that paradoxically lies beyond the human senses. Such data is increasingly being taken up by employers, insurers and courts of law, creating imperfect proxies through which my truth can be overruled.

The book situates ongoing controversies over AI and algorithms within a broader societal faith in objective truth and technological progress. It argues that even as datafication leverages this faith to establish its dominance, it is dismantling the longstanding link between knowledge and human reason, rational publics and free individuals. Technologies of Speculation thus emphasises the basic ethical problem underlying contemporary debates over privacy, surveillance and algorithmic bias: who, or what, has the right to the truth of who I am and what is good for me? If data promises objective knowledge, then we must ask in return: knowledge by and for whom, enabling what forms of life for the human subject?…(More)”.