Congressional Research Service: “In common parlance, the terms propaganda, misinformation, and disinformation are often used interchangeably, often with connotations of deliberate untruths of nefarious origin. In a national security context, however, these terms refer to categories of information that are created and disseminated with different intent and serve different strategic purposes. This primer examines these categories to create a framework for understanding the national security implications of information related to the Coronavirus Disease 2019 (COVID-19) pandemic….(More)”.
Edd Gent at the BBC: “…There are already promising examples of how AI can help us better pool our unique capabilities. San Francisco start-up Unanimous AI has built an online platform that helps guide group decisions. They’ve looked to an unlikely place to guide their AI: the way honeybees make collective decisions.
“We went back to basics and said, ‘How does nature amplify the intelligence of groups?’,” says CEO Louis Rosenberg. “What nature does is form real-time systems, where the groups are interacting all at once together with feedback loops. So, they’re pushing and pulling on each other as a system, and converging on the best possible combination of their knowledge, wisdom, insight and intuition.”
Their Swarm AI platform presents groups with a question and places potential answers in different corners of their screen. Users control a virtual magnet with their mouse and engage in a tug of war to drag an ice hockey puck to the answer they think is correct. The system’s algorithm analyses how each user interacts with the puck – for instance, how much conviction they drag it with or how quickly they waver when they’re in the minority – and uses this information to determine where the puck moves. That creates feedback loops in which each user is influenced by the choices and conviction of the others, allowing the puck to end up at the answer that best reflects the collective wisdom of the group.
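The dynamics described above can be sketched as a toy simulation. This is an illustrative model only, not Unanimous AI’s actual algorithm: the option positions, conviction weights, decay rule and step size are all assumptions made for the sketch.

```python
def swarm_converge(agents, options, rounds=50, decay=0.9):
    """Toy swarm decision: agents pull a 'puck' toward their preferred option.

    agents: list of (preferred_option_index, conviction) pairs, conviction in [0, 1]
    options: list of 1-D positions, one per answer
    Returns the index of the option the puck ends up nearest to.
    """
    puck = sum(options) / len(options)          # puck starts at the centre
    agents = [list(a) for a in agents]          # mutable copies of (index, conviction)
    span = max(options) - min(options)
    for _ in range(rounds):
        # Net force: each agent pulls toward its option, weighted by conviction.
        force = sum(c * (options[i] - puck) for i, c in agents)
        puck += 0.1 * force / len(agents)
        # Feedback loop: agents whose option the puck is drifting away from
        # (i.e. in the minority) lose conviction each round.
        for a in agents:
            if abs(options[a[0]] - puck) > span / 2:
                a[1] *= decay
    # Report the option nearest the final puck position.
    return min(range(len(options)), key=lambda i: abs(options[i] - puck))
```

With a majority of confident agents pulling one way, e.g. `swarm_converge([(1, 0.9), (1, 0.8), (0, 0.3)], [0.0, 1.0])`, the puck settles at option 1; the minority agent’s conviction decays as the puck drifts away from its corner.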
The effectiveness of the Swarm AI platform is backed up by several academic papers and by high-profile clients who use the product. In one recent study, a group of traders were asked to forecast the weekly movement of several key stock market indices by trying to drag the puck to one of four answers — up or down by more than 4%, or up or down by less than 4%. With the tool, they boosted their accuracy by 36%.
Credit Suisse has used the platform to help investors forecast the performance of Asian markets; Disney has used it to predict the success of TV shows; and Unanimous has even partnered with Stanford Medical School to boost doctors’ ability to diagnose pneumonia from chest X-rays by 33%….(More)”
See also: Where and when AI and CI meet: exploring the intersection of artificial and collective intelligence towards the goal of innovating how we govern and Identifying Citizens’ Needs by Combining Artificial Intelligence (AI) and Collective Intelligence (CI).
Report by the Stiftung Neue Verantwortung: “How easy it is to order a book on an online shop’s website, how intuitive maps or navigation services are to use in everyday life, or how laborious it is to set up a customer account for a car-sharing service: such features and ‘user flows’ have become incredibly important to customers. Today, the “user friendliness” of a digital platform or service can have a significant influence on how well a product sells or what market share it gains. Therefore, not only operators of large online platforms but also companies in more traditional sectors of the economy are investing more in designing websites, apps or software so that they can be used easily, intuitively and with as little time and effort as possible.
This approach to product design is called user-centered design (UX design) and is based on observing how people interact with digital products, developing prototypes and testing them in experiments. These methods are used not only to improve the user-friendliness of digital interfaces but also to improve certain performance indicators relevant to the business – whether raising the number of users who register as new customers, increasing the sales volume per user or encouraging as many users as possible to share personal data.
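The experiments mentioned above are typically A/B tests: two interface variants are shown to different user groups and their performance indicators compared. A minimal sketch of the underlying statistics, a two-proportion z-test on conversion counts, might look as follows (the numbers and function name are hypothetical; real UX teams usually rely on dedicated experimentation platforms):

```python
from math import sqrt, erf

def ab_test(conv_a, n_a, conv_b, n_b):
    """Compare two interface variants by conversion counts.

    Returns the z statistic and two-sided p-value for the difference
    in conversion rates between variant A and variant B.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF, via the error function.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value
```

For example, if a redesigned sign-up page converts 150 of 1,000 visitors against 100 of 1,000 for the old design, the test yields a z statistic above 3 and a p-value well below 0.01, so the team would conclude the redesign genuinely raises registrations.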
UX design, along with intensive testing and optimization of user interfaces, has become standard in today’s digital product development and an important growth driver for many companies. However, this development also has a side effect: since companies and users can have conflicting interests and needs with regard to the design of digital products or services, design practices that cause problems for, or even harm to, users are spreading.
Examples of problematic design choices include warnings and countdowns that create time pressure in online shops, settings windows designed to make it difficult for users to activate data protection settings, or website architectures that make it extremely time-consuming to delete an account. Such practices are called “dark patterns”, “deceptive design” or “unethical design”, and are defined as design practices which, intentionally or unintentionally, influence people to their disadvantage and potentially manipulate users in their behaviour or decisions….(More)”.
Samuel Stolton at Euractiv: “As part of a series of debates in Parliament’s Legal Affairs Committee on Tuesday afternoon, MEPs exchanged ideas concerning several reports on Artificial Intelligence, covering ethics, civil liability, and intellectual property.
The reports represent Parliament’s recommendations to the Commission on the future for AI technology in the bloc, following the publication of the executive’s White Paper on Artificial Intelligence, which stated that high-risk technologies in ‘critical sectors’ and those deemed to be of ‘critical use’ should be subjected to new requirements.
In one Parliament initiative on the ethical aspects of AI, its lead author, Spanish Socialist Ibán García del Blanco, argues that a uniform regulatory framework for AI in Europe is necessary to keep member states from adopting divergent approaches.
“We felt that regulation is important to make sure that there is no restriction on the internal market. If we leave scope to the member states, I think we’ll see greater legal uncertainty,” García del Blanco said on Tuesday.
In the context of the current public health crisis, García del Blanco also said the use of certain biometric applications and remote recognition technologies should be proportionate, while respecting the EU’s data protection regime and the EU Charter of Fundamental Rights.
A new EU agency for Artificial Intelligence?
One of the most contested areas of García del Blanco’s report was his suggestion that the EU should establish a new agency responsible for overseeing compliance with future ethical principles in Artificial Intelligence.
“We shouldn’t get distracted by the idea of setting up an agency, European Union citizens are not interested in setting up further bodies,” said the conservative EPP’s shadow rapporteur on the file, Geoffroy Didier.
The centrist-liberal Renew group also did not warm to the idea of establishing a new agency for AI, with MEP Stephane Sejourne saying that existing bodies could have their remits extended instead.
In the previous mandate, as part of a 2017 resolution on Civil Law Rules on Robotics, Parliament had called upon the Commission to ‘consider’ whether an EU Agency for Robotics and Artificial Intelligence could be worth establishing in the future.
Another point of divergence consistently raised by MEPs on Tuesday was the lack of harmony in key definitions related to Artificial Intelligence across different Parliamentary texts, which could create legal loopholes in the future.
In this vein, members highlighted the need to work towards joint definitions for Artificial intelligence operations, in order to ensure consistency across Parliament’s four draft recommendations to the Commission….(More)”.
Essay by Geoff Mulgan: “Crises – whether wars or pandemics – can sometimes, though not always, fuel social imagination. New arrangements have to be created at breakneck speed and old norms have to be discarded. The deeper the crisis the more likely it is that people ask not for a return to normal but for a jump to something different and better.
So it is now. Across the world countries are beginning to think about how life after COVID-19 might be different: could we use the crisis to solve the problems of carbon, low status for care-workers, or welfare states ill-suited to new forms of precariousness? As this debate gathers speed, it’s opening up questions about the role of the social sciences. They’re playing a vital role in helping countries to manage the crisis, and to plan for recovery. But how much are they there to understand the past and present – and how much should they help us to shape the future?
A century ago the answers were perhaps more obvious than today. HG Wells early in the last century described sociology as ‘the description of the Ideal Society and its relation to existing societies’. The founders of UCL in the mid-19th century and of LSE at the end of the 19th century, saw them as vehicles to change the world not just to interpret it. It was taken for granted that social science should help map out possible futures – new rights, new forms of social policy, new ways of running economies.
Unfortunately, these traditions have largely atrophied. Within academia you are far more likely to make a successful career analysing past patterns, or critiquing the present, than offering designs for the future. That is partly the result of very healthy trends – in particular, more attention being paid to evidence and data. But it’s left a gap since, by definition, there isn’t any hard evidence about a future that hasn’t yet happened. There are a few small pockets of more speculative, future-oriented work in universities. But they’re seen as quite marginal, and a fair proportion of this work is inward looking – feeding into academic journals and very small audiences – rather than feeding into political programmes and public imagination as happened in the past. Meanwhile one of the less attractive legacies of several decades of post-structuralism and post-modernism is that many academics believe they have much more of a duty to critique than to propose or create.
Outside the academy the traditions of social imagination have also atrophied. Political parties have largely closed down the research departments that once helped them think. Thinktanks have become ever more locked into news cycles rather than long range thinking.
In the late 20th century the progressive movements of the left lost confidence in a forward march of history, and the green movements that have partly replaced them have proven more effective at persuading people of the likelihood of future ecological disaster than at promoting positive alternatives (though green visions of future arrangements for food and circular economies are a partial exception to the picture I’m describing here). As a result much of the role of future imagination has been left to fiction.
One symptom is that many fewer people today can articulate a plausible and desirable better society than was the case 50 or 100 years ago. Majorities in countries like the UK now expect their children to be worse off than they are….(More)”.
Press Release: “As part of efforts to identify priorities across sectors in which data and data science could make a difference, The Governance Lab (The GovLab) at the New York University Tandon School of Engineering has partnered with Data2X, the gender data alliance housed at the United Nations Foundation, to release ten pressing questions on gender that experts have determined can be answered using data. Members of the public are invited to share their views and vote to help develop a data agenda on gender.
The questions are part of the 100 Questions Initiative, an effort to identify the most important societal questions that can be answered by data. The project relies on an innovative process of sourcing “bilinguals,” individuals with both subject-matter and data expertise, who in this instance provided questions related to gender they considered to be urgent and answerable. The results span issues of labor, health, climate change, and gender-based violence.
Through the initiative’s new online platform, anyone can now vote on what they consider to be the most pressing, data-related questions about gender that researchers and institutions should prioritize. Through voting, the public can steer the conversation and determine which topics should be the subject of data collaboratives, an emerging form of collaboration that allows organizations from different sectors to exchange data to create public value.
The GovLab has conducted significant research on the value and practice of data collaboratives, and its research shows that inter-sectoral collaboration can both increase access to data and unleash the potential of that data to serve the public good.
Data2X supported the 100 Questions Initiative by providing expertise and connecting The GovLab with relevant communities, events, and resources. The initiative helped inform Data2X’s “Big Data, Big Impact? Towards Gender-Sensitive Data Systems” report, which identifies gaps in information on gender equality across key policy domains.
“Asking the right questions is a critical first step in fostering data production and encouraging data use to truly meet the unique experiences and needs of women and girls,” said Emily Courey Pryor, executive director of Data2X. “Obtaining public feedback is a crucial way to identify the most urgent questions — and to ultimately incentivize investment in gender data collection and use to find the answers.” “Sourcing and prioritizing questions related to gender can inform resource and funding allocation to address gender data gaps and support projects with the greatest potential impact,” said Stefaan Verhulst, co-founder and chief research and development officer at The GovLab. “This way, we can be confident about solutions that address the challenges facing women and girls.”…(More)”.
Essay by Stuart Whatley: “It is now a familiar story. A civilization that measures itself by its technological achievements is confronted with the limits of its power. A new threat, a sudden shock, has shown its tools to be wanting, yet it is now more dependent on them than ever before. While the few in a position to wrest back a semblance of control busy themselves preparing new models and methods, the nonessential masses hurl themselves at luminescent screens, like so many moths to the flame.
It is precisely at such moments of technological dependency that one might consider interrogating one’s relationship with technology more broadly. Yes, “this too shall pass,” because technology always holds the key to our salvation. The question is whether it also played a role in our original sin.
In 1909, following a watershed era of technological progress, but preceding the industrialized massacres of the Somme and Verdun, E.M. Forster imagined, in “The Machine Stops,” a future society in which the entirety of lived experience is administered by a kind of mechanical demiurge. The story is the perfect allegory for the moment, owing not least to its account of a society-wide sudden stop and its eerily prescient description of isolated lives experienced wholly through screens.
The denizens (for they are not citizens) of Forster’s world while away their days in single-occupancy hexagonal underground rooms, where all of their basic needs are made available on demand. “The Machine…feeds us and clothes us and houses us,” they exclaim, “through it we speak to one another, through it we see one another, in it we have our being.” As such, one’s only duty is to abide by the “spirit of the age.” Whereas in the past that may have entailed sacrifices, always to ensure “that the Machine may progress, that the Machine may progress eternally,” most inhabitants now lead lives of leisure, “eating, or sleeping, or producing ideas.”
Yet despite all of their comforts and free time, they are a harried leisure class, because they have absorbed the values of the Machine itself. They are obsessed with efficiency, an impulse that they discharge by trying to render order (“ideas”) from the unmanageable glut of information that the machine spits out. One character, Vashti, is a fully initiated member of the cult of efficiency. She does not bother trying to acquire a bed to fit her smaller stature more comfortably, for she accepts that “to have an alternative size would have involved vast alterations in the Machine.” Nor does she have any interest in traveling, because she generates “no ideas in an air-ship.” To her mind, any habit that “was unproductive of ideas…had no connexion with the habits that really mattered.” Everyone simply accepts that although the machine’s video feeds do not convey the nuances of one’s facial expressions, they’re “good enough for all practical purposes.”
Chief among Vashti’s distractions is her son, Kuno, a Cassandra-like figure who dares to point out that, “The Machine develops—but not on our lines. The Machine proceeds—but not to our goal.” When the mechanical system eventually begins to break down (starting with the music-streaming service, then the beds), the people have no choice but to take further recourse in the Machine. Complaints are lodged with the Committee of the Mending Apparatus, but the Mending Apparatus itself turns out to be broken. Rather than protest further, the people pray and pine for the Machine’s quick recovery. By that “latter day,” Forster explains, they “had become so subservient that they readily adapted themselves to every caprice of the Machine.”…(More)”.
Press Release: “The National Academies of Sciences, Engineering, and Medicine and the National Science Foundation announced today the formation of a Societal Experts Action Network (SEAN) to connect social and behavioral science researchers with decision-makers who are leading the response to COVID-19. SEAN will respond to the most pressing social, behavioral, and economic questions that are being asked by federal, state, and local officials by working with appropriate experts to quickly provide actionable answers.
The new network’s activities will be overseen by an executive committee in coordination with the National Academies’ Standing Committee on Emerging Infectious Diseases and 21st Century Health Threats, established earlier this year to provide rapid expert input on urgent questions facing the federal government on the COVID-19 pandemic. Standing committee members Robert Groves, executive vice president and provost at Georgetown University, and Mary T. Bassett, director of the François-Xavier Bagnoud Center for Health and Human Rights at Harvard University, will co-chair the executive committee to manage SEAN’s solicitation of questions and expert responses, anticipate leaders’ research needs, and guide the dissemination of network findings.
SEAN will include individual researchers from a broad range of disciplines as well as leading national social and behavioral science institutions. Responses to decision-maker requests may range from individual phone calls and presentations to written committee documents such as Rapid Expert Consultations.
“This pandemic has broadly impacted all aspects of life — not just our health, but our work, families, education, supply chains, and even the global environment,” said Marcia McNutt, president of the National Academy of Sciences. “Therefore, to address the myriad questions that are being raised by mayors, governors, local representatives, and other leaders, we must recruit the full range of scientific expertise from across the social, natural, and biomedical sciences.”
“Our communities and our society at large are facing a range of complex issues on multiple fronts due to COVID-19,” said Arthur Lupia, head of the Directorate for Social, Behavioral, and Economic Sciences at the National Science Foundation. “These are human-centered issues affecting our daily lives — the education and well-being of our children, the strength of our economy, the health of our loved ones, neighbors, and so many more. Through SEAN, social and behavioral scientists will provide actionable, evidence-driven guidance to our leaders across the U.S. who are working to support our communities and speed their recovery.”…(More)”.
Report by the Ada Lovelace Institute and DataKind UK: “As algorithmic systems become more critical to decision making across many parts of society, there is increasing interest in how they can be scrutinised and assessed for societal impact, and regulatory and normative compliance.
This report is primarily aimed at policymakers, to inform more accurate and focused policy conversations. It may also be helpful to anyone who creates, commissions or interacts with an algorithmic system and wants to know what methods or approaches exist to assess and evaluate that system…
Clarifying terms and approaches
Through literature review and conversations with experts from a range of disciplines, we’ve identified four prominent approaches to assessing algorithms that are often referred to by just two terms: algorithm audit and algorithmic impact assessment. But there is not always agreement on what these terms mean among different communities: social scientists, computer scientists, policymakers and the general public have different interpretations and frames of reference.
While there is broad enthusiasm among policymakers for algorithm audits and impact assessments, there is often a lack of detail about the approaches being discussed. This stems both from the confusion of terms and from the differing maturity of the approaches the terms describe.
Clarifying which approach we’re referring to, as well as where further research is needed, will help policymakers and practitioners to do the more vital work of building evidence and methodology to take these approaches forward.
We focus on algorithm audit and algorithmic impact assessment. For each term, we identify two key approaches it can refer to:
- Algorithm audit
  - Bias audit: a targeted, non-comprehensive approach focused on assessing algorithmic systems for bias
  - Regulatory inspection: a broad approach, focused on an algorithmic system’s compliance with regulation or norms, necessitating a number of different tools and methods; typically performed by regulators or auditing professionals
- Algorithmic impact assessment
  - Algorithmic risk assessment: assessing possible societal impacts of an algorithmic system before the system is in use (with ongoing monitoring often advised)
  - Algorithmic impact evaluation: assessing possible societal impacts of an algorithmic system on the users or population it affects after it is in use…(More)”.
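As an illustration of what the narrowest of these approaches, a bias audit, might compute in practice, here is a minimal sketch that derives per-group selection rates and a demographic-parity gap from a system’s decision log (the function name and data shape are assumptions made for the example; real audits use richer metrics and data):

```python
from collections import defaultdict

def selection_rates(decisions):
    """Minimal bias-audit metric over an algorithmic system's decision log.

    decisions: list of (group, approved) pairs, e.g. ("a", True)
    Returns per-group approval rates and the demographic-parity gap
    (the difference between the highest and lowest group rate).
    """
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap
```

A log in which group “a” is approved twice as often as group “b” would yield a gap of one third; an auditor would then investigate whether that disparity is justified by legitimate factors or constitutes unlawful discrimination.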
Andrew Young at The GovLab: “The GovLab and UNICEF, as part of the Responsible Data for Children initiative (RD4C), are pleased to share a set of user-friendly tools to support organizations and practitioners seeking to operationalize the RD4C Principles. These principles—Purpose-Driven, People-Centric, Participatory, Protective of Children’s Rights, Proportional, Professionally Accountable, and Prevention of Harms Across the Data Lifecycle—are especially important in the current moment, as actors around the world are taking a data-driven approach to the fight against COVID-19.
The initial components of the RD4C Toolkit are:
The RD4C Data Ecosystem Mapping Tool helps users identify the systems generating data about children and the key components of those systems. After using this tool, users will be positioned to understand the breadth of data they generate and hold about children; assess data systems’ redundancies or gaps; identify opportunities for responsible data use; and achieve other insights.
The RD4C Decision Provenance Mapping methodology provides a way for actors designing or assessing data investments for children to identify key decision points and determine which internal and external parties influence those decision points. This distillation can help users to pinpoint any gaps and develop strategies for improving decision-making processes and advancing more professionally accountable data practices.
The RD4C Opportunity and Risk Diagnostic provides organizations with a way to take stock of the RD4C principles and how they might be realized as an organization reviews a data project or system. The tool’s high-level questions and prompts are intended to help users identify areas in need of attention and to strategize next steps for ensuring more responsible handling of data for and about children across their organization.
Finally, the Data for Children Collaborative with UNICEF developed an Ethical Assessment that “forms part of [their] safe data ecosystem, alongside data management and data protection policies and practices.” The tool reflects the RD4C Principles and aims to “provide an opportunity for project teams to reflect on the material consequences of their actions, and how their work will have real impacts on children’s lives.”
RD4C launched in October 2019 with the release of the RD4C Synthesis Report, Selected Readings, and the RD4C Principles. Last month we published the RD4C Case Studies, which analyze data systems deployed in diverse country environments, with a focus on their alignment with the RD4C Principles. The case studies are: Romania’s The Aurora Project, Childline Kenya, and Afghanistan’s Nutrition Online Database.