Report by the Centre for Data Ethics and Innovation (CDEI) (UK): “Unfair biases, whether conscious or unconscious, can be a problem in many decision-making processes. This review considers the impact that an increasing use of algorithmic tools is having on bias in decision-making, the steps that are required to manage risks, and the opportunities that better use of data offers to enhance fairness. We have focused on the use of
algorithms in significant decisions about individuals, looking across four sectors (recruitment, financial services, policing and local government), and making cross-cutting recommendations that aim to help build the right systems so that algorithms improve, rather than worsen, decision-making…(More)”.
The new SkillsMatch platform tackles skills assessment and matches your skills with training
European Commission: “The European labour market requires new skills to meet the demands of the Digital Age. EU citizens should have the right training, skills and support to empower them to find quality jobs and improve their living standards.
‘Soft skills’ such as confidence, teamwork, self-motivation, networking and presentation skills are considered important for the employability and adaptability of Europe’s citizens. Soft skills are essential to how we work together, influence the decisions we take every day and can be more important than hard skills in today’s workplaces. The lack of soft skills is often only discovered once a person is already on the job.
The state-of-the-art SkillsMatch platform helps users to match and adapt their soft skills assets to the demands of the labour market. The project is the first to offer a fully comprehensive platform cataloguing 36 different soft skills and matching them with occupations, as well as training opportunities, offering a large number of courses to improve soft skills depending on the chosen occupation.
The platform proposes courses, such as organisation and personal development, entrepreneurship, business communication and conflict resolution. There is a choice of courses in Spanish and English. The platform will also provide recognition of new learning and skills through open badges…(More)”.
Malicious Uses and Abuses of Artificial Intelligence
Report by Europol, the United Nations Interregional Crime and Justice Research Institute (UNICRI) and Trend Micro: “… looking into current and predicted criminal uses of artificial intelligence (AI)… The report provides law enforcers, policy makers and other organizations with information on existing and potential attacks leveraging AI and recommendations on how to mitigate these risks.
“AI promises the world greater efficiency, automation and autonomy. At a time when the public is getting increasingly concerned about the possible misuse of AI, we have to be transparent about the threats, but also look into the potential benefits from AI technology,” said Edvardas Šileris, Head of Europol’s Cybercrime Centre. “This report will help us not only to anticipate possible malicious uses and abuses of AI, but also to prevent and mitigate those threats proactively. This is how we can unlock the potential AI holds and benefit from the positive use of AI systems.”
The report concludes that cybercriminals will leverage AI both as an attack vector and an attack surface. Deepfakes are currently the best-known use of AI as an attack vector. However, the report warns that new screening technology will be needed in the future to mitigate the risk of disinformation campaigns and extortion, as well as threats that target AI data sets.
For example, AI could be used to support:
- Convincing social engineering attacks at scale
- Document-scraping malware to make attacks more efficient
- Evasion of image recognition and voice biometrics
- Ransomware attacks, through intelligent targeting and evasion
- Data pollution, by identifying blind spots in detection rules
The three organizations make several recommendations to conclude the report:
- Harness the potential of AI technology as a crime-fighting tool to future-proof the cybersecurity industry and policing
- Continue research to stimulate the development of defensive technology
- Promote and develop secure AI design frameworks
- De-escalate politically loaded rhetoric on the use of AI for cybersecurity purposes
- Leverage public-private partnerships and establish multidisciplinary expert groups…(More)”.
Using artificial intelligence to make decisions: Addressing the problem of algorithmic bias (2020)
Foreword of a Report by the Australian Human Rights Commission: “Artificial intelligence (AI) promises better, smarter decision making.
Governments are starting to use AI to make decisions in welfare, policing and law enforcement, immigration, and many other areas. Meanwhile, the private sector is already using AI to make decisions about pricing and risk, to determine what sorts of people make the ‘best’ customers… In fact, the use cases for AI are limited only by our imagination.
However, using AI carries with it the risk of algorithmic bias. Unless we fully understand and address this risk, the promise of AI will be hollow.
Algorithmic bias is a kind of error associated with the use of AI in decision making, and often results in unfairness. Algorithmic bias can arise in many ways. Sometimes the problem is with the design of the AI-powered decision-making tool itself. Sometimes the problem lies with the data set that was used to train the AI tool, which could replicate or even worsen existing problems, including societal inequality.
Algorithmic bias can cause real harm. It can lead to a person being unfairly treated, or even suffering unlawful discrimination, on the basis of characteristics such as their race, age, sex or disability.
This project started by simulating a typical decision-making process. In this technical paper, we explore how algorithmic bias can ‘creep into’ AI systems and, most importantly, how this problem can be addressed.
To ground our discussion, we chose a hypothetical scenario: an electricity retailer uses an AI-powered tool to decide how to offer its products to customers, and on what terms. The general principles and solutions for mitigating the problem, however, will be relevant far beyond this specific situation.
Because algorithmic bias can result in unlawful activity, there is a legal imperative to address this risk. However, good businesses go further than the bare minimum legal requirements, to ensure they always act ethically and do not jeopardise their good name.
Rigorous design, testing and monitoring can avoid algorithmic bias. This technical paper offers some guidance for companies to ensure that when they use AI, their decisions are fair, accurate and comply with human rights….(More)”
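To make the kind of testing the report recommends concrete, here is a minimal, hypothetical sketch of one such check for the electricity-retailer scenario: comparing how often the AI-powered tool gives a favourable offer to different demographic groups and flagging large gaps. The column names, sample data and the four-fifths threshold are illustrative assumptions, not the Commission's methodology.

```python
import pandas as pd

def selection_rates(decisions: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of favourable outcomes (e.g. a good-value offer) within each group."""
    return decisions.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Lowest group rate divided by the highest; 1.0 means parity."""
    return rates.min() / rates.max()

# Toy outcomes from the hypothetical retailer's AI-powered offer tool.
offers = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B", "B"],
    "good_offer": [1,   1,   0,   1,   0,   0,   0,   1],
})

rates = selection_rates(offers, "group", "good_offer")
ratio = disparate_impact_ratio(rates)
print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # 'four-fifths' rule of thumb; an assumed threshold, not a legal standard
    print("Warning: favourable offers are distributed unevenly across groups; investigate for bias.")
```

A single summary ratio is of course only a starting point; the report stresses rigorous design, testing and ongoing monitoring rather than a one-off check.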
Facial-recognition research needs an ethical reckoning
Editorial in Nature: “…As Nature reports in a series of Features on facial recognition this week, many in the field are rightly worried about how the technology is being used. They know that their work enables people to be easily identified, and therefore targeted, on an unprecedented scale. Some scientists are analysing the inaccuracies and biases inherent in facial-recognition technology, warning of discrimination, and joining the campaigners calling for stronger regulation, greater transparency, consultation with the communities that are being monitored by cameras — and for use of the technology to be suspended while lawmakers reconsider where and how it should be used. The technology might well have benefits, but these need to be assessed against the risks, which is why it needs to be properly and carefully regulated.
Responsible studies
Some scientists are urging a rethink of ethics in the field of facial-recognition research, too. They are arguing, for example, that scientists should not be doing certain types of research. Many are angry about academic studies that sought to study the faces of people from vulnerable groups, such as the Uyghur population in China, whom the government has subjected to surveillance and detained on a mass scale.
Others have condemned papers that sought to classify faces by scientifically and ethically dubious measures such as criminality…. One problem is that AI guidance tends to consist of principles that aren’t easily translated into practice. Last year, the philosopher Brent Mittelstadt at the University of Oxford, UK, noted that at least 84 AI ethics initiatives had produced high-level principles on both the ethical development and deployment of AI (B. Mittelstadt Nature Mach. Intell. 1, 501–507; 2019). These tended to converge around classical medical-ethics concepts, such as respect for human autonomy, the prevention of harm, fairness and explicability (or transparency). But Mittelstadt pointed out that different cultures disagree fundamentally on what principles such as ‘fairness’ or ‘respect for autonomy’ actually mean in practice. Medicine has internationally agreed norms for preventing harm to patients, and robust accountability mechanisms. AI lacks these, Mittelstadt noted. Specific case studies and worked examples would be much more helpful to prevent ethics guidance becoming little more than window-dressing….(More)”.
Evaluating Identity Disclosure Risk in Fully Synthetic Health Data: Model Development and Validation
Paper by Khaled El Emam et al: “There has been growing interest in data synthesis for enabling the sharing of data for secondary analysis; however, there is a need for a comprehensive privacy risk model for fully synthetic data: If the generative models have been overfit, then it is possible to identify individuals from synthetic data and learn something new about them.
Objective: The purpose of this study is to develop and apply a methodology for evaluating the identity disclosure risks of fully synthetic data.
Methods: A full risk model is presented, which evaluates both identity disclosure and the ability of an adversary to learn something new if there is a match between a synthetic record and a real person. We term this “meaningful identity disclosure risk.” The model is applied on samples from the Washington State Hospital discharge database (2007) and the Canadian COVID-19 cases database. Both of these datasets were synthesized using a sequential decision tree process commonly used to synthesize health and social science data.
Results: The meaningful identity disclosure risk for both of these synthesized samples was below the commonly used 0.09 risk threshold (0.0198 and 0.0086, respectively), and 4 times and 5 times lower than the risk values for the original datasets, respectively.
Conclusions: We have presented a comprehensive identity disclosure risk model for fully synthetic data. The results for this synthesis method on 2 datasets demonstrate that synthesis can reduce meaningful identity disclosure risks considerably. The risk model can be applied in the future to evaluate the privacy of fully synthetic data….(More)”.
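As a rough intuition for the matching step the Methods describe (the paper's full model also weights whether an adversary would learn something new from a match), here is a toy sketch that counts how many synthetic records line up with exactly one real record on a set of quasi-identifiers. The quasi-identifier columns and sample data are illustrative assumptions, not the authors' implementation.

```python
import pandas as pd

QUASI_IDENTIFIERS = ["age_group", "sex", "region"]  # assumed quasi-identifiers for illustration

def naive_match_rate(synthetic: pd.DataFrame, real: pd.DataFrame) -> float:
    """Fraction of synthetic records whose quasi-identifier combination
    singles out exactly one record in the real data."""
    real_counts = real.groupby(QUASI_IDENTIFIERS).size().to_dict()  # keys are tuples of values
    matches = sum(
        1
        for _, row in synthetic.iterrows()
        if real_counts.get(tuple(row[c] for c in QUASI_IDENTIFIERS), 0) == 1
    )
    return matches / len(synthetic)

real = pd.DataFrame({
    "age_group": ["30-39", "30-39", "60-69"],
    "sex":       ["F",     "M",     "F"],
    "region":    ["north", "north", "south"],
})
synthetic = pd.DataFrame({
    "age_group": ["30-39", "60-69"],
    "sex":       ["F",     "M"],
    "region":    ["north", "north"],
})

risk = naive_match_rate(synthetic, real)
print(f"Naive match rate: {risk:.2f}")
```

Real evaluations are considerably more involved, generalising quasi-identifiers, adjusting for sampling from the population and checking whether a matched record reveals anything new, which is what the paper's "meaningful identity disclosure risk" captures.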
Algorithmic governance: A modes of governance approach
Article by Daria Gritsenko and Matthew Wood: “This article examines how modes of governance are reconfigured as a result of using algorithms in the governance process. We argue that deploying algorithmic systems creates a shift toward a special form of design‐based governance, with power exercised ex ante via choice architectures defined through protocols, requiring lower levels of commitment from governing actors. We use governance of three policy problems – speeding, disinformation, and social sharing – to illustrate what happens when algorithms are deployed to enable coordination in modes of hierarchical governance, self‐governance, and co‐governance. Our analysis shows that algorithms increase efficiency while decreasing the space for governing actors’ discretion. Furthermore, we compare the effects of algorithms in each of these cases and explore sources of convergence and divergence between the governance modes. We suggest design‐based governance modes that rely on algorithmic systems might be re‐conceptualized as algorithmic governance to account for the prevalence of algorithms and the significance of their effects….(More)”.
The political choreography of the Sophia robot: beyond robot rights and citizenship to political performances for the social robotics market
Paper: “A humanoid robot named ‘Sophia’ has sparked controversy since it has been given citizenship and has done media performances all over the world. The company that made the robot, Hanson Robotics, has touted Sophia as the future of artificial intelligence (AI). Robot scientists and philosophers have been more pessimistic about its capabilities, describing Sophia as a sophisticated puppet or chatbot. Looking behind the rhetoric about Sophia’s citizenship and intelligence and going beyond recent discussions on the moral status or legal personhood of AI robots, we analyse the performativity of Sophia from the perspective of what we call ‘political choreography’: drawing on phenomenological approaches to performance-oriented philosophy of technology, this paper proposes to interpret and discuss the world tour of Sophia as a political choreography that boosts the rise of the social robot market, rather than as a statement about robot citizenship or artificial intelligence. We argue that the media performances of the Sophia robot were choreographed to advance specific political interests. We illustrate our philosophical discussion with media material of the Sophia performance, which helps us to explore the mechanisms through which the media spectacle functions hand in hand with advancing the economic interests of technology industries and their governmental promoters. Using a phenomenological approach and attending to the movement of robots, we also criticize the notion of ‘embodied intelligence’ used in the context of social robotics and AI. In this way, we put the discussions about the robot’s rights or citizenship in the context of AI politics and economics….(More)”
Extending the framework of algorithmic regulation. The Uber case
Paper by Florian Eyert, Florian Irgmaier, and Lena Ulbricht: “In this article, we take forward recent initiatives to assess regulation based on contemporary computer technologies such as big data and artificial intelligence. In order to characterize current phenomena of regulation in the digital age, we build on Karen Yeung’s concept of “algorithmic regulation,” extending it by building bridges to the fields of quantification, classification, and evaluation research, as well as to science and technology studies. This allows us to develop a more fine‐grained conceptual framework that analyzes the three components of algorithmic regulation as representation, direction, and intervention and proposes subdimensions for each. Based on a case study of the algorithmic regulation of Uber drivers, we show the usefulness of the framework for assessing regulation in the digital age and as a starting point for critique and alternative models of algorithmic regulation….(More)”.
Four Principles to Make Data Tools Work Better for Kids and Families
Blog by the Annie E. Casey Foundation: “Advanced data analytics are deeply embedded in the operations of public and private institutions and shape the opportunities available to youth and families. Whether these tools benefit or harm communities depends on their design, use and oversight, according to a report from the Annie E. Casey Foundation.
Four Principles to Make Advanced Data Analytics Work for Children and Families examines the growing field of advanced data analytics and offers guidance to steer the use of big data in social programs and policy….
The Foundation report identifies four principles — complete with examples and recommendations — to help steer the growing field of data science in the right direction.
Four Principles for Data Tools
- Expand opportunity for children and families. Most established uses of advanced analytics in education, social services and criminal justice focus on problems facing youth and families. Promising uses of advanced analytics go beyond mitigating harm and help to identify so-called odds beaters and new opportunities for youth.
- Example: The Children’s Data Network at the University of Southern California is helping the state’s departments of education and social services explore why some students succeed despite negative experiences and what protective factors merit more investment.
- Recommendation: Government and its philanthropic partners need to test if novel data science applications can create new insights and when it’s best to apply them.
- Provide transparency and evidence. Advanced analytical tools must earn and maintain a social license to operate. The public has a right to know what decisions these tools are informing or automating, how they have been independently validated, and who is accountable for answering and addressing concerns about how they work.
- Recommendations: Local and state task forces can be excellent laboratories for testing how to engage youth and communities in discussions about advanced analytics applications and the policy frameworks needed to regulate their use. In addition, public and private funders should avoid supporting private algorithms whose design and performance are shielded by trade secrecy claims. Instead, they should fund and promote efforts to develop, evaluate and adapt transparent and effective models.
- Empower communities. The field of advanced data analytics often treats children and families as clients, patients and consumers. Put to better use, these same tools can help elucidate and reform the systems acting upon children and families. For this shift to occur, institutions must focus analyses and risk assessments on structural barriers to opportunity rather than individual profiles.
- Recommendation: In debates about the use of data science, greater investment is needed to amplify the voices of youth and their communities.
- Promote equitable outcomes. Useful advanced analytics tools should promote more equitable outcomes for historically disadvantaged groups. New investments in advanced analytics are only worthwhile if they aim to correct the well-documented bias embedded in existing models.
- Recommendations: Advanced analytical tools should only be introduced when they reduce the opportunity deficit for disadvantaged groups — a move that will take organizing and advocacy to establish and new policy development to institutionalize. Philanthropy and government also have roles to play in helping communities test and improve tools and examples that already exist….(More)”.