Paper by Soenke Ehret: “Citizens’ attitudes concerning aspects of AI such as transparency, privacy, and discrimination have received considerable attention. However, it is an open question to what extent economic consequences affect preferences for public policies governing AI. When does the public demand imposing restrictions on – or even prohibiting – emerging AI technologies? Do average citizens’ preferences depend causally on normative and economic concerns or only on one of these causes? If both, how might economic risks and opportunities interact with assessments based on normative factors? And to what extent does the balance between the two kinds of concerns vary by context? I answer these questions using a comparative conjoint survey experiment conducted in Germany, the United Kingdom, India, Chile, and China. The data analysis suggests strong effects regarding AI systems’ economic and normative attributes. Moreover, I find considerable cross-country variation in normative preferences regarding the prohibition of AI systems vis-a-vis economic concerns…(More)”.
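For readers unfamiliar with the method, here is a minimal sketch of how conjoint data of this kind is typically analysed, using simulated data in Python. The attributes, levels, and effect sizes are illustrative assumptions, not the paper's actual design or results:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated illustration of a conjoint analysis: regress the choice outcome
# ("prohibit this AI system") on randomly assigned profile attributes to
# recover average marginal component effects (AMCEs). Attributes, levels,
# and effect sizes are invented, not the paper's design or results.
rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "econ_impact": rng.choice(["job gains", "neutral", "job losses"], n),
    "discrimination_risk": rng.choice(["low", "high"], n),
})
# Simulated respondents: economic harm and normative risk both raise support for a ban.
p = 0.3 + 0.2 * (df["econ_impact"] == "job losses") + 0.25 * (df["discrimination_risk"] == "high")
df["prohibit"] = rng.binomial(1, p)

# Linear probability model; coefficients estimate AMCEs relative to the baseline levels.
model = smf.ols("prohibit ~ C(econ_impact) + C(discrimination_risk)", data=df).fit()
print(model.params)
```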
Artificial intelligence was supposed to transform health care. It hasn’t.
Article by Ben Leonard and Ruth Reader: “Artificial intelligence is spreading into health care, often as software or a computer program capable of learning from large amounts of data and making predictions to guide care or help patients.
Investors see health care’s future as inextricably linked with artificial intelligence. That’s obvious from the cash pouring into AI-enabled digital health startups, including more than $3 billion in the first half of 2022 alone and nearly $10 billion in 2021, according to a Rock Health investment analysis commissioned by POLITICO.
And no wonder, considering the bold predictions technologists have made. At a conference in 2016, Geoffrey Hinton, British cognitive psychologist and “godfather” of AI, said radiologists would soon go the way of typesetters and bank tellers: “People should stop training radiologists now. It’s just completely obvious that, within five years, deep learning is going to do better.”
But more than five years since Hinton’s forecast, radiologists are still training to read image scans. Instead of replacing doctors, health system administrators now see AI as a tool clinicians will use to improve everything from their diagnoses to billing practices. AI hasn’t lived up to the hype, medical experts said, because health systems’ infrastructure isn’t ready for it yet. And the government is just beginning to grapple with its regulatory role.
“Companies come in promising the world and often don’t deliver,” said Bob Wachter, head of the department of medicine at the University of California, San Francisco. “When I look for examples of … true AI and machine learning that’s really making a difference, they’re pretty few and far between. It’s pretty underwhelming.”
Administrators say algorithms — the software that processes data — from outside companies don’t always work as advertised because each health system has its own technological framework. So hospitals are building out engineering teams and developing artificial intelligence and other technology tailored to their own needs.
But it’s slow going. Research based on job postings shows health care behind every industry except construction in adopting AI…(More)”.
Designing Human-Centric AI Experiences
Book by Akshay Kore: “User experience (UX) design practices have seen a fundamental shift as more and more software products incorporate machine learning (ML) components and artificial intelligence (AI) algorithms at their core. This book probes UX design’s role in making technologies inclusive and in enabling user collaboration with AI.
AI/ML-based systems have changed traditional UX design. Instead of programming a method to perform a specific action, the creators of these systems provide data and nurture the systems to curate outcomes based on inputs. These systems are dynamic: the AI changes over time, yet in many cases the user experience does not adapt to this dynamic nature.
Applied UX Design for Artificial Intelligence will explore this problem, addressing the challenges and opportunities in UX design for AI/ML systems, looking at best practices for designers, managers, and product creators, and showcasing how individuals from non-technical backgrounds can collaborate effectively with AI and machine learning teams…(More)”.
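To make the blurb's contrast concrete, a minimal sketch (an invented example, not the book's): a traditional component executes an explicit, hand-written rule, while an ML component's behaviour is induced from training data and shifts as that data shifts.

```python
from sklearn.linear_model import LogisticRegression

# Illustrative contrast (invented example, not from the book): a traditional
# component executes an explicit, hand-written rule, while an ML component's
# behaviour is induced from training data and shifts as that data shifts.

def spam_rule(subject: str) -> bool:
    # Traditional UX target: fixed, inspectable, deterministic logic.
    return "free money" in subject.lower()

# ML component: behaviour comes from curated examples, not hand-written rules.
X = [[0, 1], [1, 0], [1, 1], [0, 0]]   # toy feature vectors
y = [1, 0, 1, 0]                       # labels curated by the system's creators
model = LogisticRegression().fit(X, y)
print(spam_rule("Free money inside"), model.predict([[0, 1]]))
```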
Algorithms for Decision Making
Book by Mykel J. Kochenderfer, Tim A. Wheeler and Kyle H. Wray: “Automated decision-making systems or decision-support systems—used in applications that range from aircraft collision avoidance to breast cancer screening—must be designed to account for various sources of uncertainty while carefully balancing multiple objectives. This textbook provides a broad introduction to algorithms for decision making under uncertainty, covering the underlying mathematical problem formulations and the algorithms for solving them.
The book first addresses the problem of reasoning about uncertainty and objectives in simple decisions at a single point in time, and then turns to sequential decision problems in stochastic environments where the outcomes of our actions are uncertain. It goes on to address model uncertainty, when we do not start with a known model and must learn how to act through interaction with the environment; state uncertainty, in which we do not know the current state of the environment due to imperfect perceptual information; and decision contexts involving multiple agents. The book focuses primarily on planning and reinforcement learning, although some of the techniques presented draw on elements of supervised learning and optimization. Algorithms are implemented in the Julia programming language. Figures, examples, and exercises convey the intuition behind the various approaches presented…(More)”
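To give a flavour of the sequential decision problems the book formalises, here is a minimal value-iteration sketch for a toy Markov decision process. It is written in Python for illustration; the book's own implementations are in Julia, and this toy model is not drawn from the text:

```python
import numpy as np

def value_iteration(T, R, gamma=0.95, tol=1e-6):
    """Solve a finite MDP given T[s, a, s2] (transition probabilities)
    and R[s, a] (expected rewards); returns optimal values and policy."""
    n_states, n_actions, _ = T.shape
    V = np.zeros(n_states)
    while True:
        # Bellman backup: expected value of each action, then greedy max.
        Q = R + gamma * np.einsum("sat,t->sa", T, V)
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Toy two-state, two-action problem: action 1 moves toward the rewarding state.
T = np.array([[[1.0, 0.0], [0.2, 0.8]],
              [[0.8, 0.2], [0.0, 1.0]]])
R = np.array([[0.0, 0.0],
              [1.0, 1.0]])
values, policy = value_iteration(T, R)
print(values, policy)
```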
AI-powered cameras to enforce bus lanes
Article by Chris Teale: “New York’s Metropolitan Transportation Authority will use an automated camera system to ensure bus lanes in New York City are free from illegally parked vehicles.
The MTA is partnering with Hayden AI to deploy Automated Bus Lane Enforcement camera systems, mounted on the interior of the windshield and powered by artificial intelligence, to 300 buses. The agency has the option to add the cameras to 200 more buses if it chooses.
Chris Carson, Hayden AI’s CEO and co-founder, said when the cameras detect an encroachment on a bus lane, they use real-time automated license plate recognition and edge computing to compile a packet of evidence that includes the time, date and location of the offense, as well as a brief video that shows the violator’s license plate.
That information is encrypted and sent securely to the cloud, where MTA officials can access and analyze it for violations. If there is no encroachment on a bus lane, the cameras do not record anything…
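As a rough illustration of the evidence packet the article describes, here is a hypothetical sketch in Python. The field names and structure are assumptions for illustration only, not Hayden AI's actual schema; the encryption and secure cloud upload steps are elided.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical sketch of the "evidence packet" described above. Field names
# and structure are illustrative assumptions, not Hayden AI's actual schema;
# the encryption and secure cloud upload steps are elided.
@dataclass
class EvidencePacket:
    plate: str        # read via automated license plate recognition
    timestamp: str    # ISO-8601 time and date of the detected violation
    latitude: float   # GPS position of the offense
    longitude: float
    video_ref: str    # reference to the short clip showing the plate

    def to_json(self) -> str:
        """Serialize the packet prior to encryption and upload for MTA review."""
        return json.dumps(asdict(self))

packet = EvidencePacket(
    plate="ABC1234",
    timestamp=datetime.now(timezone.utc).isoformat(),
    latitude=40.7527,
    longitude=-73.9772,
    video_ref="clip_0001.mp4",
)
print(packet.to_json())
```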
An MTA spokesperson said the agency will also use data from the system to identify locations that have the highest instances of vehicles blocking bus lanes. New York City has 140 miles of bus lanes and has plans to build 150 more miles in the next four years, but congestion and lane violations by other road users slow the buses down. The city already uses cameras and police patrols to attempt to enforce proper bus lane use…(More)”.
AI ethics: the case for including animals
Paper by Peter Singer & Yip Fai Tse: “The ethics of artificial intelligence, or AI ethics, is a rapidly growing field, and rightly so. While the range of issues and groups of stakeholders concerned by the field of AI ethics is expanding, with speculation about whether it extends even to the machines themselves, there is a group of sentient beings who are also affected by AI, but are rarely mentioned within the field of AI ethics—the nonhuman animals. This paper seeks to explore the kinds of impact AI has on nonhuman animals, the severity of these impacts, and their moral implications. We hope that this paper will facilitate the development of a new field of philosophical and technical research regarding the impacts of AI on animals, namely, the ethics of AI as it affects nonhuman animals…(More)”.
Towards Human-Centric Algorithmic Governance
Blog by Zeynep Engin: “It is no longer news to say that the capabilities afforded by Data Science, AI and their associated technologies (such as Digital Twins, Smart Cities, Ledger Systems and other platforms) are poised to revolutionise governance, radically transforming the way democratic processes work, citizen services are provided, and justice is delivered. Emerging applications range from the way election campaigns are run and how crises at population level are managed (e.g. pandemics) to everyday operations like simple parking enforcement and traffic management, and to decisions at critical individual junctures, such as hiring or sentencing decisions. What it means to be a ‘human’ is also a hot topic for both scholarly and everyday discussions, since our societal interactions and values are also shifting fast in an increasingly digital and data-driven world.
As a millennial who grew up in a ‘developing’ economy in the ’90s and later established a cross-sector career in a ‘developed’ economy in the fields of data for policy and algorithmic governance, I believe I can credibly claim a pertinent, hands-on experience of the transformation from a fully analogue world into a largely digital one. I started off trying hard to find sufficient printed information to refer to in my term papers at secondary school, and gradually adapted to the opposite challenge of extracting useful information from the practically unlimited resources available online today. The world has become a lot more connected: communities are formed online, goods and services are customised to individual tastes and preferences, and work and education are increasingly hybrid, reducing dependency on physical environment, geography and time zones. Despite all these developments in nearly every aspect of our lives, one thing that has persisted in the face of this change is the nature of collective decision-making, particularly at the civic/governmental level. It still comprises the same election cycles with more or less similar political incentives and working practices, and the same type of politicians, bureaucracies, hierarchies and networks making and executing important (and often suboptimal) decisions on behalf of the public. Unelected private sector stakeholders in the meantime are quick to fill the growing gap — they increasingly make policies that affect large populations and define the public discourse, to primarily maximise their profit behind their IP protection walls…(More)”.
The UK Algorithmic Transparency Standard: A Qualitative Analysis of Police Perspectives
Paper by Marion Oswald, Luke Chambers, Ellen P. Goodman, Pam Ugwudike, and Miri Zilka: “1. The UK Government’s draft ‘Algorithmic Transparency Standard’ is intended to provide a standardised way for public bodies and government departments to provide information about how algorithmic tools are being used to support decisions. The research discussed in this report was conducted in parallel to the piloting of the Standard by the Cabinet Office and the Centre for Data Ethics and Innovation.
2. We conducted semi-structured interviews with respondents from across UK policing and commercial bodies involved in policing technologies. Our aim was to explore the implications for police forces of participation in the Standard, to identify rewards, risks, and challenges for the police, as well as areas where the Standard could be improved, and thereby to contribute to the exploration of policy options for expanding participation in the Standard.
3. Algorithmic transparency is both achievable for policing and could bring significant rewards. A key reward of police participation in the Standard is that it provides the opportunity to demonstrate proficient implementation of technology-driven policing, thus enhancing earned trust. Research participants highlighted the public good that could result from the considered use of algorithms.
4. Participants noted, however, a risk of misperception of the dangers of policing technology, especially if use of algorithmic tools was not appropriately compared to the status quo and current methods…(More)”.
Artificial Intelligence and Democracy
Open Access Book by Jérôme Duberry on “Risks and Promises of AI-Mediated Citizen–Government Relations….What role does artificial intelligence (AI) play in citizen–government relations? Who is using this technology and for what purpose? How does the use of AI influence power relations in policy-making, and the trust of citizens in democratic institutions? These questions led to the writing of this book. While the early developments of e-democracy and e-participation can be traced back to the end of the 20th century, the growing adoption of smartphones and mobile applications by citizens, and the increased capacity of public administrations to analyze big data, have enabled the emergence of new approaches. Online voting, online opinion polls, online town hall meetings, and online discussion lists of the 1990s and early 2000s have evolved into new generations of policy-making tactics and tools, enabled by the most recent developments in information and communication technologies (ICTs) (Janssen & Helbig, 2018). Online platforms, advanced simulation websites, and serious gaming tools are progressively used on a larger scale to engage citizens, collect their opinions, and involve them in policy processes…(More)”.
First regulatory sandbox on Artificial Intelligence presented
European Commission: “The sandbox aims to bring competent authorities close to companies that develop AI to define best practices that will guide the implementation of the future European Commission’s AI Regulation (Artificial Intelligence Act). This would also ensure that the legislation can be implemented in two years.
The regulatory sandbox is a way to connect innovators and regulators and provide a controlled environment for them to cooperate. Such a collaboration between regulators and innovators should facilitate the development, testing and validation of innovative AI systems with a view to ensuring compliance with the requirements of the AI Regulation.
While the entire ecosystem is preparing for the AI Act, this sandbox initiative is expected to generate easy-to-follow, future-proof best practice guidelines and other supporting materials. Such outputs are expected to facilitate the implementation of rules by companies, in particular SMEs and start-ups.
This sandbox pilot initiated by the Spanish government will look at operationalising the requirements of the future AI regulation as well as other features such as conformity assessments or post-market activities.
Thanks to this pilot experience, obligations for AI system providers (the participants in the sandbox) and how to implement them will be documented and systematised in implementation guidelines covering good practice and lessons learnt. The deliverables will also include monitoring and follow-up methods that are useful for the national supervisory authorities in charge of implementing the supervisory mechanisms that the regulation establishes.
In order to strengthen the cooperation of all possible actors at the European level, this exercise will remain open to other Member States that will be able to follow or join the pilot in what could potentially become a pan-European AI regulatory sandbox. Cooperation at EU level with other Member States will be pursued within the framework of the Expert Group on AI and Digitalisation of Businesses set up by the Commission.
The financing of this sandbox is drawn from the Recovery and Resilience Funds assigned to the Spanish Government, through the Spanish Recovery, Transformation and Resilience Plan, and in particular through the Spanish National AI Strategy (Component 16 of the Plan). The overall budget for the pilot will be approximately EUR 4.3 million over roughly three years…(More)”.