How can Mixed Reality and AI improve emergency medical care?


Springwise: “Mixed reality (MR) refers to technologies that create immersive computer-generated environments in which parts of the physical and virtual environment are combined. With potential applications that range from education and engineering to entertainment, the market for MR is forecast to record revenues of just under $25 billion by 2032. Now, in a ground-breaking partnership, Singapore-based company Mediwave is teaming up with Sri Lanka’s 1990 Suwa Seriya to deploy MR and artificial intelligence (AI) to create a fully connected ambulance.

1990 Suwa Seriya is Sri Lanka’s national pre-hospital emergency ambulance service, which boasts response times that surpass those of even some services in developed countries. The innovative ambulance it has deployed uses Mediwave’s integrated Emergency Response Suite, which combines the latest communications equipment with internet-of-things (IoT) and AR capabilities to enhance the efficiency of the emergency response ecosystem.

The connected ambulance ensures swift response times and digitises critical processes, while specialised care can be provided remotely through a Microsoft HoloLens. The technology enables Emergency Medical Technicians (EMTs) – staff who man ambulances in Sri Lanka – to connect with physicians at the Emergency Command and Control Centre. These physicians help the EMTs provide care during the so-called ‘golden hour’ of medical emergencies – the concept that rapid clinical investigation and care within 60 minutes of a traumatic injury is essential for a positive patient outcome…

Other applications of extended reality in the Springwise library include holograms that are used to train doctors, virtual environments for treating phobias, and an augmented reality contact lens…(More)”.

The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now


Book by Hilke Schellmann: “Based on exclusive information from whistleblowers, internal documents, and real-world test results, Emmy Award-winning Wall Street Journal contributor Hilke Schellmann delivers a shocking and illuminating exposé on the next civil rights issue of our time: how AI has already taken over the workplace and shapes our future.
 

Hilke Schellmann is an Emmy Award-winning investigative reporter, a Wall Street Journal and Guardian contributor, and a journalism professor at NYU. In The Algorithm, she investigates the rise of artificial intelligence (AI) in the world of work. AI is now being used to decide who has access to an education, who gets hired, who gets fired, and who receives a promotion. Drawing on exclusive information from whistleblowers, internal documents, and real-world tests, Schellmann discovers that many of the algorithms making high-stakes decisions are biased, racist, and do more harm than good. Algorithms are on the brink of dominating our lives and threaten our human future—if we don’t fight back.
 
Schellmann takes readers on a journalistic detective story, testing algorithms that have secretly analyzed job candidates’ facial expressions and tone of voice. She investigates algorithms that scan our online activity, including Twitter and LinkedIn, to construct personality profiles à la Cambridge Analytica. Her reporting reveals how employers track the location of their employees and the keystrokes they make, access everything on their screens, and, during meetings, analyze group discussions to diagnose problems in a team. Even universities are now using predictive analytics for admission offers and financial aid…(More)”

Charting the Emerging Geography of AI


Article by Bhaskar Chakravorti, Ajay Bhalla, and Ravi Shankar Chaturvedi: “Given the high stakes of this race, which countries are in the lead? Which are gaining on the leaders? How might this hierarchy shape the future of AI? Identifying AI-leading countries is not straightforward, as data, knowledge, algorithms, and models can, in principle, cross borders. Even the U.S.–China rivalry is complicated by the fact that AI researchers from the two countries cooperate — and more so than researchers from any other pair of countries. Open-source models are out there for everyone to use, with licensing accessible even for cutting-edge models. Nonetheless, AI development benefits from scale economies and, as a result, is geographically clustered as many significant inputs are concentrated and don’t cross borders that easily….

Rapidly accumulating pools of data in digital economies around the world are clearly one of the critical drivers of AI development. In 2019, we introduced the idea of a country’s “gross data product,” determined by the volume, complexity, and accessibility of data consumed, alongside the number of active internet users in that country. For this analysis, we recognized that gross data product is an essential asset for AI development — especially for generative AI, which requires massive, diverse datasets — and updated the 2019 analyses as a foundation, adding drivers that are critical for AI development overall. That essential data layer makes the index introduced here distinct from other indicators of AI “vibrancy” or measures of global investments, innovations, and implementation of AI…(More)”.
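The article does not spell out the exact weighting behind the index, but the drivers named above (volume, complexity, and accessibility of data consumed, plus the number of active internet users) lend themselves to a simple composite score. The sketch below is a minimal illustration under assumed equal weights and min-max normalization; the field names and country figures are hypothetical, not the authors’ data.

```python
from dataclasses import dataclass

@dataclass
class CountryData:
    name: str
    data_volume: float             # volume of data consumed (hypothetical units)
    data_complexity: float         # diversity/complexity of data consumed (assumed 0-1 score)
    data_accessibility: float      # how accessible that data is (assumed 0-1 score)
    active_internet_users: float   # active internet users, in millions (illustrative)

def min_max(values):
    """Scale a list of values to [0, 1]; a constant list maps to all zeros."""
    lo, hi = min(values), max(values)
    return [0.0 if hi == lo else (v - lo) / (hi - lo) for v in values]

def gross_data_product_index(countries, weights=(0.25, 0.25, 0.25, 0.25)):
    """Composite of the four normalized drivers; equal weights are an assumption."""
    columns = list(zip(*[
        (c.data_volume, c.data_complexity, c.data_accessibility, c.active_internet_users)
        for c in countries
    ]))
    normalized = [min_max(list(col)) for col in columns]
    return {
        c.name: sum(w * normalized[j][i] for j, w in enumerate(weights))
        for i, c in enumerate(countries)
    }

# Purely illustrative inputs:
countries = [
    CountryData("Country A", 9.1, 0.8, 0.6, 310),
    CountryData("Country B", 7.4, 0.9, 0.3, 900),
    CountryData("Country C", 2.2, 0.5, 0.9, 60),
]
print(gross_data_product_index(countries))
```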

New group aims to professionalize AI auditing


Article by Louise Matsakis: “The newly formed International Association of Algorithmic Auditors (IAAA) is hoping to professionalize the sector by creating a code of conduct for AI auditors, training curriculums, and eventually, a certification program.

Over the last few years, lawmakers and researchers have repeatedly proposed the same solution for regulating artificial intelligence: require independent audits. But the industry remains a wild west; there are only a handful of reputable AI auditing firms and no established guardrails for how they should conduct their work.

Yet several jurisdictions, including New York City, have passed laws requiring tech firms to commission independent audits. The idea is that AI firms should have to demonstrate their algorithms work as advertised, the same way companies need to prove they haven’t fudged their finances.

Since ChatGPT was released last year, a troubling norm has taken hold in the AI industry: that it’s perfectly acceptable to evaluate your own models in-house.

Leading startups like OpenAI and Anthropic regularly publish research about the AI systems they’re developing, including the potential risks. But they rarely commission independent audits, let alone publish the results, making it difficult for anyone to know what’s really happening under the hood…(More)”

Considerations for Governing Open Foundation Models


Brief by Rishi Bommasani et al: “Foundation models (e.g., GPT-4, Llama 2) are at the epicenter of AI, driving technological innovation and billions in investment. This paradigm shift has sparked widespread demands for regulation. Animated by factors as diverse as declining transparency and unsafe labor practices, limited protections for copyright and creative work, as well as market concentration and productivity gains, many have called for policymakers to take action.

Central to the debate about how to regulate foundation models is the process by which foundation models are released. Some foundation models like Google DeepMind’s Flamingo are fully closed, meaning they are available only to the model developer; others, such as OpenAI’s GPT-4, are limited access, available to the public but only as a black box; and still others, such as Meta’s Llama 2, are more open, with widely available model weights enabling downstream modification and scrutiny. As of August 2023, the U.K.’s Competition and Markets Authority documents that the most common release approach for publicly disclosed models is open release, based on data from Stanford’s Ecosystem Graphs. Developers like Meta, Stability AI, Hugging Face, Mistral, Together AI, and EleutherAI frequently release models openly.
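The release gradient described in this paragraph is, in effect, a three-level taxonomy. As a minimal sketch (using only the categories and example models named above, not anything from the brief’s own materials), it might be encoded like this:

```python
from enum import Enum

class ReleaseType(Enum):
    FULLY_CLOSED = "available only to the model developer"
    LIMITED_ACCESS = "available to the public, but only as a black box"
    OPEN_WEIGHTS = "model weights widely available for downstream modification and scrutiny"

# Example models named in the brief:
RELEASE_EXAMPLES = {
    "Flamingo (Google DeepMind)": ReleaseType.FULLY_CLOSED,
    "GPT-4 (OpenAI)": ReleaseType.LIMITED_ACCESS,
    "Llama 2 (Meta)": ReleaseType.OPEN_WEIGHTS,
}

for model, release in RELEASE_EXAMPLES.items():
    print(f"{model}: {release.name} ({release.value})")
```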

Governments around the world are issuing policy related to foundation models. As part of these efforts, open foundation models have garnered significant attention: The recent U.S. Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence tasks the National Telecommunications and Information Administration with preparing a report on open foundation models for the president. In the EU, open foundation models trained with fewer than 10^25 floating-point operations (a measure of the amount of compute expended) appear to be exempted under the recently negotiated AI Act. The U.K.’s AI Safety Institute will “consider open-source systems as well as those deployed with various forms of access controls” as part of its initial priorities. Beyond governments, the Partnership on AI has introduced guidelines for the safe deployment of foundation models, recommending against open release for the most capable foundation models.
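For a sense of scale, a widely used rule of thumb estimates training compute as roughly 6 FLOPs per model parameter per training token. The snippet below applies that approximation (an assumption, not the AI Act’s methodology) to a Llama-2-70B-scale model, taking 70 billion parameters and roughly 2 trillion training tokens as illustrative figures.

```python
def estimate_training_flops(parameters: float, tokens: float) -> float:
    """Rule-of-thumb estimate: ~6 FLOPs per parameter per training token."""
    return 6 * parameters * tokens

EU_AI_ACT_THRESHOLD = 1e25  # floating-point operations

# Llama-2-70B-scale example (illustrative figures)
flops = estimate_training_flops(parameters=70e9, tokens=2e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")           # ~8.4e+23
print("Below the 10^25 threshold:", flops < EU_AI_ACT_THRESHOLD)  # True
```

By this estimate, such a model would fall well under the exemption threshold, while a model trained with roughly an order of magnitude more parameters and data would not.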

Policy on foundation models should support the open foundation model ecosystem, while providing resources to monitor risks and create safeguards to address harms. Open foundation models provide significant benefits to society by promoting competition, accelerating innovation, and distributing power. For example, small businesses hoping to build generative AI applications could choose among a variety of open foundation models that offer different capabilities and are often less expensive than closed alternatives. Further, open models are marked by greater transparency and, thereby, accountability. When a model is released with its training data, independent third parties can better assess the model’s capabilities and risks…(More)”.

How Moral Can A.I. Really Be?


Article by Paul Bloom: “…The problem isn’t just that people do terrible things. It’s that people do terrible things that they consider morally good. In their 2014 book “Virtuous Violence,” the anthropologist Alan Fiske and the psychologist Tage Rai argue that violence is often itself a warped expression of morality. “People are impelled to violence when they feel that to regulate certain social relationships, imposing suffering or death is necessary, natural, legitimate, desirable, condoned, admired, and ethically gratifying,” they write. Their examples include suicide bombings, honor killings, and war. The philosopher Kate Manne, in her book “Down Girl,” makes a similar point about misogynistic violence, arguing that it’s partially rooted in moralistic feelings about women’s “proper” role in society. Are we sure we want A.I.s to be guided by our idea of morality?

Schwitzgebel suspects that A.I. alignment is the wrong paradigm. “What we should want, probably, is not that superintelligent AI align with our mixed-up, messy, and sometimes crappy values but instead that superintelligent AI have ethically good values,” he writes. Perhaps an A.I. could help to teach us new values, rather than absorbing old ones. Stewart, the former graduate student, argued that if researchers treat L.L.M.s as minds and study them psychologically, future A.I. systems could help humans discover moral truths. He imagined some sort of A.I. God—a perfect combination of all the great moral minds, from Buddha to Jesus. A being that’s better than us.

Would humans ever live by values that are supposed to be superior to our own? Perhaps we’ll listen when a super-intelligent agent tells us that we’re wrong about the facts—“this plan will never work; this alternative has a better chance.” But who knows how we’ll respond if one tells us, “You think this plan is right, but it’s actually wrong.” How would you feel if your self-driving car tried to save animals by refusing to take you to a steakhouse? Would a government be happy with a military A.I. that refuses to wage wars it considers unjust? If an A.I. pushed us to prioritize the interests of others over our own, we might ignore it; if it forced us to do something that we consider plainly wrong, we would consider its morality arbitrary and cruel, to the point of being immoral. Perhaps we would accept such perverse demands from God, but we are unlikely to give this sort of deference to our own creations. We want alignment with our own values, then, not because they are the morally best ones, but because they are ours…(More)”

Artificial Intelligence and the City


Book edited by Federico Cugurullo, Federico Caprotti, Matthew Cook, Andrew Karvonen, Pauline McGuirk, and Simon Marvin: “This book explores in theory and practice how artificial intelligence (AI) intersects with and alters the city. Drawing upon a range of urban disciplines and case studies, the chapters reveal the multitude of repercussions that AI is having on urban society, urban infrastructure, urban governance, urban planning and urban sustainability.

Contributors also examine how the city, far from being a passive recipient of new technologies, is influencing and reframing AI through subtle processes of co-constitution. The book advances three main contributions and arguments:

  • First, it provides empirical evidence of the emergence of a post-smart trajectory for cities in which new material and decision-making capabilities are being assembled through multiple AIs.
  • Second, it stresses the importance of understanding the mutually constitutive relations between the new experiences enabled by AI technology and the urban context.
  • Third, it engages with the concepts required to clarify the opaque relations that exist between AI and the city, as well as how to make sense of these relations from a theoretical perspective…(More)”.

Steering Responsible AI: A Case for Algorithmic Pluralism


Paper by Stefaan G. Verhulst: “In this paper, I examine questions surrounding AI neutrality through the prism of existing literature and scholarship about mediation and media pluralism. Such traditions, I argue, provide a valuable theoretical framework for how we should approach the (likely) impending era of AI mediation. In particular, I suggest examining further the notion of algorithmic pluralism. Contrasting this notion with the dominant idea of algorithmic transparency, I seek to describe what algorithmic pluralism may be, and present both its opportunities and challenges. Implemented thoughtfully and responsibly, I argue, algorithmic or AI pluralism has the potential to sustain the diversity, multiplicity, and inclusiveness that are so vital to democracy…(More)”.

Generative AI and Policymaking for the New Frontier


Essay by Beth Noveck: “…Embracing the same responsible experimentation approach taken in Boston and New Jersey and expanding on the examples in those interim policies, this November the state of California issued an executive order and a lengthy but clearly written report, enumerating potential benefits from the use of generative AI.

These include:

  1. Sentiment Analysis — Using generative AI (GenAI) to analyze public feedback on state policies and services.
  2. Summarizing Meetings — GenAI can find the key topics, conclusions, action items and insights.
  3. Improving Benefits Uptake — AI can help identify public program participants who would benefit from additional outreach. GenAI can also identify groups that are disproportionately not accessing services.
  4. Translation — Generative AI can help translate government forms and websites into multiple languages.
  5. Accessibility — GenAI can be used to translate materials, especially educational materials into formats like audio, large print or Braille or to add captions.
  6. Cybersecurity — GenAI models can analyze data to detect and respond to cyberattacks faster and safeguard public infrastructure.
  7. Updating Legacy Technology — Because it can analyze and generate computer code, generative AI can accelerate the upgrading of old computer systems.
  8. Digitizing Services — GenAI can help speed up the creation of new technology. And with GenAI, anyone can create computer code, enabling even nonprogrammers to develop websites and software.
  9. Optimizing Routing — GenAI can analyze traffic patterns and ride requests to improve efficiency of state-managed transportation fleets, such as buses, waste collection trucks or maintenance vehicles.
  10. Improving Sustainability — GenAI can be applied to optimize resource allocation and enhance operational efficiency. GenAI simulation tools could, for example, “model the carbon footprint, water usage and other environmental impacts of major infrastructure projects.”

Because generative AI tools can both create and analyze content, these 10 are just a small subset of the many potential applications of generative AI in governing…(More)”.
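As an illustration of the first item on the list above (sentiment analysis of public feedback), the sketch below shows one way a state team might prompt a generative model to label comments. It uses the OpenAI Python client purely as an example; the model name, prompt wording, and sample comments are assumptions, not part of the California report.

```python
# Minimal sketch: classify public feedback with a generative model.
# Assumes the OpenAI Python SDK (>=1.0) and an OPENAI_API_KEY in the environment;
# any chat-completion-capable model could be substituted.
from openai import OpenAI

client = OpenAI()

def classify_feedback(comment: str, model: str = "gpt-4o-mini") -> str:
    """Return 'positive', 'negative', or 'mixed' for one public comment."""
    prompt = (
        "Classify the sentiment of this public comment about a state service "
        "as positive, negative, or mixed. Reply with a single word.\n\n"
        f"Comment: {comment}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()

# Hypothetical feedback pulled from a public-comment portal:
comments = [
    "Renewing my license online took five minutes. Great improvement!",
    "The benefits portal crashed twice before I could finish my application.",
]
for c in comments:
    print(classify_feedback(c), "-", c)
```

In practice, the same loop could be pointed at any comparable model, with the one-word labels aggregated into a simple tally of sentiment across a public-comment docket.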

Urban Artificial Intelligence: From Real-world Observations to a Paradigm-Shifting Concept


Blog by Hubert Beroche: “Cities are facing unprecedented challenges. The figures are well known: while occupying only 2% of the earth’s surface, urban settlements host more than 50% of the global population and are responsible for 70% of greenhouse gas emissions. While concentrating most capital and human wealth, they are also places of systemic inequalities (Nelson, 2023), exacerbating and materializing social imbalances. In the meantime, cities have fewer and fewer resources to face those tensions. Increasing environmental constraints, combined with shrinking public budgets, are putting pressure on cities’ capacities. More than ever, urban stakeholders have to do more with less.

In this context, Artificial Intelligence has usually been seen as a much-welcomed technology. This technology can be defined as machines’ ability to perform cognitive functions, achieved since 2012 mainly through learning algorithms. First embedded in heavy top-down Smart City projects, AI applications in cities have gradually proliferated under the impetus of various stakeholders. Today’s cities are home to numerous AIs, owned and used by multiple stakeholders to serve different, sometimes divergent, interests.

The diversity of urban AIs is well illustrated in our project co-produced with Cornell Tech: “The Future of Urban AI”. This graph represents different urban AI trends based on The Future of UrbanTech Horizon Scan. Each colored dot represents a major urban tech/urban AI trend, with its ramifications. Some of these trends are opposed but still cohabiting (e.g., “Dark Plans” and “New Screen Deal”)…(More)”.