Report by Justin Kollar, Niko McGlashan, and Sarah Williams: “The use of data in urban development is controversial because of the numerous examples showing its use to reinforce inequality rather than inclusion. From the development of Home Owners’ Loan Corporation (HOLC) maps, which excluded many minority communities from mortgages, to zoning laws used to reinforce structural racism, data has been used by those in power to elevate some while further marginalizing others. Yet data can achieve the opposite outcome by exposing inequity, encouraging dialogue and debate, making developers and cities more accountable, and ultimately creating new digital tools to make development processes more inclusive. Using data for action requires that we build teams to ask and answer the right questions, collect the right data, analyze the data ingeniously, ground-truth the results with communities, and share the insights with broader groups so they can take informed action. This paper looks at the development of two recent approaches in New York and Seattle to measure equity in urban development. We reflect on these approaches through the lens of data action principles (Williams 2020). Such reflections can highlight the challenges and opportunities for furthering the measurement and achievement of equitable development by other groups, such as real estate developers and community organizations, who seek to create positive social impact through their activities…(More)”.
EU leadership in trustworthy AI: Guardrails, Innovation & Governance
Article by Thierry Breton: “As mentioned in President von der Leyen’s State of the Union letter of intent, Europe should lead global efforts on artificial intelligence, guiding innovation, setting guardrails and developing global governance.
First, on innovation: we will launch the EU AI Start-Up Initiative, leveraging one of Europe’s biggest assets: its public high-performance computing infrastructure. We will identify the most promising European start-ups in AI and give them access to our supercomputing capacity.
I have said it before: AI is a combination of data, computing and algorithms. To train and fine-tune the most advanced foundation models, developers need large amounts of computing power.
Europe is a world leader in supercomputing through its European High-Performance Computing Joint Undertaking (EuroHPC). Soon, Europe will have its first exascale supercomputers, JUPITER in Germany and JULES VERNE in France (able to perform a quintillion, that is, a billion billion, calculations per second), in addition to various existing supercomputers (such as LEONARDO in Italy and LUMI in Finland).
Access to Europe’s supercomputing infrastructure will help start-ups bring down the training time for their newest AI models from months or years to days or weeks. And it will help them lead the development and scale-up of AI responsibly and in line with European values.
This goes together with our broader efforts to support AI innovation across the value chain – from AI start-ups to all those businesses using AI technologies in their industrial ecosystems. This includes our Testing and Experimentation Facilities for AI (launched in January 2023), our Digital Innovation Hubs, the development of regulatory sandboxes under the AI Act, our support for the European Partnership on AI, Data and Robotics and the cutting-edge research supported by Horizon Europe.
Second, guardrails for AI: Europe has pioneered clear rules for AI systems through the EU AI Act, the world’s first comprehensive regulatory framework for AI. My teams are working closely with the Parliament and Council to support the swift adoption of the EU AI Act. This will give citizens and businesses confidence in AI developed in Europe, knowing that it is safe and respects fundamental rights and European values. And it serves as an inspiration for global rules and principles for trustworthy AI.
As reiterated by President von der Leyen, we are developing an AI Pact that will convene AI companies, help them prepare for the implementation of the EU AI Act and encourage them to commit voluntarily to applying the principles of the Act before its date of applicability.
Third, governance: with the AI Act and the Coordinated Plan on AI, we are working towards a governance framework for AI, which can be a centre of expertise, in particular on large foundation models, and promote cooperation, not only between Member States, but also internationally…(More)”
AI often mangles African languages. Local scientists and volunteers are taking it back to school
Article by Sandeep Ravindran: “Imagine joyfully announcing to your Facebook friends that your wife gave birth, and having Facebook automatically translate your words to “my prostitute gave birth.” Shamsuddeen Hassan Muhammad, a computer science Ph.D. student at the University of Porto, says that’s what happened to a friend when Facebook’s English translation mangled the nativity news he shared in his native language, Hausa.
Such errors in artificial intelligence (AI) translation are common with African languages. AI may be increasingly ubiquitous, but if you’re from the Global South, it probably doesn’t speak your language.
That means Google Translate isn’t much help, and speech recognition tools such as Siri or Alexa can’t understand you. All of these services rely on a field of AI known as natural language processing (NLP), which allows AI to “understand” a language. The overwhelming majority of the world’s 7,000 or so languages lack data, tools, or techniques for NLP, making them “low-resourced,” in contrast with a handful of “high-resourced” languages such as English, French, German, Spanish, and Chinese.
Hausa is the second most spoken African language, with an estimated 60 million to 80 million speakers, and it’s just one of more than 2000 African languages that are mostly absent from AI research and products. The few products available don’t work as well as those for English, notes Graham Neubig, an NLP researcher at Carnegie Mellon University. “It’s not the people who speak the languages making the technology.” More often the technology simply doesn’t exist. “For example, now you cannot talk to Siri in Hausa, because there is no data set to train Siri,” Muhammad says.
He is trying to fill that gap with a project he co-founded called HausaNLP, one of several launched within the past few years to develop AI tools for African languages…(More)”.
The Adoption and Implementation of Artificial Intelligence Chatbots in Public Organizations: Evidence from U.S. State Governments
Paper by Tzuhao Chen, Mila Gascó-Hernandez, and Marc Esteve: “Although the use of artificial intelligence (AI) chatbots in public organizations has increased in recent years, three crucial gaps remain unresolved. First, little empirical evidence has been produced to examine the deployment of chatbots in government contexts. Second, existing research does not distinguish clearly between the drivers of adoption and the determinants of success and, therefore, between the stages of adoption and implementation. Third, most current research does not use a multidimensional perspective to understand the adoption and implementation of AI in government organizations. Our study addresses these gaps by exploring the following question: what determinants facilitate or impede the adoption and implementation of chatbots in the public sector? We answer this question by analyzing 22 state agencies across the U.S.A. that use chatbots. Our analysis identifies ease of use and relative advantage of chatbots, leadership and innovative culture, external shock, and individual past experiences as the main drivers of the decisions to adopt chatbots. Further, it shows that different types of determinants (such as knowledge-base creation and maintenance, technology skills and system crashes, human and financial resources, cross-agency interaction and communication, confidentiality and safety rules and regulations, citizens’ expectations, and the COVID-19 crisis) affect the adoption and implementation processes differently and, therefore, shape the success of chatbots in different ways. Future research could focus on the interaction among different types of determinants for both adoption and implementation, as well as on the role of specific stakeholders, such as IT vendors…(More)”.
Social approach to the transition to smart cities
Report by the European Parliamentary Research Services (EPRS): “This study explores the main impacts of the smart city transition on our cities and, in particular, on citizens and territories. In our research, we start from an analysis of smart city use cases to identify a set of key challenges, and elaborate on the main accelerating factors that may amplify or contain their impact on particular groups and territories. We then present an account of best practices that can help mitigate or prevent such challenges, and make some general observations on their scalability and replicability. Finally, based on an analysis of EU regulatory frameworks and a mapping of current or upcoming initiatives in the domain of smart city innovation, capacity-building and knowledge capitalisation, we propose six policy options to inform future policy-making at EU level to support a more inclusive smart city transition…(More)”.
Who Wrote This? How AI and the Lure of Efficiency Threaten Human Writing
Book by Naomi S. Baron: “Would you read this book if a computer wrote it? Would you even know? And why would it matter?
Today’s eerily impressive artificial intelligence writing tools present us with a crucial challenge: As writers, do we unthinkingly adopt AI’s time-saving advantages or do we stop to weigh what we gain and lose when heeding its siren call? To understand how AI is redefining what it means to write and think, linguist and educator Naomi S. Baron leads us on a journey connecting the dots between human literacy and today’s technology. From nineteenth-century lessons in composition, to mathematician Alan Turing’s work creating a machine for deciphering wartime messages, to contemporary engines like ChatGPT, Baron gives readers a spirited overview of the emergence of both literacy and AI, and a glimpse of their possible future. As the technology becomes increasingly sophisticated and fluent, it’s tempting to take the easy way out and let AI do the work for us. Baron cautions that such efficiency isn’t always in our interest. As AI plies us with suggestions or full-blown text, we risk losing not just our technical skills but the power of writing as a springboard for personal reflection and unique expression.
Funny, informed, and conversational, Who Wrote This? urges us as individuals and as communities to make conscious choices about the extent to which we collaborate with AI. The technology is here to stay. Baron shows us how to work with AI and how to spot where it risks diminishing the valuable cognitive and social benefits of being literate…(More)”.
Computing the Climate: How We Know What We Know About Climate Change
Book by Steve M. Easterbrook: “How do we know that climate change is an emergency? How did the scientific community reach this conclusion all but unanimously, and what tools did they use to do it? This book tells the story of climate models, tracing their history from nineteenth-century calculations on the effects of greenhouse gases, to modern Earth system models that integrate the atmosphere, the oceans, and the land using the full resources of today’s most powerful supercomputers. Drawing on the author’s extensive visits to the world’s top climate research labs, this accessible, non-technical book shows how computer models help to build a more complete picture of Earth’s climate system. ‘Computing the Climate’ is ideal for anyone who has wondered where the projections of future climate change come from – and why we should believe them…(More)”.
Wastewater monitoring: ‘the James Webb Telescope for population health’
Article by Exemplars News: “When the COVID-19 pandemic triggered a lockdown across Bangladesh and her research on environmental exposure to heavy metals became impossible to continue, Dr. Rehnuma Haque began a search for some way she could contribute to the pandemic response.
“I knew I had to do something during COVID,” said Dr. Haque, a research scientist at the International Centre for Diarrheal Disease Research, Bangladesh (icddr,b). “I couldn’t just sit at home.”
Then she stumbled upon articles on early wastewater monitoring efforts for COVID in Australia, the Netherlands, Italy, and the United States. “When I read those papers, I was so excited,” said Dr. Haque. “I emailed my supervisor, Dr. Mahbubur Rahman, and said, ‘Can we do this?’”
Two months later, in June 2020, Dr. Haque and her colleagues had launched one of the most robust and earliest national wastewater surveillance programs for COVID in a low- or middle-income country (LMIC).
The initiative, which has now been expanded to monitor for cholera, salmonella, and rotavirus and may soon be expanded further to monitor for norovirus and antibiotic resistance, demonstrates the power and potential of wastewater surveillance to serve as a low-cost tool for obtaining real-time meaningful health data at scale to identify emerging risks and guide public health responses.
“It is improving public health outcomes,” said Dr. Haque. “We can see everything going on in the community through wastewater surveillance. You can find everything you are looking for and then prepare a response.”
A single wastewater sample can yield representative data about an entire ward, town, or county and allow LMICs to monitor for emerging pathogens. Compared with clinical surveillance, wastewater data are easier and cheaper to collect, can capture infections that are asymptomatic or not yet symptomatic, raise fewer ethical concerns, are more inclusive and less prone to sampling biases, can generate a broader range of data, and are unrivaled at quickly generating population-level data…(More)” – See also: The #Data4Covid19 Review
The danger of building strong narratives on weak data
Article by John Burn-Murdoch: “Measuring gross domestic product is extremely complicated. Around the world, national statistics offices are struggling to get the sums right the first time around.
Some struggle more than others. When Ireland first reported its estimate for GDP growth in Q1 2015, it came in at 1.4 per cent. One year later, and with some unusual distortions owing to its role as headquarters for many US big tech and pharma companies, this was revised upwards to an eye-watering 21.4 per cent.
On average, five years after an estimate of quarterly Irish GDP growth is first published, the latest revision of that figure is two full percentage points off the original value. The equivalent for the UK is almost 10 times smaller at 0.25 percentage points, making the ONS’s initial estimates among the most accurate in the developed world, narrowly ahead of the US at 0.26 and well ahead of the likes of Japan (0.46) and Norway (0.56).
But it’s not just the size of revisions that matters, it’s the direction. Out of 24 developed countries that consistently report quarterly GDP revisions to the OECD, the UK’s initial estimates are the most pessimistic. Britain’s quarterly growth figures typically end up 0.15 percentage points higher than first thought. The Germans go up by 0.07 on average, the French by 0.04, while the Americans, ever optimistic, typically end up revising their estimates down by 0.11 percentage points.
In other words, next time you hear a set of quarterly growth figures, it wouldn’t be unreasonable to mentally add 0.15 to the UK one and subtract 0.11 from the US.
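The adjustment described above boils down to two simple revision statistics: the mean absolute revision (how far first estimates land from the final figure) and the mean signed revision (whether initial estimates are systematically pessimistic or optimistic). A minimal sketch, using hypothetical figures rather than the OECD revision data the article draws on:

```python
# Illustrative revision statistics for quarterly GDP growth estimates,
# in percentage points. The series below are hypothetical, chosen only
# to mimic a UK-style pattern of upward revisions.

def revision_stats(first, latest):
    """Mean absolute and mean signed revision between estimate vintages."""
    diffs = [l - f for f, l in zip(first, latest)]
    mean_abs = sum(abs(d) for d in diffs) / len(diffs)
    mean_signed = sum(diffs) / len(diffs)
    return mean_abs, mean_signed

first_estimates  = [0.2, 0.1, 0.3, 0.0]   # initial releases
latest_estimates = [0.4, 0.2, 0.5, 0.1]   # after five years of revisions

mean_abs, mean_signed = revision_stats(first_estimates, latest_estimates)
print(mean_abs)     # average size of a revision
print(mean_signed)  # positive => initial estimates were pessimistic
```

A positive mean signed revision is what justifies mentally adding a constant to a country’s first print; a negative one, subtracting.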
This may all sound like nerdy detail, but it matters because people graft strong narratives on to this remarkably flimsy data. Britain was the only G7 economy yet to rebound past pre-Covid levels until it wasn’t. Ireland is booming, apparently, except its actual individual consumption per capita — a much better measure of living standards than GDP — has fallen steadily from just above the western European average in 2007 to 10 per cent below last year.
And the phenomenon is not exclusive to economic data. Two years ago, progressives critical of the government’s handling of the pandemic took to calling the UK “Plague Island”, citing Britain’s reported Covid death rates, which were among the highest in the developed world. But with the benefit of hindsight, we know that Britain was simply better at counting its deaths than most countries…(More)”
Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models
Paper by Pengfei Li, Jianyi Yang, Mohammad A. Islam, Shaolei Ren: “The growing carbon footprint of artificial intelligence (AI) models, especially large ones such as GPT-3 and GPT-4, has come under public scrutiny. The equally important and enormous water footprint of AI models, however, has remained under the radar. For example, training GPT-3 in Microsoft’s state-of-the-art U.S. data centers can directly consume 700,000 liters of clean freshwater (enough to produce 370 BMW cars or 320 Tesla electric vehicles), and the water consumption would have been tripled if training had been done in Microsoft’s Asian data centers, yet such information has been kept secret. This is extremely concerning, as freshwater scarcity has become one of the most pressing challenges we all share amid a rapidly growing population, depleting water resources, and aging water infrastructure. To respond to the global water challenges, AI models can, and should, take social responsibility and lead by example by addressing their own water footprint. In this paper, we provide a principled methodology to estimate the fine-grained water footprint of AI models, and also discuss the unique spatial-temporal diversities of AI models’ runtime water efficiency. Finally, we highlight the necessity of holistically addressing water footprint along with carbon footprint to enable truly sustainable AI…(More)”.
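Water-footprint accounting of this kind typically separates on-site water (data-center cooling, scaled by water usage effectiveness, WUE, in liters per kWh) from off-site water embedded in electricity generation (scaled by the PUE overhead and an electricity-water intensity factor, EWIF). A minimal sketch of that style of estimate, where the function shape and all parameter values are illustrative assumptions, not the paper’s measurements:

```python
# Sketch of a coarse operational water-footprint estimate for a training run.
# WUE, PUE, and EWIF defaults below are illustrative placeholders only.

def water_footprint_liters(server_energy_kwh, wue=0.55, pue=1.2, ewif=3.1):
    """Estimate freshwater use (liters) for a given server energy draw (kWh)."""
    onsite = server_energy_kwh * wue           # direct cooling water at the facility
    offsite = server_energy_kwh * pue * ewif   # water embedded in the grid electricity
    return onsite + offsite

# e.g. a hypothetical 1,000 kWh training run:
print(water_footprint_liters(1000))
```

Because WUE and EWIF vary sharply by location and even by time of day, the same workload can carry a very different water cost depending on where and when it runs, which is the spatial-temporal diversity the paper emphasizes.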