US Department of Commerce: “…This guidance provides actionable guidelines and best practices for publishing open data optimized for generative AI systems. While it is designed for use by the Department of Commerce and its bureaus, this guidance has been made publicly available to benefit open data publishers globally…(More)”. See also: A Fourth Wave of Open Data? Exploring the Spectrum of Scenarios for Open Data and Generative AI
AI for Social Good
Essay by Iqbal Dhaliwal: “Artificial intelligence (AI) has the potential to transform our lives. Like the internet, it’s a general-purpose technology that spans sectors, is widely accessible, has a low marginal cost of adding users, and is constantly improving. Tech companies are rapidly deploying more capable AI models that are seeping into our personal lives and work.
AI is also swiftly penetrating the social sector. Governments, social enterprises, and NGOs are infusing AI into programs, while public treasuries and donors are working hard to understand where to invest. For example, AI is being deployed to improve health diagnostics, map flood-prone areas for better relief targeting, grade students’ essays to free up teachers’ time for student interaction, assist governments in detecting tax fraud, and enable agricultural extension workers to customize advice.
But the social sector is also rife with examples over the past two decades of technologies touted as silver bullets that fell short of expectations, including One Laptop Per Child, SMS reminders to take medication, and smokeless stoves to reduce indoor air pollution. To avoid a similar fate, AI-infused programs must incorporate insights from years of evidence generated by rigorous impact evaluations and be scaled in an informed way through concurrent evaluations.
Specifically, implementers of such programs must pay attention to three elements. First, they must use research insights on where AI is likely to have the greatest social impact. Decades of research using randomized controlled trials and other exacting empirical work provide us with insights across sectors on where and how AI can play the most effective role in social programs.
Second, they must incorporate research lessons on how to effectively infuse AI into existing social programs. We have decades of research on when and why technologies succeed or fail in the social sector that can help guide AI adopters (governments, social enterprises, NGOs), tech companies, and donors to avoid pitfalls and design effective programs that work in the field.
Third, we must promote the rigorous evaluation of AI in the social sector so that we disseminate trustworthy information about what works and what does not. We must motivate adopters, tech companies, and donors to conduct independent, rigorous, concurrent impact evaluations of promising AI applications across social sectors (including impact on workers themselves); draw insights emerging across multiple studies; and disseminate those insights widely so that the benefits of AI can be maximized and its harms understood and minimized. Taking these steps can also help build trust in AI among social sector players and program participants more broadly…(More)”.
Which Health Facilities Have Been Impacted by L.A.-Area Fires? AI May Paint a Clearer Picture
Article by Andrew Schroeder: “One of the most important tasks for humanitarian responders in these types of large-scale disaster situations is to understand the effects on the formal health system, upon which most people, and vulnerable communities in particular, rely in their neighborhoods. Evaluating the impact of disasters on individual structures, including critical infrastructure such as health facilities, is traditionally a slow and manually arduous process, involving extensive ground-truth visits by teams of assessment professionals.
Speeding up this process without losing accuracy, while potentially improving the safety and efficiency of assessment teams, is among the more important analytical contributions Direct Relief can make to response and recovery efforts. Manual assessments can now be effectively paired with AI-based analysis of satellite imagery to do just that…
With the advent of geospatial AI models trained on disaster damage impacts, ground assessment is not the only tool available to response agencies and others seeking to understand how much damage has occurred and the degree to which that damage may affect essential services for communities. The work of the Oregon State University team of experts in remote sensing-based post-disaster damage detection, led by Jamon Van Den Hoek and Corey Scher, was featured in the Financial Times on January 9.
Their modeling, based on Sentinel-1 satellite imagery, identified 21,757 structures overall, of which 11,124 were determined to have some level of damage. The Oregon State model does not distinguish between different levels of damage, and therefore cannot answer certain questions that manual inspections can, but its coverage area and speed of detection are much greater…(More)”.
Governance of Indigenous data in open earth systems science
Paper by Lydia Jennings et al: “In the age of big data and open science, what processes are needed to follow open science protocols while upholding Indigenous Peoples’ rights? The Earth Data Relations Working Group (EDRWG) convened to address this question and envision a research landscape that acknowledges the legacy of extractive practices and embraces new norms across Earth science institutions and open science research. Using the National Ecological Observatory Network (NEON) as an example, the EDRWG recommends actions, applicable across all phases of the data lifecycle, that recognize the sovereign rights of Indigenous Peoples and support better research across all Earth Sciences…(More)”.
Facing & mitigating common challenges when working with real-world data: The Data Learning Paradigm
Paper by Jake Lever et al: “The rapid growth of data-driven applications is ubiquitous across virtually all scientific domains and has led to increasing demand for effective methods to handle data deficiencies and mitigate the effects of imperfect data. This paper presents a guide for researchers encountering real-world data-driven applications and the challenges associated with them. It proposes the concept of the Data Learning Paradigm, which combines the principles of machine learning, data science, and data assimilation to tackle real-world challenges in data-driven applications. Models are a product of the data on which they are trained, and no data collected from real-world scenarios is perfect, owing to natural limitations of sensing and collection. Computational modelling of real-world systems is therefore intrinsically limited by the various deficiencies encountered in real data. The Data Learning Paradigm aims to leverage the strengths of data improvement to enhance the accuracy, reliability, and interpretability of data-driven models. We outline a range of Data Learning methods currently being implemented, drawing on machine learning and data science, and discuss how these mitigate the various problems associated with data-driven models, illustrating improved results in a multitude of real-world applications. We highlight examples where these methods have led to significant advancements in fields such as environmental monitoring, planetary exploration, healthcare analytics, linguistic analysis, social networks, and smart manufacturing. We offer a guide to how these methods may be implemented to deal with general types of limitations in data, alongside their current and potential applications…(More)”.
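As a rough, illustrative sketch of the kind of data-deficiency handling the paper describes (a generic example under assumed data, not the authors' code), the snippet below repairs gaps in an imperfect sensor series before fitting a simple model, so the model learns from an improved rather than a raw, incomplete signal:

```python
# Minimal sketch, assuming a typical data-repair step before modelling:
# fill missing values in a real-world-like sensor series, then fit a
# simple data-driven model. Illustrative only, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

# Simulated hourly readings with gaps (NaN) from collection failures.
t = np.arange(48, dtype=float)
signal = 10 + 2 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.3, t.size)
signal[[5, 6, 20, 33]] = np.nan

# Step 1: impute the gaps (simple linear interpolation over time).
mask = np.isnan(signal)
signal[mask] = np.interp(t[mask], t[~mask], signal[~mask])

# Step 2: fit a least-squares sinusoid with a daily period to the
# repaired series as a stand-in for a downstream data-driven model.
X = np.column_stack([np.ones_like(t),
                     np.sin(2 * np.pi * t / 24),
                     np.cos(2 * np.pi * t / 24)])
coef, *_ = np.linalg.lstsq(X, signal, rcond=None)
print("fitted coefficients:", np.round(coef, 2))
```

More sophisticated Data Learning pipelines would replace the interpolation step with learned imputation or data assimilation, but the shape of the workflow, improve the data first and then model it, is the same.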
Digitalizing sewage: The politics of producing, sharing, and operationalizing data from wastewater-based surveillance
Paper by Josie Wittmer, Carolyn Prouse, and Mohammed Rafi Arefin: “Expanded during the COVID-19 pandemic, Wastewater-Based Surveillance (WBS) is now heralded by scientists and policy makers alike as the future of monitoring and governing urban health. The expansion of WBS reflects larger neoliberal governance trends whereby digitalizing states increasingly rely on producing big data as a ‘best practice’ to surveil various aspects of everyday life. With a focus on three South Asian cities, our paper investigates the transnational pathways through which WBS data is produced, made known, and operationalized in ‘evidence-based’ decision-making in a time of crisis. We argue that in South Asia, wastewater surveillance data is actively produced through fragile but power-laden networks of transnational and local knowledge, funding, and practices. Using mixed qualitative methods, we found these networks produced artifacts like dashboards to communicate data to the public in ways that enabled claims to objectivity, ethical interventions, and transparency. Interrogating these representations, we demonstrate how these artifacts open up messy spaces of translation that trouble linear notions of objective data informing accountable, transparent, and evidence-based decision-making for diverse urban actors. By thinking through the production of precarious biosurveillance infrastructures, we respond to calls for more robust ethical and legal frameworks for the field and suggest that the fragility of WBS infrastructures has important implications for the long-term trajectories of urban public health governance in the global South…(More)”
Will Artificial Intelligence Replace Us or Empower Us?
Article by Peter Coy: “…But A.I. could also be designed to empower people rather than replace them, as I wrote a year ago in a newsletter about the M.I.T. Shaping the Future of Work Initiative.
Which of those A.I. futures will be realized was a big topic at the San Francisco conference, which was the annual meeting of the American Economic Association, the American Finance Association and 65 smaller groups in the Allied Social Science Associations.
Erik Brynjolfsson of Stanford was one of the busiest economists at the conference, dashing from one panel to another to talk about his hopes for a human-centric A.I. and his warnings about what he has called the “Turing Trap.”
Alan Turing, the English mathematician and World War II code breaker, proposed in 1950 to evaluate the intelligence of computers by whether they could fool someone into thinking they were human. His “imitation game” led the field in an unfortunate direction, Brynjolfsson argues — toward creating machines that behaved as much like humans as possible, instead of like human helpers.
Henry Ford didn’t set out to build a car that could mimic a person’s walk, so why should A.I. experts try to build systems that mimic a person’s mental abilities? Brynjolfsson asked at one session I attended.
Other economists have made similar points: Daron Acemoglu of M.I.T. and Pascual Restrepo of Boston University use the term “so-so technologies” for systems that replace human beings without meaningfully increasing productivity, such as self-checkout kiosks in supermarkets.
People will need a lot more education and training to take full advantage of A.I.’s immense power, so that they aren’t just elbowed aside by it. “In fact, for each dollar spent on machine learning technology, companies may need to spend nine dollars on intangible human capital,” Brynjolfsson wrote in 2022, citing research by him and others…(More)”.
AI Is Bad News for the Global South
Article by Rachel Adams: “…AI’s adoption in developing regions is also limited by its design. AI designed in Silicon Valley on largely English-language data is often not fit for purpose outside wealthy Western contexts. The productive use of AI requires stable internet access or smartphone technology; in sub-Saharan Africa, only 25 percent of people have reliable internet access, and it is estimated that African women are 32 percent less likely to use mobile internet than their male counterparts.
Generative AI technologies are also predominantly developed using the English language, meaning that the outputs they produce for non-Western users and contexts are often useless, inaccurate, and biased. Innovators in the global south have to put in at least twice the effort to make their AI applications work for local contexts, often by retraining models on localized datasets and through extensive trial and error.
Where AI is designed to generate profit and entertainment only for the already privileged, it will not be effective in addressing the conditions of poverty and in changing the lives of groups that are marginalized from the consumer markets of AI. Without a high level of saturation across major industries, and without the infrastructure in place to enable meaningful access to AI by all people, global south nations are unlikely to see major economic benefits from the technology.
As AI is adopted across industries, human labor is changing. For poorer countries, this is engendering a new race to the bottom where machines are cheaper than humans and the cheap labor that was once offshored to their lands is now being onshored back to wealthy nations. The people most impacted are those with lower education levels and fewer skills, whose jobs can be more easily automated. In short, much of the population in lower- and middle-income countries may be affected, severely impacting the lives of millions of people and threatening the capacity of poorer nations to prosper…(More)”.
Behaviour-based dependency networks between places shape urban economic resilience
Paper by Takahiro Yabe et al: “Disruptions, such as closures of businesses during pandemics, not only affect businesses and amenities directly but also influence how people move, spreading the impact to other businesses and increasing the overall economic shock. However, it is unclear how much businesses depend on each other during disruptions. Leveraging human mobility data and same-day visits in five US cities, we quantify dependencies between points of interest encompassing businesses, stores and amenities. We find that dependency networks computed from human mobility exhibit significantly higher rates of long-distance connections and biases towards specific pairs of point-of-interest categories. We show that using behaviour-based dependency relationships improves the predictability of business resilience during shocks by around 40% compared with distance-based models, and that neglecting behaviour-based dependencies can lead to underestimation of the spatial cascades of disruptions. Our findings underscore the importance of measuring complex relationships in patterns of human mobility to foster urban economic resilience to shocks…(More)”.
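To make the idea of behaviour-based dependencies concrete, here is a minimal, hypothetical sketch (not the authors' pipeline): it counts same-day co-visits between points of interest from simplified mobility records and turns them into directed dependency weights.

```python
# Minimal sketch (not the paper's actual method): build a behaviour-based
# dependency network from same-day visits. Each record is (device, day, poi).
from collections import defaultdict
from itertools import combinations

visits = [
    ("u1", "2024-01-03", "cafe_A"),
    ("u1", "2024-01-03", "grocery_B"),
    ("u2", "2024-01-03", "cafe_A"),
    ("u2", "2024-01-03", "gym_C"),
    ("u1", "2024-01-04", "cafe_A"),
    ("u1", "2024-01-04", "grocery_B"),
]

# Group the POIs each device visited on each day.
daily_pois = defaultdict(set)
for device, day, poi in visits:
    daily_pois[(device, day)].add(poi)

# Count same-day co-visits for every POI pair, and total visits per POI.
co_visits = defaultdict(int)
poi_visits = defaultdict(int)
for pois in daily_pois.values():
    for poi in pois:
        poi_visits[poi] += 1
    for a, b in combinations(sorted(pois), 2):
        co_visits[(a, b)] += 1

# An assumed dependency weight: the share of visits to one POI that
# co-occur with a same-day visit to the other. A real model would
# control for chance co-occurrence, distance, and category biases.
for (a, b), n in co_visits.items():
    print(f"{a} -> {b}: {n / poi_visits[a]:.2f}")
    print(f"{b} -> {a}: {n / poi_visits[b]:.2f}")
```

The resulting directed weights are the kind of behaviour-based edges that can then be compared against purely distance-based models when simulating how a closure cascades through a neighbourhood's businesses.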
Big brother: the effects of surveillance on fundamental aspects of social vision
Paper by Kiley Seymour et al: “Despite the dramatic rise of surveillance in our societies, only limited research has examined its effects on humans. While most research has focused on voluntary behaviour, no study has examined the effects of surveillance on more fundamental and automatic aspects of human perceptual awareness and cognition. Here, we show that being watched on CCTV markedly impacts a hardwired and involuntary function of human sensory perception—the ability to consciously detect faces. Using the method of continuous flash suppression (CFS), we show that when people are surveilled (N = 24), they are quicker than controls (N = 30) to detect faces. An independent control experiment (N = 42) ruled out an explanation based on demand characteristics and social desirability biases. These findings show that being watched impacts not only consciously controlled behaviours but also unconscious, involuntary visual processing. Our results have implications concerning the impacts of surveillance on basic human cognition as well as public mental health…(More)”.
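For readers curious how the headline group comparison might look in practice, here is a hedged sketch using simulated reaction times (the data and the choice of test are assumptions, not the study's analysis): it compares CFS breakthrough times between a surveilled group (N = 24) and controls (N = 30) with an independent-samples test.

```python
# Hypothetical sketch of the group comparison implied by the abstract:
# compare face-detection (CFS breakthrough) times between a surveilled
# group and controls. The data below are simulated, not the study's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
surveilled = rng.normal(loc=2.6, scale=0.5, size=24)   # seconds, N = 24
controls   = rng.normal(loc=3.0, scale=0.5, size=30)   # seconds, N = 30

t_stat, p_value = stats.ttest_ind(surveilled, controls, equal_var=False)
print(f"mean surveilled = {surveilled.mean():.2f}s, "
      f"mean controls = {controls.mean():.2f}s")
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.4f}")
```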