COVID isn’t going anywhere, and neither should our efforts to increase responsible access to data


Article by Andrew J. Zahuranec, Hannah Chafetz and Stefaan Verhulst: “…Moving forward, institutions will need to consider how to embed non-traditional data capacity into their decision-making to better understand the world around them and respond to it.

For example, wastewater surveillance programmes that emerged during the pandemic continue to provide valuable insights about outbreaks before they are reported by clinical testing and have the potential to be used for other emerging diseases.

We need these and other programmes now more than ever. Governments and their partners need to maintain and, in many cases, strengthen the collaborations they established through the pandemic.

To address future crises, we need to institutionalize new data capacities – particularly those involving non-traditional datasets that may capture digital information that traditional health surveys and statistical methods often miss.

The figure above summarizes the types and sources of non-traditional data that stood out most during the COVID-19 response.

The types and sources of non-traditional data sources that stood out most during the COVID-19 response. Image: The GovLab

In our report, we suggest four pathways to advance the responsible access to non-traditional data during future health crises…(More)”.

Researchers scramble as Twitter plans to end free data access


Article by Heidi Ledford: “Akin Ünver has been using Twitter data for years. He investigates some of the biggest issues in social science, including political polarization, fake news and online extremism. But earlier this month, he had to set aside time to focus on a pressing emergency: helping relief efforts in Turkey and Syria after the devastating earthquake on 6 February.

Aid workers in the region have been racing to rescue people trapped by debris and to provide health care and supplies to those displaced by the tragedy. Twitter has been invaluable for collecting real-time data and generating crucial maps to direct the response, says Ünver, a computational social scientist at Özyeğin University in Istanbul.

So when he heard that Twitter was about to end its policy of providing free access to its application programming interface (API) — a pivotal set of rules that allows people to extract and process large amounts of data from the platform — he was dismayed. “Couldn’t come at a worse time,” he tweeted. “Most analysts and programmers that are building apps and functions for Turkey earthquake aid and relief, and are literally saving lives, are reliant on Twitter API.”…

Twitter has long offered academics free access to its API, an unusual approach that has been instrumental in the rise of computational approaches to studying social media. So when the company announced on 2 February that it would end that free access in a matter of days, it sent the field into a tailspin. “Thousands of research projects running over more than a decade would not be possible if the API wasn’t free,” says Patty Kostkova, who specializes in digital health studies at University College London…(More)”.
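The free academic access described above typically amounted to authenticated HTTP calls against Twitter’s v2 search endpoints. A minimal sketch of one such request (the bearer token is a placeholder; the endpoint and parameter names follow Twitter’s public v2 API as documented at the time, and should be verified against current documentation):

```python
# Minimal sketch of a Twitter API v2 recent-search request.
# Uses only the standard library; TOKEN is a placeholder.
import json
import urllib.parse
import urllib.request

SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"

def build_search_params(query, max_results=100):
    """Assemble query-string parameters for one search request."""
    return {
        "query": query,
        "max_results": str(max_results),  # v2 accepts 10-100 per page
        "tweet.fields": "created_at,geo",
    }

def fetch_tweets(query, bearer_token):
    """Perform a single (unpaginated) search call; returns parsed JSON."""
    params = urllib.parse.urlencode(build_search_params(query))
    req = urllib.request.Request(
        f"{SEARCH_URL}?{params}",
        headers={"Authorization": f"Bearer {bearer_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Crisis-mapping workflows like Ünver’s would loop such calls with pagination tokens and geo filters; without free API access, each call becomes a metered cost.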

How ChatGPT Hijacks Democracy


Article by Nathan E. Sanders and Bruce Schneier: “…But for all the consternation over the potential for humans to be replaced by machines in formats like poetry and sitcom scripts, a far greater threat looms: artificial intelligence replacing humans in the democratic processes — not through voting, but through lobbying.

ChatGPT could automatically compose comments submitted in regulatory processes. It could write letters to the editor for publication in local newspapers. It could comment on news articles, blog entries and social media posts millions of times every day. It could mimic the work that the Russian Internet Research Agency did in its attempt to influence our 2016 elections, but without the agency’s reported multimillion-dollar budget and hundreds of employees.

Automatically generated comments aren’t a new problem. For some time, we have struggled with bots, machines that automatically post content. Five years ago, at least a million automatically drafted comments were believed to have been submitted to the Federal Communications Commission regarding proposed regulations on net neutrality. In 2019, a Harvard undergraduate, as a test, used a text-generation program to submit 1,001 comments in response to a government request for public input on a Medicaid issue. Back then, submitting comments was just a game of overwhelming numbers…(More)”

Innovative informatics interventions to improve health and health care


Editorial by Suzanne Bakken: “In this editorial, I highlight 5 papers that address innovative informatics interventions—3 research studies and 2 reviews. The papers reflect a variety of information technologies and processes including mobile health (mHealth), behavioral nudges in the electronic health record (EHR), adaptive intervention framework, predictive models, and artificial intelligence (eg, machine learning, data mining, natural language processing). The interventions were designed to address important clinical and public health problems such as adherence to antiretroviral therapy for persons living with HIV (PLWH), opioid use disorder, and pain assessment and management, as well as aspects of healthcare quality including no-show rates for appointments and erroneous decisions, waste, and misuse of resources due to EHR choice architecture for clinician orders…(More)”.

ChatGPT reminds us why good questions matter


Article by Stefaan Verhulst and Anil Ananthaswamy: “Over 100 million people used ChatGPT in January alone, according to one estimate, making it the fastest-growing consumer application in history. By producing resumes, essays, jokes and even poetry in response to prompts, the software brings into focus not just language models’ arresting power, but the importance of framing our questions correctly.

To that end, a few years ago I initiated the 100 Questions Initiative, which seeks to catalyse a cultural shift in the way we leverage data and develop scientific insights. The project aims not only to generate new questions, but also to reimagine the process of asking them…

As a species and a society, we tend to look for answers. Answers appear to provide a sense of clarity and certainty, and can help guide our actions and policy decisions. Yet any answer represents a provisional end-stage of a process that begins with questions – and often can generate more questions. Einstein drew attention to the critical importance of how questions are framed, which can often determine (or at least play a significant role in determining) the answers we ultimately reach. Frame a question differently and one might reach a different answer. Yet as a society we undervalue the act of questioning – who formulates questions, how they do so, the impact they have on what we investigate, and on the decisions we make. Nor do we pay sufficient attention to whether the answers are in fact addressing the questions initially posed…(More)”.

‘There is no standard’: investigation finds AI algorithms objectify women’s bodies


Article by Hilke Schellmann: “Images posted on social media are analyzed by artificial intelligence (AI) algorithms that decide what to amplify and what to suppress. Many of these algorithms, a Guardian investigation has found, have a gender bias, and may have been censoring and suppressing the reach of countless photos featuring women’s bodies.

These AI tools, developed by large technology companies, including Google and Microsoft, are meant to protect users by identifying violent or pornographic visuals so that social media companies can block them before anyone sees them. The companies claim that their AI tools can also detect “raciness”, or how sexually suggestive an image is. With this classification, platforms – including Instagram and LinkedIn – may suppress contentious imagery.

Two Guardian journalists used the AI tools to analyze hundreds of photos of men and women in underwear, working out and undergoing medical tests with partial nudity, and found evidence that the AI tools tag photos of women in everyday situations as sexually suggestive. They also rate pictures of women as more “racy” or sexually suggestive than comparable pictures of men. As a result, the social media companies that leverage these or similar algorithms have suppressed the reach of countless images featuring women’s bodies, and hurt female-led businesses – further amplifying societal disparities.

Even medical pictures are affected by the issue. The AI algorithms were tested on images released by the US National Cancer Institute demonstrating how to do a clinical breast examination. Google’s AI gave this photo the highest score for raciness, Microsoft’s AI was 82% confident that the image was “explicitly sexual in nature”, and Amazon classified it as representing “explicit nudity”…(More)”.
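On the platform side, the suppression the investigation describes reduces to comparing a classifier’s “racy” confidence against a cutoff. A minimal sketch of such a gate (the score values, labels, and 0.8 threshold are hypothetical, not any vendor’s actual scale or pipeline):

```python
# Hypothetical moderation gate: suppress distribution when a
# classifier's "racy" confidence crosses a threshold. All values
# here are illustrative, not any company's real numbers.
def should_suppress(scores, racy_threshold=0.8):
    """scores: dict mapping label -> model confidence in [0, 1]."""
    return scores.get("racy", 0.0) >= racy_threshold

# The bias the Guardian found means comparable images receive
# unequal scores, so even a "neutral" fixed threshold produces
# unequal suppression:
photo_of_woman = {"racy": 0.9, "adult": 0.1}  # hypothetical scores
photo_of_man = {"racy": 0.3, "adult": 0.1}

should_suppress(photo_of_woman)  # → True
should_suppress(photo_of_man)    # → False
```

The point of the sketch is that the gate itself can be identical for all images; the disparity enters entirely through the biased scores feeding it.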

Six Prescriptions for Building Healthy Behavioral Insights Units


Essay by Dilip Soman and Bing Feng: “Over the past few years, we have had the opportunity to work with over 20 behavioral units as part of our Behaviourally Informed Organizations partnership. While we as a field know a fair bit about what works for changing the behavior of stakeholders, what can we say about what works for creating thriving behavioral units within organizations?

Based on our research and hard-won experience working with a diverse set of behavioral units in government, business, and not-for-profit organizations, we have seen many success stories. But we have also seen our share of instances where the units wished they had done things differently, units with promising pilots that didn’t scale well, units that tried to do everything for everyone, units that jumped to solutions too quickly, units too fixated on one methodology, and units too quick to dispense with advice without thinking through the context in which it will be used.

We’ve outlined six prescriptions that we think are critical to developing a successful behavioral unit—three don’ts and three dos. We hope the advice helps new and existing behavioral units find their path to success.

Prescription 1: Don’t anchor on solutions too soon

Many potential partners approach behavioral units with a preconceived notion of the outcome they want to find. For instance, we have been approached by partners asking us to validate their belief that an app, a website redesign, a new communication program, or a text messaging strategy will be the answer to their behavior change challenge. It is tempting to approach a problem with a concrete solution in mind because it can create the illusion of efficiency.

However, it has been our experience that anchoring on a solution constrains thinking and diverts attention to an aspect of the problem that might not be central to the issue.

For example, in a project one of us (Dilip) was involved in, the team had determined, very early on, that the most efficient and scalable way of delivering their interventions would be through a smartphone app. After extensive investments in developing, piloting, and testing an app, they realized that it didn’t work as expected. In hindsight, they realized that for the intervention to be successful, the recipient needed to pay a certain level of attention, something the app did not allow for. The team made the mistake of anchoring too soon on a solution…(More)”.

Data from satellites is starting to spur climate action


Miriam Kramer and Alison Snyder at Axios: “Data from space is being used to try to fight climate change by optimizing shipping lanes, adjusting rail schedules and pinpointing greenhouse gas emissions.

Why it matters: Satellite data has been used to monitor how human activities are changing Earth’s climate. Now it’s being used to attempt to alter those activities and take action against that change.

  • “Pixels are great but nobody really wants pixels except as a step to answering their questions about how the world is changing and how that should inform their decision-making,” Steven Brumby, CEO and co-founder of Impact Observatory, which uses AI to create maps from satellite data, tells Axios in an email.

What’s happening: Several satellite companies are beginning to use their capabilities to guide on-the-ground actions that contribute to greenhouse gas emissions cuts.

  • UK-based satellite company Inmarsat, which provides telecommunications to the shipping and agriculture industries, is working with Brazilian railway operator Rumo to optimize train trips — and reduce fuel use.
  • Maritime shipping, which relies on heavy fuel oil, is another sector where satellites could help to reduce emissions by routing ships more efficiently and by preventing communications-caused delays, says Inmarsat’s CEO Rajeev Suri. The industry contributes 3% of global greenhouse gas emissions.
  • Carbon capture, innovations in steel and cement production and other inventions are important for addressing climate change, Suri says. But using satellites is “potentially low-hanging fruit because these technologies are already available.”

Other satellites are also tracking emissions of methane — a strong greenhouse gas — from landfills and oil and gas production.

  • “It’s a needle in a haystack problem. There are literally millions of potential leak points all over the world,” says Stéphane Germain, founder and CEO of GHGSat, which monitors methane emissions from its six satellites in orbit.
  • A satellite dedicated to homing in on carbon dioxide emissions is due to launch later this year…(More)”.

Why cities should be fully recognized stakeholders within the UN system


Article by Andràs Szörényi and Pauline Leroy: “Cities and their networks have risen on the international scene in recent decades as urban populations have increased dramatically. Cities have become more vocal on issues such as climate change, migration, and international conflict, as these challenges are increasingly impacting urban areas.

What’s more, innovative solutions to these problems are being invented in cities. And yet, despite their outsized contribution to the global economy and social development, cities have very few opportunities to engage in global decision-making and governance. They are not recognized stakeholders at the United Nations, and mayors are rarely afforded an international stage.

The Geneva Cities Hub – established in 2020 by the City and Canton of Geneva, with the support of the Swiss Confederation – enables cities and local governments to connect with Geneva-based international actors and amplify their voices.

Acknowledging cities as international actors is not just a good thing to do; it’s critical to developing policies that stand a chance of implementation.

When goals are announced and solutions are devised without the input of those in charge of implementation, unanticipated challenges inevitably arise. In short, including cities is critical to ensuring that decisions are practicable.

The Geneva Cities Hub has thus been empowered to facilitate the participation of cities in relevant multilateral processes in the Swiss city and beyond. We follow several of those and identify where the contribution of cities is relevant.

How cities can play a key role in multilateralism

How cities can play a key role in multilateralism. Image: Geneva Cities Hub

We then work with states and international organizations to open these processes up and liaise with local governments to support their engagement…(More)”.

Letting the public decide is key to Big Tech regulation


Article by Rana Foroohar: “Complexity is often used to obfuscate. Industries like finance, pharmaceuticals and particularly technology are rife with examples. Just as programmers can encrypt code or strip out metadata to protect the workings of their intellectual property, so insiders — from technologists to economists to lawyers — can defend their business models by using industry jargon and Byzantine explanations of simple concepts in order to obscure things they may not want the public to understand.

That’s why it’s so important that in its second major antitrust case filed against Google, the US Department of Justice last month asked not only that the company break up its advertising business, but that a jury of the people decide whether it must do so. This is extremely unusual for antitrust cases, which are usually decided by a judge.

It is a risky move, since it means that the DoJ’s antitrust division head, Jonathan Kanter, will have to deconstruct the online advertising auction business for lay people. But it’s also quite smart. The federal judges who hear such complex antitrust cases tend to be older, conservative types who are historically more likely to align themselves with large corporations.

As one legal scholar pointed out to me, such judges are reluctant to be seen as people who don’t understand complexity, even when it’s in a realm far outside their own. This may make them more likely to agree with the arguments put forward by expert witnesses — the Nobel laureates who construct auction models, for example — than average people who are willing to admit they simply don’t get it…

There are, of course, risks to policy by populism. Look at Britain’s departure from the EU after the 2016 referendum, which has left the country poorer. But that’s how democracy works. Allowing important decisions over key issues like corporate power and the rules of surveillance capitalism to be made by technocrats behind closed doors also carries dangers. The justice department is quite right that ordinary people should be able to hear the arguments…(More)”.