The effects of AI on the working lives of women


Report by Clementine Collett, Gina Neff and Livia Gouvea: “Globally, studies show that women in the labor force are paid less, hold fewer senior positions and participate less in science, technology, engineering and mathematics (STEM) fields. A 2019 UNESCO report found that women represent only 29% of science R&D positions globally and are already 25% less likely than men to know how to leverage digital technology for basic uses.

As the use and development of Artificial Intelligence (AI) continue to mature, it's time to ask: What will tomorrow's labor market look like for women? Are we effectively harnessing the power of AI to narrow gender equality gaps, or are we letting these gaps persist, or, even worse, widen?

This collaboration between UNESCO, the Inter-American Development Bank (IDB) and the Organisation for Economic Co-operation and Development (OECD) examines the effects of the use of AI on the working lives of women. By closely following the major stages of the workforce lifecycle – from job requirements, to hiring, to career progression and upskilling within the workplace – this joint report is a thorough introduction to issues related to gender and AI and hopes to foster important conversations about women's equality in the future of work…(More)”

Imaginable: How to see the future coming and be ready for anything


Book by Jane McGonigal: “When we think about the future, it can be difficult to feel that we have any control. We aren’t confident that we can take actions and make decisions that help determine what happens next. We want to feel prepared, hopeful and equipped, and to face the future with optimism. Or, better yet, change the future. But how do we map out our lives when it feels impossible to predict what the world will be like next week, let alone next year?

Jane McGonigal, a renowned future forecaster, reveals that ‘unimaginable’ events aren’t unimaginable before they happen. It is possible to see them coming and it’s a mindset that can be learned by engaging with tools, games, and ideas that will allow you to dive into the future before you live it.

By learning to think the unthinkable and imagine the unimaginable you can better plan for a future you’d like to see. And by seeing what’s coming faster, you can adapt to new challenges, reduce anxiety, and build hope and resilience…(More)”.

‘It’s like the wild west’: Data security in frontline aid


A Q&A on how aid workers handle sensitive data by Irwin Loy: “The cyber-attack on the International Committee of the Red Cross, discovered in January, was the latest high-profile breach to connect the dots between humanitarian data risks and real-world harms. Personal information belonging to more than 515,000 people was exposed in what the ICRC said was a “highly sophisticated” hack using tools employed mainly by states or state-backed groups.

But there are countless other examples of how the reams of data collected from some of the world’s most vulnerable communities can be compromised, misused, and mishandled.

“The biggest frontier in the humanitarian sector is the weaponisation of humanitarian data,” said Olivia Williams, a former aid worker who now specialises in information security at Apache iX, a UK-based defence consultancy.

She recently completed research – including surveys and interviews with more than 180 aid workers from 28 countries – examining how data is handled, and what agencies and frontline staff say they do to protect it.

Sensitive data is often collected on personal devices, sent over hotel WiFi, scrawled on scraps of paper then photographed and sent to headquarters via WhatsApp, or simply emailed and widely shared with partner organisations, aid workers told her.

The organisational security and privacy policies meant to guide how data is stored and protected? Impractical, irrelevant, and often ignored, Williams said.

Some frontline staff are taking information security into their own hands, devising their own systems of coding, filing, and securing data. One respondent kept paper files locked in their bedroom.

Aid workers from dozens of major UN agencies, NGOs, Red Cross organisations, and civil society groups took part in the survey.

Williams’ findings echo her own misgivings about data security in her previous deployments to crisis zones from northern Iraq to Nepal and the Philippines. Aid workers are increasingly alarmed about how data is handled, she said, while their employers are largely “oblivious” to what actually happens on the ground.

Williams spoke to The New Humanitarian about the unspoken power imbalance in data collection, why there’s so much data, and what aid workers can do to better protect it….(More)”.

Eight reasons responsible data for and about children matters


Article by Stefaan Verhulst and Andrew Young: “…The relationship between the datafication of everyday life and child welfare has generally been under-explored, both by researchers in data ethics and those who work to advance the rights of children. This neglect is a lost opportunity, and it also poses a risk to children, who are in many ways at the forefront of the steady incursions of data into our lives. In what follows, over a series of two articles, we outline eight reasons why child welfare advocates should pay more attention to data, and why we need a framework for responsible data collection and use for children…(Part 1) and (Part 2). (See also Responsible Data for Children).

Superpowers as Inspiration for Visualization


Paper by Wesley Willett et al: “We explore how the lens of fictional superpowers can help characterize how visualizations empower people and provide inspiration for new visualization systems. Researchers and practitioners often tout visualizations’ ability to “make the invisible visible” and to “enhance cognitive abilities.” Meanwhile superhero comics and other modern fiction often depict characters with similarly fantastic abilities that allow them to see and interpret the world in ways that transcend traditional human perception. We investigate the intersection of these domains, and show how the language of superpowers can be used to characterize existing visualization systems and suggest opportunities for new and empowering ones. We introduce two frameworks: The first characterizes seven underlying mechanisms that form the basis for a variety of visual superpowers portrayed in fiction. The second identifies seven ways in which visualization tools and interfaces can instill a sense of empowerment in the people who use them. Building on these observations, we illustrate a diverse set of “visualization superpowers” and highlight opportunities for the visualization community to create new systems and interactions that empower new experiences with data…(More).”

How climate data scarcity costs lives


Paula Dupraz-Dobias at New Humanitarian: “Localised data can help governments project climate forecasts, prepare for disasters as early as possible, and create long-term policies for adapting to climate change.

Wealthier countries tend to have better access to new technology that allows for more accurate predictions, such as networks of temperature, wind, and atmospheric pressure sensors.

But roughly half the world’s countries do not have multi-hazard early warning systems, according to the UN’s World Meteorological Organization. Some 60 percent lack basic water information services designed to gather and analyse data on surface, ground, and atmospheric water, which could help reduce flooding and better manage water. Some 43 percent do not communicate or interact adequately with other countries to share potentially life-saving information.

The black holes in weather data around the globe

[Map: Availability of surface land observations (WMO/ECMWF).] The US reports weather observations every three hours, as opposed to the every hour required by World Meteorological Organization regulations. It says it will comply with these from 2023.

See WIGOS’s full interactive map

“Right now, we can analyse weather; in other words, what happens today, tomorrow, and the day after,” said Ena Jaimes Espinoza, a weather expert at CENEPRED, Peru’s national centre for disaster monitoring, prevention, and risk reduction. “For climate data, where you need years of data, there is still a dearth [of information].”

Without this information, she said, it’s difficult to establish accurate trends in different areas of the country – trends that could help forecasters better predict conditions in Tarucani, for example, or help policymakers to plan responses.

Inadequate funding, poor data-sharing between countries, and conflict, at least in some parts of the world, contribute to the data shortfalls. Climate experts warn that some of the world’s most disaster-vulnerable countries risk being left behind as this information gap widens…(More)”.

On the Dynamics of Human Behavior: The Past, Present, and Future of Culture, Conflict, and Cooperation


Paper by Nathan Nunn: “I provide a theoretically-guided discussion of the dynamics of human behavior, focusing on the importance of culture (socially-learned information) and tradition (transmission of culture across generations). Decision-making that relies on tradition can be an effective strategy and arises in equilibrium. While dynamically optimal, it generates static ‘mismatch.’ When the world changes, since traits evolve slowly, they may not be beneficial in their new environment. I discuss how mismatch helps explain the world around us, presents special challenges and opportunities for policy, and provides important lessons for our future as a human species…(More)”.

An intro to AI, made for students


Reena Jana at Google: “Adorable, operatic blobs. A global, online guessing game. Scribbles that transform into works of art. These may not sound like they’re part of a curriculum, but learning the basics of how artificial intelligence (AI) works doesn’t have to be complicated, super-technical or boring.

To celebrate Digital Learning Day, we’re releasing a new lesson from Applied Digital Skills, Google’s free, online, video-based curriculum (and part of the larger Grow with Google initiative). “Discover AI in Daily Life” was designed with middle and high school students in mind, and dives into how AI is built, and how it helps people every day.

AI for anyone — and everyone

“Twenty or 30 years ago, students might have learned basic typing skills in school,” says Dr. Patrick Gage Kelley, a Google Trust and Safety user experience researcher who co-created (and narrates) the “Discover AI in Daily Life” lesson. “Today, ‘AI literacy’ is a key skill. It’s important that students everywhere, from all backgrounds, are given the opportunity to learn about AI.”

“Discover AI in Daily Life” begins with the basics. You’ll find simple, non-technical explanations of how a machine can “learn” from patterns in data, and why it’s important to train AI responsibly and avoid unfair bias.

First-hand experiences with AI

“By encouraging students to engage directly with everyday tools and experiment with them, they get a first-hand experience of the potential uses and limitations of AI,” says Dr. Annica Voneche, the lesson’s learning designer. “Those experiences can then be tied to a more theoretical explanation of the technology behind it, in a way that makes the often abstract concepts behind AI tangible.”…(More)”.

How to avoid sharing bad information about Russia’s invasion of Ukraine


Abby Ohlheiser at MIT Technology Review: “The fast-paced online coverage of the Russian invasion of Ukraine on Wednesday followed a pattern that’s become familiar in other recent crises that have unfolded around the world. Photos, videos, and other information are posted and reshared across platforms much faster than they can be verified.

The result is that falsehoods are mistaken for truth and amplified, even by well-intentioned people. This can help bad actors to terrorize innocent civilians or advance disturbing ideologies, causing real harm.

Disinformation has been a prominent and explicit part of the Russian government’s campaign to justify the invasion. Russia falsely claimed that Ukrainian forces in Donbas, a region in the southeastern part of the country that harbors a large number of pro-Russian separatists, were planning violent attacks, engaging in antagonistic shelling, and committing genocide. Fake videos of those nonexistent attacks became part of a domestic propaganda campaign. (The US government, meanwhile, has been working to debunk and “prebunk” these lies.)

Meanwhile, even people who are not part of such government campaigns may intentionally share bad, misleading, or false information about the invasion to promote ideological narratives, or simply to harvest clicks, with little care about the harm they’re causing. In other cases, honest mistakes made amid the fog of war take off and go viral….

Your attention matters …

First, realize that what you do online makes a difference. “People often think that because they’re not influencers, they’re not politicians, they’re not journalists, that what they do [online] doesn’t matter,” Whitney Phillips, an assistant professor of communication and rhetorical studies at Syracuse University, told me in 2020. But it does matter. Sharing dubious information with even a small circle of friends and family can lead to its wider dissemination.

… and so do your angry quote tweets and duets.

While an urgent news story is developing, well-meaning people may quote, tweet, share, or duet with a post on social media to challenge and condemn it. Twitter and Facebook have introduced new rules, moderation tactics, and fact-checking provisions to try to combat misinformation. But interacting with misinformation at all risks amplifying the content you’re trying to minimize, because it signals to the platform that you find it interesting. Instead of engaging with a post you know to be wrong, try flagging it for review by the platform where you saw it.

Stop.

Mike Caulfield, a digital literacy expert, developed a method for evaluating online information that he calls SIFT: “Stop, Investigate the source, Find better coverage, and Trace claims, quotes, and media to the original context.” When it comes to news about Ukraine, he says, the emphasis should be on “Stop”—that is, pause before you react to or share what you’re seeing….(More)”.

How Tech Despair Can Set You Free


Essay by Samuel Matlack: “One way to look at the twentieth century is to say that nations may rise and fall but technical progress remains forever. Its sun rises on the evil and on the good, and its rain falls on the just and on the unjust. Its sun can be brighter than a thousand suns, scorching our enemies, but, with some time and ingenuity, it can also power air conditioners and 5G. One needs to look on the bright side, living by faith and not by sight.

The century’s inquiring minds wished to know whether this faith in progress is meaningfully different from blindness. Ranking high among those minds was the French historian, sociologist, and lay theologian Jacques Ellul, and his answer was simple: No.

In America, Ellul became best known for his book The Technological Society. The book’s signature term was “technique,” an idea he developed throughout his vast body of writing. Technique is the social structure on which modern life is built. It is the consciousness that has come to govern all human affairs, suppressing questions of ultimate human purposes and meaning. Our society no longer asks why we should do anything. All that matters anymore, Ellul argued, is how to do it — to which the canned answer is always: More efficiently! Much as a modern machine can be said to run on its own, so does the technological society. Human control of it is an illusion, which means we are on a path to self-destruction — not because the social machine will necessarily kill us (although it might), but because we are fast becoming soulless creatures…(More)”.