Paper by Jessica Feldman: “This scoping paper considers how digital tools, such as ICTs and AI, have failed to contribute to the “common good” in any sustained or scalable way. This is attributed to a problem that is at once political-economic and technical.
Many digital tools’ business models are predicated on advertising: framing the user as an individual consumer-to-be-targeted, not as an organization, movement, or any sort of commons. At the level of infrastructure and hardware, the increased privatization and centralization of transmission and production leads to a dangerous bottlenecking of communication power, and to labor and production practices that are undemocratic and damaging to common resources.
These practices escalate collective action problems, pose a threat to democratic decision making, aggravate issues of economic and labor inequality, and harm the environment and health. At the same time, the growth of both AI and online community formation raises questions around the very definition of human subjectivity and modes of relationality. Based on an operational definition of the common good grounded in ethics of care, sustainability, and redistributive justice, suggestions are made for solutions and further research in the areas of participatory design, digital democracy, digital labor, and environmental sustainability….(More)”
Article by Justine Calma: “Google unveiled a tool today that could help cities keep their residents cool by mapping out where trees are needed most. Cities tend to be warmer than surrounding areas because buildings and asphalt trap heat. An easy way to cool metropolitan areas down is to plant more trees in neighborhoods where they’re sparse.
Google’s new Tree Canopy Lab uses aerial imagery and Google’s AI to figure out where every tree is in a city. Tree Canopy Lab puts that information on an interactive map along with additional data on which neighborhoods are more densely populated and are more vulnerable to high temperatures. The hope is that planting new trees in these areas could help cities adapt to a warming world and save lives during heat waves.
Google piloted Tree Canopy Lab in Los Angeles. Data on hundreds more cities is on the way, the company says. City planners interested in using the tool in the future can reach out to Google through a form it posted along with today’s announcement.
“We’ll be able to really home in on where the best strategic investment will be in terms of addressing that urban heat,” says Rachel Malarich, Los Angeles’ first city forest officer.
Google claims that its new tool can save cities like Los Angeles time when it comes to taking inventory of their trees. That’s often done by sending people to survey each block. Los Angeles has also mapped its urban forest in the past using LIDAR, which relies on laser sensors to detect the trees, but that process was expensive and slow, according to Malarich. Google’s new service, on the other hand, is free to use and will be updated regularly using images the company already takes by plane for Google Maps….(More)”.
Joseph Cox at Vice: “The U.S. military is buying the granular movement data of people around the world, harvested from innocuous-seeming apps, Motherboard has learned. The most popular app among a group Motherboard analyzed connected to this sort of data sale is a Muslim prayer and Quran app that has more than 98 million downloads worldwide. Others include a Muslim dating app, a popular Craigslist app, an app for following storms, and a “level” app that can be used to help, for example, install shelves in a bedroom.
Through public records, interviews with developers, and technical analysis, Motherboard uncovered two separate, parallel data streams that the U.S. military uses, or has used, to obtain location data. One relies on a company called Babel Street, which creates a product called Locate X. U.S. Special Operations Command (USSOCOM), a branch of the military tasked with counterterrorism, counterinsurgency, and special reconnaissance, bought access to Locate X to assist on overseas special forces operations. The other stream is through a company called X-Mode, which obtains location data directly from apps, then sells that data to contractors, and by extension, the military.
The news highlights the opaque location data industry and the fact that the U.S. military, which has infamously used other location data to target drone strikes, is purchasing access to sensitive data. Many of the users of apps involved in the data supply chain are Muslim, which is notable considering that the United States has waged a decades-long war on predominantly Muslim terror groups in the Middle East, and has killed hundreds of thousands of civilians during its military operations in Pakistan, Afghanistan, and Iraq. Motherboard does not know of any specific operations in which this type of app-based location data has been used by the U.S. military.
The apps sending data to X-Mode include Muslim Pro, an app that reminds users when to pray and what direction Mecca is in relation to the user’s current location. The app has been downloaded over 50 million times on Android, according to the Google Play Store, and over 98 million in total across other platforms including iOS, according to Muslim Pro’s website….(More)”.
Blog post by Bill Gates: “My family loves to do jigsaw puzzles. It’s one of our favorite activities to do together, especially when we’re on vacation. There is something so satisfying about everyone working as a team to put down piece after piece until finally the whole thing is done.
In a lot of ways, the fight against Alzheimer’s disease reminds me of doing a puzzle. Your goal is to see the whole picture, so that you can understand the disease well enough to better diagnose and treat it. But in order to see the complete picture, you need to figure out how all of the pieces fit together.
Right now, all over the world, researchers are collecting data about Alzheimer’s disease. Some of these scientists are working on drug trials aimed at finding a way to stop the disease’s progression. Others are studying how our brain works, or how it changes as we age. In each case, they’re learning new things about the disease.
But until recently, Alzheimer’s researchers often had to jump through a lot of hoops to share their data—to see if and how the puzzle pieces fit together. There are a few reasons for this. For one thing, there is a lot of confusion about what information you can and can’t share because of patient privacy. Often there weren’t easily available tools and technologies to facilitate broad data-sharing and access. In addition, pharmaceutical companies invest a lot of money into clinical trials, and often they aren’t eager for their competitors to benefit from that investment, especially when the programs are still ongoing.
Unfortunately, this siloed approach to research data hasn’t yielded great results. We have only made incremental progress in therapeutics since the late 1990s. There’s a lot that we still don’t know about Alzheimer’s, including what part of the brain breaks down first and how or when you should intervene. But I’m hopeful that will change soon thanks in part to the Alzheimer’s Disease Data Initiative, or ADDI….(More)”.
Paper by Maria Ralli et al: “The lack of granular and rich descriptive metadata highly affects the discoverability and usability of the digital content stored in museums, libraries and archives, aggregated and served through Europeana, thus often frustrating the user experience offered by these institutions’ portals. In this context, metadata enrichment services through automated analysis and feature extraction along with crowdsourcing annotation services can offer a great opportunity for improving the metadata quality of digital cultural content in a scalable way, while at the same time engaging different user communities and raising awareness about cultural heritage assets. One such effort is Crowdheritage, an open crowdsourcing platform that aims to employ machine and human intelligence in order to improve the digital cultural content metadata quality….(More)”.
Louise Guillot and Elisa Braun at Politico: “Emmanuel Macron asked 150 ordinary people to help figure out France’s green policies — and now this citizens’ convention is turning into a political problem for the French president.
The Citizens’ Convention on Climate was aimed at calming tensions in the wake of the Yellow Jackets protest movement — which was sparked by a climate tax on fuel — and showing that Macron wasn’t an out-of-touch elitist.
After nine months of deliberations, the convention this summer came up with 149 proposals to slash greenhouse gas emissions. The government has to put some of these measures before the parliament for them to become binding, and a bill is due to be presented in December.
But that’s too slow for many of the convention’s members, who feel the government is back-pedalling on some of the ideas and that Macron has poked fun at them.
Muriel Raulic, a member of the convention, accused Macron of using the body to greenwash his administration.
She supports a moratorium on 5G high-speed mobile technology, which has created some health and environmental fears. Macron has dismissed proponents of the ban as “Amish” — a Christian sect suspicious of technology.
The 150 members wrote an open letter to Macron in mid-October, complaining about a lack of “clear and defined support from the executive, whose positions sometimes appear contradictory,” and about “openly hostile communications” from “certain professional actors.”
Some gathered late last month before the National Assembly to complain they felt used and treated like “guinea pigs” by politicians. In June, they created an association to oversee what the government is doing with their proposals.
…The government denied it is using the convention to greenwash itself….(More)”.
Paper by Elizabeth Seger and Mark Briers: “The current COVID-19 pandemic and the accompanying ‘infodemic’ clearly illustrate that access to reliable information is crucial to coordinating a timely crisis response in democratic societies. Inaccurate information and the muzzling of important information sources have degraded trust in health authorities and slowed public response to the crisis. Misinformation about ineffective cures, the origins and malicious spread of COVID-19, unverified treatment discoveries, and the efficacy of face coverings have increased the difficulty of coordinating a unified public response during the crisis.
In a recent report, researchers at the Cambridge Centre for the Study of Existential Risk (CSER) in collaboration with The Alan Turing Institute and the Defence Science and Technology Laboratory (Dstl) workshopped an array of hypothetical crisis scenarios to investigate social and technological factors that interfere with well-informed decision-making and timely collective action in democratic societies.
Crisis scenarios
Crisis scenarios are useful tools for appraising threats and vulnerabilities to systems of information production, dissemination, and evaluation. Factors influencing how robust a society is to such threats and vulnerabilities are not always obvious when life is relatively tranquil but are often highlighted under the stress of a crisis.
CSER and Dstl workshop organisers, together with workshop participants (a diverse group of professionals interested in topics related to [mis/dis]information, information technology, and crisis response), co-developed and explored six hypothetical crisis scenarios and complex challenges:
Global health crisis
Character assassination
State fake news campaign
Economic collapse
Xenophobic ethnic cleansing
Epistemic babble, where the general population loses the ability to tell the difference between truth and fiction (presented as truth)
We analysed each scenario to identify various interest groups and actors, to pinpoint vulnerabilities in systems of information production and exchange, and to visualise how the system might be interfered with. We also considered interventions that could help bolster the society against threats to informed decision-making.
The systems map below is an example from workshop scenario 1: Global health crisis. The map shows how adversarial actors (red) and groups working to mitigate the crisis (blue) interact, impact each other’s actions, and influence the general public and other interest groups (green) such as those affected by the health crisis.
Systems maps help visualise vulnerabilities in both red and blue actor systems, which, in turn, helps identify areas where intervention (yellow) is possible to help mitigate the crisis….(More)”
Paper by Heike Schweitzer and Robert Welker: “The paper strives to systematise the debate on access to data from a competition policy angle. At the outset, two general policy approaches to access to data are distinguished: a “private control of data” approach versus an “open access” approach. We argue that, when it comes to private sector data, the “private control of data” approach is preferable. According to this approach, the “whether” and “how” of data access should generally be left to the market. However, public intervention can be justified by significant market failures. We discuss the presence of such market failures and the policy responses, including, in particular, competition policy responses, with a view to three different data access scenarios: access to data by co-generators of usage data (Scenario 1); requests for access to bundled or aggregated usage data by third parties vis-à-vis a service or product provider who controls such datasets, with the goal of entering complementary markets (Scenario 2); requests by firms to access the large usage data troves of the Big Tech online platforms for innovative purposes (Scenario 3). On this basis we develop recommendations for data access policies….(More)”.
Paper by Chris Culnane, Benjamin I. P. Rubinstein, and David Watts: “Adopted by government agencies in Australia, New Zealand, and the UK as policy instrument or as embodied into legislation, the ‘Five Safes’ framework aims to manage risks of releasing data derived from personal information. Despite its popularity, the Five Safes has undergone little legal or technical critical analysis. We argue that the Five Safes is fundamentally flawed: from being disconnected from existing legal protections and appropriating notions of safety without providing any means to prefer strong technical measures, to viewing disclosure risk as static through time and not requiring repeat assessment. The Five Safes provides little confidence that resulting data sharing is performed using ‘safety’ best practice or for purposes in service of public interest….(More)”.
Paper by Stephanie Russo Carroll et al: “Concerns about secondary use of data and limited opportunities for benefit-sharing have focused attention on the tension that Indigenous communities feel between (1) protecting Indigenous rights and interests in Indigenous data (including traditional knowledges) and (2) supporting open data, machine learning, broad data sharing, and big data initiatives. The International Indigenous Data Sovereignty Interest Group (within the Research Data Alliance) is a network of nation-state based Indigenous data sovereignty networks and individuals that developed the ‘CARE Principles for Indigenous Data Governance’ (Collective Benefit, Authority to Control, Responsibility, and Ethics) in consultation with Indigenous Peoples, scholars, non-profit organizations, and governments. The CARE Principles are people- and purpose-oriented, reflecting the crucial role of data in advancing innovation, governance, and self-determination among Indigenous Peoples. The Principles complement the existing data-centric approach represented in the ‘FAIR Guiding Principles for scientific data management and stewardship’ (Findable, Accessible, Interoperable, Reusable). The CARE Principles build upon earlier work by the Te Mana Raraunga Maori Data Sovereignty Network, US Indigenous Data Sovereignty Network, Maiam nayri Wingara Aboriginal and Torres Strait Islander Data Sovereignty Collective, and numerous Indigenous Peoples, nations, and communities. The goal is that stewards and other users of Indigenous data will ‘Be FAIR and CARE.’ In this first formal publication of the CARE Principles, we articulate their rationale, describe their relation to the FAIR Principles, and present examples of their application….(More)” See also Selected Readings on Indigenous Data Sovereignty.