Paper by Salla Rantala, Brent Swallow, Anu Lähteenmäki-Uutela and Riikka Paloniemi: “The rapid development of new digital technologies for natural resource management has created a need to design and update governance regimes for effective and transparent generation, sharing and use of digital natural resource data. In this paper, we contribute to this novel area of investigation from the perspective of institutional change. We develop a conceptual framework to analyze how emerging natural resource data governance is shaped by related natural resource governance: complex, multilevel systems of actors, institutions and their interplay. We apply this framework to study forest data governance and its roots in forest governance in Finland and Canada. In Finland, an emphasis on open forest data and the associated legal reform represents the institutionalization of a mixed open data-bioeconomy discourse, pushed by higher-level institutional requirements towards greater openness and shaped by changing actor dynamics in relation to diverse forest values. In Canada, a strong institutional lock-in around public-private partnerships in forest management has engendered an approach that is based on voluntary data sharing agreements and fragmented data management, conforming with the entrenched interests of autonomous sub-national actors and thus extending the path-dependence of forest governance to forest data governance. We conclude by proposing how the framework could be further developed and tested to help explain which factors condition the formation of natural resource data institutions and subsequently the (re-)distribution of benefits they govern. Transparent and efficient data approaches can be enabled only if the analysis of data institutions is given attention equal to that devoted to the technological development of data solutions…(More)”.
Who Should Represent Future Generations in Climate Planning?
Paper by Morten Fibieger Byskov and Keith Hyams: “Extreme impacts from climate change are already being felt around the world. The policy choices that we make now will affect not only how high global temperatures rise but also how well-equipped future economies and infrastructures are to cope with these changes. The interests of future generations must therefore be central to climate policy and practice. This raises the questions: Who should represent the interests of future generations with respect to climate change? And according to which criteria should we judge whether a particular candidate would make an appropriate representative for future generations? In this essay, we argue that potential representatives of future generations should satisfy what we call a “hypothetical acceptance criterion,” which requires that the representative could reasonably be expected to be accepted by future generations. This overarching criterion in turn gives rise to two derivative criteria. These are, first, the representative’s epistemic and experiential similarity to future generations, and second, his or her motivation to act on behalf of future generations. We conclude that communities already adversely affected by climate change best satisfy these criteria and are therefore able to command the hypothetical acceptance of future generations…(More)”.
EU Court Expands Definition of Sensitive Data, Prompting Legal Concerns for Companies
Article by Catherine Stupp: “Companies will be under increased pressure after Europe’s top court ruled they must apply special protections to data that firms previously didn’t consider sensitive.
Under the European Union’s General Data Protection Regulation, information about health, religion, political views and sexual orientation is considered sensitive. Companies generally aren’t allowed to process it unless they apply special safeguards.
The European Court of Justice on Aug. 1 determined that public officials in Lithuania had their sensitive data revealed because their spouses’ names were published online, which could indicate their sexual orientation. Experts say the implications will extend to other types of potentially sensitive information.
Data that might be used to infer a sensitive piece of information about a person is also sensitive, the court said. That could include unstructured data—which isn’t organized in databases and is therefore more difficult to search through and analyze—such as surveillance camera footage in a hospital that indicates a person was treated there, legal experts say. Records of a special airplane meal might reveal religious views.
The court ruling “raises a lot of practical complexities and a lot of difficulty in understanding if the data [organizations] have is sensitive or not,” said Dr. Gabriela Zanfir-Fortuna, vice president for global privacy at the Future of Privacy Forum, a think tank based in Washington, D.C.
Many companies with large data sets may not know they hold details that indirectly relate to sensitive information, privacy experts say. Identifying where that data is and deciding whether it could reveal personal details about an individual would be a huge undertaking, said Tobias Judin, head of the international section at the Norwegian data protection regulator.
“You can’t really comply with the law if your data set becomes so big that you don’t really know what’s in it,” Mr. Judin said.
The GDPR says companies can only process sensitive data in a few circumstances, such as if a person gives explicit consent for it to be used for a specified purpose.
Regulators have been grappling with the question of how to determine what is sensitive data. The Norwegian regulator last year fined gay-dating app Grindr LLC 65 million kroner, equivalent to roughly $6.7 million. The regulator said the user data was sensitive because use of the app indicated users’ sexual orientation.
Grindr said it doesn’t require users to share that data. The company appealed in February. Mr. Judin said his office is reviewing material submitted by the company as part of its appeal. Spain’s regulator came to a different conclusion in January and found that data Grindr shared for advertising purposes wasn’t sensitive….(More)”.
Can open-source technologies support open societies?
Report by Victoria Welborn and George Ingram: “In the 2020 “Roadmap for Digital Cooperation,” U.N. Secretary General António Guterres highlighted digital public goods (DPGs) as a key lever in maximizing the full potential of digital technology to accelerate progress toward the Sustainable Development Goals (SDGs) while also helping overcome some of its persistent challenges.
The Roadmap rightly pointed to the fact that, as with any new technology, there are risks around digital technologies that might be counterproductive to fostering prosperous, inclusive, and resilient societies. In fact, without intentional action by the global community, digital technologies may more naturally exacerbate exclusion and inequality by undermining trust in critical institutions, allowing consolidation of control and economic value by the powerful, and eroding social norms through breaches of privacy and disinformation campaigns.
Just as the pandemic has served to highlight the opportunity for digital technologies to reimagine and expand the reach of government service delivery, so too has it surfaced specific risks that are hallmarks of closed societies and authoritarian states—creating new pathways to government surveillance, reinforcing existing socioeconomic inequalities, and enabling the rapid proliferation of disinformation. Why then—in the face of these real risks—focus on the role of digital public goods in development?
As the Roadmap noted, DPGs are “open source software, open data, open AI models, open standards and open content that adhere to privacy and other applicable laws and best practices, do no harm, and help attain the SDGs.”[1] There are a number of reasons why such products have unique potential to accelerate development efforts, including widely recognized benefits related to more efficient and cost-effective implementation of technology-enabled development programming.
Historically, the use of digital solutions for development in low- and middle-income countries (LMICs) has been supported by donor investments in sector-specific technology systems, reinforcing existing silos and leaving countries with costly, proprietary software solutions that have duplicative functionality and little interoperability across government agencies, let alone a foundation for private sector innovation. These silos are further codified through the development of sector-specific maturity models and metrics. An effective DPG ecosystem has the potential to enable the reuse and improvement of existing tools, thereby lowering the overall cost of deploying technology solutions and making implementation more efficient.
Beyond this proven reusability of DPGs and the associated cost and deployment efficiencies, do DPGs have even more transformational potential? Increasingly, there is interest in DPGs as drivers of inclusion and products through which to standardize and safeguard rights; these opportunities are less understood and remain unproven. To begin to fill that gap, this paper first examines the unique value proposition of DPGs in supporting open societies by advancing more equitable systems and by codifying rights. The paper then considers the persistent challenges to more fully realizing this opportunity and offers some recommendations for how to address these challenges…(More)”.
The New Moral Mathematics
Book Review by Kieran Setiya: “Space is big,” wrote Douglas Adams in The Hitchhiker’s Guide to the Galaxy (1979). “You just won’t believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it’s a long way down the road to the chemist’s, but that’s just peanuts to space.”
Time is big, too—even if we just think on the timescale of a species. We’ve been around for approximately 300,000 years. There are now about 8 billion of us, roughly 7 percent of all humans who have ever lived. You may think that’s a lot, but it’s just peanuts to the future. If we survive for another million years—the longevity of a typical mammalian species—at even a tenth of our current population, there will be 8 trillion more of us. We’ll be outnumbered by future people on the scale of a thousand to one.
What we do now affects those future people in dramatic ways: whether they will exist at all and in what numbers; what values they embrace; what sort of planet they inherit; what sorts of lives they lead. It’s as if we’re trapped on a tiny island while our actions determine the habitability of a vast continent and the life prospects of the many who may, or may not, inhabit it. What an awful responsibility.
This is the perspective of the “longtermist,” for whom the history of human life so far stands to the future of humanity as a trip to the chemist’s stands to a mission to Mars.
Oxford philosophers William MacAskill and Toby Ord, both affiliated with the university’s Future of Humanity Institute, coined the word “longtermism” five years ago. Their outlook draws on utilitarian thinking about morality. According to utilitarianism—a moral theory developed by Jeremy Bentham and John Stuart Mill in the nineteenth century—we are morally required to maximize expected aggregate well-being, adding points for every moment of happiness, subtracting points for suffering, and discounting for probability. When you do this, you find that tiny chances of extinction swamp the moral mathematics. If you could save a million lives today or shave 0.0001 percent off the probability of premature human extinction—a one in a million chance of saving at least 8 trillion lives—you should do the latter, allowing a million people to die.
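Spelled out with the review’s own numbers, the comparison is a two-line expected-value calculation (the “at least 8 trillion” figure comes from the population arithmetic above):

```latex
% Option A: save a million lives today.
E[A] = 10^{6}
% Option B: cut extinction risk by 0.0001\% = 10^{-6},
% with at least 8 \times 10^{12} future lives at stake.
E[B] = 10^{-6} \times 8 \times 10^{12} = 8 \times 10^{6}
% E[B] = 8\,E[A], so the utilitarian calculus favors B.
```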
Now, as many have noted since its origin, utilitarianism is a radically counterintuitive moral view. It tells us that we cannot give more weight to our own interests or the interests of those we love than the interests of perfect strangers. We must sacrifice everything for the greater good. Worse, it tells us that we should do so by any effective means: if we can shave 0.0001 percent off the probability of human extinction by killing a million people, we should—so long as there are no other adverse effects.
But even if you think we are allowed to prioritize ourselves and those we love, and not allowed to violate the rights of some in order to help others, shouldn’t you still care about the fate of strangers, even those who do not yet exist? The moral mathematics of aggregate well-being may not be the whole of ethics, but isn’t it a vital part? It belongs to the domain of morality we call “altruism” or “charity.” When we ask what we should do to benefit others, we can’t ignore the disquieting fact that the others who occupy the future may vastly outnumber those who occupy the present, and that their very existence depends on us.
From this point of view, it’s an urgent question how what we do today will affect the further future—urgent especially when it comes to what Nick Bostrom, the philosopher who directs the Future of Humanity Institute, calls the “existential risk” of human extinction. This is the question MacAskill takes up in his new book, What We Owe the Future, a densely researched but surprisingly light read that ranges from omnicidal pandemics to our new AI overlords without ever becoming bleak…(More)”.
Localising AI for crisis response
Report by Aleks Berditchevskaia, Kathy Peach and Isabel Stewart: “Putting power back in the hands of frontline humanitarians and local communities.
This report documents the results of a year-long project to design and evaluate new proof-of-concept Collective Crisis Intelligence tools. These are tools that combine data from crisis-affected communities with the processing power of AI to improve humanitarian action.
The two collective crisis intelligence tool prototypes developed were:
- NFRI-Predict: a tool that predicts which non-food relief items (NFRI) are most needed by different types of households in different regions of Nepal after a crisis (a minimal sketch of this kind of model appears after this list).
- Report and Respond: a French language SMS-based tool that allows Red Cross volunteers in Cameroon to check the accuracy of COVID-19 rumours or misinformation they hear from the community while they’re in the field, and receive real-time guidance on appropriate responses.
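To make the NFRI-Predict concept concrete, here is a minimal sketch of how such a needs-prediction model could be structured; the feature names, relief items and model choice are illustrative assumptions, not Nesta’s actual pipeline:

```python
# Illustrative sketch only: features, items and model are assumptions,
# not the project's actual data or methodology.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier

# Hypothetical household survey: one row per household, one 0/1 column
# per non-food relief item reported as a priority need.
df = pd.DataFrame({
    "household_size": [2, 5, 7, 3, 6, 4],
    "district":       [1, 1, 2, 2, 3, 3],   # region encoded as an integer
    "house_damaged":  [0, 1, 1, 0, 1, 0],
    "need_blanket":   [0, 1, 1, 0, 1, 0],
    "need_tarpaulin": [0, 1, 1, 0, 0, 0],
})
features = ["household_size", "district", "house_damaged"]
items = ["need_blanket", "need_tarpaulin"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[items], test_size=0.33, random_state=0)

# One binary classifier per relief item, trained jointly.
model = MultiOutputClassifier(RandomForestClassifier(random_state=0))
model.fit(X_train, y_train)
print(model.predict(X_test))  # predicted priority items per household
```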
Both tools were developed using Nesta’s Participatory AI methods, which aimed to address some of the risks associated with humanitarian AI by involving local communities in the design, development and evaluation of the new tools.
The project was a partnership between Nesta’s Centre for Collective Intelligence Design (CCID) and Data Analytics Practice (DAP), the Nepal Red Cross and Cameroon Red Cross, IFRC Solferino Academy, and Open Lab at Newcastle University, and it was funded by the UK Humanitarian Innovation Hub.
We found that collective crisis intelligence:
- has the potential to make local humanitarian action more timely and appropriate to local needs.
- can transform locally-generated data to drive new forms of (anticipatory) action.
We found that participatory AI:
- can overcome several critiques and limitations of AI – as well as help to improve model performance.
- helps to surface tensions between the assumptions and standards set by AI gatekeepers versus the pragmatic reality of implementation.
- creates opportunities for building and sharing new capabilities among frontline staff and data scientists.
We also validated that collective crisis intelligence and participatory AI can help increase trust in AI tools, but more research is needed to untangle the factors that were responsible…(More)”.
Protecting Children in Cyberconflicts
Paper by Eleonore Pauwels: “Just as digital technologies have transformed myriad aspects of daily life, they are now transforming war, politics and the social fabric.
This rapid analysis examines the ways in which cyberconflict adversely affects children and offers actions that could strengthen safeguards to protect them.
Cyberconflict can impact children directly or indirectly. Harms range from direct targeting for influence and recruitment into armed forces and armed groups, to personal data manipulation and theft, to cyber attacks on infrastructure across sectors critical to child well-being such as education and health facilities.
Many experts believe that the combination of existing international humanitarian law, international criminal law, human rights law, and child rights law is adequate to address the emerging issues posed by cyberconflict. Nevertheless, several key challenges persist. Attributing cyber attacks to specific actors and ensuring accountability have proven challenging, particularly in the so-called grey zone between war and peace.
There is an urgent need to clarify how child rights apply in the digital space and for Member States to place these rights at the centre of regulatory frameworks and legislation on new technologies…(More)”.
Using Wikipedia for conflict forecasting
Article by Christian Oswald and Daniel Ohrenhofer: “How can we improve our ability to predict conflicts? Scholars have struggled with this question for a long time. However, as a discipline, and especially over the last two decades, political science has made substantial progress. In general, what we need to improve predictions are advances in data and methodology. Data advances involve both improving the quality of existing data and developing new data sources. We propose a new data source for conflict forecasting efforts: Wikipedia.
The number of country page views indicates international salience of, or interest in, a country. Meanwhile, the number of changes to a country page indicates political controversy between opposing political views.
We took part in the Violence Early-Warning System’s friendly competition to predict changes in battle-related deaths. In our work, we evaluate our findings with out-of-sample predictions using held-out, previously unseen data, and true forecasts into the future. We find support for the predictive power of country page views, whereas we do not for page changes…
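As an illustration of how accessible these data are, monthly page views for a country article can be pulled from the public Wikimedia Pageviews REST API; this is a sketch of the general approach, not the authors’ exact pipeline, and the article and date range are arbitrary:

```python
# Fetch monthly page views for one Wikipedia article via the public
# Wikimedia Pageviews REST API (illustrative choice of article/dates).
import requests

API = ("https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
       "{project}/all-access/all-agents/{article}/monthly/{start}/{end}")

def monthly_views(article, start="2021010100", end="2021123100",
                  project="en.wikipedia.org"):
    """Return {timestamp: views} for one article at monthly granularity."""
    url = API.format(project=project, article=article, start=start, end=end)
    # Wikimedia asks API clients to identify themselves via User-Agent.
    resp = requests.get(url, headers={"User-Agent": "forecasting-demo"})
    resp.raise_for_status()
    return {item["timestamp"]: item["views"] for item in resp.json()["items"]}

print(monthly_views("Ukraine"))
```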
Globally available data, updated monthly, are ideal for (near) real-time forecasting. However, many commonly used data sources are available only annually. They are updated once a year, often with considerable delay.
Some of these variables, such as democracy or GDP, tend to be relatively static over time. Furthermore, many data sources face the problem of missing values. These occur when it is not possible to find reliable data for a variable for a given country.
More recent data sources such as Twitter, images or text as data, or mobile phone data, often do not provide global coverage. What’s more, collecting and manipulating data from such sources is typically computationally and/or financially costly. Wikipedia provides an alternative data source that, to some extent, overcomes many of these limitations…(More)”.
Designing Data Spaces: The Ecosystem Approach to Competitive Advantage
Open access book edited by Boris Otto, Michael ten Hompel, and Stefan Wrobel: “…provides a comprehensive view on data ecosystems and platform economics from methodical and technological foundations up to reports from practical implementations and applications in various industries.
To this end, the book is structured in four parts: Part I “Foundations and Contexts” provides a general overview about building, running, and governing data spaces and an introduction to the IDS and GAIA-X projects. Part II “Data Space Technologies” subsequently details various implementation aspects of IDS and GAIA-X, including, e.g., data usage control, the usage of blockchain technologies, or semantic data integration and interoperability. Next, Part III describes various “Use Cases and Data Ecosystems” from application areas such as agriculture, healthcare, industry, energy, and mobility. Part IV finally offers an overview of several “Solutions and Applications”, including, e.g., products and experiences from companies like Google, SAP, Huawei, T-Systems, Innopay and many more.
Overall, the book provides professionals in industry with an encompassing overview of the technological and economic aspects of data spaces, based on the International Data Spaces and Gaia-X initiatives. It presents implementations and business cases and gives an outlook on future developments. In doing so, it aims to advance the vision of a social data market economy based on data spaces that embrace trust and data sovereignty…(More)”.
How crowdfunding is shaping the war in Ukraine
The Economist: “This month Aerorozvidka, a Ukrainian drone unit, celebrated the acquisition of four Chinese-made DJI Phantom 3 drones, provided by a German donor. The group, founded in 2014 after the Russian invasion of eastern Ukraine and annexation of Crimea, is led by civilians. The gift is just one example of crowdfunding in Russia’s latest war against Ukraine. Citizens from both sides are supplying much-needed equipment to the front lines. What is the impact of these donations, and how do the two countries differ in their approach?
Private citizens have chipped in to help in times of war for centuries. A writing tablet found near Hadrian’s Wall in northern England mentions a gift of sandals, socks and underwear for Roman soldiers. During the first world war America’s government asked civilians to knit warm clothing for troops. But besides such small morale-boosting efforts, some schemes to rally civilians have proved strikingly productive. During the second world war Britain introduced a “Spitfire Fund”, encouraging civilian groups to raise the £12,600 (£490,000, or $590,000, in today’s money) needed to build the top-of-the-range fighter. Individual contributors could buy wings, machineguns or even a rivet, for six old pence (two and a half modern ones) apiece. The scheme raised around £13m in total—enough for more than 1,000 aircraft (of a total of 20,000 built)…(More)”.