Geospatial Data Market Study


Study by Frontier Economics: “Frontier Economics was commissioned by the Geospatial Commission to carry out a detailed economic study of the size, features and characteristics of the UK geospatial data market. The Geospatial Commission was established within the Cabinet Office in 2018, as an independent, expert committee responsible for setting the UK’s Geospatial Strategy and coordinating public sector geospatial activity. The Geospatial Commission’s aim is to unlock the significant economic, social and environmental opportunities offered by location data. The UK’s Geospatial Strategy (2020) sets out how the UK can unlock the full power of location data and take advantage of the significant economic, social and environmental opportunities offered by location data….

Like many other forms of data, the value of geospatial data is not limited to the data creator or data user. Value from using geospatial data can be subdivided into several different categories, based on who the value accrues to:

Direct use value: where value accrues to users of geospatial data. This could include government using geospatial data to better manage public assets like roadways.

Indirect use value: where value is also derived by indirect beneficiaries who interact with direct users. This could include users of the public assets who benefit from better public service provision.

Spillover use value: value that accrues to others who are neither a direct data user nor an indirect beneficiary. This could, for example, include lower levels of emissions due to improved management of the road network by government. The benefits of lower emissions are felt by all of society, even those who do not use the road network.

As the value from geospatial data does not always accrue to the direct user of the data, there is a risk of underinvestment in geospatial technology and services. Our £6 billion estimate of turnover for a subset of geospatial firms in 2018 does not take account of these wider economic benefits that “spill over” across the UK economy, and generate additional value. As such, the value that geospatial data delivers is likely to be significantly higher than we have estimated and is therefore an area for potential future investment….(More)”.

Scaling up Citizen Science


Report for the European Commission: “The rapid pace of technology advancements, the open innovation paradigm, and the ubiquity of high-speed connectivity greatly facilitate individuals’ access to information, increasing their opportunities to achieve greater emancipation and empowerment. This provides new opportunities for widening participation in scientific research and policy, thus opening a myriad of avenues driving a paradigm shift across fields and disciplines, including the strengthening of Citizen Science. Nowadays, the application of Citizen Science principles spans several scientific disciplines, covering different geographical scales. While the interdisciplinary approach taken so far has shown significant results and findings, the current situation depicts a wide range of projects that are heavily context-dependent and where the learning outcomes of pilots are very much situated within the specific areas in which these projects are implemented. There is little evidence on how to foster the spread and scalability in Citizen Science. Furthermore, the Citizen Science community currently lacks a general agreement on what these terms mean and entail, and how they can be approached.

To address these issues, we developed a theoretically grounded framework to unbundle the meaning of scaling and spreading in Citizen Science. In this framework, we defined nine constructs that represent the enablers of these complex phenomena. We then validated, enriched, and instantiated this framework through four qualitative case studies of diverse, successful examples of scaling and spreading in Citizen Science. The framework and the rich experiences allow us to formulate four theoretically and empirically grounded scaling scenarios. We propose the framework and the in-depth case studies as the main contribution of this report. We hope to stimulate future research to further refine our understanding of the important, complex and multifaceted phenomena of scaling and spreading in Citizen Science. The framework also proposes a structured mindset for practitioners who either want to ideate and start a new Citizen Science intervention that is scalable-by-design, or who are interested in assessing the scalability potential of an existing initiative….(More)”.

Reclaiming Free Speech for Democracy and Human Rights in a Digitally Networked World


Paper by Rebecca MacKinnon: “…divided into three sections. The first section discusses the relevance of international human rights standards to U.S. internet platforms and universities. The second section identifies three common challenges to universities and internet platforms, with clear policy implications. The third section recommends approaches to internet policy that can better protect human rights and strengthen democracy. The paper concludes with proposals for how universities can contribute to the creation of a more robust digital information ecosystem that protects free speech along with other human rights, and advances social justice.

1) International human rights standards are an essential complement to the First Amendment. While the First Amendment does not apply to how privately owned and operated digital platforms set and enforce rules governing their users’ speech, international human rights standards set forth a clear framework to which companies and any other type of private organization can and should be held accountable. Scholars of international law and freedom of expression point out that Article 19 of the International Covenant on Civil and Political Rights encompasses not only free speech, but also the right to access information and to formulate opinions without interference. Notably, this aspect of international human rights law is relevant in addressing the harms caused by disinformation campaigns aided by algorithms and targeted profiling. In protecting freedom of expression, private companies and organizations must also protect and respect other human rights, including privacy, non-discrimination, assembly, the right to political participation, and the basic right to security of person.

2) Three core challenges are common to universities and internet platforms. These common challenges must be addressed in order to protect free speech alongside other fundamental human rights including non-discrimination:

Challenge 1: The pretense of neutrality amplifies bias in an unjust world. In an inequitable and unjust world, “neutral” platforms and institutions will perpetuate and even exacerbate inequities and power imbalances unless they understand and adjust for those inequities and imbalances. This fundamental civil rights concept is better understood by the leaders of universities than by those in charge of social media platforms, which have a clear impact on public discourse and civic engagement.

Challenge 2: Rules and enforcement are inadequate without strong leadership and cultural norms. Rules governing speech, and their enforcement, can be ineffective and even counterproductive unless they are accompanied by values-based leadership. Institutional cultures should take into account the context and circumstances of unique situations, individuals, and communities. For rules to have legitimacy, communities that are governed by them must be actively engaged in building a shared culture of responsibility.

Challenge 3: Communities need to be able to shape how and where they enable discourse and conduct learning. Different types of discourse that serve different purposes require differently designed spaces—be they physical or digital. It is important for communities to be able to set their own rules of engagement, and shape their spaces for different types of discourse. Overdependence upon a small number of corporate-controlled platforms does not serve communities well. Online free speech will be better served not only by policies that foster competition and strengthen antitrust law, but also by policies and resources that support the development of nonprofit, open source, and community-driven digital public infrastructure.

3) A clear and consistent policy environment that supports civil rights objectives and is compatible with human rights standards is essential to ensure that the digital public sphere evolves in a way that genuinely protects free speech and advances social justice. Analysis of twenty different consensus declarations, charters, and principles produced by international coalitions of civil society organizations reveals broad consensus with U.S.-based advocates of civil rights-compatible technology policy….(More)”.

Malicious Uses and Abuses of Artificial Intelligence


Report by Europol, the United Nations Interregional Crime and Justice Research Institute (UNICRI) and Trend Micro: “… looking into current and predicted criminal uses of artificial intelligence (AI)… The report provides law enforcers, policy makers and other organizations with information on existing and potential attacks leveraging AI and recommendations on how to mitigate these risks.

“AI promises the world greater efficiency, automation and autonomy. At a time where the public is getting increasingly concerned about the possible misuse of AI, we have to be transparent about the threats, but also look into the potential benefits from AI technology,” said Edvardas Šileris, Head of Europol’s Cybercrime Centre. “This report will help us not only to anticipate possible malicious uses and abuses of AI, but also to prevent and mitigate those threats proactively. This is how we can unlock the potential AI holds and benefit from the positive use of AI systems.”

The report concludes that cybercriminals will leverage AI both as an attack vector and an attack surface. Deepfakes are currently the best-known use of AI as an attack vector. However, the report warns that new screening technology will be needed in the future to mitigate the risk of disinformation campaigns and extortion, as well as threats that target AI data sets.

For example, AI could be used to support:

  • Convincing social engineering attacks at scale
  • Document-scraping malware to make attacks more efficient
  • Evasion of image recognition and voice biometrics
  • Ransomware attacks, through intelligent targeting and evasion
  • Data pollution, by identifying blind spots in detection rules

The three organizations conclude the report with several recommendations….(More)”.

The Next Generation Humanitarian Distributed Platform


Report by Mercy Corps, the Danish Red Cross and hiveonline: “… call for the development of a shared, sector-wide “blockchain for good” to allow the aid sector to better automate and track processes in real-time, and maintain secure records. This would help modernize and coordinate the sector to reach more people as increasing threats such as pandemics, climate change and natural disasters require aid to be disbursed faster, more widely and efficiently.

A cross-sector blockchain platform – a digital database that can be simultaneously used and shared within a large decentralized, publicly accessible network – could support applications ranging from cash and voucher distribution to identity services, natural capital and carbon tracking, and donor engagement.
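The tamper-evidence that makes such a shared database attractive for aid tracking comes from linking each record to the previous one by a cryptographic hash. The following is a minimal, illustrative sketch of that core mechanism only; the record fields and amounts are hypothetical, and this is a toy model, not the platform the report proposes.

```python
# Toy append-only hash chain: each entry stores the hash of the previous
# entry, so any retroactive edit invalidates everything that follows.
import hashlib
import json


def entry_hash(entry: dict) -> str:
    # Canonical JSON (sorted keys) so the same content always hashes the same.
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()


def append_record(chain: list, payload: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    entry = {"prev": prev, "payload": payload}
    entry["hash"] = entry_hash({"prev": prev, "payload": payload})
    chain.append(entry)


def verify(chain: list) -> bool:
    prev = "0" * 64
    for entry in chain:
        if entry["prev"] != prev:
            return False
        if entry["hash"] != entry_hash({"prev": entry["prev"], "payload": entry["payload"]}):
            return False
        prev = entry["hash"]
    return True


chain: list = []
append_record(chain, {"type": "voucher", "recipient": "household-17", "amount": 50})
append_record(chain, {"type": "cash", "recipient": "household-23", "amount": 120})
print(verify(chain))  # True

# A retroactive edit to any record breaks the chain of hashes.
chain[0]["payload"]["amount"] = 500
print(verify(chain))  # False
```

A real distributed platform adds consensus among many parties over who may append, but the auditability argument in the report rests on this same hash-linking property.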

The report authors call for the creation of a committee to develop cross-sector governance and coordinate the implementation of a shared “Humanitarian Distributed Platform.” The authors believe the technology can help organizations fulfill commitments made to transparency, collaboration and efficiency under the Humanitarian Grand Bargain.

The report is compiled from responses of 35 survey participants, representing stakeholders in the humanitarian sector, including NGO project implementers, consultants, blockchain developers, academics, and founders. A further 39 direct interviews took place over the course of the research between July and September 2020….(More)”.

Interoperability as a tool for competition regulation


Paper by Ian Brown: “Interoperability is a technical mechanism for computing systems to work together – even if they are from competing firms. An interoperability requirement for large online platforms has been suggested by the European Commission as one ex ante (up-front rule) mechanism in its proposed Digital Markets Act (DMA), as a way to encourage competition. The policy goal is to increase choice and quality for users, and the ability of competitors to succeed with better services. The application would be to the largest online platforms, such as Facebook, Google, Amazon, smartphone operating systems (e.g. Android/iOS), and their ancillary services, such as payment and app stores.

This report analyses up-front interoperability requirements as a pro-competition policy tool for regulating large online platforms, exploring the economic and social rationales and possible regulatory mechanisms. It is based on a synthesis of recent comprehensive policy reviews of digital competition in major industrialised economies, and related academic literature, focusing on areas of emerging consensus while noting important disagreements. It draws particularly on the Vestager, Furman and Stigler reviews, and the UK Competition and Markets Authority’s study on online platforms and digital advertising. It also draws on interviews with software developers, platform operators, government officials, and civil society experts working in this field….(More)”.

Curating citizen engagement: Food solutions for future generations


EIT Food: “The Curating Citizen Engagement project will revolutionise our way of solving grand societal challenges by creating a platform for massive public involvement and knowledge generation, specifically targeting food-related issues. …Through a university course developed by partners representing different aspects of the food ecosystem (from sensory perception to nutrition to food policy), we will educate the next generation of students to be able to engage and involve the public in tackling food-related societal challenges. The students will learn iterative prototyping skills in order to create museum installations with built-in data collection points, that will engage the public and assist in shaping future food solutions. Thus, citizens are not only provided with knowledge on food-related topics, but are empowered and encouraged to actively use it, leading to more trust in the food sector in general….(More)”.

Using artificial intelligence to make decisions: Addressing the problem of algorithmic bias (2020)


Foreword of a Report by the Australian Human Rights Commission: “Artificial intelligence (AI) promises better, smarter decision making.

Governments are starting to use AI to make decisions in welfare, policing and law enforcement, immigration, and many other areas. Meanwhile, the private sector is already using AI to make decisions about pricing and risk, to determine what sorts of people make the ‘best’ customers… In fact, the use cases for AI are limited only by our imagination.

However, using AI carries with it the risk of algorithmic bias. Unless we fully understand and address this risk, the promise of AI will be hollow.

Algorithmic bias is a kind of error associated with the use of AI in decision making, and often results in unfairness. Algorithmic bias can arise in many ways. Sometimes the problem is with the design of the AI-powered decision-making tool itself. Sometimes the problem lies with the data set that was used to train the AI tool, which could replicate or even make worse existing problems, including societal inequality.

Algorithmic bias can cause real harm. It can lead to a person being unfairly treated, or even suffering unlawful discrimination, on the basis of characteristics such as their race, age, sex or disability.

This project started by simulating a typical decision-making process. In this technical paper, we explore how algorithmic bias can ‘creep in’ to AI systems and, most importantly, how this problem can be addressed.

To ground our discussion, we chose a hypothetical scenario: an electricity retailer uses an AI-powered tool to decide how to offer its products to customers, and on what terms. The general principles and solutions for mitigating the problem, however, will be relevant far beyond this specific situation.
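To illustrate how bias can “creep in” through training data, consider a deliberately simplified sketch. The scenario and numbers below are hypothetical and are not taken from the Commission’s technical paper: a naive tool learns approval thresholds from a historical record in which one group was under-approved, and then reproduces that inequity for new applicants.

```python
# Illustrative toy example: a decision tool trained on biased historical
# data replicates the bias. All data here is fabricated for demonstration.
from collections import defaultdict

# Historical decisions as (group, income, approved). Group "B" was
# historically approved only at higher incomes -- a biased record.
history = [
    ("A", 40, 1), ("A", 35, 1), ("A", 30, 1), ("A", 25, 0),
    ("B", 40, 1), ("B", 35, 0), ("B", 30, 0), ("B", 25, 0),
]


def learn_thresholds(records):
    # "Learn" the lowest income ever approved, per group.
    approved = defaultdict(list)
    for group, income, label in records:
        if label:
            approved[group].append(income)
    return {g: min(v) for g, v in approved.items()}


thresholds = learn_thresholds(history)  # {"A": 30, "B": 40}


def decide(group, income):
    return income >= thresholds[group]


# Two applicants with identical incomes receive different outcomes purely
# because of how their groups were treated in the training data.
print(decide("A", 35))  # True
print(decide("B", 35))  # False
```

No explicit rule in this sketch discriminates by group; the unfairness is inherited entirely from the data set, which is exactly the failure mode the paper describes alongside flaws in tool design itself.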

Because algorithmic bias can result in unlawful activity, there is a legal imperative to address this risk. However, good businesses go further than the bare minimum legal requirements, to ensure they always act ethically and do not jeopardise their good name.

Rigorous design, testing and monitoring can avoid algorithmic bias. This technical paper offers some guidance for companies to ensure that when they use AI, their decisions are fair, accurate and comply with human rights….(More)”.

Responsible Data Re-Use for COVID19


“The Governance Lab (The GovLab) at the NYU Tandon School of Engineering, with support from the Henry Luce Foundation, today released guidance to inform decision-making in the responsible re-use of data — re-purposing data for a use other than that for which it was originally intended — to address COVID-19. The findings, recommendations, and a new Responsible Data Re-Use framework stem from The Data Assembly initiative in New York City. An effort to solicit diverse, actionable public input on data re-use for crisis response in the United States, the Data Assembly brought together New York City-based stakeholders from government, the private sector, civic rights and advocacy organizations, and the general public to deliberate on innovative, though potentially risky, uses of data to inform crisis response in New York City. The findings and guidance from the initiative will inform policymaking and practice regarding data re-use in New York City, as well as free data literacy training offerings.

The Data Assembly’s Responsible Data Re-Use Framework provides clarity on a major element of the ongoing crisis. Though leaders throughout the world have relied on data to reduce uncertainty and make better decisions, expectations around the use and sharing of siloed data assets have remained unclear. This summer, along with the New York Public Library and Brooklyn Public Library, The GovLab co-hosted four months of remote deliberations with New York-based civil rights organizations, key data holders, and policymakers. Today’s release is a product of these discussions, to show how New Yorkers and their leaders think about the opportunities and risks involved in the data-driven response to COVID-19….(More)”.

See: The Data Assembly Synthesis Report by Andrew Young, Stefaan G. Verhulst, Nadiya Safonova, and Andrew J. Zahuranec

Don’t Fear the Robots, and Other Lessons From a Study of the Digital Economy


Steve Lohr at the New York Times: “L. Rafael Reif, the president of Massachusetts Institute of Technology, delivered an intellectual call to arms to the university’s faculty in November 2017: Help generate insights into how advancing technology has changed and will change the work force, and what policies would create opportunity for more Americans in the digital economy.

That issue, he wrote, is the “defining challenge of our time.”

Three years later, the task force assembled to address it is publishing its wide-ranging conclusions. The 92-page report, “The Work of the Future: Building Better Jobs in an Age of Intelligent Machines,” was released on Tuesday….

Here are four of the key findings in the report:

Most American workers have fared poorly.

It’s well known that those on the top rungs of the job ladder have prospered for decades while wages for average American workers have stagnated. But the M.I.T. analysis goes further. It found, for example, that real wages for men without four-year college degrees have declined 10 to 20 percent since their peak in 1980….

Robots and A.I. are not about to deliver a jobless future.

…The M.I.T. researchers concluded that the change would be more evolutionary than revolutionary. In fact, they wrote, “we anticipate that in the next two decades, industrialized countries will have more job openings than workers to fill them.”…

Worker training in America needs to match the market.

“The key ingredient for success is public-private partnerships,” said Annette Parker, president of South Central College, a community college in Minnesota, and a member of the advisory board to the M.I.T. project.

The schools, nonprofits and corporate-sponsored programs that have succeeded in lifting people into middle-class jobs all echo her point: the need to link skills training to business demand….

Workers need more power, voice and representation.

The report calls for raising the minimum wage, broadening unemployment insurance and modifying labor laws to enable collective bargaining in occupations like domestic and home-care workers and freelance workers. Such representation, the report notes, could come from traditional unions or worker advocacy groups like the National Domestic Workers Alliance, Jobs With Justice and the Freelancers Union….(More)”