Articulating Value from Data


Report by the World Economic Forum: “The distinct characteristics and dynamics of data – contextual, relational and cumulative – call for new approaches to articulating its value. Businesses should value data based on cases that go beyond the transactional monetization of data and take into account the broader context, future opportunities to collaborate and innovate, and value created for its ecosystem stakeholders. Doing so will encourage companies to think about the future value data can help generate, beyond the existing data lakes they sit on, and open them up to collaboration opportunities….(More)”.

UNESCO member states adopt the first ever global agreement on the Ethics of Artificial Intelligence


Press Release: “In 2018, Audrey Azoulay, Director-General of UNESCO, launched an ambitious project: to give the world an ethical framework for the use of artificial intelligence. Three years later, thanks to the mobilization of hundreds of experts from around the world and intense international negotiations, UNESCO’s 193 member states have just officially adopted this ethical framework….

The Recommendation aims to realize the advantages AI brings to society and reduce the risks it entails. It ensures that digital transformations promote human rights and contribute to the achievement of the Sustainable Development Goals, addressing issues around transparency, accountability and privacy, with action-oriented policy chapters on data governance, education, culture, labour, healthcare and the economy. 

  1. Protecting data 

The Recommendation calls for action beyond what tech firms and governments are doing to guarantee individuals more protection by ensuring transparency, agency and control over their personal data. It states that all individuals should be able to access, and even erase, records of their personal data. It also includes actions to improve data protection and an individual’s knowledge of, and right to control, their own data, and it increases the ability of regulatory bodies around the world to enforce this.

  2. Banning social scoring and mass surveillance

The Recommendation explicitly bans the use of AI systems for social scoring and mass surveillance. These types of technology are highly invasive: they infringe on human rights and fundamental freedoms, and they are used in a broad way. The Recommendation stresses that when developing regulatory frameworks, Member States should consider that ultimate responsibility and accountability must always lie with humans and that AI technologies should not be given legal personality themselves. 

  3. Helping to monitor and evaluate

The Recommendation also sets the ground for tools that will assist in its implementation. The Ethical Impact Assessment is intended to help countries and companies developing and deploying AI systems to assess the impact of those systems on individuals, on society and on the environment. The Readiness Assessment Methodology helps Member States to assess how ready they are in terms of legal and technical infrastructure. This tool will assist in enhancing the institutional capacity of countries and recommend appropriate measures to be taken in order to ensure that ethics are implemented in practice. In addition, the Recommendation encourages Member States to consider adding the role of an independent AI Ethics Officer or some other mechanism to oversee auditing and continuous monitoring efforts. 

  4. Protecting the environment

The Recommendation emphasises that AI actors should favour data-, energy- and resource-efficient AI methods that will help ensure that AI becomes a more prominent tool in the fight against climate change and in tackling environmental issues. The Recommendation asks governments to assess the direct and indirect environmental impact throughout the AI system life cycle. This includes its carbon footprint, energy consumption and the environmental impact of raw material extraction for supporting the manufacturing of AI technologies. It also aims at reducing the environmental impact of AI systems and data infrastructures. It incentivizes governments to invest in green tech, and the Recommendation instructs that AI systems should not be used if they have a disproportionate negative impact on the environment….(More)”.

NativeDATA


About: “NativeDATA is a free online resource that offers practical guidance for Tribes and Native-serving organizations. For this resource, Native-serving organizations include Tribal and urban Indian organizations and Tribal Epidemiology Centers (TECs). 

Tribal and urban Indian communities need correct health information (data), so that community leaders can:

  • Watch disease trends
  • Respond to health threats
  • Create useful health policies…

Throughout, this resource offers practical guidance for obtaining and sharing health data in ways that honor Tribal sovereignty, data sovereignty, and public health authority. Public health authority is the authority of a sovereign government to protect the health, safety, and welfare of its citizens. As sovereign nations, Tribes have the power to define how they will use this authority to protect and promote the health of their communities. The federal government recognizes Tribes and Tribal Epidemiology Centers (TECs) as public health authorities under federal law.

Inside you will find expert advice to help you…(More)”.

UK government publishes pioneering standard for algorithmic transparency


UK Government Press Release: “The UK government has today launched one of the world’s first national standards for algorithmic transparency.

This move delivers on commitments made in the National AI Strategy and National Data Strategy, and strengthens the UK’s position as a global leader in trustworthy AI.

In its landmark review into bias in algorithmic decision-making, the Centre for Data Ethics and Innovation (CDEI) recommended that the UK government should place a mandatory transparency obligation on public sector organisations using algorithms to support significant decisions affecting individuals….

The Cabinet Office’s Central Digital and Data Office (CDDO) has worked closely with the CDEI to design the standard. It also consulted experts from across civil society and academia, as well as the public. The standard is organised into two tiers. The first includes a short description of the algorithmic tool, including how and why it is being used, while the second includes more detailed information about how the tool works, the dataset/s that have been used to train the model and the level of human oversight. The standard will help teams be meaningfully transparent about the way in which algorithmic tools are being used to support decisions, especially in cases where they might have a legal or economic impact on individuals.

The standard will be piloted by several government departments and public sector bodies in the coming months. Following the piloting phase, CDDO will review the standard based on feedback gathered and seek formal endorsement from the Data Standards Authority in 2022…(More)”.
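To make the two-tier structure described above more concrete, here is a minimal sketch of how a transparency record might be organised. This is an illustrative assumption only: the field names and the example tool are hypothetical, not taken from the published standard.

```python
# Hypothetical two-tier algorithmic transparency record.
# All field names and values are illustrative assumptions,
# not the standard's actual schema.
tier_1 = {
    # Short, public-facing description of the tool and why it is used.
    "tool_name": "Benefit Claim Triage Assistant",  # fictional example
    "purpose": "Prioritise incoming claims for manual review",
    "how_used": "Ranks claims by urgency; caseworkers make the final decision",
}
tier_2 = {
    # More detailed information: how the tool works, training data, oversight.
    "model_type": "Gradient-boosted decision trees",
    "training_data": ["Historical claims 2015-2020 (anonymised)"],
    "human_oversight": "Every automated ranking is reviewed by a caseworker",
    "impact_on_individuals": "May affect processing time; no automated refusals",
}
transparency_record = {"tier_1": tier_1, "tier_2": tier_2}
```

The split mirrors the standard's intent: a brief first tier for general readers, and a second tier detailed enough for scrutiny in cases with legal or economic impact.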

Data Powered Positive Deviance Handbook


Handbook by GIZ and UNDP: “Positive Deviance (PD) is based on the observation that in every community or organization, there are a few individuals who achieve significantly better outcomes than their peers, despite having similar challenges and resources. These individuals are referred to as positive deviants, and adopting their solutions is what is referred to as the PD approach.
The method described in this Handbook follows the same logic as the PD approach but uses pre-existing, non-traditional data sources instead of, or in conjunction with, traditional data sources. Non-traditional data in this context broadly refers to data that is digitally captured (e.g. mobile phone records and financial data), mediated (e.g. social media and online data), or observed (e.g. satellite imagery). The integration of such data to complement traditional data sources generally used in PD is what we refer to as Data Powered Positive Deviance (DPPD)…(More)”.
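The core PD logic, comparing outcomes only among peers who face similar conditions, can be sketched in a few lines. This is an illustrative toy, not the Handbook's method: the field names, the peer-grouping key, and the 90th-percentile cutoff are all assumptions.

```python
from statistics import quantiles

def positive_deviants(records, outcome_key, peer_key, percentile=90):
    """Flag records whose outcome exceeds the given percentile of their peer group."""
    # Group records by peer context (e.g. same district, similar resources).
    groups = {}
    for record in records:
        groups.setdefault(record[peer_key], []).append(record)

    deviants = []
    for peers in groups.values():
        outcomes = [r[outcome_key] for r in peers]
        if len(outcomes) < 2:
            continue  # no peer comparison possible
        # Threshold: the chosen percentile of outcomes within this peer group.
        threshold = quantiles(outcomes, n=100)[percentile - 1]
        deviants.extend(r for r in peers if r[outcome_key] > threshold)
    return deviants
```

In the DPPD setting, `records` would be built from non-traditional sources such as satellite imagery or mobile phone data rather than from survey fieldwork alone.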

Creating and governing social value from data


Paper by Diane Coyle and Stephanie Diepeveen: “Data is increasingly recognised as an important economic resource for innovation and growth, but its innate characteristics mean market-based valuations inadequately account for the impact of its use on social welfare. This paper extends the literature on the value of data by providing a framework that takes into account its non-rival nature and integrates its inherent positive and negative externalities. Positive externalities consist of the scope for combining different data sets or enabling innovative uses of existing data, while negative externalities include potential privacy loss. We propose a framework integrating these and explore the policy trade-offs shaping net social welfare through a case study of geospatial data and the transport sector in the UK, where insufficient recognition of the trade-offs has contributed to suboptimal policy outcomes. We conclude by proposing methods for empirical approaches to social data valuation, essential evidence for decisions regarding the policy trade-offs. This article therefore lays important groundwork for novel approaches to the measurement of the net social welfare contribution of data, hence illuminating opportunities for greater and more equitable creation of value from data in our societies….(More)”

Surveillance, Companionship, and Entertainment: The Ancient History of Intelligent Machines


Essay by E.R. Truitt: “Robots have histories that extend far back into the past. Artificial servants, autonomous killing machines, surveillance systems, and sex robots all find expression from the human imagination in works and contexts beyond Ovid (43 BCE to 17 CE) and the story of Pygmalion in cultures across Eurasia and North Africa. This long history of our human-machine relationships also reminds us that our aspirations, fears, and fantasies about emergent technologies are not new, even as the circumstances in which they appear differ widely. Situating these objects, and the desires that create them, within deeper and broader contexts of time and space reveals continuities and divergences that, in turn, provide opportunities to critique and question contemporary ideas and desires about robots and artificial intelligence (AI).

As early as 3,000 years ago we encounter interest in intelligent machines and AI that perform different servile functions. In the works of Homer (c. eighth century BCE) we find Hephaestus, the Greek god of smithing and craft, using automatic bellows to execute simple, repetitive labor. Golden handmaidens, endowed with characteristics of movement, perception, judgment, and speech, assist him in his work. In his “Odyssey,” Homer recounts how the ships of the Phaeacians perfectly obey their human captains, detecting and avoiding obstacles or threats, and moving “at the speed of thought.” Several centuries later, around 400 BCE, we meet Talos, the giant bronze sentry, created by Hephaestus, that patrolled the shores of Crete. These examples from the ancient world all have in common their subservient role; they exist to serve the desires of other, more powerful beings — either gods or humans — and even if they have sentience, they lack autonomy. Thousands of years before Karel Čapek introduced the term “robot” to refer to artificial slaves, we find them in Homer….(More)”.

Conceptualizing AI literacy: An exploratory review


Paper by Davy Tsz Kit Ng, Jac Ka Lok Leung, Samuel K. W. Chu, and Maggie Qiao Shen: “Artificial Intelligence (AI) has spread across industries (e.g., business, science, art, education) to enhance user experience, improve work efficiency, and create many future job opportunities. However, public understanding of AI technologies and how to define AI literacy is under-explored. This vision poses upcoming challenges for our next generation to learn about AI. On this note, an exploratory review was conducted to conceptualize the newly emerging concept “AI literacy”, in search of a sound theoretical foundation to define, teach and evaluate AI literacy. Grounded in a review of 30 existing peer-reviewed articles, this review proposed four aspects (i.e., know and understand, use, evaluate, and ethical issues) for fostering AI literacy based on the adaptation of classic literacies. This study sheds light on the consolidated definition, teaching, and ethical concerns on AI literacy, establishing the groundwork for future research such as competency development and assessment criteria on AI literacy….(More)”.

‘Anyway, the dashboard is dead’: On trying to build urban informatics


Paper by Jathan Sadowski: “How do the idealised promises and purposes of urban informatics compare to the material politics and practices of their implementation? To answer this question, I ethnographically trace the development of two data dashboards by strategic planners in an Australian city over the course of 2 years. By studying this techno-political process from its origins onward, I uncovered an interesting story of obdurate institutions, bureaucratic momentum, unexpected troubles, and, ultimately, frustration and failure. These kinds of stories, which often go untold in the annals of innovation, contrast starkly with more common framings of technological triumph and transformation. They also, I argue, reveal much more about how techno-political systems are actualised in the world…(More)”.

The Birth of Digital Human Rights


Book by Rebekah Dowd on “Digitized Data Governance as a Human Rights Issue in the EU”: “…This book considers contested responsibilities between the public and private sectors over the use of online data, detailing exactly how digital human rights evolved in specific European states and gradually became a part of the European Union framework of legal protections. The author uniquely examines why and how European lawmakers linked digital data protection to fundamental human rights, something heretofore not explained in other works on general data governance and data privacy. In particular, this work examines the utilization of national and European Union institutional arrangements as a location for activism by legal and academic consultants and by first-mover states who legislated digital human rights beginning in the 1970s. By tracing the way that EU Member States and non-state actors utilized the structure of EU bodies to create the new norm of digital human rights, readers will learn about the process of expanding the scope of human rights protections within multiple dimensions of European political space. The project will be informative to scholars, students, and laypersons alike, as it examines a new and evolving area of technology governance – the human rights of digital data use by the public and private sectors….(More)”.