European Commission Press Release: “…the Big Data Value Association and euRobotics agreed to cooperate more closely in order to boost the advancement of artificial intelligence (AI) in Europe. Both associations want to strengthen their collaboration on AI in the future, specifically by:
Working together to boost European AI, building on existing industrial and research communities and on the results of the Big Data Value PPP and SPARC PPP. This will contribute to the European Commission’s ambitious approach to AI, backed by a drastic increase in investment, reaching €20 billion in total public and private funding in Europe by 2020.
Enabling joint pilots, for example, to accelerate the use and integration of big data, robotics and AI technologies in different sectors and society as a whole
Exchanging best practices and approaches from existing and future projects of the Big Data Value PPP and the SPARC PPP
This Memorandum of Understanding between the PPPs follows the European Commission’s approach to AI presented in April 2018 and the Declaration of Cooperation on Artificial Intelligence signed by all 28 Member States and Norway. This Friday 7 December the Commission will present its EU coordinated plan….(More)”.
Paper by Bjorn Lundqvist: “In the Internet of Things era, devices will monitor and collect data, whilst device-producing firms will store, distribute, analyse and re-use data on a grand scale. A great deal of data analytics will be used to enable firms to understand and make use of the collected data. The infrastructure around the collected data is controlled, and access to the data flow is thus restricted on technical, but also on legal, grounds. Legally, the data are obscured behind a thicket of property rights, including intellectual property rights. There is therefore no general “data commons” for everyone to enjoy.
If firms would like to combine data, they need to give each other access, either by sharing, trading, or pooling the data. On the one hand, industry-wide pooling of data could increase the efficiency of certain services and contribute to the innovation of others; think, for example, of self-driving cars or personalized medicine. On the other hand, firms combining business data may use the data not to advance their services or products, but to collude, to exclude competitors, or to abuse their market position. Indeed, by combining their data in a pool, they can gain market power and, hence, the ability to violate competition law. Moreover, we also see firms hoarding data from various sources, creating de facto data pools. This article discusses what implications firms’ combining of data in data pools might have for competition, and when competition law should be applicable. It develops the idea that data pools harbour great opportunities, whilst acknowledging that there are still risks to take into consideration, and to regulate….(More)”.
Blog by Derval Usher and Darren Hanniffy: “…We aim to equip decision makers with data tools so that they have access to the analysis on the fly. But to help this scale we need progress in three areas:
1. The framework to support Shared Value partnerships.
2. Shared understanding of The Proposition and the benefits for all parties.
3. Access to finance and a funding strategy, designing-in innovation.
1. Any Public-Private Partnership should be aligned to achieve impact centered on the SDGs through a Shared Value / Inclusive Business approach. Mobile network operators are consumed with the challenge of maintaining or upgrading their infrastructure, driving device sales and sustaining their agent networks to reach the last mile. Measuring impact against the SDGs has not been a priority. Mobile network operators tend not to seek out partnerships with traditional development donors or development implementers. But there is a growing realisation of the potential and the need to partner. It’s important to move from a service-level, transactional relationship to a strategic partnership approach.
Private sector partners have been fundamental to the success of UN Global Pulse as these companies are often the custodians of the big data sets from which we develop valuable development and humanitarian insights. Although in previous years our private sector partners were framed primarily as data philanthropists, we are beginning to see a shift in the relationship to one of shared value. Our work generates public value and also insights that can enhance business operations. This shared value model is attracting more private enterprises to engage and to explore their own data, and more broadly to investigate the value of their networks and data as part of the data innovation ecosystem, which the Global Pulse lab network will build on as we move forward.
2. Partners need to be more propositional and less charitable. They need to recognise the fact that earning profit may help ensure the sustainability of digital platforms and services that offer developmental impact. Through partnership we can attract innovative finance, deliver mobile-for-development programmes, measure impact and create affordable commercial solutions to development challenges that become sustainable by design. Pulse Lab Jakarta and Digicel have been flexible with one another, which is important, as this partnership has not always been a priority for either side. But we believe in unlocking the power of mobile data for development and therefore continue to make progress.
3. Development and commercial strategies should be more closely aligned to create an enabling environment; currently they are not. The private sector needs to become a strategic partner to development, with multi-annual development funds aligned to commercial strategy. Mobile network operators continue to invest in their networks, particularly in developing countries, and the digital platform is coming into being in the markets where Digicel operates. But the platform is new, and experience with it is limited within governments, the development community and indeed even within mobile network operators.
We need to see donors actively engage during the development of multi-annual funding facilities….(More)”.
James Guszcza, Iyad Rahwan, Will Bible, Manuel Cebrian and Vic Katyal at Harvard Business Review: “Algorithmic decision-making and artificial intelligence (AI) hold enormous potential and are likely to be economic blockbusters, but we worry that the hype has led many people to overlook the serious problems of introducing algorithms into business and society. Indeed, we see many succumbing to what Microsoft’s Kate Crawford calls “data fundamentalism” — the notion that massive datasets are repositories that yield reliable and objective truths, if only we can extract them using machine learning tools. A more nuanced view is needed. It is by now abundantly clear that, left unchecked, AI algorithms embedded in digital and social technologies can encode societal biases, accelerate the spread of rumors and disinformation, amplify echo chambers of public opinion, hijack our attention, and even impair our mental wellbeing.
Ensuring that societal values are reflected in algorithms and AI technologies will require no less creativity, hard work, and innovation than developing the AI technologies themselves. We have a proposal for a good place to start: auditing. Companies have long been required to issue audited financial statements for the benefit of financial markets and other stakeholders. That’s because — like algorithms — companies’ internal operations appear as “black boxes” to those on the outside. This gives managers an informational advantage over the investing public, one that could be abused by unethical actors. Requiring managers to report periodically on their operations provides a check on that advantage. To bolster the trustworthiness of these reports, independent auditors are hired to provide reasonable assurance that the reports coming from the “black box” are free of material misstatement. Should we not subject societally impactful “black box” algorithms to comparable scrutiny?
Indeed, some forward-thinking regulators are beginning to explore this possibility. For example, the EU’s General Data Protection Regulation (GDPR) requires that organizations be able to explain their algorithmic decisions. The city of New York recently assembled a task force to study possible biases in algorithmic decision systems. It is reasonable to anticipate that emerging regulations might be met with market pull for services involving algorithmic accountability.
So what might an algorithm auditing discipline look like? First, it should adopt a holistic perspective. Computer science and machine learning methods will be necessary, but likely not sufficient foundations for an algorithm auditing discipline. Strategic thinking, contextually informed professional judgment, communication, and the scientific method are also required.
As a result, algorithm auditing must be interdisciplinary in order to succeed….(More)”.
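To make the auditing idea concrete, here is a minimal, hypothetical sketch of one narrow check an auditor might run over a model’s decisions: a disparate impact ratio against a protected attribute. The metric, the 0.8 “four-fifths” threshold, and all names in the example are illustrative assumptions, not part of the authors’ proposal.

```python
import numpy as np

def disparate_impact(y_pred, group):
    """Ratio of positive-decision rates for group 1 vs. group 0.

    A ratio well below 1.0 flags potential adverse impact; the
    'four-fifths rule' of thumb uses 0.8 as a review threshold.
    """
    return y_pred[group == 1].mean() / y_pred[group == 0].mean()

# Simulated decisions from a hypothetical model that approves group-0
# applicants at a 50% rate and group-1 applicants at a 35% rate.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, 10_000)
y_pred = (rng.random(10_000) < np.where(group == 1, 0.35, 0.50)).astype(int)

print(f"Disparate impact ratio: {disparate_impact(y_pred, group):.2f}")  # ~0.70
```

A real audit would of course go far beyond any single metric, which is precisely the authors’ point about holism and professional judgment.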
Blog by Stefaan G. Verhulst and Andrew J. Zahuranec: “For years, public-private partnerships (PPPs) have promised to help governments do more for less. Yet, the discussion and experimentation surrounding PPPs often focus on outdated models and narratives, and the field of experimentation has not fully embraced the opportunities provided by an increasingly networked and data-rich private sector.
Private-sector actors (including businesses and NGOs) have expertise and assets that, if brought to bear in collaboration with the public sector, could spur progress in addressing public problems or providing public services. Challenges to date have largely involved the identification of effective and legitimate means for unlocking the public value of private-sector expertise and assets. Those interested in creating public value through PPPs are faced with a number of questions, including:
How do we broaden and deepen our understanding of PPPs in the 21st Century?
How can we innovate and improve the ways that PPPs tap into private-sector assets and expertise for the public good?
How do we connect actors in the PPP space with open governance developments and practices, especially given that PPPs have not played a major role in the governance innovation space to date?
The PPP Knowledge Lab defines a PPP as a “long-term contract between a private party and a government entity, for providing a public asset or service, in which the private party bears significant risk and management responsibility and remuneration is linked to performance.”…
To maximize the value of PPPs, we don’t just need new tools or experiments but new models for using assets and expertise in different sectors. We need to bring that capacity to public problems.
At the latest convening of the MacArthur Foundation Research Network on Opening Governance, Network members and experts from across the field tried to chart this new course by exploring questions about the future of PPPs.
The group explored the new research and thinking that enables many new types of collaboration beyond the typical “contract” based approaches. Through their discussions, Network members identified four shifts representing ways that cross-sector collaboration could evolve in the future:
From Formal to Informal Trust Mechanisms;
From Selection to Iterative and Inclusive Curation;
Paper by Michael P. Cañares: “The record of countries in the region in terms of transparency and accountability is dismal. In the latest Corruption Perceptions Index released by Transparency International, more than half of the countries in the region scored below 50, and at least a quarter of these are considered to have systemic corruption problems. Nevertheless, several countries have made significant attempts to install transparency measures and project a commitment towards greater openness. At least a dozen countries have right-to-information laws that give citizens fundamental access to government information, and several have installed open data policies and are implementing e-government programs or practices. But citizens’ access to the data and information needed to hold governments to account, demand better services, and strengthen citizen participation in governance remains elusive.
The Open Government Partnership (OGP) is a multilateral initiative that aims to secure concrete commitments from governments to promote transparency, empower citizens, fight corruption, and harness new technologies to strengthen governance. OGP’s vision is that more governments become more transparent, more accountable, and more responsive to their own citizens, with the goal of improving the quality of governance, as well as the quality of services that citizens receive. Since its inception in 2011, OGP has grown to bring together 75 countries and 15 subnational governments, with over 2,500 commitments to make their governments more open and accountable. In Asia, only the governments of Indonesia, the Philippines, and South Korea are participating countries, along with two subnational pilots, Seoul and Bojonegoro. These governments have launched initiatives to involve citizens in planning and budgeting processes, proactively disclose budget and other public financial information, and engage citizens in monitoring public service delivery. But these countries remain the exception rather than the norm….(More)”.
Report from the Congressional Research Service: “Quantum information science (QIS) combines elements of mathematics, computer science, engineering, and physical sciences, and has the potential to provide capabilities far beyond what is possible with the most advanced technologies available today. Although much of the press coverage of QIS has been devoted to quantum computing, there is more to QIS. Many experts divide QIS technologies into three application areas:
Sensing and metrology,
Communications, and
Computing and simulation.
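As a brief, standard-textbook illustration of the quantum resource underlying all three application areas (our addition, not drawn from the CRS report): a qubit, unlike a classical bit, can occupy a superposition of its two basis states.

```latex
% A single qubit state: complex amplitudes over the basis states |0> and |1>,
% normalized so that the measurement probabilities sum to one.
\[
  \lvert \psi \rangle \;=\; \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1 .
\]
% Measurement returns 0 with probability |alpha|^2 and 1 with probability
% |beta|^2; a register of n qubits spans a 2^n-dimensional state space,
% the property that the computing and simulation applications exploit.
```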
… Today, QIS is a component of the National Strategic Computing Initiative (Presidential Executive Order 13702), which was established in 2015. Most recently, in September 2018, the National Science and Technology Council issued the National Strategic Overview for Quantum Information Science. The policy opportunities identified in this strategic overview include:
choosing a science-first approach to QIS,
creating a “quantum-smart” workforce,
deepening engagement with the quantum industry,
providing critical infrastructure,
maintaining national security and economic growth, and
advancing international cooperation.
This report provides an overview of QIS technologies: sensing and metrology, communications, and computing and simulation. It also includes examples of existing and potential future applications; brief summaries of funding and selected R&D initiatives in the United States and elsewhere around the world; a description of U.S. congressional activity; and a discussion of related policy considerations….(More)”.
Paper by Morgan E. Currie and Joan M. Donovan: “The purpose of this paper is to expand on emergent data activism literature to draw distinctions between different types of data management practices undertaken by groups of data activists.
The authors offer three case studies that illuminate the data management strategies of these groups. Each group discussed in the case studies is devoted to representing a contentious political issue through data, but their data management practices differ in meaningful ways. The project Making Sense produces its own data on pollution in Kosovo. Fatal Encounters collects “missing data” on police homicides in the USA. The Environmental Data Governance Initiative hopes to keep vulnerable US data on climate change and environmental injustices in the public domain.
In analysing the three case studies, the authors surface how temporal dimensions, geographic scale and sociotechnical politics influence the groups’ differing data management strategies….(More)”.
Book by Yanni Alexander Loukissas: “In our data-driven society, it is too easy to assume the transparency of data. Instead, Yanni Loukissas argues in All Data Are Local, we should approach data sets with an awareness that data are created by humans and their dutiful machines, at a time, in a place, with the instruments at hand, for audiences that are conditioned to receive them. All data are local. The term data set implies something discrete, complete, and portable, but it is none of those things. Examining a series of data sources important for understanding the state of public life in the United States—Harvard’s Arnold Arboretum, the Digital Public Library of America, UCLA’s Television News Archive, and the real estate marketplace Zillow—Loukissas shows us how to analyze data settings rather than data sets.
Loukissas sets out six principles: all data are local; data have complex attachments to place; data are collected from heterogeneous sources; data and algorithms are inextricably entangled; interfaces recontextualize data; and data are indexes to local knowledge. He then provides a set of practical guidelines to follow. To make his argument, Loukissas employs a combination of qualitative research on data cultures and exploratory data visualizations. Rebutting the “myth of digital universalism,” Loukissas reminds us of the meaning-making power of the local….(More)”.
There has been mounting pressure on policymakers to adopt and expand the concept of evidence-based policy making (EBP).
In 2017, the U.S. Commission on Evidence-Based Policymaking issued a report calling for a future in which “rigorous evidence is created efficiently, as a routine part of government operations, and used to construct effective public policy.” The report asserts that modern technology and statistical methods, “combined with transparency and a strong legal framework, create the opportunity to use data for evidence building in ways that were not possible in the past.”
Similarly, the European Commission’s 2015 report on Strengthening Evidence Based Policy Making through Scientific Advice states that policymaking “requires robust evidence, impact assessment and adequate monitoring and evaluation,” emphasizing the notion that “sound scientific evidence is a key element of the policy-making process, and therefore science advice should be embedded at all levels of the European policymaking process.” That same year, the Commission’s Data4Policy program launched a call for contributions to support its research:
“If policy-making is ‘whatever government chooses to do or not to do’ (Th. Dye), then how do governments actually decide? Evidence-based policy-making is not a new answer to this question, but it is constantly challenging both policy-makers and scientists to sharpen their thinking, their tools and their responsiveness.”
Yet, while the importance and value of EBP are well established, the question of how to establish evidence is often answered by referring to randomized controlled trials (RCTs), cohort studies, or case reports. According to Caterina Marchionni and Samuli Reijula, these answers overlook the important concept of mechanistic evidence.
Their paper takes a deeper dive into the differences between statistical and mechanistic evidence:
“It has recently been argued that successful evidence-based policy should rely on two kinds of evidence: statistical and mechanistic. The former is held to be evidence that a policy brings about the desired outcome, and the latter concerns how it does so.”
The paper further argues that in order to make effective decisions, policymakers must take both statistical and mechanistic evidence into account:
“… whereas statistical studies provide evidence that the policy variable, X, makes a difference to the policy outcome, Y, mechanistic evidence gives information about either the existence or the nature of a causal mechanism connecting the two; in other words, about the entities and activities mediating the X–Y relationship. Both types of evidence, it is argued, are required to establish causal claims, to design and interpret statistical trials, and to extrapolate experimental findings.”
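To make the distinction concrete, here is a toy simulation (our illustration, not from the paper): a randomized trial identifies that a policy X shifts an outcome Y, while only measurement of a mediating variable M, a stand-in for mechanistic evidence, speaks to how it does so. All variable names and effect sizes are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Invented data-generating process: policy X raises a mediator M
# (say, attendance), and M in turn raises the outcome Y (say, test scores).
x = rng.integers(0, 2, n)                # randomized assignment, as in an RCT
m = 2.0 * x + rng.normal(0.0, 1.0, n)    # mechanism, step 1: X -> M
y = 1.5 * m + rng.normal(0.0, 1.0, n)    # mechanism, step 2: M -> Y

# Statistical evidence: the trial shows THAT X makes a difference to Y ...
ate = y[x == 1].mean() - y[x == 0].mean()
print(f"Estimated effect of X on Y: {ate:.2f}")   # ~3.0 (= 2.0 * 1.5)

# ... but that estimate alone is compatible with many mechanisms. Measuring
# the mediator M supplies evidence about HOW X acts on Y, which is what
# licenses extrapolation to populations where M may behave differently.
print(f"Estimated effect of X on M: {m[x == 1].mean() - m[x == 0].mean():.2f}")  # ~2.0
```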
Ultimately, Marchionni and Reijula take a closer look at why introducing research methods that go beyond RCTs is crucial for evidence-based policymaking:
“The evidence-based policy (EBP) movement urges policymakers to select policies on the basis of the best available evidence that they work. EBP utilizes evidence-ranking schemes to evaluate the quality of evidence in support of a given policy, which typically prioritize meta-analyses and randomized controlled trials (henceforth RCTs) over other evidence-generating methods.”
They go on to explain that mechanistic evidence has been placed “at the bottom of the evidence hierarchies,” while RCTs have been considered the “gold standard.”
However, the paper argues, mechanistic evidence is in fact as important as statistical evidence:
“… evidence-based policy nearly always involves predictions about the effectiveness of an intervention in populations other than those in which it has been tested. Such extrapolative inferences, it is argued, cannot be based exclusively on the statistical evidence produced by methods higher up in the hierarchies.”
Sources and Further Readings:
Clarke, Brendan, Donald Gillies, Phyllis Illari, Federica Russo, and Jon Williamson. “Mechanisms and the Evidence Hierarchy.” Topoi 33 (2014): 339–360.
“Evidence-Based Policymaking: What is it? How does it work? What relevance for developing countries?” Overseas Development Institute, 2015.