
JRC Digital Economy Working Paper by Bertin Martens: “Digitization triggered a steep drop in the cost of information. The resulting data glut created a bottleneck because human cognitive capacity is unable to cope with large amounts of information. Artificial intelligence and machine learning (AI/ML) triggered a similar drop in the cost of machine-based decision-making and helps in overcoming this bottleneck. Substantial change in the relative price of resources puts pressure on ownership and access rights to these resources. This explains pressure on access rights to data. ML thrives on access to big and varied datasets. We discuss the implications of access regimes for the development of AI in its current form of ML. The economic characteristics of data (non-rivalry, economies of scale and scope) favour data aggregation in big datasets. Non-rivalry implies the need for exclusive rights in order to incentivise data production when it is costly. The balance between access and exclusion is at the centre of the debate on data regimes. We explore the economic implications of several modalities for access to data, ranging from exclusive monopolistic control to monopolistic competition and free access. Regulatory intervention may push the market beyond voluntary exchanges, either towards more openness or reduced access. This may generate private costs for firms and individuals. Society can choose to do so if the social benefits of this intervention outweigh the private costs.

We briefly discuss the main EU legal instruments that are relevant for data access and ownership, including the General Data Protection Regulation (GDPR) that defines the rights of data subjects with respect to their personal data and the Database Directive (DBD) that grants ownership rights to database producers. These two instruments leave a wide legal no-man’s land where data access is ruled by bilateral contracts and Technical Protection Measures that give exclusive control to de facto data holders, and by market forces that drive access, trade and pricing of data. The absence of exclusive rights might facilitate data sharing and access or it may result in a segmented data landscape where data aggregation for ML purposes is hard to achieve. It is unclear if incompletely specified ownership and access rights maximize the welfare of society and facilitate the development of AI/ML…(More)”

The Importance of Data Access Regimes for Artificial Intelligence and Machine Learning

Book by Shaun Riordan: “The world has been sleep-walking into cyber chaos. The spread of misinformation via social media and the theft of data and intellectual property, along with regular cyberattacks, threaten the fabric of modern societies. All the while, the Internet of Things increases the vulnerability of computer systems, including those controlling critical infrastructure. What can be done to tackle these problems? Does diplomacy offer ways of managing security and containing conflict online?

In this provocative book, Shaun Riordan shows how traditional diplomatic skills and mindsets can be combined with new technologies to bring order and enhance international cooperation. He explains what cyberdiplomacy means for diplomats, foreign services and corporations and explores how it can be applied to issues such as internet governance, cybersecurity, cybercrime and information warfare. Cyberspace, he argues, is too important to leave to technicians. Using the vital tools offered by cyberdiplomacy, we can reduce the escalation and proliferation of cyberconflicts by proactively promoting negotiation and collaboration online….(More)”.

Cyberdiplomacy: Managing Security and Governance Online

Paper by Christine Rinik: “In the recent report, Growing the Artificial Intelligence Industry in the UK, Hall and Pesenti suggest the use of a ‘data trust’ to facilitate data sharing. Whilst government and corporations are focusing on their need to facilitate data sharing, the perspective of many individuals is that too much data is being shared. The issue is not only about data, but about power. The individual does not often have a voice when issues relating to data sharing are tackled. Regulators can cite the ‘public interest’ when data governance is discussed, but the individual’s interests may diverge from those of the public.

This paper considers the data subject’s position with respect to data collection, leading to considerations about surveillance and datafication. Proposals for data trusts are considered, applying principles of English trust law to mitigate the imbalance of power between large data users and individual data subjects. Finally, the paper explores the possibility of a workable remedy in the form of a class action lawsuit, which could give data subjects some collective power in the event of a data breach. Despite regulatory efforts to protect personal data, there is a lack of public trust in the current data sharing system….(More)”.

Data Trusts: More Data than Trust? The Perspective of the Data Subject in the Face of a Growing Problem

Eamon Duede at Nature Research: “We were seeing that the consumption of ideas seemed deeply related to political alignment, and because our group (Knowledge Lab) is concerned with understanding the social dynamics involved in the production of ideas, we began wondering whether and to what extent the political alignment of individuals contributes to a group’s ability to produce knowledge. A Wikipedia article is full of smuggled content and worked into a narrative by a diverse team of editors. Because those articles constitute knowledge, we were curious to know whether political polarization within those teams had an effect on the quality of that production. So, we decided to braid both strands of research together and look at the way in which individual political alignments and the polarization of the teams they form affect the quality of the work that is produced collaboratively on Wikipedia.

To answer this question, we turned not to the articles themselves, but to the immense history of articles on Wikipedia. Every edit to every article, no matter how insignificant, is documented and saved in Wikipedia’s astonishingly massive archives. And every edit to every article, no matter how insignificant, is evaluated for its relevance or validity by the vast community of editors, both robotic and human. Remarkable teamwork has gone into producing the encyclopedia. Some people edit randomly, simply cleaning typos, adding citations, or contributing graffiti and vandalism (I’ve experimented with this, and it gets painted over very quickly, no matter where you put it). Yet many people are genuinely purposeful in their work, and contribute specifically to topics on which they have both interest and knowledge. Like gardeners, they tend and grow a handful of articles or a few broad topics. We walked through the histories of these gardens, looking back at who made contributions here and there, how much they contributed, and where. We expected that editors who make frequent contributions to pages associated with American liberalism would hold left-leaning opinions, and that frequent contributors to pages associated with American conservatism would hold opinions on the right. This was a controversial hypothesis, and many in the Wikipedia community felt that perhaps the opposite would be true, with liberals correcting conservative pages and conservatives kindly returning the favor, like weeding or applying pesticide. But a survey we conducted of active Wikipedia editors found that a score built from the relative number of bits an editor contributed to liberal versus conservative pages predicted more than a third of the probability that they identified as such and voted accordingly.

Following this validation, we assigned a political alignment score to hundreds of thousands of editors by looking at where they make contributions, and then examined the polarization within teams of editors that produced hundreds of thousands of Wikipedia articles in the broad topic areas of politics, social issues, and science. We found that when most members of a team have the same political alignment, whether conservative, liberal, or “independent”, the quality of the Wikipedia pages they produce is not as strong as those of teams with polarized compositions of editors (Shi et al. 2019).
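
To make the scoring idea concrete, here is a minimal sketch in Python of how an editor’s alignment score and a team-level polarization measure could be computed from edit volumes. It is an illustration under assumptions, not the study’s actual code: the normalized-difference score, the standard-deviation polarization measure, and the sample edit counts are all hypothetical.

```python
from statistics import mean, pstdev

def alignment_score(liberal_bits: int, conservative_bits: int) -> float:
    """Score an editor from -1 (liberal) to +1 (conservative) using the
    relative volume of bits contributed to pages associated with each
    ideology (a hypothetical normalized difference)."""
    total = liberal_bits + conservative_bits
    if total == 0:
        return 0.0  # no contributions on either side: treat as unaligned
    return (conservative_bits - liberal_bits) / total

def team_polarization(scores):
    """One simple polarization measure for an article's editor team:
    the population standard deviation of alignment scores. A team of
    like-minded editors scores near 0; an ideologically mixed team
    scores higher."""
    return pstdev(scores) if len(scores) > 1 else 0.0

# Hypothetical edit volumes: (bits on liberal pages, bits on conservative pages)
editors = [(5200, 300), (4100, 150), (90, 4800), (60, 3900)]
scores = [alignment_score(lib, con) for lib, con in editors]
print(f"mean alignment {mean(scores):+.2f}, polarization {team_polarization(scores):.2f}")
```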

The United States Senate is increasingly polarized, but largely balanced in its polarization. If the Senate were trying to write a Wikipedia article, would it produce a high-quality article? If senators worked as editors do on Wikipedia, following the norms of civility and balance inscribed in Wikipedia’s policies and guidelines, committed to the production of knowledge rather than self-promotion, then the answer is probably “yes”. That is a surprising finding. We think the reason is that Wikipedia’s policies work to suppress the kind of rhetoric and sophistry common in everyday discourse, not to mention toxic language and name-calling. Those policies are intolerant of discussion that could distort balanced consideration of the edit and topic at hand, and, because they shut down discourse that could bias proposed edits, teams with polarized viewpoints have to spend significantly more time discussing and debating the content that is up for inclusion in an article. These diverse viewpoints seem to bring out points and arguments between team members that sharpen and refine the quality of the content they can collectively agree to. With assumptions and norms of respect and civility, political polarization can be powerful and generative….(More)”

Finding Wisdom in Politically Polarized Crowds

Paper by Joseph D. Tucker, Suzanne Day, Weiming Tang, and Barry Bayus: “Crowdsourcing shifts medical research from a closed environment to an open collaboration between the public and researchers. We define crowdsourcing as an approach to problem solving which involves an organization having a large group attempt to solve a problem or part of a problem, then sharing solutions. Crowdsourcing allows large groups of individuals to participate in medical research through innovation challenges, hackathons, and related activities. The purpose of this literature review is to examine the definition, concepts, and applications of crowdsourcing in medicine.

This multi-disciplinary review defines crowdsourcing for medicine, identifies conceptual antecedents (collective intelligence and open source models), and explores implications of the approach. Several critiques of crowdsourcing are also examined. Although several crowdsourcing definitions exist, there are two essential elements: (1) having a large group of individuals, including those with skills and those without skills, propose potential solutions; (2) sharing solutions through implementation or open access materials. The public can be a central force in contributing to formative, pre-clinical, and clinical research. A growing evidence base suggests that crowdsourcing in medicine can result in high-quality outcomes, broad community engagement, and more open science….(More)”

Crowdsourcing in medical research: concepts and applications

Dana Gold in the Washington Post: “When a whistleblower revealed the Trump administration’s decision to overturn 25 security clearance denials, it was the latest in a long and storied history of insiders exposing significant abuses of public trust. Whistles were blown on U.S. involvement in Vietnam, the Watergate coverup, Enron’s financial fraud, the National Security Agency’s mass surveillance of domestic electronic communications and, during the Trump administration, the corruption of former Environmental Protection Agency chief Scott Pruitt, Cambridge Analytica’s theft of Facebook users’ data to develop targeted political ads, and harm to children posed by the “zero tolerance” immigration policy. Despite the essential role whistleblowers play in illuminating the truth and protecting the public interest, several myths persist about them, some pernicious.

MYTH NO. 1 Whistleblowers are employees who report problems externally….

MYTH NO. 2 Whistleblowers are either disloyal or heroes….

MYTH NO. 3 ‘Leaker’ is another term for ‘whistleblower.’…

MYTH NO. 4 Remaining anonymous is the best strategy for whistleblowing….

MYTH NO. 5 Julian Assange is a whistleblower….(More)”.

Five myths about whistleblowers

Robin Wigglesworth in the Financial Times: “Imagine a world where interminable waits for backward-looking, frequently revised economic data seem as archaically quaint as floppy disks, beepers and a civil internet. This fantasy realm may be closer than you think.

The Bureau of Economic Analysis will soon publish its preliminary estimate for US economic growth in the first three months of the year, finally catching up on its regular schedule after a government shutdown paralysed the agency. But other data are still delayed, and the final official result for US gross domestic product won’t be available until July. Along the way there are likely to be many tweaks.

Collecting timely and accurate data is a Herculean task, especially for an economy as vast and varied as the US’s. But last week’s World Bank-International Monetary Fund annual spring meetings offered some clues on a brighter, more digital future for economic data.

The IMF hosted a series of seminars and discussions exploring how the hot new world of Big Data could be harnessed to produce more timely economic figures — and improve economic forecasts. Jiaxiong Yao, an IMF official in its African department, explained how it could use satellites to measure the intensity of night-time lights, and derive a real-time gauge of economic health.

“If a country gets brighter over time, it is growing. If it is getting darker then it probably needs an IMF programme,” he noted. Further sessions explored how the IMF could use machine learning — a popular field of artificial intelligence — to improve its influential but often faulty economic forecasts; and real-time shipping data to map global trade flows.
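
As a rough illustration of how such a gauge could work (a sketch under assumptions, not the IMF’s methodology), the snippet below takes hypothetical yearly mean radiance values for one region and derives year-over-year changes in brightness as a crude proxy for economic activity:

```python
# Hypothetical yearly mean night-time radiance for one region, e.g.
# aggregated from satellite composites; the values are made up.
radiance_by_year = {2016: 11.2, 2017: 11.9, 2018: 12.8}

def luminosity_growth(radiance):
    """Year-over-year change in average night-light intensity, used as
    a crude real-time proxy for economic activity: getting brighter
    suggests growth, getting darker suggests contraction."""
    years = sorted(radiance)
    return {
        year: (radiance[year] - radiance[prev]) / radiance[prev]
        for prev, year in zip(years, years[1:])
    }

for year, change in luminosity_growth(radiance_by_year).items():
    print(f"{year}: {change:+.1%} change in night-light intensity")
```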

Sophisticated hedge funds have been mining some of these new “alternative” data sets for some time, but statistical agencies, central banks and multinational organisations such as the IMF and the World Bank are also starting to embrace the potential.

The amount of digital data around the world is already unimaginably vast. As more of our social and economic activity migrates online, the quantity and quality are going to increase exponentially. The potential is mind-boggling. Setting aside the obvious and thorny privacy issues, it is likely to lead to a revolution in the world of economic statistics. …

Yet the biggest issues are not the weaknesses of these new data sets — all statistics have inherent flaws — but their nature and location.

Firstly, it depends on today’s lax regulatory and personal attitudes towards personal data persisting, and there are signs of a (healthy) backlash brewing.

Secondly, almost all of this alternative data is being generated and stored in the private sector, not by government bodies such as the Bureau of Economic Analysis, Eurostat or the UK’s Office for National Statistics.

Public bodies are generally too poorly funded to buy or clean all this data themselves, meaning hedge funds will benefit from better economic data than the broader public. We might, in fact, need legislation mandating that statistical agencies receive free access to any aggregated private sector data sets that might be useful to their work.

That would ensure that our economic officials and policymakers don’t fly blind in an increasingly illuminated world….(More)”.

Illuminating Big Data will leave governments in the dark

Blog Post by Stefaan G. Verhulst: “…The value of data collaboratives stems from the fact that the supply of and demand for data are generally widely dispersed — spread across government, the private sector, and civil society — and often poorly matched. This failure (a form of “market failure”) results in tremendous inefficiencies and lost potential. Much data that is released is never used. And much data that is actually needed is never made accessible to those who could productively put it to use.

Data collaboratives, when designed responsibly, are the key to addressing this shortcoming. They draw together otherwise siloed data and a dispersed range of expertise, helping match supply and demand, and ensuring that the correct institutions and individuals are using and analyzing data in ways that maximize the possibility of new, innovative social solutions.

Roadmap for Data Collaboratives

Despite their clear potential, the evidence base for data collaboratives is thin. There is no systematic, structured framework that can be replicated across projects and geographies, and there is a lack of clear understanding about what works, what doesn’t, and how best to maximize the potential of data collaboratives.

At the GovLab, we’ve been working to address these shortcomings. For emerging economies considering the use of data collaboratives, whether in pursuit of Artificial Intelligence or other solutions, we present six steps that can be considered in order to create data collaboratives that are more systematic, sustainable, and responsible.

The need for making Data Collaboratives Systematic, Sustainable and Responsible
  • Increase Evidence and Awareness
  • Increase Readiness and Capacity
  • Address Data Supply and Demand Inefficiencies and Uncertainties
  • Establish a New “Data Stewards” Function
  • Develop and Strengthen Policies and Governance Practices for Data Collaboration
Data Collaboratives as an enabling infrastructure for AI for Good

Book by Nathan Gardels and Nicolas Berggruen: “The rise of populism in the West and the rise of China in the East have stirred a rethinking of how democratic systems work—and how they fail. The impact of globalism and digital capitalism is forcing worldwide attention to the starker divide between the “haves” and the “have-nots,” challenging how we think about the social contract.

With fierce clarity and conviction, Renovating Democracy tears down our basic structures and challenges us to conceive of an alternative framework for governance. To truly renovate our global systems, the authors argue for empowering participation without populism by integrating social networks and direct democracy into the system with new mediating institutions that complement representative government. They outline steps to reconfigure the social contract to protect workers instead of jobs, shifting from “redistribution” after wealth is created to “pre-distribution,” with the aim of enhancing the skills and assets of those less well-off. Lastly, they argue for harnessing globalization through “positive nationalism” at home while advocating for global cooperation—specifically a partnership with China—to create a viable rules-based world order.

Thought provoking and persuasive, Renovating Democracy serves as a point of departure that deepens and expands the discourse for positive change in governance….(More)”.

Renovating Democracy: Governing in the Age of Globalization and Digital Capitalism

Nathaniel Raymond at Nature: “One of the primary documents aiming to protect human research participants was published in the US Federal Register 40 years ago this week. The Belmont Report was commissioned by Congress in the wake of the notorious Tuskegee syphilis study, in which researchers withheld treatment from African American men for years and observed how the disease caused blindness, heart disease, dementia and, in some cases, death.

The Belmont Report lays out core principles now generally required for human research to be considered ethical. Although technically governing only US federally supported research, its influence reverberates across academia and industry globally. Before academics with US government funding can begin research involving humans, their institutional review boards (IRBs) must determine that the studies comply with regulation largely derived from a document that was written more than a decade before the World Wide Web and nearly a quarter of a century before Facebook.

It is past time for a Belmont 2.0. We should not be asking those tasked with protecting human participants to single-handedly identify and contend with the implications of the digital revolution. Technological progress, including machine learning, data analytics and artificial intelligence, has altered the potential risks of research in ways that the authors of the first Belmont report could not have predicted. For example, Muslim cab drivers can be identified from patterns indicating that they stop to pray; the Ugandan government can try to identify gay men from their social-media habits; and researchers can monitor and influence individuals’ behaviour online without enrolling them in a study.

Consider the 2014 Facebook ‘emotional contagion study’, which manipulated users’ exposure to emotional content to evaluate effects on mood. That project, a collaboration with academic researchers, led the US Department of Health and Human Services to launch a long rule-making process that tweaked some regulations governing IRBs.

A broader fix is needed. Right now, data science overlooks risks to human participants by default….(More)”.

Safeguards for human studies can’t cope with big data
