Stefaan Verhulst

Chava Gourarie at StoryBench: “Early this summer, at the height of the family separation crisis – when children were being forcibly separated from their parents at our nation’s border – a team of scholars pooled their skills to address the issue. The group of researchers – from a variety of humanities departments at multiple universities – spent a week of non-stop work mapping the immigration detention network that spans the United States. They named the project “Torn Apart/Separados” and published it online to support efforts to locate the separated children and reunite them with their parents.

The project utilizes the methods of the digital humanities, an emerging discipline that applies computational tools to fields within the humanities, like literature and history. It was led by members of Columbia University’s Group for Experimental Methods in the Humanities, which had previously used methods such as rapid deployment to respond to natural disasters.

The group has since expanded the project, publishing a second volume that focuses on the $5 billion immigration industry, based largely on public data about companies that contract with the Immigration and Customs Enforcement agency. The visualizations highlight the astounding growth in investment in ICE infrastructure (from $475 million in 2014 to $5.1 billion in 2018), as well as who benefits from these contracts, and how the money is spent.
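For a sense of scale, those reported figures imply roughly a tenfold increase over four years. A quick back-of-the-envelope check (the dollar amounts come from the excerpt above; the rest is simple arithmetic):

```python
# Reported ICE detention spending: $475 million (2014) to $5.1 billion (2018).
start, end, years = 475e6, 5.1e9, 4
growth = end / start                  # overall multiple over the period
cagr = growth ** (1 / years) - 1      # implied compound annual growth rate
print(f"{growth:.1f}x overall, about {cagr:.0%} per year")
# -> 10.7x overall, about 81% per year
```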

Storybench spoke with Columbia University’s Alex Gil, who worked on both phases of the project, about the process of building “Torn Apart/Separados,” the design and messaging choices that were made, and the ways in which methods of the digital humanities can cross-pollinate with those of journalism…(More)”.

How data helped visualize the family separation crisis

Dani Rodrik at Project Syndicate: “New technologies reduce the prices of goods and services to which they are applied. They also lead to the creation of new products. Consumers benefit from these improvements, regardless of whether they live in rich or poor countries.

Mobile phones are a clear example of the deep impact of some new technologies. In a striking case of technological leapfrogging, they have given poor people in developing countries access to long-distance communications without the need for costly investments in landlines and other infrastructure. Likewise, mobile banking provided through cell phones has enabled access to financial services in remote areas without bank branches….

The introduction of these new technologies in production in developing countries often takes place through global value chains (GVCs). In principle, GVCs benefit these economies by easing entry into global markets.

Yet big questions surround the possibilities created by these new technologies. Are the productivity gains large enough? Can they diffuse sufficiently quickly throughout the rest of the economy?

Any optimism about the scale of GVCs’ contribution must be tempered by three sobering facts. First, the expansion of GVCs seems to have ground to a halt in recent years. Second, developing-country participation in GVCs – and indeed in world trade in general – has remained quite limited, with the notable exception of certain Asian countries. Third, and perhaps most worrisome, the domestic employment consequences of recent trade and technological trends have been disappointing.

Upon closer inspection, GVCs and new technologies exhibit features that limit the upside to – and may even undermine – developing countries’ economic performance. One such feature is an overall bias in favor of skills and other capabilities. This bias reduces developing countries’ comparative advantage in traditionally labor-intensive manufacturing (and other) activities, and decreases their gains from trade.

Second, GVCs make it harder for low-income countries to use their labor-cost advantage to offset their technological disadvantage, by reducing their ability to substitute unskilled labor for other production inputs. These two features reinforce and compound each other. The evidence to date, on the employment and trade fronts, is that the disadvantages may have more than offset the advantages….(More)”.

Will New Technologies Help or Harm Developing Countries?

Sara Fischer at Axios: “Dozens of new initiatives have launched over the past few years to address fake news and the erosion of faith in the media, creating a measurement problem of its own.

Why it matters: So many new efforts are launching simultaneously to solve the same problem that it’s become difficult to track which ones do what and which ones are partnering with each other….

To name a few:

  • The Trust Project, which is made up of dozens of global news companies, announced this morning that the number of journalism organizations using the global network’s “Trust Indicators” now totals 120, making it one of the larger global initiatives to combat fake news. Some of these groups (like NewsGuard) work with Trust Project and are a part of it.
  • News Integrity Initiative (Facebook, Craig Newmark Philanthropic Fund, Ford Foundation, Democracy Fund, John S. and James L. Knight Foundation, Tow Foundation, AppNexus, Mozilla and Betaworks)
  • NewsGuard (Longtime journalists and media entrepreneurs Steven Brill and Gordon Crovitz)
  • The Journalism Trust Initiative (Reporters Without Borders, Agence France Presse, the European Broadcasting Union and the Global Editors Network)
  • Internews (Longtime international non-profit)
  • Accountability Journalism Program (American Press Institute)
  • Trusting News (Reynolds Journalism Institute)
  • Media Manipulation Initiative (Data & Society)
  • Deepnews.ai (Frédéric Filloux)
  • Trust & News Initiative (Knight Foundation, Facebook and Craig Newmark in affiliation with Duke University)
  • Our.News (Independently run)
  • WikiTribune (Wikipedia founder Jimmy Wales)

There are also dozens of fact-checking efforts being championed by different third parties, as well as efforts being built around blockchain and artificial intelligence.

Between the lines: Most of these efforts include some mechanism that lets readers visually discern real journalism from fake news via a badge or watermark, but that presents problems as well.

  • Attempts to flag or call out news as being real and valid have in the past been rejected even further by those who wish to discredit vetted media.
  • For example, Facebook said in December that it will no longer use “Disputed Flags” — red flags next to fake news articles — to identify fake news for users, because it found that “putting a strong image, like a red flag, next to an article may actually entrench deeply held beliefs – the opposite effect to what we intended.”…(More)”.

How pro-trust initiatives are taking over the Internet

A conversation with Karl Sigmund at Edge: “…Now, I’m getting back to evolutionary game theory, the theory of evolution of cooperation and the social contract, and how the social contract can be subverted by corruption. That’s what interests me most currently. Of course, that is not a new story. I believe it explains a lot of what I see happening in my field and in related fields. The ideas that survive are the ideas that are fruitful in the sense of quickly producing a lot of publications, and that’s not necessarily correlated with these ideas being important to advancing science.

Corruption is a wicked problem, wicked in the technical sense of sociology, and it’s not something that will go away. You can reduce it, but as soon as you stop your efforts, it comes back again. Of course, there are many sides to corruption, but everybody seems now to agree that it is a very important problem. In fact, there was a Gallup poll recently in which people were asked what the number one problem in today’s world is. You would think it would be climate change or overpopulation, but it turned out the majority said “corruption.” So, it’s a problem that is affecting us deeply.

There are so many different types of corruption, but the official definition is “a misuse of public trust for private means.” And this need not be by state officials; it could be also by CEOs, or by managers of non-governmental organizations, or by a soccer referee for that matter. It is always the misuse of public trust for private means, which of course takes many different forms; for instance, you have something called pork barreling, which is a wonderful expression in the United States, or embezzlement of funds, and so on.

I am mostly interested in the effect of bribery upon the judiciary system. If the trust in contracts breaks down, then the economy breaks down, because trust is at the root of the economy. There are staggering statistics which illustrate that the economic welfare of a state is closely related to its corruption perception index. Every year, statistics about corruption are published by organizations such as Transparency International and other non-governmental organizations. It is truly astonishing how closely the gradient between countries in corruption levels aligns with the gradient in welfare, in household income and things like this.

The paralyzing effect of this type of corruption upon the economy is something that is extremely interesting. Lots of economists are now turning their interest to that, which is new. In the 1970s, the Nobel Prize-winning economist Gunnar Myrdal said that corruption was practically taboo as a research topic among economists. This has changed considerably in the decades since. It has become a very interesting topic for students of law, economics, and sociology, and for historians, of course, because corruption has always been with us. This is now a booming field, and I would like to approach this with evolutionary game theory.

Evolutionary game theory has a long tradition, and I have witnessed its development practically from the beginning. Some of the most important pioneers were Robert Axelrod and John Maynard Smith. In particular, in the late ‘70s Axelrod ran computer tournaments of the iterated prisoner’s dilemma, which led to his truly seminal book The Evolution of Cooperation. He showed that there is a way out of the social dilemma, which is based on reciprocity. This surprised economists, particularly game theoreticians. He showed that by viewing social dilemmas in the context of a population where people learn from each other, where social learning means imitating whatever type of behavior is currently the most successful, you can place them in a context where cooperative strategies based on reciprocation, like tit for tat, can evolve….(More)”.
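For readers who want to see the mechanics Sigmund is describing, here is a minimal sketch of the iterated prisoner’s dilemma with the tit-for-tat strategy. The payoff values are the conventional ones (temptation 5, reward 3, punishment 1, sucker 0); the code is an illustration, not taken from any of the works mentioned:

```python
# Payoffs for one round of the prisoner's dilemma: (player A score, player B score).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate on the first move, then copy the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)   # each player sees only the other's past moves
        move_b = strategy_b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (600, 600): sustained mutual cooperation
print(play(tit_for_tat, always_defect))  # (199, 204): defection gains once, then stalls
```

In a population setting where successful strategies are imitated, this kind of reciprocal strategy can spread at the expense of unconditional defection, which is the result that made Axelrod’s book famous.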

When the Rule of Law Is Not Working

Introduction to the Special Issue of the Philosophical Transactions of the Royal Society by Sandra Wachter, Brent Mittelstadt, Luciano Floridi and Corinne Cath: “Artificial intelligence (AI) increasingly permeates every aspect of our society, from the critical, like urban infrastructure, law enforcement, banking, healthcare and humanitarian aid, to the mundane like dating. AI, including embodied AI in robotics and techniques like machine learning, can improve economic and social welfare and the exercise of human rights. Owing to the proliferation of AI in high-risk areas, the pressure is mounting to design and govern AI to be accountable, fair and transparent. How can this be achieved and through which frameworks? This is one of the central questions addressed in this special issue, in which eight authors present in-depth analyses of the ethical, legal-regulatory and technical challenges posed by developing governance regimes for AI systems. The issue also gives a brief overview of recent developments in AI governance, surveys how much of the agenda for defining AI regulation, ethical frameworks and technical approaches is already set, and provides some concrete suggestions to further the debate on AI governance…(More)”.

Governing artificial intelligence: ethical, legal, and technical opportunities and challenges

Joshua New at the Center for Data Innovation: “…the Trump administration announced the United States-Mexico-Canada Agreement (USMCA), the trade deal it intends to replace NAFTA with. The parties—Canada, Mexico, and the United States—still have to adopt the deal, and if they do, they will enjoy several welcome provisions that can give a boost to data-driven innovation in all three countries.

First, USMCA is the first trade agreement in the world to promote the publication of open government data. Article 19.18 of the agreement officially recognizes that “facilitating public access to and use of government information fosters economic and social development, competitiveness, and innovation.” Though the deal does not require parties to publish open government data, to the extent they choose to publish this data, it directs them to adhere to best practices for open data, including ensuring it is in open, machine-readable formats. Additionally, the deal directs parties to try to cooperate and identify ways they can expand access to and the use of government data, particularly for the purposes of creating economic opportunity for small and medium-sized businesses. While this is a welcome provision, the United States still needs legislation to ensure that publishing open data becomes an official responsibility of federal government agencies.

Second, Article 19.11 of USMCA prevents parties from restricting “the cross-border transfer of information, including personal information, by electronic means if this activity is for the conduct of the business of a covered person.” Additionally, Article 19.12 prevents parties from requiring people or firms “to use or locate computing facilities in that Party’s territory as a condition for conducting business in that territory.” In effect, these provisions prevent parties from enacting protectionist data localization requirements that inhibit the flow of data across borders. This is important because many countries have disingenuously argued that data localization requirements protect their citizens from privacy or security harms – despite the location of data having no bearing on either – when the real aim is to prop up their domestic data-driven industries….(More)”.

Here’s What the USMCA Does for Data Innovation

Paul Raeburn at Scientific American: “Researchers are becoming so adept at mining information from genealogical, medical and police genetic databases that it is becoming difficult to protect anyone’s privacy—even those who have never submitted their DNA for analysis.

In one of two separate studies published October 11, researchers report that by testing the 1.28 million samples contained in a consumer gene database, they could match the DNA of 60 percent of the 140 million Americans of European descent to a third cousin or closer relative. That figure, they say in the study published in Science, will soon rise to nearly 100 percent as the number of samples in consumer databases such as AncestryDNA and 23andMe grows.
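The intuition behind that coverage figure can be captured with a toy model: everyone has many third-or-closer cousins, so even a small sampling fraction makes at least one match likely. The sketch below is illustrative only – the relative count and the detection probability are assumptions chosen for the example, not parameters taken from the Science paper:

```python
def coverage(population, database, relatives, p_detect):
    """Toy model: probability that a random person has at least one
    detectable third-cousin-or-closer relative in the database,
    assuming uniform sampling and independent detection."""
    sampling_fraction = database / population
    p_hit = sampling_fraction * p_detect   # a given relative is sampled AND detectable
    return 1 - (1 - p_hit) ** relatives

# ~140M Americans of European descent, 1.28M database samples; the
# 250 relatives and 40% detection rate are assumed for illustration.
print(coverage(population=140e6, database=1.28e6, relatives=250, p_detect=0.4))
# -> ~0.60, the same order as the study's reported figure
```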

In the second study, in the journal Cell, a different research group shows that police databases—once thought to be made up of meaningless DNA useful only for matching suspects with crime scene samples—can be cross-linked with genetic databases to connect individuals to their genetic information. “Both of these papers show you how deeply you can reach into a family and a population,” says Erin Murphy, a professor of law at New York University School of Law. Consumers who decide to share DNA with a consumer database are providing information on their parents, children, third cousins they don’t know about—and even a trace that could point to children who don’t exist yet, she says….(More)”.

How to Identify Almost Anyone in a Consumer Gene Database

Report by Mark Latonero that “…shows how human rights can serve as a “North Star” to guide the development and governance of artificial intelligence.

The report draws the connections between AI and human rights; reframes recent AI-related controversies through a human rights lens; and reviews current stakeholder efforts at the intersection of AI and human rights.

This report is intended for stakeholders – such as technology companies, governments, intergovernmental organizations, civil society groups, academia, and the United Nations (UN) system – looking to incorporate human rights into social and organizational contexts related to the development and governance of AI….(More)”.

Governing Artificial Intelligence: Upholding Human Rights & Dignity

Paper by Sandra Wachter and Brent Mittelstadt: “Big Data analytics and artificial intelligence (AI) draw non-intuitive and unverifiable inferences and predictions about the behaviors, preferences, and private lives of individuals. These inferences draw on highly diverse and feature-rich data of unpredictable value, and create new opportunities for discriminatory, biased, and invasive decision-making. Concerns about algorithmic accountability are often actually concerns about the way in which these technologies draw privacy invasive and non-verifiable inferences about us that we cannot predict, understand, or refute.

Data protection law is meant to protect people’s privacy, identity, reputation, and autonomy, but is currently failing to protect data subjects from the novel risks of inferential analytics. The broad concept of personal data in Europe could be interpreted to include inferences, predictions, and assumptions that refer to or impact on an individual. If inferences are seen as personal data, individuals are granted numerous rights under data protection law. However, the legal status of inferences is heavily disputed in legal scholarship, and marked by inconsistencies and contradictions within and between the views of the Article 29 Working Party and the European Court of Justice.

As we show in this paper, individuals are granted little control and oversight over how their personal data is used to draw inferences about them. Compared to other types of personal data, inferences are effectively ‘economy class’ personal data in the General Data Protection Regulation (GDPR). Data subjects’ rights to know about (Art 13-15), rectify (Art 16), delete (Art 17), object to (Art 21), or port (Art 20) personal data are significantly curtailed when it comes to inferences, often requiring a greater balance with controllers’ interests (e.g. trade secrets, intellectual property) than would otherwise be the case. Similarly, the GDPR provides insufficient protection against sensitive inferences (Art 9) or remedies to challenge inferences or important decisions based on them (Art 22(3))….

In this paper we argue that a new data protection right, the ‘right to reasonable inferences’, is needed to help close the accountability gap currently posed by ‘high risk inferences’, meaning inferences that are privacy-invasive or reputation-damaging and have low verifiability in the sense of being predictive or opinion-based. In cases where algorithms draw ‘high risk inferences’ about individuals, this right would require an ex-ante justification to be given by the data controller to establish whether an inference is reasonable. This disclosure would address (1) why certain data is a relevant basis to draw inferences; (2) why these inferences are relevant for the chosen processing purpose or type of automated decision; and (3) whether the data and methods used to draw the inferences are accurate and statistically reliable. The ex-ante justification is bolstered by an additional ex-post mechanism enabling unreasonable inferences to be challenged. A right to reasonable inferences must, however, be reconciled with EU jurisprudence and counterbalanced with IP and trade secrets law as well as freedom of expression and Article 16 of the EU Charter of Fundamental Rights: the freedom to conduct a business….(More)”.

A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI

Samantha Horton at WFYI: “Though many websites offer non-scientific ratings on a number of services, two Indiana University scientists say judging hospitals that way likely isn’t fair.

Their recently released study compares the federal government’s Hospital Compare and crowdsourced sites such as Facebook, Yelp and Google. The research finds it’s difficult for people to accurately understand everything a hospital does, and that leads to biased ratings.

Patient experiences with food, amenities and bedside manner often align with federal government ratings. But IU professor Victoria Perez says judging quality of care and safety is much more nuanced, and people often get it wrong.

“About 20 percent of the hospitals rated best within a local market on social media were rated worst in that market by Hospital Compare in terms of patient health outcomes,” she says.

For the crowdsourced ratings to be more useful, Perez says people would have to know how to cross-reference them with a more reliable data source, such as Hospital Compare. But even that site can be challenging to navigate depending on what the consumer is looking for.

“If you have a condition-specific concern and you can see the clinical measure for a hospital, that may be helpful,” says Perez. “But if your particular medical concern is not listed there, it might be hard to extrapolate from the ones that are listed or to know which ones you should be looking at.”

She says consumers would need more information about patient outcomes and other quality metrics to be able to reliably crowdsource a hospital on a site such as Google…(More)”.
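As a practical illustration of the cross-referencing Perez describes, one could join a crowdsourced ratings export against a Hospital Compare extract on a shared provider identifier and look for mismatches. A minimal sketch, assuming hypothetical file and column names (not from the study itself):

```python
import pandas as pd

# Hypothetical inputs: a crowdsourced ratings export (provider_id, stars)
# and a Hospital Compare extract (provider_id, outcome_score).
crowd = pd.read_csv("crowdsourced_ratings.csv")
compare = pd.read_csv("hospital_compare.csv")

# Join the two sources on the shared provider identifier.
merged = crowd.merge(compare, on="provider_id", how="inner")

# Flag hospitals rated highly on social media but sitting in the bottom
# quartile of measured patient outcomes, i.e. the mismatch the study found.
cutoff = merged["outcome_score"].quantile(0.25)
suspect = merged[(merged["stars"] >= 4.5) & (merged["outcome_score"] <= cutoff)]
print(suspect.head())
```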

Study: Crowdsourced Hospital Ratings May Not Be Fair
