DNA databases are too white. This man aims to fix that.


Interview with Carlos D. Bustamante by David Rotman: “In the 15 years since the Human Genome Project first exposed our DNA blueprint, vast amounts of genetic data have been collected from millions of people in many different parts of the world. Carlos D. Bustamante’s job is to search that genetic data for clues to everything from ancient history and human migration patterns to the reasons people with different ancestries are so varied in their response to common diseases.

Bustamante’s career has roughly spanned the period since the Human Genome Project was completed. A professor of genetics and biomedical data science at Stanford and 2010 winner of a MacArthur genius award, he has helped to tease out the complex genetic variation across different populations. These variants mean that the causes of diseases can vary greatly between groups. Part of the motivation for Bustamante, who was born in Venezuela and moved to the US when he was seven, is to use those insights to lessen the medical disparities that still plague us.

But while it’s an area ripe with potential for improving medicine, it’s also fraught with controversies over how to interpret genetic differences between human populations. In an era still obsessed with race and ethnicity—and marred by the frequent misuse of science in defining the characteristics of different groups—Bustamante remains undaunted in searching for the nuanced genetic differences that these groups display.

Perhaps his optimism is due to his personality—few sentences go by without a “fantastic” or “extraordinarily exciting.” But it is also his recognition as a population geneticist of the incredible opportunity that understanding differences in human genomes presents for improving health and fighting disease.

David Rotman, MIT Technology Review’s editor at large, discussed with Bustamante why it’s so important to include more people in genetic studies and understand the genetics of different populations.

How good are we at making sure that the genomic data we’re collecting is inclusive?

I’m optimistic, but it’s not there yet.

In our 2011 paper, the statistic we had was that more than 96% of participants in genome-wide association studies were of European descent. In the follow-up in 2016, the number went from 96% to around 80%. So that’s getting better. Unfortunately, or perhaps fortunately, a lot of that improvement is due to the entry of China into genetics, with large-scale studies in Chinese and East Asian populations. Hispanics, for example, make up less than 1% of participants in genome-wide association studies. So we need to do better. Ultimately, we want precision medicine to benefit everybody.

Aside from a fairness issue, why is diversity in genomic data important? What do we miss without it?

First of all, it has nothing to do with political correctness. It has everything to do with human biology and the fact that human populations, and the great diaspora of human migrations, have left their mark on the human genome. The genetic underpinnings of health and disease have components that are shared across human populations and components that are unique to different populations….(More)”.

Crowdsourcing the vote: New horizons in citizen forecasting


Article by Mickael Temporão, Yannick Dufresne, Justin Savoie, and Clifton van der Linden in the International Journal of Forecasting: “People do not know much about politics. This is one of the most robust findings in political science and is backed by decades of research. Most of this research has focused on people’s ability to know about political issues and party positions on these issues. But can people predict elections? Our research uses a very large dataset (n > 2,000,000) collected during ten provincial and federal elections in Canada to test whether people can predict the electoral victor and the closeness of the race in their district throughout the campaign. The results show that they can. This paper also contributes to the emerging literature on citizen forecasting by developing a scaling method that allows us to compare the closeness of races and that can be applied to multiparty contexts with varying numbers of parties. Finally, we assess the accuracy of citizen forecasting in Canada when compared to voter expectations weighted by past votes and political competency….(More)”.
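The excerpt does not describe the authors’ scaling method in detail. As a purely illustrative sketch (not the measure developed in the paper), one simple closeness score that extends to any number of parties is the normalized gap between the two leading shares; the vote-share numbers below are hypothetical:

```python
# Illustrative only: NOT the paper's scaling method. One simple closeness
# score that applies to contests with any number of parties is the
# normalized gap between the two leading vote (or expectation) shares.

def closeness(shares):
    """Return a score in [0, 1]: 1 means the two leaders are tied,
    lower values mean a safer seat for the front-runner."""
    top, runner_up = sorted(shares, reverse=True)[:2]
    return 1 - (top - runner_up)

# Hypothetical citizen-forecast shares for two three-party districts:
print(round(closeness([0.41, 0.38, 0.21]), 2))  # 0.97 -> a very tight race
print(round(closeness([0.65, 0.20, 0.15]), 2))  # 0.55 -> a comfortable lead
```

Because the score depends only on the top two shares, it can be compared across districts with three, four, or five parties alike, which is the property the multiparty setting requires.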

Governance and economics of smart cities: opportunities and challenges


P.B. Anand and Julio Navío-Marco in a Special Issue of Telecommunications Policy: “This editorial introduction to this special issue provides an overview and a conceptual framework of the governance and economics of smart cities. We begin with a discussion of the background to smart cities and then focus on the key challenges for consideration in smart city economics. Here it is argued that there are four dimensions to smart city economics: the first concerns the scale of the global market for smart cities; the second concerns the data to be used for smart city projects; the third concerns market competition and structure; and the fourth concerns the impact on the local economy. Likewise, the smart city governance framework has to be considered a layered, multi-level concept focusing on issues of transparency and accountability to the citizens….(More)”.

Innovation Spaces: New Places for Collective Intelligence?


Chapter by Laure Morel, Laurent Dupont and Marie‐Reine Boudarel in Collective Innovation Processes: Principles and Practices: “Innovation is a complex and multifaceted notion, sometimes difficult to explain. The category of innovation spaces includes co‐working spaces, third places, Living Labs, open labs, incubators, accelerators, hothouses, canteens, FabLabs, MakerSpaces, Tech Shops, hackerspaces, design factories, and so on. Working from communities’ needs and motivations is a key stage in overcoming the obstacles to collective innovation and laying favorable foundations for shared actions that can be converted into collective innovation projects. Organizations are multiplying the opportunities for creating collective intelligence at the service of innovation. Consequently, an innovation space must favor creativity and sharing. It must also promote individual and collective learning. Collective intelligence involves the networking of multiple types of intelligence, the combination of knowledge and competences, as well as cooperation and collaboration between them….(More)”.

How data helped visualize the family separation crisis


Chava Gourarie at Storybench: “Early this summer, at the height of the family separation crisis – when children were being forcibly separated from their parents at our nation’s border – a team of scholars pooled their skills to address the issue. The group of researchers – from a variety of humanities departments at multiple universities – spent a week of non-stop work mapping the immigration detention network that spans the United States. They named the project “Torn Apart/Separados” and published it online to support the efforts of locating and reuniting the separated children with their parents.

The project utilizes the methods of the digital humanities, an emerging discipline that applies computational tools to fields within the humanities, like literature and history. It was led by members of Columbia University’s Group for Experimental Methods in the Humanities, which had previously used methods such as rapid deployment to respond to natural disasters.

The group has since expanded the project, publishing a second volume that focuses on the $5 billion immigration industry, based largely on public data about companies that contract with the Immigration and Customs Enforcement agency. The visualizations highlight the astounding growth in investment in ICE infrastructure (from $475 million in 2014 to $5.1 billion in 2018), as well as who benefits from these contracts, and how the money is spent.

Storybench spoke with Columbia University’s Alex Gil, who worked on both phases of the project, about the process of building “Torn Apart/Separados,” about the design and messaging choices that were made and the ways in which methods of the digital humanities can cross-pollinate with those of journalism…(More)”.
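The growth figures quoted in the excerpt are easy to sanity-check. As a back-of-the-envelope calculation of our own (using only the two numbers quoted above, and assuming they refer to annual spending), the jump from $475 million to $5.1 billion works out to better than tenfold growth over four years:

```python
# Back-of-the-envelope check using only the figures quoted above:
# ICE infrastructure investment of $475 million in 2014 vs. $5.1 billion in 2018.
start, end, years = 475e6, 5.1e9, 2018 - 2014

print(f"growth factor: {end / start:.1f}x")  # 10.7x
print(f"implied annual growth rate: {(end / start) ** (1 / years) - 1:.0%}")  # 81%
```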

Will New Technologies Help or Harm Developing Countries?


Dani Rodrik at Project Syndicate: “New technologies reduce the prices of goods and services to which they are applied. They also lead to the creation of new products. Consumers benefit from these improvements, regardless of whether they live in rich or poor countries.

Mobile phones are a prime example of the deep impact of some new technologies. In a clear case of technological leapfrogging, they have given poor people in developing countries access to long-distance communications without the need for costly investments in landlines and other infrastructure. Likewise, mobile banking provided through cell phones has enabled access to financial services in remote areas without bank branches….

The introduction of these new technologies in production in developing countries often takes place through global value chains (GVCs). In principle, GVCs benefit these economies by easing entry into global markets.

Yet big questions surround the possibilities created by these new technologies. Are the productivity gains large enough? Can they diffuse sufficiently quickly throughout the rest of the economy?

Any optimism about the scale of GVCs’ contribution must be tempered by three sobering facts. First, the expansion of GVCs seems to have ground to a halt in recent years. Second, developing-country participation in GVCs – and indeed in world trade in general – has remained quite limited, with the notable exception of certain Asian countries. Third, and perhaps most worrisome, the domestic employment consequences of recent trade and technological trends have been disappointing.

Upon closer inspection, GVCs and new technologies exhibit features that limit the upside to – and may even undermine – developing countries’ economic performance. One such feature is an overall bias in favor of skills and other capabilities. This bias reduces developing countries’ comparative advantage in traditionally labor-intensive manufacturing (and other) activities, and decreases their gains from trade.

Second, GVCs make it harder for low-income countries to use their labor-cost advantage to offset their technological disadvantage, by reducing their ability to substitute unskilled labor for other production inputs. These two features reinforce and compound each other. The evidence to date, on the employment and trade fronts, is that the disadvantages may have more than offset the advantages….(More)”.

How pro-trust initiatives are taking over the Internet


Sara Fischer at Axios: “Dozens of new initiatives have launched over the past few years to address fake news and the erosion of faith in the media, creating a measurement problem of its own.

Why it matters: So many new efforts are launching simultaneously to solve the same problem that it’s become difficult to track which ones do what and which ones are partnering with each other….

To name a few:

  • The Trust Project, which is made up of dozens of global news companies, announced this morning that the number of journalism organizations using the global network’s “Trust Indicators” now totals 120, making it one of the larger global initiatives to combat fake news. Some of these groups (like NewsGuard) work with the Trust Project and are a part of it.
  • News Integrity Initiative (Facebook, Craig Newmark Philanthropic Fund, Ford Foundation, Democracy Fund, John S. and James L. Knight Foundation, Tow Foundation, AppNexus, Mozilla and Betaworks)
  • NewsGuard (Longtime journalists and media entrepreneurs Steven Brill and Gordon Crovitz)
  • The Journalism Trust Initiative (Reporters Without Borders, Agence France Presse, the European Broadcasting Union and the Global Editors Network)
  • Internews (Longtime international non-profit)
  • Accountability Journalism Program (American Press Institute)
  • Trusting News (Reynolds Journalism Institute)
  • Media Manipulation Initiative (Data & Society)
  • Deepnews.ai (Frédéric Filloux)
  • Trust & News Initiative (Knight Foundation, Facebook and Craig Newmark in affiliation with Duke University)
  • Our.News (Independently run)
  • WikiTribune (Wikipedia founder Jimmy Wales)

There are also dozens of fact-checking efforts being championed by different third-parties, as well as efforts being built around blockchain and artificial intelligence.

Between the lines: Most of these efforts include a mechanism for allowing readers to discern real journalism from fake news via some sort of badge or watermark, but that presents problems as well.

  • Attempts to flag or call out news as real and valid have in the past been rejected all the more strongly by those who wish to discredit vetted media.
  • For example, Facebook said in December that it will no longer use “Disputed Flags” — red flags next to fake news articles — to identify fake news for users, because it found that “putting a strong image, like a red flag, next to an article may actually entrench deeply held beliefs – the opposite effect to what we intended.”…(More)”.

When the Rule of Law Is Not Working


A conversation with Karl Sigmund at Edge: “…Now, I’m getting back to evolutionary game theory, the theory of evolution of cooperation and the social contract, and how the social contract can be subverted by corruption. That’s what interests me most currently. Of course, that is not a new story. I believe it explains a lot of what I see happening in my field and in related fields. The ideas that survive are the ideas that are fruitful in the sense of quickly producing a lot of publications, and that’s not necessarily correlated with these ideas being important to advancing science.

Corruption is a wicked problem, wicked in the technical sense of sociology, and it’s not something that will go away. You can reduce it, but as soon as you stop your efforts, it comes back again. Of course, there are many sides to corruption, but everybody seems now to agree that it is a very important problem. In fact, there was a Gallup poll recently in which people were asked what the number one problem in today’s world is. You would think it would be climate change or overpopulation, but it turned out the majority said “corruption.” So, it’s a problem that is affecting us deeply.

There are so many different types of corruption, but the official definition is “a misuse of public trust for private means.” And this need not be by state officials; it could also be by CEOs, or by managers of non-governmental organizations, or by a soccer referee for that matter. It is always the misuse of public trust for private means, which of course takes many different forms; for instance, you have something called pork barreling, which is a wonderful expression in the United States, or embezzlement of funds, and so on.

I am mostly interested in the effect of bribery upon the judiciary system. If the trust in contracts breaks down, then the economy breaks down, because trust is at the root of the economy. There are staggering statistics which illustrate that the economic welfare of a state is closely related to its corruption perception index. Every year, statistics about corruption are published by organizations such as Transparency International and other non-governmental organizations. It is truly astonishing how closely the gradient between countries in corruption levels aligns with the gradient in welfare, in household income and things like this.

The paralyzing effect of this type of corruption upon the economy is something that is extremely interesting. Lots of economists are now turning their interest to that, which is new. In the 1970s, the Nobel Prize-winning economist Gunnar Myrdal said that corruption was practically taboo as a research topic among economists. This has certainly changed in the decades since. It has become a very interesting topic for students of law, economics, and sociology, and for historians, of course, because corruption has always been with us. This is now a booming field, and I would like to approach it with evolutionary game theory.

Evolutionary game theory has a long tradition, and I have witnessed its development practically from the beginning. Some of the most important pioneers were Robert Axelrod and John Maynard Smith. In particular, Axelrod wrote a truly seminal book called The Evolution of Cooperation, which dealt with the iterated prisoner’s dilemma. He showed that there is a way out of the social dilemma, which is based on reciprocity. This surprised economists, particularly game theoreticians. He showed that by placing social dilemmas in the context of a population where people learn from each other, imitating whatever type of behavior is currently the best, cooperative strategies based on reciprocation, like tit for tat, can evolve….(More)”.
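To make the mechanism concrete, here is a minimal sketch (our illustration, not code from the interview or from Axelrod’s original tournaments) of a round-robin iterated prisoner’s dilemma, using the standard assumed payoff values T=5, R=3, P=1, S=0. With enough conditional cooperators in the pool, reciprocating strategies such as tit for tat end up at the top of the table, which is the effect described above:

```python
# Minimal Axelrod-style round-robin for the iterated prisoner's dilemma.
# Payoffs are the standard assumed values: T=5, R=3, P=1, S=0.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(own, other):
    """Cooperate first, then copy the opponent's previous move."""
    return other[-1] if other else "C"

def grudger(own, other):
    """Cooperate until the opponent defects once, then defect forever."""
    return "D" if "D" in other else "C"

def always_defect(own, other):
    return "D"

def always_cooperate(own, other):
    return "C"

def play(strat_a, strat_b, rounds=200):
    """Play one repeated game; return the two total payoffs."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

strategies = {"tit_for_tat": tit_for_tat, "grudger": grudger,
              "always_defect": always_defect, "always_cooperate": always_cooperate}

# Every strategy meets every strategy (including itself) once as "player A".
totals = {name: 0 for name in strategies}
for name_a, strat_a in strategies.items():
    for name_b, strat_b in strategies.items():
        score_a, _ = play(strat_a, strat_b)
        totals[name_a] += score_a

for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name:16s} {total}")
```

In this pool the two reciprocators (tit for tat and grudger) finish ahead of both unconditional strategies: always-defect wins each individual encounter against a cooperator but earns far less across the whole population, which is the population-level point Sigmund makes.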

Governing artificial intelligence: ethical, legal, and technical opportunities and challenges


Introduction to the Special Issue of the Philosophical Transactions of the Royal Society by Sandra Wachter, Brent Mittelstadt, Luciano Floridi and Corinne Cath: “Artificial intelligence (AI) increasingly permeates every aspect of our society, from the critical, like urban infrastructure, law enforcement, banking, healthcare and humanitarian aid, to the mundane, like dating. AI, including embodied AI in robotics and techniques like machine learning, can improve economic and social welfare and the exercise of human rights. Owing to the proliferation of AI in high-risk areas, the pressure is mounting to design and govern AI to be accountable, fair and transparent. How can this be achieved, and through which frameworks? This is one of the central questions addressed in this special issue, in which eight authors present in-depth analyses of the ethical, legal-regulatory and technical challenges posed by developing governance regimes for AI systems. The issue also gives a brief overview of recent developments in AI governance, assesses how much of the agenda for defining AI regulation, ethical frameworks and technical approaches is already set, and provides some concrete suggestions to further the debate on AI governance…(More)”.

Here’s What the USMCA Does for Data Innovation


Joshua New at the Center for Data Innovation: “…the Trump administration announced the United States-Mexico-Canada Agreement (USMCA), the trade deal it intends to replace NAFTA with. The parties—Canada, Mexico, and the United States—still have to adopt the deal, and if they do, they will enjoy several welcome provisions that can give a boost to data-driven innovation in all three countries.

First, USMCA is the first trade agreement in the world to promote the publication of open government data. Article 19.18 of the agreement officially recognizes that “facilitating public access to and use of government information fosters economic and social development, competitiveness, and innovation.” Though the deal does not require parties to publish open government data, to the extent they choose to publish this data, it directs them to adhere to best practices for open data, including ensuring it is in open, machine-readable formats. Additionally, the deal directs parties to try to cooperate and identify ways they can expand access to and the use of government data, particularly for the purposes of creating economic opportunity for small and medium-sized businesses. While this is a welcome provision, the United States still needs legislation to ensure that publishing open data becomes an official responsibility of federal government agencies.

Second, Article 19.11 of USMCA prevents parties from restricting “the cross-border transfer of information, including personal information, by electronic means if this activity is for the conduct of the business of a covered person.” Additionally, Article 19.12 prevents parties from requiring people or firms “to use or locate computing facilities in that Party’s territory as a condition for conducting business in that territory.” In effect, these provisions prevent parties from enacting protectionist data localization requirements that inhibit the flow of data across borders. This is important because many countries have disingenuously argued for data localization requirements on the grounds that they protect citizens from privacy or security harms, when in fact the location of data has no bearing on either; the real aim is to prop up domestic data-driven industries….(More)”.