The Secret Bias Hidden in Mortgage-Approval Algorithms


An investigation by The Markup: “…has found that lenders in 2019 were more likely to deny home loans to people of color than to White people with similar financial characteristics—even when we controlled for newly available financial factors that the mortgage industry for years has said would explain racial disparities in lending.

Holding 17 different factors steady in a complex statistical analysis of more than two million conventional mortgage applications for home purchases, we found that lenders were 40 percent more likely to turn down Latino applicants for loans, 50 percent more likely to deny Asian/Pacific Islander applicants, and 70 percent more likely to deny Native American applicants than similar White applicants. Lenders were 80 percent more likely to reject Black applicants than similar White applicants. These are national rates.

In every case, the prospective borrowers of color looked almost exactly the same on paper as the White applicants, except for their race.

The industry had criticized previous similar analyses for not including financial factors they said would explain disparities in lending rates but were not public at the time: debts as a percentage of income, how much of the property’s assessed worth the person is asking to borrow, and the applicant’s credit score.

The first two are now public in the Home Mortgage Disclosure Act data. Including these financial data points in our analysis not only failed to eliminate racial disparities in loan denials, it highlighted new, devastating ones.

We found that lenders gave fewer loans to Black applicants than White applicants even when their incomes were high—$100,000 a year or more—and had the same debt ratios. In fact, high-earning Black applicants with less debt were rejected more often than high-earning White applicants who have more debt….(More)”
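As a rough illustration of the kind of adjusted-odds analysis The Markup describes—not its actual model, data, or variable list—here is a minimal sketch: a logistic regression on synthetic applications that holds financial covariates steady and reads off odds ratios of denial by group. All column names and numbers are hypothetical stand-ins for HMDA-style fields.

```python
# Minimal sketch of an adjusted-odds analysis (illustrative only; not The
# Markup's methodology). Real analyses control for many more factors (17 in
# the investigation above) and use actual HMDA records.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "race_ethnicity": rng.choice(["White", "Black", "Latino", "Asian_PI"], size=n),
    "debt_to_income": rng.uniform(0.1, 0.6, size=n),   # DTI ratio
    "loan_to_value": rng.uniform(0.5, 1.0, size=n),    # LTV ratio
    "income": rng.lognormal(11, 0.5, size=n),
})
# Synthetic outcome: denial here depends only on the financial covariates,
# so the group odds ratios below will hover near 1 by construction.
p = 1 / (1 + np.exp(-(-3 + 4 * df.debt_to_income + 2 * df.loan_to_value)))
df["denied"] = rng.binomial(1, p)

model = smf.logit(
    "denied ~ C(race_ethnicity, Treatment(reference='White'))"
    " + debt_to_income + loan_to_value + np.log(income)",
    data=df,
).fit(disp=False)

# exp(coefficient) is the adjusted odds ratio of denial relative to similar
# White applicants; a value of 1.8 would read as "80 percent more likely".
print(np.exp(model.params).filter(like="race_ethnicity"))
```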

The Secret to Making Democracy More Civil and Less Polarized


Essay by Matt Qvortrup: “…Too often, politicians hold referendums when they themselves are in a tight spot. As the economist John Matsusaka has written, governments often rely on referendums for issues that are “too hot to handle.” In the late 1990s, British Prime Minister Tony Blair held a referendum on a parliament for Scotland in order not to alienate voters in England, and in 2005, the French government submitted the European Constitution to voters for fear of upsetting the large segment of French voters who were skeptical of the EU.

This process of elected politicians submitting unpopular questions to voters is not direct democracy. It is an abuse thereof. And it is entirely out of step with the current moment and how people want to engage with the world. By contrast, over the past three decades, some local and national governments have taken a much more proactive approach to citizen engagement through participatory budgeting.

The idea is simple: the government distributes a percentage (typically 10 percent) of the local budget to the citizens, who decide what to spend the money on. “How would you spend one million of the City’s money?” asked a pamphlet distributed to New Yorkers in 2011 that introduced them to the process.

Participatory budgeting came to Tower Hamlets, one of the most unequal parts of London, in 2009 and 2010 in a project designed to help the area choose new social service providers. The borough was divided into eight smaller areas; in each, a representative section of community volunteers could question the providers on whatever they wished, including social responsibility and commitment to the community. Eventually, the citizens were able to negotiate with providers on the details of how service would work.

Finally, after this process, a vote was taken on which providers offered the best value and which were most likely to provide employment to local residents. This participatory project was a success. An evaluation by the local government association concluded that “a majority of participants said they had developed skills linked to empowerment, and the community overall felt they could better influence their local environment and services.” It was popular, too. More than 77 percent wanted the council to repeat the event in the future. This level of engagement was considerably above the average for similar boroughs, where as few as 20 percent of residents even bother to vote.

The Tower Hamlets experiment—as well as participatory budgeting in places as different as Porto Alegre, Brazil and Paris, France—shows that citizens behave responsibly when they are given responsibility.

The money allocated in participatory budgeting is finite, and those involved in the process know that they have to make hard choices. Admittedly “trust” is a difficult concept to measure, but research by the World Bank suggests that citizen engagement grows trust in the political system. Moreover, citizens learn democracy by doing it. As Harvard political scientist Jane Mansbridge wrote, “Participating in democratic decisions makes many participants better citizens.”…(More)”.

How local governments are scaring tech companies


Ben Brody at Protocol: “Congress has failed to regulate tech, so states and cities are stepping in with their own approaches to food delivery apps, AI regulation and, yes, privacy. Tech doesn’t like what it sees….

Andrew Rigie said it isn’t worth waiting around for tech regulation in Washington.

“New York City is a restaurant capital of the world,” Rigie told Protocol. “We need to lead on these issues.”

Rigie, executive director of the New York City Hospitality Alliance, has pushed for New York City’s new laws on food delivery apps such as Uber Eats. His group supported measures to make permanent a cap on the service fees the apps charge to restaurants, ban the apps from listing eateries without permission, and require the apps to share customer information with restaurants that ask for it.

While Rigie’s official purview is dining in the Big Apple, his belief that the local government should lead on regulating tech companies in a way Washington hasn’t has become increasingly common.

“It wouldn’t be a surprise if lawmakers elsewhere seek to implement similar policies,” Rigie said. “Some of it could potentially come from the federal government, but New York City can’t wait for the federal government to maybe act.”

New York is not the only city to take action. While the Federal Trade Commission has faced calls to regulate third-party food delivery apps at a national level, San Francisco was first to pass a permanent fee cap for them in June.

Food apps are just a microcosm highlighting the patchworks of local-level regulation that are developing, or are already a fact of life, for tech. These regulatory patchworks occur when state and local governments move ahead of Congress to pass their own, often divergent, laws and rules. So far, states and municipalities are racing ahead of the feds on issues such as cybersecurity, municipal broadband, content moderation, gig work, the use of facial recognition, digital taxes, mobile app store fees and consumer rights to repair their own devices, among others.

Many in tech became familiar with the idea when the California Consumer Privacy Act passed in 2018, making it clear more states would follow suit, although the possibility has popped up throughout modern tech policy history on issues such as privacy requirements on ISPs, net neutrality and even cybersecurity breach notification.

Many patchworks reflect the stance of advocates, consumers and legislators that Washington has simply failed to do its job on tech. The resulting uncompromising or inconsistent approaches by local governments also have tech companies worried enough to push Congress to overrule states and establish one uniform U.S. standard.

“With a bit of a vacuum at the federal level, states are looking to step in, whether that’s on content moderation, whether that’s on speech on platforms, antitrust and anticompetitive conduct regulation, data privacy,” said April Doss, executive director of Georgetown University’s Institute for Technology Law and Policy. “It is the whole bundle of issues.”…(More)

Looking Under the Hood of AI’s Dubious Models


Essay by Ethan Edwards: “In 2018, McKinsey Global Institute released “Notes from the AI Frontier,” a report that seeks to predict the economic impact of artificial intelligence. Looming over the report is how the changing nature of work might transform society and pose challenges for policymakers. The good news is that the experts at McKinsey think that automation will create more jobs than it eliminates, but obviously it’s not a simple question. And the answer they give rests on sophisticated econometric models that include a variety of qualifications and estimates. Such models are necessarily simplified, and even reductionistic, but are they useful? And for whom?

Without a doubt, when it comes to predictive modeling, the center of the action in our society—and the industry through which intense entrepreneurial energy and venture capital flows—is artificial intelligence itself. AI, of course, is nothing new. A subdiscipline dedicated to mimicking human capacities in sensing, language, and thought, it’s nearly as old as computer science itself. But for the last ten years or so the promise and the hype of AI have only accelerated. The most impressive results have come from something called “neural nets,” which use linear algebra to mimic some of the biological structures of our brain cells and have been combined with far better hardware developed for video games. In only a few years, neural nets have revolutionized image processing, language processing, audio analysis, and media recommendation. The hype is that they can do far more.
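To make the “linear algebra” point concrete, here is a toy sketch (mine, not the essay’s): a single neural-net layer is just a matrix multiply followed by a simple nonlinearity, and stacking such layers is exactly the kind of arithmetic that video-game hardware happens to be very good at.

```python
# Toy illustration of a neural net's core arithmetic: each layer is a matrix
# multiply plus a nonlinearity. Sizes and weights are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 784))           # e.g. a flattened 28x28 image
W1, b1 = rng.normal(size=(784, 128)), np.zeros(128)
W2, b2 = rng.normal(size=(128, 10)), np.zeros(10)

hidden = np.maximum(0, x @ W1 + b1)     # "neurons" firing: ReLU(xW + b)
logits = hidden @ W2 + b2               # 10 class scores, e.g. digits 0-9
print(logits.shape)                     # (1, 10)
```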

If we are—as many promoters assert—close to AIs that can do everything a human knowledge worker can and more, that is obviously a disruptive, even revolutionary, prospect. It’s also a claim that has turned on the spigot of investment capital. And that’s one reason it’s difficult to know the true potential of the industry. Talking about AI is a winning formula for startups, researchers, and anyone who wants funding, enough that the term AI gets used for more than just neural nets and is now a label for computer-based automation in general. Older methods that have nothing to do with the new boom have been rebranded under AI. Think tanks and universities are hosting seminars on the impact of AI on fields on which it has so far had no impact. Some startups that have built their company’s future profitability on the promise of their AI systems have actually had to hire low-wage humans to act like the hoped-for intelligences for customers and investors while they wait for the technology to catch up. Such hype produces a funhouse mirror effect that distorts the potential and therefore the value of firms and all but guarantees that some startups will squander valuable resources with broken (or empty) promises. But as long as some companies do keep their promises, it’s a gamble that many investors are still willing to take….(More)”.

Afghan people face an impossible choice over their digital footprint


Nighat Dad at New Scientist: “The swift progress of the Taliban in Afghanistan has been truly shocking…Though the Taliban spokesperson Zabihullah Mujahid told the press conference that it wouldn’t be seeking “revenge” against people who had opposed them, many Afghan people are understandably still worried. On top of this, they — including those who worked with Western forces and international NGOs, as well as foreign journalists — have been unable to leave the country, as flight capacity has been taken over by Western countries evacuating their citizens.

As such, people have been attempting to move quickly to erase their digital footprints, built up during the 20 years of the previous US-backed governments. Some Afghan activists have been reaching out to me directly to help them put in place robust mobile security and asking how to trigger a mass deletion of their data.

The last time the Taliban was in power, social media barely existed and smartphones had yet to take off. Now, around 4 million people in Afghanistan regularly use social media. Yet, despite the huge rise of digital technologies, a comparative rise in digital security hasn’t happened.

There are few digital security resources that are suitable for people in Afghanistan to use. The leading guide on how to properly delete your digital history by Human Rights First is a brilliant place to start. But unfortunately it is only available in English and unofficially in Farsi. There are also some other guides available in Farsi thanks to the thriving community of tech enthusiasts who have been working for human rights activists living in Iran for years.

However, many of these guides will still be unintelligible for those in Afghanistan who speak Dari or Pashto, for example…

People in Afghanistan who worked with Western forces also face an impossible choice as countries where they might seek asylum often require digital proof of their collaboration. Keep this evidence and they risk persecution from the Taliban, delete it and they may find their only way out no longer available.

Millions of people’s lives will now be vastly different due to the regime change. Digital security feels like one thing that could have been sorted out in advance. We are yet to see exactly how Taliban 2.0 will be different to that which went before. And while the so-called War on Terror appears to be over, I fear a digital terror offensive may just be beginning…(More).

Mathematicians are deploying algorithms to stop gerrymandering


Article by Siobhan Roberts: “The maps for US congressional and state legislative races often resemble electoral bestiaries, with bizarrely shaped districts emerging from wonky hybrids of counties, precincts, and census blocks.

It’s the drawing of these maps, more than anything—more than voter suppression laws, more than voter fraud—that determines how votes translate into who gets elected. “You can take the same set of votes, with different district maps, and get very different outcomes,” says Jonathan Mattingly, a mathematician at Duke University in the purple state of North Carolina. “The question is, if the choice of maps is so important to how we interpret these votes, which map should we choose, and how should we decide if someone has done a good job in choosing that map?”
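Mattingly’s point can be seen with a toy example (made-up numbers, not from the article): the same precinct-level votes, grouped into districts two different ways, produce different seat counts.

```python
# Same votes, two different district maps, different outcomes.
precinct_votes_A = [55, 40, 70, 45, 30, 60]   # party A's vote share (%) per precinct
maps = {
    "map_1": [(0, 1), (2, 3), (4, 5)],        # which precincts form each district
    "map_2": [(0, 4), (1, 2), (3, 5)],
}

for name, districts in maps.items():
    seats = sum(
        1
        for d in districts
        if sum(precinct_votes_A[i] for i in d) / len(d) > 50
    )
    print(name, "-> party A wins", seats, "of 3 districts")
# map_1 -> party A wins 1 of 3 districts
# map_2 -> party A wins 2 of 3 districts
```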

Over recent months, Mattingly and like-minded mathematicians have been busy in anticipation of a data release expected today, August 12, from the US Census Bureau. Every decade, new census data launches the decennial redistricting cycle—state legislators (or sometimes appointed commissions) draw new maps, moving district lines to account for demographic shifts.

In preparation, mathematicians are sharpening new algorithms—open-source tools, developed over recent years—that detect and counter gerrymandering, the egregious practice giving rise to those bestiaries, whereby politicians rig the maps and skew the results to favor one political party over another. Republicans have openly declared that with this redistricting cycle they intend to gerrymander a path to retaking the US House of Representatives in 2022….(More)”.
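The detection tools the article alludes to mostly follow an ensemble or “outlier” logic; the sketch below caricatures that idea with a naive random sampler standing in for real redistricting algorithms (which respect contiguity, population balance, and other legal constraints), so it is illustrative only.

```python
# Ensemble/outlier sketch: generate many neutral plans, compute each plan's
# seat count from the same votes, and ask whether a proposed plan is extreme.
import random

precinct_votes_A = [55, 40, 70, 45, 30, 60]
precincts = list(range(len(precinct_votes_A)))

def seats_for(districts):
    return sum(
        1 for d in districts
        if sum(precinct_votes_A[i] for i in d) / len(d) > 50
    )

ensemble = []
for _ in range(10_000):
    random.shuffle(precincts)                      # stand-in for a real sampler
    plan = [precincts[0:2], precincts[2:4], precincts[4:6]]
    ensemble.append(seats_for(plan))

proposed = seats_for([(0, 4), (1, 2), (3, 5)])     # the "map_2" plan above
share_as_extreme = sum(s >= proposed for s in ensemble) / len(ensemble)
print(f"proposed plan: {proposed} seats; fraction of ensemble at least as favorable: {share_as_extreme:.2f}")
```

If a proposed map delivers a seat count that almost no plan in the ensemble produces from the same votes, that is evidence the map, not the voters, is doing the work.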

The Illusion of Inclusion — The “All of Us” Research Program and Indigenous Peoples’ DNA


Article by Keolu Fox: “Raw data, including digital sequence information derived from human genomes, have in recent years emerged as a top global commodity. This shift is so new that experts are still evaluating what such information is worth in a global market. In 2018, the direct-to-consumer genetic-testing company 23andMe sold access to its database containing digital sequence information from approximately 5 million people to GlaxoSmithKline for $300 million. Earlier this year, 23andMe partnered with Almirall, a Spanish drug company that is using the information to develop a new anti-inflammatory drug for autoimmune disorders. This move marks the first time that 23andMe has signed a deal to license a drug for development.

Eighty-eight percent of people included in large-scale studies of human genetic variation are of European ancestry, as are the majority of participants in clinical trials. Corporations such as Geisinger Health System, Regeneron Pharmaceuticals, AncestryDNA, and 23andMe have already mined genomic databases for the strongest genotype–phenotype associations. For the field to advance, a new approach is needed. There are many potential ways to improve existing databases, including “deep phenotyping,” which involves collecting precise measurements from blood panels, questionnaires, cognitive surveys, and other tests administered to research participants. But this approach is costly and physiologically and mentally burdensome for participants. Another approach is to expand existing biobanks by adding genetic information from populations whose genomes have not yet been sequenced — information that may offer opportunities for discovering globally rare but locally common population-specific variants, which could be useful for identifying new potential drug targets.

Many Indigenous populations have been geographically isolated for tens of thousands of years. Over time, these populations have developed adaptations to their environments that have left specific variant signatures in their genomes. As a result, the genomes of Indigenous peoples are a treasure trove of unexplored variation. Some of this variation will inevitably be identified by programs like the National Institutes of Health (NIH) “All of Us” research program. NIH leaders have committed to the idea that at least 50% of this program’s participants should be members of underrepresented minority populations, including U.S. Indigenous communities (Native Americans, Alaskan Natives, and Native Hawaiians), a decision that explicitly connects diversity with the program’s goal of promoting equal enjoyment of the future benefits of precision medicine.

But there are reasons to believe that this promise may be an illusion….(More)”.

Manifesto Destiny


Essay by Lidija Haas: “Manifesto is the form that eats and repeats itself. Always layered and paradoxical, it comes disguised as nakedness, directness, aggression. An artwork aspiring to be a speech act—like a threat, a promise, a joke, a spell, a dare. You can’t help but thrill to language that imagines it can get something done. You also can’t help noticing the similar demands and condemnations that ring out across the decades and the centuries—something will be swept away or conjured into being, and it must happen right this moment. While appearing to invent itself ex nihilo, the manifesto grabs whatever magpie trinkets it can find, including those that drew the eye in earlier manifestos. This is a form that asks readers to suspend their disbelief, and so like any piece of theater, it trades on its own vulnerability, invites our complicity, as if only the quality of our attention protects it from reality’s brutal puncture. A manifesto is a public declaration of intent, a laying out of the writer’s views (shared, it’s implied, by at least some vanguard “we”) on how things are and how they should be altered. Once the province of institutional authority, decrees from church or state, the manifesto later flowered as a mode of presumption and dissent. You assume the writer stands outside the halls of power (or else, occasionally, chooses to pose and speak from there). Today the US government, for example, does not issue manifestos, lest it sound both hectoring and weak. The manifesto is inherently quixotic—spoiling for a fight it’s unlikely to win, insisting on an outcome it lacks the authority to ensure.

Somewhere a manifesto is always being scrawled, but the ones that survive have usually proliferated at times of ferment and rebellion, like the pamphlets of the Diggers in seventeenth-century England, or the burst of exhortations that surrounded the French Revolution, including, most memorably, Mary Wollstonecraft’s 1792 A Vindication of the Rights of Woman. The manifesto is a creature of the Enlightenment: its logic depends on ideals of sovereign reason, social progress, a universal subject on whom equal rights should (must) be bestowed. Still unsurpassed as a model (for style, force, economy, ambition) is Marx and Engels’s 1848 Communist Manifesto, crammed with killer lines, which Marshall Berman called “the first great modernist work of art.” In its wake came the Futurists—“We wish to destroy museums, libraries, academies of any sort, and fight against moralism, feminism, and every kind of materialistic, self-serving cowardice”—and the great flood of manifestos by artists, activists, and other renegades in the decades after 1910, followed by another peak in the 1960s and ’70s.

After that point, fewer broke through the general noise, though those that have lasted cast a weird light back on what came before: Donna J. Haraway’s postmodern 1985 “A Cyborg Manifesto,” for instance, in refusing fantasies of wholeness, purity, full communication—“The feminist dream of a common language, like all dreams . . . of perfectly faithful naming of experience, is a totalizing and imperialist one”—presents the manifesto as a form that can speak from the corner of its mouth, that always says more and less than it appears to say, that teases and exaggerates, that usefully undermines itself. Haraway makes an explicit case for “serious play” and for irreconcilable contradictions, introducing her “effort to build an ironic political myth faithful to feminism, socialism, and materialism. . . . More faithful as blasphemy is faithful, than as reverent worship and identification.” By directly announcing its own tricksiness (an extra contradiction in itself), “A Cyborg Manifesto” seems both to critique its predecessors and to hint that even the most overweening of them were never quite designed to be read straight….(More)”.

Generationalism is bad science


Essay by Cort W Rudolph: “Millennials – the much-maligned generation of people who, according to the Pew Research Center, were born between 1981 and 1996 – started turning 40 this year. This by itself is not very remarkable, but a couple of related facts bear consideration. In the United States, legislation that protects ‘older workers’ from discrimination applies to those aged 40 and over. There is a noteworthy irony here: a group of people who have long been branded negatively by their elders and accused of ‘killing’ cultural institutions ranging from marriage to baseball to marmalade are now considered ‘older’ in the eyes of the government. Inevitably, the latest round of youngsters grows up, complicating the stereotypes attached to them in youth. More importantly, though, the concept of a discrete generation of ‘millennials’ – like that of the ‘Generation X’ that preceded these people, the ‘Generation Z’ that will soon follow them into middle adulthood, and indeed the entire notion of ‘generations’ – is completely made up….

The lack of evidence stems primarily from the fact that there is no research methodology that would allow us to unambiguously identify generations, let alone study whether there are differences between them. We must fall back on theory and logic to parse whether what we see is due to generations or some other phenomenon related to age or the passing of time. In our research, my colleagues and I have suggested that, owing to these limitations, there has never actually been a genuine study of generations.

Generations create a lens through which we interact with others, shaping various forms of social behaviour

Generally, when researchers seek to identify generations, they consider the year in which people were born (their ‘cohort’) as a proxy for their generation. This practice has become well established and is usually not questioned. To form generations using this approach, people are rather arbitrarily grouped together into a band of birth years (for example, members of one generation are born between 19XX and 20YY, whereas members of the next generation are born between 20YY and 20ZZ, etc). The problem with doing this, especially when people are studied only at a single point in time, is that it is impossible to separate the apparent influence of one’s birth year (being part of a certain ‘generation’) from how old one is at the time of the study. This means that studies that purport to offer evidence for generational differences could just as easily be showing the effects of being a particular age – a 25-year-old is likely to think and act differently than a 45-year-old does, regardless of the ‘generation’ they belong to.

Alternatively, some studies adopt a ‘cross-temporal’ approach to studying generations and attempt to hold the effect of age constant (for example, comparing 18-year-olds surveyed in 1980, 1990, 2000, 2010, etc). The issue with this approach is that any effects of living at a particular time (eg, 2010) – on political attitudes, for example – are now easily misconstrued as effects of having been born in a certain year. As such, we again cannot unambiguously attribute the findings to generational membership. This is a well-known issue. Indeed, nearly every study that has ever tried to investigate generations falls into some form of this trap.
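The identification problem Rudolph describes can be shown in a few lines (my illustration, not his): because birth cohort equals survey year minus age, fixing either the survey year or the age leaves cohort perfectly collinear with the remaining variable, so no model can attribute an effect to one rather than the other.

```python
# Age, period (survey year), and cohort are linearly dependent: cohort = period - age.
import numpy as np

# Cross-sectional design: one survey year, many ages.
survey_year = np.full(60, 2021)
age = np.arange(18, 78)
cohort = survey_year - age
print(np.corrcoef(age, cohort)[0, 1])            # -1.0: age and cohort indistinguishable

# Cross-temporal design: one age, many survey years.
age_fixed = np.full(5, 18)
survey_years = np.array([1980, 1990, 2000, 2010, 2020])
cohorts = survey_years - age_fixed
print(np.corrcoef(survey_years, cohorts)[0, 1])  # +1.0: period and cohort indistinguishable
```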

Recently, the National Academies of Sciences, Engineering, and Medicine in the US published the results of a consensus study on the idea of generations and generational differences at work. The conclusions of this study were clear and direct: there is little credible scientific evidence to back up the idea of generations and generational differences, and the (mis)application of these ideas has the potential to detrimentally affect people regardless of their age.

Where does this leave us? Absent evidence, or a valid way of disentangling the complexities of generations through research, what do we do with the concept of generations? Recognising these challenges, we can shift the focus from understanding the supposed natures of generations to understanding the existence and persistence of generational concepts and beliefs. My colleagues and I have advanced the argument that generations exist because they are willed into being. In other words, generations are socially constructed through discourse on ageing in society; they exist because we establish them, label them, ascribe traits to them, and then promote and legitimise them through various media channels (eg, books, magazines, and even film and television), general discourse and through more formalised policy guidance….(More)”

Remove obstacles to sharing health data with researchers outside of the European Union


Heidi Beate Bentzen et al. in Nature: “International sharing of pseudonymized personal data among researchers is key to the advancement of health research and is an essential prerequisite for studies of rare diseases or subgroups of common diseases to obtain adequate statistical power.

Pseudonymized personal data are data on which identifiers such as names are replaced by codes. Research institutions keep the ‘code key’ that can link an individual person to the data securely and separately from the research data and thereby protect privacy while preserving the usefulness of data for research. Pseudonymized data are still considered personal data under the General Data Protection Regulation (GDPR) 2016/679 of the European Union (EU) and, therefore, international transfers of such data need to comply with GDPR requirements. Although the GDPR does not apply to transfers of anonymized data, the threshold for anonymity under the GDPR is very high; hence, rendering data anonymous to the level required for exemption from the GDPR can diminish the usefulness of the data for research and is often not even possible.
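A minimal sketch of pseudonymization as described here, with hypothetical field names and illustrative only: direct identifiers are swapped for random codes, and the code key linking codes back to people is held securely and separately from the research data.

```python
# Pseudonymization sketch: identifiers replaced by codes; the code key is
# kept apart from the research data (which remains personal data under GDPR).
import secrets

participants = [
    {"name": "Alice Example", "national_id": "010190-12345", "genotype": "AA"},
    {"name": "Bob Example", "national_id": "020285-67890", "genotype": "AG"},
]

code_key = {}        # held securely by the research institution, never shared
research_data = []   # what may be transferred for analysis

for person in participants:
    code = secrets.token_hex(8)
    code_key[code] = {"name": person["name"], "national_id": person["national_id"]}
    research_data.append({"pseudonym": code, "genotype": person["genotype"]})

print(research_data)  # identifiers removed, but re-identifiable via code_key
```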

The GDPR requires that transfers of personal data to international organizations or countries outside the European Economic Area (EEA)—which comprises the EU Member States plus Iceland, Liechtenstein and Norway—be adequately protected. Over the past two years, it has become apparent that challenges emerge for the sharing of data with public-sector researchers in a majority of countries outside of the EEA, as only a few decisions stating that a country offers an adequate level of data protection have so far been issued by the European Commission. This is a problem, for example, with researchers at federal research institutions in the United States. Transfers to international organizations such as the World Health Organization are similarly affected. Because these obstacles ultimately affect patients as beneficiaries of research, solutions are urgently needed. The European scientific academies have recently published a report explaining the consequences of stalled data transfers and pushing for responsible solutions…(More)”.