Political Science Has Its Own Lab Leaks


Paul Musgrave at Foreign Policy: “The idea of a lab leak has gone, well, viral. As a political scientist, I cannot assess whether the evidence shows that COVID-19 emerged naturally or from laboratory procedures (a question on which many experts strenuously disagree). Yet as a political scientist, I do think that my discipline can learn something from thinking seriously about our own “lab leaks” and the damage they could cause.

A political science lab leak might seem as much of a punchline as the concept of a mad social scientist. Nevertheless, the notion that scholarly ideas and findings can escape the nuanced, cautious world of the academic seminar and transform into new forms, even becoming threats, is a more compelling metaphor if you think of academics as professional crafters of ideas intended to survive in a hostile environment. Given the importance of what we study, from nuclear war to international economics to democratization and genocide, the escape of a faulty idea could have—and has had—dangerous consequences for the world.

Academic settings provide an evolutionarily challenging environment in which ideas adapt to survive. The process of developing and testing academic theories provides metaphorical gain-of-function accelerations of these dynamics. To survive peer review, an idea has to be extremely lucky or, more likely, crafted to evade the antibodies of academia (reviewers’ objections). By that point, an idea is either so clunky it cannot survive on its own—or it is optimized to thrive in a less hostile environment.

Think tanks and magazines like the Atlantic (or Foreign Policy) serve as metaphorical wet markets where wild ideas are introduced into new and vulnerable populations. Although some authors lament a putative decline of social science’s influence, the spread of formerly academic ideas like intersectionality and the use of quantitative social science to reshape electioneering suggest that ideas not only move from the academy but can flourish once transplanted. This is hardly new: Terms from disciplines including psychoanalysis (“ego”), evolution (“survival of the fittest”), and economics (the “free market” and Marxism both) have escaped from the confines of academic work before…(More)”.

Concern trolls and power grabs: Inside Big Tech’s angry, geeky, often petty war for your privacy


Article by Issie Lapowsky: “Inside the World Wide Web Consortium, where the world’s top engineers battle over the future of your data….

The W3C’s members do it all by consensus in public GitHub forums and open Zoom meetings with meticulously documented meeting minutes, creating a rare archive on the internet of conversations between some of the world’s most secretive companies as they collaborate on new rules for the web in plain sight.

But lately, that spirit of collaboration has been under intense strain as the W3C has become a key battleground in the war over web privacy. Over the last year, far from the notice of the average consumer or lawmaker, the people who actually make the web run have converged on this niche community of engineers to wrangle over what privacy really means, how the web can be more private in practice and how much power tech giants should have to unilaterally enact this change.

On one side are engineers who build browsers at Apple, Google, Mozilla, Brave and Microsoft. These companies are frequent competitors that have come to embrace web privacy on drastically different timelines. But they’ve all heard the call of both global regulators and their own users, and are turning to the W3C to develop new privacy-protective standards to replace the tracking techniques businesses have long relied on.

On the other side are companies that use cross-site tracking for things like website optimization and advertising, and are fighting for their industry’s very survival. That includes small firms like Rosewell’s, but also giants of the industry, like Facebook.

Rosewell has become one of this side’s most committed foot soldiers since he joined the W3C last April. Where Facebook’s developers can only offer cautious edits to Apple and Google’s privacy proposals, knowing full well that every exchange within the W3C is part of the public record, Rosewell is decidedly less constrained. On any given day, you can find him in groups dedicated to privacy or web advertising, diving into conversations about new standards browsers are considering.

Rather than asking technical questions about how to make browsers’ privacy specifications work better, he often asks philosophical ones, like whether anyone really wants their browser making certain privacy decisions for them at all. He’s filled the W3C’s forums with concerns about its underlying procedures, sometimes a dozen at a time, and has called upon the W3C’s leadership to more clearly articulate the values for which the organization stands….(More)”.

Government algorithms are out of control and ruin lives



Nani Jansen Reventlow at Open Democracy: “Government services are increasingly being automated and technology is relied on more and more to make crucial decisions about our lives and livelihoods. This includes decisions about what type of support we can access in times of need: welfare benefits and other government services.

Technology has the potential not only to reproduce but to amplify structural inequalities in our societies. Combine this drive for automation with a broader context of criminalising poverty and systemic racism, and the effects can be disastrous.

A recent example is the ‘child benefits scandal’ that brought down the Dutch government at the start of 2021. In the Netherlands, working parents are eligible for a government contribution toward the costs of daycare. This can run up to 90% of the actual costs for those with a low income. While contributions are often directly paid to childcare providers, parents are responsible for them. This means that, if the tax authorities determine that any allowance was wrongfully paid out, parents are liable for repaying it.

To detect cases of fraud, the Dutch tax authorities used a system that was outright discriminatory. An investigation by the Dutch Data Protection Authority last year showed that parents were singled out for special scrutiny because of their ethnic origin or dual nationality. “The whole system was organised in a discriminatory manner and was also used as such,” it stated.

The fallout of these ‘fraud detection’ efforts was enormous. It is currently estimated that 46,000 parents were wrongly accused of having fraudulently claimed child care allowances. Families were forced to repay tens of thousands of euros, leading to financial hardship, loss of livelihood, homes, and in one case, even loss of life – one parent died by suicide. While we can still hope that justice for these families won’t be denied, it will certainly be delayed: this weekend, it became clear that it could take up to ten years to handle all claims. An unacceptable timeline, given how precarious the situation will be for many of those affected….(More)”.

Could Trade Agreements Help Address the Wicked Problem of Cross-Border Disinformation?


Essay by Susan Ariel Aaronson: “Whether produced domestically or internationally, disinformation is a “wicked” problem that has global impacts. Although trade agreements contain measures that address cross-border disinformation, domestically created disinformation remains out of their reach. This paper looks at how policy makers can use trade agreements to mitigate disinformation and spam while implementing financial and trade sanctions against entities and countries that engage in disseminating cross-border disinformation. Developed and developing countries will need to work together to solve this global problem….(More)”.

Seek diversity to solve complexity


Katrin Prager at Nature: “As a social scientist, I know that one person cannot solve a societal problem on their own — and even a group of very intelligent people will struggle to do it. But we can boost our chances of success if we ensure not only that the team members are intelligent, but also that the team itself is highly diverse.

By ‘diverse’ I mean demographic diversity encompassing things such as race, gender identity, class, ethnicity, career stage and age, and cognitive diversity, including differences in thoughts, insights, disciplines, perspectives, frames of reference and thinking styles. And the team needs to be purposely diverse instead of arbitrarily diverse.

In my work I focus on complex world problems, such as how to sustainably manage our natural resources and landscapes, and I’ve found that it helps to deliberately assemble diverse teams. This effort requires me to be aware of the different ways in which people can be diverse, and to reflect on my own preferences and biases. Sometimes the teams might not be as diverse as I’d like. But I’ve found that making the effort not only to encourage diversity, but also to foster better understanding between team members reaps dividends….(More)”.

How to be a good ancestor


Article by Sigal Samuel: “In 2015, 20 residents of Yahaba, a small town in northeastern Japan, went to their town hall to take part in a unique experiment.

Their goal was to design policies that would shape the future of Yahaba. They would debate questions typically reserved for politicians: Would it be better to invest in infrastructure or child care? Should we promote renewable energy or industrial farming?

But there was a twist. While half the citizens were invited to be themselves and express their own opinions, the remaining participants were asked to put on special ceremonial robes and play the part of people from the future. Specifically, they were told to imagine they were from the year 2060, meaning they’d be representing the interests of a future generation during group deliberations.

What unfolded was striking. The citizens who were just being themselves advocated for policies that would boost their lifestyle in the short term. But the people in robes advocated for much more radical policies — from massive health care investments to climate change action — that would be better for the town in the long term. They managed to convince their fellow citizens that taking that approach would benefit their grandkids. In the end, the entire group reached a consensus that they should, in some ways, act against their own immediate self-interest in order to help the future.

This experiment marked the beginning of Japan’s Future Design movement. What started in Yahaba has since been replicated in city halls around the country, feeding directly into real policymaking. It’s one example of a burgeoning global attempt to answer big moral questions: Do we owe it to future generations to take their interests into account? What does it look like to incorporate the preferences of people who don’t even exist yet? How can we be good ancestors?…(More)”.

Sovereignty and Data Localization


Paper by Emily Wu: “Data localization policies impose obligations on businesses to store and process data locally, rather than in servers located overseas. The adoption of data localization laws has been increasing, driven by the fear that a nation’s sovereignty will be threatened by its inability to exert full control over data stored outside its borders. This is particularly relevant to the US given its dominance in many areas of the digital ecosystem including artificial intelligence and cloud computing.

Unfortunately, data localization policies are causing more harm than good. They are ineffective at improving security, do little to simplify the regulatory landscape, and are causing economic harms to the markets where they are imposed. In order to move away from these policies, the fear of sovereignty dilution must be addressed by alternative means. This will be achieved most effectively by focusing on both technical concerns and value concerns.

To address technical concerns, the US should:

1. Enact a federal privacy law to reduce the fears that foreign nations have about the power of US tech companies.

2. Mandate privacy and security frameworks by industry to demonstrate the importance that US industry places on privacy and security, recognizing it as fundamental to their business success.

3. Increase investment in cybersecurity to ensure that in a competitive market, the US has the best offering in both customer experience and security assurance.

4. Expand multi-lateral agreements under the CLOUD Act to help alleviate the concerns that data stored by US companies will be inaccessible to foreign governments when relevant to a criminal investigation…(More)”

Manipulation As Theft


Paper by Cass Sunstein: “Should there be a right not to be manipulated? What kind of right? On Kantian grounds, manipulation, lies, and paternalistic coercion are moral wrongs, and for similar reasons; they deprive people of agency, insult their dignity, and fail to respect personal autonomy. On welfarist grounds, manipulation, lies, and paternalistic coercion share a different characteristic; they displace the choices of those whose lives are directly at stake, and who are likely to have epistemic advantages, with the choices of outsiders, who are likely to lack critical information. Kantians and welfarists should be prepared to endorse a (moral) right not to be manipulated, though on very different grounds.

The moral prohibition on manipulation, like the moral prohibition on lies, should run against officials and regulators, not only against private institutions. At the same time, the creation of a legal right not to be manipulated raises hard questions, in part because of definitional challenges; there is a serious risk of vagueness and a serious risk of overbreadth. (Lies, as such, are not against the law, and the same is true of unkindness, inconsiderateness, and even cruelty.) With welfarist considerations in mind, it is probably best to start by prohibiting particular practices, while emphasizing that they are forms of manipulation and may not count as fraud. The basic goal should be to build on the claim that in certain cases, manipulation is a form of theft; the law should forbid theft, whether it occurs through force, lies, or manipulation. Some manipulators are thieves….(More)”

On regulation for data trusts


Paper by Aline Blankertz and Louisa Specht: “Data trusts are a promising concept for enabling data use while maintaining data privacy. Data trusts can pursue many goals, such as increasing the participation of consumers or other data subjects, putting data protection into practice more effectively, or strengthening data sharing along the value chain. They have the potential to become an alternative model to the large platforms, which are accused of accumulating data power and using it primarily for their own purposes rather than for the benefit of their users. To fulfill these hopes, data trusts must be trustworthy so that their users understand and trust that data is being used in their interest.

It is an important step that policymakers have recognized the potential of data trusts. This should be followed by measures that address specific risks and thus promote trust in the services. Currently, the political approach is to subject all forms of data trusts to the same rules through “one size fits all” regulation. This is the case, for example, with the Data Governance Act (DGA), which gives data trusts little leeway to evolve in the marketplace.

To encourage the development of data trusts, it makes sense to broadly define them as all organizations that manage data on behalf of others while adhering to a legal framework (including competition, trade secrets, and privacy). Which additional rules are necessary to ensure trustworthiness should be decided depending on the use case. The risk of a use case should be considered as well as the need for incentives to act as a data trust.

Risk factors can be identified across sectors; in particular, centralized or decentralized data storage and voluntary or mandatory use of data trusts are among them. The business model is not a main risk factor. Although many regulatory proposals call for strict neutrality, several data trusts without strict neutrality appear trustworthy in terms of monetization or vertical integration. At the same time, it is unclear what incentives exist for developing strictly neutral data trusts. Neutrality requirements that go beyond what is necessary make it less likely that desired alternative models will develop and take hold….(More)”.

The uncounted: politics of data in global health


Essay by Sara L M Davis: “Data is seductive in global health politics. It seduces donors with the promise of cost-effectiveness in making the right investments in people’s health and of ensuring they get results and performance from the state projects they fund. It seduces advocates of gender equality with its power to make gender differences in health outcomes and burdens visible. The seduction of data is that of the quick or technocratic fix to complex social and political problems. Are women disproportionately impacted by COVID-19? Get better data to find out the extent of the problem. Do you want to save as many lives as possible?…(More)”.