Paper by Morten Fibieger Byskov and Keith Hyams: “Extreme impacts from climate change are already being felt around the world. The policy choices that we make now will affect not only how high global temperatures rise but also how well-equipped future economies and infrastructures are to cope with these changes. The interests of future generations must therefore be central to climate policy and practice. This raises the questions: Who should represent the interests of future generations with respect to climate change? And according to which criteria should we judge whether a particular candidate would make an appropriate representative for future generations? In this essay, we argue that potential representatives of future generations should satisfy what we call a “hypothetical acceptance criterion,” which requires that the representative could reasonably be expected to be accepted by future generations. This overarching criterion in turn gives rise to two derivative criteria. These are, first, the representative’s epistemic and experiential similarity to future generations, and second, his or her motivation to act on behalf of future generations. We conclude that communities already adversely affected by climate change best satisfy these criteria and are therefore able to command the hypothetical acceptance of future generations…(More)”.
How social media has undermined our constitutional architecture
Article by Danielle Allen: “Our politics are awful. On this we all agree. Often we feel there is nothing we can do. Yet there are steps to take. Before we can decide what to do, though, we have to face squarely the nature of the problem we are solving.
We face a crisis of representation. And — put bluntly — Facebook is the cause.
By crisis of representation, I do not mean that the other side’s representatives drive us all crazy. Of course, they do. I do not even mean that the incredibly negative nature of our political discourse is ruining the mental health of all of us. Of course, it is. What I mean is that the fundamental structural mechanism of our constitutional democracy is representation, and one of the pillars of the original design for that system has been knocked out from under us. As a result, the whole system no longer functions effectively.
Imagine that a truck has crashed into a supporting wall for your building. Your building is now structurally unsound and shifting dangerously in the wind. That’s the kind of situation I’m talking about.
In that abstract metaphor the building is our constitutional system, and social media is the truck. But explaining what I mean requires going back to the early design of our Constitution.
Ours is not the first era brought to its knees by polarization. After the Revolution, the nation was grinding to a halt under the Articles of Confederation. Congress couldn’t get a quorum. It couldn’t secure the revenue needed to pay war debts. Polarization — or, as they called it, “faction” — brought paralysis.
The whole point of writing the Constitution was to fix this aspect. James Madison made the case that the design of the Constitution would dampen factionalism. He argued this in the Federalist Papers, the famous op-eds that he, John Jay and Alexander Hamilton wrote advocating for the Constitution…
Madison couldn’t anticipate Facebook, and Facebook — with its historically unprecedented power to bind factions over great distances — knocked this pillar out from under us. In this sense, Facebook and the equally powerful social media platforms that followed it broke our democracy. They didn’t mean to. It’s like when your kid plays with a beach ball in the house and breaks your favorite lamp. But break it they did.
Now, the rest of us have to fix it.
Representation as designed cannot work under current conditions. We have no choice but to undertake a significant project of democracy renovation. We need an alternative to that original supporting wall to restore structural soundness to our institutions.
In coming columns, I will make the case for the recommendations that I consider most fundamental for a 21st-century system of representation that can address our needs. The goal should be responsive representation: representation that is inclusive of our extraordinary diversity and, of course, simultaneously effective. Our representatives must get stuff done.
Increasing the size of the House of Representatives is one recommendation from a bipartisan commission on democracy renovation that I recently co-chaired. The report we produced is called Our Common Purpose. …(More)”
EU Court Expands Definition of Sensitive Data, Prompting Legal Concerns for Companies
Article by Catherine Stupp: “Companies will be under increased pressure after Europe’s top court ruled they must apply special protections to data that firms previously didn’t consider sensitive.
Under the European Union’s General Data Protection Regulation, information about health, religion, political views and sexual orientation is considered sensitive. Companies generally aren’t allowed to process it unless they apply special safeguards.
The European Court of Justice on Aug. 1 determined that public officials in Lithuania had their sensitive data revealed because their spouses’ names were published online, which could indicate their sexual orientation. Experts say the implications will extend to other types of potentially sensitive information.
Data that might be used to infer a sensitive piece of information about a person is also sensitive, the court said. That could include unstructured data—which isn’t organized in databases and is therefore more difficult to search through and analyze—such as surveillance camera footage in a hospital that indicates a person was treated there, legal experts say. Records of a special airplane meal might reveal religious views.
The court ruling “raises a lot of practical complexities and a lot of difficulty in understanding if the data [organizations] have is sensitive or not,” said Dr. Gabriela Zanfir-Fortuna, vice president for global privacy at the Future of Privacy Forum, a think tank based in Washington, D.C.
Many companies with large data sets may not know they hold details that indirectly relate to sensitive information, privacy experts say. Identifying where that data is and deciding whether it could reveal personal details about an individual would be a huge undertaking, said Tobias Judin, head of the international section at the Norwegian data protection regulator.
“You can’t really comply with the law if your data set becomes so big that you don’t really know what’s in it,” Mr. Judin said.
The GDPR says companies can only process sensitive data in a few circumstances, such as if a person gives explicit consent for it to be used for a specified purpose.
Regulators have been grappling with the question of how to determine what is sensitive data. The Norwegian regulator last year fined gay-dating app Grindr LLC 65 million kroner, equivalent to roughly $6.7 million. The regulator said the user data was sensitive because use of the app indicated users’ sexual orientation.
Grindr said it doesn’t require users to share that data. The company appealed in February. Mr. Judin said his office is reviewing material submitted by the company as part of its appeal. Spain’s regulator came to a different conclusion in January and found that data Grindr shared for advertising purposes wasn’t sensitive….(More)”.
Can open-source technologies support open societies?
Report by Victoria Welborn and George Ingram: “In the 2020 “Roadmap for Digital Cooperation,” U.N. Secretary General António Guterres highlighted digital public goods (DPGs) as a key lever in maximizing the full potential of digital technology to accelerate progress toward the Sustainable Development Goals (SDGs) while also helping overcome some of its persistent challenges.
The Roadmap rightly pointed to the fact that, as with any new technology, there are risks around digital technologies that might be counterproductive to fostering prosperous, inclusive, and resilient societies. In fact, without intentional action by the global community, digital technologies may more naturally exacerbate exclusion and inequality by undermining trust in critical institutions, allowing consolidation of control and economic value by the powerful, and eroding social norms through breaches of privacy and disinformation campaigns.
Just as the pandemic has served to highlight the opportunity for digital technologies to reimagine and expand the reach of government service delivery, so too has it surfaced specific risks that are hallmarks of closed societies and authoritarian states—creating new pathways to government surveillance, reinforcing existing socioeconomic inequalities, and enabling the rapid proliferation of disinformation. Why then—in the face of these real risks—focus on the role of digital public goods in development?
As the Roadmap noted, DPGs are “open source software, open data, open AI models, open standards and open content that adhere to privacy and other applicable laws and best practices, do no harm, and help attain the SDGs.”[1] There are a number of reasons why such products have unique potential to accelerate development efforts, including widely recognized benefits related to more efficient and cost-effective implementation of technology-enabled development programming.
Historically, the use of digital solutions for development in low- and middle-income countries (LMICs) has been supported by donor investments in sector-specific technology systems, reinforcing existing silos and leaving countries with costly, proprietary software solutions with duplicative functionality and little interoperability across government agencies, much less underpinning private sector innovation. These silos are further codified through the development of sector-specific maturity models and metrics. An effective DPG ecosystem has the potential to enable the reuse and improvement of existing tools, thereby lowering overall cost of deploying technology solutions and increasing efficient implementation.
Beyond this proven reusability of DPGs and the associated cost and deployment efficiencies, do DPGs have even more transformational potential? Increasingly, there is interest in DPGs as drivers of inclusion and products through which to standardize and safeguard rights; these opportunities are less understood and remain unproven. To begin to fill that gap, this paper first examines the unique value proposition of DPGs in supporting open societies by advancing more equitable systems and by codifying rights. The paper then considers the persistent challenges to more fully realizing this opportunity and offers some recommendations for how to address these challenges…(More)”.
We don’t have a hundred biases, we have the wrong model
Blog by Jason Collins: “…Behavioral economics today is famous for its increasingly large collection of deviations from rationality, or, as they are often called, ‘biases’. While useful in applied work, it is time to shift our focus from collecting deviations from a model of rationality that we know is not true. Rather, we need to develop new theories of human decision-making to advance behavioral economics as a science. We need heliocentrism.
The dominant model of human decision-making across many disciplines, including my own, economics, is the rational-actor model. People make decisions based on their preferences and the constraints that they face. Whether implicitly or explicitly, they typically have the computational power to calculate the best decision and the willpower to carry it out. It’s a fiction but a useful one.
As has become broadly known through the growth of behavioral economics, there are many deviations from this model. (I am going to use the term behavioral economics through this article as a shorthand for the field that undoubtedly extends beyond economics to social psychology, behavioral science, and more.) This list of deviations has grown to the extent that if you visit the Wikipedia page ‘List of Cognitive Biases’ you will now see in excess of 200 biases and ‘effects’. These range from the classics described in the seminal papers of Amos Tversky and Daniel Kahneman through to the obscure.
We are still at the collection-of-deviations stage. There are not 200 human biases. There are 200 deviations from the wrong model…(More)”
AI ethics: the case for including animals
Paper by Peter Singer & Yip Fai Tse: “The ethics of artificial intelligence, or AI ethics, is a rapidly growing field, and rightly so. While the range of issues and groups of stakeholders concerned by the field of AI ethics is expanding, with speculation about whether it extends even to the machines themselves, there is a group of sentient beings who are also affected by AI, but are rarely mentioned within the field of AI ethics—the nonhuman animals. This paper seeks to explore the kinds of impact AI has on nonhuman animals, the severity of these impacts, and their moral implications. We hope that this paper will facilitate the development of a new field of philosophical and technical research regarding the impacts of AI on animals, namely, the ethics of AI as it affects nonhuman animals…(More)”.
Phones Know Who Went to an Abortion Clinic. Whom Will They Tell?
Patience Haggin at Wall Street Journal: “Concerns over collection and storage of reproductive health data are the latest challenge for the location-data industry, which over the past few years has faced scrutiny from lawmakers and regulators. Data-privacy laws in California and other states in recent years have placed new restrictions on the companies, such as requiring companies to give consumers the right to opt out of having their data sold.
The Federal Trade Commission last month said it would strictly enforce laws governing the collection, use and sharing of sensitive consumer data. “The misuse of mobile location and health information—including reproductive health data—exposes consumers to significant harm,” wrote Kristin Cohen, acting associate director for the commission’s division of privacy and identity protection.
Without clear regulations for the location-data industry’s data on abortion clinics, individual companies are determining how to respond to the implications of the Supreme Court ruling.
Alphabet Inc.’s Google recently said it would automatically delete visits to abortion clinics from its users’ location history.
Apple Inc. says it minimizes collection of personal data and that most location data is stored in ways the company can’t access. It has no way to access Health and Maps app data for people using updated operating systems, and can’t provide such data in response to government requests, the company says.
The vast location-data ecosystem includes many other lesser-known companies that are taking a different approach. A trade group for some of those firms, Network Advertising Initiative, announced a new set of voluntary standards for member companies in June, two days before the Dobbs ruling came out.
Participating companies, including Foursquare Labs Inc., Cuebiq Inc. and Precisely Inc.’s PlaceIQ, agreed not to use, sell or share precise location data about visits to sensitive locations—including abortion clinics—except to comply with a legal obligation…(More)”
Who Is Falling for Fake News?
Article by Angie Basiouny: “People who read fake news online aren’t doomed to fall into a deep echo chamber where the only sound they hear is their own ideology, according to a revealing new study from Wharton.
Surprisingly, readers who regularly browse fake news stories served up by social media algorithms are more likely to diversify their news diet by seeking out mainstream sources. These well-rounded news junkies make up more than 97% of online readers, compared with the scant 2.8% who consume online fake news exclusively.
“We find that these echo chambers that people worry about are very shallow. This idea that the internet is creating an echo chamber is just not holding out to be true,” said Senthil Veeraraghavan, a Wharton professor of operations, information and decisions.
Veeraraghavan is co-author of the paper, “Does Fake News Create Echo Chambers?” It was also written by Ken Moon, Wharton professor of operations, information and decisions, and Jiding Zhang, an assistant operations management professor at New York University Shanghai who earned her doctorate at Wharton.
The study, which examined the browsing activity of nearly 31,000 households during 2017, offers empirical evidence that goes against popular beliefs about echo chambers. While echo chambers certainly are dark and dangerous places, they aren’t metaphorical black holes that suck in every person who reads an article about, say, Obama birtherism theory or conspiracies about COVID-19 vaccines. The study found that households exposed to fake news actually increase their exposure to mainstream news by 9.1%.
“We were surprised, although we were very aware going in that there was much that we did not know,” Moon said. “One thing we wanted to see is how much fake news is out there. How do we figure out what’s fake and what’s not, and who is producing the fake news and why? The economic structure of that matters from a business perspective.”…(More)”
Sustaining Open Data as a Digital Common — Design principles for Common Pool Resources applied to Open Data Ecosystems
Paper by Johan Linåker and Per Runeson: “Digital commons is an emerging phenomenon and of increasing importance, as we enter a digital society. Open data is one example that makes up a pivotal input and foundation for many of today’s digital services and applications. Ensuring sustainable provisioning and maintenance of the data, therefore, becomes even more important.
We aim to investigate how such provisioning and maintenance can be collaboratively performed in the community surrounding a common. Specifically, we look at Open Data Ecosystems (ODEs), a type of community of actors, openly sharing and evolving data on a technological platform.
We use Elinor Ostrom’s design principles for Common Pool Resources as a lens to systematically analyze the governance of earlier reported cases of ODEs using a theory-oriented software engineering framework.
We find that, while natural commons must regulate consumption, digital commons such as open data maintained by an ODE must stimulate both use and data provisioning. Governance needs to enable such stimulus while also ensuring that the collective action can still be coordinated and managed within the frame of available maintenance resources of a community. Subtractability is, in this sense, a concern regarding the resources required to maintain the quality and value of the data, rather than the availability of data. Further, we derive empirically-based recommended practices for ODEs based on the design principles by Ostrom for how to design a governance structure in a way that enables a sustainable and collaborative provisioning and maintenance of the data.
ODEs are expected to play a role in data provisioning that democratizes the digital society and enables innovation from smaller commercial actors. Our empirically based guidelines intend to support this development…(More)”.
Whither Nudge? The Debate Itself Offers Lessons on the Influence of Social Science
Blog by Tony Hockley: “Pursuing impact can be a disturbing balancing act between spin and substance. Underdo the spin whilst maintaining substance and the impact will likely be zero, but credibility is upheld. Overdo the spin and risk the substance being diluted by marketing and misappropriation. The story of Nudge offers insights into what can happen when research has an unpredictably large impact in the world of politics and policy.
Has Nudge overdone the spin, and how much is a one-word book title to blame if it has? It is certainly true that the usual academic balancing act of spin versus substance was tipped by a publisher’s suggestion of a snappy title instead of the usual academic tongue-twister intelligible only to the initiated. Under the title Nudge the book found a receptive audience of policymakers looking to fix problems easily and on the cheap after the 2008 economic crash, and a public policy community eager to adopt exciting new terminology into their own areas of interest. ‘Behavioural Insights Teams’ quickly sprang up around the world, dubbed (very inaccurately) “nudge units.” There was little discernible push-back against this high-level misappropriation of the term, the general excitement, and the loss of strict definition attached to the authors’ underlying concept for nudge policies of “libertarian paternalism.” In short, the authors had lost control of their own work. The book became a global bestseller. In 2021 it was updated and republished, in what was described as “the final edition.” Perhaps in recognition that the concept had stretched to the end of its logical road?…(More)”.