Paper by Jian Jia, Ginger Zhe Jin & Liad Wagman: “Digital platforms are not only match-making intermediaries but also establish internal rules that govern all users in their ecosystems. To better understand the governing role of platforms, we study two Airbnb pro-guest rules that pertain to guest and host cancellations, using data on Airbnb and VRBO listings in 10 US cities. We demonstrate that such pro-guest rules can drive demand and supply to and from the platform, as a function of the local platform competition between Airbnb and VRBO. Our results suggest that platform competition sometimes dampens a platform-wide pro-guest rule and sometimes reinforces it, often with heterogeneous effects on different hosts. This implies that platform competition does not necessarily mitigate a platform’s incentive to treat the two sides asymmetrically, and any public policy in platform competition must consider its implications for all sides….(More)”.
Citizen science allows people to ‘really know’ their communities
UGAResearch: “Local populations understand their communities best. They’re familiar both with points of pride and with areas that could be improved. But determining the nature of those improvements from best practices, as well as achieving community consensus on implementation, can present a different set of challenges.
Jerry Shannon, associate professor of geography in the Franklin College of Arts & Sciences, worked with a team of researchers to introduce a citizen science approach in 11 communities across Georgia, from Rockmart to Monroe to Millen. This work combines local knowledge with emerging digital technologies to bolster community-driven efforts in multiple communities in rural Georgia. His research was detailed in a paper, “‘Really Knowing’ the Community: Citizen Science, VGI and Community Housing Assessments,” published in December in the Journal of Planning Education and Research.
Shannon worked with the Georgia Initiative for Community Housing, managed out of the College of Family and Consumer Sciences (FACS), to create tools for communities to evaluate and launch plans to address their housing needs and revitalization. This citizen science effort resulted in a more diverse and inclusive body of data that incorporated local perspectives.
“Through this project, we hope to further support and extend these community-driven efforts to assure affordable, quality housing,” said Shannon. “Rural communities don’t have the resources internally to do this work themselves. We provide training and tools to these communities.”
As part of their participation in the GICH program, each Georgia community assembled a housing team consisting of elected officials, members of community organizations and housing professionals such as real estate agents. The team recruited volunteers from student groups and religious organizations to conduct so-called “windshield surveys,” where participants work from their vehicle or walk the neighborhoods….(More)”
Living in Data: A Citizen’s Guide to a Better Information Future
Book by Jer Thorp: “To live in data in the twenty-first century is to be incessantly extracted from, classified and categorized, statisti-fied, sold, and surveilled. Data—our data—is mined and processed for profit, power, and political gain. In Living in Data, Thorp asks a crucial question of our time: How do we stop passively inhabiting data, and instead become active citizens of it?
Threading a data story through hippo attacks, glaciers, and school gymnasiums, around colossal rice piles, and over active minefields, Living in Data reminds us that the future of data is still wide open, that there are ways to transcend facts and figures and to find more visceral ways to engage with data, that there are always new stories to be told about how data can be used.
Punctuated with Thorp’s original and informative illustrations, Living in Data not only redefines what data is, but reimagines who gets to speak its language and how to use its power to create a more just and democratic future. Timely and inspiring, Living in Data gives us a much-needed path forward….(More)”.
AI and Shared Prosperity
Paper by Katya Klinova and Anton Korinek: “Future advances in AI that automate away human labor may have stark implications for labor markets and inequality. This paper proposes a framework to analyze the effects of specific types of AI systems on the labor market, based on how much labor demand they will create versus displace, while taking into account that productivity gains also make society wealthier and thereby contribute to additional labor demand. This analysis enables ethically minded companies creating or deploying AI systems, as well as researchers and policymakers, to take into account the effects of their actions on labor markets and inequality, and therefore to steer progress in AI in a direction that advances shared prosperity and an inclusive economic future for all of humanity…(More)”.
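The paper's basic accounting can be made concrete with a minimal sketch (this is not the authors' formal model; the function, variable names, and the assumed pass-through share are purely illustrative): an AI system's net effect on labor demand is the demand it creates, minus the demand it displaces, plus whatever additional demand arises when productivity gains are spent back into the economy.

```python
def net_labor_demand_effect(demand_created: float,
                            demand_displaced: float,
                            productivity_gain: float,
                            spending_passthrough: float = 0.5) -> float:
    """Illustrative sketch of the paper's framing, not its formal model.

    demand_created       -- work the AI system directly generates
    demand_displaced     -- work the AI system automates away
    productivity_gain    -- income gains from higher productivity (same units)
    spending_passthrough -- assumed share of those gains that flows back
                            into demand for human labor
    """
    induced_demand = spending_passthrough * productivity_gain
    return demand_created - demand_displaced + induced_demand


# Example: a system that displaces more work than it creates can still be
# labor-demand-neutral if enough of its productivity gains are spent on
# labor-intensive goods and services.
print(net_labor_demand_effect(demand_created=20,
                              demand_displaced=60,
                              productivity_gain=80))  # -> 0.0
```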
Confronting Bias: BSA’s Framework to Build Trust in AI
BSA Software Alliance: “The Framework is a playbook organizations can use to enhance trust in their AI systems through risk management processes that promote fairness, transparency, and accountability. It can be leveraged by organizations that develop AI systems and companies that acquire and deploy such systems as the basis for:
– Internal Process Guidance. The Framework can be used as a tool for organizing and establishing roles, responsibilities, and expectations for internal risk management processes.
– Training, Awareness, and Education. The Framework can be used to build internal training and education programs for employees involved in developing and using AI systems, and for educating executives about the organization’s approach to managing AI bias risks.
– Supply Chain Assurance and Accountability. AI developers and organizations that deploy AI systems can use the Framework as a basis for communicating and coordinating about their respective roles and responsibilities for managing AI risks throughout a system’s lifecycle.
– Trust and Confidence. The Framework can help organizations communicate information about a product’s features and its approach to mitigating AI bias risks to a public audience. In that sense, the Framework can help organizations communicate to the public about their commitment to building ethical AI systems.
– Incident Response. Following an unexpected incident, the processes and documentation set forth in the Framework can serve as an audit trail that can help organizations quickly diagnose and remediate potential problems…(More)”
Collective data rights can stop big tech from obliterating privacy
Article by Martin Tisne: “…There are two parallel approaches that should be pursued to protect the public.
One is better use of class or group actions, otherwise known as collective redress actions. Historically, these have been limited in Europe, but in November 2020 the European parliament passed a measure that requires all 27 EU member states to implement measures allowing for collective redress actions across the region. Compared with the US, the EU has stronger laws protecting consumer data and promoting competition, so class or group action lawsuits in Europe can be a powerful tool for lawyers and activists to force big tech companies to change their behavior even in cases where the per-person damages would be very low.
Class action lawsuits have most often been used in the US to seek financial damages, but they can also be used to force changes in policy and practice. They can work hand in hand with campaigns to change public opinion, especially in consumer cases (for example, by forcing Big Tobacco to admit to the link between smoking and cancer, or by paving the way for car seatbelt laws). They are powerful tools when there are thousands, if not millions, of similar individual harms, which add up to help prove causation. Part of the problem is getting the right information to sue in the first place. Government efforts, like a lawsuit brought against Facebook in December by the Federal Trade Commission (FTC) and a group of 46 states, are crucial. As the tech journalist Gilad Edelman puts it, “According to the lawsuits, the erosion of user privacy over time is a form of consumer harm—a social network that protects user data less is an inferior product—that tips Facebook from a mere monopoly to an illegal one.” In the US, as the New York Times recently reported, private lawsuits, including class actions, often “lean on evidence unearthed by the government investigations.” In the EU, however, it’s the other way around: private lawsuits can open up the possibility of regulatory action, which is constrained by the gap between EU-wide laws and national regulators.
Which brings us to the second approach: a little-known 2016 French law called the Digital Republic Bill, one of the few modern laws focused on automated decision-making. The law currently applies only to administrative decisions taken by public-sector algorithmic systems, but it provides a sketch for what future laws could look like: the source code behind such systems must be made available to the public, and anyone can request that code.
Importantly, the law enables advocacy organizations to request information on the functioning of an algorithm and the source code behind it even if they don’t represent a specific individual or claimant who is allegedly harmed. The need to find a “perfect plaintiff” who can prove harm in order to file a suit makes it very difficult to tackle the systemic issues that cause collective data harms. Laure Lucchesi, the director of Etalab, a French government office in charge of overseeing the bill, says that the law’s focus on algorithmic accountability was ahead of its time. Other laws, like the European General Data Protection Regulation (GDPR), focus too heavily on individual consent and privacy. But both the data and the algorithms need to be regulated…(More)”
The Coronavirus Pandemic Creative Responses Archive
National Academies of Sciences: “Creativity often flourishes in stressful times because innovation evolves out of need. During the coronavirus pandemic, we are witnessing a range of creative responses from individuals, communities, organizations, and industries. Some are intensely personal, others expansively global—mirroring the many ways the pandemic has affected us. What do these responses to the pandemic tell us about our society, our level of resilience, and how we might imagine the future? Explore the Coronavirus Pandemic Creative Responses Archive…
Building and Sustaining State Data Integration Efforts: Legislation, Funding, and Strategies
Policy Report by AISP: “The economic and social impacts of the COVID-19 pandemic have heightened demand for cross-agency data capacity, as policymakers are forced to reconcile the need for expanded services with extreme fiscal constraints. In this context, integrated data systems (IDS) – also commonly referred to as data hubs, data collaboratives, or state longitudinal data systems – are a valuable resource for data-informed decision making across agencies. IDS utilize standard governance processes and legal agreements to grant authority for routine, responsible use of linked data, and institutionalize roles across partners with shared priorities.
Despite these benefits, creating and sustaining IDS remains a challenge for many states. Legislation and executive action can be powerful mechanisms to overcome this challenge and promote the use of cross-agency data for public good. Legislative and/or executive actions on data sharing can:
– Require data sharing to address a specific state policy priority
– Mandate oversight and planning activities to promote a state data sharing strategy
– Grant authority to a particular office or agency to lead cross-agency data sharing
This brief is organized in three parts. First, we offer examples of these three approaches from states that have used legislation and/or executive orders to enable data integration, as well as key considerations related to each. Second, we discuss state and federal funding opportunities that can help in implementing legislative or executive actions on data sharing and enhancing long-term sustainability of data sharing efforts. Third, we offer five foundational strategies to ensure that legislative or executive action is both ethical and effective…(More)”.
We Need to Reimagine the Modern Think Tank
Article by Emma Vadehra: “We are in the midst of a great realignment in policymaking. After an era-defining pandemic, which itself served as backdrop to a generations-in-the-making reckoning on racial injustice, the era of policy incrementalism is giving way to broad, grassroots demands for structural change. But elected officials are not the only ones who need to evolve. As the broader policy ecosystem adjusts to a post-2020 world, think tanks that aim to provide the intellectual backbone to policy movements—through research, data analysis, and evidence-based recommendations—need to change their approach as well.
Think tanks may be slower to adapt because of long-standing biases around what qualifies someone to be a policy “expert.” Traditionally, think tanks assess qualifications based on educational attainment and advanced degrees, which has often meant prioritizing academic credentials over lived or professional experience on the ground. These hiring preferences alone leave many people out of the debates that shape their lives: if think tanks expect a master’s degree for mid-level and senior research and policy positions, their pool of candidates will be limited to the 4 percent of Latinos and 7 percent of Black people who hold those degrees, lower than the rates among white people (10.5 percent) or Asian/Pacific Islanders (17 percent). And in specific fields like economics, from which many think tanks draw their experts, just 0.5 percent of doctoral degrees go to Black women each year.
Think tanks alone cannot change the larger cultural and societal forces that have historically limited access to certain fields. But they can change their own practices: namely, they can change how they assess expertise and who they recruit and cultivate as policy experts. In doing so, they can push the broader policy sector—including government and philanthropic donors—to do the same. Because while the next generation marches in the streets and runs for office, the public policy sector is not doing enough to diversify and support who develops, researches, enacts, and implements policy. And excluding impacted communities from the decision-making table makes our democracy less inclusive, responsive, and effective.
Two years ago, my colleagues and I at The Century Foundation, a 100-year-old think tank that has weathered many paradigm shifts in policymaking, launched an organization, Next100, to experiment with a new model for think tanks. Our mission was simple: policy by those with the most at stake, for those with the most at stake. We believed that proximity to the communities that policy looks to serve would make policy stronger, and we put muscle and resources behind the theory that those with lived experience are as much policy experts as anyone with a PhD from an Ivy League university. The pandemic and heightened calls for racial justice in the last year have only strengthened our belief in the need to thoughtfully democratize policy development. While it is now commonly understood that COVID-19 has surfaced and exacerbated profound historical inequities, not enough has been done to question why those inequities exist, or why they run so deep. How we make policy—and who makes it—is a big reason why….(More)”
What Robots Can — And Can’t — Do For the Old and Lonely
Katie Engelhart at The New Yorker: “…In 2017, the Surgeon General, Vivek Murthy, declared loneliness an “epidemic” among Americans of all ages. This warning was partly inspired by new medical research that has revealed the damage that social isolation and loneliness can inflict on a body. The two conditions are often linked, but they are not the same: isolation is an objective state (not having much contact with the world); loneliness is a subjective one (feeling that the contact you have is not enough). Both are thought to prompt a heightened inflammatory response, which can increase a person’s risk for a vast range of pathologies, including dementia, depression, high blood pressure, and stroke. Older people are more susceptible to loneliness; forty-three per cent of Americans over sixty identify as lonely. Their individual suffering is often described by medical researchers as especially perilous, and their collective suffering is seen as an especially awful societal failing….
So what’s a well-meaning social worker to do? In 2018, New York State’s Office for the Aging launched a pilot project, distributing Joy for All robots to sixty state residents and then tracking them over time. Researchers used a six-point loneliness scale, which asks respondents to agree or disagree with statements like “I experience a general sense of emptiness.” They concluded that seventy per cent of participants felt less lonely after one year. The pets were not as sophisticated as other social robots being designed for the so-called silver market or loneliness economy, but they were cheaper, at about a hundred dollars apiece.
In April, 2020, a few weeks after New York aging departments shut down their adult day programs and communal dining sites, the state placed a bulk order for more than a thousand robot cats and dogs. The pets went quickly, and caseworkers started asking for more: “Can I get five cats?” A few clients with cognitive impairments were disoriented by the machines. One called her local department, distraught, to say that her kitty wasn’t eating. But, more commonly, people liked the pets so much that the batteries ran out. Caseworkers joked that their clients had loved them to death….(More)”.