Bridging the global digital divide: A platform to advance digital development in low- and middle-income countries


Paper by George Ingram: “The world is in the midst of a fast-moving, Fourth Industrial Revolution (also known as 4IR or Industry 4.0), driven by digital innovation in the use of data, information, and technology. This revolution is affecting everything from how we communicate, to where and how we work, to education and health, to politics and governance. COVID-19 has accelerated this transformation as individuals, companies, communities, and governments move to virtual engagement. We are still discovering the advantages and disadvantages of a digital world.

This paper outlines an initiative that would allow the United States, along with a range of public and private partners, to seize the opportunity to reduce the digital divide between nations and people in a way that benefits inclusive economic advancement in low- and middle-income countries, while also advancing the economic and strategic interests of the United States and its partner countries.

As life increasingly revolves around digital technologies and innovation, countries are in a race to digitalize at a speed that threatens to leave behind the less advantaged—countries and underserved groups. Data in this paper documents the scope of the digital divide. With the Sustainable Development Goals (SDGs), the world committed to reduce poverty and advance all aspects of the livelihood of nations and people. Countries that fail to progress along the path to 5G broadband cellular networks will be unable to unlock the benefits of the digital revolution and will be left behind. Donors are recognizing this and offering solutions, but in a one-off, disconnected fashion. Absent a comprehensive partnership approach that takes advantage of each partner's comparative advantage, these well-intended efforts will not aggregate to the scale and speed required by the challenge….(More)”.

What Data About You Can the Government Get From Big Tech?


Jack Nicas at the New York Times: “The Justice Department, starting in the early days of the Trump administration, secretly sought data from some of the biggest tech companies about journalists, Democratic lawmakers, and White House officials as part of wide-ranging investigations into leaks and other matters, The New York Times reported last week.

The revelations, which put the companies in the middle of a clash over the Trump administration’s efforts to find the sources of news coverage, raised questions about what sorts of data tech companies collect on their users, and how much of it is accessible to law enforcement authorities.

Here’s a rundown:

What kinds of data do the companies have?

All sorts. Beyond basic data like users’ names, addresses and contact information, tech companies like Google, Apple, Microsoft and Facebook also often have access to the contents of their users’ emails, text messages, call logs, photos, videos, documents, contact lists and calendars.

Is that data available to law enforcement?

Most of it is. But which data law enforcement can get depends on the sort of request it makes.

Perhaps the most common and basic request is a subpoena. U.S. government agencies and prosecutors can often issue subpoenas without approval from a judge, and lawyers can issue them as part of open court cases. Subpoenas are often used to cast a wide net for basic information that can help build a case and provide evidence needed to issue more powerful requests….(More)”.

Be Skeptical of Thought Leaders


Book Review by Evan Selinger: “Corporations regularly advertise their commitment to “ethics.” They often profess to behave better than the law requires and sometimes may even claim to make the world a better place. Google, for example, trumpets its commitment to “responsibly” developing artificial intelligence and swears it follows lofty AI principles that include being “socially beneficial,” being “accountable to people,” and avoiding “creating or reinforcing unfair bias.”

Google’s recent treatment of Timnit Gebru, the former co-leader of its ethical AI team, tells another story. After Gebru went through an antagonistic internal review process for a co-authored paper that explores social and environmental risks and expressed concern over justice issues within Google, the company didn’t congratulate her for a job well done. Instead, she and vocally supportive colleague Margaret Mitchell (the other co-leader) were “forced out.” Google’s behavior “perhaps irreversibly damaged” the company’s reputation. It was hard not to conclude that corporate values misalign with the public good.

Even as tech companies continue to display hypocrisy, there might still be good reasons to have high hopes for their behavior in the future. Suppose corporations can do better than ethics washing, virtue signaling, and making incremental improvements that don’t challenge aggressive plans for financial growth. If so, society desperately needs to know what it takes to bring about dramatic change. On paper, Susan Liautaud is the right person to turn to for help. She has impressive academic credentials (a PhD in Social Policy from the London School of Economics and a JD from Columbia University Law School), founded and manages an ethics consulting firm with an international reach, and teaches ethics courses at Stanford University.

In The Power of Ethics: How to Make Good Choices in a Complicated World, Liautaud pursues a laudable goal: democratize the essential practical steps for making responsible decisions in a confusing and complex world. While the book is pleasantly accessible, it has glaring faults. With so much high-quality critical journalistic coverage of technologies and tech companies, we should expect more from long-form analysis.

Although ethics is more widely associated with dour finger-waving than aspirational world-building, Liautaud mostly crafts an upbeat and hopeful narrative, albeit not so cheerful that she denies the obvious pervasiveness of shortsighted mistakes and blatant misconduct. The problem is that she insists ethical values and technological development pair nicely. Big Tech might be exerting increasing control over our lives, exhibiting an oversized influence on public welfare through incursions into politics, education, social communication, space travel, national defense, policing, and currency — but this doesn’t in the least quell her enthusiasm, which remains elevated enough throughout her book to affirm the power of the people. Hyperbolically, she declares, “No matter where you stand […] you have the opportunity to prevent the monopolization of ethics by rogue actors, corporate giants, and even well-intentioned scientists and innovators.”…(More)”.

Platform as a Rule Maker: Evidence from Airbnb’s Cancellation Policies


Paper by Jian Jia, Ginger Zhe Jin & Liad Wagman: “Digital platforms are not only match-making intermediaries but also establish internal rules that govern all users in their ecosystems. To better understand the governing role of platforms, we study two Airbnb pro-guest rules that pertain to guest and host cancellations, using data on Airbnb and VRBO listings in 10 US cities. We demonstrate that such pro-guest rules can drive demand and supply to and from the platform, as a function of the local platform competition between Airbnb and VRBO. Our results suggest that platform competition sometimes dampens a platform-wide pro-guest rule and sometimes reinforces it, often with heterogeneous effects on different hosts. This implies that platform competition does not necessarily mitigate a platform’s incentive to treat the two sides asymmetrically, and any public policy on platform competition must consider its implications for all sides….(More)”.

Citizen science allows people to ‘really know’ their communities


UGAResearch: “Local populations understand their communities best. They’re familiar both with points of pride and with areas that could be improved. But determining the nature of those improvements from best practices, as well as achieving community consensus on implementation, can present a different set of challenges.

Jerry Shannon, associate professor of geography in the Franklin College of Arts & Sciences, worked with a team of researchers to introduce a citizen science approach in 11 communities across Georgia, from Rockmart to Monroe to Millen. This work combines local knowledge with emerging digital technologies to bolster community-driven efforts in multiple communities in rural Georgia. His research was detailed in a paper, “‘Really Knowing’ the Community: Citizen Science, VGI and Community Housing Assessments,” published in December in the Journal of Planning Education and Research.

Shannon worked with the Georgia Initiative for Community Housing, managed out of the College of Family and Consumer Sciences (FACS), to create tools for communities to evaluate and launch plans to address their housing needs and revitalization. This citizen science effort resulted in a more diverse and inclusive body of data that incorporated local perspectives.

“Through this project, we hope to further support and extend these community-driven efforts to assure affordable, quality housing,” said Shannon. “Rural communities don’t have the resources internally to do this work themselves. We provide training and tools to these communities.”

As part of their participation in the GICH program, each Georgia community assembled a housing team consisting of elected officials, members of community organizations and housing professionals such as real estate agents. The team recruited volunteers from student groups and religious organizations to conduct so-called “windshield surveys,” where participants work from their vehicle or walk the neighborhoods….(More)”

Living in Data: A Citizen’s Guide to a Better Information Future


Book by Jer Thorp: “To live in data in the twenty-first century is to be incessantly extracted from, classified and categorized, statisti-fied, sold, and surveilled. Data—our data—is mined and processed for profit, power, and political gain. In Living in Data, Thorp asks a crucial question of our time: How do we stop passively inhabiting data, and instead become active citizens of it?

Threading a data story through hippo attacks, glaciers, and school gymnasiums, around colossal rice piles, and over active minefields, Living in Data reminds us that the future of data is still wide open, that there are ways to transcend facts and figures and to find more visceral ways to engage with data, that there are always new stories to be told about how data can be used.

Punctuated with Thorp’s original and informative illustrations, Living in Data not only redefines what data is, but reimagines who gets to speak its language and how to use its power to create a more just and democratic future. Timely and inspiring, Living in Data gives us a much-needed path forward….(More)”.

AI and Shared Prosperity


Paper by Katya Klinova and Anton Korinek: “Future advances in AI that automate away human labor may have stark implications for labor markets and inequality. This paper proposes a framework to analyze the effects of specific types of AI systems on the labor market, based on how much labor demand they will create versus displace, while taking into account that productivity gains also make society wealthier and thereby contribute to additional labor demand. This analysis enables ethically-minded companies creating or deploying AI systems as well as researchers and policymakers to take into account the effects of their actions on labor markets and inequality, and therefore to steer progress in AI in a direction that advances shared prosperity and an inclusive economic future for all of humanity…(More)”.
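The framework weighs the labor demand an AI system creates against the demand it displaces, while crediting productivity gains for the additional labor demand they induce as society grows wealthier. As a rough illustration only (the function, parameter names, and numbers below are hypothetical stand-ins, not the authors' actual model), that accounting might be sketched as:

```python
def net_labor_demand_effect(labor_created, labor_displaced,
                            productivity_gain, demand_elasticity):
    """Stylized net effect of an AI system on labor demand.

    labor_created:     jobs the system directly creates
    labor_displaced:   jobs it automates away
    productivity_gain: fractional productivity increase (e.g. 0.04)
    demand_elasticity: stylized parameter converting productivity-driven
                       wealth into induced labor demand
    """
    # Productivity gains make society wealthier, which feeds back
    # into additional demand for labor.
    induced_demand = productivity_gain * demand_elasticity
    return labor_created - labor_displaced + induced_demand

# A labor-displacing system whose productivity gains offset only part
# of the displacement, leaving a net negative effect:
effect = net_labor_demand_effect(
    labor_created=100, labor_displaced=250,
    productivity_gain=0.04, demand_elasticity=2000)
# effect = 100 - 250 + 80 = -70
```

Under this kind of tally, an ethically minded developer could compare candidate systems by their net effect rather than by productivity gains alone.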

Confronting Bias: BSA’s Framework to Build Trust in AI


BSA Software Alliance: “The Framework is a playbook organizations can use to enhance trust in their AI systems through risk management processes that promote fairness, transparency, and accountability. It can be leveraged by organizations that develop AI systems and companies that acquire and deploy such systems as the basis for:
– Internal Process Guidance. The Framework can be used as a tool for organizing and establishing roles, responsibilities, and expectations for internal risk management processes.
– Training, Awareness, and Education. The Framework can be used to build internal training and education programs for employees involved in developing and using AI systems, and for educating executives about the organization’s approach to managing AI bias risks.
– Supply Chain Assurance and Accountability. AI developers and organizations that deploy AI systems can use the Framework as a basis for communicating and coordinating about their respective roles and responsibilities for managing AI risks throughout a system’s lifecycle.
– Trust and Confidence. The Framework can help organizations communicate information about a product’s features and its approach to mitigating AI bias risks to a public audience. In that sense, the Framework can help organizations communicate to the public about their commitment to building ethical AI systems.
– Incident Response. Following an unexpected incident, the processes and documentation set forth in the Framework can serve as an audit trail that can help organizations quickly diagnose and remediate potential problems…(More)”

Collective data rights can stop big tech from obliterating privacy


Article by Martin Tisne: “…There are two parallel approaches that should be pursued to protect the public.

One is better use of class or group actions, otherwise known as collective redress actions. Historically, these have been limited in Europe, but in November 2020 the European Parliament passed a measure that requires all 27 EU member states to implement measures allowing for collective redress actions across the region. Compared with the US, the EU has stronger laws protecting consumer data and promoting competition, so class or group action lawsuits in Europe can be a powerful tool for lawyers and activists to force big tech companies to change their behavior even in cases where the per-person damages would be very low.

Class action lawsuits have most often been used in the US to seek financial damages, but they can also be used to force changes in policy and practice. They can work hand in hand with campaigns to change public opinion, especially in consumer cases (for example, by forcing Big Tobacco to admit to the link between smoking and cancer, or by paving the way for car seatbelt laws). They are powerful tools when there are thousands, if not millions, of similar individual harms, which add up to help prove causation. Part of the problem is getting the right information to sue in the first place. Government efforts, like a lawsuit brought against Facebook in December by the Federal Trade Commission (FTC) and a group of 46 states, are crucial. As the tech journalist Gilad Edelman puts it, “According to the lawsuits, the erosion of user privacy over time is a form of consumer harm—a social network that protects user data less is an inferior product—that tips Facebook from a mere monopoly to an illegal one.” In the US, as the New York Times recently reported, private lawsuits, including class actions, often “lean on evidence unearthed by the government investigations.” In the EU, however, it’s the other way around: private lawsuits can open up the possibility of regulatory action, which is constrained by the gap between EU-wide laws and national regulators.

Which brings us to the second approach: a little-known 2016 French law called the Digital Republic Bill, one of the few modern laws focused on automated decision making. The law currently applies only to administrative decisions taken by public-sector algorithmic systems, but it provides a sketch for what future laws could look like. It says that the source code behind such systems must be made available to the public, and anyone can request that code.

Importantly, the law enables advocacy organizations to request information on the functioning of an algorithm and the source code behind it even if they don’t represent a specific individual or claimant who is allegedly harmed. The need to find a “perfect plaintiff” who can prove harm in order to file a suit makes it very difficult to tackle the systemic issues that cause collective data harms. Laure Lucchesi, the director of Etalab, a French government office in charge of overseeing the bill, says that the law’s focus on algorithmic accountability was ahead of its time. Other laws, like the European General Data Protection Regulation (GDPR), focus too heavily on individual consent and privacy. But both the data and the algorithms need to be regulated…(More)”

The Coronavirus Pandemic Creative Responses Archive


National Academies of Sciences: “Creativity often flourishes in stressful times because innovation evolves out of need. During the coronavirus pandemic, we are witnessing a range of creative responses from individuals, communities, organizations, and industries. Some are intensely personal, others expansively global—mirroring the many ways the pandemic has affected us. What do these responses to the pandemic tell us about our society, our level of resilience, and how we might imagine the future? Explore the Coronavirus Pandemic Creative Responses Archive…