Blockchain and Society
Opening blog by Jeffrey R. Yost, the first in a new series on Blockchain and Society: “Blockchain is a powerful technology with roots three decades old, in a 1991 paper on (immutable) timestamping of digital content. This paper, by Bellcore’s Stuart Haber and W. Scott Stornetta, along with key (in both senses) crypto research of a half dozen future Turing Awardees (winners of the “Nobel of computer science”), including W. Diffie, M. Hellman, R. Rivest, A. Shamir, L. Adleman, and S. Micali, provided critical foundations for Bitcoin, blockchain, Non-Fungible Tokens (NFTs), and Decentralized Autonomous Organizations (DAOs). This initial and foundational blog post seeks to address and analyze the history, sociology, and political economy of blockchain and cryptocurrency. Subsequent blogs will dive deeper into individual themes and topics on crypto’s sociocultural and political economy contexts….(More)”.
Artificial Intelligence Bias and Discrimination: Will We Pull the Arc of the Moral Universe Towards Justice?
Paper by Emile Loza de Siles: “In 1968, the Reverend Martin Luther King Jr. foresaw the inevitability of society’s eventual triumph over the deep racism of his time and the stain that continues to cast its destructive, oppressive pall today. From the pulpit of the nation’s church, Dr. King said, “We shall overcome because the arc of the moral universe is long, but it bends toward justice”. More than 40 years later, Eric Holder, the first African American United States Attorney General, agreed, but only if people acting with conviction exert themselves to pull that arc towards justice.
With artificial intelligence (AI) bias and discrimination rampant, the need to pull the moral arc towards algorithmic justice is urgent. This article offers empowering clarity by conceptually bifurcating AI bias problems into AI bias engineering and organisational AI governance problems, revealing proven legal development pathways to protect against the corrosive harms of AI bias and discrimination…(More)”.
Facial Recognition Plan from IRS Raises Big Concerns
Article by James Hendler: “The U.S. Internal Revenue Service is planning to require citizens to create accounts with a private facial recognition company in order to file taxes online. The IRS is joining a growing number of federal and state agencies that have contracted with ID.me to authenticate the identities of people accessing services.
The IRS’s move is aimed at cutting down on identity theft, a crime that affects millions of Americans. The IRS, in particular, has reported a number of tax filings from people claiming to be others, and fraud in many of the programs that were administered as part of the American Rescue Plan has been a major concern to the government.
The IRS decision has prompted a backlash, in part over concerns about requiring citizens to use facial recognition technology and in part over difficulties some people have had in using the system, particularly with some state agencies that provide unemployment benefits. The reaction has prompted the IRS to revisit its decision.
As a computer science researcher and the chair of the Global Technology Policy Council of the Association for Computing Machinery, I have been involved in exploring some of the issues with government use of facial recognition technology, including both its applications and its potential flaws. There have been a great number of concerns raised over the general use of this technology in policing and other government functions, often focused on whether the accuracy of these algorithms can have discriminatory effects. In the case of ID.me, there are other issues involved as well….(More)”.
Dignity in a Digital Age: Making Tech Work for All of Us
Book by Congressman Ro Khanna: “… offers a revolutionary roadmap for confronting America’s digital divide and extending economic prosperity to all. In Khanna’s vision, “just as people can move to technology, technology can move to people. People need not be compelled to move from one place to another to reap the benefits offered by technological progress” (from the foreword by Amartya Sen, Nobel Laureate in Economics).
In the digital age, unequal access to technology and the revenue it creates is one of the most pressing issues facing the United States. There is an economic gulf between those who have struck gold in the tech industry and those left behind by the digital revolution; a geographic divide between those in the coastal tech industry and those in the heartland whose jobs have been automated; and existing inequalities in technological access—students without computers, rural workers with spotty WiFi, and plenty of workers without the luxury to work from home.
Dignity in a Digital Age tackles these challenges head-on and imagines how the digital economy can create opportunities for people all across the country without uprooting them. Congressman Ro Khanna of Silicon Valley offers a vision for democratizing digital innovation to build economically vibrant and inclusive communities. Instead of being subject to tech’s reshaping of our economy, Representative Khanna argues that we must channel those powerful forces toward creating a more healthy, equal, and democratic society.
Born into an immigrant family, Khanna understands how economic opportunity can change the course of a person’s life. Anchored by an approach Khanna refers to as “progressive capitalism,” he shows how democratizing access to tech can strengthen every sector of the economy and culture. By expanding technological jobs nationwide through public and private partnerships, we can close the wealth gap in America and begin to repair the fractured, distrusting relationships that have plagued our country for far too long.
Moving deftly between storytelling, policy, and some of the country’s greatest thinkers in political philosophy and economics, Khanna presents a bold vision we can’t afford to ignore. Dignity in a Digital Age is a roadmap to how we can seek dignity for every American in an era in which technology shapes every aspect of our lives…(More)”.
Guide for Policymakers on Making Transparency Meaningful
Report by CDT: “In 2020, the Minneapolis police used a unique kind of warrant to investigate vandalism of an AutoZone store during the protests over the murder of George Floyd by a police officer. This “geofence” warrant required Google to turn over data on all users within a certain geographic area around the store at a particular time — which would have included not only the vandal, but also protesters, bystanders, and journalists.
It was only several months later that the public learned of the warrant, because Google notified a user that his account information was subject to the warrant, and the user told reporters. And it was not until a year later — when Google first published a transparency report with data about geofence warrants — that the public learned the total number of geofence warrants Google receives from U.S. authorities and of a recent “explosion” in their use. New York lawmakers introduced a bill to forbid geofence warrants because of concerns they could be used to target protesters, and, in light of Google’s transparency report, some civil society organizations are calling for them to be banned, too.
Technology company transparency matters, as this example shows. Transparency about governmental and company practices that affect users’ speech, access to information, and privacy from government surveillance online helps us understand and check the ways in which tech companies and governments wield power and affect people’s human rights.
Policymakers are increasingly proposing transparency measures as part of their efforts to regulate tech companies, both in the United States and around the world. But what exactly do we mean when we talk about transparency when it comes to technology companies like social networks, messaging services, and telecommunications firms? A new report from CDT, Making Transparency Meaningful: A Framework for Policymakers, maps and describes four distinct categories of technology company transparency:
- Transparency reports that provide aggregated data and qualitative information about moderation actions, disclosures, and other practices concerning user generated content and government surveillance;
- User notifications about government demands for their data and moderation of their content;
- Access to data held by intermediaries for independent researchers, public policy advocates, and journalists; and
- Public-facing analysis, assessments, and audits of technology company practices with respect to user speech and privacy from government surveillance.
Different forms of transparency are useful for different purposes or audiences, and they also give rise to varying technical, legal, and practical challenges. Making Transparency Meaningful is designed to help policymakers and advocates understand the potential benefits and tradeoffs that come with each form of transparency. This report addresses key questions raised by proposed legislation in the United States and Europe that seeks to mandate one or more of these types of transparency and thereby hold tech companies and governments more accountable….(More)”.
Automation exacts a toll in inequality
Rana Foroohar at The Financial Times: “When humans compete with machines, wages go down and jobs go away. But, ultimately, new categories of better work are created. The mechanisation of agriculture in the first half of the 20th century, or advances in computing and communications technology in the 1950s and 1960s, for example, went hand in hand with strong, broadly shared economic growth in the US and other developed economies.
But, in later decades, something in this relationship began to break down. Since the 1980s, we’ve seen the robotics revolution in manufacturing; the rise of software in everything; the consumer internet and the internet of things; and the growth of artificial intelligence. But during this time trend GDP growth in the US has slowed, inequality has risen and many workers — particularly, men without college degrees — have seen their real earnings fall sharply.
Globalisation and the decline of unions have played a part. But so has technological job disruption. That issue is beginning to get serious attention in Washington. In particular, politicians and policymakers are homing in on the work of MIT professor Daron Acemoglu, whose research shows that mass automation is no longer a win-win for both capital and labour. He testified before a select committee of the US House of Representatives in November that automation — the substitution of machines and algorithms for tasks previously performed by workers — is responsible for 50-70 per cent of the economic disparities experienced between 1980 and 2016.
Why is this happening? Basically, while the automation of the early 20th century and the post-1945 period “increased worker productivity in a diverse set of industries and created myriad opportunities for them”, as Acemoglu said in his testimony, “what we’ve experienced since the mid 1980s is an acceleration in automation and a very sharp deceleration in the introduction of new tasks”. Put simply, he added, “the technological portfolio of the American economy has become much less balanced, and in a way that is highly detrimental to workers and especially low-education workers.”
What’s more, some things we are automating these days aren’t so economically beneficial. Consider those annoying computerised checkout stations in drug stores and groceries that force you to self-scan your purchases. They may save retailers a bit in labour costs, but they are hardly the productivity enhancer of, say, a self-driving combine harvester. Cecilia Rouse, chair of the White House’s Council of Economic Advisers, spoke for many when she told a Council on Foreign Relations event that she’d rather “stand in line [at the pharmacy] so that someone else has a job — it may not be a great job, but it is a job — and where I actually feel like I get better assistance.”
Still, there’s no holding back technology. The question is how to make sure more workers can capture its benefits. In her “Virtual Davos” speech a couple of weeks ago, Treasury secretary Janet Yellen pointed out that recent technologically driven productivity gains might exacerbate rather than mitigate inequality. She pointed to the fact that, while the “pandemic-induced surge in telework” will ultimately raise US productivity by 2.7 per cent, the gains will accrue mostly to upper income, white-collar workers, just as online learning has been better accessed and leveraged by wealthier, white students.
Education is where the rubber meets the road in fixing technology-driven inequality. As Harvard researchers Claudia Goldin and Lawrence Katz have shown, when the relationship between education and technology gains breaks down, tech-driven prosperity is no longer as widely shared. This is why the Biden administration has been pushing investments into community college, apprenticeships and worker training…(More)”.
Public Provides NASA with New Innovations through Prize Competitions, Crowdsourcing, Citizen Science Opportunities
NASA Report: “Whether problem-solving during the pandemic, establishing a long-term presence at the Moon, or advancing technology to adapt to life in space, NASA has leveraged open innovation tools to inspire solutions to some of our most timely challenges – while tapping the creativity of everyone from garage tinkerers to citizen scientists and students of all ages.
Open Innovation: Boosting NASA Higher, Faster, and Farther highlights some of those breakthroughs, which accelerate space technology development and discovery while giving the public a gateway to work with NASA. Open innovation initiatives include problem-focused challenges and prize competitions, data hackathons, citizen science, and crowdsourcing projects that invite the public to lend their skills, ideas, and time to support NASA research and development programs.
NASA engaged the public with 56 public prize competitions and challenges and 14 citizen science and crowdsourcing activities over fiscal years 2019 and 2020. NASA awarded $2.2 million in prize money, and members of the public submitted over 11,000 solutions during that period.
“NASA’s accomplishments have hardly been NASA’s alone. Tens of thousands more individuals from academic institutions, private companies, and other space agencies also contribute to these solutions. Open innovation expands the NASA community and broadens the agency’s capacity for innovation and discovery even further,” said Amy Kaminski, Prizes, Challenges, and Crowdsourcing program executive at NASA Headquarters in Washington. “We harness the perspectives, expertise, and enthusiasm of ‘the crowd’ to gain diverse solutions, speed up projects, and reduce costs.”
This edition of the publication highlights:
- How NASA used open innovation tools to accelerate the pace of problem-solving during the COVID-19 pandemic, enabling a sprint of creativity that produced valuable solutions in response to this global crisis
- How NASA invited everyone to embrace the Moon as a technological testing ground through public prize competitions and challenges, sparking development that could help prolong human stays on the Moon and lay the foundation for human exploration to Mars and beyond
- How citizen scientists gather, sort, and upload data, resulting in fruitful partnerships between the public and NASA scientists
- How NASA’s student-focused challenges have changed lives and positively impacted underserved communities…(More)”.
Suicide hotline shares data with for-profit spinoff, raising ethical questions
Alexandra Levine at Politico: “Crisis Text Line is one of the world’s most prominent mental health support lines, a tech-driven nonprofit that uses big data and artificial intelligence to help people cope with traumas such as self-harm, emotional abuse and thoughts of suicide.
But the data the charity collects from its online text conversations with people in their darkest moments does not end there: The organization’s for-profit spinoff uses a sliced and repackaged version of that information to create and market customer service software.
Crisis Text Line says any data it shares with that company, Loris.ai, has been wholly “anonymized,” stripped of any details that could be used to identify people who contacted the helpline in distress. Both entities say their goal is to improve the world — in Loris’ case, by making “customer support more human, empathetic, and scalable.”
In turn, Loris has pledged to share some of its revenue with Crisis Text Line. The nonprofit also holds an ownership stake in the company, and the two entities shared the same CEO for at least a year and a half. The two call their relationship a model for how commercial enterprises can help charitable endeavors thrive…(More)”.
When Do Informational Interventions Work? Experimental Evidence from New York City High School Choice
Paper by Sarah Cohodes, Sean Corcoran, Jennifer Jennings & Carolyn Sattin-Bajaj: “This paper reports the results of a large, school-level randomized controlled trial evaluating a set of three informational interventions for young people choosing high schools in 473 middle schools, serving over 115,000 8th graders. The interventions differed in their level of customization to the student and their mode of delivery (paper or online); all treated schools received identical materials to scaffold the decision-making process. Every intervention reduced the likelihood of application to and enrollment in schools with graduation rates below the city median (75 percent). An important channel is their effect on reducing non-optimal first-choice application strategies. Providing a simplified, middle-school-specific list of relatively high graduation rate schools had the largest impacts, causing students to enroll in high schools with 1.5-percentage-point higher graduation rates. Providing the same information online, however, did not alter students’ choices or enrollment. This appears to be due to low utilization. Online interventions with individual customization, including a recommendation tool and search engine, induced students to enroll in high schools with 1-percentage-point higher graduation rates, but with more variance in impact. Together, these results show that successful informational interventions must generate engagement with the material, and this is possible through multiple channels…(More)”.
We Still Can’t See American Slavery for What It Was
Jamelle Bouie at the New York Times: “…It is thanks to decades of painstaking, difficult work that we know a great deal about the scale of human trafficking across the Atlantic Ocean and about the people aboard each ship. Much of that research is available to the public in the form of the SlaveVoyages database. A detailed repository of information on individual ships, individual voyages and even individual people, it is a groundbreaking tool for scholars of slavery, the slave trade and the Atlantic world. And it continues to grow. Last year, the team behind SlaveVoyages introduced a new data set with information on the domestic slave trade within the United States, titled “Oceans of Kinfolk.”
The systematic effort to quantify the slave trade goes back at least as far as the 19th century…
Because of its specificity with regard to individual enslaved people, this new information is as pathbreaking for lay researchers and genealogists as it is for scholars and historians. It is also, for me, an opportunity to think about the difficult ethical questions that surround this work: How exactly do we relate to data that allows someone — anyone — to identify a specific enslaved person? How do we wield these powerful tools for quantitative analysis without abstracting the human reality away from the story? And what does it mean to study something as wicked and monstrous as the slave trade using some of the tools of the trade itself?…
“The data that we have about those ships is also kind of caught in a stranglehold of ship captains who care about some things and don’t care about others,” Jennifer Morgan said. We know what was important to them. It is the task of the historian to bring other resources to bear on this knowledge, to shed light on what the documents, and the data, might obscure.
“By merely reproducing the metrics of slave traders,” Fuentes said, “you’re not actually providing us with information about the people, the humans, who actually bore the brunt of this violence. And that’s important. It is important to humanize this history, to understand that this happened to African human beings.”
It’s here that we must engage with the question of the public. Work like the SlaveVoyages database exists in the “digital humanities,” a frequently public-facing realm of scholarship and inquiry. And within that context, an important part of respecting the humanity of the enslaved is thinking about their descendants.
“If you’re doing a digital humanities project, it exists in the world,” said Jessica Marie Johnson, an assistant professor of history at Johns Hopkins and the author of “Wicked Flesh: Black Women, Intimacy, and Freedom in the Atlantic World.” “It exists among a public that is beyond the academy and beyond Silicon Valley. And that means that there should be certain other questions that we ask, a different kind of ethics of care and a different morality that we bring to things.”…(More)”.