The Road Back to College Is Paved with Barriers, but Behavioral Science Can Help Smooth the Way


Blog by Katherine Flaschen and Ben Castleman: “In order to create the most effective solutions, policymakers and educators need to better understand a fundamental question: Why do so many of these students, many of whom have already made substantial progress toward their degree, fail to return to college and graduate? …

With a better understanding of the barriers preventing people who intend to finish their degree from following through, policymakers and colleges can create solutions that meaningfully meet students’ needs and help them re-enroll. As states across the country face rising unemployment rates, it’s critical to design and test interventions that address these behavioral barriers and help thousands of citizens who are out of work due to the COVID-19 crisis consider their options for going back to school.

For example, colleges could provide monetary incentives to students for taking actions related to re-enrollment that overcome these barriers, such as speaking with an advisor, reviewing upcoming recommended courses and developing a course plan, and making an active choice about when to return to college. In addition, SCND students could be paired with current students to serve as peer mentors, both to provide support with the re-enrollment process and to hold them accountable for degree completion (especially if faced with difficult remaining classes). Community colleges could also encourage major employers of the SCND population in high-demand fields, like health care, to provide options for employees to finish their degree while working (e.g., via tuition reimbursement programs), translate degree attainment into concrete career returns, and identify representatives within the company, such as recent graduates, to promote re-enrollment and make it a more salient opportunity….(More)”.

Indigenous Data Sovereignty and Policy


Book edited by Maggie Walter, Tahu Kukutai, Stephanie Russo Carroll and Desi Rodriguez-Lonebear: “This book examines how Indigenous Peoples around the world are demanding greater data sovereignty, and challenging the ways in which governments have historically used Indigenous data to develop policies and programs.

In the digital age, governments are increasingly dependent on data and data analytics to inform their policies and decision-making. However, Indigenous Peoples have often been the unwilling targets of policy interventions and have had little say over the collection, use and application of data about them, their lands and cultures. At the heart of Indigenous Peoples’ demands for change are the enduring aspirations of self-determination over their institutions, resources, knowledge and information systems.

With contributors from Australia, Aotearoa New Zealand, North and South America and Europe, this book offers a rich account of the potential for Indigenous data sovereignty to support human flourishing and to protect against the ever-growing threats of data-related risks and harms….(More)”.

The Oxford Handbook of Ethics of AI


Book edited by Markus D. Dubber, Frank Pasquale, and Sunit Das: “This volume tackles a quickly evolving field of inquiry, mapping the existing discourse as part of a general attempt to place current developments in historical context while, at the same time, breaking new ground by taking on novel subjects and pursuing fresh approaches.

The term “A.I.” is used to refer to a broad range of phenomena, from machine learning and data mining to artificial general intelligence. The recent advent of more sophisticated AI systems, which function with partial or full autonomy and are capable of tasks which require learning and ‘intelligence’, presents difficult ethical questions, and has drawn concerns from many quarters about individual and societal welfare, democratic decision-making, moral agency, and the prevention of harm. This work ranges from explorations of normative constraints on specific applications of machine learning algorithms today (in everyday medical practice, for instance) to reflections on the (potential) status of AI as a form of consciousness with attendant rights and duties and, more generally still, on the conceptual terms and frameworks necessary to understand tasks requiring intelligence, whether “human” or “A.I.”…(More)”.

Digital Diplomacy and International Organisations: Autonomy, Legitimacy and Contestation


Book edited by Corneliu Bjola and Ruben Zaiotti: “This book examines how international organisations (IOs) have struggled to adapt to the digital age, and with social media in particular.

The global spread of new digital communication technologies has profoundly transformed the way organisations operate and interact with the outside world. This edited volume explores the impact of digital technologies, with a focus on social media, for one of the major actors in international affairs, namely IOs. To examine the peculiar dynamics characterising the IO–digital nexus, the volume relies on theoretical insights drawn from the disciplines of International Relations, Diplomatic Studies, and Media and Communication Studies, as well as from Organisation Studies.

The volume maps the evolution of IOs’ “digital universe” and examines the impact of digital technologies on issues of organisational autonomy, legitimacy, and contestation. The volume’s contributions combine engaging theoretical insights with newly compiled empirical material and an eclectic set of methodological approaches (multivariate regression, network analysis, content analysis, sentiment analysis), offering a highly nuanced and textured understanding of the multifaceted, complex, and ever-evolving nature of the use of digital technologies by international organisations in their multilateral engagements….(More)”.

Covid-19 is spurring the digitisation of government


The Economist: “…Neither health care nor Britain is unique in relying heavily on paper. By preventing face-to-face meetings and closing the offices where bureaucrats shuffle documents, the pandemic has revealed how big a problem that is. Around the world, it has been impossible to get a court hearing, obtain a passport or get married while locked down, since all of these still require face-to-face interactions. Registering a business has been slower or impossible. Courts are a mess; elections a worrying prospect.

Governments that have long invested in digitising their systems endured less disruption. Those that have not are discovering how useful it would be if a lot more official business took place online.

Covid-19 has brought many aspects of bureaucratic life to a halt. In England at least 73,400 weddings had to be delayed (not just the ceremony but also the legal part), reckons the Office for National Statistics. In France courts closed in March for all but essential services, and did not reopen until late May. Most countries have extended visas for foreigners trapped by the pandemic, but consular services stopped almost everywhere. In America green-card applications were halted in April; they restarted in June. In Britain appointments to take biometric details of people applying for permanent residency ceased in March and only partly resumed in June.

Some applications cannot be delayed, and there the pandemic has revealed the creakiness of even rich countries’ bureaucracies. As Florida was locking down, huge queues formed outside government offices to get the paper forms needed to sign up for unemployment insurance. In theory the state has a digital system, but it was so poorly set up that many could not access it. At the start of the pandemic the website crashed for days. Even several months later, people trying to apply had to join a digital queue and wait for hours before being able to log in. In Alabama, when government offices in Montgomery, the state capital, reopened, people camped outside, hoping to see an official who might help with their claims.

Where services did exist online, their inadequacies became apparent. Digital unemployment-insurance systems collapsed under a wave of new claimants. At the end of March the website of the INPS, the Italian social-security office, received 300,000 applications for welfare in a single day. The website crashed. Some of those who could access it were shown other people’s data. The authorities blamed not just the volume of applicants but also hackers trying to put in fraudulent claims. Criminals were a problem in America too. In the worst-affected state, Washington, $550m-650m, or one dollar in every eight, was paid out to fraudsters who exploited an outdated system of identity verification (about $300m was recovered)….

The pandemic has revealed that governments need to operate in new ways. This may mean the introduction of proper digital identities, which many countries lack. Track-and-trace systems require governments to know who their citizens are and to be able to contact them reliably. Estonia’s officials can do so easily; Britain’s and America’s cannot. In China, in order to board public transport or enter their own apartment buildings, people have to show QR codes on their phones to verify that they have not been to a virus hotspot recently….(More)”.

How to destroy Surveillance Capitalism


Book by Cory Doctorow: “…Today, there is a widespread belief that machine learning and commercial surveillance can turn even the most fumble-tongued conspiracy theorist into a svengali who can warp your perceptions and win your belief by locating vulnerable people and then pitching them with A.I.-refined arguments that bypass their rational faculties and turn everyday people into flat Earthers, anti-vaxxers, or even Nazis. When the RAND Corporation blames Facebook for “radicalization” and when Facebook’s role in spreading coronavirus misinformation is blamed on its algorithm, the implicit message is that machine learning and surveillance are causing the changes in our consensus about what’s true.

After all, in a world where sprawling and incoherent conspiracy theories like Pizzagate and its successor, QAnon, have widespread followings, something must be afoot.

But what if there’s another explanation? What if it’s the material circumstances, and not the arguments, that are making the difference for these conspiracy pitchmen? What if the trauma of living through real conspiracies all around us — conspiracies among wealthy people, their lobbyists, and lawmakers to bury inconvenient facts and evidence of wrongdoing (these conspiracies are commonly known as “corruption”) — is making people vulnerable to conspiracy theories?

If it’s trauma and not contagion — material conditions and not ideology — that is making the difference today and enabling a rise of repulsive misinformation in the face of easily observed facts, that doesn’t mean our computer networks are blameless. They’re still doing the heavy work of locating vulnerable people and guiding them through a series of ever-more-extreme ideas and communities.

Belief in conspiracy is a raging fire that has done real damage and poses real danger to our planet and species, from epidemics kicked off by vaccine denial to genocides kicked off by racist conspiracies to planetary meltdown caused by denial-inspired climate inaction. Our world is on fire, and so we have to put the fires out — to figure out how to help people see the truth of the world through the conspiracies they’ve been confused by.

But firefighting is reactive. We need fire prevention. We need to strike at the traumatic material conditions that make people vulnerable to the contagion of conspiracy. Here, too, tech has a role to play.

There’s no shortage of proposals to address this. From the EU’s Terrorist Content Regulation, which requires platforms to police and remove “extremist” content, to the U.S. proposals to force tech companies to spy on their users and hold them liable for their users’ bad speech, there’s a lot of energy to force tech companies to solve the problems they created.

There’s a critical piece missing from the debate, though. All these solutions assume that tech companies are a fixture, that their dominance over the internet is a permanent fact. Proposals to replace Big Tech with a more diffused, pluralistic internet are nowhere to be found. Worse: The “solutions” on the table today require Big Tech to stay big because only the very largest companies can afford to implement the systems these laws demand….(More)”.

Mapping the new era of digital activism


About: “Over the past seven months the team at the Change.org Foundation have been working from home to support campaigns created in response to COVID-19. During this unprecedented time in history, millions of people, more than ever before, used our platform to share their stories and fight for their communities.

The Pandemic Report 2020 is born out of the need to share those stories with the world. We assembled a cross-functional team within the Foundation to dig into our platform data. We spotted trends, followed patterns and learned from the analysis we collected from country teams.

This work began with the hypothesis that the coronavirus pandemic may have started a new chapter in digital activism history.

The data points to a new era, with the pandemic acting as a catalyst for citizen engagement worldwide….(More)”.

Politics without Politicians


Nathan Heller at the New Yorker: “Imagine being a citizen of a diverse, wealthy, democratic nation filled with eager leaders. At least once a year—in autumn, say—it is your right and civic duty to go to the polls and vote. Imagine that, in your country, this act is held to be not just an important task but an essential one; the government was designed at every level on the premise of democratic choice. If nobody were to show up to vote on Election Day, the superstructure of the country would fall apart.

So you try to be responsible. You do your best to stay informed. When Election Day arrives, you make the choices that, as far as you can discern, are wisest for your nation. Then the results come with the morning news, and your heart sinks. In one race, the candidate you were most excited about, a reformer who promised to clean up a dysfunctional system, lost to the incumbent, who had an understanding with powerful organizations and ultra-wealthy donors. Another politician, whom you voted into office last time, has failed to deliver on her promises, instead making decisions in lockstep with her party and against the polls. She was reëlected, apparently with her party’s help. There is a notion, in your country, that the democratic structure guarantees a government by the people. And yet, when the votes are tallied, you feel that the process is set up to favor interests other than the people’s own.

What corrective routes are open? One might wish for pure direct democracy—no body of elected representatives, each citizen voting on every significant decision about policies, laws, and acts abroad. But this seems like a nightmare of majoritarian tyranny and procedural madness: How is anyone supposed to haggle about specifics and go through the dialogue that shapes constrained, durable laws? Another option is to focus on influencing the organizations and business interests that seem to shape political outcomes. But that approach, with its lobbyists making backroom deals, goes against the promise of democracy. Campaign-finance reform might clean up abuses. But it would do nothing to insure that a politician who ostensibly represents you will be receptive to hearing and acting on your thoughts….(More)”.

‘Selfies’ could be used to detect heart disease: new research uses artificial intelligence to analyse facial photos


European Society of Cardiology: “Sending a “selfie” to the doctor could be a cheap and simple way of detecting heart disease, according to the authors of a new study published today (Friday) in the European Heart Journal [1].

The study is the first to show that it’s possible to use a deep learning computer algorithm to detect coronary artery disease (CAD) by analysing four photographs of a person’s face.

Although the algorithm needs to be developed further and tested in larger groups of people from different ethnic backgrounds, the researchers say it has the potential to be used as a screening tool that could identify possible heart disease in people in the general population or in high-risk groups, who could be referred for further clinical investigations.

“To our knowledge, this is the first work demonstrating that artificial intelligence can be used to analyse faces to detect heart disease. It is a step towards the development of a deep learning-based tool that could be used to assess the risk of heart disease, either in outpatient clinics or by means of patients taking ‘selfies’ to perform their own screening. This could guide further diagnostic testing or a clinical visit,” said Professor Zhe Zheng, who led the research and is vice director of the National Center for Cardiovascular Diseases and vice president of Fuwai Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, People’s Republic of China.

He continued: “Our ultimate goal is to develop a self-reported application for high-risk communities to assess heart disease risk in advance of visiting a clinic. This could be a cheap, simple and effective way of identifying patients who need further investigation. However, the algorithm requires further refinement and external validation in other populations and ethnicities.”

It is known already that certain facial features are associated with an increased risk of heart disease. These include thinning or grey hair, wrinkles, ear lobe crease, xanthelasmata (small, yellow deposits of cholesterol underneath the skin, usually around the eyelids) and arcus corneae (fat and cholesterol deposits that appear as a hazy white, grey or blue opaque ring in the outer edges of the cornea). However, they are difficult for humans to use successfully to predict and quantify heart disease risk.

Prof. Zheng, Professor Xiang-Yang Ji, who is director of the Brain and Cognition Institute in the Department of Automation at Tsinghua University, Beijing, and other colleagues enrolled 5,796 patients from eight hospitals in China to the study between July 2017 and March 2019. The patients were undergoing imaging procedures to investigate their blood vessels, such as coronary angiography or coronary computed tomography angiography (CCTA). They were divided randomly into training (5,216 patients, 90%) or validation (580, 10%) groups.

Trained research nurses took four facial photos with digital cameras: one frontal, two profiles and one view of the top of the head. They also interviewed the patients to collect data on socioeconomic status, lifestyle and medical history. Radiologists reviewed the patients’ angiograms and assessed the degree of heart disease depending on how many blood vessels were narrowed by 50% or more (≥ 50% stenosis), and their location. This information was used to create, train and validate the deep learning algorithm….(More)”.
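
For readers curious about the study design, the sketch below illustrates, under stated assumptions, the data preparation the excerpt implies: a binary coronary artery disease label defined as at least one vessel with 50% or greater stenosis, and a random 90/10 split of patients into training and validation sets. The field names, values and helper functions are invented for illustration; the deep learning model itself, and the authors’ actual pipeline, are not reproduced here.

```python
# Minimal sketch of the data preparation implied by the study description above.
# This is NOT the authors' code: the field names, values and the 90/10 split
# helper are invented for illustration only.
import random

def label_cad(max_stenosis_pct):
    """Binary CAD label: 1 if any vessel is narrowed by 50% or more, else 0."""
    return 1 if max_stenosis_pct >= 50 else 0

def split_patients(patients, train_fraction=0.9, seed=42):
    """Randomly split patient records into training and validation sets."""
    rng = random.Random(seed)
    shuffled = patients[:]          # copy, so the original list is left untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# Hypothetical records: four face photos per patient plus the angiography readout.
patients = [
    {"photos": ["frontal.jpg", "left.jpg", "right.jpg", "top.jpg"], "max_stenosis_pct": 65},
    {"photos": ["frontal.jpg", "left.jpg", "right.jpg", "top.jpg"], "max_stenosis_pct": 20},
]
for p in patients:
    p["cad"] = label_cad(p["max_stenosis_pct"])

train_set, validation_set = split_patients(patients)
print(len(train_set), len(validation_set))
```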

An algorithm shouldn’t decide a student’s future


Hye Jung Han at Politico: “…Education systems across Europe struggled this year with how to determine students’ all-important final grades. But one system, the International Baccalaureate (“IB”) — a high school program that is highly regarded by European universities, and offered by both public and private schools in 152 countries — did something unusual.

Having canceled final exams, which make up the majority of an IB student’s grade, the Geneva-based foundation of the same name hastily built an algorithm that used a student’s coursework scores, predicted grades by teachers and their school’s historical IB results to guess what students might have scored if they had taken their exams in a hypothetical, pandemic-free year. The result of the algorithm became the student’s final grade.
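
The IB has not published its model, but the paragraph above names its inputs: coursework scores, teacher-predicted grades and the school’s historical IB results. The sketch below is a purely hypothetical weighted blend of those inputs; the function, the weights and the handling of the 1-7 scale are assumptions made for illustration, not a reconstruction of the actual algorithm.

```python
# Purely hypothetical sketch of a grade-estimation formula built from the three
# inputs named above. The weights and the blending rule are invented; the IB's
# actual algorithm has not been published and is not reproduced here.
def estimate_final_grade(coursework_score, teacher_predicted_grade,
                         school_historical_average,
                         coursework_weight=0.4, predicted_weight=0.4,
                         school_weight=0.2):
    """Blend the three signals into a guessed final grade on the IB 1-7 scale."""
    blended = (coursework_weight * coursework_score
               + predicted_weight * teacher_predicted_grade
               + school_weight * school_historical_average)
    return max(1, min(7, round(blended)))

# A strong student at a school with historically weak results gets pulled down.
print(estimate_final_grade(coursework_score=7,
                           teacher_predicted_grade=7,
                           school_historical_average=4))  # prints 6
```

Even in this toy version, the school-level term means that two students with identical individual records can receive different grades purely because of where they studied, which is the kind of mismatch described in the next paragraph.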

The results were catastrophic. Soon after the grades were released, serious mismatches emerged between expected grades based on a student’s prior performance, and those awarded by the algorithm. Because IB students’ university admissions are contingent upon their final grades, the unexpectedly poor grades generated for some resulted in scholarships and admissions offers being revoked…

The IB had alternatives. Instead, it could have used students’ actual academic performance and graded on a generous curve. It could have incorporated practice test grades, third-party moderation to minimize grading bias and teachers’ broad evaluations of student progress.

It could have engaged with universities on flexibly factoring in final grades into this year’s admissions decisions, as universities contemplate opening their now-virtual classes to more students to replace lost revenue.

It increasingly seems like the greatest potential of the power promised by predictive data lies in the realm of misuse.

For this year’s graduating class, who have already responded with grace and resilience in their final year of school, the automating away of their capacity and potential is an unfair and unwanted preview of the world they are graduating into….(More)”.