Too many AI researchers think real-world problems are not relevant


Essay by Hannah Kerner: “Any researcher who’s focused on applying machine learning to real-world problems has likely received a response like this one: “The authors present a solution for an original and highly motivating problem, but it is an application and the significance seems limited for the machine-learning community.”

These words are straight from a review I received for a paper I submitted to the NeurIPS (Neural Information Processing Systems) conference, a top venue for machine-learning research. I’ve seen the refrain time and again in reviews of papers where my coauthors and I presented a method motivated by an application, and I’ve heard similar stories from countless others.

This makes me wonder: If the community feels that aiming to solve high-impact real-world problems with machine learning is of limited significance, then what are we trying to achieve?

The goal of artificial intelligence (pdf) is to push forward the frontier of machine intelligence. In the field of machine learning, a novel development usually means a new algorithm or procedure, or—in the case of deep learning—a new network architecture. As others have pointed out, this hyperfocus on novel methods leads to a scourge of papers that report marginal or incremental improvements on benchmark data sets and exhibit flawed scholarship (pdf) as researchers race to top the leaderboard.

Meanwhile, many papers that describe new applications present both novel concepts and high-impact results. But even a hint of the word “application” seems to spoil the paper for reviewers. As a result, such research is marginalized at major conferences. Its authors’ only real hope is to have their papers accepted in workshops, which rarely get the same attention from the community.

This is a problem because machine learning holds great promise for advancing health, agriculture, scientific discovery, and more. The first image of a black hole was produced using machine learning. The most accurate predictions of protein structures, an important step for drug discovery, are made using machine learning. If others in the field had prioritized real-world applications, what other groundbreaking discoveries would we have made by now?

This is not a new revelation. To quote a classic paper titled “Machine Learning that Matters” (pdf), by NASA computer scientist Kiri Wagstaff: “Much of current machine learning research has lost its connection to problems of import to the larger world of science and society.” The same year that Wagstaff published her paper, a convolutional neural network called AlexNet won a high-profile competition for image recognition centered on the popular ImageNet data set, leading to an explosion of interest in deep learning. Unfortunately, the disconnect she described appears to have grown even worse since then….(More)”.

AI technologies — like police facial recognition — discriminate against people of colour


Jane Bailey et al at The Conversation: “…In his game-changing 1993 book, The Panoptic Sort, scholar Oscar Gandy warned that “complex technology [that] involves the collection, processing and sharing of information about individuals and groups that is generated through their daily lives … is used to coordinate and control their access to the goods and services that define life in the modern capitalist economy.” Law enforcement uses it to pluck suspects from the general public, and private organizations use it to determine whether we have access to things like banking and employment.

Gandy prophetically warned that, if left unchecked, this form of “cybernetic triage” would exponentially disadvantage members of equality-seeking communities — for example, groups that are racialized or socio-economically disadvantaged — both in terms of what would be allocated to them and how they might come to understand themselves.

Some 25 years later, we’re now living with the panoptic sort on steroids. And examples of its negative effects on equality-seeking communities abound, such as the false identification of Williams.

Pre-existing bias

This algorithmic sorting infiltrates the most fundamental aspects of everyday life, occasioning both direct and structural violence in its wake.

The direct violence experienced by Williams is immediately evident in the events surrounding his arrest and detention, and the individual harms he experienced are obvious and can be traced to the actions of police who chose to rely on the technology’s “match” to make an arrest. More insidious is the structural violence perpetrated through facial recognition technology and other digital technologies that rate, match, categorize and sort individuals in ways that magnify pre-existing discriminatory patterns.

Structural violence harms are less obvious and less direct, and cause injury to equality-seeking groups through the systematic denial of power, resources and opportunity. Simultaneously, structural violence increases direct risk and harm to individual members of those groups.

Predictive policing uses algorithmic processing of historical data to predict when and where new crimes are likely to occur, assigns police resources accordingly and embeds enhanced police surveillance into communities, usually in lower-income and racialized neighbourhoods. This increases the chances that any criminal activity — including less serious criminal activity that might otherwise prompt no police response — will be detected and punished, ultimately limiting the life chances of the people who live within that environment….(More)”.
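To make the feedback loop described above concrete, here is a minimal, hypothetical sketch in Python. It is not a description of any real predictive-policing product: the data, detection rates and patrol multiplier are invented purely to illustrate how allocating patrols by past detections can amplify pre-existing patterns.

```python
# A minimal, hypothetical sketch of the feedback loop described above.
# All data, rates and thresholds are invented for illustration; no real
# predictive-policing system works exactly this way.

from collections import Counter

# Historical "crime" records are really records of past *detections*,
# which already reflect where patrols were concentrated.
historical_detections = Counter({
    "neighbourhood_A": 120,  # historically over-patrolled, lower-income area
    "neighbourhood_B": 30,
    "neighbourhood_C": 25,
})

def predict_hotspots(detections, top_k=1):
    """Rank areas by past detections -- a stand-in for a predictive model."""
    return [area for area, _ in detections.most_common(top_k)]

def simulate_round(detections, underlying_rate=0.05, patrol_boost=3.0):
    """One cycle: send extra patrols to predicted hotspots, log new detections."""
    hotspots = predict_hotspots(detections)
    for area in detections:
        # Assume the underlying offence rate is the same everywhere, but the
        # probability of *detecting* an offence rises with patrol intensity.
        multiplier = patrol_boost if area in hotspots else 1.0
        detections[area] += int(100 * underlying_rate * multiplier)
    return hotspots

for round_number in range(5):
    hotspots = simulate_round(historical_detections)
    print(round_number, hotspots, dict(historical_detections))
# The initially over-patrolled area keeps "earning" more patrols, even though
# the assumed underlying offence rate is identical in every neighbourhood.
```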

Algorithmic Colonisation of Africa


Abeba Birhane at The Elephant: “The African equivalents of Silicon Valley’s tech start-ups can be found in every possible sphere of life around all corners of the continent—in “Sheba Valley” in Addis Abeba, “Yabacon Valley” in Lagos, and “Silicon Savannah” in Nairobi, to name a few—all pursuing “cutting-edge innovations” in sectors like banking, finance, healthcare, and education. They are headed by technologists and those in finance from both within and outside the continent who seemingly want to “solve” society’s problems, using data and AI to provide quick “solutions”. Yet the attempt to “solve” social problems with technology is exactly where problems arise. Complex cultural, moral, and political problems that are inherently embedded in history and context are reduced to problems that can be measured and quantified—matters that can be “fixed” with the latest algorithm.

As dynamic and interactive human activities and processes are automated, they are inherently simplified to the engineers’ and tech corporations’ subjective notions of what they mean. The reduction of complex social problems to a matter that can be “solved” by technology also treats people as passive objects for manipulation. Humans, however, far from being passive objects, are active meaning-seekers embedded in dynamic social, cultural, and historical backgrounds.

The discourse around “data mining”, “abundance of data”, and “data-rich continent” shows the extent to which the individual behind each data point is disregarded. This muting of the individual—a person with fears, emotions, dreams, and hopes—is symptomatic of how little attention is given to matters such as people’s well-being and consent, which should be the primary concerns if the goal is indeed to “help” those in need. Furthermore, this discourse of “mining” people for data is reminiscent of the coloniser’s attitude that declares humans as raw material free for the taking. Data is necessarily always about something and never about an abstract entity.

The collection, analysis, and manipulation of data potentially entails monitoring, tracking, and surveilling people. This necessarily impacts people directly or indirectly, whether it manifests as a change in their insurance premiums or a refusal of services. The erasure of the person behind each data point makes it easy to “manipulate behavior” or “nudge” users, often towards outcomes that are profitable for companies. Considerations around the wellbeing and welfare of the individual user, the long-term social impacts, and the unintended consequences of these systems on society’s most vulnerable are pushed aside, if they enter the equation at all. For companies that develop and deploy AI, at the top of the agenda is the collection of more data to develop profitable AI systems, rather than the welfare of individual people or communities. This is most evident in the FinTech sector, one of the most prominent digital markets in Africa. People’s digital footprints, from their interactions with others to how much they spend on their mobile top-ups, are continually surveyed and monitored to form data for making loan assessments. Smartphone data from browsing history, likes, and locations is recorded, forming the basis for assessing a borrower’s creditworthiness.
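As a rough illustration of the kind of pipeline this passage describes, the sketch below turns a handful of invented “digital footprint” signals into a loan decision. The feature names, weights and approval threshold are assumptions made up for this example; they do not describe any actual lender’s model.

```python
# A hypothetical sketch of a digital-footprint credit-scoring pipeline.
# Features, weights and the threshold are invented for illustration only.

from dataclasses import dataclass

@dataclass
class DigitalFootprint:
    monthly_topup_amount: float   # mobile airtime spending
    contacts_count: int           # size of phone contact list
    night_browsing_share: float   # fraction of browsing done at night
    gambling_site_visits: int     # visits to betting sites

def credit_score(fp: DigitalFootprint) -> float:
    """Combine behavioural signals into a single score in [0, 1]."""
    score = 0.5
    score += 0.002 * min(fp.monthly_topup_amount, 100)  # spending as an income proxy
    score += 0.001 * min(fp.contacts_count, 200)         # social connectedness
    score -= 0.3 * fp.night_browsing_share               # "irregular" behaviour penalised
    score -= 0.02 * fp.gambling_site_visits
    return max(0.0, min(1.0, score))

def loan_decision(fp: DigitalFootprint, threshold: float = 0.55) -> str:
    return "approve" if credit_score(fp) >= threshold else "reject"

applicant = DigitalFootprint(
    monthly_topup_amount=40.0,
    contacts_count=80,
    night_browsing_share=0.6,  # e.g. a night-shift worker
    gambling_site_visits=0,
)
print(loan_decision(applicant), round(credit_score(applicant), 2))  # reject 0.48
```

Note how an arbitrary behavioural proxy (night-time browsing, here standing in for “irregularity”) is enough to swing the decision, while the person behind the data point never enters the calculation.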

Artificial Intelligence technologies that aid decision-making in the social sphere are, for the most part, developed and implemented by the private sector, whose primary aim is to maximise profit. Protecting individual privacy rights and cultivating a fair society are therefore the least of their concerns, especially if such practice gets in the way of “mining” data, building predictive models, and pushing products to customers. As decision-making about social outcomes is handed over to predictive systems developed by profit-driven corporations, not only are we allowing our social concerns to be dictated by corporate incentives, but we are also allowing moral questions to be dictated by corporate interest.

“Digital nudges”, behaviour modifications developed to suit commercial interests, are a prime example. As “nudging” mechanisms become the norm for “correcting” individuals’ behaviour, eating habits, or exercise routines, those developing predictive models are bestowed with the power to decide what “correct” is. In the process, individuals who do not fit our stereotypical ideas of a “fit body”, “good health”, and “good eating habits” end up being punished, outcast, and pushed further to the margins. When these models are imported as state-of-the-art technology that will save money and “leapfrog” the continent into development, Western values and ideals are enforced, whether deliberately or unintentionally….(More)”.

Politics without Politicians


Nathan Heller at the New Yorker: “Imagine being a citizen of a diverse, wealthy, democratic nation filled with eager leaders. At least once a year—in autumn, say—it is your right and civic duty to go to the polls and vote. Imagine that, in your country, this act is held to be not just an important task but an essential one; the government was designed at every level on the premise of democratic choice. If nobody were to show up to vote on Election Day, the superstructure of the country would fall apart.

So you try to be responsible. You do your best to stay informed. When Election Day arrives, you make the choices that, as far as you can discern, are wisest for your nation. Then the results come with the morning news, and your heart sinks. In one race, the candidate you were most excited about, a reformer who promised to clean up a dysfunctional system, lost to the incumbent, who had an understanding with powerful organizations and ultra-wealthy donors. Another politician, whom you voted into office last time, has failed to deliver on her promises, instead making decisions in lockstep with her party and against the polls. She was reëlected, apparently with her party’s help. There is a notion, in your country, that the democratic structure guarantees a government by the people. And yet, when the votes are tallied, you feel that the process is set up to favor interests other than the people’s own.

What corrective routes are open? One might wish for pure direct democracy—no body of elected representatives, each citizen voting on every significant decision about policies, laws, and acts abroad. But this seems like a nightmare of majoritarian tyranny and procedural madness: How is anyone supposed to haggle about specifics and go through the dialogue that shapes constrained, durable laws? Another option is to focus on influencing the organizations and business interests that seem to shape political outcomes. But that approach, with its lobbyists making backroom deals, goes against the promise of democracy. Campaign-finance reform might clean up abuses. But it would do nothing to insure that a politician who ostensibly represents you will be receptive to hearing and acting on your thoughts….(More)”.

This app is helping mothers in the Brazilian favelas survive the pandemic



Daniel Avelar at Open Democracy: “As Brazil faces one of the worst COVID-19 outbreaks in the world, a smartphone app is helping residents of impoverished areas known as favelas survive the virus threat amid sudden mass unemployment.

So far, the Latin American country has recorded over 115,000 deaths caused by COVID-19. The shutdown of economic activity wiped out 7.8 million jobs, mostly affecting low-skilled informal workers who form the bulk of the population in the favelas. Emergency income distributed by the government is limited to 60% of the minimum wage, so families are struggling to make ends meet.

Many blame president Jair Bolsonaro for the tragedy. Bolsonaro, a far-right populist, has consistently railed against science-based policies in the management of the pandemic and pushed for an end to stay-at-home orders. A premature reopening of the economy is likely to increase infection rates and cause more deaths.

In an attempt to stop the looming humanitarian catastrophe, a coalition of activists in the favelas and corporate partners developed an app that is facilitating the distribution of food and emergency income to thousands of women who head households. The app has a facial recognition feature that helps volunteers identify and register recipients of aid and prevents fraud.

So far, the Favela Mothers project has distributed the equivalent of US$26 million in food parcels and cash allowances to more than 1.1 million families in 5,000 neighborhoods across the country….(More)”.

EU risks being dethroned as world’s lead digital regulator


Marietje Schaake at the Financial Times: “With a series of executive orders, US president Donald Trump has quickly changed the digital regulatory game. His administration has adopted unprecedented sanctions against the Chinese technology group Huawei; next on the list of likely targets is the Chinese ecommerce group Alibaba.

The TikTok takeover saga continues, since the president this month ordered the sale of its US operations within 90 days. The administration’s Clean Network programme also claims to protect privacy by keeping “unsafe” companies out of US cable, cloud and app infrastructure. Engaging with a shared privacy agenda, which the EU has enshrined in law, would be a constructive step.

Instead, US secretary of state Mike Pompeo has prioritised warnings about the dangers posed by Huawei to individual EU member states during a recent visit. Yet these unilateral American actions also highlight weaknesses in Europe’s own preparedness and unity on issues of national security in the digital world. Beyond emphasising fundamental rights and economic rules, Europe must move fast if it does not want to see other global actors draw the road maps of regulation.

Recent years have seen the acceleration of national security arguments to restrict market access for global technology companies. Decisions on bans and sanctions tend to rely on the type of executive power that the EU lacks, especially in the national security domain. The bloc has never fully developed a common security policy — and deliberately so. In its white paper on artificial intelligence, the European Commission explicitly omits AI in the military context, and European geopolitical clout remains underused by politicians keen to advance their national postures.

Tensions between the promise of a digital single market and the absence of a common approach to security were revealed in fragmented responses to 5G concerns, as well as foreign acquisitions of strategic tech companies. This ad hoc policy toolbox may well prove inadequate to build the co-ordination needed for a forceful European strategy. The US tussle with TikTok and Huawei should be a lesson to European politicians on their approach to regulating tech.

A confident Europe might argue that concerns about terabytes of the most intimate information being shared with foreign companies were promptly met with the EU’s general data protection regulations. A more critical voice would counter that Europe does not appreciate the risks of integrating Chinese tech into 5G networks, and that its narrow focus on fundamental rights and market regulations in the digital world was always naive.

Either way, now that geopolitics is integrating with tech policy, the EU risks being dethroned as the lead regulator of the digital world. In many ways it is remarkable that a reckoning took this long. For decades, online products and services have evaded restrictions on their reach into global communities. But the long-anticipated collision of geopolitics and technological disruption is finally here. It will do significant collateral damage to the open internet.

The challenge for democracies is to preserve their own core values and interests, along with the benefits of an open, global internet. A series of nationalistic bans and restrictions will not achieve these goals. Instead it will unleash a digital trade war at the expense of internet users worldwide…(More)”.

An algorithm shouldn’t decide a student’s future


Hye Jung Han at Politico: “…Education systems across Europe struggled this year with how to determine students’ all-important final grades. But one system, the International Baccalaureate (“IB”) — a high school program that is highly regarded by European universities, and offered by both public and private schools in 152 countries — did something unusual.

Having canceled final exams, which make up the majority of an IB student’s grade, the Geneva-based foundation of the same name hastily built an algorithm that used a student’s coursework scores, predicted grades by teachers and their school’s historical IB results to guess what students might have scored if they had taken their exams in a hypothetical, pandemic-free year. The result of the algorithm became the student’s final grade.
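The IB has not published the exact formula, but the general approach the paragraph describes can be sketched roughly as follows. The weights and the use of a school’s historical average are illustrative assumptions only, chosen to show how a school’s past cohorts can drag down an individual student’s result.

```python
# A minimal, hypothetical sketch of the general approach described above.
# The weights and the use of the school's historical distribution are
# illustrative assumptions, not the IB's actual (unpublished) formula.

from statistics import mean

def estimate_final_grade(coursework_score: float,
                         teacher_predicted_grade: float,
                         school_historical_grades: list,
                         weights=(0.4, 0.3, 0.3)) -> int:
    """Blend three signals into a 1-7 IB-style grade."""
    w_coursework, w_teacher, w_school = weights
    school_baseline = mean(school_historical_grades)
    blended = (w_coursework * coursework_score
               + w_teacher * teacher_predicted_grade
               + w_school * school_baseline)
    return max(1, min(7, round(blended)))

# A strong student at a school with historically weak results...
print(estimate_final_grade(7, 7, [4, 4, 5, 3, 4]))  # 6 -- pulled down by past cohorts
# ...versus the same profile at a school with historically strong results.
print(estimate_final_grade(7, 7, [6, 7, 7, 6, 7]))  # 7
```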

The results were catastrophic. Soon after the grades were released, serious mismatches emerged between expected grades based on a student’s prior performance and those awarded by the algorithm. Because IB students’ university admissions are contingent upon their final grades, the unexpectedly poor grades generated for some resulted in scholarships and admissions offers being revoked.

The IB had alternatives. It could have used students’ actual academic performance and graded on a generous curve. It could have incorporated practice test grades, third-party moderation to minimize grading bias and teachers’ broad evaluations of student progress.

It could have engaged with universities on flexibly factoring in final grades into this year’s admissions decisions, as universities contemplate opening their now-virtual classes to more students to replace lost revenue.

It increasingly seems that the greatest potential of the power promised by predictive data lies in the realm of misuse.

For this year’s graduating class, who have already responded with grace and resilience in their final year of school, the automating away of their capacity and potential is an unfair and unwanted preview of the world they are graduating into….(More)”.

‘Telegram revolution’: App helps drive Belarus protests


Daria Litvinova at AP News: “Every day, like clockwork, to-do lists for those protesting against Belarus’ authoritarian leader appear in the popular Telegram messaging app. They lay out goals, give times and locations of rallies with business-like precision, and offer spirited encouragement.

“Today will be one more important day in the fight for our freedom. Tectonic shifts are happening on all fronts, so it’s important not to slow down,” a message in one of Telegram’s so-called channels read Tuesday. “Morning. Expanding the strike … 11:00. Supporting the Kupala (theater) … 19:00. Gathering at the Independence Square.”

The app has become an indispensable tool in coordinating the unprecedented mass protests that have rocked Belarus since Aug. 9, when election officials announced President Alexander Lukashenko had won a landslide victory to extend his 26-year rule in a vote widely seen as rigged.

Peaceful protesters who poured into the streets of the capital, Minsk, and other cities were met with stun grenades, rubber bullets and beatings from police. The opposition candidate left for Lithuania — under duress, her campaign said — and authorities shut off the internet, leaving Belarusians with almost no access to independent online news outlets or social media and protesters seemingly without a leader.

That’s where Telegram — which often remains available despite internet outages, touts the security of messages shared in the app and has been used in other protest movements — came in. Some of its channels helped scattered rallies to mature into well-coordinated action.

The people who run the channels, which used to offer political news, now post updates, videos and photos of the unfolding turmoil sent in from users, locations of heavy police presence, contacts of human rights activists, and outright calls for new demonstrations — something Belarusian opposition leaders have refrained from doing publicly themselves. Tens of thousands of people all across the country have responded to those calls.

In a matter of days, the channels — NEXTA, NEXTA Live and Belarus of the Brain are the most popular — have become the main method for facilitating the protests, said Franak Viacorka, a Belarusian analyst and non-resident fellow at the Atlantic Council….(More)”.

Blame the politicians, not the technology, for A-level fiasco


The Editorial Board at the Financial Times: “The soundtrack of school students marching through Britain’s streets shouting “f*** the algorithm” captured the sense of outrage surrounding the botched awarding of A-level exam grades this year. But the students’ anger towards a disembodied computer algorithm is misplaced. This was a human failure. The algorithm used to “moderate” teacher-assessed grades had no agency and delivered exactly what it was designed to do.

It is politicians and educational officials who are responsible for the government’s latest fiasco and should be the target of students’ criticism….

Sensibly designed, computer algorithms could have been used to moderate teacher assessments in a constructive way. Using past school performance data, they could have highlighted anomalies in the distribution of predicted grades between and within schools. That could have led to a dialogue between Ofqual, the exam regulator, and anomalous schools to come up with more realistic assessments….
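A sketch of what that constructive use might look like is below: flag schools whose teacher-predicted grade distributions diverge sharply from their own historical results, then refer the anomalies to human reviewers rather than adjusting grades automatically. The data and threshold are invented for illustration.

```python
# An illustrative sketch: flag anomalous schools for human review rather than
# automatically "moderating" grades. Data and the z-score threshold are invented.

from statistics import mean, pstdev

def flag_anomalous_schools(predicted, historical, z_threshold=2.0):
    """Return schools whose mean predicted grade is an outlier versus their history."""
    flagged = []
    for school, preds in predicted.items():
        hist = historical[school]
        hist_mean, hist_sd = mean(hist), pstdev(hist)
        if hist_sd == 0:
            continue  # no recorded variation; nothing sensible to compare against
        z = (mean(preds) - hist_mean) / hist_sd
        if abs(z) > z_threshold:
            flagged.append(school)
    return flagged

historical_means = {
    "school_A": [5.1, 5.0, 5.2, 4.9],  # past years' average results (numeric scale)
    "school_B": [4.0, 4.2, 3.9, 4.1],
}
teacher_predictions_2020 = {
    "school_A": [5.0, 5.3, 5.1],       # broadly in line with history
    "school_B": [6.5, 6.8, 6.7],       # far above history -- worth a conversation
}

print(flag_anomalous_schools(teacher_predictions_2020, historical_means))
# ['school_B'] -- a prompt for dialogue with the school, not an automatic downgrade
```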

There are broader lessons to be drawn from the government’s algo fiasco about the dangers of automated decision-making systems. The inappropriate use of such systems to assess immigration status, policing policies and prison sentencing decisions is a live danger. In the private sector, incomplete and partial data sets can also significantly disadvantage under-represented groups when it comes to hiring decisions and performance measures.

Given the severe erosion of public trust in the government’s use of technology, it might now be advisable to subject all automated decision-making systems to critical scrutiny by independent experts. The Royal Statistical Society and The Alan Turing Institute certainly have the expertise to give a Kitemark of approval or flag concerns.

As ever, technology in itself is neither good nor bad. But it is certainly not neutral. The more we deploy automated decision-making systems, the smarter we must become in considering how best to use them and in scrutinising their outcomes. We often talk about a deficit of trust in our societies. But we should also be aware of the dangers of over-trusting technology. That may be a good essay subject for next year’s philosophy A-level….(More)”.

No more gut-based strategies: Using evidence to solve the digital divide


Gregory Rosston and Scott J. Wallsten at the Hill: “COVID-19 has, among other things, brought home the costs of the digital divide. Numerous op-eds have offered solutions, including increasing subsidies to schools, providing eligible low-income people with a $50 per month broadband credit, funding more digital literacy classes and putting WiFi on school buses. A House bill would allocate $80 billion to ideas meant to close the digital divide.

The key missing component of nearly every proposal to solve the connectivity problem is evidence — evidence suggesting the ideas are likely to work and ways to use evidence in the future to evaluate whether they did work. Otherwise, we are likely throwing money away. Understanding what works and what doesn’t requires data collection and research now and in the future….

Consider President Trump’s belief in hydroxychloroquine as a cure for the novel coronavirus, based simply on his “gut.” That resulted in the government ordering the drug to be produced and distributed to hospitals, and putting 63 million doses into a strategic national stockpile.

The well-meaning folks offering up multi-billion dollar broadband plans probably recognize the foolhardiness of the president’s gut-check approach to guiding virus treatment plans. But so far, policy makers and advocates are promoting their own gut beliefs that their proposals will treat the digital divide. An evidence-free approach is likely to cost billions of dollars more and connect fewer people than an evidence-based approach.

It doesn’t have to be this way. The pandemic not only laid bare the implications of the digital divide; it also created a laboratory for studying how best to bridge the divide. The most immediate problem was how to help kids without home broadband attend distance learning classes. Schools had no time to formally study different options — it was a race to find anything that might help. As a result, schools inadvertently ran thousands of concurrent experiments around the country….(More)”.