Stefaan Verhulst
Paper by Susanne Beck et al: “Openness and collaboration in scientific research are attracting increasing attention from scholars and practitioners alike. However, a common understanding of these phenomena is hindered by disciplinary boundaries and disconnected research streams. We link dispersed knowledge on Open Innovation, Open Science, and related concepts such as Responsible Research and Innovation by proposing a unifying Open Innovation in Science (OIS) Research Framework. This framework captures the antecedents, contingencies, and consequences of open and collaborative practices along the entire process of generating and disseminating scientific insights and translating them into innovation. Moreover, it elucidates individual-, team-, organisation-, field-, and society‐level factors shaping OIS practices. To conceptualise the framework, we employed a collaborative approach involving 47 scholars from multiple disciplines, highlighting both tensions and commonalities between existing approaches. The OIS Research Framework thus serves as a basis for future research, informs policy discussions, and provides guidance to scientists and practitioners….(More)”.
Simon Roberts and Jason Bell at The Conversation: “The COVID-19 pandemic and consequent lockdown measures have had a huge negative impact on producers and consumers. Food production has been disrupted, and incomes have been lost. But a far more devastating welfare consequence of the pandemic could be reduced access to food.
A potential rise in food insecurity is a key policy point for many countries. The World Economic Forum has stated this pandemic is set to “radically exacerbate food insecurity in Africa”. This, and other supply shocks such as locust swarms in East Africa, have made many African economies more dependent on externally sourced food.
As the pandemic continues to spread, the continued functioning of regional and national food supply chains is vital to avoid a food security crisis in countries dependent on agriculture. This is true in terms of both nutrition and livelihoods. Many Southern and East African economies are in this situation.
The integration of regional economies is one vehicle for alleviating pervasive food security issues. But regional integration can’t be achieved without the appropriate support for investment in production, infrastructure and capabilities.
And, crucially, there must be more accurate and timely information about food markets. Data on food prices are crucial for political and economic stability. Yet they are not easily accessible.
A study by the Centre for Competition, Regulation and Economic Development highlights how poor and inconsistent pricing data severely affects the quality of any assessment of agricultural markets in the Southern and East African region….(More)”
Book by George Dyson: “In 1716, the philosopher and mathematician Gottfried Wilhelm Leibniz spent eight days taking the cure with Peter the Great at Bad Pyrmont in Saxony, seeking to initiate a digitally-computed takeover of the world. In his classic books, Darwin Among the Machines and Turing’s Cathedral, Dyson chronicled the realization of Leibniz’s dream at the hands of a series of iconoclasts who brought his ideas to life. Now, in his pathbreaking new book, Analogia, he offers a chronicle of people who fought for the other side—the Native American leader Geronimo and physicist Leo Szilard, among them—a series of stories that will change our view not only of the past but also of the future.
The convergence of a startling historical archaeology with Dyson’s unusual personal story—set alternately in the rarified world of cutting-edge physics and computer science, in Princeton, and in the rainforest of the Northwest Coast—leads to a prophetic vision of an analog revolution already under way. We are, Dyson reveals, on the cusp of a new moment in human history, driven by a generation of machines whose powers are beyond programmable control…(More)”.
Essay by Hannah Kerner: “Any researcher who’s focused on applying machine learning to real-world problems has likely received a response like this one: “The authors present a solution for an original and highly motivating problem, but it is an application and the significance seems limited for the machine-learning community.”
These words are straight from a review I received for a paper I submitted to the NeurIPS (Neural Information Processing Systems) conference, a top venue for machine-learning research. I’ve seen the refrain time and again in reviews of papers where my coauthors and I presented a method motivated by an application, and I’ve heard similar stories from countless others.
This makes me wonder: If the community feels that aiming to solve high-impact real-world problems with machine learning is of limited significance, then what are we trying to achieve?
The goal of artificial intelligence (pdf) is to push forward the frontier of machine intelligence. In the field of machine learning, a novel development usually means a new algorithm or procedure, or—in the case of deep learning—a new network architecture. As others have pointed out, this hyperfocus on novel methods leads to a scourge of papers that report marginal or incremental improvements on benchmark data sets and exhibit flawed scholarship (pdf) as researchers race to top the leaderboard.
Meanwhile, many papers that describe new applications present both novel concepts and high-impact results. But even a hint of the word “application” seems to spoil the paper for reviewers. As a result, such research is marginalized at major conferences. Its authors’ only real hope is to have their papers accepted in workshops, which rarely get the same attention from the community.
This is a problem because machine learning holds great promise for advancing health, agriculture, scientific discovery, and more. The first image of a black hole was produced using machine learning. The most accurate predictions of protein structures, an important step for drug discovery, are made using machine learning. If others in the field had prioritized real-world applications, what other groundbreaking discoveries would we have made by now?
This is not a new revelation. To quote a classic paper titled “Machine Learning that Matters” (pdf), by NASA computer scientist Kiri Wagstaff: “Much of current machine learning research has lost its connection to problems of import to the larger world of science and society.” The same year that Wagstaff published her paper, a convolutional neural network called AlexNet won a high-profile competition for image recognition centered on the popular ImageNet data set, leading to an explosion of interest in deep learning. Unfortunately, the disconnect she described appears to have grown even worse since then….(More)”.
Jane Bailey et al at The Conversation: “…In his game-changing 1993 book, The Panoptic Sort, scholar Oscar Gandy warned that “complex technology [that] involves the collection, processing and sharing of information about individuals and groups that is generated through their daily lives … is used to coordinate and control their access to the goods and services that define life in the modern capitalist economy.” Law enforcement uses it to pluck suspects from the general public, and private organizations use it to determine whether we have access to things like banking and employment.
Gandy prophetically warned that, if left unchecked, this form of “cybernetic triage” would exponentially disadvantage members of equality-seeking communities — for example, groups that are racialized or socio-economically disadvantaged — both in terms of what would be allocated to them and how they might come to understand themselves.
Some 25 years later, we’re now living with the panoptic sort on steroids. And examples of its negative effects on equality-seeking communities abound, such as the false identification of Williams.
Pre-existing bias
This sorting using algorithms infiltrates the most fundamental aspects of everyday life, occasioning both direct and structural violence in its wake.
The direct violence experienced by Williams is immediately evident in the events surrounding his arrest and detention, and the individual harms he experienced are obvious and can be traced to the actions of police who chose to rely on the technology’s “match” to make an arrest. More insidious is the structural violence perpetrated through facial recognition technology and other digital technologies that rate, match, categorize and sort individuals in ways that magnify pre-existing discriminatory patterns.
Structural violence harms are less obvious and less direct, causing injury to equality-seeking groups through the systematic denial of power, resources and opportunity. Simultaneously, structural violence increases direct risk and harm to individual members of those groups.
Predictive policing uses algorithmic processing of historical data to predict when and where new crimes are likely to occur, assigns police resources accordingly and embeds enhanced police surveillance into communities, usually in lower-income and racialized neighbourhoods. This increases the chances that any criminal activity — including less serious criminal activity that might otherwise prompt no police response — will be detected and punished, ultimately limiting the life chances of the people who live within that environment….(More)”.
Data2X: “This report builds on our 2019 technical report, Bridging the Gap: Mapping Gender Data Availability in Africa, but shifts the geographic focus to Latin America and the Caribbean (LAC).
It reports on the availability of gender data in Colombia, Costa Rica, the Dominican Republic, Jamaica, and Paraguay at the international, national, and microdata levels, and it assesses the availability of 93 gender indicators, their disaggregations, and their frequency of observation in international and national databases and publications.
Additionally, with the assistance of the UN Economic Commission for Latin America (ECLAC), the report documents the availability of statistical indicators to support gender development plans in the five countries.
Through this report, we hope to help move the development community one step closer to producing high-quality and policy-relevant gender indicators to inform better decisions….Read the report.”
About: “Over the past seven months the team at the Change.org Foundation have been working from home to support campaigns created in response to COVID-19. During this unprecedented time in history, millions of people, more than ever before, used our platform to share their stories and fight for their communities.
The Pandemic Report 2020 is born out of the need to share those stories with the world. We assembled a cross-functional team within the Foundation to dig into our platform data. We spotted trends, followed patterns and learned from the analysis we collected from country teams.
This work started with the hypothesis that the Coronavirus pandemic may have started a new chapter in digital activism history.
The data points to a new era, with the pandemic acting as a catalyst for citizen engagement worldwide….(More)”.
Book by Cass Sunstein: “How information can make us happy or miserable, and why we sometimes avoid it and sometimes seek it out.
How much information is too much? Do we need to know how many calories are in the giant vat of popcorn that we bought on our way into the movie theater? Do we want to know if we are genetically predisposed to a certain disease? Can we do anything useful with next week’s weather forecast for Paris if we are not in Paris? In Too Much Information, Cass Sunstein examines the effects of information on our lives. Policymakers emphasize “the right to know,” but Sunstein takes a different perspective, arguing that the focus should be on human well-being and what information contributes to it. Government should require companies, employers, hospitals, and others to disclose information not because of a general “right to know” but when the information in question would significantly improve people’s lives.
Sunstein argues that the information on warnings and mandatory labels is often confusing or irrelevant, yielding no benefit. He finds that people avoid information if they think it will make them sad (and seek information they think will make them happy). Our information avoidance and information seeking is notably heterogeneous—some of us do want to know the popcorn calorie count, others do not. Of course, says Sunstein, we are better off with stop signs, warnings on prescription drugs, and reminders about payment due dates. But sometimes less is more. What we need is more clarity about what information is actually doing or achieving…(More)”
Abeba Birhane at The Elephant: “The African equivalents of Silicon Valley’s tech start-ups can be found in every possible sphere of life in all corners of the continent—in “Sheba Valley” in Addis Abeba, “Yabacon Valley” in Lagos, and “Silicon Savannah” in Nairobi, to name a few—all pursuing “cutting-edge innovations” in sectors like banking, finance, healthcare, and education. They are headed by technologists and financiers from both within and outside the continent who seemingly want to “solve” society’s problems, using data and AI to provide quick “solutions”. Yet the attempt to “solve” social problems with technology is exactly where problems arise. Complex cultural, moral, and political problems that are inherently embedded in history and context are reduced to problems that can be measured and quantified—matters that can be “fixed” with the latest algorithm.
As dynamic and interactive human activities and processes are automated, they are inherently simplified to the engineers’ and tech corporations’ subjective notions of what they mean. The reduction of complex social problems to a matter that can be “solved” by technology also treats people as passive objects for manipulation. Humans, however, far from being passive objects, are active meaning-seekers embedded in dynamic social, cultural, and historical backgrounds.
The discourse around “data mining”, “abundance of data”, and “data-rich continent” shows the extent to which the individual behind each data point is disregarded. This muting of the individual—a person with fears, emotions, dreams, and hopes—is symptomatic of how little attention is given to matters such as people’s well-being and consent, which should be the primary concerns if the goal is indeed to “help” those in need. Furthermore, this discourse of “mining” people for data is reminiscent of the coloniser’s attitude that declares humans as raw material free for the taking. Data is necessarily always about something and never about an abstract entity.
The collection, analysis, and manipulation of data potentially entails monitoring, tracking, and surveilling people. This necessarily impacts people directly or indirectly, whether it manifests as a change in their insurance premiums or a refusal of services. The erasure of the person behind each data point makes it easy to “manipulate behavior” or “nudge” users, often towards profitable outcomes for companies. Considerations around the wellbeing and welfare of the individual user, the long-term social impacts, and the unintended consequences of these systems on society’s most vulnerable are pushed aside, if they enter the equation at all. For companies that develop and deploy AI, at the top of the agenda is the collection of more data to develop profitable AI systems rather than the welfare of individual people or communities. This is most evident in the FinTech sector, one of the prominent digital markets in Africa. People’s digital footprints, from their interactions with others to how much they spend on their mobile top-ups, are continually surveyed and monitored to form data for loan assessments. Smartphone data from browsing history, likes, and locations are recorded, forming the basis for assessments of a borrower’s creditworthiness.
Artificial Intelligence technologies that aid decision-making in the social sphere are, for the most part, developed and implemented by the private sector whose primary aim is to maximise profit. Protecting individual privacy rights and cultivating a fair society is therefore the least of their concerns, especially if such practice gets in the way of “mining” data, building predictive models, and pushing products to customers. As decision-making of social outcomes is handed over to predictive systems developed by profit-driven corporates, not only are we allowing our social concerns to be dictated by corporate incentives, we are also allowing moral questions to be dictated by corporate interest.
“Digital nudges”, behaviour modifications developed to suit commercial interests, are a prime example. As “nudging” mechanisms become the norm for “correcting” individuals’ behaviour, eating habits, or exercise routines, those developing predictive models are bestowed with the power to decide what “correct” is. In the process, individuals who do not fit our stereotypical ideas of a “fit body”, “good health”, and “good eating habits” end up being punished, outcast, and pushed further to the margins. When these models are imported as state-of-the-art technology that will save money and “leapfrog” the continent into development, Western values and ideals are enforced, whether deliberately or unintentionally….(More)”.
Nathan Heller at the New Yorker: “Imagine being a citizen of a diverse, wealthy, democratic nation filled with eager leaders. At least once a year—in autumn, say—it is your right and civic duty to go to the polls and vote. Imagine that, in your country, this act is held to be not just an important task but an essential one; the government was designed at every level on the premise of democratic choice. If nobody were to show up to vote on Election Day, the superstructure of the country would fall apart.
So you try to be responsible. You do your best to stay informed. When Election Day arrives, you make the choices that, as far as you can discern, are wisest for your nation. Then the results come with the morning news, and your heart sinks. In one race, the candidate you were most excited about, a reformer who promised to clean up a dysfunctional system, lost to the incumbent, who had an understanding with powerful organizations and ultra-wealthy donors. Another politician, whom you voted into office last time, has failed to deliver on her promises, instead making decisions in lockstep with her party and against the polls. She was reëlected, apparently with her party’s help. There is a notion, in your country, that the democratic structure guarantees a government by the people. And yet, when the votes are tallied, you feel that the process is set up to favor interests other than the people’s own.
What corrective routes are open? One might wish for pure direct democracy—no body of elected representatives, each citizen voting on every significant decision about policies, laws, and acts abroad. But this seems like a nightmare of majoritarian tyranny and procedural madness: How is anyone supposed to haggle about specifics and go through the dialogue that shapes constrained, durable laws? Another option is to focus on influencing the organizations and business interests that seem to shape political outcomes. But that approach, with its lobbyists making backroom deals, goes against the promise of democracy. Campaign-finance reform might clean up abuses. But it would do nothing to insure that a politician who ostensibly represents you will be receptive to hearing and acting on your thoughts….(More)”.