The open source movement takes on climate data


Article by Heather Clancy: “…many companies are moving to disclose “climate risk,” although far fewer are moving to actually minimize it. And as those tasked with preparing those reports can attest, the process of gathering the data for them is frustrating and complex, especially as the level of detail desired and required by investors becomes deeper.

That pain point was the inspiration for a new climate data project launched this week that will be spearheaded by the Linux Foundation, the nonprofit host organization for thousands of the most influential open source software and data initiatives in the world such as GitHub. The foundation is central to the evolution of the Linux software that runs in the back offices of most major financial services firms. 

There are four powerful founding members of the new group, the LF Climate Finance Foundation (LFCF): Insurance and asset management company Allianz, cloud software giants Amazon and Microsoft, and data intelligence powerhouse S&P Global. The foundation’s “planning team” includes the World Wide Fund for Nature (WWF), Ceres and the Sustainability Accounting Standards Board (SASB).

The group’s intention is to collaborate on an open source project called the OS-Climate platform, which will include economic and physical risk scenarios that investors, regulators, companies, financial analysts and others can use for their analysis. 

The idea is to create a “public service utility” where certain types of climate data can be accessed easily, then combined with other, more proprietary information that someone might be using for risk analysis, according to Truman Semans, CEO of OS-Climate, who was instrumental in getting the effort off the ground. “There are a whole lot of initiatives out there that address pieces of the puzzle, but no unified platform to allow those to interoperate,” he told me.

Why does this matter? It helps to understand the history of open source software, which was once a thing that many powerful software companies, notably Microsoft, abhorred because they were worried about the financial hit on their intellectual property. Flash forward to today and the open source software movement, “staffed” by literally millions of software developers, is credited with accelerating the creation of common system-level elements so that companies can focus their own resources on solving problems directly related to their business.

In short, this budding effort could make the right data available more quickly, so that businesses — particularly financial institutions — can make better informed decisions.

Or, as Microsoft’s chief intellectual property counsel, Jennifer Yokoyama, observed in the announcement press release: “Addressing climate issues in a meaningful way requires people and organizations to have access to data to better understand the impact of their actions. Opening up and sharing our contribution of significant and relevant sustainability data through the LF Climate Finance Foundation will help advance the financial modeling and understanding of climate change impact — an important step in affecting political change. We’re excited to collaborate with the other founding members and hope additional organizations will join.”…(More)”

The Pandemic Is No Excuse to Surveil Students


 Zeynep Tufekci in the Atlantic: “In Michigan, a small liberal-arts college is requiring students to install an app called Aura, which tracks their location in real time, before they come to campus. Oakland University, also in Michigan, announced a mandatory wearable that would track symptoms, but, facing a student-led petition, then said it would be optional. The University of Missouri, too, has an app that tracks when students enter and exit classrooms. This practice is spreading: In an attempt to open during the pandemic, many universities and colleges around the country are forcing students to download location-tracking apps, sometimes as a condition of enrollment. Many of these apps function via Bluetooth sensors or Wi-Fi networks. When students enter a classroom, their phone informs a sensor that’s been installed in the room, or the app checks the Wi-Fi networks nearby to determine the phone’s location.

As a university professor, I’ve seen surveillance like this before. Many of these apps replicate the tracking system sometimes installed on the phones of student athletes, for whom it is often mandatory. That system tells us a lot about what we can expect with these apps.

There is a widespread charade in the United States that university athletes, especially those who play high-profile sports such as football and basketball, are just students who happen to be playing sports as amateurs “in their free time.” The reality is that these college athletes in high-level sports, who are aggressively recruited by schools, bring prestige and financial resources to universities, under a regime that requires them to train like professional athletes despite their lack of salary. However, making the most of one’s college education and training at that level are virtually incompatible, simply because the day is 24 hours long and the body, even that of a young, healthy athlete, can only take so much when training so hard. Worse, many of these athletes are minority students, specifically Black men, who were underserved during their whole K–12 education and faced the same challenge then as they do now: Train hard in hopes of a scholarship and try to study with what little time is left, often despite being enrolled in schools with mediocre resources. Many of them arrive at college with an athletic scholarship but not enough academic preparation compared with their peers who went to better schools and could also concentrate on schooling….(More)”

The New Net Delusion


Essay by Geoff Shullenberger: “How 2010’s digital utopians became 2020’s tech prophets of doom…In June 2009, large protests broke out in Iran in the wake of a disputed election result. The unrest did not differ all that much from comparable episodes that had occurred elsewhere in the world over the preceding decades, but many Western observers became convinced that new digital platforms like Twitter and Facebook were propelling the movement. By the time the Arab Spring kicked off with an anti-government uprising in Tunisia the following year, the belief had become widespread that social media was fomenting insurgencies for liberalization in authoritarian regimes.

The most vigorous dissenter from this cheerful consensus was technology critic Evgeny Morozov, whose 2011 book The Net Delusion inveighed against the “cyber-utopianism” then common among academics, bloggers, journalists, activists, and policymakers. For Morozov, cyber-utopians were captive to a “naïve belief in the emancipatory nature of online communication that rests on a stubborn refusal to acknowledge its downside”…(More)”.

Covid-19 is spurring the digitisation of government


The Economist: “…Neither health care nor Britain is unique in relying heavily on paper. By preventing face-to-face meetings and closing the offices where bureaucrats shuffle documents, the pandemic has revealed how big a problem that is. Around the world, it has been impossible to get a court hearing or a passport, or to get married, while locked down, since these all still require face-to-face interactions. Registering a business has been slower or impossible. Courts are a mess; elections a worrying prospect.

Governments that have long invested in digitising their systems endured less disruption. Those that have not are discovering how useful it would be if a lot more official business took place online.

Covid-19 has brought many aspects of bureaucratic life to a halt. In England at least 73,400 weddings had to be delayed—not just the ceremony but also the legal part—reckons the Office for National Statistics. In France courts closed in March for all but essential services, and did not reopen until late May. Most countries have extended visas for foreigners trapped by the pandemic, but consular services stopped almost everywhere. In America green-card applications were halted in April; they restarted in June. In Britain appointments to take biometric details of people applying for permanent residency ceased in March and only partly resumed in June.

Some applications cannot be delayed and there the pandemic has revealed the creakiness of even rich countries’ bureaucracies. As Florida was locking down, huge queues formed outside government offices to get the paper forms needed to sign up for unemployment insurance. In theory the state has a digital system, but it was so poorly set up that many could not access it. At the start of the pandemic the website crashed for days. Even several months later people trying to apply had to join a digital queue and wait for hours before being able to log in. In Alabama when government offices in Montgomery, the state capital, reopened, people camped outside, hoping to see an official who might help with their claims.

Where services did exist online, their inadequacies became apparent. Digital unemployment-insurance systems collapsed under a wave of new claimants. At the end of March the website of the INPS, the Italian social-security office, received 300,000 applications for welfare in a single day. The website crashed. Some of those who could access it were shown other people’s data. The authorities blamed not just the volume of applicants but also hackers trying to put in fraudulent claims. Criminals were a problem in America too. In the worst-affected state, Washington, $550m-650m, or one dollar in every eight, was paid out to fraudsters who exploited an outdated system of identity verification (about $300m was recovered)….

the pandemic has revealed that governments need to operate in new ways. This may mean the introduction of proper digital identities, which many countries lack. Track-and-trace systems require governments to know who their citizens are and to be able to contact them reliably. Estonia’s officials can do so easily; Britain’s and America’s cannot. In China in order to board public transport or enter their own apartment buildings people have to show QR codes on their phones to verify that they have not been to a virus hotspot recently….(More)”.

Behavioral Contagion Could Spread the Benefits of a Carbon Tax


Robert H. Frank at the New York Times: “…Why, then, hasn’t the United States adopted a carbon tax? One hurdle is the fear that emissions would fall too slowly in response to a carbon tax, and that more direct measures are needed. Another difficulty is that political leaders have reason to fear voter opposition to taxation of any kind. But there are persuasive rejoinders to both objections.

Regarding the first, critics are correct that a carbon tax alone won’t parry the climate threat. It is also true that as creatures of habit, humans tend to change their behavior only slowly, even in the face of significant financial incentives. But even small changes in behavior are greatly amplified by behavioral contagion — the social scientist’s term for how ideas and behaviors spread from person to person like infectious diseases. And if a carbon tax were to shift the behavior of some individuals now, those changes would quickly spread more widely.

Smoking rates, for example, changed little in the short run even as cigarette taxes rose sharply, but that wasn’t the end of the story. The most powerful predictor of whether someone will smoke is the percentage of her friends who smoke. Most smokers stick with their habit in the face of higher taxes, but a small minority quit, and still others refrain from starting.

Every peer group that includes those people thus contains a smaller proportion of smokers, which influences still others to quit or refrain, and so on. This contagion process explains why the percentage of American adults who smoke has fallen by two-thirds since the mid-1960s.

Behavioral contagion would similarly amplify the effects of a carbon tax. By making solar power cheaper in comparison with fossil fuels, for example, the tax would initially encourage a small number of families to install solar panels on their rooftops. But as with cigarette taxes, it’s the indirect effects that really matter….(More)”.
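
To make that amplification mechanism concrete, here is a minimal simulation sketch in Python. It is not Frank’s model: agents sit on a random friendship network, a small fraction switch behavior each period because of the tax alone, and everyone else becomes more likely to switch as the share of their friends who have switched grows. Every parameter is an illustrative assumption.

```python
import random

random.seed(0)

N = 1000                  # people
K = 8                     # friends per person in a random network (assumed)
DIRECT_TAX_EFFECT = 0.01  # chance per period that the tax alone flips a holdout
PEER_WEIGHT = 0.15        # extra chance proportional to the share of adopting friends
PERIODS = 30

# Build a simple random friendship network.
friends = {i: random.sample([j for j in range(N) if j != i], K) for i in range(N)}
adopted = [False] * N     # e.g. has installed rooftop solar

for t in range(PERIODS):
    for i in range(N):
        if not adopted[i]:
            peer_share = sum(adopted[j] for j in friends[i]) / K
            if random.random() < DIRECT_TAX_EFFECT + PEER_WEIGHT * peer_share:
                adopted[i] = True
    print(f"period {t + 1:2d}: {sum(adopted) / N:.1%} have adopted")
```

Setting PEER_WEIGHT to zero in this sketch leaves only the slow direct effect of the tax; with it switched on, the same small nudge compounds period after period, which is the amplification the article describes.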

The Broken Algorithm That Poisoned American Transportation


Aaron Gordon at Vice: “…The Louisville highway project is hardly the first time travel demand models have missed the mark. Despite their being a legally required portion of any transportation infrastructure project that gets federal dollars, it is one of urban planning’s worst-kept secrets that these models are error-prone at best and fundamentally flawed at worst.

Recently, I asked Renn how important those initial, rosy traffic forecasts of double-digit growth were to the boondoggle actually getting built.

“I think it was very important,” Renn said. “Because I don’t believe they could have gotten approval to build the project if they had not had traffic forecasts that said traffic across the river is going to increase substantially. If there isn’t going to be an increase in traffic, how do you justify building two bridges?”

Travel demand models come in different shapes and sizes. They can cover entire metro regions spanning state lines or tackle a small stretch of a suburban roadway. And they have gotten more complicated over time. But they are rooted in what’s called the Four Step process, a rough approximation of how humans make decisions about getting from A to B. At the end, the model spits out numbers estimating how many trips there will be along certain routes.

As befits its name, the model goes through four steps in order to arrive at that number. First, it generates a kind of algorithmic map based on expected land use patterns (businesses will generate more trips than homes) and socio-economic factors (for example, high rates of employment will generate more trips than lower ones). Then it will estimate where people will generally be coming from and going to. The third step is to guess how they will get there, and the fourth is to then plot their actual routes, based mostly on travel time. The end result is an estimate of how many trips there will be in the project area and how long it will take to get around. Engineers and planners will then add a new highway, transit line, bridge, or other travel infrastructure to the model and see how things change. Or they will change the numbers in the first step to account for expected population or employment growth into the future. Often, these numbers are then used by policymakers to justify a given project, whether it’s a highway expansion or a light rail line…(More)”.
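
A deliberately toy sketch of those four steps might look like the following Python. Every zone, trip rate, travel time, and utility parameter here is invented for illustration; real models use calibrated generation rates, gravity or destination-choice parameters, nested logit mode choice, and equilibrium network assignment.

```python
import math

zones = ["downtown", "suburb_A", "suburb_B"]
households = {"downtown": 2000, "suburb_A": 5000, "suburb_B": 4000}
jobs = {"downtown": 9000, "suburb_A": 1500, "suburb_B": 1000}

# Travel times in minutes between zones (symmetric, illustrative).
time_min = {
    ("downtown", "downtown"): 10, ("downtown", "suburb_A"): 25,
    ("downtown", "suburb_B"): 30, ("suburb_A", "suburb_A"): 8,
    ("suburb_A", "suburb_B"): 20, ("suburb_B", "suburb_B"): 8,
}

def tt(o, d):
    return time_min.get((o, d), time_min.get((d, o)))

# Step 1: trip generation -- homes produce trips, jobs attract them (assumed rates).
productions = {z: 1.2 * households[z] for z in zones}
attractions = {z: 0.8 * jobs[z] for z in zones}

# Step 2: trip distribution -- a simple gravity model: attractions / travel time squared.
trips = {}
for o in zones:
    weight = {d: attractions[d] / tt(o, d) ** 2 for d in zones}
    for d in zones:
        trips[(o, d)] = productions[o] * weight[d] / sum(weight.values())

# Step 3: mode choice -- a binary logit split between car and transit (assumed utilities).
def car_share(o, d):
    u_car = -0.05 * tt(o, d)
    u_transit = -0.05 * (1.5 * tt(o, d)) - 0.5  # transit assumed slower, plus a penalty
    return math.exp(u_car) / (math.exp(u_car) + math.exp(u_transit))

# Step 4: route assignment -- trivial here: each origin-destination pair has one route.
for (o, d), t in sorted(trips.items()):
    print(f"{o:>9} -> {d:<9} {t:7.0f} trips/day, {t * car_share(o, d):7.0f} by car")
```

Planners then rerun the same loop with a new link added to the travel-time table, or with larger household and job totals, to produce the kinds of forecasts the article describes.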

Too many AI researchers think real-world problems are not relevant


Essay by Hannah Kerner: “Any researcher who’s focused on applying machine learning to real-world problems has likely received a response like this one: “The authors present a solution for an original and highly motivating problem, but it is an application and the significance seems limited for the machine-learning community.”

These words are straight from a review I received for a paper I submitted to the NeurIPS (Neural Information Processing Systems) conference, a top venue for machine-learning research. I’ve seen the refrain time and again in reviews of papers where my coauthors and I presented a method motivated by an application, and I’ve heard similar stories from countless others.

This makes me wonder: If the community feels that aiming to solve high-impact real-world problems with machine learning is of limited significance, then what are we trying to achieve?

The goal of artificial intelligence (pdf) is to push forward the frontier of machine intelligence. In the field of machine learning, a novel development usually means a new algorithm or procedure, or—in the case of deep learning—a new network architecture. As others have pointed out, this hyperfocus on novel methods leads to a scourge of papers that report marginal or incremental improvements on benchmark data sets and exhibit flawed scholarship (pdf) as researchers race to top the leaderboard.

Meanwhile, many papers that describe new applications present both novel concepts and high-impact results. But even a hint of the word “application” seems to spoil the paper for reviewers. As a result, such research is marginalized at major conferences. Their authors’ only real hope is to have their papers accepted in workshops, which rarely get the same attention from the community.

This is a problem because machine learning holds great promise for advancing health, agriculture, scientific discovery, and more. The first image of a black hole was produced using machine learning. The most accurate predictions of protein structures, an important step for drug discovery, are made using machine learning. If others in the field had prioritized real-world applications, what other groundbreaking discoveries would we have made by now?

This is not a new revelation. To quote a classic paper titled “Machine Learning that Matters” (pdf), by NASA computer scientist Kiri Wagstaff: “Much of current machine learning research has lost its connection to problems of import to the larger world of science and society.” The same year that Wagstaff published her paper, a convolutional neural network called AlexNet won a high-profile competition for image recognition centered on the popular ImageNet data set, leading to an explosion of interest in deep learning. Unfortunately, the disconnect she described appears to have grown even worse since then….(More)”.

AI technologies — like police facial recognition — discriminate against people of colour


Jane Bailey et al at The Conversation: “…In his game-changing 1993 book, The Panoptic Sort, scholar Oscar Gandy warned that “complex technology [that] involves the collection, processing and sharing of information about individuals and groups that is generated through their daily lives … is used to coordinate and control their access to the goods and services that define life in the modern capitalist economy.” Law enforcement uses it to pluck suspects from the general public, and private organizations use it to determine whether we have access to things like banking and employment.

Gandy prophetically warned that, if left unchecked, this form of “cybernetic triage” would exponentially disadvantage members of equality-seeking communities — for example, groups that are racialized or socio-economically disadvantaged — both in terms of what would be allocated to them and how they might come to understand themselves.

Some 25 years later, we’re now living with the panoptic sort on steroids. And examples of its negative effects on equality-seeking communities abound, such as the false identification of Williams.

Pre-existing bias

This sorting using algorithms infiltrates the most fundamental aspects of everyday life, occasioning both direct and structural violence in its wake.

The direct violence experienced by Williams is immediately evident in the events surrounding his arrest and detention, and the individual harms he experienced are obvious and can be traced to the actions of police who chose to rely on the technology’s “match” to make an arrest. More insidious is the structural violence perpetrated through facial recognition technology and other digital technologies that rate, match, categorize and sort individuals in ways that magnify pre-existing discriminatory patterns.

Structural violence harms are less obvious and less direct, and cause injury to equality-seeking groups through the systematic denial of access to power, resources and opportunity. Simultaneously, structural violence increases direct risk and harm to individual members of those groups.

Predictive policing uses algorithmic processing of historical data to predict when and where new crimes are likely to occur, assigns police resources accordingly and embeds enhanced police surveillance into communities, usually in lower-income and racialized neighbourhoods. This increases the chances that any criminal activity — including less serious criminal activity that might otherwise prompt no police response — will be detected and punished, ultimately limiting the life chances of the people who live within that environment….(More)”.
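
A stylised sketch shows how that loop can harden a small skew in the historical data into a large one. It is in the spirit of published analyses of feedback loops in predictive policing, not any real vendor’s system, and every number is invented.

```python
import random

random.seed(1)

TRUE_DAILY_INCIDENTS = 5       # identical underlying rate in both neighbourhoods
DETECTION_PROBABILITY = 0.9    # chance a patrolled incident enters the database
recorded = {"A": 55, "B": 45}  # neighbourhood A starts marginally over-represented

for day in range(365):
    # The "prediction": send the patrol where the historical data shows the most crime.
    patrolled = max(recorded, key=recorded.get)
    # Incidents happen in both neighbourhoods, but only patrolled ones are recorded.
    observed = sum(random.random() < DETECTION_PROBABILITY
                   for _ in range(TRUE_DAILY_INCIDENTS))
    recorded[patrolled] += observed

share_A = recorded["A"] / sum(recorded.values())
print(f"After a year, neighbourhood A accounts for {share_A:.0%} of recorded crime,")
print("even though the underlying incident rates were identical.")
```

Because the database only ever grows where the patrols already are, the system’s predictions look increasingly self-confirming even though the underlying behaviour in the two neighbourhoods never differed.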

Algorithmic Colonisation of Africa


Abeba Birhane at The Elephant: “The African equivalents of Silicon Valley’s tech start-ups can be found in every possible sphere of life around all corners of the continent—in “Sheba Valley” in Addis Abeba, “Yabacon Valley” in Lagos, and “Silicon Savannah” in Nairobi, to name a few—all pursuing “cutting-edge innovations” in sectors like banking, finance, healthcare, and education. They are headed by technologists and those in finance from both within and outside the continent who seemingly want to “solve” society’s problems, using data and AI to provide quick “solutions”. As a result, the attempt to “solve” social problems with technology is exactly where problems arise. Complex cultural, moral, and political problems that are inherently embedded in history and context are reduced to problems that can be measured and quantified—matters that can be “fixed” with the latest algorithm.

As dynamic and interactive human activities and processes are automated, they are inherently simplified to the engineers’ and tech corporations’ subjective notions of what they mean. The reduction of complex social problems to a matter that can be “solved” by technology also treats people as passive objects for manipulation. Humans, however, far from being passive objects, are active meaning-seekers embedded in dynamic social, cultural, and historical backgrounds.

The discourse around “data mining”, “abundance of data”, and “data-rich continent” shows the extent to which the individual behind each data point is disregarded. This muting of the individual—a person with fears, emotions, dreams, and hopes—is symptomatic of how little attention is given to matters such as people’s well-being and consent, which should be the primary concerns if the goal is indeed to “help” those in need. Furthermore, this discourse of “mining” people for data is reminiscent of the coloniser’s attitude that declares humans as raw material free for the taking. Data is necessarily always about something and never about an abstract entity.

The collection, analysis, and manipulation of data potentially entails monitoring, tracking, and surveilling people. This necessarily impacts people directly or indirectly whether it manifests as change in their insurance premiums or refusal of services. The erasure of the person behind each data point makes it easy to “manipulate behavior” or “nudge” users, often towards profitable outcomes for companies. Considerations around the wellbeing and welfare of the individual user, the long-term social impacts, and the unintended consequences of these systems on society’s most vulnerable are pushed aside, if they enter the equation at all. For companies that develop and deploy AI, at the top of the agenda is the collection of more data to develop profitable AI systems rather than the welfare of individual people or communities. This is most evident in the FinTech sector, one of the prominent digital markets in Africa. People’s digital footprints, from their interactions with others to how much they spend on their mobile top ups, are continually surveyed and monitored to form data for making loan assessments. Smartphone data from browsing history, likes, and locations is recorded forming the basis for a borrower’s creditworthiness.
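
A schematic sketch of the kind of scoring pipeline being criticised here, with every feature, weight, and threshold invented for illustration, shows how ordinary phone behaviour can be converted into an opaque lending decision:

```python
import math

# Toy logistic scoring of behavioural phone data; weights and threshold are illustrative.
WEIGHTS = {
    "monthly_airtime_topup_usd": 0.04,
    "avg_daily_app_sessions": 0.01,
    "contacts_count": 0.002,
    "nights_away_from_home": -0.05,   # location traces held against the borrower
    "gambling_sites_visited": -0.30,  # browsing history used as a proxy for "character"
}
BIAS = -1.0
APPROVAL_THRESHOLD = 0.5

def creditworthiness_score(applicant: dict) -> float:
    """Map surveilled behaviour to a score in (0, 1) with a logistic function."""
    z = BIAS + sum(w * applicant.get(feature, 0) for feature, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

applicant = {
    "monthly_airtime_topup_usd": 12,
    "avg_daily_app_sessions": 40,
    "contacts_count": 350,
    "nights_away_from_home": 6,
    "gambling_sites_visited": 0,
}
score = creditworthiness_score(applicant)
print(f"score = {score:.2f}:", "loan approved" if score > APPROVAL_THRESHOLD else "loan refused")
```

The applicant never sees the features, the weights, or the threshold, which is precisely the opacity and absence of consent the essay objects to.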

Artificial Intelligence technologies that aid decision-making in the social sphere are, for the most part, developed and implemented by the private sector whose primary aim is to maximise profit. Protecting individual privacy rights and cultivating a fair society is therefore the least of their concerns, especially if such practice gets in the way of “mining” data, building predictive models, and pushing products to customers. As decision-making of social outcomes is handed over to predictive systems developed by profit-driven corporates, not only are we allowing our social concerns to be dictated by corporate incentives, we are also allowing moral questions to be dictated by corporate interest.

“Digital nudges”, behaviour modifications developed to suit commercial interests, are a prime example. As “nudging” mechanisms become the norm for “correcting” individuals’ behaviour, eating habits, or exercise routines, those developing predictive models are bestowed with the power to decide what “correct” is. In the process, individuals that do not fit our stereotypical ideas of a “fit body”, “good health”, and “good eating habits” end up being punished, outcast, and pushed further to the margins. When these models are imported as state-of-the-art technology that will save money and “leapfrog” the continent into development, Western values and ideals are enforced, either deliberately or unintentionally….(More)”.

Politics without Politicians


Nathan Heller at the New Yorker: “Imagine being a citizen of a diverse, wealthy, democratic nation filled with eager leaders. At least once a year—in autumn, say—it is your right and civic duty to go to the polls and vote. Imagine that, in your country, this act is held to be not just an important task but an essential one; the government was designed at every level on the premise of democratic choice. If nobody were to show up to vote on Election Day, the superstructure of the country would fall apart.

So you try to be responsible. You do your best to stay informed. When Election Day arrives, you make the choices that, as far as you can discern, are wisest for your nation. Then the results come with the morning news, and your heart sinks. In one race, the candidate you were most excited about, a reformer who promised to clean up a dysfunctional system, lost to the incumbent, who had an understanding with powerful organizations and ultra-wealthy donors. Another politician, whom you voted into office last time, has failed to deliver on her promises, instead making decisions in lockstep with her party and against the polls. She was reëlected, apparently with her party’s help. There is a notion, in your country, that the democratic structure guarantees a government by the people. And yet, when the votes are tallied, you feel that the process is set up to favor interests other than the people’s own.

What corrective routes are open? One might wish for pure direct democracy—no body of elected representatives, each citizen voting on every significant decision about policies, laws, and acts abroad. But this seems like a nightmare of majoritarian tyranny and procedural madness: How is anyone supposed to haggle about specifics and go through the dialogue that shapes constrained, durable laws? Another option is to focus on influencing the organizations and business interests that seem to shape political outcomes. But that approach, with its lobbyists making backroom deals, goes against the promise of democracy. Campaign-finance reform might clean up abuses. But it would do nothing to insure that a politician who ostensibly represents you will be receptive to hearing and acting on your thoughts….(More)”.