AI planners in Minecraft could help machines design better cities


Article by Will Douglas Heaven: “A dozen or so steep-roofed buildings cling to the edges of an open-pit mine. High above them, on top of an enormous rock arch, sits an inaccessible house. Elsewhere, a railway on stilts circles a group of multicolored tower blocks. Ornate pagodas decorate a large paved plaza. And a lone windmill turns on an island, surrounded by square pigs. This is Minecraft city-building, AI style.

Minecraft has long been a canvas for wild invention. Fans have used the hit block-building game to create replicas of everything from downtown Chicago and King’s Landing to working CPUs. In the decade since its first release, anything that can be built has been.

Since 2018, Minecraft has also been the setting for a creative challenge that stretches the abilities of machines. The annual Generative Design in Minecraft (GDMC) competition asks participants to build an artificial intelligence that can generate realistic towns or villages in previously unseen locations. The contest is just for fun, for now, but the techniques explored by the various AI competitors are precursors of ones that real-world city planners could use….(More)”.

Global citizen deliberation on genome editing


Essay by John S. Dryzek et al at Science: “Genome editing technologies provide vast possibilities for societal benefit, but also substantial risks and ethical challenges. Governance and regulation of such technologies have not kept pace in a systematic or internationally consistent manner, leaving a complex, uneven, and incomplete web of national and international regulation (1). How countries choose to regulate these emergent technologies matters not just locally, but globally, because the implications of technological developments do not stop at national boundaries. Practices deemed unacceptable in one country may find a more permissive home in another: not necessarily through national policy choice, but owing to a persistent national legal and regulatory void that enables “ethics dumping” (2)—for example, if those wanting to edit genes to “perfect” humans seek countries with little governance capacity. Just as human rights are generally recognized as a matter of global concern, so too should technologies that may impinge on the question of what it means to be human. Here we show how, as the global governance vacuum is filled, deliberation by a global citizens’ assembly should play a role, for legitimate and effective governance….(More)”.

Politicians should take citizens’ assemblies seriously


The Economist: “In 403 BC Athens decided to overhaul its institutions. A disastrous war with Sparta had shown that direct democracy, whereby adult male citizens voted on laws, was not enough to stop eloquent demagogues from getting what they wanted, and indeed from subverting democracy altogether. So a new body, chosen by lot, was set up to scrutinise the decisions of voters. It was called the nomothetai or “layers down of law” and it would be given the time to ponder difficult decisions, unmolested by silver-tongued orators and the schemes of ambitious politicians.

This ancient idea is back in vogue, and not before time. Around the world “citizens’ assemblies” and other deliberative groups are being created to consider questions that politicians have struggled to answer (see article). Over weeks or months, 100 or so citizens—picked at random, but with a view to creating a body reflective of the population as a whole in terms of gender, age, income and education—meet to discuss a divisive topic in a considered, careful way. Often they are paid for their time, to ensure that it is not just political wonks who sign up. At the end they present their recommendations to politicians. Before covid-19 these citizens met in conference centres in large cities where, by mingling over lunch-breaks, they discovered that the monsters who disagree with them turned out to be human after all. Now, as a result of the pandemic, they mostly gather on Zoom.

Citizens’ assemblies are often promoted as a way to reverse the decline in trust in democracy, which has been precipitous in most of the developed world over the past decade or so. Last year the majority of people polled in America, Britain, France and Australia—along with many other rich countries—felt that, regardless of which party wins an election, nothing really changes. Politicians, a common complaint runs, have no understanding of, or interest in, the lives and concerns of ordinary people.

Citizens’ assemblies can help remedy that. They are not a substitute for the everyday business of legislating, but a way to break the deadlock when politicians have tried to deal with important issues and failed. Ordinary people, it turns out, are quite reasonable. A large four-day deliberative experiment in America softened Republicans’ views on immigration; Democrats became less eager to raise the minimum wage. Even more strikingly, two 18-month-long citizens’ assemblies in Ireland showed that the country, despite its deep Catholic roots, was far more socially liberal than politicians had realised. Assemblies overwhelmingly recommended the legalisation of both same-sex marriage and abortion….(More)”.

How Tech Companies Can Advance Data Science for Social Good


Essay by Nick Martin: “As the world struggles to achieve the UN’s Sustainable Development Goals (SDGs), the need for reliable data to track our progress is more important than ever. Government, civil society, and private sector organizations all play a role in producing, sharing, and using this data, but their information-gathering and -analysis efforts have been able to shed light on only 68 percent of the SDG indicators so far, according to a 2019 UN study.

To help fill the gap, the data science for social good (DSSG) movement has for years been making datasets about important social issues—such as health care infrastructure, school enrollment, air quality, and business registrations—available to trusted organizations or the public. Large tech companies such as Facebook, Google, Amazon, and others have recently begun to embrace the DSSG movement. Spurred on by advances in the field, the Development Data Partnership, the World Economic Forum’s 2030Vision consortium, and Data Collaboratives, they’re offering information about social media users’ mobility during COVID-19, cloud computing infrastructure to help nonprofits analyze large datasets, and other important tools and services.

But sharing data resources doesn’t mean they’ll be used effectively, if at all, to advance social impact. High-impact results require recipients of data assistance to inhabit a robust, holistic data ecosystem that includes assets like policies for safely handling data and the skills to analyze it. As tech firms become increasingly involved with using data and data science to help achieve the SDGs, it’s important that they understand the possibilities and limitations of the nonprofits and other civil society organizations they’re working with. Without a firm grasp on the data ecosystems of their partners, all the technical wizardry in the world may be for naught.

Companies must ask questions such as: What incentives or disincentives are in place for nonprofits to experiment with data science in their work? What gaps remain between what nonprofits or data scientists need and the resources funders provide? What skills must be developed? To help find answers, TechChange, an organization dedicated to using technology for social good, partnered with Project17, Facebook’s partnerships-led initiative to accelerate progress on the SDGs. Over the past six months, the team led interviews with top figures in the DSSG community from industry, academia, and the public sector. The 14 experts shared numerous insights into using data and data science to advance social good and the SDGs. Four takeaways emerged from our conversations and research…(More)”.

Ethical Challenges and Opportunities Associated With the Ability to Perform Medical Screening From Interactions With Search Engines


Viewpoint by Elad Yom-Tov and Yuval Cherlow: “Recent research has shown the efficacy of screening for serious medical conditions from data collected while people interact with online services. In particular, queries to search engines and the interactions with them were shown to be advantageous for screening a range of conditions including diabetes, several forms of cancer, eating disorders, and depression. These screening abilities offer unique advantages in that they can serve broad strata of society, including people in underserved populations and in countries with poor access to medical services. However, these advantages need to be balanced against the potential harm to privacy, autonomy, and nonmaleficence, which are recognized as the cornerstones of ethical medical care. Here, we discuss these opportunities and challenges, both when collecting data to develop online screening services and when deploying them. We offer several solutions that balance the advantages of these services with the ethical challenges they pose….(More)”.

AI ethics groups are repeating one of society’s classic mistakes


Article by Abhishek Gupta and Victoria Heath: “International organizations and corporations are racing to develop global guidelines for the ethical use of artificial intelligence. Declarations, manifestos, and recommendations are flooding the internet. But these efforts will be futile if they fail to account for the cultural and regional contexts in which AI operates.

AI systems have repeatedly been shown to cause problems that disproportionately affect marginalized groups while benefiting a privileged few. The global AI ethics efforts under way today—of which there are dozens—aim to help everyone benefit from this technology, and to prevent it from causing harm. Generally speaking, they do this by creating guidelines and principles for developers, funders, and regulators to follow. They might, for example, recommend routine internal audits or require protections for users’ personally identifiable information.

We believe these groups are well-intentioned and are doing worthwhile work. The AI community should, indeed, agree on a set of international definitions and concepts for ethical AI. But without more geographic representation, they’ll produce a global vision for AI ethics that reflects the perspectives of people in only a few regions of the world, particularly North America and northwestern Europe.

This work is not easy or straightforward. “Fairness,” “privacy,” and “bias” mean different things (pdf) in different places. People also have disparate expectations of these concepts depending on their own political, social, and economic realities. The challenges and risks posed by AI also differ depending on one’s locale.

If organizations working on global AI ethics fail to acknowledge this, they risk developing standards that are, at best, meaningless and ineffective across all the world’s regions. At worst, these flawed standards will lead to more AI systems and tools that perpetuate existing biases and are insensitive to local cultures….(More)”.

How Billionaires Can Fund Moonshot Efforts to Save the World


Essay by Ivan Amato: “For the past year, since the 50th anniversary of the original moon landing and amid the harsh entrance and unfolding of a pandemic that has affected the entire globe’s citizenry, I have been running a philanthropy-supported publishing experiment on Medium.com titled the Moonshot Catalog. The goal has been to inspire the nation’s more than 2,000 ultrawealthy households to mobilize a smidgen more — even 1 percent more — of their collective wealth to help solve big problems that threaten our future.

A single percent may seem a small fraction to devote. But when you consider that the richest families have amassed a net worth of more than $4 trillion, that 1 percent tops $40 billion — enough to make a real difference in any number of ways. This truth is only magnified now as we approach a more honest, reality-based acknowledgment of the systemic racial and social inequities and injustices that have shunted so much wealth, privilege, and security into such a rarefied micropercentage of the world’s 7.8 billion people.

Such was the simple conceit underlying the Moonshot Catalog, which just came to a close: The deepest pocketed among us would up their philanthropy game if they were more aware of hugely consequential projects they could help usher to the finish line by donating a tad more of the wealth they control….

The first moonshot articles had titles including “Feeding 2050’s Ten Billion People,” “Taming the Diseases of Aging,” and the now tragically premonitional “Ending Pandemic Disease.” Subsequent articles featured achievable solutions for our carbon-emission crisis, including ones replacing current cement and cooling technologies, underappreciated perpetrators of climate change that are responsible for some 16 percent of the world’s carbon emissions; next-generation battery technology, without which much of the potential benefit of renewable energy will remain untapped; advanced nuclear-power plants safe enough to help enable a carbon-neutral economy; and hastening the arrival of fusion energy….

Common to these projects, and others such as the UN’s Sustainable Development Goals, is the huge and difficult commitment each one demands. Many require a unique, creative, and sustained synthesis of science, engineering, entrepreneurship, policy and financial support, and international cooperation.

But there is no magical thinking in the Catalog. The projects are demonstrably doable. What’s more, humanity already has successfully taken on comparably ambitious challenges. Think of the eradication of polio, the development of birth-control technologies, the mitigation of acid rain and the ozone hole, and the great, albeit imperfect, public-health win of municipal water treatment. Oh, and the 1969 moonshot….(More)”.

Questioning the Quantified Life


Special issue of the Hedgehog Review: “Numbers may be our greatest tool, but do we use them wisely?…

At a time when distraction and mendacity degrade public discourse, the heartbreaking toll of the current pandemic should at least remind us that quantification—data, numbers, statistics—is vitally important to policy, governance, and decision-making more broadly.

Confounding as they may be to some of us, numbers are arguably humankind’s most useful technology—our greatest discovery, or possibly our greatest invention. But the current global crisis should also remind us of something equally important: Good numbers, like good science, can only do so much to inform wise decisions about our personal and collective good. They cannot, in any true sense, make those decisions for us. “Let the numbers speak for themselves” is the rhetoric of the naïf or the con artist, and should long ago have been consigned to the dustbin of pernicious hokum. Yet how seldom in these Big Data days, in our Big Data daze, does it go unchallenged.

Or—to consider the flip side of the current bedazzlement—how often it goes challenged in exactly the wrong way, in a way that declares all facts, all data, all science to be nothing but relative, your facts versus our facts, “alternative facts.” That is the way of sophistry, where cynicism rules and might alone makes right.

Excessive or misplaced faith in the tools that should assist us in arriving at truth—a faith that can engender dangerously unreasoning or cynical reactions—is the theme of this issue. In six essays, we explore the ways the quantitative imperative has insinuated itself into various corners of our culture and society, asserting primacy if not absolute authority in matters where it should tread modestly. In the name of numbers that measure everything from GDP to personal well-being, technocrats and other masters of the postmodern economy have engineered an increasingly soulless, instrumentalizing culture whose denizens either submit to its dictates or flail darkly and destructively against them.

The origins of this nightmare version of modernity, a version that grows increasingly real, date from at least the first stirrings of modern science in the fifteenth and sixteenth centuries, but its distinctive institutional features emerged most clearly in the early part of the last century, when progressive thinkers and leaders in politics, business, and other walks of life sought to harness humankind’s physical and mental energies to the demands of an increasingly technocratic, consumerist society….(More)”.

Statistics, lies and the virus: lessons from a pandemic


Tim Harford at the Financial Times: “Will this year be 1954 all over again? Forgive me, I have become obsessed with 1954, not because it offers another example of a pandemic (that was 1957) or an economic disaster (there was a mild US downturn in 1953), but for more parochial reasons. Nineteen fifty-four saw the appearance of two contrasting visions for the world of statistics — visions that have shaped our politics, our media and our health. This year confronts us with a similar choice.

The first of these visions was presented in How to Lie with Statistics, a book by a US journalist named Darrell Huff. Brisk, intelligent and witty, it is a little marvel of numerical communication. The book received rave reviews at the time, has been praised by many statisticians over the years and is said to be the best-selling work on the subject ever published. It is also an exercise in scorn: read it and you may be disinclined to believe a number-based claim ever again….

But they can — and back in 1954, the alternative perspective was embodied in the publication of an academic paper by the British epidemiologists Richard Doll and Austin Bradford Hill. They marshalled some of the first compelling evidence that smoking cigarettes dramatically increases the risk of lung cancer. The data they assembled persuaded both men to quit smoking and helped save tens of millions of lives by prompting others to do likewise. This was no statistical trickery, but a contribution to public health that is almost impossible to exaggerate…

As described in books such as Merchants of Doubt by Erik Conway and Naomi Oreskes, this industry perfected the tactics of spreading uncertainty: calling for more research, emphasising doubt and the need to avoid drastic steps, highlighting disagreements between experts and funding alternative lines of inquiry. The same tactics, and sometimes even the same personnel, were later deployed to cast doubt on climate science. These tactics are powerful in part because they echo the ideals of science.

It is a short step from the Royal Society’s motto, “nullius in verba” (take nobody’s word for it), to the corrosive nihilism of “nobody knows anything”. So will 2020 be another 1954? From the point of view of statistics, we seem to be standing at another fork in the road.

The disinformation is still out there, as the public understanding of Covid-19 has been muddied by conspiracy theorists, trolls and government spin doctors. Yet the information is out there too. The value of gathering and rigorously analysing data has rarely been more evident. Faced with a complete mystery at the start of the year, statisticians, scientists and epidemiologists have been working miracles. I hope that we choose the right fork, because the pandemic has lessons to teach us about statistics — and vice versa — if we are willing to learn…(More)”.

The open source movement takes on climate data


Article by Heather Clancy: “…many companies are moving to disclose “climate risk,” although far fewer are moving to actually minimize it. And as those tasked with preparing those reports can attest, the process of gathering the data for them is frustrating and complex, especially as the level of detail desired and required by investors becomes deeper.

That pain point was the inspiration for a new climate data project launched this week that will be spearheaded by the Linux Foundation, the nonprofit host organization for thousands of the most influential open source software and data initiatives in the world, such as GitHub. The foundation is central to the evolution of the Linux software that runs in the back offices of most major financial services firms. 

There are four powerful founding members for the new group, the LF Climate Finance Foundation (LFCF): Insurance and asset management company Allianz, cloud software giants Amazon and Microsoft, and data intelligence powerhouse S&P Global. The foundation’s “planning team” includes World Wide Fund for Nature (WWF), Ceres and the Sustainability Accounting Standards Board (SASB).

The group’s intention is to collaborate on an open source project called the OS-Climate platform, which will include economic and physical risk scenarios that investors, regulators, companies, financial analysts and others can use for their analysis. 

The idea is to create a “public service utility” where certain types of climate data can be accessed easily, then combined with other, more proprietary information that someone might be using for risk analysis, according to Truman Semans, CEO of OS-Climate, who was instrumental in getting the effort off the ground. “There are a whole lot of initiatives out there that address pieces of the puzzle, but no unified platform to allow those to interoperate,” he told me.

Why does this matter? It helps to understand the history of open source software, which was once a thing that many powerful software companies, notably Microsoft, abhorred because they were worried about the financial hit on their intellectual property. Flash forward to today and the open source software movement, “staffed” by literally millions of software developers, is credited with accelerating the creation of common system-level elements so that companies can focus their own resources on solving problems directly related to their business.

In short, this budding effort could make the right data available more quickly, so that businesses — particularly financial institutions — can make better informed decisions.

Or, as Microsoft’s chief intellectual property counsel, Jennifer Yokoyama, observed in the announcement press release: “Addressing climate issues in a meaningful way requires people and organizations to have access to data to better understand the impact of their actions. Opening up and sharing our contribution of significant and relevant sustainability data through the LF Climate Finance Foundation will help advance the financial modeling and understanding of climate change impact — an important step in affecting political change. We’re excited to collaborate with the other founding members and hope additional organizations will join.”…(More)”