What is machine learning?


Chris Meserole at Brookings: “In the summer of 1955, while planning a now famous workshop at Dartmouth College, John McCarthy coined the term “artificial intelligence” to describe a new field of computer science. Rather than writing programs that tell a computer how to carry out a specific task, McCarthy pledged that he and his colleagues would instead pursue algorithms that could teach themselves how to do so. The goal was to create computers that could observe the world and then make decisions based on those observations—to demonstrate, that is, an innate intelligence.

The question was how to achieve that goal. Early efforts focused primarily on what’s known as symbolic AI, which tried to teach computers how to reason abstractly. But today the dominant approach by far is machine learning, which relies on statistics instead. Although the approach dates back to the 1950s—one of the attendees at Dartmouth, Arthur Samuel, was the first to describe his work as “machine learning”—it wasn’t until the past few decades that computers had enough storage and processing power for the approach to work well. The rise of cloud computing and customized chips has powered breakthrough after breakthrough, with research centers like OpenAI or DeepMind announcing stunning new advances seemingly every week.

The extraordinary success of machine learning has made it the default method of choice for AI researchers and experts. Indeed, machine learning is now so popular that it has effectively become synonymous with artificial intelligence itself. As a result, it’s not possible to tease out the implications of AI without understanding how machine learning works—as well as how it doesn’t….(More)”.
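
To make Meserole's distinction concrete, here is a minimal sketch in Python (using scikit-learn) of the approach he describes: rather than hand-coding a rule, the program estimates one from observations. The scenario and the numbers are invented purely for illustration.

```python
# Minimal illustration of "learning from data" rather than writing explicit rules.
# The scenario and numbers below are invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Each observation: [hours of daylight, inches of rain]; label: 1 if people went to the park.
observations = [[14, 0.0], [13, 0.1], [9, 1.2], [8, 0.9], [12, 0.0], [7, 2.0]]
went_to_park = [1, 1, 0, 0, 1, 0]

# Instead of an explicit rule such as "if rain > 0.5 inches, predict no",
# the model estimates a decision boundary from the statistics of the examples.
model = LogisticRegression()
model.fit(observations, went_to_park)

print(model.predict([[11, 0.2]]))  # prediction for a new, unseen day
```

The same pattern, fitting a statistical model to examples and then applying it to new cases, is what scales up, with far more data and computation, into the systems the excerpt mentions.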

Nervous States


Book by William Davies: “Why do we no longer trust experts, facts and statistics? Why has politics become so fractious and warlike? What caused the populist political upheavals of recent years? How can the history of ideas help us understand our present?

In this bold and far-reaching exploration of our new political landscape, William Davies reveals how feelings have come to reshape our world. Drawing deep on history, philosophy, psychology and economics, he shows how some of the fundamental assumptions that defined the modern world have dissolved. With advances in science and medicine, the division between mind and body is no longer so clear-cut. The spread of digital and military technology has left us not quite at war nor exactly at peace. In the murky new space between mind and body, between war and peace, lie nervous states: with all of us relying increasingly on feeling rather than fact.

In a book of profound insight and astonishing breadth, William Davies reveals the origins of this new political reality. Nervous States is a compelling and essential guide to the turbulent times we are living through….(More)”.

Translating science into business innovation: The case of open food and nutrition data hackathons


Paper by Christopher Tucci, Gianluigi Viscusi and Heidi Gautschi: “In this article, we explore the use of hackathons and open data in corporations’ open innovation portfolios, addressing a new way for companies to tap into the creativity and innovation of early-stage startup culture, in this case applied to the food and nutrition sector. We study the first Open Food Data Hackdays, held on 10-11 February 2017 in Lausanne and Zurich. The aim of the overall project, of which the Hackdays event was a part, was to use open food and nutrition data as a driver for business innovation. We see hackathons as a new tool in the innovation manager’s toolkit, a kind of live crowdsourcing exercise that goes beyond traditional ideation and develops a variety of prototypes and new ideas for business innovation. Companies then have the option of working with entrepreneurs and taking some of the ideas forward….(More)”.

What is the true value of data? New series on the return on investment of data interventions


Case studies prepared by Jessica Espey and Hayden Dahmm for SDSN TReNDS: “But what is the ROI of investing in data for altruistic means–e.g., for sustainable development?

Today, we are launching a series of case studies to answer this question in collaboration with the Global Partnership on Sustainable Development Data. The ten examples we will profile range from earth observation data gathered via satellites to investments in national statistics systems, with costs from just a few hundred thousand dollars (US) per year to millions over decades.

The series includes efforts to revamp existing statistical systems. It also supports the growing movement to invest in less traditional approaches to data collection and analysis beyond statistical systems–such as through private sector data sources or emerging technologies enabled by the growth of the information and communications technology (ICT) sector.

Some highlights from the first five case studies–available now:

An SMS-based system called mTRAC, implemented in Uganda, has supported significant improvements in the country’s health system–including halving of response time to disease outbreaks and reducing medication stock-outs, the latter of which resulted in fewer malaria-related deaths.

NASA’s and the U.S. Geological Survey’s Landsat program–satellites that provide imagery known as earth observation data–is enabling discoveries and interventions across the science and health sectors, and provided an estimated worldwide economic benefit as high as US$2.19 billion as of 2011.

BudgIT, a civil society organization making budget data in Nigeria more accessible to citizens through machine-readable PDFs and complementary online/offline campaigns, is empowering citizens to partake in the federal budget process.

International nonprofit BRAC is ensuring mothers and infants in the slums of Bangladesh are not left behind through a data-informed intervention combining social mapping, local censuses, and real-time data sharing. BRAC estimates that from 2008 to 2017, 1,087 maternal deaths were averted out of the 2,476 deaths that would have been expected based on national statistics.

Atlantic City police are developing new approaches to their patrolling, community engagement, and other activities through risk modeling based on crime and other data, resulting in reductions in homicides and shooting injuries (26 percent) and robberies (37 percent) in just the first year of implementation….(More)”.
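
For readers new to the framing, the return on investment the series asks about is simply net benefit relative to cost. Here is a minimal sketch in Python, with figures that are purely illustrative and not drawn from any of the case studies above:

```python
def roi(total_benefit, total_cost):
    """Return on investment: net benefit per dollar spent."""
    return (total_benefit - total_cost) / total_cost

# Purely illustrative figures: a data program costing $500,000 a year for a
# decade, credited with an estimated $40 million in economic and social benefit.
cost = 500_000 * 10
benefit = 40_000_000
print(f"ROI: {roi(benefit, cost):.1f}x the original investment")
```

The harder part, which these case studies grapple with, is estimating the benefit side of that ratio when returns are social rather than purely financial.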

Tech Was Supposed to Be Society’s Great Equalizer. What Happened?


Derek Thompson at The Atlantic: “Historians may look back at the early 21st century as the Gilded Age 2.0. Not since the late 1800s has the U.S. been so defined by the triad of rapid technological change, gaping economic inequality, and sudden social upheaval.

Ironically, the digital revolution was supposed to be an equalizer. The early boosters of the internet sprang from the counterculture of the 1960s and the New Communalist movement. Some of them, like Stewart Brand, hoped to spread the sensibilities of hippie communes throughout the wilderness of the web. Others saw the internet more broadly as an opportunity to build a society that amended the failures of the physical world.

But in the last few years, the most successful tech companies have built a new economy that often accentuates the worst parts of the old world they were bent on replacing. Facebook’s platform amplifies preexisting biases—both of ideology and race—and political propaganda. Amazon’s dominion over online retail has allowed it to squash competition, not unlike the railroad monopolies of the 19th century. And Apple, in designing the most profitable product in modern history, has also designed another instrument of harmful behavioral addiction….

The only way to make technology that helps a broad array of people is to consult a broad array of people to make that technology. But the computer industry has a multi-decade history of gender discrimination. It is, perhaps, the industry’s original sin. After World War II, Great Britain was the world’s leader in computing. Its efforts to decipher Nazi codes led to the creation of the world’s first programmable digital computer. But within 30 years, the British advantage in computing and software had withered, in part due to explicit efforts to push women out of the computer-science workforce, according to Marie Hicks’ history, Programmed Inequality.

The tech industry isn’t a digital hippie commune anymore. It’s the new aristocracy. The largest and fastest-growing companies in the world, in both the U.S. and China, are tech giants. It’s our responsibility, as users and voters, to urge these companies to use their political and social power responsibly. “I think absolute power corrupts absolutely,” Broussard said. “In the history of America, we’ve had gilded ages before and we’ve had companies that have had giant monopolies over industries and it hasn’t worked out so great. So I think that one of the things that we need to do as a society is we need to take off our blinders when it comes to technology and we need to kind of examine our techno-chauvinist beliefs and say what kind of a world do we want?”…(More)”.

Senators introduce the ‘Artificial Intelligence in Government Act’


Tajha Chappellet-Lanier at FedScoop: “A cadre of senators is looking to prompt the federal government to be a bit more proactive in utilizing artificial intelligence technologies.

To this end, the bipartisan group including Sens. Brian Schatz, D-Hawaii, Cory Gardner, R-Colo., Rob Portman, R-Ohio, and Kamala Harris, D-Calif., introduced the Artificial Intelligence in Government Act on Wednesday. Per a news release, the bill would seek to “improve the use of AI across the federal government by providing resources and directing federal agencies to include AI in data-related planning.”

The bill aims to do a number of things, including establishing an AI in government advisory board, directing the White House Office of Management and Budget to look into AI as part of the federal data strategy, getting the Office of Personnel Management to look at what kinds of employee skills are necessary for AI competence in government and expanding “an office” at the General Services Administration that will provide expertise, do research and “promote U.S. competitiveness.”

“Artificial intelligence has the potential to benefit society in ways we cannot imagine today,” Harris said in a statement. “We already see its immense value in applications as diverse as diagnosing cancer to routing vehicles. The AI in Government Act gives the federal government the tools and resources it needs to build its expertise in partnership with industry and academia. The bill will help develop the policies to ensure that society reaps the benefits of these emerging technologies, while protecting people from potential risks, such as biases in AI.”

The proposed legislation is supported by a bunch of companies and advocacy groups in the tech space including BSA, the Center for Democracy and Technology, the Information Technology and Innovation Foundation, Intel, the Internet Association, the Lincoln Network, Microsoft, the Niskanen Center, and the R Street Institute.

The senators are hardly alone in their conviction that AI will be a powerful tool for government. At a summit in May, the White House Office of Science and Technology Policy created a Select Committee on Artificial Intelligence, composed of senior research and development officials from across the government….(More)”.

Mission Failure


Matthew Sawh at Stanford Social Innovation Review: “Exposing the problems of policy schools can ignite new ways to realize the mission of educating public servants in the 21st century….

Public policy schools were founded with the aim of educating public servants with academic insights that could be applied to government administration. And while these programs have adapted the tools and vocabularies of the Reagan Revolution, such as the use of privatization and the rhetoric of competition, they have not come to terms with his philosophical legacy that describes our contemporary political culture. To do so, public policy schools need to acknowledge that the public perceives the government as the problem, not the solution, to society’s ills. Today, these programs need to ask how decision makers should improve the design of their organizations, their decision-making processes, and their curricula in order to address the public’s skeptical mindset.

I recently attended a public policy school, Columbia University’s School of International and Public Affairs (SIPA), hoping to learn how to bridge the distrust between public servants and citizens, and to help forge bonds between bureaucracies and voters who feel ignored by their government officials. Instead of building bridges across these divides, the curriculum of my policy program reinforced them—training students to navigate bureaucratic silos in our democracy. Of course, public policy students go to work in the government we have, not the government we wish we had—but that’s the point. These schools should lead the national conversation and equip their graduates to think and act beyond the divides between the governing and the governed.

Most US public policy programs require a core set of courses, including macroeconomics, microeconomics, statistics, and organizational management. SIPA has broader requirements, including a financial management course, a client consulting workshop, and an internship. Both sets of core curricula undervalue the intrapersonal and interpersonal elements of leadership, particularly politics, which I define as persuasion, particularly within groups and institutions.

Public service is more than developing smart ideas; it entails the ability to marshal the financial, political, and organizational support to make those ideas resonate with the public and take effect in government policy. Unfortunately, by giving short shrift to the intrapersonal and institutional contexts of real changemaking, these programs aren’t adequately training early-career professionals to implement their ideas.

Within the core curriculum, the story of change is told as the product of processes wherein policymakers can know the rational expectations of the public. But the people themselves have concerns beyond those perceived by policymakers. As public servants, our success depends on our ability to meet people where they are, rather than where we suppose they should be. …

Public policy schools must reach a consensus on core identity questions: Who is best placed to lead a policy school? What are their aims in crafting a professional class? What exactly should a policy degree mean in the wider world? The problem is that these programs are meant to teach students not only the science of good government, but also the human art of good governance.

Curricula based on an outdated sense of both the political process and advocacy are a predominant feature of policy programs. Instead, core courses should cover how to advocate effectively in this new political world of the 21st century. Students should learn how to raise money for a political campaign; how to lobby; how to make an advertising budget; and how to purchase airtime in the digital age…(More)”

Urban Science: Putting the “Smart” in Smart Cities


Introduction to Special Issue on Urban Modeling and Simulation by Shade T. Shutters: “Increased use of sensors and social data collection methods has provided cities with unprecedented amounts of data. Yet data alone is no guarantee that cities will make smarter decisions, and many of what we call smart cities would be more accurately described as data-driven cities.

Parallel advances in theory are needed to make sense of those novel data streams, and computationally intensive decision-support models are needed to guide decision makers through the avalanche of new data. Fortunately, extraordinary increases in computational ability and data availability in the last two decades have led to revolutionary advances in the simulation and modeling of complex systems.

Techniques such as agent-based modeling and system dynamics modeling have taken advantage of these advances to make major contributions to disciplines as diverse as personalized medicine, computational chemistry, social dynamics, and behavioral economics. Urban systems, with dynamic webs of interacting human, institutional, environmental, and physical systems, are particularly suited to the application of these advanced modeling and simulation techniques. Contributions to this special issue highlight the use of such techniques and are particularly timely as an emerging science of cities begins to crystallize….(More)”.
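
As a purely illustrative sketch of what an agent-based urban model looks like in practice, here is a toy Schelling-style residential relocation model in Python; the grid size, vacancy rate, and tolerance threshold are invented for this example and do not come from any paper in the special issue.

```python
import random

# Toy Schelling-style agent-based model of residential relocation.
# Grid size, vacancy rate, and tolerance threshold are illustrative choices.
SIZE = 20          # the city is a SIZE x SIZE grid
VACANCY = 0.2      # fraction of empty cells
THRESHOLD = 0.3    # agents want at least this share of like neighbors
STEPS = 30

random.seed(42)

def new_cell():
    if random.random() < VACANCY:
        return None                      # vacant lot
    return random.choice(["A", "B"])     # two kinds of households

grid = [[new_cell() for _ in range(SIZE)] for _ in range(SIZE)]

def neighbors(r, c):
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr or dc:
                yield grid[(r + dr) % SIZE][(c + dc) % SIZE]

def unhappy(r, c):
    me = grid[r][c]
    occupied = [n for n in neighbors(r, c) if n is not None]
    if not occupied:
        return False
    return sum(n == me for n in occupied) / len(occupied) < THRESHOLD

for _ in range(STEPS):
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and unhappy(r, c)]
    vacancies = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    random.shuffle(movers)
    for r, c in movers:
        if not vacancies:
            break
        vr, vc = vacancies.pop(random.randrange(len(vacancies)))
        grid[vr][vc], grid[r][c] = grid[r][c], None   # household relocates to a vacant cell
        vacancies.append((r, c))

remaining = sum(unhappy(r, c) for r in range(SIZE) for c in range(SIZE)
                if grid[r][c] is not None)
print("households still unhappy after simulation:", remaining)
```

Even a toy like this captures the appeal of the technique for urban systems: an aggregate pattern (here, residential sorting) emerges from simple, local decisions made by individual agents.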

Designing Cognitive Cities


Book edited by Edy Portmann, Marco E. Tabacchi, Rudolf Seising and Astrid Habenstein: “This book illustrates various aspects and dimensions of cognitive cities. Following a comprehensive introduction, the first part of the book explores conceptual considerations for the design of cognitive cities, while the second part focuses on concrete applications. The contributions provide an overview of the wide diversity of cognitive city conceptualizations and help readers to better understand why it is important to think about the design of our cities. The book adopts a transdisciplinary approach since the cognitive city concept can only be achieved through cooperation across different academic disciplines (e.g., economics, computer science, mathematics) and between research and practice. More and more people live in a growing number of ever-larger cities. As such, it is important to reflect on how cities need to be designed to provide their inhabitants with the means and resources for a good life. The cognitive city is an emerging, innovative approach to address this need….(More)”.

How Insurance Companies Used Bad Science to Discriminate


Jessie Wright-Mendoza at JSTOR Daily: “After the Civil War, the United States searched for ways to redefine itself. But by the 1880s, the hopes of Reconstruction had dimmed. Across the United States there was instead a push to formalize and legalize discrimination against African-Americans. The effort to marginalize the first generation of free black Americans infiltrated nearly every aspect of daily life, including the cost of insurance.

Initially, African-Americans could purchase life insurance policies on equal footing with whites. That all changed in 1881. In March of that year Prudential, one of the country’s largest insurers, announced that policies held by black adults would be worth one-third less than the same plans held by whites. Their weekly premiums would remain the same. Benefits for black children didn’t change, but weekly premiums for their policies would rise by five cents.

Prudential defended the decision by pointing out that the black mortality rate was higher than the white mortality rate. Therefore, they explained, claims paid out for black policyholders were a disproportionate amount of all payouts. Most of the major life insurance companies followed suit, making it nearly impossible for African-Americans to gain coverage. Across the industry, companies blocked agents from soliciting African-American customers and denied commission for any policies issued to blacks.

The public largely accepted the statistical explanation for unequal coverage. The insurer’s job was to calculate risk. Race was merely another variable like occupation or geographic location. As one trade publication put it in 1891: “Life insurance companies are not negro-maniacs, they are business institutions…there is no sentiment and there are no politics in it.”

Companies considered race-based risk the same for all African-Americans, whether they were strong or sickly, educated or uneducated, from the country or the city. The “science” behind the risk formula is credited to Prudential statistician Frederick L. Hoffman, whose efforts to prove the genetic inferiority of the black race were used to justify the company’s discriminatory policies….(More)”.