What is the true value of data? New series on the return on investment of data interventions


Case studies prepared by Jessica Espey and Hayden Dahmm for SDSN TReNDS: “But what is the ROI of investing in data for altruistic means, e.g., for sustainable development?

Today, we are launching a series of case studies to answer this question in collaboration with the Global Partnership on Sustainable Development Data. The ten examples we will profile range from earth observation data gathered via satellites to investments in national statistics systems, with costs from just a few hundred thousand dollars (US) per year to millions over decades.

The series includes efforts to revamp existing statistical systems. It also supports the growing movement to invest in less traditional approaches to data collection and analysis beyond statistical systems, such as private sector data sources or emerging technologies enabled by the growth of the information and communications technology (ICT) sector.

Some highlights from the first five case studies–available now:

An SMS-based system called mTRAC, implemented in Uganda, has supported significant improvements in the country’s health system, including a halving of response times to disease outbreaks and a reduction in medication stock-outs, the latter of which resulted in fewer malaria-related deaths.

NASA’s and the U.S. Geological Survey’s Landsat program–satellites that provide imagery known as earth observation data–is enabling discoveries and interventions across the science and health sectors, and has provided an estimated worldwide economic benefit as high as US$2.19 billion as of 2011.

BudgIT, a civil society organization making budget data in Nigeria more accessible to citizens through machine-readable PDFs and complementary online/offline campaigns, is empowering citizens to take part in the federal budget process.

International nonprofit BRAC is ensuring mothers and infants in the slums of Bangladesh are not left behind through a data-informed intervention combining social mapping, local censuses, and real-time data sharing. BRAC estimates that from 2008 to 2017, 1,087 maternal deaths were averted out of the 2,476 deaths that would have been expected based on national statistics.

Atlantic City police are developing new approaches to their patrolling, community engagement, and other activities through risk modeling based on crime and other data, resulting in reductions in homicides and shooting injuries (26 percent) and robberies (37 percent) in just the first year of implementation….(More)”.
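As an editorial aside: the case study does not publish the underlying model, but grid-based risk scoring of the kind described in the Atlantic City example can be sketched in a few lines. Everything below (cell coordinates, incident data, factor counts, and weights) is hypothetical, for illustration only.

```python
# Illustrative sketch only: a toy risk score over a city grid, in the
# spirit of risk-terrain modeling. All data and weights are hypothetical.
from collections import Counter

# Hypothetical incident records as (x, y) grid-cell coordinates.
past_incidents = [(2, 3), (2, 3), (2, 4), (5, 1), (2, 3)]

# Hypothetical environmental risk factors per cell (e.g., vacant lots, bars).
risk_factors = {(2, 3): 2, (2, 4): 1, (0, 0): 1}

W_HISTORY, W_ENVIRONMENT = 0.7, 0.3  # assumed weights, chosen for illustration

def risk_scores(incidents, factors, w_hist, w_env):
    """Combine incident history and environmental factors into one score per cell."""
    counts = Counter(incidents)
    cells = set(counts) | set(factors)
    return {c: w_hist * counts.get(c, 0) + w_env * factors.get(c, 0) for c in cells}

scores = risk_scores(past_incidents, risk_factors, W_HISTORY, W_ENVIRONMENT)

# Patrols would then be prioritized by composite score, highest first.
for cell, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(cell, round(score, 2))
```

The essential idea is that patrol priorities are ranked by a composite score that blends incident history with environmental context, rather than by raw incident counts alone.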

Tech Was Supposed to Be Society’s Great Equalizer. What Happened?


Derek Thompson at The Atlantic: “Historians may look back at the early 21st century as the Gilded Age 2.0. Not since the late 1800s has the U.S. been so defined by the triad of rapid technological change, gaping economic inequality, and sudden social upheaval.

Ironically, the digital revolution was supposed to be an equalizer. The early boosters of the internet sprang from the counterculture of the 1960s and the New Communalist movement. Some of them, like Stewart Brand, hoped to spread the sensibilities of hippie communes throughout the wilderness of the web. Others saw the internet more broadly as an opportunity to build a society that amended the failures of the physical world.

But in the last few years, the most successful tech companies have built a new economy that often accentuates the worst parts of the old world they were bent on replacing. Facebook’s platform amplifies preexisting biases—both of ideology and race—and political propaganda. Amazon’s dominion over online retail has allowed it to squash competition, not unlike the railroad monopolies of the 19th century. And Apple, in designing the most profitable product in modern history, has also designed another instrument of harmful behavioral addiction….

The only way to make technology that helps a broad array of people is to consult a broad array of people to make that technology. But the computer industry has a multi-decade history of gender discrimination. It is, perhaps, the industry’s original sin. After World War II, Great Britain was the world’s leader in computing. Its efforts to decipher Nazi codes led to the creation of the world’s first programmable digital computer. But within 30 years, the British advantage in computing and software had withered, in part due to explicit efforts to push women out of the computer-science workforce, according to Marie Hicks’ history, Programmed Inequality.

The tech industry isn’t a digital hippie commune anymore. It’s the new aristocracy. The largest and fastest-growing companies in the world, in both the U.S. and China, are tech giants. It’s our responsibility, as users and voters, to urge these companies to use their political and social power responsibly. “I think absolute power corrupts absolutely,” Broussard said. “In the history of America, we’ve had gilded ages before and we’ve had companies that have had giant monopolies over industries and it hasn’t worked out so great. So I think that one of the things that we need to do as a society is we need to take off our blinders when it comes to technology and we need to kind of examine our techno-chauvinist beliefs and say what kind of a world do we want?”…(More)”.

Senators introduce the ‘Artificial Intelligence in Government Act’


Tajha Chappellet-Lanier at FedScoop: “A cadre of senators is looking to prompt the federal government to be a bit more proactive in utilizing artificial intelligence technologies.

To this end, the bipartisan group including Sens. Brian Schatz, D-Hawaii, Cory Gardner, R-Colo., Rob Portman, R-Ohio, and Kamala Harris, D-Calif., introduced the Artificial Intelligence in Government Act on Wednesday. Per a news release, the bill would seek to “improve the use of AI across the federal government by providing resources and directing federal agencies to include AI in data-related planning.”

The bill aims to do a number of things, including establishing an AI in government advisory board, directing the White House Office of Management and Budget to look into AI as part of the federal data strategy, getting the Office of Personnel Management to look at what kinds of employee skills are necessary for AI competence in government and expanding “an office” at the General Services Administration that will provide expertise, do research and “promote U.S. competitiveness.”

“Artificial intelligence has the potential to benefit society in ways we cannot imagine today,” Harris said in a statement. “We already see its immense value in applications ranging from diagnosing cancer to routing vehicles. The AI in Government Act gives the federal government the tools and resources it needs to build its expertise in partnership with industry and academia. The bill will help develop the policies to ensure that society reaps the benefits of these emerging technologies, while protecting people from potential risks, such as biases in AI.”

The proposed legislation is supported by a number of companies and advocacy groups in the tech space, including BSA, the Center for Democracy and Technology, the Information Technology and Innovation Foundation, Intel, the Internet Association, the Lincoln Network, Microsoft, the Niskanen Center, and the R Street Institute.

The senators are hardly alone in their conviction that AI will be a powerful tool for government. At a summit in May, the White House Office of Science and Technology Policy created a Select Committee on Artificial Intelligence, composed of senior research and development officials from across the government….(More)”.

Mission Failure


Matthew Sawh at Stanford Social Innovation Review: “Exposing the problems of policy schools can ignite new ways to realize the mission of educating public servants in the 21st century….

Public policy schools were founded with the aim of educating public servants with academic insights that could be applied to government administration. And while these programs have adapted the tools and vocabularies of the Reagan Revolution, such as the use of privatization and the rhetoric of competition, they have not come to terms with Reagan’s philosophical legacy, which still describes our contemporary political culture. To do so, public policy schools need to acknowledge that the public perceives government not as the solution to society’s ills but as the problem. Today, these programs need to ask how decision makers should improve the design of their organizations, their decision-making processes, and their curricula in order to address the public’s skeptical mindset.

I recently attended a public policy school, Columbia University’s School of International and Public Affairs (SIPA), hoping to learn how to bridge the distrust between public servants and citizens, and to help forge bonds between bureaucracies and voters who feel ignored by their government officials. Instead of building bridges across these divides, the curriculum of my policy program reinforced them—training students to navigate bureaucratic silos in our democracy. Of course, public policy students go to work in the government we have, not the government we wish we had—but that’s the point. These schools should lead the national conversation and equip their graduates to think and act beyond the divides between the governing and the governed.

Most US public policy programs require a core set of courses, including macroeconomics, microeconomics, statistics, and organizational management. SIPA has broader requirements, including a financial management course, a client consulting workshop, and an internship. Both sets of core curricula undervalue the intrapersonal and interpersonal elements of leadership, particularly politics, which I define as persuasion, especially within groups and institutions.

Public service is more than developing smart ideas; it entails the ability to marshal the financial, political, and organizational support to make those ideas resonate with the public and take effect in government policy. Unfortunately, by giving short shrift to the intrapersonal and institutional contexts of real changemaking, these programs aren’t adequately training early-career professionals to implement their ideas.

Within the core curriculum, the story of change is told as the product of processes wherein policymakers can know the rational expectations of the public. But the people themselves have concerns beyond those perceived by policymakers. As public servants, our success depends on our ability to meet people where they are, rather than where we suppose they should be.  …

Public policy schools must reach a consensus on core identity questions: Who is best placed to lead a policy school? What are their aims in crafting a professional class? What exactly should a policy degree mean in the wider world? The challenge is that these programs are meant to teach students not only the science of good government but also the human art of good governance.

Curricula based on an outdated sense of both the political process and advocacy are a predominant feature of policy programs. Instead, core courses should cover how to advocate effectively in the new political world of the 21st century. Students should learn how to raise money for a political campaign; how to lobby; how to make an advertising budget; and how to purchase airtime in the digital age…(More)”

Urban Science: Putting the “Smart” in Smart Cities


Introduction to Special Issue on Urban Modeling and Simulation by Shade T. Shutters: “Increased use of sensors and social data collection methods has provided cities with unprecedented amounts of data. Yet data alone is no guarantee that cities will make smarter decisions, and many of what we call smart cities would be more accurately described as data-driven cities.

Parallel advances in theory are needed to make sense of these novel data streams, and computationally intensive decision-support models are needed to guide decision makers through the avalanche of new data. Fortunately, extraordinary increases in computational ability and data availability over the last two decades have led to revolutionary advances in the simulation and modeling of complex systems.

Techniques such as agent-based modeling and system dynamics modeling have taken advantage of these advances to make major contributions to disciplines as diverse as personalized medicine, computational chemistry, social dynamics, and behavioral economics. Urban systems, with their dynamic webs of interacting human, institutional, environmental, and physical systems, are particularly suited to the application of these advanced modeling and simulation techniques. Contributions to this special issue highlight the use of such techniques and are particularly timely as an emerging science of cities begins to crystallize….(More)”.
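For readers new to the technique, here is a minimal, self-contained sketch of an agent-based model: a toy Schelling-style residential segregation model, a classic urban ABM. The grid size, tolerance threshold, and movement rule are illustrative choices, not drawn from the special issue.

```python
# Toy Schelling segregation model: agents move when too few neighbors
# share their type. Parameters below are arbitrary illustrative values.
import random

random.seed(42)
SIZE, EMPTY_FRAC, TOLERANCE, STEPS = 20, 0.2, 0.4, 30

# 0 = empty cell, 1 or 2 = an agent of that type
grid = [[0 if random.random() < EMPTY_FRAC else random.choice([1, 2])
         for _ in range(SIZE)] for _ in range(SIZE)]

def unhappy(x, y):
    """An agent is unhappy if too small a share of its neighbors match its type."""
    kind = grid[y][x]
    neighbors = [grid[(y + dy) % SIZE][(x + dx) % SIZE]
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                 if (dx, dy) != (0, 0)]
    occupied = [n for n in neighbors if n != 0]
    if not occupied:
        return False
    return sum(n == kind for n in occupied) / len(occupied) < TOLERANCE

for _ in range(STEPS):
    movers = [(x, y) for y in range(SIZE) for x in range(SIZE)
              if grid[y][x] != 0 and unhappy(x, y)]
    empties = [(x, y) for y in range(SIZE) for x in range(SIZE) if grid[y][x] == 0]
    random.shuffle(movers)
    for x, y in movers:
        if not empties:
            break
        # Relocate the unhappy agent to a randomly chosen empty cell.
        ex, ey = empties.pop(random.randrange(len(empties)))
        grid[ey][ex], grid[y][x] = grid[y][x], 0
        empties.append((x, y))

remaining = sum(unhappy(x, y) for y in range(SIZE)
                for x in range(SIZE) if grid[y][x])
print(f"{remaining} unhappy agents remain after {STEPS} steps")
```

Even this toy exhibits the hallmark of agent-based modeling: a city-scale pattern (segregated clusters) emerging from simple individual rules, which is precisely why the approach suits urban systems.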

Designing Cognitive Cities


Book edited by Edy Portmann, Marco E. Tabacchi, Rudolf Seising and Astrid Habenstein: “This book illustrates various aspects and dimensions of cognitive cities. Following a comprehensive introduction, the first part of the book explores conceptual considerations for the design of cognitive cities, while the second part focuses on concrete applications. The contributions provide an overview of the wide diversity of cognitive city conceptualizations and help readers to better understand why it is important to think about the design of our cities. The book adopts a transdisciplinary approach, since the cognitive city concept can only be achieved through cooperation across different academic disciplines (e.g., economics, computer science, mathematics) and between research and practice. More and more people live in a growing number of ever-larger cities, so it is important to reflect on how cities need to be designed to provide their inhabitants with the means and resources for a good life. The cognitive city is an emerging, innovative approach to address this need….(More)”.

How Insurance Companies Used Bad Science to Discriminate


Jessie Wright-Mendoza at JSTOR Daily: “After the Civil War, the United States searched for ways to redefine itself. But by the 1880s, the hopes of Reconstruction had dimmed. Across the United States there was instead a push to formalize and legalize discrimination against African-Americans. The effort to marginalize the first generation of free black Americans infiltrated nearly every aspect of daily life, including the cost of insurance.

Initially, African-Americans could purchase life insurance policies on equal footing with whites. That all changed in 1881. In March of that year Prudential, one of the country’s largest insurers, announced that policies held by black adults would be worth one-third less than the same plans held by whites. Their weekly premiums would remain the same. Benefits for black children didn’t change, but weekly premiums for their policies would rise by five cents.

Prudential defended the decision by pointing out that the black mortality rate was higher than the white mortality rate. Therefore, they explained, claims paid out for black policyholders were a disproportionate amount of all payouts. Most of the major life insurance companies followed suit, making it nearly impossible for African-Americans to gain coverage. Across the industry, companies blocked agents from soliciting African-American customers and denied commission for any policies issued to blacks.

The public largely accepted the statistical explanation for unequal coverage. The insurer’s job was to calculate risk. Race was merely another variable like occupation or geographic location. As one trade publication put it in 1891: “Life insurance companies are not negro-maniacs, they are business institutions…there is no sentiment and there are no politics in it.”

Companies considered race-based risk the same for all African-Americans, whether they were strong or sickly, educated or uneducated, from the country or the city. The “science” behind the risk formula is credited to Prudential statistician Frederick L. Hoffman, whose efforts to prove the genetic inferiority of the black race were used to justify the company’s discriminatory policies….(More)”.

To Secure Knowledge: Social Science Partnerships for the Common Good


Social Science Research Council: “For decades, the social sciences have generated knowledge vital to guiding public policy, informing business, and understanding and improving the human condition. But today, the social sciences face serious threats. From dwindling federal funding to public mistrust in institutions to widespread skepticism about data, the infrastructure supporting the social sciences is shifting in ways that threaten to undercut research and knowledge production.

How can we secure social knowledge for future generations?

This question has guided the Social Science Research Council’s Task Force. Following eighteen months of consultation with key players as well as internal deliberation, we have identified both long-term developments and present threats that have created challenges for the social sciences but have also opened unique opportunities. And we have generated recommendations to address these issues.

Our core finding focuses on the urgent need for new partnerships and collaborations among several key players: the federal government, academic institutions, donor organizations, and the private sector. Several decades ago, these institutions had clear zones of responsibility in producing social knowledge, with the federal government providing the largest portion of funding for basic research. Today, private companies represent an increasingly large share not just of research and funding but also of the production of data that informs the social sciences, from smartphone usage to social media patterns.

In addition, today’s social scientists face unprecedented demands for accountability, speedy publication, and generation of novel results. These pressures have emerged from the fragmented institutional foundation that undergirds research. That foundation needs a redesign in order for the social sciences to continue helping our communities address problems ranging from income inequality to education reform.

To build a better future, we identify five areas of action: Funding, Data, Ethics, Research Quality, and Research Training. In each area, our recommendations range from enlarging corporate-academic pilot programs to improving social science training in digital literacy.

A consistent theme is that none of the measures, if taken unilaterally, can generate optimal outcomes. Instead, we have issued a call to forge a new research compact to harness the potential of the social sciences for improving human lives. That compact depends on partnerships, and we urge the key players in the construction of social science knowledge—including universities, government, foundations, and corporations—to act swiftly. With the right realignments, the security of social knowledge lies within our reach….(More)”

Ethics and Data Science


(Open) Ebook by Mike Loukides, Hilary Mason, and DJ Patil: “As the impact of data science on society continues to grow, there is an increased need to discuss how data is appropriately used and how to address misuse. Yet ethical principles for working with data have been available for decades. The real issue today is how to put those principles into action. With this report, authors Mike Loukides, Hilary Mason, and DJ Patil examine practical ways for making ethical data standards part of your work every day.

To help you consider all of the possible ramifications of your work on data projects, this report includes:

  • A sample checklist that you can adapt for your own procedures
  • Five framing guidelines (the Five C’s) for building data products: consent, clarity, consistency, control, and consequences
  • Suggestions for building ethics into your data-driven culture

Now is the time to invest in a deliberate practice of data ethics, for better products, better teams, and better outcomes….(More)”.

The Promise and Peril of the Digital Knowledge Loop


Excerpt of Albert Wenger’s draft book World After Capital: “The zero marginal cost and universality of digital technologies are already impacting the three phases of learning, creating and sharing, giving rise to a Digital Knowledge Loop. This Digital Knowledge Loop holds both amazing promise and great peril, as can be seen in the example of YouTube.

YouTube has experienced astounding growth since its release in beta form in 2005. People around the world now upload over 100 hours of video content to YouTube every minute. It is difficult to grasp just how much content that is. If you were to spend 100 years watching YouTube twenty-four hours a day, you still wouldn’t be able to watch all the video that people upload in the course of a single week. YouTube contains amazing educational content on topics as diverse as gardening and theoretical math. Many of those videos show the promise of the Digital Knowledge Loop. Take, for example, Destin Sandlin, the creator of the Smarter Every Day series of videos. Destin is interested in all things science. When he learns something new, such as the make-up of butterfly wings, he creates an engaging new video and shares it with the world. But the peril of the Digital Knowledge Loop is right there as well: YouTube is also full of videos that peddle conspiracies, spread misinformation, and even incite outright hate.
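A quick back-of-the-envelope check of that one-week claim, using the 100-hours-per-minute figure quoted above:

```latex
\[
\underbrace{100 \times 60 \times 24 \times 7}_{\text{hours uploaded per week}} = 1{,}008{,}000 \text{ hours},
\qquad
\underbrace{100 \times 365 \times 24}_{\text{a century of nonstop viewing}} = 876{,}000 \text{ hours}.
\]
```

A hundred years of around-the-clock viewing indeed falls short of a single week of uploads.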

Both the promise and the peril are made possible by the same characteristics of YouTube: all of the videos are available for free to anyone in the world (except in those countries where YouTube is blocked). They are available 24×7, and they become available globally the second someone publishes a new one. Anybody can publish a video. All you need to access these videos is an Internet connection and a smartphone—you don’t even need a laptop or other traditional computer. That means that already today two to three billion people, almost half of the world’s population, have access to YouTube and can participate in the Digital Knowledge Loop, for good and for bad.

These characteristics, which draw on the underlying capabilities of digital technology, are also found in other systems that similarly show the promise and peril of the Digital Knowledge Loop.

Wikipedia, the collectively produced online encyclopedia, is another great example. Here is how it works at its most promising: someone reads an entry and learns the method used by Pythagoras to approximate the number pi. They then go off and create an animation that illustrates this method. Finally, they share the animation by publishing it back to Wikipedia, thus making it easier for more people to learn. Wikipedia entries result from a large collaboration and an ongoing revision process, with only a single entry per topic visible at any given time (although you can examine both the history of the page and the conversations about it). What makes this possible is a piece of software known as a wiki that keeps track of all the historical edits [58]. When that process works well, it raises the quality of entries over time. But when there is a coordinated effort at manipulation, or insufficient editing resources, Wikipedia too can spread misinformation instantly and globally.

Wikipedia illustrates another important aspect of the Digital Knowledge Loop: it allows individuals to participate in extremely small ways. If you wish, you can contribute to Wikipedia by fixing a single typo. In fact, the minimal contribution unit is just one letter! I have not yet contributed anything of length to Wikipedia, but I have fixed probably a dozen or so typos. That doesn’t sound like much, but if you get ten thousand people to fix a typo every day, that’s 3.65 million typos a year. Assuming a single person takes two minutes on average to discover and fix a typo, it would take nearly fifty people working full time for a year (2,500 hours each) to fix 3.65 million typos.
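The arithmetic behind those figures, spelled out:

```latex
\[
10{,}000 \tfrac{\text{typos}}{\text{day}} \times 365 \tfrac{\text{days}}{\text{yr}} = 3{,}650{,}000 \tfrac{\text{typos}}{\text{yr}},
\qquad
\frac{3{,}650{,}000 \times 2\,\text{min}}{60 \tfrac{\text{min}}{\text{h}} \times 2{,}500\,\text{h}} \approx 48.7 \text{ full-time person-years}.
\]
```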

Many small contributions that add up are only possible in the Digital Knowledge Loop. The Wikipedia spelling correction example shows the power of such contributions. Their peril can be seen in systems such as Twitter and Facebook, where the smallest contributions are Likes and Retweets or Reposts to one’s friends or followers. While these tiny actions can amplify high quality content, they can just as easily spread mistakes, rumors and propaganda. The impact of these information cascades ranges from viral jokes to swaying the outcomes of elections and has even led to major outbreaks of violence.

Some platforms even make it possible for people to passively contribute to the Digital Knowledge Loop. The app Waze is a good example. …The promise of the Digital Knowledge Loop is broad access to a rapidly improving body of knowledge. The peril is a fragmented post-truth society constantly in conflict. Both of these possibilities are enabled by the same fundamental characteristics of digital technologies. And once again we see clearly that technology by itself does not determine the future…(More).