Open data could have helped us learn from another mining dam disaster


Paulo A. de Souza Jr. at Nature: “The recent Brumadinho dam disaster in Brazil is an example of infrastructure failure with catastrophic consequences. Over 300 people were reported dead or missing, and nearly 400 more were rescued alive. The environmental impact is massive and difficult to quantify. The frequency of these disasters demonstrates that the current assets for monitoring integrity and for alerting managers, authorities and the public to ongoing changes in tailings dams are, in many cases, not working as they should. There is also the need for adequate prevention procedures. Monitoring can be perfect, but without timely and appropriate action, it will be useless. Good management therefore requires quality data. Undisputedly, management practices of industrial sites, including audit procedures, must improve, and data and metadata available from preceding accidents should be better used.

There is a rich literature available about the design, construction, operation, maintenance and decommissioning of tailings facilities. This literature includes guidelines, standards, case studies, technical reports, consultancy and audit practices, and scientific papers. Regulation varies from country to country and in some cases, like Australia and Canada, it is controlled by individual state agencies. There are, however, few datasets that are shared with the technical and scientific community more globally, particularly for prior incidents. Conspicuously lacking are comprehensive data related to the monitoring of large infrastructures such as mining dams.

Today, Scientific Data published a Data Descriptor presenting a dataset obtained from 54 laboratory experiments on the breaching of fluvial dikes due to flow overtopping. (Re)use of such data can help improve our understanding of the fundamental processes underpinning industrial infrastructure collapse (e.g., fluvial dike breaching, mining dam failure) and help assess the accuracy of numerical models for predicting such incidents. This is essential for better management of floods, mitigation of dam collapses, and similar accidents. The authors propose a framework that could exemplify how data involving similar infrastructure can be stored, shared, published, and reused…(More)”.
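One way such experimental data can be reused, as the excerpt notes, is to benchmark a numerical model's prediction against laboratory measurements. The sketch below is purely illustrative: the breach-width curves are synthetic stand-ins rather than values from the published dataset, and root-mean-square error is just one simple accuracy measure a modeller might report.

```python
# Illustrative only: synthetic stand-ins for one overtopping experiment,
# not values from the published Data Descriptor.
import numpy as np

t_obs = np.linspace(0, 600, 61)                  # observation times (s)
width_obs = 0.8 * (1 - np.exp(-t_obs / 150))     # "observed" breach width (m)

t_sim = np.linspace(0, 600, 121)                 # model output times (s)
width_sim = 0.75 * (1 - np.exp(-t_sim / 140))    # model slightly under-predicts

# Interpolate the model output onto the experimental time stamps so the
# two series can be compared point by point.
width_sim_on_obs = np.interp(t_obs, t_sim, width_sim)

# Root-mean-square error as a simple measure of predictive accuracy.
rmse = np.sqrt(np.mean((width_obs - width_sim_on_obs) ** 2))
print(f"RMSE of simulated breach width: {rmse:.3f} m")
```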

When to Use User-Centered Design for Public Policy


Stephen Moilanen at the Stanford Social Innovation Review: “Throughout Barack Obama’s presidency, technology company executives regularly sounded off on what, from their perspective, the administration might do differently. In 2010, Steve Jobs reportedly warned Obama that he likely wouldn’t win reelection, because his administration’s policies disadvantaged businesses like Apple. And in a speech at the 2016 Republican National Convention, Peter Thiel expressed his disapproval of the political establishment by quipping, “Instead of going to Mars, we have invaded the Middle East.”

Against this backdrop, one specific way Silicon Valley has tried to nudge Washington in a new direction is with respect to policy development. Specifically, leading technologists have begun encouraging policy makers to apply user-centered design (otherwise known as design thinking or human-centered design) to the public sector. The thinking goes that if government develops policy with users more squarely in mind, it might accelerate social progress rather than—as has often been the case—stifle it.

At a moment when fewer Americans than ever believe government is meeting their needs, a new approach that elevates the voices of citizens is long overdue. Even so, it would be misguided to view user-centered design as a cure-all for what ails the public sector. The approach holds great promise, but only in a well-defined set of circumstances.

User-Centered Design in the Public Policy Arena

The term “user-centered design” refers simply to a method of building products with an eye toward what users want and need.

To date, the approach has been applied primarily to the domain of for-profit start-ups. In recent months and years, however, supporters of user-centered design have sought to introduce it to other domains. A 2013 article authored by the head of a Danish design consultancy, for example, heralded the fact that “public sector design is on the rise.” And in the recent book Lean Impact, former Google executive and USAID official Ann Mei Chang made an incisive and compelling case for why the social sector stands to benefit from this approach.

According to this line of thinking, we should be driving toward a world where government designs policy with an eye toward the individuals who stand to benefit from—or who could be hurt by—changes to public policy.

An Imperfect Fit

The merits of user-centered design in this context may seem self-evident. Yet it stands in stark contrast to how public sector leaders typically approach policy development. As leading design thinking theorist Jeanne Liedtka notes in her book Design Thinking for the Greater Good, “Innovation and design are [currently] the domain of experts, policy makers, planners and senior leaders. Everyone else is expected to step away.”

But while user-centered design has much to offer policy development, it does not map perfectly onto this new territory….(More)”.

San Francisco becomes the first US city to ban facial recognition by government agencies


Colin Lecher at The Verge: “In a first for a city in the United States, San Francisco has voted to ban its government agencies from using facial recognition technology.

The city’s Board of Supervisors voted eight to one to approve the proposal, set to take effect in a month, that would bar city agencies, including law enforcement, from using the tool. The ordinance would also require city agencies to get board approval for their use of surveillance technology, and set up audits of surveillance tech already in use. Other cities have approved similar transparency measures.

The plan, called the Stop Secret Surveillance Ordinance, was spearheaded by Supervisor Aaron Peskin. In a statement read ahead of the vote, Peskin said it was “an ordinance about having accountability around surveillance technology.”

“This is not an anti-technology policy,” he said, stressing that many tools used by law enforcement are still important to the city’s security. Still, he added, facial recognition is “uniquely dangerous and oppressive.”

The ban comes amid a broader debate over facial recognition, which can be used to rapidly identify people and has triggered new questions about civil liberties. Experts have raised specific concerns about the tools, as studies have demonstrated instances of troubling bias and error rates.

Microsoft, which offers facial recognition tools, has called for some form of regulation for the technology — but how, exactly, to regulate the tool has been contested. Proposals have ranged from light regulation to full moratoriums. Legislation has largely stalled, however.

San Francisco’s decision will inevitably be used as an example as the debate continues and other cities and states decide whether and how to regulate facial recognition. Civil liberties groups like the ACLU of Northern California have already thrown their support behind the San Francisco plan, while law enforcement in the area has pushed back….(More)”.

How AI could save lives without spilling medical secrets


Will Knight at MIT Technology Review: “The potential for artificial intelligence to transform health care is huge, but there’s a big catch.

AI algorithms will need vast amounts of medical data on which to train before machine learning can deliver powerful new ways to spot and understand the cause of disease. That means imagery, genomic information, or electronic health records—all potentially very sensitive information.

That’s why researchers are working on ways to let AI learn from large amounts of medical data while making it very hard for that data to leak.

One promising approach is now getting its first big test at Stanford Medical School in California. Patients there can choose to contribute their medical data to an AI system that can be trained to diagnose eye disease without ever actually accessing their personal details.

Participants submit ophthalmology test results and health record data through an app. The information is used to train a machine-learning model to identify signs of eye disease in the images. But the data is protected by technology developed by Oasis Labs, a startup spun out of UC Berkeley, which guarantees that the information cannot be leaked or misused. The startup was granted permission by regulators to start the trial last week.

The sensitivity of private patient data is a looming problem. AI algorithms trained on data from different hospitals could potentially diagnose illness, prevent disease, and extend lives. But in many countries medical records cannot easily be shared and fed to these algorithms for legal reasons. Research on using AI to spot disease in medical images or data usually involves relatively small data sets, which greatly limits the technology’s promise….

Oasis stores the private patient data on a secure chip, designed in collaboration with other researchers at Berkeley. The data remains within the Oasis cloud; outsiders are able to run algorithms on the data, and receive the results, without its ever leaving the system. A smart contract (software that runs on top of a blockchain) is triggered when a request to access the data is received. This software logs how the data was used and also checks to make sure the machine-learning computation was carried out correctly….(More)”.
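The access pattern described above (raw records stay inside the system, callers receive only computed results, and every request is logged and checked) can be sketched in ordinary code. The toy class below is an assumption-laden illustration of that pattern only; it is not Oasis Labs' actual hardware- or blockchain-backed implementation, and the record format and "model" are invented for the example.

```python
# Toy illustration of "compute comes to the data" with an audit log.
# Not Oasis Labs' API; names and record fields are hypothetical.
import hashlib
import json
import time


class GuardedMedicalStore:
    """Holds raw records privately and exposes only computation results."""

    def __init__(self, records):
        self._records = records      # raw data is never returned to callers
        self.audit_log = []          # append-only record of every request

    def run_model(self, requester, model_fn):
        # Log who asked and a fingerprint of the code they ran.
        self.audit_log.append({
            "requester": requester,
            "model_hash": hashlib.sha256(model_fn.__code__.co_code).hexdigest(),
            "timestamp": time.time(),
        })
        return model_fn(self._records)   # only the aggregate result leaves


# A hypothetical "model": fraction of scans flagged as showing disease.
def flag_rate(records):
    return sum(r["flagged"] for r in records) / len(records)


store = GuardedMedicalStore([{"flagged": 1}, {"flagged": 0}, {"flagged": 1}])
print(store.run_model("research-app", flag_rate))    # prints 0.666...
print(json.dumps(store.audit_log, indent=2))         # how the data was used
```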

Echo Chambers May Not Be as Dangerous as You Think, New Study Finds


News Release: “In the wake of the 2016 American presidential election, western media outlets have become almost obsessed with echo chambers. With headlines like “Echo Chambers are Dangerous” and “Are You in a Social Media Echo Chamber?,” news media consumers have been inundated by articles discussing the problems with spending most of one’s time around likeminded people.

But are social bubbles really all that bad? Perhaps not.

A new study from the Annenberg School for Communication at the University of Pennsylvania and the School of Media and Public Affairs at George Washington University, published today in the Proceedings of the National Academy of Sciences, shows that collective intelligence — peer learning within social networks — can increase belief accuracy even in politically homogenous groups.

“Previous research showed that social information processing could work in mixed groups,” says lead author and Annenberg alum Joshua Becker (Ph.D. ’18), who is currently a postdoctoral fellow at Northwestern University’s Kellogg School of Management. “But theories of political polarization argued that social influence within homogenous groups should only amplify existing biases.”

It’s easy to imagine that networked collective intelligence would work when you’re asking people neutral questions, such as how many jelly beans are in a jar. But what about probing hot button political topics? Because people are more likely to adjust the facts of the world to match their beliefs than vice versa, prior theories claimed that a group of people who agree politically would be unable to use collective reasoning to arrive at a factual answer if it challenges their beliefs.

“Earlier this year, we showed that when Democrats and Republicans interact with each other within properly designed social media networks, it can eliminate polarization and improve both groups’ understanding of contentious issues such as climate change,” says senior author Damon Centola, Associate Professor of Communication at the Annenberg School. “Remarkably, our new findings show that properly designed social media networks can even lead to improved understanding of contentious topics within echo chambers.”

Becker and colleagues devised an experiment in which participants answered fact-based questions that stir up political leanings, like “How much did unemployment change during Barack Obama’s presidential administration?” or “How much has the number of undocumented immigrants changed in the last 10 years?” Participants were placed in groups of only Republicans or only Democrats and given the opportunity to change their responses based on the other group members’ answers.

The results show that individual beliefs in homogenous groups became 35% more accurate after participants exchanged information with one another. And although people’s beliefs became more similar to those of their own party members, they also became more similar to those of members of the other political party, even without any between-group exchange. This means that even in homogenous groups — or echo chambers — social influence increases factual accuracy and decreases polarization.

“Our results cast doubt on some of the gravest concerns about the role of echo chambers in contemporary democracy,” says co-author Ethan Porter, Assistant Professor of Media and Public Affairs at George Washington University. “When it comes to factual matters, political echo chambers need not necessarily reduce accuracy or increase polarization. Indeed, we find them doing the opposite.”…(More)… (Full Paper: “The Wisdom of Partisan Crowds”)
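To make the mechanism concrete, here is a toy simulation of the kind of peer revision the study describes: group members start with noisy, similarly biased estimates of a factual quantity and repeatedly move toward the group's answer. The group size, noise model, and revision rule are assumptions chosen only for illustration; this is not the authors' experimental design or analysis.

```python
# Toy simulation: peer averaging inside a homogenous ("echo chamber") group.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
truth = 4.9                    # the true value of some factual question
n, rounds = 40, 3              # 40 like-minded participants, 3 revision rounds

# Everyone starts with a noisy guess that shares the same partisan bias (+2).
estimates = truth + 2.0 + rng.normal(0.0, 4.0, size=n)

def mean_abs_error(x):
    return np.abs(x - truth).mean()

print(f"initial mean error: {mean_abs_error(estimates):.2f}")
for _ in range(rounds):
    # Each person moves halfway toward the average of everyone else's answer.
    others_mean = (estimates.sum() - estimates) / (n - 1)
    estimates = 0.5 * estimates + 0.5 * others_mean
    print(f"after revision:     {mean_abs_error(estimates):.2f}")
```

In this stylized version, exchange averages away only the idiosyncratic noise while the shared bias persists, which is part of why the study's empirical finding that homogenous groups also drifted toward the other party's answers is notable.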

The Dark Side of Sunlight


Essay by James D’Angelo and Brent Ranalli in Foreign Affairs: “…76 percent of Americans, according to a Gallup poll, disapprove of Congress.

This dysfunction started well before the Trump presidency. It has been growing for decades, despite promise after promise and proposal after proposal to reverse it. Many explanations have been offered, from the rise of partisan media to the growth of gerrymandering to the explosion of corporate money. But one of the most important causes is usually overlooked: transparency. Something usually seen as an antidote to corruption and bad government, it turns out, is leading to both.

The problem began in 1970, when a group of liberal Democrats in the House of Representatives spearheaded the passage of new rules known as “sunshine reforms.” Advertised as measures that would make legislators more accountable to their constituents, these changes increased the number of votes that were recorded and allowed members of the public to attend previously off-limits committee meetings.

But the reforms backfired. By diminishing secrecy, they opened up the legislative process to a host of actors—corporations, special interests, foreign governments, members of the executive branch—that pay far greater attention to the thousands of votes taken each session than the public does. The reforms also deprived members of Congress of the privacy they once relied on to forge compromises with political opponents behind closed doors, and they encouraged them to bring useless amendments to the floor for the sole purpose of political theater.

Fifty years on, the results of this experiment in transparency are in. When lawmakers are treated like minors in need of constant supervision, it is special interests that benefit, since they are the ones doing the supervising. And when politicians are given every incentive to play to their base, politics grows more partisan and dysfunctional. In order for Congress to better serve the public, it has to be allowed to do more of its work out of public view.

The idea of open government enjoys nearly universal support. Almost every modern president has paid lip service to it. (Even the famously paranoid Richard Nixon said, “When information which properly belongs to the public is systematically withheld by those in power, the people soon become ignorant of their own affairs, distrustful of those who manage them, and—eventually—incapable of determining their own destinies.”) From former Republican Speaker of the House Paul Ryan to Democratic Speaker of the House Nancy Pelosi, from the liberal activist Ralph Nader to the anti-tax crusader Grover Norquist, all agree that when it comes to transparency, more is better.

It was not always this way. It used to be that secrecy was seen as essential to good government, especially when it came to crafting legislation. …(More)”

Surround Sound


Report by the Public Affairs Council: “Millions of citizens and thousands of organizations contact Congress each year to urge Senators and House members to vote for or against legislation. Countless others weigh in with federal agencies on regulatory issues ranging from healthcare to livestock grazing rights. Congressional and federal agency personnel are inundated with input. So how do staff know what to believe? Who do they trust? And which methods of communicating with government seem to be most effective? To find out, the Public Affairs Council teamed up with Morning Consult in an online survey of 173 congressional and federal employees. Participants were asked for their views on social media, fake news, influential methods of communication and trusted sources of policy information.

When asked to compare the effectiveness of different advocacy techniques, congressional staff rate personal visits to Washington, D.C. (83%) or district offices (81%) and think tank reports (81%) at the top of the list. Grassroots advocacy techniques such as emails, phone calls and postal mail campaigns also score above 75% for effectiveness.

Traditional in-person visits from lobbyists are considered effective by a strong majority (75%), as are town halls (73%) and lobby days (72%). Of the 13 options considered, the lowest score goes to social media posts, which are still rated effective by 57% of survey participants.

Despite their unpopularity with the general public, corporate CEOs are an asset when it comes to getting meetings scheduled with members of Congress. Eighty-three percent (83%) of congressional staffers say their boss would likely meet with a CEO from their district or state when that executive comes to Washington, D.C., compared with only 7% who say their boss would be unlikely to take the meeting….(More)”.

New Report Examines Reproducibility and Replicability in Science, Recommends Ways to Improve Transparency and Rigor in Research


National Academies of Sciences: “While computational reproducibility in scientific research is generally expected when the original data and code are available, lack of ability to replicate a previous study — or obtain consistent results looking at the same scientific question but with different data — is more nuanced and occasionally can aid in the process of scientific discovery, says a new congressionally mandated report from the National Academies of Sciences, Engineering, and Medicine.  Reproducibility and Replicability in Science recommends ways that researchers, academic institutions, journals, and funders should help strengthen rigor and transparency in order to improve the reproducibility and replicability of scientific research.

Defining Reproducibility and Replicability

The terms “reproducibility” and “replicability” are often used interchangeably, but the report uses each term to refer to a separate concept.  Reproducibility means obtaining consistent computational results using the same input data, computational steps, methods, code, and conditions of analysis.  Replicability means obtaining consistent results across studies aimed at answering the same scientific question, each of which has obtained its own data.   

Reproducing research involves using the original data and code, while replicating research involves new data collection and similar methods used in previous studies, the report says.  Even when a study was rigorously conducted according to best practices, correctly analyzed, and transparently reported, it may fail to be replicated. 
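As a minimal sketch of what the report's definition of reproducibility asks for in practice (the same input data, code, and conditions of analysis yielding the same result), the snippet below reruns a deterministic, seed-fixed analysis and checks that the output matches; the fingerprinting step is an illustration, not a procedure recommended by the report.

```python
# Minimal sketch of a computational reproducibility check; illustrative only.
import hashlib
import json
import random


def analysis(data, seed=2020):
    """Deterministic given the same input data and the same random seed."""
    rng = random.Random(seed)
    sample = rng.sample(data, k=5)
    return sum(sample) / len(sample)


data = list(range(100))
result_original = analysis(data)
result_rerun = analysis(data)        # same data, code, and conditions

# Fingerprint the inputs and output so an independent rerun can be compared.
fingerprint = hashlib.sha256(
    json.dumps({"data": data, "result": result_original}).encode()
).hexdigest()

assert result_rerun == result_original, "computational result did not reproduce"
print("reproduced:", result_rerun, fingerprint[:12])
```

Replicability, by contrast, could not be verified this way, since it requires collecting new data and tolerating some variation in the results.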

“Being able to reproduce the computational results of another researcher starting with the same data and replicating a previous study to test its results facilitate the self-correcting nature of science, and are often cited as hallmarks of good science,” said Harvey Fineberg, president of the Gordon and Betty Moore Foundation and chair of the committee that conducted the study.  “However, factors such as lack of transparency of reporting, lack of appropriate training, and methodological errors can prevent researchers from being able to reproduce or replicate a study.  Research funders, journals, academic institutions, policymakers, and scientists themselves each have a role to play in improving reproducibility and replicability by ensuring that scientists adhere to the highest standards of practice, understand and express the uncertainty inherent in their conclusions, and continue to strengthen the interconnected web of scientific knowledge — the principal driver of progress in the modern world.”….(More)”.

GIS and the 2020 Census


ESRI: “GIS and the 2020 Census: Modernizing Official Statistics provides statistical organizations with the most recent GIS methodologies and technological tools to support census workers’ needs at all stages of a census. Learn how to plan and carry out census work with GIS using new technologies for field data collection and operations management. International case studies illustrate concepts in practice….(More)”.

Habeas Data: Privacy vs. The Rise of Surveillance Tech


Book by Cyrus Farivar: “Habeas Data shows how the explosive growth of surveillance technology has outpaced our understanding of the ethics, mores, and laws of privacy.

Award-winning tech reporter Cyrus Farivar makes the case by taking ten historic court decisions that defined our privacy rights and matching them against the capabilities of modern technology. It’s an approach that combines the charge of a legal thriller with the shock of the daily headlines.

Chapters include: the 1960s prosecution of a bookie that established the “reasonable expectation of privacy” in nonpublic places beyond your home (but how does that ruling apply now, when police can chart your every move and hear your every conversation within your own home — without even having to enter it?); the 1970s case where the police monitored a lewd caller, a decision that is now the linchpin of the NSA’s controversial metadata tracking program revealed by Edward Snowden; and a 2010 low-level burglary trial that revealed police had tracked a defendant’s past 12,898 locations before arrest — an invasion of privacy grossly out of proportion to the alleged crime, which showed how authorities are all too willing to take advantage of the ludicrous gap between the slow pace of legal reform and the rapid transformation of technology.

A dazzling exposé that journeys from Oakland, California to the halls of the Supreme Court to the back of a squad car, Habeas Data combines deft reportage, deep research, and original interviews to offer an X-ray diagnostic of our current surveillance state….(More)”.