Facebook releases a trio of maps to aid with fighting disease outbreaks


Sarah Perez at Techcrunch: “Facebook… announced a new initiative focused on using its data and technologies to help nonprofit organizations and universities working in public health better map the spread of infectious diseases around the world. Specifically, the company is introducing three new maps: population density maps with demographic estimates, movement maps and network coverage maps. These, says Facebook, will help the health partners to understand where people live, how they’re moving and if they have connectivity — all factors that can aid in determining how to respond to outbreaks, and where supplies should be delivered.

As Facebook explained, health organizations rely on information like this when planning public health campaigns. But much of the information they rely on is outdated, like older census data. In addition, information from more remote communities can be scarce.

By combining the new maps with other public health data, Facebook believes organizations will be better equipped to address epidemics.

The new high-resolution population density maps will estimate the number of people living within 30-meter grid tiles, and provide insights on demographics, including the number of children under five, the number of women of reproductive age, as well as young and elderly populations. These maps aren’t built using Facebook data, but are instead built by applying Facebook’s AI capabilities to satellite imagery and census information.
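
To make the grid-tile idea concrete, here is a minimal sketch of dasymetric disaggregation: spreading a census count across small tiles in proportion to built-up area detected in imagery. It is an illustration under stated assumptions, not Facebook’s actual pipeline; the tile IDs, weights, and the `disaggregate_population` helper are invented for this example.

```python
# Minimal sketch of dasymetric population disaggregation:
# spread a census-block population across 30 m grid tiles in
# proportion to how much built-up area a model detects in each tile.
# (Illustrative only -- not Facebook's actual pipeline.)

def disaggregate_population(block_population, tile_building_scores):
    """block_population: census count for one enumeration block.
    tile_building_scores: dict mapping tile_id -> detected built-up
    area (e.g. rooftop pixels) from satellite imagery."""
    total = sum(tile_building_scores.values())
    if total == 0:
        # No detected structures: leave the tiles empty
        return {tile: 0.0 for tile in tile_building_scores}
    return {
        tile: block_population * score / total
        for tile, score in tile_building_scores.items()
    }

# Example: a block of 1,200 people spread over four 30 m tiles
tiles = {"tile_a": 40, "tile_b": 10, "tile_c": 0, "tile_d": 50}
print(disaggregate_population(1200, tiles))
# {'tile_a': 480.0, 'tile_b': 120.0, 'tile_c': 0.0, 'tile_d': 600.0}
```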

Movement maps, meanwhile, track aggregate data about Facebook users’ movements via their mobile phones (when location services are enabled). At scale, health partners can combine this with other data to predict where other outbreaks may occur next….(More)”.

Open data could have helped us learn from another mining dam disaster


Paulo A. de Souza Jr. at Nature: “The recent Brumadinho dam disaster in Brazil is an example of infrastructure failure with catastrophic consequences. Over 300 people were reported dead or missing, and nearly 400 more were rescued alive. The environmental impact is massive and difficult to quantify. The frequency of these disasters demonstrates that the current systems for monitoring integrity and alerting managers, authorities and the public to ongoing change in tailings are, in many cases, not working as they should. There is also the need for adequate prevention procedures. Monitoring can be perfect, but without timely and appropriate action, it will be useless. Good management therefore requires quality data. Undisputedly, management practices of industrial sites, including audit procedures, must improve, and data and metadata available from preceding accidents should be better used. There is a rich literature available about design, construction, operation, maintenance and decommissioning of tailings facilities. These include guidelines, standards, case studies, technical reports, consultancy and audit practices, and scientific papers. Regulation varies from country to country and in some cases, like Australia and Canada, it is controlled by individual state agencies. There are, however, few datasets available that are shared with the technical and scientific community more globally, particularly for prior incidents. Conspicuously lacking are comprehensive data related to monitoring of large infrastructures such as mining dams.

Today, Scientific Data published a Data Descriptor presenting a dataset obtained from 54 laboratory experiments on the breaching of fluvial dikes because of flow overtopping. (Re)use of such data can help improve our understanding of fundamental processes underpinning industrial infrastructure collapse (e.g., fluvial dike breaching, mining dam failure), and assess the accuracy of numerical models for the prediction of such incidents. This is absolutely essential for better management of floods, mitigation of dam collapses, and similar accidents. The authors propose a framework that could exemplify how data involving similar infrastructure can be stored, shared, published, and reused…(More)”.

When to Use User-Centered Design for Public Policy


Stephen Moilanen at the Stanford Social Innovation Review: “Throughout Barack Obama’s presidency, technology company executives regularly sounded off on what, from their perspective, the administration might do differently. In 2010, Steve Jobs reportedly warned Obama that he likely wouldn’t win reelection, because his administration’s policies disadvantaged businesses like Apple. And in a speech at the 2016 Republican National Convention, Peter Thiel expressed his disapproval of the political establishment by quipping, “Instead of going to Mars, we have invaded the Middle East.”

Against this backdrop, one specific way Silicon Valley has tried to nudge Washington in a new direction is with respect to policy development. Specifically, leading technologists have begun encouraging policy makers to apply user-centered design (otherwise known as design thinking or human-centered design) to the public sector. The thinking goes that if government develops policy with users more squarely in mind, it might accelerate social progress rather than—as has often been the case—stifle it.

At a moment when fewer Americans than ever believe government is meeting their needs, a new approach that elevates the voices of citizens is long overdue. Even so, it would be misguided to view user-centered design as a cure-all for what ails the public sector. The approach holds great promise, but only in a well-defined set of circumstances.

User-Centered Design in the Public Policy Arena

The term “user-centered design” refers simply to a method of building products with an eye toward what users want and need.

To date, the approach has been applied primarily to the domain of for-profit start-ups. In recent months and years, however, supporters of user-centered design have sought to introduce it to other domains. A 2013 article authored by the head of a Danish design consultancy, for example, heralded the fact that “public sector design is on the rise.” And in the recent book Lean Impact, former Google executive and USAID official Ann-Mei Chang made an incisive and compelling case for why the social sector stands to benefit from this approach.

According to this line of thinking, we should be driving toward a world where government designs policy with an eye toward the individuals that stand to benefit from—or that could be hurt by—changes to public policy.

An Imperfect Fit

The merits of user-centered design in this context may seem self-evident. Yet it stands in stark contrast to how public sector leaders typically approach policy development. As leading design thinking theorist Jeanne Liedtka notes in her book Design Thinking for the Greater Good, “Innovation and design are [currently] the domain of experts, policy makers, planners and senior leaders. Everyone else is expected to step away.”

But while user-centered design has much to offer policy development, it does not map perfectly onto this new territory….(More)”.

San Francisco becomes the first US city to ban facial recognition by government agencies


Colin Lecher at The Verge: “In a first for a city in the United States, San Francisco has voted to ban its government agencies from using facial recognition technology.

The city’s Board of Supervisors voted eight to one to approve the proposal, set to take effect in a month, that would bar city agencies, including law enforcement, from using the tool. The ordinance would also require city agencies to get board approval for their use of surveillance technology, and set up audits of surveillance tech already in use. Other cities have approved similar transparency measures.

The plan, called the Stop Secret Surveillance Ordinance, was spearheaded by Supervisor Aaron Peskin. In a statement read ahead of the vote, Peskin said it was “an ordinance about having accountability around surveillance technology.”

“This is not an anti-technology policy,” he said, stressing that many tools used by law enforcement are still important to the city’s security. Still, he added, facial recognition is “uniquely dangerous and oppressive.”

The ban comes amid a broader debate over facial recognition, which can be used to rapidly identify people and has triggered new questions about civil liberties. Experts have raised specific concerns about the tools, as studies have demonstrated instances of troubling bias and error rates.

Microsoft, which offers facial recognition tools, has called for some form of regulation for the technology — but how, exactly, to regulate the tool has been contested. Proposals have ranged from light regulation to full moratoriums. Legislation has largely stalled, however.

San Francisco’s decision will inevitably be used as an example as the debate continues and other cities and states decide whether and how to regulate facial recognition. Civil liberties groups like the ACLU of Northern California have already thrown their support behind the San Francisco plan, while law enforcement in the area has pushed back….(More)”.

The Ruin of the Digital Town Square


Special Issue of The New Atlantis: “Across the political spectrum, a consensus has arisen that Twitter, Facebook, YouTube, and other digital platforms are laying ruin to public discourse. They trade on snarkiness, trolling, outrage, and conspiracy theories, and encourage tribalism, information bubbles, and social discord. How did we get here, and how can we get out? The essays in this symposium seek answers to the crisis of “digital discourse” beyond privacy policies, corporate exposés, and smarter algorithms.

The Inescapable Town Square
L. M. Sacasas on how social media combines the worst parts of past eras of communication

Preserving Real-Life Childhood
Naomi Schaefer Riley on why decency online requires raising kids who know life offline

How Not to Regulate Social Media
Shoshana Weissmann on proposed privacy and bot laws that would do more harm than good

The Four Facebooks
Nolen Gertz on misinformation, manipulation, dependency, and distraction

Do You Know Who Your ‘Friends’ Are?
Ashley May on why treating others well online requires defining our relationships

The Distance Between Us
Micah Meadowcroft on why we act badly when we don’t speak face-to-face

The Emergent Order of Twitter
Andy Smarick on why the platform should be fixed from the bottom up, not the top down

Imagine All the People
James Poulos on how the fantasies of the TV era created the disaster of social media

Making Friends of Trolls
Caitrin Keiper on finding familiar faces behind the black mirror…(More)”

The Dark Side of Sunlight


Essay by James D’Angelo and Brent Ranalli in Foreign Affairs: “…76 percent of Americans, according to a Gallup poll, disapprove of Congress.

This dysfunction started well before the Trump presidency. It has been growing for decades, despite promise after promise and proposal after proposal to reverse it. Many explanations have been offered, from the rise of partisan media to the growth of gerrymandering to the explosion of corporate money. But one of the most important causes is usually overlooked: transparency. Something usually seen as an antidote to corruption and bad government, it turns out, is leading to both.

The problem began in 1970, when a group of liberal Democrats in the House of Representatives spearheaded the passage of new rules known as “sunshine reforms.” Advertised as measures that would make legislators more accountable to their constituents, these changes increased the number of votes that were recorded and allowed members of the public to attend previously off-limits committee meetings.

But the reforms backfired. By diminishing secrecy, they opened up the legislative process to a host of actors—corporations, special interests, foreign governments, members of the executive branch—that pay far greater attention to the thousands of votes taken each session than the public does. The reforms also deprived members of Congress of the privacy they once relied on to forge compromises with political opponents behind closed doors, and they encouraged them to bring useless amendments to the floor for the sole purpose of political theater.

Fifty years on, the results of this experiment in transparency are in. When lawmakers are treated like minors in need of constant supervision, it is special interests that benefit, since they are the ones doing the supervising. And when politicians are given every incentive to play to their base, politics grows more partisan and dysfunctional. In order for Congress to better serve the public, it has to be allowed to do more of its work out of public view.

The idea of open government enjoys nearly universal support. Almost every modern president has paid lip service to it. (Even the famously paranoid Richard Nixon said, “When information which properly belongs to the public is systematically withheld by those in power, the people soon become ignorant of their own affairs, distrustful of those who manage them, and—eventually—incapable of determining their own destinies.”) From former Republican Speaker of the House Paul Ryan to Democratic Speaker of the House Nancy Pelosi, from the liberal activist Ralph Nader to the anti-tax crusader Grover Norquist, all agree that when it comes to transparency, more is better.

It was not always this way. It used to be that secrecy was seen as essential to good government, especially when it came to crafting legislation. …(More)”

We’ll soon know the exact air pollution from every power plant in the world. That’s huge.


David Roberts at Vox: “A nonprofit artificial intelligence firm called WattTime is going to use satellite imagery to precisely track the air pollution (including carbon emissions) coming out of every single power plant in the world, in real time. And it’s going to make the data public.

This is a very big deal. Poor monitoring and gaming of emissions data have made it difficult to enforce pollution restrictions on power plants. This system promises to effectively eliminate poor monitoring and gaming of emissions data….

The plan is to use data from satellites that make their data publicly available (like the European Union’s Copernicus network and the US Landsat network), as well as data from a few private companies that charge for their data (like Digital Globe). The data will come from a variety of sensors operating at different wavelengths, including thermal infrared that can detect heat.

The images will be processed by various algorithms to detect signs of emissions. It has already been demonstrated that a great deal of pollution can be tracked simply through identifying visible smoke. WattTime says it can also use infrared imaging to identify heat from smokestack plumes or cooling-water discharge. Sensors that can directly track NO2 emissions are in development, according to WattTime executive director Gavin McCormick.

Between visible smoke, heat, and NO2, WattTime will be able to derive exact, real-time emissions information, including information on carbon emissions, for every power plant in the world. (McCormick says the data may also be used to derive information about water pollutants like nitrates or mercury.)
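
To make the inference step concrete, here is a hedged toy sketch of how separate satellite signals (visible smoke, heat, NO2) might be combined into a single per-plant emissions estimate. The weights, units, and the `estimate_co2_tonnes_per_hour` function are invented for illustration; this is not WattTime’s actual model.

```python
# Illustrative sketch (not WattTime's method): turn per-plant satellite
# signals -- visible smoke, thermal plume intensity, NO2 column density --
# into a rough emissions estimate with a simple calibrated linear model.
# The coefficients below are made up; a real system would fit them
# against plants with known, reported emissions.

from dataclasses import dataclass

@dataclass
class PlantObservation:
    smoke_index: float      # 0-1 score from visible-smoke detection
    thermal_anomaly: float  # relative heat signal at the stack / cooling water
    no2_column: float       # NO2 column density, arbitrary units

def estimate_co2_tonnes_per_hour(obs: PlantObservation) -> float:
    # Hypothetical calibration weights (assumptions, for illustration only)
    w_smoke, w_heat, w_no2, bias = 120.0, 300.0, 80.0, 5.0
    estimate = (bias
                + w_smoke * obs.smoke_index
                + w_heat * obs.thermal_anomaly
                + w_no2 * obs.no2_column)
    return max(0.0, estimate)

print(estimate_co2_tonnes_per_hour(
    PlantObservation(smoke_index=0.6, thermal_anomaly=0.8, no2_column=1.2)
))
```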

Google.org, Google’s philanthropic wing, is getting the project off the ground (pardon the pun) with a $1.7 million grant; it was selected through the Google AI Impact Challenge….(More)”.

Belgium’s democratic experiment


David van Reybrouck in Politico: “Those looking for a solution to the wave of anger and distrust sweeping Western democracies should have a look at an experiment in European democracy taking place in a small region in eastern Belgium.

Starting in September, the parliament representing the German-speaking region of Belgium will hand some of its powers to a citizens’ assembly drafted by lot. It’ll be the first time a political institution creates a permanent structure to involve citizens in political decision making.

It’s a move Belgian media has rightly hailed as “historic.” I was in parliament the night MPs from all six parties moved past ideological differences to endorse the bill. It was a courageous move, a sign to other politicians — who tend to see their voters as a threat rather than a resource — that citizens should be trusted, not feared, or “spun.”

Nowhere else in the world will everyday citizens be so consistently involved in shaping the future of their community. In times of massive, widespread distrust of party politics, German-speaking Belgians will be empowered to put the issues they care about on the agenda, to discuss potential solutions, and to monitor the follow-up of their recommendations as they pass through parliament and government. Politicians, in turn, will be able to tap independent citizens’ panels to deliberate over thorny political issues.

This experiment is happening on a small scale: Belgium’s German-speaking community, the country’s third linguistic region, is the smallest federal entity in Europe. But its powers are comparable with those of Scotland or the German province of North Rhine-Westphalia, and the lessons of its experiment with a “people’s senate” will have implications for democrats across Europe….(More)”.

A New Way of Voting That Makes Zealotry Expensive


Peter Coy at Bloomberg Business Week: “An intriguing new tool of democracy just had its first test in the real world of politics, and it passed with flying colors.

The tool is called quadratic voting, and it’s just as nerdy as it sounds. The concept is that each voter is given a certain number of tokens—say, 100—to spend as he or she sees fit on votes for a variety of candidates or issues. Casting one vote for one candidate or issue costs one token, but two votes cost four tokens, three votes cost nine tokens, and so on up to 10 votes costing all 100 of your tokens. In other words, if you really care about one candidate or issue, you can cast up to 10 votes for him, her, or it, but it’s going to cost you all your tokens.
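
The quadratic cost rule is simple enough to express in a few lines. The sketch below is a minimal illustration, not the implementation used in practice; the function names and the default 100-token budget are assumptions drawn from the example above. It shows why intensity is expensive: ten votes on one item exhausts the budget, while single votes spread across many items cost one token each.

```python
# Quadratic voting in miniature: casting n votes on one item costs n**2
# tokens, so spreading votes is cheap and piling them onto one item is
# expensive. (Illustrative sketch only.)

def token_cost(votes_per_item):
    """votes_per_item: dict mapping item -> number of votes cast on it."""
    return sum(v ** 2 for v in votes_per_item.values())

def is_valid_ballot(votes_per_item, budget=100):
    """Check that one voter's ballot fits within the token budget."""
    return token_cost(votes_per_item) <= budget

# Ten votes on a single issue uses the whole 100-token budget...
print(token_cost({"issue_a": 10}))                        # 100
# ...while the same budget buys one vote each on 100 different issues.
print(token_cost({f"issue_{i}": 1 for i in range(100)}))  # 100
print(is_valid_ballot({"issue_a": 7, "issue_b": 8}))      # False: 49 + 64 = 113 > 100
```

In the Colorado experiment described below, each lawmaker effectively filled in one such ballot across 107 bills.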

Quadratic voting was invented not by political scientists but by economists and others, including Glen Weyl, an economist and principal researcher at Microsoft Corp. The purpose of quadratic voting is to determine “whether the intense preferences of the minority outweigh the weak preferences of the majority,” Weyl and Eric Posner, a University of Chicago Law School professor, wrote last year in an important book called Radical Markets: Uprooting Capitalism and Democracy for a Just Society. ….

This spring, quadratic voting was used in a successful experiment by the Democratic caucus of the Colorado House of Representatives. The lawmakers used it to decide on their legislative priorities for the coming two years among 107 possible bills. (Wired magazine wrote about it here.)…

In this year’s experiment, the 41 lawmakers in the Democratic caucus were given 100 tokens each to allocate among the 107 bills. No one chose to spend all 100 tokens on a single bill. Many of them spread their votes around widely but thinly because it was inexpensive to do so—one vote is just one token. The top vote-getter by a wide margin turned out to be a bill guaranteeing equal pay to women for equal work. “There was clear separation” of the favorites from the also-rans, Hansen says.

The computer interface and other logistics were provided by Democracy Earth, which describes itself as a borderless community and “a global commons of self-sovereign citizens.” The lawmakers had more immediate concerns—hammering out a party agenda. “Some members were more tech-savvy,” Hansen says. “Some started skeptical but came around. I was pleasantly surprised. There was this feeling of ownership—your voice being heard.”

I recently wrote about the democratic benefits of ranked-choice voting, in which voters rank all the candidates in a race and votes are reassigned from the lowest vote-getters to the higher finishers until someone winds up with a majority. But although ranked-choice voting is gaining in popularity, it traces its roots back to the 19th century. Quadratic voting is much more of a break from the past. “This is a new idea, which is rare in economic theory, so it should be saluted as such, especially since it is accompanied by outstanding execution,” George Mason University economist Tyler Cowen wrote in 2015. (He did express some cautions about it as well.)…(More)”.
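
For comparison with the ranked-choice method mentioned above, here is a minimal instant-runoff tally in the same spirit. It is an illustrative sketch under simplifying assumptions, not any jurisdiction’s actual counting rules; real elections add tie-breaking and exhausted-ballot handling, and the `instant_runoff` helper is invented for this example.

```python
# Minimal instant-runoff (ranked-choice) tally: repeatedly eliminate the
# candidate with the fewest first-place votes and transfer those ballots
# to each voter's next surviving choice, until someone holds a majority.

from collections import Counter

def instant_runoff(ballots):
    """ballots: list of candidate lists, each ordered by preference."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot's highest-ranked surviving candidate
        tally = Counter(
            next(c for c in ballot if c in candidates)
            for ballot in ballots
            if any(c in candidates for c in ballot)
        )
        total = sum(tally.values())
        leader, leader_votes = tally.most_common(1)[0]
        if leader_votes * 2 > total or len(candidates) == 1:
            return leader
        # Eliminate the candidate with the fewest first-choice votes
        loser = min(tally, key=tally.get)
        candidates.discard(loser)

ballots = [["A", "B"], ["A", "C"], ["B", "C"], ["C", "B"], ["C", "B"]]
print(instant_runoff(ballots))  # "C" wins after "B" is eliminated
```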

As Surveys Falter, Big Data Polling Narrows Our Societal Understanding


Kalev Leetaru at Forbes: “One of the most talked-about stories in the world of polling and survey research in recent years has been the gradual death of survey response rates and the reliability of those insights….

The online world’s perceived anonymity has offered some degree of reprieve, in which online polls and surveys have often bested traditional approaches in assessing views towards society’s most controversial issues. Yet here as well, increasing public awareness of phishing and online safety is making these approaches ever more problematic.

The answer has been the rise of “big data” analysis of society’s digital exhaust to fill in the gaps….

Is it truly the same answer though?

Constructing and conducting a well-designed survey means being able to ask the public exactly the questions of interest. Most importantly, it entails being able to ensure representative demographics of respondents.

An online-only poll is unlikely to accurately capture the perspectives of the three quarters of the earth’s population that the digital revolution has left behind. Even within the US, social media platforms are extraordinarily skewed.

The far greater problem is that society’s data exhaust is rarely a perfect match for the questions of greatest interest to policymakers and public.

Cellphone mobility records can offer an exquisitely detailed look at how the people of a city go about their daily lives, but beneath all that blinding light are the invisible members of society not deemed valuable to advertisers and thus not counted. Even for the urban society members whose phones are their ever-present companions, mobility data only goes so far. It can tell us that occupants of a particular part of the city during the workday spend their evenings in a particular part of the city, allowing us to understand their work/life balance, but it offers few insights into their political leanings.

One of the greatest challenges of today’s “big data” surveying is that it requires us to narrow our gaze to only those questions which can be easily answered from the data at hand.

Much as AI’s crisis of bias comes from the field’s steadfast refusal to pay for quality data, settling for highly biased free data, so too has “big data” surveying limited itself largely to datasets it can freely and easily acquire.

The result is that with traditional survey research, we are free to ask the precise questions we are most interested in. With data exhaust research, we must imperfectly shoehorn our questions into the few available metrics. With sufficient creativity it is typically possible to find some way of proxying the given question, but the resulting proxies may be highly unstable, with little understanding of when and where they may fail.

Much like how the early rise of the cluster computing era caused “big data” researchers to limit the questions they asked of their data to just those they could fit into a set of tiny machines, so too has the era of data exhaust surveying forced us to greatly restrict our understanding of society.

Most dangerously, however, big data surveying implicitly means we are measuring only the portion of society our vast commercial surveillance state cares about.

In short, we are only able to measure those deemed of greatest interest to advertisers and thus the most monetizable.

Putting this all together, the decline of traditional survey research has led to the rise of “big data” analysis of society’s data exhaust. Instead of giving us an unprecedented new view into the heartbeat of daily life, this reliance on the unintended output of our digital lives has forced researchers to greatly narrow the questions they can explore and severely skews them to the most “monetizable” portions of society.

In the end, the shift of societal understanding from precision surveys to the big data revolution has led not to an incredible new understanding of what makes us tick, but rather to a far smaller, less precise and less accurate view than ever before, just as our need to understand ourselves has never been greater….(More)”.