Is the First Amendment Obsolete?


Essay by Tim Wu: “The First Amendment was a dead letter for much of American history. Unfortunately, there is reason to fear it is entering a new period of political irrelevance. We live in a golden age of efforts by governments and other actors to control speech, discredit and harass the press, and manipulate public debate. Yet as these efforts mount, and the expressive environment deteriorates, the First Amendment has been confined to a narrow and frequently irrelevant role. Hence the question — when it comes to political speech in the twenty-first century, is the First Amendment obsolete?

The most important change in the expressive environment can be boiled down to one idea: it is no longer speech itself that is scarce, but the attention of listeners. Emerging threats to public discourse take advantage of this change. As Zeynep Tufekci puts it, “censorship during the Internet era does not operate under the same logic [as] it did under the heyday of print or even broadcast television.” Instead of targeting speakers directly, it targets listeners or it undermines speakers indirectly. More precisely, emerging techniques of speech control depend on (1) a range of new punishments, like unleashing “troll armies” to abuse the press and other critics, and (2) “flooding” tactics (sometimes called “reverse censorship”) that distort or drown out disfavored speech through the creation and dissemination of fake news, the payment of fake commentators, and the deployment of propaganda robots. As journalist Peter Pomerantsev writes, these techniques employ “information . . . in weaponized terms, as a tool to confuse, blackmail, demoralize, subvert and paralyze.”

The First Amendment first came to life in the early twentieth century, when the main threat to the nation’s political speech environment was state suppression of dissidents. The jurisprudence of the First Amendment was shaped by that era. It presupposes an information-poor world, and it focuses exclusively on the protection of speakers from government, as if they were rare and delicate butterflies threatened by one terrible monster.

But today, speakers are more like moths — their supply is apparently endless. The massive decline in barriers to publishing makes information abundant, especially when speakers congregate on brightly lit matters of public controversy. The low costs of speaking have, paradoxically, made it easier to weaponize speech as a tool of speech control. The unfortunate truth is that cheap speech may be used to attack, harass, and silence as much as it is used to illuminate or debate. And the use of speech as a tool to suppress speech is, by its nature, something very challenging for the First Amendment to deal with. In the face of such challenges, First Amendment doctrine seems at best unprepared. It is a body of law that waits for a pamphleteer to be arrested before it will recognize a problem. Even worse, the doctrine may actually block efforts to deal with some of the problems described here….(More)”

Co-creating an Open Government Data Driven Public Service: The Case of Chicago’s Food Inspection Forecasting Model


Conference paper by Keegan McBride et al: “Large amounts of Open Government Data (OGD) have become available and co-created public services have started to emerge, but there is only limited empirical material available on co-created OGD-driven public services. The authors have built a conceptual model for co-created OGD-driven public services around an innovation process based on the ideas of co-production and agile development. An exploratory case study on Chicago’s use of OGD in a predictive analytics model that forecasts critical safety violations at food-serving establishments was carried out to expose the intricate process of how co-creation occurs and what factors allow it to take place. Six factors were identified as playing a key role in allowing the co-creation of an OGD-driven public service: external funding, motivated stakeholders, innovative leaders, proper communication channels, an existing OGD portal, and agile development practices. The conceptual model was generally validated, but further propositions on co-created OGD-driven public services emerged: that the availability of OGD and data analytics tools can enable the co-creation of OGD-driven public services; that governments releasing OGD act as a platform from which new and innovative OGD-driven public services may be co-created; and that the idea of Government as a Platform (GaaP) allows the topics of co-creation and OGD to be merged together….(More)”.
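The case turns on a fairly standard supervised-learning setup: train a classifier on historical inspection outcomes and open city data, then rank upcoming inspections by predicted risk. The sketch below only illustrates that pattern, not Chicago's actual model; the file name, feature columns and choice of logistic regression are assumptions.

```python
# Minimal sketch of an OGD-driven inspection-forecasting model.
# Hypothetical column names and file path; Chicago's actual model
# uses its own feature set, documented in the paper and the city's repository.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Open inspection data, e.g. exported from an OGD portal as CSV.
df = pd.read_csv("food_inspections.csv")

# Illustrative features: past violations, days since last inspection,
# nearby sanitation complaints; outcome flag marks a critical violation.
features = ["past_critical_violations", "days_since_last_inspection",
            "sanitation_complaints_nearby"]
X, y = df[features], df["critical_violation_found"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Rank establishments by predicted risk so inspectors visit the
# highest-risk sites first.
scores = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, scores))
```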

Blockchain-like ID may mean end of paper birth certificates


Chris Baraniuk at New Scientist: “There’s a new way to prove you are who you say you are – inspired by the tech underpinning bitcoin. Usually, when you need to verify your identity, the process is archaic, insecure and time-consuming. You get a copy of your birth certificate in the post, put it in an envelope and hope it gets to whoever is asking for it. In the digital era, this should take seconds.

But putting something as sensitive as a birth certificate online risks identity theft in the era of hacks and leaks. Now, the US state of Illinois is experimenting with a secure way of putting control of that data into its citizens’ hands, with the help of distributed ledgers, similar to the blockchain used by bitcoin.

Just last month, Illinois announced a pilot project to create “secure ‘self-sovereign’ identity” for Illinois citizens wishing to access their birth certificate. The idea is to use a blockchain-like distributed ledger that allows online access only to the people owning the ID, and any third parties granted their permission.

Illinois is working with software firm Evernym of Herriman, Utah, to create a record of who should be able to access data from the state’s birth register. Once this is done, no central authority should be required, just your say-so. They’re not the only ones. According to a report by Garrick Hileman and Michael Rauchs at the UK’s Cambridge Centre for Alternative Finance, governments including the UK and Brazil are increasingly experimenting with the technology.

Activists have long called for people to have greater control of their data. Hacks and leaks are making it too risky for authorities to be the central repository of citizens’ most vital information.

With distributed ledgers, all participants within a network can have their own identical copy of data like access permissions – so no one can view cryptographically sealed birth certificate data unless they’re meant to. Blockchains are a type of distributed ledger that gets the whole network to observe and verify transactions – such as when someone sends a bitcoin to their friend….(More)”.
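The article stays at a conceptual level, but the core idea of a shared, tamper-evident record of access permissions can be shown with a toy hash-chained ledger. This is a simplified sketch of the general technique, not the Illinois or Evernym system; real self-sovereign identity platforms add digital signatures and a consensus protocol, and every name and field below is hypothetical.

```python
# Toy hash-chained ledger of access-permission grants.
# Illustrative only; production systems layer signatures and consensus on this idea.
import hashlib
import json
import time

def entry_hash(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class PermissionLedger:
    def __init__(self):
        self.chain = []

    def grant(self, owner: str, grantee: str, record: str):
        entry = {
            "owner": owner,        # citizen who controls the record
            "grantee": grantee,    # third party given read access
            "record": record,      # e.g. "birth_certificate"
            "timestamp": time.time(),
            "prev_hash": self.chain[-1]["hash"] if self.chain else "0" * 64,
        }
        entry["hash"] = entry_hash(entry)
        self.chain.append(entry)

    def is_valid(self) -> bool:
        # Every participant can recompute the hashes; tampering with an
        # earlier grant breaks all later links in every honest copy.
        for i, e in enumerate(self.chain):
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["hash"] != entry_hash(body):
                return False
            if i > 0 and e["prev_hash"] != self.chain[i - 1]["hash"]:
                return False
        return True

ledger = PermissionLedger()
ledger.grant("alice", "state_university", "birth_certificate")
print(ledger.is_valid())  # True unless an entry is altered
```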

Using Public Data From Different Sources


Chapter by Yair Cohen in Maximizing Social Science Research Through Publicly Accessible Data Sets, a book edited by S. Marshall Perry: “United States federal agencies as well as state agencies are liberating their data through web portals. Web portals like data.gov, census.gov, healthdata.gov, ed.gov and many others at the state level provide great opportunities for researchers of all fields. This chapter shows the challenges and the opportunities that lie in merging data from different public sources. The researcher collected and merged data from the following datasets: the NYSED school report card, the NYSED Fiscal Profile Reporting System, the Civil Rights Data Collection, and the Census 2010 School District Demographics System. The challenges include data validation, data cleaning, flattening data for easy reporting, and merging datasets based on text fields….(More)”.
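The last of those challenges, merging datasets on text fields, is usually the messiest in practice, because the same school district rarely appears under exactly the same name in two agencies' files. A minimal sketch of that step might look like the following; the file names, column names and normalization rules are assumptions, not the chapter's actual code.

```python
# Sketch of merging two public datasets on a text key (district name).
# Column names and files are hypothetical; real exports need dataset-specific cleaning.
import pandas as pd

report_card = pd.read_csv("nysed_report_card.csv")   # e.g. NYSED export
demographics = pd.read_csv("census_sdds.csv")        # e.g. Census SDDS export

def normalize_name(s: pd.Series) -> pd.Series:
    # Text keys rarely match exactly: fix case, whitespace, punctuation,
    # and common abbreviation variants before joining.
    return (s.str.upper()
             .str.replace(r"[.,']", "", regex=True)
             .str.replace(r"\s+", " ", regex=True)
             .str.replace(" CSD", " CENTRAL SCHOOL DISTRICT", regex=False)
             .str.strip())

report_card["district_key"] = normalize_name(report_card["district_name"])
demographics["district_key"] = normalize_name(demographics["district_name"])

merged = report_card.merge(demographics, on="district_key",
                           how="inner", suffixes=("_rc", "_census"))

# Validate the merge: unmatched rows usually signal remaining naming mismatches.
unmatched = set(report_card["district_key"]) - set(merged["district_key"])
print(f"{len(merged)} districts merged, {len(unmatched)} unmatched")
```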

 

Disaster recovery’s essential tool: Data


Amy Liu and Allison Plyer at Brookings: “To recover from a disaster on the scale of Harvey and Irma requires a massive coordinated effort. Federal, state and local governments must lead. Philanthropy, nonprofits and the private sector will be key partners. Residents will voice their views, through community planning meetings and other venues, on how best to spend disaster-recovery dollars. With so many stakeholders and rebuilding needs, the process of restoring neighborhoods and economic activity will become emotionally and politically charged. As Brock Long, administrator of the Federal Emergency Management Agency, has already warned in Texas: “This is going to be a frustrating and painful process.”

For public officials to effectively steer a recovery process and for citizens to trust in the effort, reliable, transparent information will be essential. Leaders and the public need a shared understanding of the scale and extent of the damage and which households, businesses and neighborhoods have been affected. This is not a one-time effort. Data must be collected and issued regularly over months and years to match the duration of the rebuilding effort.

Without this information, it will be nearly impossible to estimate the nature of aid required, determine how best to deploy resources, prioritize spending and monitor progress. Rebuilding processes are chaotic, with emotions high over multiple, competing priorities. Credible public information organized in one place can help to neutralize misconceptions, put every need in context and depoliticize decision-making. Most importantly, data on recovery needs also can enable citizen involvement and allow residents to hold public leaders accountable for progress.

We know this first-hand from our experience in New Orleans, where the Brookings Institution and the New Orleans Data Center teamed up to produce what became the New Orleans Index following Hurricane Katrina in 2005. We set out to help the public and decision-makers understand the level of outstanding damage in New Orleans and the region and to monitor the extent to which the city was bouncing back….(More)”.

The Promise of Evidence-Based Policymaking


Final Report by the Commission on Evidence-Based Policymaking: “…There are many barriers to the effective use of government data to generate evidence. Better access to these data holds the potential for substantial gains for society. The Commission’s recommendations recognize that the country’s laws and practices are not currently optimized to support the use of data for evidence building, nor in a manner that best protects privacy. To correct these problems, the Commission makes the following recommendations:

  • Establish a National Secure Data Service to facilitate access to data for evidence building while ensuring privacy and transparency in how those data are used. As a state-of-the-art resource for improving government’s capacity to use the data it already collects, the National Secure Data Service will be able to temporarily link existing data and provide secure access to those data for exclusively statistical purposes in connection with approved projects. The National Secure Data Service will do this without creating a data clearinghouse or warehouse.
  • Require stringent privacy qualifications for acquiring and combining data for statistical purposes at the National Secure Data Service to ensure that data continue to be effectively protected while improving the government’s ability to understand the impacts of programs on a wider range of outcomes. At the same time, consider additional statutory changes to enable ongoing statistical production that, under the same stringent privacy qualifications, may make use of combined data.
  • Review and, where needed, revise laws authorizing Federal data collection and use to ensure that limited access to administrative and survey data is possible to return benefits to the public through improved programs and policies, but only under strict privacy controls.
  • Ensure state-collected quarterly earnings data are available for statistical purposes, including to support the many evidence-building activities for which earnings are an important outcome.
  • Make additional state-collected data about Federal programs available for evidence building. Where appropriate, states that administer programs with substantial Federal investment should in return provide the data necessary for evidence building.
  • Develop a uniform process for external researchers to apply and qualify for secure access to confidential government data for evidence-building purposes while protecting privacy by carefully restricting data access to qualified and approved researchers…(More)”
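The report does not prescribe a technical design, but the first recommendation's notion of temporarily linking existing data "for exclusively statistical purposes" is often approached with privacy-preserving record linkage. The sketch below shows one common building block, keyed hashing of identifiers before the join; it is an illustrative assumption, not anything proposed by the Commission.

```python
# Illustrative keyed-hash linkage: two agencies hash identifiers with a
# shared per-project key, so records can be joined without exchanging raw SSNs.
# One possible building block only, not a design endorsed by the report.
import hashlib
import hmac

SHARED_KEY = b"per-project secret held by the linkage service"

def pseudonymize(identifier: str) -> str:
    return hmac.new(SHARED_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# Each agency pseudonymizes its own records before sending them.
earnings_records = {pseudonymize("123-45-6789"): {"quarterly_earnings": 9200}}
program_records = {pseudonymize("123-45-6789"): {"enrolled_in_training": True}}

# The data service joins on pseudonyms; analysts see only linked,
# de-identified records for statistical analysis.
linked = {k: {**earnings_records[k], **program_records[k]}
          for k in earnings_records.keys() & program_records.keys()}
print(linked)
```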

Who serves the poor? Surveying civil servants in the developing world


World Bank working paper by Daniel Oliver Rogger: “Who are the civil servants that serve poor people in the developing world? This paper uses direct surveys of civil servants — the professional body of administrators who manage government policy — and their organizations from Ethiopia, Ghana, Indonesia, Nigeria, Pakistan and the Philippines, to highlight key aspects of their characteristics and experience of civil service life. Civil servants in the developing world face myriad challenges to serving the world’s poor, from limited facilities to significant political interference in their work. There are a number of commonalities across service environments, and the paper summarizes these in a series of ‘stylized facts’ of the civil service in the developing world. At the same time, the particular challenges faced by a public official vary substantially across and within countries and regions. For example, measured management practices differ widely across local governments of a single state in Nigeria. Surveys of civil servants allow us to document these differences, build better models of the public sector, and make more informed policy choices….(More)”.

Automation Beyond the Physical: AI in the Public Sector


Ben Miller at Government Technology: “…The technology is, by nature, broadly applicable. If a thing involves data — “data” itself being a nebulous word — then it probably has room for AI. AI can help manage the data, analyze it and find patterns that humans might not have thought of. When it comes to big data, or data sets so big that they become difficult for humans to manually interact with, AI leverages the speedy nature of computing to find relationships that might otherwise be proverbial haystack needles.

One early area of government application is in customer service chatbots. As state and local governments started putting information on websites in the past couple of decades, they found that they could use those portals as a means of answering questions that constituents used to have to call an office to ask.

Ideally that results in a cyclical victory: Government offices didn’t have as many calls to answer, so they could devote more time and resources to other functions. And when somebody did call in, their call might be answered faster.

With chatbots, governments are betting they can answer even more of those questions. When he was the chief technology and innovation officer of North Carolina, Eric Ellis oversaw the setup of a system that did just that for IT help desk calls.

Turned out, more than 80 percent of the help desk’s calls were people who wanted to change their passwords. For something like that, where the process is largely the same each time, a bot can speed up the process with a little help from AI. Then, just like with the government Web portal, workers are freed up to respond to the more complicated calls faster….
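A help-desk bot for routine requests of this kind can be reduced to intent matching: map the caller's words to a known request type and return the scripted resolution. The toy sketch below uses keyword rules where a production chatbot would use a trained language-understanding model; all intents and replies are invented for illustration.

```python
# Toy intent-matching help-desk bot for routine requests such as
# password resets. Keyword rules stand in for the trained models
# a production chatbot would use.
INTENTS = {
    "password_reset": {
        "keywords": {"password", "reset", "locked", "login"},
        "reply": "I can help with that. I've sent a password-reset link "
                 "to the email address on file.",
    },
    "vpn_access": {
        "keywords": {"vpn", "remote", "connect"},
        "reply": "To request VPN access, please submit the access form "
                 "with your supervisor's approval.",
    },
}

def answer(message: str) -> str:
    words = set(message.lower().split())
    best_intent, best_score = None, 0
    for name, intent in INTENTS.items():
        score = len(words & intent["keywords"])
        if score > best_score:
            best_intent, best_score = name, score
    if best_intent is None:
        return "Let me route you to a human agent."  # escalate complex calls
    return INTENTS[best_intent]["reply"]

print(answer("I'm locked out and need to reset my password"))
```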

Others are using AI to recognize and report objects in photographs and videos — guns, waterfowl, cracked concrete, pedestrians, semi-trucks, everything. Others are using AI to help translate between languages dynamically. Some want to use it to analyze the tone of emails. Some are using it to try to keep up with cybersecurity threats even as they morph and evolve. After all, if AI can learn to beat professional poker players, then why can’t it learn how digital black hats operate?

Castro sees another use for the technology, a more introspective one. The problem is this: The government workforce is a lot older than the private sector, and that can make it hard to create culture change. According to U.S. Census Bureau data, about 27 percent of public-sector workers are millennials, compared with 38 percent in the private sector.

“The traditional view [of government work] is you fill out a lot of forms, there are a lot of boring meetings. There’s a lot of bureaucracy in government,” Castro said. “AI has the opportunity to change a lot of that, things like filling out forms … going to routine meetings and stuff.”

As AI becomes more and more ubiquitous, people who work both inside and with government are coming up with an ever-expanding list of ways to use it. Here’s an inexhaustive list of specific use cases — some of which are already up and running and some of which are still just ideas….(More)”.

Making Sense of Corruption


Book by Bo Rothstein and Aiysha Varraich: “Corruption is a serious threat to prosperity, democracy and human well-being, with mounting empirical evidence highlighting its detrimental effects on society. Yet defining this threat has resulted in profound disagreement, producing a multidimensional concept. Tackling this important and provocative topic, the authors provide an accessible and systematic analysis of how our understanding of corruption has evolved. They identify gaps in the research and make connections between related concepts such as clientelism, patronage, patrimonialism, particularism and state capture. A fundamental issue discussed is how the opposite of corruption should be defined. By arguing for the possibility of a universal understanding of corruption, and specifically of what corruption is not, the authors present an innovative solution to this problem. This book provides an accessible overview of corruption, allowing scholars and students alike to see the far-reaching place it has within academic research….(More)”.

 

The Case for Sharing All of America’s Data on Mosquitoes


Ed Yong in the Atlantic: “The U.S. is sitting on one of the largest data sets on any animal group, but most of it is inaccessible and restricted to local agencies….For decades, agencies around the United States have been collecting data on mosquitoes. Biologists set traps, dissect captured insects, and identify which species they belong to. They’ve done this for millions of mosquitoes, creating an unprecedented trove of information—easily one of the biggest long-term attempts to monitor any group of animals, if not the very biggest.

The problem, according to Micaela Elvira Martinez from Princeton University and Samuel Rund from the University of Notre Dame, is that this treasure trove of data isn’t all in the same place, and only a small fraction of it is public. The rest is inaccessible, hoarded by local mosquito-control agencies around the country.

Currently, these agencies can use their data to check if their attempts to curtail mosquito populations are working. Are they doing enough to remove stagnant water, for example? Do they need to spray pesticides? But if they shared their findings, Martinez and Rund say that scientists could do much more. They could better understand the ecology of these insects, predict the spread of mosquito-borne diseases like dengue fever or Zika, coordinate control efforts across states and counties, and quickly spot the arrival of new invasive species.

That’s why Martinez and Rund are now calling for the creation of a national database of mosquito records that anyone can access. “There’s a huge amount of taxpayer investment and human effort that goes into setting traps, checking them weekly, dissecting all those mosquitoes under a microscope, and tabulating the data,” says Martinez. “It would be a big bang for our buck to collate all that data and make it available.”

Martinez is a disease modeler—someone who uses real-world data to build simulations that reveal how infections rise, spread, and fall. She typically works with childhood diseases like measles and polio, where researchers are almost spoiled for data. Physicians are legally bound to report any cases, and the Centers for Disease Control and Prevention (CDC) compiles and publishes this information as a weekly report.

The same applies to cases of mosquito-borne diseases like dengue and Zika, but not to populations of the insects themselves. So, during last year’s Zika epidemic, when Martinez wanted to study the Aedes aegypti mosquito that spreads the disease, she had a tough time. “I was really surprised that I couldn’t find data on Aedes aegypti numbers,” she says. Her colleagues explained that scientists use climate variables like temperature and humidity to predict where mosquitoes are going to be abundant. That seemed ludicrous to her, especially since organizations collect information on the actual insects. It’s just that no one ever gathers those figures together….

Together with Rund and a team of undergraduate students, she found that there are more than 1,000 separate agencies in the United States that collect mosquito data—at least one in every county or jurisdiction. Only 152 agencies make their data publicly available in some way. The team collated everything they could find since 2009, and ended up with information about more than 15 million mosquitoes. Imagine what they’d have if all the datasets were open, especially since some go back decades.
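Collating files from dozens of agencies mostly means reconciling different schemas, species spellings and date formats before the counts can be combined. A minimal sketch of that harmonization step might look like this; the column mappings and file names are assumptions, not the authors' actual pipeline.

```python
# Sketch of harmonizing trap-count files from agencies with different schemas
# into one standardized table. File names and column mappings are hypothetical.
import pandas as pd

# Each agency's export uses its own column names and date formats.
SCHEMAS = {
    "county_a.csv": {"trap_id": "TrapID", "date": "CollectionDate",
                     "species": "Species", "count": "NumFemales"},
    "county_b.csv": {"trap_id": "site", "date": "obs_date",
                     "species": "spp", "count": "total"},
}

SPECIES_SYNONYMS = {"ae. aegypti": "Aedes aegypti",
                    "aedes aegypti": "Aedes aegypti"}

def load_agency_file(path: str, mapping: dict) -> pd.DataFrame:
    df = pd.read_csv(path)
    # Rename agency-specific columns to the shared schema, then standardize values.
    df = df.rename(columns={v: k for k, v in mapping.items()})[list(mapping)]
    df["date"] = pd.to_datetime(df["date"], errors="coerce")
    df["species"] = (df["species"].str.strip().str.lower()
                     .map(SPECIES_SYNONYMS)
                     .fillna(df["species"]))
    df["source"] = path
    return df

national = pd.concat(
    [load_agency_file(path, mapping) for path, mapping in SCHEMAS.items()],
    ignore_index=True)
print(national.groupby("species")["count"].sum())
```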

A few mosquito-related databases do exist, but none are quite right. ArboNET, which is managed by the CDC and state health departments, mainly stores data about mosquito-borne diseases, and whatever information it has on the insects themselves isn’t precise enough in either time or space to be useful for modeling. MosquitoNET, which was developed by the CDC, does track mosquitoes, but “it’s a completely closed system, and hardly anyone has access to it,” says Rund. The Smithsonian Institution’s VectorMap is better in that it’s accessible, “but it lacks any real-time data from the continental United States,” says Rund. “When I checked a few months ago, it had just one record of Aedes aegypti since 2013.”…

Some scientists who work on mosquito control apparently disagree, and negative reviews have stopped Martinez and Rund from publishing their ideas in prominent academic journals. (For now, they’ve uploaded a paper describing their vision to the preprint repository bioRxiv.) “Some control boards say: What if people want to sue us because we’re showing that they have mosquito vectors near their homes, or if their house prices go down?” says Martinez. “And one mosquito-control scientist told me that no one should be able to work with mosquito data unless they’ve gone out and trapped mosquitoes themselves.”…

“Data should be made available without having to justify exactly what’s going to be done with it,” Martinez says. “We should put it out there for scientists to start unlocking it. I think there are a ton of biologists who will come up with cool things to do.”…(More)”.