The Ruin of the Digital Town Square


Special Issue of The New Atlantis: “Across the political spectrum, a consensus has arisen that Twitter, Facebook, YouTube, and other digital platforms are laying ruin to public discourse. They trade on snarkiness, trolling, outrage, and conspiracy theories, and encourage tribalism, information bubbles, and social discord. How did we get here, and how can we get out? The essays in this symposium seek answers to the crisis of “digital discourse” beyond privacy policies, corporate exposés, and smarter algorithms.

The Inescapable Town Square
L. M. Sacasas on how social media combines the worst parts of past eras of communication

Preserving Real-Life Childhood
Naomi Schaefer Riley on why decency online requires raising kids who know life offline

How Not to Regulate Social Media
Shoshana Weissmann on proposed privacy and bot laws that would do more harm than good

The Four Facebooks
Nolen Gertz on misinformation, manipulation, dependency, and distraction

Do You Know Who Your ‘Friends’ Are?
Ashley May on why treating others well online requires defining our relationships

The Distance Between Us
Micah Meadowcroft on why we act badly when we don’t speak face-to-face

The Emergent Order of Twitter
Andy Smarick on why the platform should be fixed from the bottom up, not the top down

Imagine All the People
James Poulos on how the fantasies of the TV era created the disaster of social media

Making Friends of Trolls
Caitrin Keiper on finding familiar faces behind the black mirror…(More)”

A Smart City Stakeholder Classification Model


Paper by Anthea Van der Hoogen, Brenda Scholtz and Andre Calitz: “Cities around the world face forecasts of substantial population growth over the next decade. It has therefore become necessary for cities to pursue their initiatives in smarter ways to overcome the challenge of dwindling resources. Cities in South Africa are trying to involve stakeholders to help address these challenges. Stakeholders are an important component of any smart city initiative. The purpose of this paper is to report on a review of existing literature related to smart cities and to propose a Smart City Stakeholder Classification Model. The common dimensions of smart cities are identified, and the roles of the various stakeholders are classified according to these dimensions in the model. Nine common dimensions and related factors were identified through an analysis of existing frameworks for smart cities. The model was then used to identify and classify the stakeholders participating in two smart city projects in the Eastern Cape province of South Africa….(More)”.

The Dark Side of Sunlight


Essay by James D’Angelo and Brent Ranalli in Foreign Affairs: “…76 percent of Americans, according to a Gallup poll, disapprove of Congress.

This dysfunction started well before the Trump presidency. It has been growing for decades, despite promise after promise and proposal after proposal to reverse it. Many explanations have been offered, from the rise of partisan media to the growth of gerrymandering to the explosion of corporate money. But one of the most important causes is usually overlooked: transparency. Something usually seen as an antidote to corruption and bad government, it turns out, is leading to both.

The problem began in 1970, when a group of liberal Democrats in the House of Representatives spearheaded the passage of new rules known as “sunshine reforms.” Advertised as measures that would make legislators more accountable to their constituents, these changes increased the number of votes that were recorded and allowed members of the public to attend previously off-limits committee meetings.

But the reforms backfired. By diminishing secrecy, they opened up the legislative process to a host of actors—corporations, special interests, foreign governments, members of the executive branch—that pay far greater attention to the thousands of votes taken each session than the public does. The reforms also deprived members of Congress of the privacy they once relied on to forge compromises with political opponents behind closed doors, and they encouraged them to bring useless amendments to the floor for the sole purpose of political theater.

Fifty years on, the results of this experiment in transparency are in. When lawmakers are treated like minors in need of constant supervision, it is special interests that benefit, since they are the ones doing the supervising. And when politicians are given every incentive to play to their base, politics grows more partisan and dysfunctional. In order for Congress to better serve the public, it has to be allowed to do more of its work out of public view.

The idea of open government enjoys nearly universal support. Almost every modern president has paid lip service to it. (Even the famously paranoid Richard Nixon said, “When information which properly belongs to the public is systematically withheld by those in power, the people soon become ignorant of their own affairs, distrustful of those who manage them, and—eventually—incapable of determining their own destinies.”) From former Republican Speaker of the House Paul Ryan to Democratic Speaker of the House Nancy Pelosi, from the liberal activist Ralph Nader to the anti-tax crusader Grover Norquist, all agree that when it comes to transparency, more is better.

It was not always this way. It used to be that secrecy was seen as essential to good government, especially when it came to crafting legislation. …(More)”

Surround Sound


Report by the Public Affairs Council: “Millions of citizens and thousands of organizations contact Congress each year to urge Senators and House members to vote for or against legislation. Countless others weigh in with federal agencies on regulatory issues ranging from healthcare to livestock grazing rights. Congressional and federal agency personnel are inundated with input. So how do staff know what to believe? Who do they trust? And which methods of communicating with government seem to be most effective? To find out, the Public Affairs Council teamed up with Morning Consult in an online survey of 173 congressional and federal employees. Participants were asked for their views on social media, fake news, influential methods of communication and trusted sources of policy information.

When asked to compare the effectiveness of different advocacy techniques, congressional staff rate personal visits to Washington, D.C. (83%) or district offices (81%), and think tank reports (81%) at the top of the list. Grassroots advocacy techniques such as emails, phone calls, and postal mail campaigns also score above 75% for effectiveness.

Traditional in-person visits from lobbyists are considered effective by a strong majority (75%), as are town halls (73%) and lobby days (72%). Of the 13 options considered, the lowest score goes to social media posts, which are still rated effective by 57% of survey participants.

Despite their unpopularity with the general public, corporate CEOs are an asset when it comes to getting meetings scheduled with members of Congress. Eighty-three percent (83%) of congressional staffers say their boss would likely meet with a CEO from their district or state when that executive comes to Washington, D.C., compared with only 7% who say their boss would be unlikely to take the meeting….(More)”.

We’ll soon know the exact air pollution from every power plant in the world. That’s huge.


David Roberts at Vox: “A nonprofit artificial intelligence firm called WattTime is going to use satellite imagery to precisely track the air pollution (including carbon emissions) coming out of every single power plant in the world, in real time. And it’s going to make the data public.

This is a very big deal. Poor monitoring and gaming of emissions data have made it difficult to enforce pollution restrictions on power plants. This system promises to effectively eliminate poor monitoring and gaming of emissions data….

The plan is to use data from satellites whose imagery is publicly available (like the European Union’s Copernicus network and the US Landsat network), as well as data from a few private companies that charge for theirs (like Digital Globe). The data will come from a variety of sensors operating at different wavelengths, including thermal infrared that can detect heat.

The images will be processed by various algorithms to detect signs of emissions. It has already been demonstrated that a great deal of pollution can be tracked simply through identifying visible smoke. WattTime says it can also use infrared imaging to identify heat from smokestack plumes or cooling-water discharge. Sensors that can directly track NO2 emissions are in development, according to WattTime executive director Gavin McCormick.

Between visible smoke, heat, and NO2, WattTime will be able to derive exact, real-time emissions information, including information on carbon emissions, for every power plant in the world. (McCormick says the data may also be used to derive information about water pollutants like nitrates or mercury.)
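
To make the signal-fusion idea concrete, here is a minimal sketch of how per-plant satellite signals might be combined into a single emissions estimate. Everything in it (the data class, feature names, and coefficients) is an illustrative assumption, not WattTime’s actual model; a real system would learn such parameters from ground-truth monitoring data rather than set them by hand.

```python
# Illustrative sketch only: fuse satellite-derived signals into a rough
# per-plant emissions estimate. Feature names, coefficients, and the
# linear form are assumptions for this example, not WattTime's method.
from dataclasses import dataclass

@dataclass
class SatelliteObservation:
    smoke_opacity: float       # 0-1, derived from visible-band imagery
    plume_heat_kelvin: float   # thermal-infrared anomaly vs. background
    no2_column: float          # NO2 column density, arbitrary units

def estimate_emission_rate(obs: SatelliteObservation) -> float:
    """Toy linear fusion of signals into tons CO2 per hour.

    The weights below are made up; a real system would fit them against
    ground-truth monitors (e.g., continuous emissions monitoring data).
    """
    W_SMOKE, W_HEAT, W_NO2, BIAS = 120.0, 0.8, 45.0, 5.0
    raw = (W_SMOKE * obs.smoke_opacity
           + W_HEAT * obs.plume_heat_kelvin
           + W_NO2 * obs.no2_column
           + BIAS)
    return max(0.0, raw)  # an emission rate cannot be negative

obs = SatelliteObservation(smoke_opacity=0.4, plume_heat_kelvin=35.0,
                           no2_column=1.2)
print(f"Estimated emission rate: {estimate_emission_rate(obs):.1f} t CO2/h")
```

The point of the sketch is the structure: several imperfect signals, each observable from orbit, fused into one estimate per plant and repeated in near real time.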

Google.org, Google’s philanthropic wing, is getting the project off the ground (pardon the pun) with a $1.7 million grant; it was selected through the Google AI Impact Challenge….(More)”.

The Voluntariness of Voluntary Consent: Consent Searches and the Psychology of Compliance


Paper by Roseanna Sommers and Vanessa K. Bohns: “Consent-based searches are by far the most ubiquitous form of search undertaken by police. A key legal inquiry in these cases is whether consent was granted voluntarily. This Essay suggests that fact finders’ assessments of voluntariness are likely to be impaired by a systematic bias in social perception. Fact finders are likely to underappreciate the degree to which suspects feel pressure to comply with police officers’ requests to perform searches.

In two preregistered laboratory studies, we approached a total of 209 participants (“Experiencers”) with a highly intrusive request: to unlock their password-protected smartphones and hand them over to an experimenter to search through while they waited in another room. A separate 194 participants (“Forecasters”) were brought into the lab and asked whether a reasonable person would agree to the same request if hypothetically approached by the same researcher. Both groups then reported how free they felt, or would feel, to refuse the request.

Study 1 found that whereas most Forecasters believed a reasonable person would refuse the experimenter’s request, most Experiencers—100 out of 103 people—promptly unlocked their phones and handed them over. Moreover, Experiencers reported feeling significantly less free to refuse than did Forecasters contemplating the same situation hypothetically.

Study 2 tested an intervention modeled after a commonly proposed reform of consent searches, in which the experimenter explicitly advises participants that they have the right to withhold consent. We found that this advisory did not significantly reduce compliance rates or make Experiencers feel more free to say no. At the same time, the gap between Experiencers and Forecasters remained significant.

These findings suggest that decision makers judging the voluntariness of consent consistently underestimate the pressure to comply with intrusive requests. This is problematic because it indicates that a key justification for suspicionless consent searches—that they are voluntary—relies on an assessment that is subject to bias. The results thus provide support to critics who would like to see consent searches banned or curtailed, as they have been in several states.

The results also suggest that a popular reform proposal—requiring police to advise citizens of their right to refuse consent—may have little effect. This corroborates previous observational studies, which find negligible effects of Miranda warnings on confession rates among interrogees, and little change in rates of consent once police start notifying motorists of their right to refuse vehicle searches. We suggest that these warnings are ineffective because they fail to address the psychology of compliance. The reason people comply with police, we contend, is social, not informational. The social demands of police-citizen interactions persist even when people are informed of their rights. It is time to abandon the myth that notifying people of their rights makes them feel empowered to exercise those rights…(More)”.

Digital Government: Managing Public Sector Reform in the Digital Era


Book by Miriam Lips: “Digital Government: Managing Public Sector Reform in the Digital Era presents a public management perspective on digital government and technology-enabled change in the public sector. It incorporates theoretical and empirical insights to provide students with a broader and deeper understanding of the complex and multidisciplinary nature of digital government initiatives, impacts, and implications.

Digital government has come to play an increasingly integral role in many government processes and activities, driving fundamental changes at various levels across government, and it is no longer perceived as just a technology issue. In this book, Miriam Lips provides students with practical approaches and perspectives to better understand digital government. The text also explores emerging issues and barriers as well as strategies to more effectively manage digital government and technology-enabled change in the public sector….(More)”.

New Report Examines Reproducibility and Replicability in Science, Recommends Ways to Improve Transparency and Rigor in Research


National Academies of Sciences: “While computational reproducibility in scientific research is generally expected when the original data and code are available, lack of ability to replicate a previous study — or obtain consistent results looking at the same scientific question but with different data — is more nuanced and occasionally can aid in the process of scientific discovery, says a new congressionally mandated report from the National Academies of Sciences, Engineering, and Medicine.  Reproducibility and Replicability in Science recommends ways that researchers, academic institutions, journals, and funders should help strengthen rigor and transparency in order to improve the reproducibility and replicability of scientific research.

Defining Reproducibility and Replicability

The terms “reproducibility” and “replicability” are often used interchangeably, but the report uses each term to refer to a separate concept.  Reproducibility means obtaining consistent computational results using the same input data, computational steps, methods, code, and conditions of analysis.  Replicability means obtaining consistent results across studies aimed at answering the same scientific question, each of which has obtained its own data.   

Reproducing research involves using the original data and code, while replicating research involves new data collection and similar methods used in previous studies, the report says.  Even when a study was rigorously conducted according to best practices, correctly analyzed, and transparently reported, it may fail to be replicated. 
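
The distinction is easy to see in code. Below is a minimal sketch with made-up numbers and a deliberately trivial analysis (a mean): reproducing reruns the archived code on the archived data and expects the identical result, while replicating collects new data on the same question and expects a consistent, though not identical, result.

```python
# Minimal sketch of the report's two concepts, using invented data and
# a trivial "analysis pipeline" (a mean) purely for illustration.
import random
import statistics

def analysis(data):
    """Stand-in for the original study's computational pipeline."""
    return statistics.mean(data)

original_data = [2.1, 1.9, 2.4, 2.0, 2.2]   # archived with the paper
published_result = analysis(original_data)

# Reproducibility: same data + same code must yield the identical number.
assert analysis(original_data) == published_result

# Replicability: new data, same scientific question. We expect results
# that are consistent within some tolerance, not bit-identical.
random.seed(7)
new_data = [random.gauss(2.1, 0.2) for _ in range(30)]
replication_result = analysis(new_data)

print(f"original={published_result:.2f}, replication={replication_result:.2f}")
print("consistent" if abs(replication_result - published_result) < 0.3
      else "inconsistent")
```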

“Being able to reproduce the computational results of another researcher starting with the same data and replicating a previous study to test its results facilitate the self-correcting nature of science, and are often cited as hallmarks of good science,” said Harvey Fineberg, president of the Gordon and Betty Moore Foundation and chair of the committee that conducted the study.  “However, factors such as lack of transparency of reporting, lack of appropriate training, and methodological errors can prevent researchers from being able to reproduce or replicate a study.  Research funders, journals, academic institutions, policymakers, and scientists themselves each have a role to play in improving reproducibility and replicability by ensuring that scientists adhere to the highest standards of practice, understand and express the uncertainty inherent in their conclusions, and continue to strengthen the interconnected web of scientific knowledge — the principal driver of progress in the modern world.”….(More)”.

The State of Open Data


Open Access Book edited by Tim Davies, Stephen B. Walker, Mor Rubinstein and Fernando Perini: “It’s been ten years since open data first broke onto the global stage. Over the past decade, thousands of programmes and projects around the world have worked to open data and use it to address a myriad of social and economic challenges. Meanwhile, issues related to data rights and privacy have moved to the centre of public and political discourse. As the open data movement enters a new phase in its evolution, shifting to target real-world problems and embed open data thinking into other existing or emerging communities of practice, big questions still remain. How will open data initiatives respond to new concerns about privacy, inclusion, and artificial intelligence? And what can we learn from the last decade in order to deliver impact where it is most needed? 

The State of Open Data brings together over 60 authors from around the world to address these questions and to take stock of the real progress made to date across sectors and around the world, uncovering the issues that will shape the future of open data in the years to come….(More)”.

Ethics of identity in the time of big data


Paper by James Brusseau in First Monday: “Compartmentalizing our distinct personal identities is increasingly difficult in big data reality. Pictures of the person we were on past vacations resurface in employers’ Google searches; LinkedIn, which exhibits our income level, is increasingly used as a dating website. Whether on vacation, at work, or seeking romance, our digital selves stream together.

One result is that a perennial ethical question about personal identity has spilled out of philosophy departments and into the real world. Ought we possess one unified identity that coherently integrates the various aspects of our lives, or incarnate deeply distinct selves suited to different occasions and contexts? At bottom, are we one, or many?

The question is not only palpable today, but also urgent because if a decision is not made by us, the forces of big data and surveillance capitalism will make it for us by compelling unity. Speaking in favor of the big data tendency, Facebook’s Mark Zuckerberg promotes the ethics of an integrated identity, a single version of selfhood maintained across diverse contexts and human relationships.

This essay goes in the other direction by sketching two ethical frameworks arranged to defend our compartmentalized identities, which amounts to promoting the dis-integration of our selves. One framework connects with natural law, the other with language, and both aim to create a sense of selfhood that breaks away from its own past, and from the unifying powers of big data technology….(More)”.