Paper by Karl O’Connor, Colin Knox and Saltanat Janenova: “Open government has long been regarded as a Pareto-efficient policy – after all, who could be against such compelling policy objectives as transparency, accountability, citizen engagement and integrity? This paper addresses why an authoritarian state would adopt a policy of open government, which seems counter-intuitive, and tracks its outworking by examining several facets of the policy in practice. The research uncovers evidence of insidious bureaucratic obstruction and an implementation deficit counter-posed with an outward-facing political agenda to gain international respectability. The result is ‘half-open’ government in which the more benign elements have been adopted but the vested interests of government and business elites remain largely unaffected….(More)”.
Stephen Moilanen at the Stanford Social Innovation Review: “Throughout Barack Obama’s presidency, technology company executives regularly sounded off on what, from their perspective, the administration might do differently. In 2010, Steve Jobs reportedly warned Obama that he likely wouldn’t win reelection, because his administration’s policies disadvantaged businesses like Apple. And in a speech at the 2016 Republican National Convention, Peter Thiel expressed his disapproval of the political establishment by quipping, “Instead of going to Mars, we have invaded the Middle East.”
Against this backdrop, one specific way Silicon Valley has tried to nudge Washington in a new direction is with respect to policy development. Specifically, leading technologists have begun encouraging policy makers to apply user-centered design (otherwise known as design thinking or human-centered design) to the public sector. The thinking goes that if government develops policy with users more squarely in mind, it might accelerate social progress rather than—as has often been the case—stifle it.
At a moment when fewer Americans than ever believe government is meeting their needs, a new approach that elevates the voices of citizens is long overdue. Even so, it would be misguided to view user-centered design as a cure-all for what ails the public sector. The approach holds great promise, but only in a well-defined set of circumstances.
User-Centered Design in the Public Policy Arena
The term “user-centered design” refers simply to a method of building products with an eye toward what users want and need.
To date, the approach has been applied primarily to the domain of for-profit start-ups. In recent months and years, however, supporters of user-centered design have sought to introduce it to other domains. A 2013 article authored by the head of a Danish design consultancy, for example, heralded the fact that “public sector design is on the rise.” And in the recent book Lean Impact, former Google executive and USAID official Ann-Mei Chang made an incisive and compelling case for why the social sector stands to benefit from this approach.
According to this line of thinking, we should be driving toward a world where government designs policy with an eye toward the individuals that stand to benefit from—or that could be hurt by—changes to public policy.
An Imperfect Fit
The merits of user-centered design in this context may seem self-evident. Yet it stands in stark contrast to how public sector leaders typically approach policy development. As leading design thinking theorist Jeanne Liedtka notes in her book Design Thinking for the Greater Good, “Innovation and design are [currently] the domain of experts, policy makers, planners and senior leaders. Everyone else is expected to step away.”
But while user-centered design has much to offer policy development, it does not map perfectly onto this new territory….(More)”.
Paper by Aziz Z. Huq: “The theory and the practice of democracy alike are entangled with the prospect of failure. This is so in the sense that a failure of one kind or another is almost always to be found at democracy’s inception. Further, different kinds of shortfalls dog its implementation. No escape is found in theory, which precipitates internal contradictions that can only be resolved by compromising important democratic values. A stable democratic equilibrium proves elusive because of the tendency of discrete lapses to catalyze wider, systemic disruption. Worse, the very pervasiveness of local failure also obscures the tipping point at which systemic change occurs. Social coordination in defense of democracy is therefore very difficult, and its failure correspondingly more likely. This thicket of intimate entanglements has implications for both the proper description and normative analysis of democracy. At a minimum, the nexus of democracy and failure elucidates the difficulty of dichotomizing democracies into the healthy and the ailing. It illuminates the sound design of democratic institutions by gesturing toward resources usefully deployed to mitigate the costs of inevitable failure. Finally, it casts light on the public psychology best adapted to persisting democracy. To grasp the proximity of democracy’s entanglements with failure is thus to temper the aspiration for popular self-government as a steady-state equilibrium, to open new questions about the appropriate political psychology for a sound democracy, and to limn new questions about democracy’s optimal institutional specification….(More)”.
Aaron Smith, Laura Silver, Courtney Johnson, Kyle Taylor and Jingjing Jiang at Pew Research: “In recent years, the internet and social media have been integral to political protests, social movements and election campaigns around the globe. Events from the Arab Spring to the worldwide spread of #MeToo have been aided by digital connectivity in both advanced and emerging economies. But popular social media and messaging platforms like Facebook and WhatsApp have drawn attention for their potential role in spreading misinformation, facilitating political manipulation by foreign and domestic actors, and increasing violence and hate crimes.
Recently, the Sri Lankan government shut down several of the country’s social media and messaging services immediately after Easter Sunday bombings at Catholic churches killed and wounded hundreds. Some technology enthusiasts praised the decision but wondered whether this development marked a change from pro-democracy, Arab Spring-era hopes that digital technology would be a liberating tool to a new fear that it has become “a force that can corrode” societies.
In the context of these developments, a Pew Research Center survey of adults in 11 emerging economies finds these publics are worried about the risks associated with social media and other communications technologies – even as they cite their benefits in other respects. Succinctly put, the prevailing view in the surveyed countries is that mobile phones, the internet and social media have collectively amplified politics in both positive and negative directions – simultaneously making people more empowered politically and potentially more exposed to harm.
When it comes to the benefits, adults in these countries see digital connectivity enhancing people’s access to political information and facilitating engagement with their domestic politics. Majorities in each country say access to the internet, mobile phones and social media has made people more informed about current events, and majorities in most countries believe social media have increased ordinary people’s ability to have a meaningful voice in the political process. Additionally, half or more in seven of these 11 countries say technology has made people more accepting of those who have different views than they do.
But these perceived benefits are frequently accompanied by concerns about the limitations of technology as a tool for political action or information seeking. Even as many say social media have increased the influence of ordinary people in the political process, majorities in eight of these 11 countries feel these platforms have simultaneously increased the risk that people might be manipulated by domestic politicians. Around half or more in eight countries also think these platforms increase the risk that foreign powers might interfere in their country’s elections….(More)”.
Research Paper by Roya Pakzad: “Efforts are being made to use information and communications technologies (ICTs) to improve accountability in providing refugee aid. However, there remains a pressing need for increased accountability and transparency when designing and deploying humanitarian technologies. This paper outlines the challenges and opportunities of emerging technologies, such as machine learning and blockchain, in the refugee system.
The paper concludes by recommending the creation of quantifiable metrics for sharing information across both public and private initiatives; the creation of the equivalent of a “Hippocratic oath” for technologists working in the humanitarian field; the development of predictive early-warning systems for human rights abuses; and greater accountability among funders and technologists to ensure the sustainability and real-world value of humanitarian apps and other digital platforms….(More)”
Book by Carmit Wiesslitz: “What role does the Internet play in the activities of organizations for social change? This book examines to what extent the democratic potential ascribed to the Internet is realized in practice, and how civil society organizations exploit the unique features of the Internet to attain their goals. This is the story of the organization members’ outlooks and impressions of digital platforms’ role as tools for social change; a story that debunks a common myth about the Internet and collective action. In a time when social media are credited with immense power in generating social change, this book serves as an important reminder that reality for activists and social change organizations is more complicated. Thus, the book sheds light on the back stage of social change organizations’ operations as they struggle to gain visibility in the infinite sea of civil groups competing for attention in the online public sphere. While many studies focus on the performative dimension of collective action (such as protests), this book highlights the challenges of these organizations’ mundane routines. Using a unique analytical perspective based on a structural-organizational approach, and a longitudinal study that utilizes a decade’s worth of data related to the specific case of Israel and its highly conflicted and turbulent society, the book makes a significant contribution to the study of new media and to theories of Internet, democracy, and social change….(More)”.
This dysfunction started well before the Trump presidency. It has been growing for decades, despite promise after promise and proposal after proposal to reverse it. Many explanations have been offered, from the rise of partisan media to the growth of gerrymandering to the explosion of corporate money. But one of the most important causes is usually overlooked: transparency. Something usually seen as an antidote to corruption and bad government, it turns out, is leading to both.
The problem began in 1970, when a group of liberal Democrats in the House of Representatives spearheaded the passage of new rules known as “sunshine reforms.” Advertised as measures that would make legislators more accountable to their constituents, these changes increased the number of votes that were recorded and allowed members of the public to attend previously off-limits committee meetings.
But the reforms backfired. By diminishing secrecy, they opened up the legislative process to a host of actors—corporations, special interests, foreign governments, members of the executive branch—that pay far greater attention to the thousands of votes taken each session than the public does. The reforms also deprived members of Congress of the privacy they once relied on to forge compromises with political opponents behind closed doors, and they encouraged them to bring useless amendments to the floor for the sole purpose of political theater.
Fifty years on, the results of this experiment in transparency are in. When lawmakers are treated like minors in need of constant supervision, it is special interests that benefit, since they are the ones doing the supervising. And when politicians are given every incentive to play to their base, politics grows more partisan and dysfunctional. In order for Congress to better serve the public, it has to be allowed to do more of its work out of public view.
The idea of open government enjoys nearly universal support. Almost every modern president has paid lip service to it. (Even the famously paranoid Richard Nixon said, “When information which properly belongs to the public is systematically withheld by those in power, the people soon become ignorant of their own affairs, distrustful of those who manage them, and—eventually—incapable of determining their own destinies.”) From former Republican Speaker of the House Paul Ryan to Democratic Speaker of the House Nancy Pelosi, from the liberal activist Ralph Nader to the anti-tax crusader Grover Norquist, all agree that when it comes to transparency, more is better.
It was not always this way. It used to be that secrecy was seen as essential to good government, especially when it came to crafting legislation. …(More)”
Paper by Roseanna Sommers and Vanessa K. Bohns: “Consent-based searches are by far the most common form of search undertaken by police. A key legal inquiry in these cases is whether consent was granted voluntarily. This Essay suggests that fact finders’ assessments of voluntariness are likely to be impaired by a systematic bias in social perception. Fact finders are likely to underappreciate the degree to which suspects feel pressure to comply with police officers’ requests to perform searches.
In two preregistered laboratory studies, we approached a total of 209 participants (“Experiencers”) with a highly intrusive request: to unlock their password-protected smartphones and hand them over to an experimenter to search through while they waited in another room. A separate 194 participants (“Forecasters”) were brought into the lab and asked whether a reasonable person would agree to the same request if hypothetically approached by the same researcher. Both groups then reported how free they felt, or would feel, to refuse the request.
Study 1 found that whereas most Forecasters believed a reasonable person would refuse the experimenter’s request, most Experiencers—100 out of 103 people—promptly unlocked their phones and handed them over. Moreover, Experiencers reported feeling significantly less free to refuse than did Forecasters contemplating the same situation hypothetically.
Study 2 tested an intervention modeled after a commonly proposed reform of consent searches, in which the experimenter explicitly advises participants that they have the right to withhold consent. We found that this advisory did not significantly reduce compliance rates or make Experiencers feel more free to say no. At the same time, the gap between Experiencers and Forecasters remained significant.
These findings suggest that decision makers judging the voluntariness of consent consistently underestimate the pressure to comply with intrusive requests. This is problematic because it indicates that a key justification for suspicionless consent searches—that they are voluntary—relies on an assessment that is subject to bias. The results thus provide support to critics who would like to see consent searches banned or curtailed, as they have been in several states.
The results also suggest that a popular reform proposal—requiring police to advise citizens of their right to refuse consent—may have little effect. This corroborates previous observational studies, which find negligible effects of Miranda warnings on confession rates among interrogees, and little change in rates of consent once police start notifying motorists of their right to refuse vehicle searches. We suggest that these warnings are ineffective because they fail to address the psychology of compliance. The reason people comply with police, we contend, is social, not informational. The social demands of police-citizen interactions persist even when people are informed of their rights. It is time to abandon the myth that notifying people of their rights makes them feel empowered to exercise those rights…(More)”.
Book by Miriam Lips: “Digital Government: Managing Public Sector Reform in the Digital Era presents a public management perspective on digital government and technology-enabled change in the public sector. It incorporates theoretical and empirical insights to provide students with a broader and deeper understanding of the complex and multidisciplinary nature of digital government initiatives, impacts, and implications.
The rise of digital government and its increasingly integral role in many government processes and activities, including overseeing fundamental changes at various levels across government, means that it is no longer perceived as just a technology issue. In this book Miriam Lips provides students with practical approaches and perspectives to better understand digital government. The text also explores emerging issues and barriers as well as strategies to more effectively manage digital government and technology-enabled change in the public sector….(More)”.
National Academies of Sciences: “While computational reproducibility in scientific research is generally expected when the original data and code are available, lack of ability to replicate a previous study — or obtain consistent results looking at the same scientific question but with different data — is more nuanced and occasionally can aid in the process of scientific discovery, says a new congressionally mandated report from the National Academies of Sciences, Engineering, and Medicine. Reproducibility and Replicability in Science recommends ways that researchers, academic institutions, journals, and funders should help strengthen rigor and transparency in order to improve the reproducibility and replicability of scientific research.
Defining Reproducibility and Replicability
The terms “reproducibility” and “replicability” are often used interchangeably, but the report uses each term to refer to a separate concept. Reproducibility means obtaining consistent computational results using the same input data, computational steps, methods, code, and conditions of analysis. Replicability means obtaining consistent results across studies aimed at answering the same scientific question, each of which has obtained its own data.
Reproducing research involves using the original data and code, while replicating research involves new data collection and similar methods used in previous studies, the report says. Even when a study was rigorously conducted according to best practices, correctly analyzed, and transparently reported, it may fail to be replicated.
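The report’s distinction is easiest to see in computational terms. As a minimal sketch (the toy “analysis” and all variable names here are illustrative, not drawn from the report): reproducing a result means rerunning the same code on the same data with the same conditions and getting an identical answer, while replicating it means collecting new data on the same question and expecting a consistent, but not bit-identical, result.

```python
import random

def analyze(data, seed=42):
    """Toy bootstrap-mean 'analysis' with a fixed RNG seed,
    so identical inputs always yield identical output."""
    rng = random.Random(seed)
    resample = [rng.choice(data) for _ in data]
    return sum(resample) / len(resample)

# Reproducibility: same data, same code, same seed -> identical result.
data = [2.0, 4.0, 6.0, 8.0]
assert analyze(data) == analyze(data)

# Replicability: a new data collection, same question, similar method.
# The two estimates should be consistent (close), not identical.
new_data = [2.1, 3.9, 6.2, 7.8]
print(analyze(data), analyze(new_data))
```

In this framing, a study that fails the first check has a transparency or code problem; one that fails the second may simply reflect sampling variability or genuine uncertainty, which, as the report notes, can itself aid discovery.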
“Being able to reproduce the computational results of another researcher starting with the same data and replicating a previous study to test its results facilitate the self-correcting nature of science, and are often cited as hallmarks of good science,” said Harvey Fineberg, president of the Gordon and Betty Moore Foundation and chair of the committee that conducted the study. “However, factors such as lack of transparency of reporting, lack of appropriate training, and methodological errors can prevent researchers from being able to reproduce or replicate a study. Research funders, journals, academic institutions, policymakers, and scientists themselves each have a role to play in improving reproducibility and replicability by ensuring that scientists adhere to the highest standards of practice, understand and express the uncertainty inherent in their conclusions, and continue to strengthen the interconnected web of scientific knowledge — the principal driver of progress in the modern world.”….(More)”.