China asserts firm grip on research data


ScienceMag: “In a move few scientists anticipated, the Chinese government has decreed that all scientific data generated in China must be submitted to government-sanctioned data centers before appearing in publications. At the same time, the regulations, posted last week, call for open access and data sharing.

The possibly conflicting directives puzzle researchers, who note that the yet-to-be-established data centers will have latitude in interpreting the rules. Scientists in China can still share results with overseas collaborators, says Xie Xuemei, who specializes in innovation economics at Shanghai University. Xie also believes that the new requirements to register data with authorities before submitting papers to journals will not affect most research areas. Gaining approval could mean publishing delays, Xie says, but “it will not have a serious impact on scientific research.”

The new rules, issued by the powerful State Council, apply to all groups and individuals generating research data in China. The creation of a national data center will apparently fall to the science ministry, though other ministries and local governments are expected to create their own centers as well. Exempted from the call for open access and sharing are data involving state and business secrets, national security, “public interest,” and individual privacy… (More)”

Accountability in modern government: what are the issues?


Discussion Paper by Benoit Guerin, Julian McCrae and Marcus Shepheard: “…Accountability lies at the heart of democratic government. It enables people to know how the Government is doing and how to gain redress when things go wrong. It ensures ministers and civil servants are acting in the interests of the people they serve.

Accountability is a part of good governance and it can increase the trustworthiness and legitimacy of the state in the eyes of the public. Every day, 5.4 million public sector workers deliver services ranging from health care to schools to national defence. A host of bodies hold them to account – whether the National Audit Office undertaking around 60 value for money inquiries a year, Ofsted inspecting more than 5,000 schools per year, or the main Government ombudsman services dealing with nearly 80,000 complaints from the public in 2016/17 alone. More than 21,000 elected officials, ranging from MPs to local councillors, scrutinise these services on behalf of citizens.

When that accountability works properly, it helps the UK’s government to be among the best in the world. For example, public spending is authorised by Parliament and routinely stays within the limits set. The accountability that surrounds this – provided through oversight by the Treasury, audit by the National Audit Office and scrutiny by the Public Accounts Committee – is strong and dates back to the 19th century. However, in areas where that accountability is weak, the risk of failure – whether financial mismanagement, the collapse of services or chronic underperformance – increases. …

There are three factors underpinning the weak accountability that is perpetuating failure. They are: fundamental gaps in accountability in Whitehall; a failure of accountability beyond Whitehall to keep pace with an increasingly complex public sector landscape; and a pervading culture of blame….

This paper suggests potential options for strengthening accountability, based on our analysis. These involve changes to structures, increased transparency and moves to improve the culture. These options are meant to elicit discussion rather than to set the Institute for Government’s position at this stage….(More)”

Privacy’s Blueprint: The Battle to Control the Design of New Technologies


Book by Woodrow Hartzog: “Every day, Internet users interact with technologies designed to undermine their privacy. Social media apps, surveillance technologies, and the Internet of Things are all built in ways that make it hard to guard personal information. And the law says this is okay because it is up to users to protect themselves—even when the odds are deliberately stacked against them.

In Privacy’s Blueprint, Woodrow Hartzog pushes back against this state of affairs, arguing that the law should require software and hardware makers to respect privacy in the design of their products. Current legal doctrine treats technology as though it were value-neutral: only the user decides whether it functions for good or ill. But this is not so. As Hartzog explains, popular digital tools are designed to expose people and manipulate users into disclosing personal information.

Against the often self-serving optimism of Silicon Valley and the inertia of tech evangelism, Hartzog contends that privacy gains will come from better rules for products, not users. The current model of regulating use fosters exploitation. Privacy’s Blueprint aims to correct this by developing the theoretical underpinnings of a new kind of privacy law responsive to the way people actually perceive and use digital technologies. The law can demand encryption. It can prohibit malicious interfaces that deceive users and leave them vulnerable. It can require safeguards against abuses of biometric surveillance. It can, in short, make the technology itself worthy of our trust….(More)”.

What Is Human-Centric Design?


Zack Quaintance at GovTech: “…Government services, like all services, have historically used some form of design to deploy user-facing components. The design portion of this equation is nothing new. What Olesund says is new, however, is the human-centric component.

“In the past, government services were often designed from the perspective and need of the government institution, not necessarily with the needs or desires of residents or constituents in mind,” said Olesund. “This might lead, for example, to an accumulation of steps and requirements for residents, or utilization of outdated technology because the government institution is locked into a contract.”

Basically, government has never set out to design its services to be clunky or hard to use. These qualities have, however, grown out of the legally complex frameworks that governments must adhere to, which can result in the needs of the institution being prioritized over those of the people actually using the services.

Change, however, is underway. Human-centric design is one of the main priorities of the U.S. Digital Service (USDS) and 18F, a pair of organizations created under the Obama administration with missions that largely involve making government services more accessible to the citizenry through efficient use of tech.

Although the needs of state and municipal governments are more localized, the gov tech work done at the federal level by the USDS and 18F has at times served as a benchmark or guidepost for smaller government agencies.

“They both redesign services to make them digital and user-friendly,” Olesund said. “But they also do a lot of work creating frameworks and best practices for other government agencies to adopt in order to achieve some of the broader systemic change.”

One of the most tangible examples of human-centered design at the state or local level can be found at Michigan’s Department of Health and Human Services, which recently worked with the Detroit-based design studio Civilla to reduce its paper services application from 40 pages, 18,000-some words and 1,000 questions, down to 18 pages, 3,904 words and 213 questions. Currently, Civilla is working with the nonprofit civic tech group Code for America to help bring the same massive level of human-centered design progress to the state’s digital services.

Other work is underway in San Francisco’s City Hall and within the state of California. A number of cities also have iTeams funded through Bloomberg Philanthropies, and their missions are to innovate in ways that solve ongoing municipal problems, a mission that often requires use of human-centric design….(More)”.

How artificial intelligence is transforming the world


Report by Darrell West and John Allen at Brookings: “Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior business leaders in the United States in 2017 were asked about AI, only 17 percent said they were familiar with it. A number of them were not sure what it was or how it would affect their particular companies. They understood there was considerable potential for altering business processes, but were not clear how AI could be deployed within their own organizations.

Despite its widespread lack of familiarity, AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decisionmaking. Our hope through this comprehensive overview is to explain AI to an audience of policymakers, opinion leaders, and interested observers, and demonstrate how AI already is altering the world and raising important questions for society, the economy, and governance.

In this paper, we discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions. We contrast the regulatory approaches of the U.S. and European Union, and close by making a number of recommendations for getting the most out of AI while still protecting important human values.

In order to maximize AI benefits, we recommend nine steps for going forward:

  • Encourage greater data access for researchers without compromising users’ personal privacy,
  • invest more government funding in unclassified AI research,
  • promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy,
  • create a federal AI advisory committee to make policy recommendations,
  • engage with state and local officials so they enact effective policies,
  • regulate broad AI principles rather than specific algorithms,
  • take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
  • maintain mechanisms for human oversight and control, and
  • penalize malicious AI behavior and promote cybersecurity….(More)”

Table of Contents
I. Qualities of artificial intelligence
II. Applications in diverse sectors
III. Policy, regulatory, and ethical issues
IV. Recommendations
V. Conclusion

Digitalization and Public Sector Transformations


Book by Jannick Schou and Morten Hjelholt: “This book provides a study of governmental digitalization, an increasingly important area of policymaking within advanced capitalist states. It dives into a case study of digitalization efforts in Denmark, fusing a national policy study with local institutional analysis. Denmark is often framed as an international forerunner in terms of digitalizing its public sector and thus provides a particularly instructive setting for understanding this new political instrument.

Advancing a cultural political economic approach, Schou and Hjelholt argue that digitalization is far from a quick technological fix. Instead, this area must be located against wider transformations within the political economy of capitalist states. Doing so, the book excavates the political roots of digitalization and reveals its institutional consequences. It shows how new relations are being formed between the state and its citizens.

Digitalization and Public Sector Transformations pushes for a renewed approach to governmental digitalization and will be of interest to scholars working in the intersections of critical political economy, state theory and policy studies…(More)”.

A Race to the Top? The Aid Transparency Index and the Social Power of Global Performance Indicators


Paper by Dan Honig and Catherine Weaver: “Recent studies on global performance indicators (GPIs) reveal the distinct power that non-state actors can accrue and exercise in world politics. How and when does this happen? Using a mixed-methods approach, we examine the impact of the Aid Transparency Index (ATI), an annual rating and rankings index produced by the small UK-based NGO Publish What You Fund.

The ATI seeks to shape development aid donors’ behavior with respect to their transparency – the quality and kind of information they publicly disclose. To investigate the ATI’s effect, we construct an original panel dataset of donor transparency performance before and after ATI inclusion (2006-2013) to test whether, and which, donors alter their behavior in response to inclusion in the ATI. To further probe the causal mechanisms that explain variations in donor behavior we use qualitative research, including over 150 key informant interviews conducted between 2010 and 2017.

Our analysis uncovers the conditions under which the ATI influences powerful aid donors. Moreover, our mixed methods evidence reveals how this happens. Consistent with Kelley & Simmons’ central argument that GPIs exercise influence via social pressure, we find that the ATI shapes donor behavior primarily via direct effects on elites: the diffusion of professional norms, organizational learning, and peer pressure….(More)”.

What if a nuke goes off in Washington, D.C.? Simulations of artificial societies help planners cope with the unthinkable


Mitchell Waldrop at Science: “…The point of such models is to avoid describing human affairs from the top down with fixed equations, as is traditionally done in such fields as economics and epidemiology. Instead, outcomes such as a financial crash or the spread of a disease emerge from the bottom up, through the interactions of many individuals, leading to a real-world richness and spontaneity that is otherwise hard to simulate.
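The bottom-up mechanism described here can be illustrated with a minimal agent-based epidemic sketch. All parameters below are invented for illustration; real models such as NPS1 simulate millions of far richer agents. The point is that the outbreak curve is nowhere written down as an equation – it emerges from many local interactions:

```python
import random

random.seed(42)

N_AGENTS = 1000
CONTACTS_PER_DAY = 5      # assumed daily mixing rate
P_TRANSMIT = 0.05         # assumed per-contact infection probability
DAYS_INFECTIOUS = 7

# Each agent is an individual with a state: 'S' susceptible,
# 'I' infected (with a countdown timer), 'R' recovered.
agents = [{"state": "S", "timer": 0} for _ in range(N_AGENTS)]
for a in random.sample(agents, 5):        # seed a handful of infections
    a["state"], a["timer"] = "I", DAYS_INFECTIOUS

def step(population):
    """One simulated day: random contacts spread infection; timers tick down."""
    infected = [a for a in population if a["state"] == "I"]
    for a in infected:
        for other in random.sample(population, CONTACTS_PER_DAY):
            if other["state"] == "S" and random.random() < P_TRANSMIT:
                other["state"], other["timer"] = "I", DAYS_INFECTIOUS
    for a in infected:
        a["timer"] -= 1
        if a["timer"] <= 0:
            a["state"] = "R"

for day in range(60):
    step(agents)

counts = {s: sum(a["state"] == s for a in agents) for s in "SIR"}
print(counts)  # the final outbreak size emerges from the interactions
```

No equation prescribes the epidemic curve; change the contact structure (say, close one location) and a different outcome emerges – which is exactly the kind of "what if" question emergency planners put to such models.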

That kind of detail is exactly what emergency managers need, says Christopher Barrett, a computer scientist who directs the Biocomplexity Institute at Virginia Polytechnic Institute and State University (Virginia Tech) in Blacksburg, which developed the NPS1 model for the government. The NPS1 model can warn managers, for example, that a power failure at point X might well lead to a surprise traffic jam at point Y. If they decide to deploy mobile cell towers in the early hours of the crisis to restore communications, NPS1 can tell them whether more civilians will take to the roads, or fewer. “Agent-based models are how you get all these pieces sorted out and look at the interactions,” Barrett says.

The downside is that models like NPS1 tend to be big—each of the model’s initial runs kept a 500-microprocessor computing cluster busy for a day and a half—forcing the agents to be relatively simple-minded. “There’s a fundamental trade-off between the complexity of individual agents and the size of the simulation,” says Jonathan Pfautz, who funds agent-based modeling of social behavior as a program manager at the Defense Advanced Research Projects Agency in Arlington, Virginia.

But computers keep getting bigger and more powerful, as do the data sets used to populate and calibrate the models. In fields as diverse as economics, transportation, public health, and urban planning, more and more decision-makers are taking agent-based models seriously. “They’re the most flexible and detailed models out there,” says Ira Longini, who models epidemics at the University of Florida in Gainesville, “which makes them by far the most effective in understanding and directing policy.”

The roots of agent-based modeling go back at least to the 1940s, when computer pioneers such as Alan Turing experimented with locally interacting bits of software to model complex behavior in physics and biology. But the current wave of development didn’t get underway until the mid-1990s….(More)”.

Digital Identity: On the Threshold of a Digital Identity Revolution


White Paper by the World Economic Forum: “For individuals, legal entities and devices alike, a verifiable and trusted identity is necessary to interact and transact with others.

The concept of identity isn’t new – for much of human history, we have used evolving credentials, from beads and wax seals to passports, ID cards and birth certificates, to prove who we are. The issues associated with identity proofing – fraud, stolen credentials and social exclusion – have challenged individuals throughout history. But, as the spheres in which we live and transact have grown, first geographically and now into the digital economy, the ways in which humans, devices and other entities interact are quickly evolving – and how we manage identity will have to change accordingly.

As we move into the Fourth Industrial Revolution and more transactions are conducted digitally, a digital representation of one’s identity has become increasingly important; this applies to humans, devices, legal entities and beyond. For humans, this proof of identity is a fundamental prerequisite to access critical services and participate in modern economic, social and political systems. For devices, their digital identity is critical in conducting transactions, especially as the devices will be able to transact relatively independent of humans in the near future. For legal entities, the current state of identity management consists of inefficient manual processes that could benefit from new technologies and architecture to support digital growth.

As the number of digital services, transactions and entities grows, it will be increasingly important to ensure the transactions take place in a secure and trusted network where each entity can be identified and authenticated. Identity is the first step of every transaction between two or more parties.

Over the ages, transactions between two identities have mostly been viewed in relation to the validation of a credential (“Is this genuine information?”), verification (“Does the information match the identity?”) and authentication of an identity (“Does this human/thing match the identity? Are you really who you claim to be?”). These questions have not changed over time; only the methods have changed. This paper explores the challenges with current identity systems and the trends that will have a significant impact on identity in the future….(More)”.
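The three checks can be made concrete with a toy sketch. The HMAC-signed credential and all names here are invented for illustration; real identity systems rely on certified credentials, public-key infrastructure and stronger authentication factors:

```python
import hmac
import hashlib

ISSUER_KEY = b"issuer-secret"   # hypothetical issuing authority's signing key

def issue_credential(name: str, birth_year: int) -> dict:
    """The authority signs the attributes it attests to."""
    payload = f"{name}|{birth_year}".encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"name": name, "birth_year": birth_year, "sig": sig}

def validate(cred: dict) -> bool:
    """Validation: is this genuine, untampered information?"""
    payload = f"{cred['name']}|{cred['birth_year']}".encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(cred["sig"], expected)

def verify(cred: dict, claimed_name: str) -> bool:
    """Verification: does the information match the claimed identity?"""
    return validate(cred) and cred["name"] == claimed_name

def authenticate(cred: dict, claimed_name: str, secret: str, stored: str) -> bool:
    """Authentication: is the presenter really who they claim to be?
    A shared secret stands in here for a password, key or biometric."""
    return verify(cred, claimed_name) and hmac.compare_digest(secret, stored)

cred = issue_credential("Alice", 1990)
assert validate(cred)            # genuine information
assert verify(cred, "Alice")     # information matches the claimed identity
assert not verify(cred, "Bob")   # mismatch is detected
```

The three functions map onto the three unchanged questions; what evolves over time is only the mechanism behind each one, from wax seals to cryptographic signatures.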

Managing Public Trust


Book edited by Barbara Kożuch, Sławomir J. Magala and Joanna Paliszkiewicz: “This book brings together the theory and practice of managing public trust. It examines the current state of public trust, including a comprehensive global overview of both the research and practical applications of managing public trust by presenting research from seven countries (Brazil, Finland, Poland, Hungary, Portugal, Taiwan, Turkey) from three continents. The book is divided into five parts, covering the meaning of trust, types, dimension and the role of trust in management; the organizational challenges in relation to public trust; the impact of social media on the development of public trust; the dynamics of public trust in business; and public trust in different cultural contexts….(More)”.