Reduced‐Boundary Governance: The Advantages of Working Together


Introduction by Jeremy L. Hall and R. Paul Battaglio to the Special Issue of Public Administration Review: “Collaboration, cooperation, and coproduction are all approaches that reflect the realization that creative solutions look beyond traditional, organizational, and structural boundaries to overcome various capacity deficiencies while working toward shared goals….One of the factors complicating measurement and analysis in multistakeholder approaches to solving problems and delivering services is the inherently intergovernmental and intersectoral nature of the work. Performance now depends on accumulated capacity across organizations, including a special form of capacity—the ability to work together collaboratively. Such activity within a government has been referred to as “whole of government” approaches or “joined up government” (Christensen and Lægreid 2007). We have terms for work across levels of government (intergovernmental relations) and between government and the public and private sectors (intersectoral relations), but on the whole, the creative, collaborative, and interactive activities in which governments are involved today transcend even these neat categories and classifications. We might call this phenomenon reduced‐boundary governance. Moving between levels of government or between sectors often changes the variables that are available for analysis, or at least introduces validity issues associated with differences in measurement and estimation (see Brandsen and Honingh 2016; Nabatchi, Sancino, and Sicilia 2017). Sometimes data are not available at all. And, of course, collaboration or pooling of resources typically occurs on an ad hoc or one‐off basis that is limited to a single problem, a single program, or a single defined period of time, further complicating study and knowledge accumulation.

Increasingly, public service is accomplished together rather than alone. Boundaries between organizations are becoming blurred in new approaches to solving public problems (Christensen and Lægreid 2007). PAR is committed to better understanding the circumstances under which collaboration, cooperation, and coproduction occur. What are the necessary antecedents? What are the deterrents? We are interested in the challenges that organizations face as they pursue collaborative action that transcends boundaries. And, of course, we are interested in the efficiency and performance gains that are achieved as a result of those efforts, as well as in their long‐term sustainability.

In this issue, we feature a series of articles that highlight research that focuses on working together, through collaboration, coproduction, or cooperation. The issue begins with a look at right‐sizing the use of volunteerism in public and nonprofit organizations given their limitations and possibilities (Nesbit, Christensen, and Brudney 2018). Uzochukwu and Thomas (2018) then explore coproduction using a case study of Atlanta to better understand who uses it and why. Klok et al. (2018) present a fascinating look at intermunicipal cooperation through polycentric regional governance in the Netherlands, with an eye toward the costs and effectiveness of those arrangements. McGuire, Hoang, and Prakash (2018) look at the effectiveness of voluntary environmental programs in pollution reduction. Using different policy tools as lenses for analysis, Jung, Malatesta, and LaLonde (2018) ask whether work release programs are improved by working together or working alone. Finally, Yi et al. (2018) explore the role of regional governance and institutional collective action in promoting environmental sustainability. Each of these pieces explores unique dimensions of working together, or governing beyond traditional boundaries….(More)”.

The Global Council on Extended Intelligence


“The IEEE Standards Association (IEEE-SA) and the MIT Media Lab are joining forces to launch a global Council on Extended Intelligence (CXI) composed of individuals who agree on the following:

One of the most powerful narratives of modern times is the story of scientific and technological progress. While our future will undoubtedly be shaped by the use of existing and emerging technologies – in particular, of autonomous and intelligent systems (A/IS) – there is no guarantee that progress defined by “the next” is beneficial. Growth for humanity’s future should not be defined by reductionist ideas of speed or size alone but as the holistic evolution of our species in positive alignment with the environmental and other systems comprising the modern algorithmic world.

We believe all systems must be responsibly created to best utilize science and technology for tangible social and ethical progress. Individuals, businesses and communities involved in the development and deployment of autonomous and intelligent technologies should mitigate predictable risks at the inception and design phase and not as an afterthought. This will help ensure these systems are created in such a way that their outcomes are beneficial to society, culture and the environment.

Autonomous and intelligent technologies also need to be created via participatory design, where systems thinking can help us avoid repeating past failures stemming from attempts to control and govern the complex-adaptive systems we are part of. Responsible living with or in the systems we are part of requires an awareness of the constrictive paradigms we operate in today. Our future practices will be shaped by our individual and collective imaginations and by the stories we tell about who we are and what we desire, for ourselves and the societies in which we live.

These stories must move beyond the “us versus them” media mentality pitting humans against machines. Autonomous and intelligent technologies have the potential to enhance our personal and social skills; they are much more fully integrated and less discrete than the term “artificial intelligence” implies. And while this process may enlarge our cognitive intelligence or make certain individuals or groups more powerful, it does not necessarily make our systems more stable or socially beneficial.

We cannot create sound governance for autonomous and intelligent systems in the Algorithmic Age while utilizing reductionist methodologies. By proliferating the ideals of responsible participant design, data symmetry and metrics of economic prosperity prioritizing people and the planet over profit and productivity, the Council on Extended Intelligence will work to transform reductionist thinking of the past to prepare for a flourishing future.

Three Priority Areas to Fulfill Our Vision

1 – Build a new narrative for intelligent and autonomous technologies inspired by principles of systems dynamics and design.

“Extended Intelligence” is based on the hypothesis that intelligence, ideas, analysis and action are not formed in any one individual collection of neurons or code….

2 – Reclaim our digital identity in the algorithmic age

Business models based on tracking behavior and using outdated modes of consent are compounded by the appetites of states, industries and agencies for all data that may be gathered….

3 – Rethink our metrics for success

Although very widely used, concepts of exponential growth and productivity such as the gross domestic product (GDP) index are insufficient to holistically measure societal prosperity. … (More)”.

Balancing Act: Innovation vs. Privacy in the Age of Data Portability


Thursday, July 12, 2018 @ 2 MetroTech Center, Brooklyn, NY 11201

RSVP here.

The ability of people to move or copy data about themselves from one service to another — data portability — has been hailed as a way of increasing competition and driving innovation. In many areas, such as through the Open Banking initiative in the United Kingdom, the practice of data portability is fully underway and propagating. The launch of GDPR in Europe has also elevated the issue among companies and individuals alike. But recent online security breaches and other experiences of personal data being transferred surreptitiously from private companies (e.g., Cambridge Analytica’s appropriation of Facebook data) highlight how data portability can also undermine people’s privacy.

The GovLab at the NYU Tandon School of Engineering is pleased to present Jeni Tennison, CEO of the Open Data Institute, for its next Ideas Lunch, where she will discuss how data portability has been regulated in the UK and Europe, and what governments, businesses and people need to do to strike the balance between its risks and benefits.

Jeni Tennison is the CEO of the Open Data Institute. She gained her PhD from the University of Nottingham, then worked as an independent consultant, specialising in open data publishing and consumption, before joining the ODI in 2012. Jeni was awarded an OBE for services to technology and open data in the 2014 New Year Honours.

Before joining the ODI, Jeni was the technical architect and lead developer for legislation.gov.uk. She worked on the early linked data work on data.gov.uk, including helping to engineer new standards for publishing statistics as linked data. She continues her work within the UK’s public sector as a member of the Open Standards Board.

Jeni also works on international web standards. She was appointed to serve on the W3C’s Technical Architecture Group from 2011 to 2015 and in 2014 she started to co-chair the W3C’s CSV on the Web Working Group. She also sits on the Advisory Boards for Open Contracting Partnership and the Data Transparency Lab.

Twitter handle: @JeniT

Against the Dehumanisation of Decision-Making – Algorithmic Decisions at the Crossroads of Intellectual Property, Data Protection, and Freedom of Information


Paper by Guido Noto La Diega: “Nowadays algorithms can decide if one can get a loan, is allowed to cross a border, or must go to prison. Artificial intelligence techniques (natural language processing and machine learning in the first place) enable private and public decision-makers to analyse big data in order to build profiles, which are used to make decisions in an automated way.

This work presents ten arguments against algorithmic decision-making. These revolve around the concepts of ubiquitous discretionary interpretation, holistic intuition, algorithmic bias, the three black boxes, psychology of conformity, power of sanctions, civilising force of hypocrisy, pluralism, empathy, and technocracy.

The lack of transparency of the algorithmic decision-making process does not stem merely from the characteristics of the relevant techniques used, which can make it impossible to access the rationale of the decision. It depends also on the abuse of and overlap between intellectual property rights (the “legal black box”). In the US, nearly half a million patented inventions concern algorithms; more than 67% of the algorithm-related patents were issued over the last ten years and the trend is increasing.

To counter the increased monopolisation of algorithms by means of intellectual property rights (with trade secrets leading the way), this paper presents three legal routes that enable citizens to ‘open’ the algorithms.

First, copyright and patent exceptions, as well as trade secrets, are discussed.

Second, the GDPR is critically assessed. In principle, data controllers are not allowed to use algorithms to take decisions that have legal effects on the data subject’s life or similarly significantly affect them. However, when they are allowed to do so, the data subject still has the right to obtain human intervention, to express their point of view, as well as to contest the decision. Additionally, the data controller shall provide meaningful information about the logic involved in the algorithmic decision.

Third, this paper critically analyses the first known case of a court using the access right under the freedom of information regime to grant an injunction to release the source code of the computer program that implements an algorithm.

Only an integrated approach – which takes into account intellectual property, data protection, and freedom of information – may provide the citizen affected by an algorithmic decision with an effective remedy, as required by the Charter of Fundamental Rights of the EU and the European Convention on Human Rights….(More)”.

Developing an impact framework for cultural change in government


Jesper Christiansen at Nesta: “Innovation teams and labs around the world are increasingly being tasked with building capacity and contributing to cultural change in government. There’s also an increasing recognition that we need to go beyond projects or single structures and make innovation part of the way governments operate more broadly.

However, there is a significant gap in our understanding of what “cultural change” or “better capacity” actually means.

At the same time, most innovation labs and teams are still being held to account in ways that don’t productively support this work. There is a lack of useful ways to measure outcomes, as opposed to outputs (for example, being asked to account for the number of workshops, rather than the increased capacity or impact that these workshops led to).

Consequently, we need a more developed awareness and understanding of what the signs of success look like, and what the intermediary outcomes (and measures) are, in order to create a shift in accountability and better support ongoing capacity building….

One of the goals of States of Change, the collective we initiated last year to build this capability and culture, is to proactively address the common challenges that innovation practitioners face again and again. The field of public innovation is still emerging and evolving, and so our aim is to inspire action through practice-oriented, collaborative R&D activities and to develop the field based on practice rather than theory….(More)”.

Preprints: The What, The Why, The How.


Center for Open Science: “The use of preprint servers by scholarly communities is definitely on the rise. Many developments in the past year indicate that preprints will be a huge part of the research landscape. Developments with DOIs, changes in funder expectations, and the launch of many new services indicate that preprints will become much more pervasive and reach beyond the communities where they started.

From funding agencies that want to realize impact from their efforts sooner to researchers’ desire to disseminate their research more quickly, the growth of these servers and the number of works being shared have been substantial. At COS, we already host twenty different organizations’ services via the OSF Preprints platform.

So what’s a preprint and what is it good for? A preprint is a manuscript submitted to a dedicated repository (like OSF Preprints, PeerJ, bioRxiv or arXiv) prior to peer review and formal publication. Some of those repositories may also accept other types of research outputs, like working papers and posters or conference proceedings. Getting a preprint out there has a variety of benefits for authors and other stakeholders in the research:

  • They increase the visibility of research, and sooner. While traditional papers can languish in the peer review process for months, even years, a preprint is live the minute it is submitted and moderated (if the service moderates). This means your work gets indexed by Google Scholar and Altmetric, and discovered by more relevant readers than ever before.
  • You can get feedback on your work and make improvements prior to journal submission. Many authors have publicly commented about the recommendations for improvements they’ve received on their preprint that strengthened their work and even led to finding new collaborators.
  • Papers with an accompanying preprint get cited 30% more often than papers without. This research from PeerJ sums it up, but that’s a big benefit for scholars looking to get more visibility and impact from their efforts.
  • Preprints get a permanent DOI, which makes them part of the freely accessible scientific record forever. This means others can rely on that permanence when citing your work in their research. It also means that your idea, developed by you, has a “stake in the ground” where potential scooping and intellectual theft are concerned.
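
Because those permanent DOIs are usually registered with a DOI agency such as Crossref, anyone can resolve a preprint DOI to its citation metadata programmatically. Below is a minimal sketch, assuming the DOI is Crossref-registered and using Crossref’s public REST API; the DOI shown is a placeholder, not a real record.

```python
import requests

CROSSREF_WORKS = "https://api.crossref.org/works/"

def fetch_doi_metadata(doi: str) -> dict:
    """Resolve a DOI to its citation metadata via the public Crossref REST API."""
    response = requests.get(CROSSREF_WORKS + doi, timeout=10)
    response.raise_for_status()
    # Crossref wraps each record in a "message" envelope.
    return response.json()["message"]

if __name__ == "__main__":
    # Placeholder DOI for illustration only; substitute the DOI of an actual preprint.
    record = fetch_doi_metadata("10.1234/example-preprint")
    print(record.get("title"), record.get("URL"))
```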

So, preprints can really help lubricate scientific progress. But there are some things to keep in mind before you post. Usually, you can’t post a preprint of an article that’s already been submitted to a journal for peer review. Policies among journals vary widely, so it’s important to check with the journal you’re interested in sending your paper to BEFORE you submit a preprint that might later be published. A good resource for doing this is JISC’s SHERPA/RoMEO database. It’s also a good idea to understand the licensing choices available. At OSF Preprints, we recommend the CC-BY license suite, but you can check choosealicense.com or https://osf.io/6uupa/ for good overviews on how best to license your submissions….(More)”.

Data Protection and e-Privacy: From Spam and Cookies to Big Data, Machine Learning and Profiling


Chapter by Lilian Edwards in L Edwards (ed), Law, Policy and the Internet (Hart, 2018): “In this chapter, I examine in detail how data subjects are tracked, profiled and targeted by their activities online and, increasingly, in the “offline” world as well. Tracking is part of both commercial and state surveillance, but in this chapter I concentrate on the former. The European law relating to spam, cookies, online behavioural advertising (OBA), machine learning (ML) and the Internet of Things (IoT) is examined in detail, using both the GDPR and the forthcoming draft ePrivacy Regulation. The chapter concludes by examining both code and law solutions which might find a way forward to protect user privacy and still enable innovation, by looking to paradigms not based around consent, and less likely to rely on a “transparency fallacy”. Particular attention is drawn to the new work around Personal Data Containers (PDCs) and distributed ML analytics….(More)”.

Civic Tech: Making Technology Work for People


Book by Andrew Schrock: “The term “Civic Tech” has gained international recognition as a way to unite communities and government through technology design. But what does it mean for our shared future? In this book, Andrew Schrock cuts through the hype by telling stories of the people and ideas driving the movement. He argues that Civic Tech emerged in response to inequality and persistent social problems. The collaborative approaches and early successes of “techies” may not be easy solutions, but they exemplify a powerful political alternative. Civic Tech draws our attention to the challenges of public ownership and democratizing technology design—vital goals for the years ahead….(More)”.

Activating Agency or Nudging?


Article by Michael Walton: “Two ideas in development – activating agency of citizens and using “nudges” to change their behavior – seem diametrically opposed in spirit: activating latent agency at the ground level versus top-down designs that exploit people’s behavioral responses. Yet both start from a psychological focus and a belief that changes in people’s behavior can lead to “better” outcomes, for the individuals involved and for society. So how should we think of these contrasting sets of ideas? When should each approach be used?…

Let’s compare the two approaches with respect to diagnostic frame, practice and ethics.

Diagnostic frame.  

The common ground is the recognition that people use short-cuts for decision-making, in ways that can hurt their own interests. In both approaches, there is an emphasis on how decision-making is particularly tough for poor people, given the sheer weight of daily problem-solving. In behavioral economics, one core idea is that we have limited mental “bandwidth” and this form of scarcity hampers decision-making. However, in the “agency” tradition, there is much more emphasis on unearthing and working with the origins of the prevailing mental models, with respect to social exclusion, stigmatization, and the typically unequal economic and cultural relations with more powerful groups in a society. One approach works more with symptoms, the other with root causes.

Implications for practice.  

The two approaches on display in Cerrito both concern social gains, and both involve a role for an external actor. But here the contrast is sharp. In the “nudge” approach, the external actor is a beneficent technocrat, trying out alternative offers to poor (or non-poor) people to improve outcomes. A vivid example is alternative messages to taxpayers in Guatemala that induce varying improvements in tax payments. In the “agency” approach, the essence of the interaction is between a front-line worker and an individual or family, with a co-created diagnosis and plan, designed around goals and specific actions that the poor person chooses. This is akin to what anthropologist Arjun Appadurai termed increasing the “capacity to aspire,” and can extend to greater engagement in civic and political life.

Ethics.

In both approaches, ethics is central. As the question of “nudging for social good as opposed to electoral gain” implies, some form of ethical regulation is surely needed. In “action to activate agency,” the central ethical issue is one of maintaining equality in design between activist and citizen, and of explicitly owning any decisions.

What does this imply?

To some degree this is a question of domain of action. Nudging is most appropriate in a domain for which there is a fully supported political and social program, and the issue is how to make it work (as in paying taxes). The agency approach has a broader ambition, but starts from domains that are potentially within an individual’s control once the sources of “ineffective” or inhibited behavior are tackled, including via front-line interactions with public or private actors….(More)”.

Data Ethics Framework


Introduction by Matt Hancock MP, Secretary of State for Digital, Culture, Media and Sport, to the UK’s Data Ethics Framework: “Making better use of data offers huge benefits, in helping us provide the best possible services to the people we serve.

However, all new opportunities present new challenges. The pace of technology is changing so fast that we need to make sure we are constantly adapting our codes and standards. Those of us in the public sector need to lead the way.

As we set out to develop our National Data Strategy, getting the ethics right, particularly in the delivery of public services, is critical. To do this, it is essential that we agree collective standards and ethical frameworks.

Ethics and innovation are not mutually exclusive. Thinking carefully about how we use our data can help us be better at innovating when we use it.

Our new Data Ethics Framework sets out clear principles for how data should be used in the public sector. It will help us maximise the value of data whilst also setting the highest standards for transparency and accountability when building or buying new data technology.

We have come a long way since we published the first version of the Data Science Ethical Framework. This new version focuses on the need for technology, policy and operational specialists to work together, so we can make the most of expertise from across disciplines.

We want to work with others to develop transparent standards for using new technology in the public sector, promoting innovation in a safe and ethical way.

This framework will build the confidence in public sector data use needed to underpin a strong digital economy. I am looking forward to working with all of you to put it into practice…. (More)”

The Data Ethics Framework principles

1. Start with clear user need and public benefit

2. Be aware of relevant legislation and codes of practice

3. Use data that is proportionate to the user need

4. Understand the limitations of the data

5. Ensure robust practices and work within your skillset

6. Make your work transparent and be accountable

7. Embed data use responsibly

The Data Ethics Workbook