Meaningful Inefficiencies: Civic Design in an Age of Digital Expediency


Book by Eric Gordon and Gabriel Mugar: “Public trust in the institutions that mediate civic life, from governing bodies to newsrooms, is low. In facing this challenge, many organizations assume that ensuring greater efficiency will build trust. As a result, these organizations are quick to adopt new technologies to enhance what they do, whether it’s a new app or dashboard. However, efficiency, or charting a path to a goal with the least amount of friction, is not itself always built on a foundation of trust.

Meaningful Inefficiencies is about the practices undertaken by civic designers that challenge the normative applications of “smart technologies” in order to build or repair trust with publics. Based on over sixty interviews with change makers in public-serving organizations throughout the United States, as well as detailed case studies, this book provides a practical and deeply philosophical picture of civic life in transition. The designers in this book are not professional designers, but practitioners embedded within organizations who have adopted an approach to public engagement Eric Gordon and Gabriel Mugar call “meaningful inefficiencies,” or the deliberate design of less efficient over more efficient means of achieving some ends. This book illustrates how civic designers are creating meaningful inefficiencies within public-serving organizations. It also encourages a rethinking of how innovation within these organizations is understood, applied, and sought after. Unlike market innovation, civic innovation is not just about invention and novelty; it is concerned with building communities around novelty, and cultivating deep and persistent trust.

At its core, Meaningful Inefficiencies underlines that good civic innovation will never just involve one single public good, but must instead negotiate a plurality of publics. In doing so, it creates the conditions for those publics to play, resulting in people truly caring for the world. Meaningful Inefficiencies thus presents an emergent and vitally needed approach to creating civic life at a moment when smart and efficient are the dominant forces in social and organizational change….(More)”.

What is My Data Worth?


Ruoxi Jia at the Berkeley Artificial Intelligence Research (BAIR) blog: “People give massive amounts of their personal data to companies every day and these data are used to generate tremendous business value. Some economists and politicians argue that people should be paid for their contributions—but the million-dollar question is: by how much?

This article discusses methods proposed in our recent AISTATS and VLDB papers that attempt to answer this question in the machine learning context. This is joint work with David Dao, Boxin Wang, Frances Ann Hubis, Nezihe Merve Gurel, Nick Hynes, Bo Li, Ce Zhang, Costas J. Spanos, and Dawn Song, as well as a collaborative effort between UC Berkeley, ETH Zurich, and UIUC. More information about the work in our group can be found here.

What are the existing approaches to data valuation?

Various ad hoc data valuation schemes have been studied in the literature, and some of them have been deployed in existing data marketplaces. From a practitioner’s point of view, they can be grouped into three categories:

  • Query-based pricing attaches values to user-initiated queries. One simple example is to set the price based on the number of queries allowed during a time window. Other more sophisticated examples attempt to adjust the price according to specific criteria, such as arbitrage avoidance (a toy sketch of this category appears after the list).
  • Data attribute-based pricing constructs a price model that takes into account various parameters, such as data age, credibility, potential benefits, etc. The model is trained to match market prices released in public registries.
  • Auction-based pricing designs auctions that dynamically set the price based on bids offered by buyers and sellers.
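
To make the first category concrete, here is a toy sketch of tiered, query-based pricing over a monthly window. The tier structure and numbers are illustrative assumptions, not drawn from any deployed marketplace; the final check reflects one simple form of arbitrage avoidance, namely that splitting a purchase into smaller bundles should never be cheaper than buying the whole.

```python
# Toy query-based pricing: the unit price falls with bundle size,
# and a spot check confirms there is no "split the order" arbitrage.

TIERS = [          # (bundle-size ceiling, price per query in cents)
    (1_000, 10),   # illustrative numbers only
    (10_000, 8),
    (None, 5),     # no ceiling
]

def bundle_price(n_queries: int) -> int:
    """Price in cents for a bundle of n queries in one monthly window."""
    for ceiling, per_query in TIERS:
        if ceiling is None or n_queries <= ceiling:
            return n_queries * per_query

def no_split_arbitrage(max_n: int = 20_000, step: int = 500) -> bool:
    """Check subadditivity, p(a + b) <= p(a) + p(b), on a grid of sizes."""
    sizes = range(step, max_n + 1, step)
    return all(bundle_price(a + b) <= bundle_price(a) + bundle_price(b)
               for a in sizes for b in sizes)

assert no_split_arbitrage()
```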

However, existing data valuation schemes do not take into account the following important desiderata:

  • Task-specificness: The value of data depends on the task it helps to fulfill. For instance, if Alice’s medical record indicates that she has disease A, then her data will be more useful to predict disease A as opposed to other diseases.
  • Fairness: The quality of data from different sources varies dramatically. In the worst-case scenario, adversarial data sources may even degrade model performance via data poisoning attacks. Hence, the data value should reflect the efficacy of data by assigning high values to data which can notably improve the model’s performance.
  • Efficiency: Practical machine learning tasks may involve thousands or billions of data contributors; thus, data valuation techniques should be capable of scaling up.

With the desiderata above, we now discuss a principled notion of data value and computationally efficient algorithms for data valuation….(More)”.
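
The principled notion the papers build on is the Shapley value from cooperative game theory: a data point's value is its marginal contribution to a task-specific utility, averaged over all possible coalitions of the other points. Below is a minimal Monte Carlo sketch using permutation sampling, the standard approximation; the dummy utility function and all names here are illustrative assumptions, not the papers' exact setup, in which utility would be something like the validation accuracy of a model retrained on a subset.

```python
import random
from statistics import mean

def shapley_estimates(point_ids, utility, n_permutations=200):
    """Monte Carlo (permutation sampling) estimate of Shapley values.

    `utility(subset)` is any task-specific score; the Shapley value of
    a point is its marginal contribution to that score, averaged over
    random orderings of the data.
    """
    contributions = {i: [] for i in point_ids}
    for _ in range(n_permutations):
        order = random.sample(point_ids, k=len(point_ids))
        subset, prev_score = [], utility(())
        for i in order:
            subset.append(i)
            score = utility(tuple(subset))
            contributions[i].append(score - prev_score)
            prev_score = score
    return {i: mean(c) for i, c in contributions.items()}

# Usage with a stand-in utility (real use would retrain a model):
values = shapley_estimates(
    point_ids=list(range(5)),
    utility=lambda s: len(s) / 5,  # dummy: every point equally useful
)
```

Each sampled permutation requires one utility evaluation per prefix, i.e., one model retraining in the realistic setting, which is exactly why scaling to thousands or billions of contributors requires the efficient algorithms the papers develop, such as exact Shapley computation for nearest-neighbor models.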

Dollars for Profs: How to Investigate Professors’ Conflicts of Interest


ProPublica: “When professors moonlight, the income may influence their research and policy views. Although most universities track this outside work, the records have rarely been accessible to the public, potentially obscuring conflicts of interest.

That changed last month when ProPublica launched Dollars for Profs, an interactive database that, for the first time ever, allows you to look up more than 37,000 faculty and staff disclosures from about 20 public universities and the National Institutes of Health.

We believe there are hundreds of stories in this database, and we hope to tell as many as possible. Already, we’ve revealed how the University of California’s weak monitoring of conflicts has allowed faculty members to underreport their outside income, potentially depriving the university of millions of dollars. In addition, using a database of NIH records, we found that health researchers have acknowledged a total of at least $188 million in financial conflicts of interest since 2012.

We hope journalists all over the country will look into the database and find more. Here are tips on digging into the disclosures for local education reporters, college newspaper journalists and anyone else who wants to hold academia accountable….(More)”.

The Case for an Institutionally Owned Knowledge Infrastructure


Article by James W. Weis, Amy Brand and Joi Ito: “Science and technology are propelled forward by the sharing of knowledge. Yet despite their vital importance in today’s innovation-driven economy, our knowledge infrastructures have failed to scale with today’s rapid pace of research and discovery.

For example, academic journals, the dominant dissemination platforms of scientific knowledge, have not been able to take advantage of the linking, transparency, dynamic communication and decentralized authority and review that the internet enables. Many other knowledge-driven sectors, from journalism to law, suffer from a similar bottleneck — caused not by a lack of technological capacity, but rather by an inability to design and implement efficient, open and trustworthy mechanisms of information dissemination.

Fortunately, growing dissatisfaction with current knowledge-sharing infrastructures has led to a more nuanced understanding of the requisite features that such platforms must provide. With such an understanding, higher education institutions around the world can begin to recapture the control and increase the utility of the knowledge they produce.

When the World Wide Web emerged in the 1990s, an era of robust scholarship based on open sharing of scientific advancements appeared inevitable. The internet — initially a research network — promised a democratization of science, universal access to the academic literature and a new form of open publishing that supported the discovery and reuse of knowledge artifacts on a global scale. Unfortunately, however, that promise was never realized. Universities, researchers and funding agencies, for the most part, failed to organize and secure the investment needed to build scalable knowledge infrastructures, and publishing corporations moved in to solidify their position as the purveyors of knowledge.

In the subsequent decade, such publishers have consolidated their hold. By controlling the most prestigious journals, they have been able to charge for access — extracting billions of dollars in subscription fees while barring much of the world from the academic literature. Indeed, some of the world’s wealthiest academic institutions are no longer able or willing to pay the subscription costs required.

Further, by controlling many of the most prestigious journals, publishers have also been able to position themselves between the creation and consumption of research, and so wield enormous power over peer review and metrics of scientific impact. Thus, they are able to significantly influence academic reputation, hiring, promotion, career progression and, ultimately, the direction of science itself.

But signs suggest that the bright future envisioned in the early days of the internet is still within reach. Increasing awareness of, and dissatisfaction with, the many bottlenecks that the commercial monopoly on research information has imposed are stimulating new strategies for developing the future’s knowledge infrastructures. One of the most promising is the shift toward infrastructures created and supported by academic institutions, the original creators of the information being shared, and nonprofit consortia like the Collaborative Knowledge Foundation and the Center for Open Science.

Those infrastructures should fully exploit the technological capabilities of the World Wide Web to accelerate discovery, encourage more research support and better structure and transmit knowledge. By aligning academic incentives with socially beneficial outcomes, such a system could enrich the public while also amplifying the technological and societal impact of investment in research and innovation.

We’ve outlined below the three areas in which a shift to academically owned platforms would yield the highest impact.

  • Truly Open Access
  • Meaningful Impact Metrics
  • Trustworthy Peer Review….(More)”.

Icelandic Citizen Engagement Tool Offers Tips for U.S.


Zack Quaintance at Government Technology: “The world of online discourse was vastly different one decade ago. This was before foreign election meddling, before social media execs were questioned by Congress, and before fighting with cantankerous uncles became an online trope. The world was perhaps more naïve, with a wide-eyed belief in some circles that Internet forums would amplify the voiceless within democracy.

This was the world in which Róbert Bjarnason and his collaborators lived. Based in Iceland, Bjarnason and his team developed a platform in 2010 for digital democracy. It was called Shadow Parliament, and its aim was simply to connect Iceland’s people with its governmental leadership. The platform launched one morning that year, with a comments section for debate. By evening, two users were locked in a deeply personal argument.

“We just looked at each other and thought, this is not going to be too much fun,” Bjarnason recalled recently. “We had just created one more platform for people to argue on.”

Sure, the engagement level was quite high, bringing furious users back to the site repeatedly to launch vitriol, but Shadow Parliament was not fostering the helpful discourse for which it was designed. So, developers scrapped it, pulling from the wreckage lessons to inform future work.

Bjarnason and team, officially a nonprofit called Citizens Foundation, worked for roughly a year, and, eventually, a new platform called Better Reykjavik was born. Better Reykjavik had key differences, chief among them a new debate system with simple tweaks: Citizens must list arguments for and against ideas, and instead of replying to each other directly, they can only down-vote things with which they disagree. This is a design that essentially forces users to create standalone points, rather than volley combative responses at one another, threaded in the fashion of Facebook or Twitter.

“With this framing of it,” Bjarnason said, “we’re not asking people to write the first comment they think of. We’re actually asking people to evaluate the idea.”
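
A minimal data-model sketch can make those two rules concrete. This is an illustration, not Citizens Foundation's actual schema: arguments must take a side and stand alone, and the only reaction the model admits is a down-vote, so combative reply threads simply cannot form.

```python
from dataclasses import dataclass, field

@dataclass
class Argument:
    author_id: str
    text: str
    side: str             # "for" or "against" the idea
    down_votes: int = 0   # the only reaction; note there is no reply_to field

@dataclass
class Idea:
    title: str
    arguments: list[Argument] = field(default_factory=list)

    def add_argument(self, author_id: str, text: str, side: str) -> Argument:
        if side not in ("for", "against"):
            raise ValueError("an argument must stand for or against the idea")
        argument = Argument(author_id, text, side)
        self.arguments.append(argument)
        return argument

    def down_vote(self, argument: Argument) -> None:
        argument.down_votes += 1  # registers disagreement without a counter-thread
```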

One tradeoff is that fury has proven itself to be an incredible driver of traffic, and the site loses that. But what the platform sacrifices in irate engagement, it gains in thoughtful debate. It’s essentially trading anger clicks for coherent discourse, and it’s seen tremendous success within Iceland — where some municipalities report 20 percent citizen usage — as well as throughout the international community, primarily in Europe. All told, Citizens Foundation has now built like-minded projects in 20 countries. And now, it is starting to build platforms for communities in the U.S….(More)”.

The Starving State


Article by Joseph E. Stiglitz, Todd N. Tucker, and Gabriel Zucman at Foreign Affairs: “For millennia, markets have not flourished without the help of the state. Without regulations and government support, the nineteenth-century English cloth-makers and Portuguese winemakers whom the economist David Ricardo made famous in his theory of comparative advantage would have never attained the scale necessary to drive international trade. Most economists rightly emphasize the role of the state in providing public goods and correcting market failures, but they often neglect the history of how markets came into being in the first place. The invisible hand of the market depended on the heavier hand of the state.

The state requires something simple to perform its multiple roles: revenue. It takes money to build roads and ports, to provide education for the young and health care for the sick, to finance the basic research that is the wellspring of all progress, and to staff the bureaucracies that keep societies and economies in motion. No successful market can survive without the underpinnings of a strong, functioning state.

That simple truth is being forgotten today. In the United States, total tax revenues paid to all levels of government shrank by close to four percent of national income over the last two decades, from about 32 percent in 1999 to approximately 28 percent today, a decline unique in modern history among wealthy nations. The direct consequences of this shift are clear: crumbling infrastructure, a slowing pace of innovation, a diminishing rate of growth, booming inequality, shorter life expectancy, and a sense of despair among large parts of the population. These consequences add up to something much larger: a threat to the sustainability of democracy and the global market economy….(More)”.

Will Artificial Intelligence Eat the Law? The Rise of Hybrid Social-Ordering Systems


Paper by Tim Wu: “Software has partially or fully displaced many former human activities, such as catching speeders or flying airplanes, and has proven itself able to surpass humans in certain contests, like chess and Jeopardy. What are the prospects for the displacement of human courts as the centerpiece of legal decision-making?

Based on the case study of hate speech control on major tech platforms, particularly on Twitter and Facebook, this Essay suggests that the displacement of human courts remains a distant prospect, that hybrid machine–human systems are the predictable future of legal adjudication, and that there lies some hope in that combination, if done well….(More)”.
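
The hybrid systems the Essay points to can be pictured as a triage loop: a classifier decides the confident cases at machine speed and routes the contested middle band to human reviewers. The sketch below is a schematic illustration of that division of labor; the thresholds and the classifier interface are assumptions, not a description of Twitter's or Facebook's actual pipelines.

```python
AUTO_REMOVE = 0.95   # classifier confident the post violates policy
AUTO_ALLOW = 0.05    # classifier confident the post is fine

def route(post_text: str, classifier) -> str:
    """Return 'removed', 'allowed', or 'human_review' for a post.

    `classifier(text)` is assumed to return a probability of a
    policy violation in [0, 1].
    """
    p_violation = classifier(post_text)
    if p_violation >= AUTO_REMOVE:
        return "removed"         # machine handles the clear positives
    if p_violation <= AUTO_ALLOW:
        return "allowed"         # ...and the clear negatives
    return "human_review"        # people adjudicate the hard middle
```

The width of that middle band is the governance lever: narrowing it cuts review costs but hands more contested speech decisions to the machine, which is precisely the trade-off at the heart of the Essay.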

Bridging the Elite-Grassroots Divide Among Anticorruption Activists


Abigail Bellows at the Carnegie Endowment for International Peace: “Corruption-fueled political change is occurring at a historic rate—but is not necessarily producing the desired systemic reforms. There are many reasons for this, but one is the dramatic dissipation of public momentum after a transition. In countries like Armenia, the surge in civic participation that generated 2018’s Velvet Revolution largely evaporated after the new government assumed power. That sort of civic demobilization makes it difficult for government reformers, facing stubbornly entrenched interests, to enact a transformative agenda.

The dynamics in Armenia reflect a trend across the anticorruption landscape, which is also echoed in other sectors. As the field has become more professionalized, anticorruption nongovernment organizations (NGOs) have developed the legal and technical expertise to serve as excellent counterparts/watchdogs for government. Yet this strength can also be a hurdle when it comes to building credibility with the everyday people they seek to represent. The result is a disconnect between elite and grassroots actors, which is problematic at multiple levels:

  • Technocratic NGOs lack the “people power” to advance their policy recommendations and are exposed to attack as illegitimate or foreign-sponsored.
  • Grassroots networks struggle to turn protest energy into targeted demands and lasting reform, which can leave citizens frustrated and disillusioned about democracy itself.
  • Government reformers lack the sustained popular mandate to deliver on the ambitious agenda they promised, leaving them politically vulnerable to the next convulsion of public anger at corruption.

Two strategies can help civil society address this challenge. First, organizations can seek to hybridize, with in-house capacities for both policy analysis and mass mobilization. Alternatively, organizations can build formal or informal coalitions between groups operating at the elite and grassroots levels, respectively. Both strategies pose challenges: learning new skills, weaving together distinct organizational cultures and methodologies, and defining demands that are both technically sound and publicly appealing. In many instances, coalition-building will be an easier road, given that it does not require altering internal organizational and personnel structures. Political windows of opportunity on anticorruption may lend urgency to this difficult task and help crystallize what both sides have to gain from increased partnership….(More)”.

Trusted smart statistics: Motivations and principles


Paper by Fabio Ricciato et al.: “In this contribution we outline the concept of Trusted Smart Statistics as the natural evolution of official statistics in the new datafied world. Traditional data sources, namely survey and administrative data, represent nowadays a valuable but small portion of the global data stock, much of which is held in the private sector. The availability of new data sources is only one aspect of the global change that concerns official statistics. Other aspects, more subtle but no less important, include the changes in perceptions, expectations, behaviours and relations between the stakeholders. The environment around official statistics has changed: statistical offices are no longer data monopolists, but one prominent species among many others in a larger (and complex) ecosystem. What was established in the traditional world of legacy data sources (in terms of regulations, technologies, practices, etc.) is no longer guaranteed to be sufficient with new data sources.

Trusted Smart Statistics is not about replacing existing sources and processes, but augmenting them with new ones. Such augmentation, however, will not be merely incremental: the path towards Trusted Smart Statistics is not about tweaking some components of the legacy system but about building an entirely new system that will coexist with the legacy one. In this position paper we outline some key design principles for the new Trusted Smart Statistics system. Taken collectively, they picture a system where the smart and trust aspects enable and reinforce each other, a system that is more extroverted towards external stakeholders (citizens, private companies, public authorities) with whom Statistical Offices will be sharing computation, control, code, logs and of course final statistics, without necessarily sharing the raw input data….(More)”.
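
One way to picture "sharing computation, control, code, logs" is to invert the usual flow: the statistical office publishes the code of an aggregate computation, the private data holder runs it in-house, and only the aggregate plus an audit-log entry travel back. The sketch below illustrates that principle under assumed names and data fields; it is not a system the paper specifies.

```python
# Illustrative sketch of "move the code to the data": raw records
# stay with the data holder; only an aggregate and a log entry leave.

import hashlib
import json
import statistics
from datetime import datetime, timezone

def published_statistic(records: list[dict]) -> float:
    """Code published by the statistical office: a median trip duration."""
    return statistics.median(r["duration_min"] for r in records)

def run_at_data_holder(records: list[dict], source_code: str):
    """Executed inside the data holder's own infrastructure.

    Returns the aggregate result together with a log entry (hash of
    the executed code, record count, timestamp) for a shared audit
    trail; the raw input data never leave the premises.
    """
    result = published_statistic(records)
    log_entry = json.dumps({
        "code_sha256": hashlib.sha256(source_code.encode()).hexdigest(),
        "n_records": len(records),
        "ran_at": datetime.now(timezone.utc).isoformat(),
    })
    return result, log_entry
```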

Towards adaptive governance in big data health research: implementing regulatory principles


Chapter by Alessandro Blasimme and Effy Vayena: “While data-enabled health care systems are in their infancy, biomedical research is rapidly adopting the big data paradigm. Digital epidemiology, for example, already employs data generated outside the public health care system – that is, data generated without the intent of using them for epidemiological research – to understand and prevent patterns of disease in populations (Salathé 2018). Precision medicine – pooling together genomic, environmental and lifestyle data – also represents a prominent example of how data integration can drive both fundamental and translational research in important medical domains such as oncology (D. C. Collins et al. 2017). All of this requires the collection, storage, analysis and distribution of massive amounts of personal information as well as the use of state-of-the-art data analytics tools to uncover health- and disease-related patterns.

The realization of the potential of big data in health evokes a necessary commitment to a sense of “continuity” articulated in three distinct ways: a) from data generation to use (as in the data-enabled learning health care system); b) from research to clinical practice, e.g. the discovery of new mutations in the context of diagnostics; c) from health data in the strict sense (Vayena and Gasser 2016), e.g. clinical records, to less typical sources, e.g. tweets used in digital epidemiology. These continuities face the challenge of regulatory and governance approaches that were designed for clear data taxonomies, for a less blurred boundary between research and clinical practice, and for rules that focused mostly on data generation and less on their eventual and multiple uses.

The result is significant uncertainty about how responsible use of such large amounts of sensitive personal data could be fostered. In this chapter we focus on the uncertainties surrounding the use of biomedical big data in the context of health research. Are new criteria needed to review biomedical big data research projects? Do current mechanisms, such as informed consent, offer sufficient protection to research participants’ autonomy and privacy in this new context? Do existing oversight mechanisms ensure transparency and accountability in data access and sharing? What monitoring tools are available to assess how personal data are used over time? Is the equitable distribution of benefits accruing from such data uses considered, or can it be ensured? How is the public being involved – if at all – with decisions about creating and using large data repositories for research purposes? What is the role that IT (information technology) players, and especially big ones, acquire in research? And what regulatory instruments do we have to ensure that such players do not undermine the independence of research?…(More)”.