Giving Voice to Patients: Developing a Discussion Method to Involve Patients in Translational Research


Paper by Marianne Boenink, Lieke van der Scheer, Elisa Garcia and Simone van der Burg in NanoEthics: “Biomedical research policy in recent years has often tried to make such research more ‘translational’, aiming to facilitate the transfer of insights from research and development (R&D) to health care for the benefit of future users. Involving patients in deliberations about and design of biomedical research may increase the quality of R&D and of resulting innovations and thus contribute to translation. However, patient involvement in biomedical research is not an easy feat. This paper discusses the development of a method for involving patients in (translational) biomedical research aiming to address its main challenges.

After reviewing the potential challenges of patient involvement, we formulate three requirements for any method to meaningfully involve patients in (translational) biomedical research. It should enable patients (1) to put forward their experiential knowledge, (2) to develop a rich view of what an envisioned innovation might look like and do, and (3) to connect their experiential knowledge with the envisioned innovation. We then describe how we developed the card-based discussion method ‘Voice of patients’, and discuss to what extent the method, when used in four focus groups, satisfied these requirements. We conclude that the method is quite successful in mobilising patients’ experiential knowledge, in stimulating their imaginaries of the innovation under discussion and to some extent also in connecting these two. More work is needed to translate patients’ considerations into recommendations relevant to researchers’ activities. It also seems wise to broaden the audience for patients’ considerations to other actors working on a specific innovation….(More)”

Explaining Explanations in AI


Paper by Brent Mittelstadt, Chris Russell and Sandra Wachter: “Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it’s important to remember Box’s maxim that “All models are wrong but some are useful.”

We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a “do it yourself kit” for explanations, allowing a practitioner to directly answer “what if questions” or generate contrastive explanations without external assistance. Although a valuable ability, giving these models as explanations appears more difficult than necessary, and other forms of explanation may not have the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly… (More)”.
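
The “do it yourself kit” idea can be made concrete with a generic global-surrogate sketch: a shallow decision tree is fitted to the predictions of a more complex classifier, and a practitioner can then read its rules and probe “what if” questions directly. The dataset, model choices and perturbation below are illustrative assumptions, not the authors’ own method.

```python
# Illustrative sketch of a simplified "surrogate" model (a generic technique),
# not the approach proposed by Mittelstadt, Russell and Wachter.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# The "complex system" whose decisions a practitioner wants to anticipate.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The simplified model is trained on the black box's outputs rather than the
# true labels, so it approximates the criteria the complex system appears to use.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The rules can be read directly ("all models are wrong but some are useful")...
print(export_text(surrogate, feature_names=list(data.feature_names)))

# ...and probed without external assistance: perturb a feature and compare
# the surrogate's answers to ask a contrastive, "what if" question.
x = X[:1].copy()
x_altered = x.copy()
x_altered[0, 0] *= 1.5
print("original:", surrogate.predict(x)[0], "| altered:", surrogate.predict(x_altered)[0])
```

Such a tree is not itself the explanation the paper argues for; it simply illustrates the kind of simplified, pedagogical model whose trade-offs the authors contrast with other forms of explanation.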

Recalculating GDP for the Facebook age


Gillian Tett at the Financial Times: “How big is the impact of Facebook on our lives? That question has caused plenty of hand-wringing this year, as revelations have tumbled out about the political influence of Big Tech companies.

Economists are attempting to look at this question too — but in a different way. They have been quietly trying to calculate the impact of Facebook on gross domestic product data, ie to measure what our social-media addiction is doing to economic output….

Kevin Fox, an Australian economist, thinks there is. Working with four other economists, including Erik Brynjolfsson, a professor at MIT, he recently surveyed consumers to see what they would “pay” for Facebook in monetary terms, concluding conservatively that this was about $42 a month. Extrapolating this to the wider economy, he then calculated that the “value” of the social-media platform is equivalent to 0.11 per cent of US GDP. That might not sound transformational. But this week Fox presented the group’s findings at an IMF conference on the digital economy in Washington DC and argued that if Facebook activity had been counted as output in the GDP data, it would have raised the annual average US growth rate from 1.83 per cent to 1.91 per cent between 2003 and 2017. The number would rise further if you included other platforms – researchers believe that “maps” and WhatsApp are particularly important – or other services. Take photographs.

Back in 2000, as the group points out, about 80 billion photos were taken each year at a cost of 50 cents a picture in camera and processing fees. This was recorded in GDP. Today, 1.6 trillion photos are taken each year, mostly on smartphones, for “free”, and excluded from that GDP data. What would happen if that was measured too, along with other types of digital services?
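
A rough back-of-the-envelope sketch of the figures quoted above; it reuses only the numbers reported in the article and is purely illustrative, not the GDP-B methodology that Fox, Brynjolfsson and colleagues actually use.

```python
# Back-of-the-envelope arithmetic from the article's own figures (illustrative only).

# Photographs: what was recorded in GDP around 2000 versus what goes uncounted now.
photos_2000 = 80e9        # photos taken per year around 2000
fee_per_photo = 0.50      # camera and processing fees per picture, counted in GDP
photos_now = 1.6e12       # photos taken per year today, mostly "free" on smartphones

print(f"Photo spending in 2000 GDP: ${photos_2000 * fee_per_photo / 1e9:.0f}bn")
print(f"Photos taken now vs 2000: {photos_now / photos_2000:.0f}x, largely uncounted")

# Growth rates: what counting Facebook "output" would have implied for 2003-2017.
years = 2017 - 2003
official, adjusted = 0.0183, 0.0191   # average annual US GDP growth rates quoted above
extra_level = (1 + adjusted) ** years / (1 + official) ** years - 1
print(f"Implied extra GDP level after {years} years: {extra_level:.1%}")
```

On these inputs the 2000 photo spending comes to roughly $40bn, today’s photo volume is about 20 times larger, and the 0.08 percentage-point difference in growth rates compounds to a GDP level a little over 1 per cent higher by 2017.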

The bad news is that there is no consensus among economists on this point, and the debate is still at a very early stage. … A separate paper from Charles Hulten and Leonard Nakamura, economists at the University of Maryland and Philadelphia Fed respectively, explained another idea: a measurement known as “EGDP” or “Expanded GDP”, which incorporates “welfare” contributions from digital services. “The changes wrought by the digital revolution require changes to official statistics,” they said.

Yet another paper from Nakamura, co-written with Diane Coyle of Cambridge University, argued that we should also reconfigure the data to measure how we “spend” our time, rather than “just” how we spend our money. “To recapture welfare in the age of digitalisation, we need shadow prices, particularly of time,” they said.

Meanwhile, US government number-crunchers have been trying to measure the value of “free” open-source software, such as R, Python, Julia and JavaScript, concluding that if captured in statistics these would be worth about $3bn a year. Another team of government statisticians has been trying to value the data held by companies, estimating – using one method – that Amazon’s data is currently worth $125bn, with a 35 per cent annual growth rate, while Google’s is worth $48bn, growing at 22 per cent each year. It is unlikely that these numbers – and methodologies – will become mainstream any time soon….(More)”.

The soft spot of hard code: blockchain technology, network governance and pitfalls of technological utopianism


Moritz Hutten at Global Networks: “The emerging blockchain technology is expected to contribute to the transformation of ownership, government services and global supply chains. By analysing a crisis that occurred with one of its frontrunners, Ethereum, in this article I explore the discrepancies between the purported governance of blockchains and the de facto control of them through expertise and reputation. Ethereum is also thought to exemplify libertarian techno‐utopianism.

When ‘The DAO’, a highly publicized but faulty crowd‐funded venture fund, was deployed on the Ethereum blockchain, the techno‐utopianism was suspended, and developers fell back on strong network ties. Now that blockchain technology is seeing increasing uptake, I shall also seek to unearth broader implications of the blockchain for the proliferation or blockage of global finance and beyond. Contrasting claims about the disruptive nature of the technology, in this article I show that, by redeeming the positive utopia of ontic, individualized debt, blockchains reinforce our belief in a crisis‐ridden, financialized capitalism….(More)”.

Digital Technologies for Transparency in Public Investment: New Tools to Empower Citizens and Governments


Paper by Theodore Kahn, Alejandro Baron and Juan Cruz Vieyra: “Improving infrastructure and basic services is a central task in the region’s growth and development agenda. Despite the importance of private sector participation, governments will continue to play a defining role in planning, financing, executing, and overseeing key infrastructure projects and service delivery. This reality puts a premium on the efficient and transparent management of public investment, especially in light of the considerable technical, administrative, and political challenges and vulnerability to corruption and rent-seeking associated with large public works.

The recent spate of corruption scandals surrounding public procurement and infrastructure projects in the region underscores the urgency of this agenda. The emergence of new digital technologies offers powerful tools for governments and citizens in the region to improve the transparency and efficiency of public investment. This paper examines the challenges of building transparent public investment management systems, both conceptually and in the specific case of Latin America and the Caribbean, and highlights how a suite of new technological tools can improve the implementation of infrastructure projects and public services. The discussion is informed by the experience of the Inter-American Development Bank in designing and implementing the MapaInversiones platform. The paper concludes with several concrete policy recommendations for the region…. (More)”

Using Data to Raise the Voices of Working Americans


Ida Rademacher at the Aspen Institute: “…At the Aspen Institute Financial Security Program, we sense a growing need to ground these numbers in what people experience day-to-day. We’re inspired by projects like the Financial Diaries that helped create empathy for what the statistics mean. …the Diaries was a time-delimited project, and the insights we can gain from major banking institutions are somewhat limited in their ability to show the challenges of economically marginalized populations. That’s why we’ve recently launched a consumer insights initiative to develop and translate a more broadly sourced set of data that lifts the curtain on the financial lives of low- and moderate-income US consumers. What does it really mean to lack $400 when you need it? How do people cope? What are the aspirations and anxieties that fuel choices? Which strategies work and which fall flat? Our work exists to focus the dialogue about financial insecurity by keeping an ear to the ground and amplifying what we hear. Our ultimate goal: Inspire new solutions that react to reality, ones that can genuinely improve the financial well-being of many.

Our consumer insights initiative sees power in partnerships and collaboration. We’re building a big tent for a range of actors to query and share what their data says: private sector companies, public programs, and others who see unique angles into the financial lives of low- and moderate-income households. We are creating a new forum to lift up these firms serving consumers – and in doing so, we’re raising the voices of consumers themselves.

One example of this work is our Consumer Insights Collaborative (CIC), a group of nine leading non-profits from across the country. Each has a strong sense of challenges and opportunities on the ground because every day their work brings them face-to-face with a wide array of consumers, many of whom are low- and moderate-income families. And most already work independently to learn from their data. Take EARN and its Big Data on Small Savings project; the Financial Clinic’s insights series called Change Matters; Mission Asset Fund’s R&D Lab focused on human-centered design; and FII, which uses data collection as part of its main service.

Through the CIC, they join forces to see more than any one nonprofit can on their own. Together CIC members articulate common questions and synthesize collective answers. In the coming months we will publish a first-of-its-kind report on a jointly posed question: What are the dimensions and drivers of short-term financial stability?

An added bonus of partnerships like the CIC is the community of practice that naturally emerges. We believe that data scientists from all walks can, and indeed must, learn from each other to have the greatest impact. Our initiative especially encourages cooperative capacity-building around data security and privacy. We acknowledge that as access to information grows, so does the risk to consumers themselves. We endorse collaborative projects that value ethics, respect, and integrity as much as they value cross-organizational learning.

As our portfolio grows, we will invite an even broader network to engage. We’re already working with NEST Insights to draw on NEST’s extensive administrative data on retirement savings, with an aim to understand more about the long-term implications of non-traditional work and unstable household balance sheets on financial security….(More)”.

Crowdlaw: Collective Intelligence and Lawmaking


Paper by Beth Noveck in Analyse & Kritik: “To tackle the fast-moving challenges of our age, law and policymaking must become more flexible, evolutionary and agile. Thus, in this Essay we examine ‘crowdlaw’, namely how city councils at the local level and parliaments at the regional and national level are turning to technology to engage with citizens at every stage of the law and policymaking process.

As we hope to demonstrate, crowdlaw holds the promise of improving the quality and effectiveness of outcomes by enabling policymakers to interact with a broader public using methods designed to serve the needs of both institutions and individuals. Crowdlaw is less a prescription for more deliberation to ensure greater procedural legitimacy by having better inputs into lawmaking processes than a practical demand for more collaborative approaches to problem-solving that yield better outputs, namely policies that achieve their intended aims. However, as we shall explore, the projects that most enhance the epistemic quality of lawmaking are those that are designed to meet the specific informational needs for that stage of problem-solving….(More)”.

Parliament and the people


Report by Rebecca Rumbul, Gemma Moulder, and Alex Parsons at mySociety: “The publication and dissemination of parliamentary information in developed countries has been shown to improve citizen engagement in governance and reduce the distance between the representative and the represented. While it is clear that these channels are being used, it is not clear how they are being used, or why some digital tools achieve greater reach or influence than others.

With the support of the Indigo Trust, mySociety has undertaken research to better understand how digital tools for parliamentary openness and engagement are operating in Sub-Saharan Africa, and how future tools can be better designed and targeted to achieve greater social impact. Read the executive summary of the report’s conclusions.

The report provides an analysis of the data and digital landscapes of four case study countries in Sub-Saharan Africa (Kenya, Nigeria, South Africa and Uganda), and interrogates how digital channels are being used in those countries to create and disseminate information on parliamentary activity. It examines the existing academic and practitioner literature in this field, compares and contrasts the landscape in each case study country, and provides a thematic overview of common and relevant factors in the operation of digital platforms for democratic engagement in parliamentary activity…(More)”.

Democracy is an information system


Bruce Schneier on Security: “That’s the starting place of our new paper: “Common-Knowledge Attacks on Democracy.” In it, we look at democracy through the lens of information security, trying to understand the current waves of Internet disinformation attacks. Specifically, we wanted to explain why the same disinformation campaigns that act as a stabilizing influence in Russia are destabilizing in the United States.

The answer revolves around the different ways autocracies and democracies work as information systems. We start by differentiating between two types of knowledge that societies use in their political systems. The first is common political knowledge, which is the body of information that people in a society broadly agree on. People agree on who the rulers are and what their claim to legitimacy is. People agree broadly on how their government works, even if they don’t like it. In a democracy, people agree about how elections work: how districts are created and defined, how candidates are chosen, and that their votes count — even if only roughly and imperfectly.

We contrast this with a very different form of knowledge that we call contested political knowledge, which is, broadly, things that people in society disagree about. Examples are easy to bring to mind: how much of a role the government should play in the economy, what the tax rules should be, what sorts of regulations are beneficial and what sorts are harmful, and so on.

This seems basic, but it gets interesting when we contrast both of these forms of knowledge across autocracies and democracies. These two forms of government have incompatible needs for common and contested political knowledge.

For example, democracies draw upon the disagreements within their population to solve problems. Different political groups have different ideas of how to govern, and those groups vie for political influence by persuading voters. There is also long-term uncertainty about who will be in charge and able to set policy goals. Ideally, this is the mechanism through which a polity can harness the diversity of perspectives of its members to better solve complex policy problems. When no-one knows who is going to be in charge after the next election, different parties and candidates will vie to persuade voters of the benefits of different policy proposals.

But in order for this to work, there needs to be common knowledge both of how government functions and how political leaders are chosen. There also needs to be common knowledge of who the political actors are, what they and their parties stand for, and how they clash with each other. Furthermore, this knowledge is decentralized across a wide variety of actors — an essential element, since ordinary citizens play a significant role in political decision making.

Contrast this with an autocracy….(More)”.

Driven to safety — it’s time to pool our data


Kevin Guo at TechCrunch: “…Anyone with experience in the artificial intelligence space will tell you that quality and quantity of training data is one of the most important inputs in building real-world-functional AI. This is why today’s large technology companies continue to collect and keep detailed consumer data, despite recent public backlash. From search engines, to social media, to self-driving cars, data — in some cases even more than the underlying technology itself — is what drives value in today’s technology companies.

It should be no surprise then that autonomous vehicle companies do not publicly share data, even in instances of deadly crashes. When it comes to autonomous vehicles, the public interest (making safe self-driving cars available as soon as possible) is clearly at odds with corporate interests (making as much money as possible on the technology).

We need to create industry and regulatory environments in which autonomous vehicle companies compete based upon the quality of their technology — not just upon their ability to spend hundreds of millions of dollars to collect and silo as much data as possible (yes, this is how much gathering this data costs). In today’s environment the inverse is true: autonomous car manufacturers are focusing on gathering as many miles of data as possible, with the intention of feeding more information into their models than their competitors, all the while avoiding working together….

The complexity of this data is diverse, yet public — I am not suggesting that people hand over private, privileged data, but actively pool and combine what the cars are seeing. There’s a reason that many of the autonomous car companies are driving millions of virtual miles — they’re attempting to get as much active driving data as they can. Beyond the fact that they drove those miles, what truly makes that data something that they have to hoard? By sharing these miles, by seeing as much of the world in as much detail as possible, these companies can focus on making smarter, better autonomous vehicles and bring them to market faster.

If you’re reading this and thinking it’s deeply unfair, I encourage you to once again consider that 40,000 people are preventably dying every year in America alone. If you are not compelled by the massive life-saving potential of the technology, consider that publicly licensable self-driving data sets would accelerate innovation by removing a substantial portion of the capital barrier-to-entry in the space and increasing competition….(More)”