Lawmakers’ use of scientific evidence can be improved


Paper by D. Max Crowley et al: “This study is an experimental trial that demonstrates the potential for formal outreach strategies to change congressional use of research. Our results show that collaboration between policy and research communities can change policymakers’ value of science and result in legislation that appears to be more inclusive of research evidence. The findings of this study also demonstrated changes in researchers’ knowledge and motivation to engage with policymakers as well as their actual policy engagement behavior. Together, the observed changes in both policymakers and researchers randomized to receive an intervention for supporting legislative use of research evidence (i.e., the Research-to-Policy Collaboration model) provide support for the underlying theories around the social nature of research translation and evidence use….(More)”.

How ‘Good’ Social Movements Can Triumph over ‘Bad’ Ones


Essay by Gilda Zwerman and Michael Schwartz: “…How, then, can we judge which movement was the “good” one and which the “bad”?

The answer can be found in the sociological study of social movements. Over decades of focused research, the field has demonstrated that evaluating the moral compass of individual participants does little to advance our understanding of the morality or the actions of a large movement. Only by assessing the goals, tactics and outcomes of movements as collective phenomena can we begin to discern the distinction between “good” and “bad” movements.

Modern social movement theory developed from foundational studies by several generations of scholars, notably W.E.B. Du Bois, Ida B. Wells, C.L.R. James, E.P. Thompson, Eric Hobsbawm, Charles Tilly and Howard Zinn. Their works analyzing “large” historical processes provided later social scientists with three working propositions.

First, the morality of a movement is measured by the type of change it seeks. “Good” movements are emancipatory: they seek to pressure institutional authorities into reducing systemic inequality, extending democratic rights to previously excluded groups, and alleviating material, social, and political injustices. “Bad” movements tend to be reactionary. They arise in response to good movements and they seek to preserve or intensify the exclusionary structures, laws and policies that the emancipatory movements are challenging.

Second, large-scale institutional changes that broaden freedom or advance the cause of social justice are rarely initiated by institutional authorities or political elites. Rather, most social progress is the result of pressure exerted from the bottom up, by ordinary people who press for reform by engaging in collective and creative disorders outside the bounds of mainstream institutions.

And third, good intentions—aspiring to achieve emancipatory goals—by no means guarantee that a movement will succeed.

The highly popular and emancipatory protests of the 1960s, as well as the influence of groundbreaking works in social history mentioned above, inspired a renaissance in the study of social movements in subsequent decades. Focusing primarily on “good” movements, a new generation of social scientists sought to identify the environmental circumstances, organizational features and strategic choices that increased the likelihood that “good intentions” would translate into tangible change. This research has generated a rich trove of findings:…(More)”.

A New Portal for the Decentralized Web and its Guiding Principles


Internet Archive: “For a long time, we’ve felt that the growing, diverse, global community interested in building the decentralized Web needed an entry point. A portal into the events, concepts, voices, and resources critical to moving the Decentralized Web forward.

This is why we created getdweb.net to serve as a portal, a welcoming entry point for people to learn and share strategies, analysis, and tools around how to build a decentralized Web.


It began at DWeb Camp 2019, when designer Iryna Nezhynska of Jolocom led a workshop to imagine what form that portal should take. Over the next 18 months, Iryna steered a dedicated group of DWeb volunteers through a process to create this new website. If you are new to the DWeb, it should help you learn about its core concepts. If you are a seasoned coder, it should point you to opportunities nearby. For our nine local nodes, it should be a clearinghouse and archive for past and future events.

Above all, the new website was designed to clearly state the principles we believe in as a community, the values we are trying to build right into the code.

At our February DWeb Meetup, our designer Iryna took us on a tour of the new website and the design concepts that support it.

Then John Ryan and I (Associate Producer of DWeb Projects) shared the first public version of the Principles of the DWeb and described the behind-the-scenes process that went into developing them. The principles were developed in consultation with dozens of community members, including technologists, organizers, academics, policy experts, and artists. These DWeb Principles are a starting point, not an end point — open for iteration.

As stewards, we felt that we needed to crystallize the shared vision of this community, to demonstrate how and why we are building a Decentralized Web. Our aim is to identify our guiding principles through discussion, distill them into a living document that we can point to, and create a set of practical guiding values as we design and build the Web of the future….(More)”.

Sustainable mobility: Policy making for data sharing


WBCSD report: “The demand for mobility will grow significantly in the coming years, but our urban transportation systems are at their limits. Increasing digitalization and data sharing in urban mobility can help governments and businesses to respond to this challenge and accelerate the transition toward sustainability. There is an urgent need for greater policy coherence in data-sharing ecosystems and governments need to adopt a more collaborative approach toward policy making.

With well-orchestrated policies, data sharing can result in shared value for public and private sectors and support the achievement of sustainability goals. Data-sharing policies should also aim to minimize risks around privacy and cybersecurity, minimize mobility biases rooted in race, gender and age, prevent the creation of runaway data monopolies and bridge the widening data divide.

This report outlines a global policy framework and practical guidance for policy making on data sharing. The report offers multiple case studies from across the globe to document emerging good practices and policy suggestions, recognizing the hyperlocal context of mobility needs and policies, the nascent state of the data-sharing market and limited evidence from regulatory practices….(More)”

The speed of science


Essay by Saloni Dattani & Nathaniel Bechhofer: “The 21st century has seen some phenomenal advances in our ability to make scientific discoveries. Scientists have developed new technology to build vaccines swiftly, new algorithms to predict the structure of proteins accurately, new equipment to sequence DNA rapidly, and new engineering solutions to harvest energy efficiently. But in many fields of science, reliable knowledge and progress advance staggeringly slowly. What slows them down? And what can we learn from individual fields of science to pick up the pace across the board – without compromising on quality?

By and large, scientific research is published in journals in the form of papers – static documents that do not update with new data or new methods. Instead of sharing the data and the code that produces their results, most scientists simply publish a textual description of their research in online publications. These publications are usually hidden behind paywalls, making it harder for outsiders to verify their authenticity.

When a reader does spot a discrepancy in the data or an error in the methods, they must read the intricate details of a study’s method scrupulously, and cross-check the statistics manually. When scientists don’t openly share the data that produce their results, the task becomes even harder. The process of error correction – from scientists publishing a paper, to readers spotting errors, to having the paper corrected or retracted – can take years, assuming those errors are spotted at all.

When scientists reference previous research, they cite entire papers, not specific results or values from them. And although there is evidence that scientists hold back from citing papers once they have been retracted, the problem is compounded over time – consider, for example, a researcher who cites a study that itself derives its data or assumptions from prior research that has been disputed, corrected or retracted. The longer it takes to sift through the science, to identify which results are accurate, the longer it takes to gather an understanding of scientific knowledge.

What makes the problem even more challenging is that flaws in a study are not necessarily mathematical errors. In many situations, researchers make fairly arbitrary decisions as to how they collect their data, which methods they apply to analyse them, and which results they report – altogether leaving readers blind to the impact of these decisions on the results.

This murkiness can result in what is known as p-hacking: when researchers selectively apply arbitrary methods in order to achieve a particular result. For example, in a study that compares the well-being of overweight people to that of underweight people, researchers may find that certain cut-offs of weight (or certain subgroups in their sample) provide the result they’re looking for, while others don’t. And they may decide to only publish the particular methods that provided that result…(More)”.
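
To make the mechanics concrete, here is a minimal Python simulation — a hypothetical sketch, not taken from the essay — of how trying many arbitrary weight cut-offs on data with no real effect can still turn up a “significant” p-value:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated sample: weight and well-being are independent by construction,
# so there is no true effect to find.
n = 500
weight = rng.normal(70, 15, n)        # body weight in kg
wellbeing = rng.normal(50, 10, n)     # arbitrary well-being score

# A researcher "tries" several arbitrary cut-offs separating heavier from
# lighter participants and keeps only the most favourable p-value.
p_values = []
for cutoff in range(55, 90, 5):
    heavier = wellbeing[weight >= cutoff]
    lighter = wellbeing[weight < cutoff]
    p_values.append(stats.ttest_ind(heavier, lighter, equal_var=False).pvalue)

# The smallest of many correlated tests is biased downward: with enough
# arbitrary analysis choices it can dip below 0.05 purely by chance.
print(min(p_values))
```

Because the cut-offs are chosen after seeing the data and only the favourable one is reported, the smallest p-value no longer carries the error-rate guarantees that a single pre-registered analysis would have.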

Governance Innovation ver.2: A Guide to Designing and Implementing Agile Governance


Draft report by the Ministry of Economy, Trade and Industry (METI): “Japan has been aiming to realize “Society 5.0,” a policy for building a human-centric society that achieves both economic development and solutions to social challenges by taking advantage of cyber-physical systems (CPSs), in which cyberspace (including AI, IoT and big data) and physical space are integrated in a sophisticated manner. In advancing the social implementation of innovative technologies toward the realization of Society 5.0, it is considered necessary to fundamentally reform governance models in view of the changes in social structures that new technologies may bring about.

Prompted by this awareness, at the G20 Ministerial Meeting on Trade and Digital Economy, which Japan hosted in June 2019, the ministers declared in the ministerial statement the need for “governance innovation” tailored to the social changes that digital technologies and their social implementation will bring about.

In light of this, METI inaugurated its Study Group on a New Governance Model in Society 5.0 (hereinafter referred to as the “study group”) and in July 2020, the study group published a report titled “GOVERNANCE INNOVATION: Redesigning Law and Architecture for Society 5.0” (hereinafter referred to as the “first report”). The first report explains ideal approaches to cross-sectoral governance by multiple stakeholders, including goal-based regulations, the importance of businesses fulfilling their accountability, and enforcement of laws with an emphasis on incentives.

Against this backdrop, the study group, taking into consideration the outcomes of the first report, presented approaches to “agile governance” as the idea underlying the governance envisioned in the Society 5.0 policy, and prepared the draft report titled “Governance Innovation ver.2: A Guide to Designing and Implementing Agile Governance” as a compilation of ideal approaches to governance mechanisms based on agile governance, including corporate governance, regulations, infrastructures, markets and social norms.

In response, METI opened a call for public comments on this draft report in order to receive opinions from a variety of people. As the subjects shown in the draft report are common challenges seen across the world and many parts of the subjects require international cooperation, METI wishes to receive wide-ranging, frank opinions not only from people in Japan but also from those in overseas countries….(More)”.

An early warning approach to monitor COVID-19 activity with multiple digital traces in near real time


Paper by Nicole E. Kogan et al: “We propose that several digital data sources may provide earlier indication of epidemic spread than traditional COVID-19 metrics such as confirmed cases or deaths. Six such sources are examined here: (i) Google Trends patterns for a suite of COVID-19–related terms; (ii) COVID-19–related Twitter activity; (iii) COVID-19–related clinician searches from UpToDate; (iv) predictions by the global epidemic and mobility model (GLEAM), a state-of-the-art metapopulation mechanistic model; (v) anonymized and aggregated human mobility data from smartphones; and (vi) Kinsa smart thermometer measurements.

We first evaluate each of these “proxies” of COVID-19 activity for their lead or lag relative to traditional measures of COVID-19 activity: confirmed cases, attributed deaths, and influenza-like illness (ILI). We then propose the use of a metric combining these data sources into a multiproxy estimate of the probability of an impending COVID-19 outbreak. Last, we develop probabilistic estimates of when such a COVID-19 outbreak will occur on the basis of multiproxy variability. These outbreak-timing predictions are made for two separate time periods: the first, a “training” period, from 1 March to 31 May 2020, and the second, a “validation” period, from 1 June to 30 September 2020. Consistent predictive behavior among proxies in both of these subsequent and nonoverlapping time periods would increase the confidence that they may capture future changes in the trajectory of COVID-19 activity….(More)”.
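
As a rough illustration of the multiproxy idea — a toy sketch, not the statistical procedure used in the paper — one could score a location by the share of proxies whose recent growth exceeds a threshold and read that share as the probability of an impending outbreak. The growth figures and threshold below are assumptions for illustration only:

```python
# Hypothetical recent growth rates for each proxy in one location
# (values and threshold are illustrative assumptions, not from the paper).
proxy_growth = {
    "google_trends": 0.32,   # COVID-19-related search interest
    "twitter": 0.18,         # COVID-19-related tweet volume
    "uptodate": 0.25,        # clinician searches
    "gleam_model": 0.11,     # metapopulation model forecast
    "mobility": 0.04,        # smartphone mobility
    "kinsa_fevers": 0.21,    # smart thermometer fever readings
}

ALERT_THRESHOLD = 0.10  # growth above which a proxy counts as "elevated"

def multiproxy_probability(growth_by_proxy, threshold=ALERT_THRESHOLD):
    """Toy multiproxy estimate: the fraction of proxies showing elevated
    growth, interpreted as the probability of an impending outbreak."""
    elevated = [g > threshold for g in growth_by_proxy.values()]
    return sum(elevated) / len(elevated)

print(round(multiproxy_probability(proxy_growth), 2))  # 0.83 in this example
```

The appeal of combining proxies in some such way is that no single noisy signal has to carry the alert on its own; the paper's contribution is to formalize and validate this kind of combination against later outbreak trajectories.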

Coming wave of video games could build empathy on racism, environment and aftermath of war


Mike Snider at USA Today: “Some of the newest video games in development aren’t really games at all, but experiences that seek to build empathy for others.

Among the five such projects getting funding grants and support from 3D software engine maker Unity is “Our America,” in which the player takes the role of a Black man who is driving with his son when their car is pulled over by a police officer.

The father worries about getting his car registration from the glove compartment because the officer “might think it’s a gun or something,” the character says in the trailer.

On the project’s website, the developers describe “Our America” as “an autobiographical VR Experience” in which “the audience must make quick decisions, answer questions – but any wrong move is the difference between life and death.”…

The other Unity for Humanity winners include:

  • Ahi Kā Rangers: An ecological mobile game with development led by Māori creators. 
  • Dot’s Home: A game that explores historical housing injustices faced by Black and brown home buyers. 
  • Future Aleppo: A VR experience for children to rebuild homes and cities destroyed by war. 
  • Samudra: A children’s environmental puzzle game that takes the player across a polluted sea to learn about pollution and plastic waste.

While “Our America” may serve best as a VR experience, other projects such as “Dot’s Home” may be available on mobile devices to expand their accessibility….(More)”.

European Data Economy: Between Competition and Regulation


Report by René Arnold, Christian Hildebrandt, and Serpil Taş: “Data and its economic impact permeate all sectors of the economy. The data economy is not a new sector, but rather a challenge for all firms to compete and innovate as part of a new wave of economic value creation.

With data playing an increasingly important role across all sectors of the economy, the results of this report point European policymakers toward promoting the development and adoption of unified reference architectures. These architectures constitute a technology-neutral and cross-sectoral approach that will enable companies small and large to compete and to innovate—unlocking the economic potential of data capture in an increasingly digitized world.

Data access appears to be less of a hindrance to a thriving data economy due to the net increase in capabilities in data capture, elevation, and analysis. What does prove difficult for firms is discovering existing datasets and establishing their suitability for achieving their economic objectives. Reference architectures can facilitate this process as they provide a framework to locate potential providers of relevant datasets and carry sufficient additional information (metadata) about datasets to enable firms to understand whether a particular dataset, or parts of it, fits their purpose.
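
To make the discoverability point concrete, a reference architecture’s catalogue entry might look something like the hypothetical record below — a sketch in Python whose fields and values are assumptions, not taken from the report — carrying enough metadata for a firm to judge whether a dataset fits its purpose before negotiating access:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetDescriptor:
    """Hypothetical metadata record exposed through a shared reference
    architecture so that prospective users can assess fitness for purpose."""
    title: str
    provider: str
    collection_context: str      # purpose and setting of the original capture
    temporal_coverage: str
    spatial_coverage: str
    update_frequency: str
    licence_terms: str           # permitted purposes after the exchange
    contact: str
    keywords: list = field(default_factory=list)

example_entry = DatasetDescriptor(
    title="Aggregated floating-car speed profiles",
    provider="Example Mobility Services GmbH",
    collection_context="Navigation-app telemetry aggregated per road segment",
    temporal_coverage="2019-01 to 2020-12",
    spatial_coverage="Berlin metropolitan area",
    update_frequency="weekly",
    licence_terms="Traffic-management use only; no re-identification",
    contact="data-office@example.com",
    keywords=["mobility", "traffic", "speed"],
)
```

A catalogue of such records would let a firm check a dataset’s original collection context and licence terms against its intended use before entering any negotiation over access.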

Whether third-party data access is suitable to solve a specific business task in the first place ought to be a decision at the discretion of the economic actors involved. As our report underscores, data captured in one context with a specific purpose may not be fit for another context or another purpose. Consequently, a firm has to evaluate case by case whether first-party data capture, third-party data access, or a mixed approach is the best solution. This evaluation will naturally depend on whether there is any other firm capturing data suitable for the task that is willing to negotiate conditions for third-party access to this data. Unified data architectures may also lower the barriers for a firm capturing suitable data to engage in negotiations, since their adoption will lower the costs of making the data ready for a successful exchange. Such architectures may further integrate licensing provisions ensuring that data, once exchanged, is not used beyond the agreed purpose. They can also bring in functions that improve the discoverability of potential data providers….(More)”.

Who Is Making Sure the A.I. Machines Aren’t Racist?


Cade Metz at the New York Times: “Hundreds of people gathered for the first lecture at what had become the world’s most important conference on artificial intelligence — row after row of faces. Some were East Asian, a few were Indian, and a few were women. But the vast majority were white men. More than 5,500 people attended the meeting, five years ago in Barcelona, Spain.

Timnit Gebru, then a graduate student at Stanford University, remembers counting only six Black people other than herself, all of whom she knew, all of whom were men.

The homogeneous crowd crystallized for her a glaring issue. The big thinkers of tech say A.I. is the future. It will underpin everything from search engines and email to the software that drives our cars, directs the policing of our streets and helps create our vaccines.

But it is being built in a way that replicates the biases of the almost entirely male, predominantly white work force making it.

In the nearly 10 years I’ve written about artificial intelligence, two things have remained a constant: The technology relentlessly improves in fits and sudden, great leaps forward. And bias is a thread that subtly weaves through that work in a way that tech companies are reluctant to acknowledge.

On her first night home in Menlo Park, Calif., after the Barcelona conference, sitting cross-legged on the couch with her laptop, Dr. Gebru described the A.I. work force conundrum in a Facebook post.

“I’m not worried about machines taking over the world. I’m worried about groupthink, insularity and arrogance in the A.I. community — especially with the current hype and demand for people in the field,” she wrote. “The people creating the technology are a big part of the system. If many are actively excluded from its creation, this technology will benefit a few while harming a great many.”

The A.I. community buzzed about the mini-manifesto. Soon after, Dr. Gebru helped create a new organization, Black in A.I. After finishing her Ph.D., she was hired by Google….(More)”.