The Modern Research Data Portal: a design pattern for networked, data-intensive science


Kyle Chard et al. in PeerJ Computer Science: “We describe best practices for providing convenient, high-speed, secure access to large data via research data portals. We capture these best practices in a new design pattern, the Modern Research Data Portal, that disaggregates the traditional monolithic web-based data portal to achieve orders-of-magnitude increases in data transfer performance, support new deployment architectures that decouple control logic from data storage, and reduce development and operations costs.

We introduce the design pattern; explain how it leverages high-performance data enclaves and cloud-based data management services; review representative examples at research laboratories and universities, including both experimental facilities and supercomputer sites; describe how to leverage Python APIs for authentication, authorization, data transfer, and data sharing; and use coding examples to demonstrate how these APIs can be used to implement a range of research data portal capabilities. Sample code at a companion web site, https://docs.globus.org/mrdp, provides application skeletons that readers can adapt to realize their own research data portals….(More)”.
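
To give a concrete flavour of the Python APIs mentioned above, the sketch below shows one way a portal might authenticate a user and submit a transfer with the Globus SDK (globus_sdk), the service family that the companion site documents. It is a minimal illustration under stated assumptions rather than code from the paper: the client ID, endpoint UUIDs, and file paths are hypothetical placeholders.

```python
# A minimal sketch, assuming the Globus Python SDK ("globus_sdk"); the client ID,
# endpoint UUIDs, and paths below are placeholders, not values from the paper.
import globus_sdk

CLIENT_ID = "YOUR-NATIVE-APP-CLIENT-ID"   # hypothetical Globus Auth app registration
SRC_ENDPOINT = "source-endpoint-uuid"     # placeholder endpoint IDs
DST_ENDPOINT = "destination-endpoint-uuid"

# 1. Authenticate the user with Globus Auth (interactive native-app flow) and
#    request a token for the Transfer service.
auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
auth_client.oauth2_start_flow(
    requested_scopes="urn:globus:auth:scope:transfer.api.globus.org:all"
)
print("Please log in at:", auth_client.oauth2_get_authorize_url())
auth_code = input("Enter the authorization code: ").strip()
tokens = auth_client.oauth2_exchange_code_for_tokens(auth_code)
transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

# 2. Use the token to build an authorized Transfer client.
tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token)
)

# 3. Describe and submit a transfer task; Globus then moves the data directly
#    between the two endpoints rather than through the portal's web server.
tdata = globus_sdk.TransferData(tc, SRC_ENDPOINT, DST_ENDPOINT, label="MRDP example")
tdata.add_item("/datasets/example.h5", "/~/example.h5")
result = tc.submit_transfer(tdata)
print("Submitted transfer, task ID:", result["task_id"])
```

In a full portal the same pattern would typically run server-side with a confidential client and refresh tokens, so that the web application only orchestrates authentication and transfer requests while the data itself moves directly between endpoints, the decoupling of control logic from data storage that the abstract describes.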

Handbook on Participatory Governance


Book edited by Hubert Heinelt: “Can participatory governance really improve the quality of democracy? Concentrating on democracy beyond governmental structures, this Handbook argues that it is a political task to engage individuals at all levels of governance.

The Handbook on Participatory Governance reveals that transforming governance arrangements does in fact enhance democracy and that the democratic quality of participatory governance is crucial. The contributors reflect on the notion of democracy and participatory governance and how they relate to each other. Case studies are presented from regional, national and international levels, to identify how governance can be turned into a participatory form. With chapters reviewing participatory governance’s role alongside power, science and employment relations, innovative ideas for future progress in participatory governance are presented….(More)”.

How the Data That Internet Companies Collect Can Be Used for the Public Good


Stefaan G. Verhulst and Andrew Young at Harvard Business Review: “…In particular, the vast streams of data generated through social media platforms, when analyzed responsibly, can offer insights into societal patterns and behaviors. These types of insights are hard to generate with existing social science methods. All this information poses its own problems, of complexity and noise, of risks to privacy and security, but it also represents tremendous potential for mobilizing new forms of intelligence.

In a recent report, we examine ways to harness this potential while limiting and addressing the challenges. Developed in collaboration with Facebook, the report seeks to understand how public and private organizations can join forces to use social media data — through data collaboratives — to mitigate and perhaps solve some of our most intractable policy dilemmas.

Data Collaboratives: Public-Private Partnerships for Our Data Age 

For all of data’s potential to address public challenges, most data generated today is collected by the private sector. Typically ensconced in corporate databases, and tightly held in order to maintain competitive advantage, this data contains tremendous possible insights and avenues for policy innovation. But because the analytical expertise brought to bear on it is narrow, and limited by private ownership and access restrictions, its vast potential often goes untapped.

Data collaboratives offer a way around this limitation. They represent an emerging public-private partnership model, in which participants from different areas, including the private sector, government, and civil society, can come together to exchange data and pool analytical expertise in order to create new public value. While still an emerging practice, examples of such partnerships now exist around the world, across sectors and public policy domains….

Professionalizing the Responsible Use of Private Data for Public Good

For all its promise, the practice of data collaboratives remains ad hoc and limited. In part, this is a result of the lack of a well-defined, professionalized concept of data stewardship within corporations. Today, each attempt to establish a cross-sector partnership built on the analysis of social media data requires significant and time-consuming efforts, and businesses rarely have personnel tasked with undertaking such efforts and making relevant decisions.

As a consequence, the process of establishing data collaboratives and leveraging privately held data for evidence-based policy making and service delivery is onerous, generally one-off, not informed by best practices or any shared knowledge base, and prone to dissolution when the champions involved move on to other functions.

By establishing data stewardship as a corporate function, recognized within corporations as a valued responsibility, and by creating the methods and tools needed for responsible data-sharing, the practice of data collaboratives can become regularized, predictable, and de-risked.

If early efforts toward this end — from initiatives such as Facebook’s Data for Good efforts in the social media space and MasterCard’s Data Philanthropy approach around finance data — are meaningfully scaled and expanded, data stewards across the private sector can act as change agents responsible for determining what data to share and when, how to protect data, and how to act on insights gathered from the data.

Still, many companies (and others) continue to balk at the prospect of sharing “their” data, which is an understandable response given the reflex to guard corporate interests. But our research has indicated that many benefits can accrue not only to data recipients but also to those who share it. Data collaboration is not a zero-sum game.

With support from the Hewlett Foundation, we are embarking on a two-year project toward professionalizing data stewardship (and the use of data collaboratives) and establishing well-defined data responsibility approaches. We invite others to join us in working to transform this practice into a widespread, impactful means of leveraging private-sector assets, including social media data, to create positive public-sector outcomes around the world….(More)”.

‘Politics done like science’: Critical perspectives on psychological governance and the experimental state


Paper: “There has been a growing academic recognition of the increasing significance of psychologically and behaviourally informed modes of governance in recent years in a variety of different states. We contend that this academic research has neglected one important theme, namely the growing use of experiments as a way of developing and testing novel policies. Drawing on extensive qualitative and documentary research, this paper develops critical perspectives on the impacts of the psychological sciences on public policy, and considers more broadly the changing experimental form of modern states. The tendency for emerging forms of experimental governance to be predicated on very narrow, socially disempowering, visions of experimental knowledge production is critiqued. We delineate how psychological governance and emerging forms of experimental subjectivity have the potential to enable more empowering and progressive state forms and subjectivities to emerge through more open and collective forms of experimentation…(More)”.

The Assault on Reason


Zia Haider Rahman at the New York Review of Books: “Albert Einstein was awarded a Nobel Prize not for his work on relativity, but for his explanation of the photoelectric effect. Both results, and others of note, were published in 1905, his annus mirabilis. The prize was denied him for well over a decade, with the Nobel Committee maintaining that relativity was yet unproven. Philosophers of science, most notably Karl Popper, have argued that for a theory to be regarded as properly scientific it must be capable of being contradicted by observation. In other words, it must yield falsifiable predictions—predictions that could, in principle, be shown to be wrong. On the basis of his theory, Einstein predicted that starlight was being deflected by the sun by specified degrees. This was a prediction that was, in principle, capable of being wrong and therefore capable of falsifying relativity. The physicist offered signs others could look for that would lend credibility to his theory—or refute it. Evidence eventually came from the work of Arthur Eddington and the arrival of instruments that could make sufficiently fine measurements, though Einstein’s Nobel medal would elude him for two more years because of gathering anti-Semitism in Europe.

Mathematics, so often lumped together with the sciences, actually adheres to an entirely different standard. A mathematical theorem never submits itself to hypothesis testing, never needs an experiment to support its validity. Once described to me as an education in thinking without the encumbrance of facts, mathematics is unlike the sciences in that no empirical finding can ever shift a mathematical theorem by one iota; it is true forever. Mathematical reasoning is a given, something commonly understood and shared by all mathematicians, because mathematical reasoning is, fundamentally, no more than logical reasoning, a thing universally shared. My own study of mathematics has left me with a deep respect for the distinction between relevance and irrelevance in making a reasoned argument.

These are the gold standards of human intellectual progress. Society, however, has to deal with wildly contested facts. We live in a post-truth world, by some accounts, in which facts are willfully bent to serve political ends. If the forty-fifth president is to be believed, Christmas has apparently been restored to the White House. Never mind the contradictory videos of the forty-fourth president and his family celebrating the holiday.

But there is nothing particularly new about this kind of distortion. In his landmark work Public Opinion, published in 1922, the formidable American journalist Walter Lippmann reflected on the functions of the press:

That the manufacture of consent is capable of great refinements no one, I think, denies. The process by which public opinions arise is certainly no less intricate than it has appeared in these pages, and the opportunities for manipulation open to anyone who understands the process are plain enough.… as a result of psychological research, coupled with the modern means of communication, the practice of democracy has turned a corner. A revolution is taking place, infinitely more significant than any shifting of economic power.… Under the impact of propaganda, not necessarily in the sinister meaning of the word alone, the old constants of our thinking have become variables. It is no longer possible, for example, to believe in the original dogma of democracy; that the knowledge needed for the management of human affairs comes up spontaneously from the human heart. Where we act on that theory we expose ourselves to self-deception, and to forms of persuasion that we cannot verify. It has been demonstrated that we cannot rely upon intuition, conscience, or the accidents of casual opinion if we are to deal with the world beyond our reach.

Everyone is entitled to his own opinion, but not his own facts, as United States Senator Daniel Patrick Moynihan was fond of saying. None of us is in a position, however, to verify all the facts presented to us. Somewhere, we each draw a line and say on this I will defer to so-and-so or such-and-such. We have only so many hours in the day. Besides, we acknowledge that some matters lie outside our expertise or even our capacity to comprehend. Doctors and lawyers make their livings on this basis.

But it is not merely facts that are under assault in the polarized politics of the US, the UK, and other nations twisting in the winds of what some call populism. There is also a troubling assault on reason….(More)”.

The Potential for Human-Computer Interaction and Behavioral Science


Article by Kweku Opoku-Agyemang as part of a special issue by Behavioral Scientist on “Connected State of Mind,” which explores the impact of tech use on our behavior and relationships (complete issue here):

A few days ago, one of my best friends texted me a joke. It was funny, so a few seconds later I replied with the “laughing-while-crying emoji.” A little yellow smiley face with tear drops perched on its eyes captured exactly what I wanted to convey to my friend. No words needed. If this exchange had happened ten years ago, we would have emailed each other. Two decades ago, snail mail.

As more of our interactions and experiences are mediated by screens and technology, the way we relate to one another and our world is changing. Posting your favorite emoji may seem superficial, but such reflexes are becoming critical for understanding humanity in the 21st century.

Seemingly ubiquitous computer interfaces—on our phones and laptops, not to mention our cars, coffee makers, thermostats, and washing machines—are blurring the lines between our connected and our unconnected selves. And it’s these relationships, between users and their computers, which define the field of human–computer interaction (HCI). HCI is based on the following premise: The more we understand about human behavior, the better we can design computer interfaces that suit people’s needs.

For instance, HCI researchers are designing tactile emoticons embedded in the Braille system for individuals with visual impairments. They’re also creating smartphones that can almost read your mind—predicting when and where your finger is about to touch them next.

Understanding human behavior is essential for designing human-computer interfaces. But there’s more to it than that: Understanding how people interact with computer interfaces can help us understand human behavior in general.

One of the insights that propelled behavioral science into the DNA of so many disciplines was the idea that we are not fully rational: We procrastinate, forget, break our promises, and change our minds. What most behavioral scientists might not realize is that as they transcended rationality, rational models found a new home in artificial intelligence. Much of A.I. is based on the familiar rational theories that dominated the field of economics prior to the rise of behavioral economics. However, one way to better understand how to apply A.I. in high-stakes scenarios, like self-driving cars, may be to embrace ways of thinking that are less rational.

It’s time for information and computer science to join forces with behavioral science. The mere presence of a camera phone can alter our cognition even when it is switched off, so if we ignore HCI in behavioral research in a world of constant clicks, avatars, emojis, and now animojis, we limit our understanding of human behavior.

Below I’ve outlined three very different cases that would benefit from HCI researchers and behavioral scientists working together: technology in the developing world, video games and the labor market, and online trolling and bullying….(More)”.

The Future Computed: Artificial Intelligence and its role in society


Brad Smith at the Microsoft Blog: “Today Microsoft is releasing a new book, The Future Computed: Artificial Intelligence and its role in society. The two of us have written the foreword for the book, and our teams collaborated to write its contents. As the title suggests, the book provides our perspective on where AI technology is going and the new societal issues it has raised.

On a personal level, our work on the foreword provided an opportunity to step back and think about how much technology has changed our lives over the past two decades and to consider the changes that are likely to come over the next 20 years. In 1998, we both worked at Microsoft, but on opposite sides of the globe. While we lived on separate continents and in quite different cultures, we shared similar experiences and daily routines which were managed by manual planning and movement. Twenty years later, we take for granted the digital world that was once the stuff of science fiction.

Technology – including mobile devices and cloud computing – has fundamentally changed the way we consume news, plan our day, communicate, shop and interact with our family, friends and colleagues. Two decades from now, what will our world look like? At Microsoft, we imagine that artificial intelligence will help us do more with one of our most precious commodities: time. By 2038, personal digital assistants will be trained to anticipate our needs, help manage our schedule, prepare us for meetings, assist as we plan our social lives, reply to and route communications, and drive cars.

Beyond our personal lives, AI will enable breakthrough advances in areas like healthcare, agriculture, education and transportation. It’s already happening in impressive ways.

But as we’ve witnessed over the past 20 years, new technology also inevitably raises complex questions and broad societal concerns. As we look to a future powered by a partnership between computers and humans, it’s important that we address these challenges head on.

How do we ensure that AI is designed and used responsibly? How do we establish ethical principles to protect people? How should we govern its use? And how will AI impact employment and jobs?

To answer these tough questions, technologists will need to work closely with government, academia, business, civil society and other stakeholders. At Microsoft, we’ve identified six ethical principles – fairness, reliability and safety, privacy and security, inclusivity, transparency, and accountability – to guide the cross-disciplinary development and use of artificial intelligence. The better we understand these or similar issues — and the more technology developers and users can share best practices to address them — the better served the world will be as we contemplate societal rules to govern AI.

We must also pay attention to AI’s impact on workers. What jobs will AI eliminate? What jobs will it create? If there has been one constant over 250 years of technological change, it has been the ongoing impact of technology on jobs — the creation of new jobs, the elimination of existing jobs and the evolution of job tasks and content. This too is certain to continue.

Some key conclusions are emerging….

The Future Computed is available here and additional content related to the book can be found here.”

Digital platforms for facilitating access to research infrastructures


New OECD paper: “Shared research infrastructures are playing an increasingly important role in most scientific fields and represent a significant proportion of the total public investment in science. Many of these infrastructures have the potential to be used outside of their traditional scientific domain and outside of the academic community, but this potential is often not fully realised. A major challenge for potential users (and for policy-makers) is simply identifying what infrastructures are available under what conditions.

This report includes an analysis of 8 case studies of digital platforms that collate information and provide services to promote broader access to, and more effective use of, research infrastructures. Although there is considerable variety amongst the cases, a number of key issues are identified that can help guide policy-makers, funders, institutions and managers, who are interested in developing or contributing to such platforms….(More)”.

Toward Information Justice


Book by Jeffrey Alan Johnson: “…presents a theory of information justice that subsumes the question of control and relates it to other issues that influence just social outcomes. Data does not exist by nature. Bureaucratic societies must provide standardized inputs for governing algorithms, a problem that can be understood as one of legibility. This requires, though, converting what we know about social objects and actions into data, narrowing the many possible representations of the objects to a definitive one using a series of translations. Information thus exists within a nexus of problems, data, models, and actions that the social actors constructing the data bring to it.

This opens information to analysis from social and moral perspectives, while the scientistic view leaves us blind to the gains from such analysis—especially to the ways that embedded values and assumptions promote injustice. Toward Information Justice answers a key question for the 21st Century: how can an information-driven society be just?

Many of those concerned with the ethics of data focus on control over data, and argue that if data is only controlled by the right people then just outcomes will emerge. There are serious problems with this control metaparadigm, however, especially related to the initial creation of data and prerequisites for its use. This text is suitable for academics in the fields of information ethics, political theory, philosophy of technology, and science and technology studies, as well as policy professionals who rely on data to reach increasingly problematic conclusions about courses of action….(More)”.

Using new data sources for policymaking


Technical report by the Joint Research Centre (JRC) of the European Commission: “… synthesises the results of our work on using new data sources for policy-making. It reflects a recent shift from more general considerations in the area of Big Data to a more dedicated investigation of Citizen Science, and it summarizes the state of play. With this contribution, we start promoting Citizen Science as an integral component of public participation in policy in Europe.

The particular need to focus on the citizen dimension emerged due to (i) the increasing interest in the topic from policy Directorate-Generals (DGs) of the European Commission (EC); (ii) the considerable socio-economic impact policy making has on citizens’ lives and society as a whole; and (iii) the clear potential of citizens’ contributions to increase the relevance of policy making and the effectiveness of policies when addressing societal challenges.

We explicitly concentrate on Citizen Science (or public participation in scientific research) as a way to engage people in practical work, and to develop a mutual understanding between the participants from civil society, research institutions and the public sector by working together on a topic that is of common interest.

Acknowledging this new priority, this report concentrates on the topic of Citizen Science and presents already ongoing collaborations and recent achievements. The presented work particularly addresses environment-related policies, Open Science and aspects of Better Regulation. We then introduce the six phases of the ‘cyclic value chain of Citizen Science’ as a concept to frame citizen engagement in science for policy. We use this structure in order to detail the benefits and challenges of existing approaches – building on the lessons that we learned so far from our own practical work and thanks to the knowledge exchange from third parties. After outlining additional related policy areas, we sketch the future work that is required in order to overcome the identified challenges, and translate them into actions for ourselves and our partners.

Next steps include the following:

- Develop a robust methodology for data collection, analysis and use of Citizen Science for EU policy;
- Provide a platform as an enabling framework for applying this methodology to different policy areas, including the provision of best practices;
- Offer guidelines for policy DGs in order to promote the use of Citizen Science for policy in Europe;
- Experiment and evaluate possibilities of overarching methodologies for citizen engagement in science and policy, and their case specifics; and
- Continue to advance interoperability and knowledge sharing between currently disconnected communities of practice. …(More)”.