The Open Innovation in Science research field: a collaborative conceptualisation approach


Paper by Susanne Beck et al.: “Openness and collaboration in scientific research are attracting increasing attention from scholars and practitioners alike. However, a common understanding of these phenomena is hindered by disciplinary boundaries and disconnected research streams. We link dispersed knowledge on Open Innovation, Open Science, and related concepts such as Responsible Research and Innovation by proposing a unifying Open Innovation in Science (OIS) Research Framework. This framework captures the antecedents, contingencies, and consequences of open and collaborative practices along the entire process of generating and disseminating scientific insights and translating them into innovation. Moreover, it elucidates individual-, team-, organisation-, field-, and society-level factors shaping OIS practices. To conceptualise the framework, we employed a collaborative approach involving 47 scholars from multiple disciplines, highlighting both tensions and commonalities between existing approaches. The OIS Research Framework thus serves as a basis for future research, informs policy discussions, and provides guidance to scientists and practitioners…(More)”.

Calling Bullshit: The Art of Scepticism in a Data-Driven World


Book by Carl Bergstrom and Jevin West: “Politicians are unconstrained by facts. Science is conducted by press release. Higher education rewards bullshit over analytic thought. Startup culture elevates bullshit to high art. Advertisers wink conspiratorially and invite us to join them in seeing through all the bullshit — and take advantage of our lowered guard to bombard us with bullshit of the second order. The majority of administrative activity, whether in private business or the public sphere, seems to be little more than a sophisticated exercise in the combinatorial reassembly of bullshit.

We’re sick of it. It’s time to do something, and as educators, one constructive thing we know how to do is to teach people. So, the aim of this course is to help students navigate the bullshit-rich modern environment by identifying bullshit, seeing through it, and combating it with effective analysis and argument.

What do we mean, exactly, by bullshit and calling bullshit? As a first approximation:

Bullshit involves language, statistical figures, data graphics, and other forms of presentation intended to persuade by impressing and overwhelming a reader or listener, with a blatant disregard for truth and logical coherence.

Calling bullshit is a performative utterance, a speech act in which one publicly repudiates something objectionable. The scope of targets is broader than bullshit alone. You can call bullshit on bullshit, but you can also call bullshit on lies, treachery, trickery, or injustice.

In this course we will teach you how to spot the former and effectively perform the latter.

While bullshit may reach its apogee in the political domain, this is not a course on political bullshit. Instead, we will focus on bullshit that comes clad in the trappings of scholarly discourse. Traditionally, such highbrow nonsense has come couched in big words and fancy rhetoric, but more and more we see it presented instead in the guise of big data and fancy algorithms — and these quantitative, statistical, and computational forms of bullshit are those that we will be addressing in the present course.

Of course an advertisement is trying to sell you something, but do you know whether the TED talk you watched last night is also bullshit — and if so, can you explain why? Can you see the problem with the latest New York Times or Washington Post article fawning over some startup’s big data analytics? Can you tell when a clinical trial reported in the New England Journal or JAMA is trustworthy, and when it is just a veiled press release for some big pharma company?…(More)”.

Interventions to mitigate the racially discriminatory impacts of emerging tech including AI


Joint Civil Society Statement: “As widespread recent protests have highlighted, racial inequality remains an urgent and devastating issue around the world, and this is as true in the context of technology as it is everywhere else. In fact, it may be more so, as algorithmic technologies based on big data are deployed at previously unimaginable scale, reproducing the discriminatory systems that build and govern them.

The undersigned organizations welcome the publication of the report “Racial discrimination and emerging digital technologies: a human rights analysis,” by Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance, E. Tendayi Achiume, and wish to underscore the importance and timeliness of a number of the recommendations made therein:

  1. Technologies that have had or will have significant racially discriminatory impacts should be banned outright.
    While incremental regulatory approaches may be appropriate in some contexts, where a technology is demonstrably likely to cause racially discriminatory harm, it should not be deployed until that harm can be prevented. Moreover, certain technologies may always have disparate racial impacts, no matter how much their accuracy can be improved. In the present moment, racially discriminatory technologies include facial and affect recognition technology and so-called predictive analytics. We support Special Rapporteur Achiume’s call for mandatory human rights impact assessments as a prerequisite for the adoption of new technologies. We also believe that where such assessments reveal that a technology has a high likelihood of deleterious racially disparate impacts, states should prevent its use through a ban or moratorium. We join the Special Rapporteur in welcoming recent municipal bans, for example, on the use of facial recognition technology, and encourage national governments to adopt similar policies.  Correspondingly, we reiterate our support for states’ imposition of an immediate moratorium on the trade and use of privately developed surveillance tools until such time as states enact appropriate safeguards, and congratulate Special Rapporteur Achiume on joining that call.
  2. Gender mainstreaming and representation along racial, national and other intersecting identities requires radical improvement at all levels of the tech sector.
  3. Technologists cannot solve political, social, and economic problems without the input of domain experts and those personally impacted.
  4. Access to technology is as urgent an issue of racial discrimination as inequity in the design of technologies themselves.
  5. Representative and disaggregated data is a necessary, if not sufficient, condition for racial equity in emerging digital technologies, but it must be collected and managed equitably as well.
  6. States as well as corporations must provide remedies for racial discrimination, including reparations.… (More)”.

The Misinformation Edition


On-Line Exhibition by the Glass Room: “…In this exhibition – aimed at young people as well as adults – we explore how social media and the web have changed the way we read information and react to it. Learn why finding “fake news” is not as easy as it sounds, and how the term “fake news” is as much a problem as the news it describes. Dive into the world of deep fakes, which are now so realistic that they are virtually impossible to detect. And find out why social media platforms are designed to keep us hooked, and how they can be used to change our minds. You can also read our free Data Detox Kit, which reveals how to tell facts from fiction and why it benefits everyone around us when we take a little more care about what we share…(More)”.

Digital in the Time of the Coronavirus: Data Science and Technology as a Force for Inclusion


Blog by Aleem Walji: “Crises do not create inequity and fault lines in society; they expose them. The systems and structures that give rise to inequality and inequity are deep-rooted and powerful. In recent months, we have seen the coronavirus bring into high relief many social and economic vulnerabilities across the world. It is now clear that Hispanics and Blacks are even more vulnerable to Covid-19 because of underlying health conditions, more frequent exposure to the virus, and broken social safety nets. This trend will only accelerate as the virus gains a foothold in Africa, parts of Asia, and Latin America.

The impact of the virus in places where health systems are weak, poverty is high, and large numbers of people are immunocompromised could be devastating. How do we mitigate the medium-term and second-order effects of a pandemic that will shrink economic growth and exacerbate inequality? This year alone, more than 500 million people are expected to fall into poverty, mostly in Africa and Asia. To defeat a virus that does not respect geographic boundaries, it is urgent for public and private actors, philanthropies, and global development institutions to use every tool available to alleviate a global humanitarian emergency and attendant economic collapse.

Technology, data science, and digital readiness are crucial elements for an effective emergency response and foundational to sustain a long-term recovery. Already, scientists and researchers across the world are leveraging data and digital platforms to accelerate the development of a vaccine, fast-track clinical trials, and support contact tracing using mobile-enabled tools. Sensors are collecting huge amounts of data, and machine learning algorithms are helping policymakers decide when to relax physical distancing and where to open the economy and for how long.

Access to reliable information for decision-making, however, is not evenly spread. High-frequency, granular, and anonymized datasets are essential for public-health officials and community health workers to target interventions and reach vulnerable populations faster and at a lower cost. Equipped with reliable data, civic technologists can leverage tools like artificial intelligence and machine learning to flatten the curve of Covid-19 and also the curve of inequity and unequal access to services and support.

This will not happen on its own. Preventing a much deeper digital divide will require forward-leaning policymakers, far-sighted investors and grant makers, civic-minded tech innovators and businesses, and a robust, digitally savvy civil society to work collaboratively for social and economic inclusion. It will require political will and improved data governance to deploy digital platforms to serve populations furthest behind. It is in our collective interest to ensure the health and well-being of every segment of society. Digital inclusion is part of the solution.

There are certain pathways public, private and social actors can follow to leverage data science, digital tools, and platforms today….(More)”.

Four Principles for Integrating AI & Good Governance


Oxford Commission on AI and Good Governance: “Many governments, public agencies and institutions already employ AI in providing public services, the distribution of resources and the delivery of governance goods. In the public sector, AI-enabled governance may afford new efficiencies that have the potential to transform a wide array of public service tasks. But short-sighted design and use of AI can create new problems, entrench existing inequalities, and calcify and ultimately undermine government organizations.

Frameworks for the procurement and implementation of AI in public service have widely remained undeveloped. Frequently, existing regulations and national laws are no longer fit for purpose to ensure good behaviour (of either AI or private suppliers) and are ill-equipped to provide guidance on the democratic use of AI. As technology evolves rapidly, we need rules to guide the use of AI in ways that safeguard democratic values. Under what conditions can AI be put into service for good governance?

We offer a framework for integrating AI with good governance. We believe that with dedicated attention and evidence-based policy research, it should be possible to overcome the combined technical and organizational challenges of successfully integrating AI with good governance. Doing so requires working towards:

  1. Inclusive Design: issues around discrimination and bias of AI in relation to inadequate data sets, exclusion of minorities and under-represented groups, and the lack of diversity in design.
  2. Informed Procurement: issues around the acquisition and development in relation to due diligence, design and usability specifications and the assessment of risks and benefits.
  3. Purposeful Implementation: issues around the use of AI in relation to interoperability, training needs for public servants, and integration with decision-making processes.
  4. Persistent Accountability: issues around the accountability and transparency of AI in relation to ‘black box’ algorithms, the interpretability and explainability of systems, monitoring and auditing…(More)”

Trade-offs and considerations for the future: Innovation and the COVID-19 response


Essay by Benjamin Kumpf: “…Here are some of the relevant trade-offs I identified. 

Rigour vs. Speed

How to best balance high-quality rigorous research and the need to gain actionable insights rapidly?  

Responding to a pandemic requires working at pace, while investing in ongoing research and the cross-fertilization of disciplines. In our response, we witness the importance of strong networks with academia and DFID’s focus on high-quality research. In parallel, we invest in supporting partners with rapid data collection through methods such as phone surveys, field visits and onsite interviews where possible, as well as big data analysis and more. For example, through the International Growth Centre, DFID has supported a Sierra Leone COVID-19 dashboard, providing real-time data on current economic conditions and trends from phone-based surveys of 195 towns and villages across Sierra Leone….

Breadth vs. depth

How to best balance providing services to large proportions of populations in need, while addressing challenges of specific communities?  

We are seeing emerging evidence that the virus and measures to prevent its spread are disproportionately impacting marginalized communities and minorities. For example, indigenous people are disproportionately affected by the virus in Brazil, and Dalits are among the worst affected in India. In development and humanitarian contexts, it is paramount to guide innovation efforts with explicit values, including on the trade-off between scale and addressing last-mile challenges to leave no one behind. For example, to facilitate behaviour change and embed insights from behavioural science and adaptive practices, DFID is supporting the Hygiene Hub, hosted at the London School of Hygiene and Tropical Medicine. The Hub provides free-of-charge advisory services to governments and non-governmental organizations working on COVID-19 related challenges in low- and middle-income countries, balancing the need to reach large audiences and to design bespoke interventions for specific communities.

Exploration vs. adaptation

How to best diversify innovation efforts and investments between searching for local solutions and adapting proven approaches?

Adaptive vs. locally-led

How to best learn and adapt, while providing ownership to local players?

Single-point solutions vs. systems-practices

How to advance specific tech and non-tech innovations that address urgent needs, while further improving existing systems? 

Supporting domestic innovators vs. strengthening local solutions and ecosystems

We need explicit conversations to ensure better transparency about this trade-off in innovation investments generally…(More)”.

Adolescent Mental Health: Using A Participatory Mapping Methodology to Identify Key Priorities for Data Collaboration


Blog by Alexandra Shaw, Andrew J. Zahuranec, Andrew Young, Stefaan G. Verhulst, Jennifer Requejo, Liliana Carvajal: “Adolescence is a unique stage of life. The brain undergoes rapid development; individuals face new experiences, relationships, and environments. These events can be exciting, but they can also be a source of instability and hardship. Half of all mental health conditions manifest by early adolescence, and between 10 and 20 percent of all children and adolescents report mental health conditions. Despite the increased risks and concerns for adolescents’ well-being, there remain significant gaps in the availability of data at the country level for policymakers to address these issues.

In June, The GovLab partnered with colleagues at UNICEF’s Health and HIV team in the Division of Data, Analysis, Planning & Monitoring and the Data for Children Collaborative — a collaboration between UNICEF, the Scottish Government, and the University of Edinburgh — to design and apply a new methodology of participatory mapping and prioritization of key topics and issues associated with adolescent mental health that could be addressed through enhanced data collaboration….

The event led to three main takeaways. First, the topic mapping allows participants to deliberate on and prioritize the various aspects of adolescent mental health in a more holistic manner. Unlike the “blind men and the elephant” parable, a topic map allows the participants to see and discuss the interrelated parts of the topic, including those they might be less familiar with.

Second, the workshops demonstrated the importance of tapping into distributed expertise via participatory processes. While the topic map provided a starting point, the inclusion of various experts allowed the findings of the document to be reviewed in a rapid, legitimate fashion. The diverse inputs helped ensure the individual aspects could be prioritized without a perspective being ignored.

Lastly, the approach showed the importance of data initiatives being driven and validated by those individuals representing the demand. By soliciting the input of those who would actually use the data, the methodology ensured data initiatives focused on the aspects thought to be most relevant and of greatest importance….(More)”

German coronavirus experiment enlists help of concertgoers


Philip Oltermann at the Guardian: “German scientists are planning to equip 4,000 pop music fans with tracking gadgets and bottles of fluorescent disinfectant to get a clearer picture of how Covid-19 could be prevented from spreading at large indoor concerts.

As cultural mass gatherings across the world remain on hold for the foreseeable future, researchers in eastern Germany are recruiting volunteers for a “coronavirus experiment” with the singer-songwriter Tim Bendzko, to be held at an indoor stadium in the city of Leipzig on 22 August.

Participants, aged between 18 and 50, will wear matchstick-sized “contact tracer” devices on chains around their necks that transmit a signal at five-second intervals and collect data on each person’s movements and proximity to other members of the audience.

Inside the venue, they will also be asked to disinfect their hands with a fluorescent hand-sanitiser – designed not just to add a layer of protection but to allow scientists to scour the venue with UV lights after the concerts to identify surfaces where transmission of the virus through smear infection is most likely to take place.

Vapours from a fog machine will help visualise the possible spread of coronavirus via aerosols, which the scientists will try to predict via computer-generated models in advance of the event.

The €990,000 cost of the Restart-19 project will be shouldered by the federal states of Saxony and Saxony-Anhalt. The project’s organisers say the aim is to “identify a framework” for how larger cultural and sports events could be held “without posing a danger for the population” after 30 September….

To stop the Leipzig experiment from becoming the source of a new outbreak, signed-up volunteers will be sent a DIY test kit and have a swab at a doctor’s practice or laboratory 48 hours before the concert starts. Those who cannot show proof of a negative test at the door will be denied entry….(More)”.
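
The contact-tracer setup described in the excerpt suggests a straightforward post-hoc analysis: from proximity pings logged every five seconds, estimate which pairs of attendees spent enough cumulative time close together to count as a relevant contact. The sketch below is illustrative only; the two-metre distance threshold, the 15-minute cumulative-exposure rule, and the record layout are assumptions made for the example, not parameters of the Restart-19 protocol.

```python
from collections import defaultdict

# Illustrative sketch only: the thresholds and record layout below are
# assumptions, not parameters taken from the Restart-19 study.
PING_INTERVAL_S = 5          # devices transmit a signal every five seconds
DISTANCE_THRESHOLD_M = 2.0   # assumed "close contact" distance
MIN_EXPOSURE_S = 15 * 60     # assumed cumulative exposure needed to flag a pair

def flag_close_contacts(pings):
    """pings: iterable of (device_a, device_b, distance_m) records, one per
    five-second interval in which two devices registered each other.
    Returns the pairs whose cumulative time within the distance threshold
    meets or exceeds the minimum exposure."""
    exposure_s = defaultdict(float)
    for device_a, device_b, distance_m in pings:
        if distance_m <= DISTANCE_THRESHOLD_M:
            pair = tuple(sorted((device_a, device_b)))
            exposure_s[pair] += PING_INTERVAL_S
    return {pair for pair, seconds in exposure_s.items() if seconds >= MIN_EXPOSURE_S}

if __name__ == "__main__":
    # Attendees A and B stand close together for ~17 minutes (200 pings);
    # A and C only pass each other briefly (10 pings).
    sample = [("A", "B", 1.2)] * 200 + [("A", "C", 1.5)] * 10
    print(flag_close_contacts(sample))  # -> {('A', 'B')}
```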

Coronavirus: how the pandemic has exposed AI’s limitations


Kathy Peach at The Conversation: “It should have been artificial intelligence’s moment in the sun. With billions of dollars of investment in recent years, AI has been touted as a solution to every conceivable problem. So when the COVID-19 pandemic arrived, a multitude of AI models were immediately put to work.

Some hunted for new compounds that could be used to develop a vaccine, or attempted to improve diagnosis. Some tracked the evolution of the disease, or generated predictions for patient outcomes. Some modelled the number of cases expected given different policy choices, or tracked similarities and differences between regions.

The results, to date, have been largely disappointing. Very few of these projects have had any operational impact – hardly living up to the hype or the billions in investment. At the same time, the pandemic highlighted the fragility of many AI models. From entertainment recommendation systems to fraud detection and inventory management – the crisis has seen AI systems go awry as they struggled to adapt to sudden collective shifts in behaviour.

The unlikely hero

The unlikely hero emerging from the ashes of this pandemic is instead the crowd. Crowds of scientists around the world sharing data and insights faster than ever before. Crowds of local makers manufacturing PPE for hospitals failed by supply chains. Crowds of ordinary people organising through mutual aid groups to look after each other.

COVID-19 has reminded us of just how quickly humans can adapt existing knowledge, skills and behaviours to entirely new situations – something that highly specialised AI systems just can’t do. At least not yet….

In one of the experiments, researchers from the Istituto di Scienze e Tecnologie della Cognizione in Rome studied the use of an AI system designed to reduce social biases in collective decision-making. The AI, which held back information from the group members on what others thought early on, encouraged participants to spend more time evaluating the options by themselves.

The system succeeded in reducing the tendency of people to “follow the herd” by failing to hear diverse or minority views, or challenge assumptions – all of which are criticisms that have been levelled at the British government’s scientific advisory committees throughout the pandemic…(More)”.
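
The intervention described in this excerpt, withholding what others think until group members have formed their own judgements, can be illustrated with a toy simulation of a group estimation task. The sketch below is not the Rome researchers' system: the group size, noise level, and anchoring model of social influence are assumptions chosen only to show how early exposure to others' answers collapses the diversity of independent estimates.

```python
import random
import statistics

# Toy model only: parameters and the anchoring rule are assumptions,
# not details of the Istituto di Scienze e Tecnologie della Cognizione study.
random.seed(42)
TRUE_VALUE = 100.0
GROUP_SIZE = 20

def independent_estimates():
    # Each member forms a noisy private estimate before seeing anyone else's answer.
    return [random.gauss(TRUE_VALUE, 20.0) for _ in range(GROUP_SIZE)]

def answers_with_immediate_sharing(private_estimates, anchor_weight=0.7):
    # Members answer in sequence and anchor on the running average of the
    # answers already visible to them (a simple model of "following the herd").
    revealed = []
    for private in private_estimates:
        if revealed:
            social_anchor = statistics.mean(revealed)
            revealed.append(anchor_weight * social_anchor + (1 - anchor_weight) * private)
        else:
            revealed.append(private)
    return revealed

private = independent_estimates()
herded = answers_with_immediate_sharing(private)

# Withholding social information keeps judgements diverse; immediate sharing
# pulls later answers towards whatever the first few members happened to say.
print("independent: mean %.1f, spread %.1f" % (statistics.mean(private), statistics.stdev(private)))
print("with herding: mean %.1f, spread %.1f" % (statistics.mean(herded), statistics.stdev(herded)))
```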