Introducing Reach: find and track research being put into action


Blog by Dawn Duhaney: “At Wellcome Data Labs we’re releasing our first product, Reach. Our goal is to support funding organisations and researchers by making it easier to find and track scientific research being put into action by governments and global health organisations.

https://reach.wellcomedatalabs.org/

We initially focused on solving this problem in collaboration with Wellcome’s internal Insights and Analysis team and with partner organisations before deciding to release Reach more widely.

We found that evaluation teams wanted tools to help them measure the influence academic research was having on policy making institutions. We noticed that it is often challenging to track how scientific evidence makes its way into policy making. Institutions like the UK Government and the World Health Organisation have hundreds of thousands of policy documents available — it’s a heavily manual task to search through them to find evidence of our funded research.
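
As a rough illustration of the kind of search a tool like Reach automates, the sketch below fuzzily matches funded-publication titles against the text of a policy document. It is purely hypothetical: the titles, threshold and matching method are invented for illustration and are not Reach’s actual pipeline.

```python
# Hypothetical sketch: flag policy documents that may cite funded research by
# fuzzy-matching publication titles against the document text. Not Reach's
# actual method; titles, threshold and approach are invented for illustration.
from difflib import SequenceMatcher


def best_match(title: str, text: str) -> float:
    """Slide a title-sized window across the text and return the best similarity ratio."""
    t, body = title.lower(), text.lower()
    window = len(t)
    return max(
        SequenceMatcher(None, t, body[i:i + window]).ratio()
        for i in range(max(1, len(body) - window + 1))
    )


funded_titles = [
    "Improving paediatric care guidelines in Kenya",
    "Maternal health outcomes in low-resource settings",
]

policy_text = (
    "The guideline draws on Maternal Health Outcomes in Low Resource Settings "
    "and other recent evidence."
)

for title in funded_titles:
    score = best_match(title, policy_text)
    if score > 0.8:  # arbitrary threshold for this sketch
        print(f"Possible citation of '{title}' (similarity {score:.2f})")
```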

At Wellcome we have some established methods for collecting evidence of policy influence from our funded research, such as end-of-scheme reporting and word of mouth. Through these methods we found great examples of how funded research was being put into policy and practice by government and global health organisations.

One example is from Kenya. The KEMRI Research Programme — a collaboration between the Kenya Medical Research Institute, Wellcome and Oxford University — launched a research programme to improve maternal health in 2005. Their research was cited by the World Health Organisation and, with advocacy efforts from the KEMRI team, influenced the development of new Kenyan national guidelines for paediatric care.

In Wellcome Data Labs we wanted to build a tool that would aid the discovery of evidence-based policy making and be a step in the process of assessing research influence for evaluators, researchers and funding institutions….(More)”.

Armchair Survey Research: A Possible Post-COVID-19 Boon in Social Science


Paper by Samiul Hasan: “Post-COVID-19 technologies for higher education and corporate communication have opened up a wonderful opportunity for Online Survey Research. These technologies could be used for one-to-one interviews, group interviews, group questionnaire surveys, online questionnaire surveys, or even ‘focus group’ discussions. This new trend, which may aptly be called ‘armchair survey research’, may be the only, or at least a new, trend in social science research. If that is the case, an obvious question might be: what is ‘survey research’, and how is it going to be easier in the post-COVID-19 world? My intention is to offer some help to promising researchers who have all the qualities and eagerness to undertake good social science research for publication, but no funding.

The text is divided into three main parts. Part one deals with “Science, Social Science and Research” to highlight the importance of ‘What’, ‘Why’ and ‘So what’, and of framing a research question, for good research. The discussion then moves to ‘reliability and validity’ in social science research, including falsifiability, content validity, and construct validity. This part ends with discussions of concepts, constructs, and variables in a theoretical (conceptual) framework. The second part deals categorically with ‘survey research’, highlighting the use and features of interviews and questionnaire surveys. It deals primarily with the importance and use of nominal and ordinal response scales, as well as the essentials of question content and wording, and question sequencing. The last part deals with survey research in the post-COVID-19 period, highlighting strategies for undertaking better online survey research without any funding….(More)”.

Scaling up Citizen Science


Report for the European Commission: “The rapid pace of technological advancement, the open innovation paradigm, and the ubiquity of high-speed connectivity greatly facilitate individuals’ access to information, increasing their opportunities to achieve greater emancipation and empowerment. This provides new opportunities for widening participation in scientific research and policy, opening a myriad of avenues that drive a paradigm shift across fields and disciplines, including the strengthening of Citizen Science. Nowadays, the application of Citizen Science principles spans several scientific disciplines and covers different geographical scales. While the interdisciplinary approach taken so far has produced significant results and findings, the current landscape is one of heavily context-dependent projects, where the learning outcomes of pilots remain situated within the specific areas in which those projects are implemented. There is little evidence on how to foster the spread and scalability of Citizen Science. Furthermore, the Citizen Science community currently lacks general agreement on what these terms mean and entail, and on how they can be approached.

To address these issues, we developed a theoretically grounded framework to unbundle the meaning of scaling and spreading in Citizen Science. In this framework, we defined nine constructs that represent the enablers of these complex phenomena. We then validated, enriched, and instantiated the framework through four qualitative case studies of diverse, successful examples of scaling and spreading in Citizen Science. The framework and these rich experiences allow us to formulate four theoretically and empirically grounded scaling scenarios. We propose the framework and the in-depth case studies as the main contribution of this report, and we hope to stimulate future research that further refines our understanding of the important, complex and multifaceted phenomena of scaling and spreading in Citizen Science. The framework also offers a structured mindset for practitioners who want to ideate and start a new Citizen Science intervention that is scalable by design, or who wish to assess the scalability potential of an existing initiative….(More)”.

The Next Generation Humanitarian Distributed Platform


Report by Mercy Corps, the Danish Red Cross and hiveonline: “… call for the development of a shared, sector-wide “blockchain for good” to allow the aid sector to better automate and track processes in real time, and maintain secure records. This would help modernize and coordinate the sector so it can reach more people, as growing threats such as pandemics, climate change and natural disasters require aid to be disbursed faster, more widely and more efficiently.

A cross-sector blockchain platform – a digital database that can be simultaneously used and shared within a large decentralized, publicly accessible network – could support applications ranging from cash and voucher distribution to identity services, natural capital and carbon tracking, and donor engagement.
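
To make the underlying idea concrete, here is a minimal, hypothetical sketch of the append-only, hash-chained ledger such a platform would generalise. It is illustrative only: a real Humanitarian Distributed Platform would add consensus, permissions and shared governance that a toy example cannot capture.

```python
# Toy sketch of an append-only, hash-chained ledger for humanitarian records.
# Illustrative only; record fields and identifiers are invented.
import hashlib
import json
import time


def add_record(chain: list, payload: dict) -> dict:
    """Append a record whose hash covers its payload and the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"timestamp": time.time(), "payload": payload, "prev_hash": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return record


ledger: list = []
add_record(ledger, {"type": "voucher_distribution", "household": "H-1042", "amount": 50})
add_record(ledger, {"type": "voucher_redemption", "household": "H-1042", "vendor": "V-7"})

# Any participant holding a copy can check that earlier records were not altered.
for i in range(1, len(ledger)):
    assert ledger[i]["prev_hash"] == ledger[i - 1]["hash"]
print(f"Ledger holds {len(ledger)} verifiable records")
```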

The report authors call for the creation of a committee to develop cross-sector governance and coordinate the implementation of a shared “Humanitarian Distributed Platform.” The authors believe the technology can help organizations fulfill commitments made to transparency, collaboration and efficiency under the Humanitarian Grand Bargain.

The report is compiled from responses of 35 survey participants, representing stakeholders in the humanitarian sector, including NGO project implementers, consultants, blockchain developers, academics, and founders. A further 39 direct interviews took place over the course of the research between July and September 2020….(More)”.

Facial-recognition research needs an ethical reckoning


Editorial in Nature: “…As Nature reports in a series of Features on facial recognition this week, many in the field are rightly worried about how the technology is being used. They know that their work enables people to be easily identified, and therefore targeted, on an unprecedented scale. Some scientists are analysing the inaccuracies and biases inherent in facial-recognition technology, warning of discrimination, and joining the campaigners calling for stronger regulation, greater transparency, consultation with the communities that are being monitored by cameras — and for use of the technology to be suspended while lawmakers reconsider where and how it should be used. The technology might well have benefits, but these need to be assessed against the risks, which is why it needs to be properly and carefully regulated.

Responsible studies

Some scientists are urging a rethink of ethics in the field of facial-recognition research, too. They are arguing, for example, that scientists should not be doing certain types of research. Many are angry about academic studies that sought to study the faces of people from vulnerable groups, such as the Uyghur population in China, whom the government has subjected to surveillance and detained on a mass scale.

Others have condemned papers that sought to classify faces by scientifically and ethically dubious measures such as criminality…. One problem is that AI guidance tends to consist of principles that aren’t easily translated into practice. Last year, the philosopher Brent Mittelstadt at the University of Oxford, UK, noted that at least 84 AI ethics initiatives had produced high-level principles on both the ethical development and deployment of AI (B. Mittelstadt Nature Mach. Intell. 1, 501–507; 2019). These tended to converge around classical medical-ethics concepts, such as respect for human autonomy, the prevention of harm, fairness and explicability (or transparency). But Mittelstadt pointed out that different cultures disagree fundamentally on what principles such as ‘fairness’ or ‘respect for autonomy’ actually mean in practice. Medicine has internationally agreed norms for preventing harm to patients, and robust accountability mechanisms. AI lacks these, Mittelstadt noted. Specific case studies and worked examples would be much more helpful to prevent ethics guidance becoming little more than window-dressing….(More)”.

Technologies of Speculation: The limits of knowledge in a data-driven society


Book by Sun-ha Hong: “What counts as knowledge in the age of big data and smart machines? In its pursuit of better knowledge, technology is reshaping what counts as knowledge in its own image – and demanding that the rest of us catch up to new machinic standards for what counts as suspicious, informed, employable. In the process, datafication often generates speculation as much as it does information. The push for algorithmic certainty sets loose an expansive array of incomplete archives, speculative judgments and simulated futures where technology meets enduring social and political problems.

Technologies of Speculation traces this technological manufacturing of speculation as knowledge. It shows how unprovable predictions, uncertain data and black-boxed systems are upgraded into the status of fact – with lasting consequences for criminal justice, public opinion, employability, and more. It tells the story of vast dragnet systems constructed to predict the next terrorist, and how familiar forms of prejudice seep into the data by the back door. In software placeholders like ‘Mohammed Badguy’, the fantasy of pure data collides with the old spectre of national purity. It tells the story of smart machines for ubiquitous and automated self-tracking, manufacturing knowledge that paradoxically lies beyond the human senses. Such data is increasingly being taken up by employers, insurers and courts of law, creating imperfect proxies through which my truth can be overruled.

The book situates ongoing controversies over AI and algorithms within a broader societal faith in objective truth and technological progress. It argues that even as datafication leverages this faith to establish its dominance, it is dismantling the longstanding link between knowledge and human reason, rational publics and free individuals. Technologies of Speculation thus emphasises the basic ethical problem underlying contemporary debates over privacy, surveillance and algorithmic bias: who, or what, has the right to the truth of who I am and what is good for me? If data promises objective knowledge, then we must ask in return: knowledge by and for whom, enabling what forms of life for the human subject?…(More)”.

Smart urban governance: an alternative to technocratic “smartness”


Paper by Huaxiong Jiang, Stan Geertman & Patrick Witte: “This paper argues for a specific urban planning perspective on smart governance that we call “smart urban governance,” which represents a move away from the technocratic way of governing cities often found in smart cities. A framework on smart urban governance is proposed on the basis of three intertwined key components, namely spatial, institutional, and technological components. To test the applicability of the framework, we conducted an international questionnaire survey on smart city projects. We then identified and discursively analyzed two smart city projects—Smart Nation Singapore and Helsinki Smart City—to illustrate how this framework works in practice. The questionnaire survey revealed that smart urban governance varies remarkably: As urban issues differ in different contexts, the governance modes and relevant ICT functionalities applied also differ considerably. Moreover, the case analysis indicates that a focus on substantive urban challenges helps to define appropriate modes of governance and develop dedicated technologies that can contribute to solving specific smart city challenges. The analyses of both cases highlight the importance of context (cultural, political, economic, etc.) in analyzing interactions between the components. In this, smart urban governance promotes a sociotechnical way of governing cities in the “smart” era by starting with the urban issue at stake, promoting demand-driven governance modes, and shaping technological intelligence more socially, given the specific context….(More)”.

Digital Democracy’s Road Ahead


Richard Hughes Gibson at the Hedgehog Review: “In the last decade of the twentieth century, as we’ve seen, Howard Rheingold and William J. Mitchell imagined the Web as an “electronic agora” where netizens would roam freely, mixing business, pleasure, and politics. Al Gore envisioned it as an “information superhighway” system for which any computer could offer an onramp. Our current condition, by contrast, has been likened to shuffling between “walled gardens,” each platform—be it Facebook, Apple, Amazon, or Google—being its own tightly controlled ecosystem. Yet even this metaphor is perhaps too benign. As the cultural critic Alan Jacobs has observed, “they are not gardens; they are walled industrial sites, within which users, for no financial compensation, produce data which the owners of the factories sift and then sell.”

Harvard Business School professor Shoshana Zuboff has dubbed the business model underlying these factories “surveillance capitalism.” Surveillance capitalism works by collecting information about you (your Internet activity, call history, app usage, your voice, your location, even your fitness level), which creates profiles of what you like, where you go, who you know, and who you are. That shadowy portrait makes a powerful tool for predicting what kinds of products and services you might like to purchase, and other companies are happy to pay for such finely tuned targeted advertising. (Facebook alone generated $69 billion in ad revenue last year.)

The information-gathering can’t ever stop, however; the business model depends on a steady supply of new user data to inform the next round of predictions. This “extraction imperative,” as Zuboff calls it, is inherently monopolistic, rival companies being both a threat that must be eliminated and a potential gold mine from which more user data can be extracted (see Facebook’s acquisitions of competitors WhatsApp and Instagram). Equally worryingly, the big tech companies have begun moving into other sectors of the economy, as seen, for example, in Google’s quiet entry last year into the medical records business (unbeknownst to the patients and physicians whose data was mined).

There is growing consensus among legal scholars and social scientists that these practices are hazardous to democracy. Commentators worry over the consequences of putting so much wealth in so few hands so quickly (Zuboff calls it a “new Gilded Age”). They note the number of tech executives who’ve gone on to high-ranking government posts and vice versa. They point to the fact that—contrary to Mark Zuckerberg’s 2010 declaration that privacy is no longer a “social norm”—users are indeed worried about privacy. Scholars note, furthermore, that these platforms are not a genuine reflection of public opinion, though they are often treated as such. Social media can operate as echo chambers, only showing you what people like you read, think, do. Paradoxically, they can also become pressure cookers. As is now widely documented, many algorithms reward—and thereby amplify—the most divisive and thus most attention-grabbing content. Keeping us dialed in—whether for the next round of affirmation or outrage—is essential to their success….(More)”.
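
A toy sketch of that last dynamic, assuming a feed that ranks posts purely by a predicted-engagement score (the posts and scores are invented, and no real platform’s ranking model is shown):

```python
# Invented example: ranking purely by predicted engagement pushes the most
# provocative item to the top of the feed, as the passage above describes.
posts = [
    {"title": "Local library extends opening hours", "predicted_engagement": 0.12},
    {"title": "Nuanced explainer on a policy trade-off", "predicted_engagement": 0.21},
    {"title": "Outrage-bait take on the same policy", "predicted_engagement": 0.87},
]

feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for post in feed:
    print(f"{post['predicted_engagement']:.2f}  {post['title']}")
```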

The Few, the Tired, the Open Source Coders


Article by Clive Thompson: “…When the open source concept emerged in the ’90s, it was conceived as a bold new form of communal labor: digital barn raisings. If you made your code open source, dozens or even hundreds of programmers would chip in to improve it. Many hands would make light work. Everyone would feel ownership.

Now, it’s true that open source has, overall, been a wild success. Every startup, when creating its own software services or products, relies on open source software from folks like Thornton: open source web-server code, open source neural-net code. But, with the exception of some big projects—like Linux—the labor involved isn’t particularly communal. Most are like Bootstrap, where the majority of the work landed on a tiny team of people.

Recently, Nadia Eghbal—the head of writer experience at the email newsletter platform Substack—published Working in Public, a fascinating book for which she spoke to hundreds of open source coders. She pinpointed the change I’m describing here. No matter how hard the programmers worked, most “still felt underwater in some shape or form,” Eghbal told me.

Why didn’t the barn-raising model pan out? As Eghbal notes, it’s partly that the random folks who pitch in make only very small contributions, like fixing a bug. Making and remaking code requires a lot of high-level synthesis—which, as it turns out, is hard to break into little pieces. It lives best in the heads of a small number of people.

Yet those poor top-level coders still need to respond to the smaller contributions (to say nothing of requests for help or reams of abuse). Their burdens, Eghbal realized, felt like those of YouTubers or Instagram influencers who feel overwhelmed by their ardent fan bases—but without the huge, ad-based remuneration.

Sometimes open source coders simply walk away: Let someone else deal with this crap. Studies suggest that about 9.5 percent of all open source code is abandoned, and a quarter is probably close to being so. This can be dangerous: If code isn’t regularly updated, it risks causing havoc if someone later relies on it. Worse, abandoned code can be hijacked for ill use. Two years ago, the pseudonymous coder right9ctrl took over a piece of open source code that was used by bitcoin firms—and then rewrote it to try to steal cryptocurrency….(More)”.

Data could hold the key to stopping Alzheimer’s


Blog post by Bill Gates: “My family loves to do jigsaw puzzles. It’s one of our favorite activities to do together, especially when we’re on vacation. There is something so satisfying about everyone working as a team to put down piece after piece until finally the whole thing is done.

In a lot of ways, the fight against Alzheimer’s disease reminds me of doing a puzzle. Your goal is to see the whole picture, so that you can understand the disease well enough to better diagnose and treat it. But in order to see the complete picture, you need to figure out how all of the pieces fit together.

Right now, all over the world, researchers are collecting data about Alzheimer’s disease. Some of these scientists are working on drug trials aimed at finding a way to stop the disease’s progression. Others are studying how our brain works, or how it changes as we age. In each case, they’re learning new things about the disease.

But until recently, Alzheimer’s researchers often had to jump through a lot of hoops to share their data—to see if and how the puzzle pieces fit together. There are a few reasons for this. For one thing, there is a lot of confusion about what information you can and can’t share because of patient privacy. Often there weren’t easily available tools and technologies to facilitate broad data-sharing and access. In addition, pharmaceutical companies invest a lot of money into clinical trials, and often they aren’t eager for their competitors to benefit from that investment, especially when the programs are still ongoing.

Unfortunately, this siloed approach to research data hasn’t yielded great results. We have only made incremental progress in therapeutics since the late 1990s. There’s a lot that we still don’t know about Alzheimer’s, including what part of the brain breaks down first and how or when you should intervene. But I’m hopeful that will change soon thanks in part to the Alzheimer’s Disease Data Initiative, or ADDI….(More)“.