Stefaan Verhulst
Ben Williamson at Code Acts in Education: “Digital technologies are increasingly playing a significant role in techniques of governance in sectors such as education as well as healthcare, urban management, and in government innovation and citizen engagement in government services. But these technologies need to be sponsored and advocated by particular individuals and groups before they are embedded in these settings.
I have produced a working paper entitled Testing governance: the laboratory lives and methods of policy innovation labs which examines the role of innovation labs as sponsors of new digital technologies of governance. By combining resources and practices from politics, data analysis, media, design, and digital innovation, labs act as experimental R&D labs and practical ideas organizations for solving social and public problems, located in the borderlands between sectors, fields and disciplinary methodologies. Labs are making methods such as data analytics, design thinking and experimentation into a powerful set of governing resources. They are, in other words, making digital methods into key techniques for understanding social and public issues, and in the creation and circulation of solutions to the problems of contemporary governance, in education and elsewhere.
The working paper analyses the key methods and messages of the labs field, in particular by investigating the documentary history of Futurelab, a prototypical lab for education research and innovation that operated in Bristol, UK, between 2002 and 2010, and tracing methodological continuities through the current wave of lab development. Centrally, the working paper explores Futurelab’s contribution to the production and stabilization of a ‘sociotechnical imaginary’ of the future of education specifically, and to the future of public services more generally. It offers some preliminary analysis of how such an imaginary was embedded in the ‘laboratory life’ of Futurelab, established through its organizational networks, and operationalized in its digital methods of research and development as well as its modes of communication….(More)”
Magdalena Mis at Reuters: “Struggling with frequent water cuts, residents of Syria’s battered city of Aleppo have a new way to find the water needed for their daily lives – an interactive map on mobile phones.
The online map, created by the Red Cross and accessible through mobile phones with 3G technology, helps to locate the closest of over 80 water points across the divided city of 2 million and guides them to it using a Global Positioning System.
“The map is very simple and works on every phone, and everybody now has access to a mobile phone with 3G,” International Committee of the Red Cross (ICRC) spokesman Pawel Krzysiek told the Thomson Reuters Foundation in a phone interview from Damascus on Wednesday.
“The important thing is that it’s not just a map – which many people may not know how to read – it’s the GPS that’s making a difference because people can actually be guided to the water point closest to them,” he said.
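The guidance Krzysiek describes is, at bottom, a nearest-neighbour search over geo-coordinates: take the user's GPS fix, compute the distance to each water point, and return the closest. A minimal sketch in Python; the water-point names, coordinates, and the `haversine_km` and `nearest_water_point` helpers are invented for illustration and are not taken from the ICRC app:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance in kilometres between two (lat, lon) points.
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_water_point(user, points):
    # points: list of (name, lat, lon); returns the entry closest to the user fix.
    return min(points, key=lambda p: haversine_km(user[0], user[1], p[1], p[2]))

points = [("Mosque well", 36.2021, 37.1343), ("School tank", 36.2154, 37.1566)]
print(nearest_water_point((36.2100, 37.1500), points)[0])  # School tank
```

With roughly 80 water points, a linear scan like this is more than fast enough; a spatial index would only matter at much larger scales.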
Aleppo was Syria’s most populated city and commercial hub before the civil war erupted in 2011, but many areas have been reduced to rubble and the city has been carved up between government forces and various insurgent groups.
Water cuts are a regular occurrence, amounting to about two weeks each month, and the infrastructure is on the brink of collapse, Krzysiek said.
The water supply was restored on Wednesday after a four-day cut caused by damage to the main power line providing electricity to some 80 percent of households, Krzysiek said.
More cuts are likely because fighting is preventing engineers from repairing the power line, and diesel, used for standby generators, may run out, he added….
Krzysiek said the ICRC started working on the map after a simple version created for engineers was posted on its Facebook page in the summer, sparking a wave of comments and requests.
“Suddenly people started to share this map and were sending comments on how to improve it and asking for a new, more detailed one.”
Krzysiek said that about 140,000 people were using the old version of the map and 20,000 had already used the new version, launched on Monday…(More)”
Ed Yong at the Atlantic: “…In 2010, I posted a vial of my finest spit to the genetic-testing company 23andMe. In return, I got to see what my genes reveal about my ancestry, how they affect my risk of diseases or my responses to medical drugs, and even what they say about the texture of my earwax. (It’s dry.) 23andMe now has around a million users, as do other similar companies like Ancestry.com.
But these communities are largely separated from one another, a situation that frustrated Yaniv Erlich from the New York Genome Center and Columbia University. “Tens of millions of people will soon have access to their genomes,” he says. “Are we just going to let these data sit in silos, or can we partner with these large communities to enable some really large science? That’s why we developed DNA.LAND.”
DNA.LAND, which Erlich developed together with colleague Joe Pickrell, is a website that allows customers of other genetic-testing services to upload files containing their genetic data. Scientists can then use this data for research, to the extent that each user consents. “DNA.LAND is a way for getting the general public to participate in large-scale genetic studies,” says Erlich. “And we’re not a company. We’re a non-profit website, run by scientists.”…(More)”
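The upload step DNA.LAND depends on is mundane but instructive: consumer genotyping services export raw data as plain text, typically `#`-prefixed comment lines followed by tab-separated columns (rsid, chromosome, position, genotype). A hedged parsing sketch, assuming that column layout; the sample lines below are fabricated for illustration:

```python
import csv
import io

def parse_raw_genotypes(handle):
    """Parse a 23andMe-style raw export: skip '#' comment lines, then read
    tab-separated (rsid, chromosome, position, genotype) rows."""
    rows = (line for line in handle if not line.startswith("#"))
    reader = csv.reader(rows, delimiter="\t")
    return {rsid: (chrom, int(pos), genotype) for rsid, chrom, pos, genotype in reader}

sample = io.StringIO(
    "# This data file generated by a consumer genotyping service\n"
    "rs4477212\t1\t82154\tAA\n"
    "rs3094315\t1\t752566\tAG\n"
)
genotypes = parse_raw_genotypes(sample)
print(genotypes["rs3094315"])  # ('1', 752566, 'AG')
```

A real aggregator would also have to normalise strand orientation and genome-build coordinates across vendors, which is where most of the actual work lies.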
Deepa Rai at the World Bank blog: “….Following the earthquake, there was an overwhelming response from technocrats and data crunchers to use data visualizations for disaster risk assessment. The Government of Nepal made datasets available through its Disaster Data Portal and many organizations and individuals also pitched in and produced visual data platforms.
However, the use of open data has not been limited to disaster response. It was, and still is, instrumental in tracking how much funding has been received and how it’s being allocated. Through the use of open data, people can make their own analysis based on the information provided online.
Direct Relief, a not-for-profit company, has collected such information and helped gather data from the Prime Minister’s relief fund, then created infographics which have been useful for media and immediate distribution on social platforms. MapJournal’s visual maps became vital during the Post Disaster Needs Assessment (PDNA) to assess and map areas where relief and reconstruction efforts were urgently needed.

Photo Credit: Data Relief Services
Open data and accountability
However, the work of open data doesn’t end with relief distribution and disaster risk assessment. It is also hugely impactful in keeping track of how relief money is pledged, allocated, and spent. One such web application, openenet.net, is making this possible by aggregating post disaster funding data from international and national sources into infographics. “The objective of the system,” reads the website “is to ensure transparency and accountability of relief funds and resources to ensure that it reaches to targeted beneficiaries. We believe that transparency of funds in an open and accessible manner within a central platform is perhaps the first step to ensure effective mobilization of available resources.”
Four months after the earthquake, Nepali media have already started to report on aid spending — or the lack of it. This has been made possible by the use of open data from the Ministry of Home Affairs (MoHA) and illustrates how critical data is for the effective use of aid money.
Open data platforms emerging after the quakes have been crucial in questioning the accountability of aid provisions and ultimately resulting in more successful development outcomes….(More)”
Hien To, Seon Ho Kim, and Cyrus Shahabi: “Efficient and thorough data collection and its timely analysis are critical for disaster response and recovery in order to save people’s lives during disasters. However, access to comprehensive data in disaster areas and their quick analysis to transform the data to actionable knowledge are challenging. With the popularity and pervasiveness of mobile devices, crowdsourcing data collection and analysis has emerged as an effective and scalable solution. This paper addresses the problem of crowdsourcing mobile videos for disasters by identifying two unique challenges of 1) prioritizing visual data collection and transmission under bandwidth scarcity caused by damaged communication networks and 2) analyzing the acquired data in a timely manner. We introduce a new crowdsourcing framework for acquiring and analyzing the mobile videos utilizing fine granularity spatial metadata of videos for a rapidly changing disaster situation. We also develop an analytical model to quantify the visual awareness of a video based on its metadata and propose the visual awareness maximization problem for acquiring the most relevant data under bandwidth constraints. The collected videos are evenly distributed to off-site analysts to collectively minimize crowdsourcing efforts for analysis. Our simulation results demonstrate the effectiveness and feasibility of the proposed framework….(More)”
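The visual awareness maximization problem the abstract describes is a budgeted selection problem: each video has a size (transmission cost) and an awareness score derived from its spatial metadata, and the goal is to maximize total awareness under a bandwidth cap. The paper's own model and scoring are not reproduced here; the sketch below substitutes a generic greedy awareness-per-megabyte heuristic, with made-up video IDs and scores, purely to show the shape of the problem:

```python
def select_videos(videos, bandwidth):
    """Greedy budgeted selection: take videos in order of awareness-per-MB
    until the bandwidth budget is exhausted."""
    chosen, used = [], 0.0
    for v in sorted(videos, key=lambda v: v["awareness"] / v["size_mb"], reverse=True):
        if used + v["size_mb"] <= bandwidth:
            chosen.append(v["id"])
            used += v["size_mb"]
    return chosen

videos = [
    {"id": "clip-a", "size_mb": 40, "awareness": 0.9},
    {"id": "clip-b", "size_mb": 10, "awareness": 0.5},
    {"id": "clip-c", "size_mb": 25, "awareness": 0.4},
]
print(select_videos(videos, bandwidth=50))  # ['clip-b', 'clip-a']
```

Greedy ratio selection is a standard knapsack heuristic, not guaranteed optimal, but it conveys why per-video metadata (rather than the video bytes themselves) is enough to prioritize transmission.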
Paper by Kush R. Varshney: “This paper presents a viewpoint on an emerging dichotomy in data science: applications in which predictions of data-driven algorithms are used to support people in making consequential decisions that can have a profound effect on other people’s lives and applications in which data-driven algorithms act autonomously in settings of low consequence and large scale. An example of the first type of application is prison sentencing and of the second type is selecting news stories to appear on a person’s web portal home page. It is argued that the two types of applications require data, algorithms and models with vastly different properties along several dimensions, including privacy, equitability, robustness, interpretability, causality, and openness. Furthermore, it is argued that the second type of application cannot always be used as a surrogate to develop methods for the first type of application. To contribute to the development of methods for the first type of application, one must really be working on the first type of application….(More)”
Minna Ruckenstein and Mika Pantzar in New Media and Society: “This article investigates the metaphor of the Quantified Self (QS) as it is presented in the magazine Wired (2008–2012). Four interrelated themes—transparency, optimization, feedback loop, and biohacking—are identified as formative in defining a new numerical self and promoting a dataist paradigm. Wired captures certain interests and desires with the QS metaphor, while ignoring and downplaying others, suggesting that the QS positions self-tracking devices and applications as interfaces that energize technological engagements, thereby pushing us to rethink life in a data-driven manner. The thematic analysis of the QS is treated as a schematic aid for raising critical questions about self-quantification, for instance, detecting the merging of epistemological claims, technological devices, and market-making efforts. From this perspective, another definition of the QS emerges: a knowledge system that remains flexible in its aims and can be used as a resource for epistemological inquiry and in the formation of alternative paradigms….(More)”
Michael Schudson in Columbia Journalism Review: “…what began as an effort to keep the executive in check by the Congress became a law that helped journalists, historians, and ordinary citizens monitor federal agencies. Nearly 50 years later, it may all sound easy and obvious. It was neither. And this burst of political engagement is rarely, if ever, mentioned by journalists themselves as an exception to normal “acts of journalism.”
But how did it happen at all? In 1948, the American Society of Newspaper Editors set up its first-ever committee on government restrictions on the freedom to gather and publish news. It was called the “Committee on World Freedom of Information”—a name that implied that limiting journalists’ access or straightforward censorship was a problem in other countries. The committee protested Argentina’s restrictions on what US correspondents could report, censorship in Guatemala, and—closer to home—US military censorship in occupied Japan.
When the ASNE committee turned to the problem of secrecy in the US government in the early 1950s, it chose to actively criticize such secrecy, but not to “become a legislative committee.” Even in 1953, when ASNE leaders realized that significant progress on government secrecy might require federal legislation, they concluded that “watching all such legislation” would be an important task for the committee, but did not suggest taking a public position.
Representative Moss changed this. Moss was a small businessman who had served several terms in the California legislature before his election to Congress in 1952. During his first term, he requested some data from the Civil Service Commission about dismissals of government employees on suspicion of disloyalty. The commission flatly turned him down. “My experience in Washington quickly proved that you had a hell of a time getting any information,” Moss recalled. Two years later, a newly re-elected Moss became chair of a House subcommittee on government information….(More)”
Raphael Silberzahn & Eric L. Uhlmann in Nature: “…For many research problems, crowdsourcing analyses will not be the optimal solution. It demands a huge amount of resources for just one research question. Some questions will not benefit from a crowd of analysts: researchers’ approaches will be much more similar for simple data sets and research designs than for large and complex ones. Importantly, crowdsourcing does not eliminate all bias. Decisions must still be made about what hypotheses to test, from where to get suitable data, and importantly, which variables can or cannot be collected. (For instance, we did not consider whether a particular player’s skin tone was lighter or darker than that of most of the other players on his team.) Finally, researchers may continue to disagree about findings, which makes it challenging to present a manuscript with a clear conclusion. It can also be puzzling: the investment of more resources can lead to less-clear outcomes.
Still, the effort can be well worth it. Crowdsourcing research can reveal how conclusions are contingent on analytical choices. Furthermore, the crowdsourcing framework also provides researchers with a safe space in which they can vet analytical approaches, explore doubts and get a second, third or fourth opinion. Discussions about analytical approaches happen before committing to a particular strategy. In our project, the teams were essentially peer reviewing each other’s work before even settling on their own analyses. And we found that researchers did change their minds through the course of analysis.
Crowdsourcing also reduces the incentive for flashy results. A single-team project may be published only if it finds significant effects; participants in crowdsourced projects can contribute even with null findings. A range of scientific possibilities are revealed, the results are more credible and analytical choices that seem to sway conclusions can point research in fruitful directions. What is more, analysts learn from each other, and the creativity required to construct analytical methodologies can be better appreciated by the research community and the public.
Of course, researchers who painstakingly collect a data set may not want to share it with others. But greater certainty comes from having an independent check. A coordinated effort boosts incentives for multiple analyses and perspectives in a way that simply making data available post-publication does not.
The transparency resulting from a crowdsourced approach should be particularly beneficial when important policy issues are at stake. The uncertainty of scientific conclusions about, for example, the effects of the minimum wage on unemployment, and the consequences of economic austerity policies should be investigated by crowds of researchers rather than left to single teams of analysts.
Under the current system, strong storylines win out over messy results. Worse, once a finding has been published in a journal, it becomes difficult to challenge. Ideas become entrenched too quickly, and uprooting them is more disruptive than it ought to be. The crowdsourcing approach gives space to dissenting opinions.
Scientists around the world are hungry for more-reliable ways to discover knowledge and eager to forge new kinds of collaborations to do so. Our first project had a budget of zero, and we attracted scores of fellow scientists with two tweets and a Facebook post.
Researchers who are interested in starting or participating in collaborative crowdsourcing projects can access resources available online. We have publicly shared all our materials and survey templates, and the Center for Open Science has just launched ManyLab, a web space where researchers can join crowdsourced projects….(More).
See also Nature special collection: reproducibility
Geoff Mulgan at NESTA: “Many of us spend much of our time in meetings and at conferences. But too often these feel like a waste of time, or fail to make the most of the knowledge and experience of the people present.
Meetings have changed – with much more use of online tools, and a growing range of different meeting formats. But our sense is that meetings could be much better run and achieve better results.
This paper tries to help. It summarises some of what’s known about how meetings work well or badly; makes recommendations about how to make meetings better; and showcases some interesting recent innovations. It forms part of a larger research programme at Nesta on collective intelligence which is investigating how groups and organisations can make the most of their brains, and of the technologies they use.
We hope the paper will be helpful to anyone designing or running meetings of any kind, and that readers will contribute good examples, ideas and evidence which can be added into future versions….(More)”
