
Stefaan Verhulst

Shaunacy Ferro at FastCompany: “In 2011, researchers at the MIT Media Lab debuted Place Pulse, a website that served as a kind of ‘hot or not’ for cities. Given two Google Street View images culled from a select few cities including New York City and Boston, the site asked users to click on the one that seemed safer, more affluent, or more unique. The result was an empirical way to measure urban aesthetics.

Now, that data is being used to predict which parts of cities feel the safest. StreetScore, a collaboration between the MIT Media Lab’s Macro Connections and Camera Culture groups, uses an algorithm to create a super high-resolution map of urban perceptions. The algorithmically generated data could one day be used to research the connection between urban perception and crime, as well as to inform urban design decisions.

The algorithm, created by Nikhil Naik, a Ph.D. student in the Camera Culture lab, breaks an image down into its composite features—such as building texture, colors, and shapes. Based on how Place Pulse volunteers rated similar features, the algorithm assigns the streetscape a perceived safety score between 1 and 10. These scores are visualized as geographic points on a map, designed by MIT rising sophomore Jade Philipoom. Each image available from Google Maps in the two cities is represented by a colored dot: red for the locations that the algorithm tags as unsafe, and dark green for those that appear safest. The site, now limited to New York and Boston, will be expanded to feature Chicago and Detroit later this month, and eventually, with data collected from a new version of Place Pulse, will feature dozens of cities around the world….(More)”
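To make the scoring step concrete, here is a minimal Python sketch of a StreetScore-style pipeline. It is illustrative only: the feature list, the regressor interface, and the colour thresholds are assumptions, not the actual MIT implementation.

```python
# Illustrative StreetScore-style pipeline: extract image features, predict a
# perceived-safety score from crowd-rated training data, map scores to colours.
# Feature names, model interface and thresholds are assumed, not MIT's code.

from dataclasses import dataclass

@dataclass
class StreetImage:
    lat: float
    lon: float
    features: list[float]  # e.g. texture, colour and shape descriptors

def perceived_safety(model, image: StreetImage) -> float:
    """Predict a perceived-safety score, clamped to the 1-10 range."""
    raw = model.predict([image.features])[0]  # any scikit-learn-style regressor
    return min(10.0, max(1.0, raw))

def dot_colour(score: float) -> str:
    """Map a score to a map-dot colour: red = unsafe, dark green = safest."""
    if score < 4.0:
        return "red"
    if score < 7.0:
        return "orange"
    return "darkgreen"
```

In the real system the regressor would be trained on Place Pulse pairwise votes; any model that maps image features to crowd ratings could slot into `perceived_safety`.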

How Crowdsourcing And Machine Learning Will Change The Way We Design Cities

Paper by Frank Mols et al. in the European Journal of Political Research: “Policy makers can use four different modes of governance: ‘hierarchy’, ‘markets’, ‘networks’ and ‘persuasion’. In this article, it is argued that ‘nudging’ represents a distinct (fifth) mode of governance. The effectiveness of nudging as a means of bringing about lasting behaviour change is questioned, and it is argued that the evidence for its success ignores the facts that many successful nudges are not in fact nudges; that there are instances when nudges backfire; and that there may be ethical concerns associated with nudges. Instead, and in contrast to nudging, behaviour change is more likely to be enduring where it involves social identity change and norm internalisation. The article concludes by urging public policy scholars to engage with the social identity literature on ‘social influence’, and the idea that those promoting lasting behaviour change need to engage with people not as individual cognitive misers, but as members of groups whose norms they internalise and enact. …(More)”

Why a nudge is not enough: A social identity critique of governance by stealth

New paper by Shoshana Zuboff in the Journal of Information Technology: “This article describes an emergent logic of accumulation in the networked sphere, ‘surveillance capitalism,’ and considers its implications for ‘information civilization.’ Google is to surveillance capitalism what General Motors was to managerial capitalism. Therefore the institutionalizing practices and operational assumptions of Google Inc. are the primary lens for this analysis as they are rendered in two recent articles authored by Google Chief Economist Hal Varian. Varian asserts four uses that follow from computer-mediated transactions: ‘data extraction and analysis,’ ‘new contractual forms due to better monitoring,’ ‘personalization and customization,’ and ‘continuous experiments.’ An examination of the nature and consequences of these uses sheds light on the implicit logic of surveillance capitalism and the global architecture of computer mediation upon which it depends. This architecture produces a distributed and largely uncontested new expression of power that I christen ‘Big Other.’ It is constituted by unexpected and often illegible mechanisms of extraction, commodification, and control that effectively exile persons from their own behavior while producing new markets of behavioral prediction and modification. Surveillance capitalism challenges democratic norms and departs in key ways from the centuries-long evolution of market capitalism….(More)”

Big Other: Surveillance Capitalism and the Prospects of an Information Civilization

Open Knowledge today announced plans to develop Open Trials, an open, online database of information about the world’s clinical research trials funded by The Laura and John Arnold Foundation. The project, which is designed to increase transparency and improve access to research, will be directed by Dr. Ben Goldacre, an internationally known leader on clinical transparency.


Open Trials will aggregate information from a wide variety of existing sources in order to provide a comprehensive picture of the data and documents related to all trials of medicines and other treatments around the world. Conducted in partnership with the Center for Open Science and supported by the Center’s Open Science Framework, the project will also track whether essential information about clinical trials is transparent and publicly accessible so as to improve understanding of whether specific treatments are effective and safe.

“There have been numerous positive statements about the need for greater transparency on information about clinical trials, over many years, but it has been almost impossible to track and audit exactly what is missing,” Dr. Goldacre, the project’s Chief Investigator and a Senior Clinical Research Fellow in the Centre for Evidence Based Medicine at the University of Oxford, explained. “This project aims to draw together everything that is known around each clinical trial. The end product will provide valuable information for patients, doctors, researchers, and policymakers—not just on individual trials, but also on how whole sectors, researchers, companies, and funders are performing. It will show who is failing to share information appropriately, who is doing well, and how standards can be improved.”

Patients, doctors, researchers, and policymakers use the evidence from clinical trials to make informed decisions about which treatments are best. But studies show that roughly half of all clinical trial results are not published, with positive results published twice as often as negative results. In addition, much of the important information about the methods and findings of clinical trials is only made available outside the normal indexes of academic journals….

Open Trials will help to automatically identify which trial results have not been disclosed by matching registry data on trials that have been conducted against documents containing trial results. This will facilitate routine public audit of undisclosed results. It will also improve discoverability of other documents around clinical trials, which will be indexed and, in some cases, hosted. Lastly, it will help improve recruitment for clinical trials by making information and commentary on ongoing trials more accessible….(More)”
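The audit step described here is, at its core, a set difference between trials registered and trials with discoverable results. Below is a toy Python sketch of that matching; the field names and identifiers are invented for illustration and are not OpenTrials’ actual schema or matching logic.

```python
# Toy sketch of flagging undisclosed trials by matching registry entries
# against results documents. Field names are assumed, not OpenTrials' schema.

def find_undisclosed(registry: list[dict], results: list[dict]) -> list[dict]:
    """Return registered trials with no matching results document."""
    reported_ids = {r["trial_id"] for r in results if r.get("trial_id")}
    return [t for t in registry if t["trial_id"] not in reported_ids]

registry = [
    {"trial_id": "NCT00000001", "sponsor": "Acme Pharma"},        # hypothetical
    {"trial_id": "NCT00000002", "sponsor": "Example University"}, # hypothetical
]
results = [{"trial_id": "NCT00000001", "source": "journal article"}]

for trial in find_undisclosed(registry, results):
    print(f"No results found for {trial['trial_id']} ({trial['sponsor']})")
```

In practice the hard part is record linkage: results documents often carry no clean registry ID, so fuzzy matching on titles, interventions, and investigators would be needed before a simple lookup like this works.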

Open Trials

Op-ed in the Financial Times: “As finance ministers gather this week in Washington DC, they cannot but agree and commit to fighting extreme poverty. All of us must rejoice in the fact that over the past 15 years, the world has reportedly already “halved the number of poor people living on the planet”.

But none of us really knows it for sure. It could be less, it could be more. In fact, for every crucial issue related to human development, whether it is poverty, inequality, employment, environment or urbanization, there is a seminal crisis at the heart of global decision making – the crisis of poor data.

Because the challenges are huge and the resources scarce, on these issues, maybe more than anywhere else, we need data to monitor results and adapt strategies whenever needed. Bad data feed bad management, weak accountability, loss of resources and, of course, corruption.

It is rather bewildering that while we live in this technology-driven age, the development communities and many of our African governments are relying too much on guesswork. Our friends in the development sector and our African leaders would not dream of driving their cars or flying without instruments. But somehow they pretend they can manage and develop countries without reliable data.

The development community must admit it has a big problem. The sector is relying on dodgy data sets. Take the data on extreme poverty. The data we have are mainly extrapolations of estimates from years back – even up to a decade or more ago. For 38 out of 54 African countries, data on poverty and inequality are either outdated or non-existent. How can we measure progress with such a shaky baseline? To make things worse, we also don’t know how much countries spend on fighting poverty. Only 3 per cent of African citizens live in countries where governmental budgets and expenditures are made open, according to the Open Budget Index. We will never end extreme poverty if we don’t know who or where the poor are, or how much is being spent to help them.

Our African countries have all fought and won their political independence. They should now consider the battle for economic sovereignty, which begins with the ownership of sound and robust national data: how many citizens, living where, and how, to begin with.

There are three levels of intervention required.

First, a significant increase in resources for credible, independent, national statistical institutions. Establishing a statistical office is less eye-catching than building a hospital or school, but data-driven policy will ensure that more hospitals and schools are delivered more effectively and efficiently. We urgently need these boring statistical offices. In 2013, out of a total aid budget of $134.8bn, a mere $280m went in support of statistics. Governments must also increase the resources they put into data.

Second, innovative means of collecting data. Mobile phones, geocoding, satellites and the civic engagement of young tech-savvy citizens to collect data can all secure rapid improvements in baseline data if harnessed.

Third, everyone must take on the challenge of high-quality open data as a global public good. Public registers of the ownership of companies, global standards on publishing payments and contracts in the extractives sector, and a global charter for open data standards will help media and citizens to track corruption and expose mismanagement. Proposals for a new world statistics body – “Worldstat” – should be developed and implemented….(More)”

The extreme poverty of data

Chapter by Kazjon Grace et al. in Design Computing and Cognition ’14: “Crowdsourcing design has been applied in various areas of graphic design, software design, and product design. This paper draws on those experiences and on research in diversity, creativity and motivation to present a process model for crowdsourcing experience design. Crowdsourcing experience design for volunteer online communities serves two purposes: to increase the motivation of participants by making them stakeholders in the success of the project, and to increase the creativity of the design by increasing the diversity of expertise beyond experts in experience design. Our process model for crowdsourcing design extends the meta-design architecture, in which a system for online communities is designed to be iteratively re-designed by its users. We describe how our model has been deployed and adapted to a citizen science project where nature preserve visitors can participate in the design of a system called NatureNet. The major contribution of this paper is a model for crowdsourcing experience design and a case study of how we have deployed it for the design and development of NatureNet….(More)”

A Process Model for Crowdsourcing Design: A Case Study in Citizen Science

Russell Brandom in The Verge: “Early warning systems for earthquakes can help save lives, but many countries can’t afford them. That’s why scientists are turning to another sensor already widespread in many countries: the smartphone. A single smartphone makes for a crappy earthquake sensor — but get enough of them reporting, and it won’t matter.

A new study, published today in Science Advances, says that the right network of cell phones might be able to substitute for modern seismograph arrays, providing a crucial early warning in the event of a quake. The study looks at historical earthquake data and modern smartphone hardware (based on the Nexus 5) and comes away with a map of how a smartphone-based earthquake detector might work. As it turns out, a phone’s GPS is more powerful than you might think.


Early warning systems are designed to pick up the first tremors of an earthquake, projecting where the incoming quake is centered and how strong it’s likely to be. When they work, the systems are able to give citizens and first responders crucial time to prepare for the quake. There are already seismograph-based systems in place in California, Mexico, and Japan, but poorer countries often don’t have the means to implement and maintain them. This new method wouldn’t be as good as most scientific earthquake sensors, but those can cost tens of thousands of dollars each, making a smartphone-based sensor a lot cheaper. For countries that can’t afford a seismograph-based system (which includes much of the Southern Hemisphere), it could make a crucial difference in catching quakes early.

A modern phone has almost everything you could want in an earthquake sensor: specifically, a GPS-powered location sensor, an accelerometer, and multiple data connections. There are also a lot of them, even in poor countries, so a distributed system could count on getting data points from multiple angles….(More)”
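The crowd-aggregation logic is easy to picture: one phone’s accelerometer spike is noise, but hundreds of triggers in the same area within a few seconds are a strong signal. Below is a toy Python illustration; the thresholds, grid size, and report fields are invented, and are far cruder than the detection statistics in the actual study.

```python
# Toy crowd-sourced quake detector: alert only when many phones in the same
# coarse grid cell trigger within one time window. All constants are assumed.

from collections import defaultdict

TRIGGER_G = 0.02      # per-phone acceleration threshold (fraction of g), assumed
MIN_REPORTS = 100     # phones that must trigger together before alerting, assumed
WINDOW_SECONDS = 5.0  # aggregation window, assumed

def grid_cell(lat: float, lon: float, size_deg: float = 0.1) -> tuple[int, int]:
    """Bucket GPS coordinates into coarse cells so nearby phones aggregate."""
    return (int(lat // size_deg), int(lon // size_deg))

def detect(reports: list[dict]) -> list[tuple[int, int]]:
    """Return grid cells where enough phones triggered in the same window."""
    counts: defaultdict = defaultdict(int)
    for r in reports:
        if r["accel_g"] >= TRIGGER_G:
            key = (grid_cell(r["lat"], r["lon"]), int(r["time"] // WINDOW_SECONDS))
            counts[key] += 1
    return [cell for (cell, _window), n in counts.items() if n >= MIN_REPORTS]
```

Requiring many simultaneous triggers is what makes a network of bad sensors usable: false positives from pocket jostles are uncorrelated across phones, while a real quake shakes a whole neighbourhood at once.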

These researchers want to turn phones into earthquake detectors

DataShift: “Following a study to better understand the number, type and scale of citizen-generated data initiatives across the world, the DataShift has visualised the resulting data to create an interactive online platform. Users are presented with a definition of a citizen-generated data initiative before being invited to browse the multiple initiatives according to the various themes that they address….(More)”

New Interactive Citizen-Generated Data Platform

Springwise: “There has been a lot of talk about the outdated nature of voting infrastructure. Citizens can now shop, bank and date online, but are still required to visit a polling station in person to participate in democratic votes. Harvard start-up Voatz hopes to change that with its secure, global mobile voting and campaigning platform.

Voatz could enable members of the public to cast their vote, participate in opinion polls and make campaign donations from their smartphone during elections in the not too distant future. Voters would be required to undergo comprehensive identity verification and use a biometric-enabled smartphone in order to participate in the remote, electronic voting. Voatz hopes the technology can help make voting simpler and more accessible using familiar technology…(More)”

Secure app could enable people to vote from their smartphone

Eric J. Topol in Nature: “I call for an international open medical resource to provide a database for every individual’s genomic, metabolomic, microbiomic, epigenomic and clinical information. This resource is needed in order to facilitate genetic diagnoses and transform medical care.


Laurie Becklund was a noted journalist who died in February 2015 at age 66 from breast cancer. Soon thereafter, the Los Angeles Times published her op-ed entitled “As I lay dying” (Ref. 1). She lamented, “We are each, in effect, one-person clinical trials. Yet the knowledge generated from those trials will die with us because there is no comprehensive database of metastatic breast cancer patients, their characteristics and what treatments did and didn’t help them”. She went on to assert that, in the era of big data, the lack of such a resource is “criminal”, and she is absolutely right….

Around the same time as this important op-ed, the MIT Technology Review published its issue entitled “10 Breakthrough Technologies 2015”, and on the list was the “Internet of DNA” (Ref. 2). While we are often reminded that the world we live in is becoming the “Internet of Things”, I have not seen this terminology applied to DNA before. The article on the “Internet of DNA” decried, “the unfolding calamity in genomics is that a great deal of life-saving information, though already collected, is inaccessible”. It called for a global network of millions of genomes and cited the Matchmaker Exchange as a frontrunner. For this international initiative, a growing number of research and clinical teams have come together to pool and exchange phenotypic and genotypic data for individual patients with rare disorders, in order to share this information and assist in the molecular diagnosis of individuals with rare diseases….

An Internet of DNA — or what I have referred to as a massive, open, online medicine resource (MOOM) — would help to quickly identify the genetic cause of the disorder (Ref. 4) and, in the process of doing so, precious guidance for prevention, if necessary, would become available for such families who are currently left in the lurch as to their risk of suddenly dying.

So why aren’t such MOOMs being assembled? ….

There has also been much discussion related to privacy concerns, namely that patients might be unwilling to participate in a massive medical information resource. However, multiple global consumer surveys have shown that more than 80% of individuals are ready to share their medical data provided that the data are anonymized and their privacy maximally assured (Ref. 4). Indeed, just 24 hours into Apple’s ResearchKit initiative, a smartphone-based medical research programme, tens of thousands of patients with Parkinson disease, asthma or heart disease had signed on. Some individuals are even willing to be “open source” — that is, to make their genetic and clinical data fully available with free access online, without any assurance of privacy. This willingness is seen among the participants in the recently launched Open Humans initiative. Along with the Personal Genome Project, Go Viral and American Gut have joined in this initiative. Still, studies suggest that most individuals would only agree to be medical research participants if their identities were not attainable. Unfortunately, to date, little has been done to protect individual medical privacy, for which there are both promising new data-protection technological approaches (Ref. 4) and the need for additional governmental legislation.

This leaves us with perhaps the major obstacle that is holding back the development of MOOMs — researchers. Even with big, team-science research projects pulling together hundreds of investigators and institutions throughout the world, such as the Global Alliance for Genomics and Health (GA4GH), the data obtained clinically are just as Laurie Becklund asserted in her op-ed — “one-person clinical trials” (Ref. 1). While undertaking the construction of a MOOM is a huge endeavour, there is little motivation for researchers to take on this task, as it currently offers no academic credit and has no funding source. But the transformative potential of MOOMs to improve medical care is extraordinary. Rather than having the knowledge die with each of us, the time has come to take down the walls of academic medical centres and health-care systems around the world, and create a global medical knowledge resource that leverages each individual’s information to help one another…(More)”
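The Matchmaker Exchange-style pooling mentioned above can be pictured as matching case records on shared candidate genes and overlapping phenotype terms. Here is a hypothetical Python sketch; the record fields and the match rule are invented for illustration and are not the actual Matchmaker Exchange API.

```python
# Hypothetical sketch of rare-disease case matching: two cases "match" if they
# share a candidate gene and enough phenotype terms. Not the real MME API.

from dataclasses import dataclass

@dataclass
class CaseRecord:
    case_id: str
    candidate_genes: set[str]  # e.g. gene symbols flagged by sequencing
    phenotypes: set[str]       # e.g. HPO phenotype term IDs

def find_matches(query: CaseRecord, pool: list[CaseRecord],
                 min_shared_phenotypes: int = 2) -> list[CaseRecord]:
    """Cases sharing a candidate gene and enough phenotype terms with the query."""
    return [
        case for case in pool
        if case.candidate_genes & query.candidate_genes
        and len(case.phenotypes & query.phenotypes) >= min_shared_phenotypes
    ]
```

A MOOM would extend this kind of matching beyond rare-disease genotypes to metabolomic, microbiomic, epigenomic, and clinical layers, which is what makes the engineering and governance challenge so much larger.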

The big medical data miss: challenges in establishing an open medical resource
