Study by Sean Martin McDonald: “…undertaken with support from the Open Society Foundation, Ford Foundation, and Media Democracy Fund, explores the use of Big Data in the form of Call Detail Record (CDR) data in humanitarian crises.
It discusses the challenges of digital humanitarian coordination in health emergencies like the Ebola outbreak in West Africa, and the marked tension in the debate around experimentation with humanitarian technologies and the impact on privacy. McDonald’s research focuses on the two primary legal and human rights frameworks, privacy and property, to question the impact of unregulated use of CDRs on human rights. It also highlights how the diffusion of data science to the realm of international development constitutes a genuine opportunity to bring powerful new tools to fight crises and emergencies.
Analysing the risks of using CDRs to perform migration analysis and contact tracing without user consent, as well as the application of big data to disease surveillance, is an important entry point into the debate around the use of Big Data for development and humanitarian aid. The paper also raises crucial questions of legal significance about access to information, the limits of data sharing, and the proportionality of privacy invasion in the name of the public good. These issues are highly relevant today, as big data’s emerging role in development, including its actual and potential uses as well as its harms, is under consideration across the world.
The paper highlights the absence of a dialogue around the significant legal risks posed by the collection, use, and international transfer of personally identifiable data and humanitarian information, and the grey areas around assumptions of public good. The paper calls for a critical discussion around the experimental nature of data modelling in emergency response, emphasizing that mismanagement of information can erode the very human rights protections it is meant to serve….
Blog by Stefaan G. Verhulst, Iryna Susha and Alexander Kostura: “Data Collaboratives refer to a new form of collaboration, beyond the public-private partnership model, in which participants from different sectors (private companies, research institutions, and government agencies) share data to help solve public problems. Several of society’s greatest challenges — from climate change to poverty — require greater access to big (but not always open) data sets, more cross-sector collaboration, and increased capacity for data analysis. Participants at the workshop and breakout session explored the various ways in which data collaboratives can help meet these needs.
Matching supply and demand of data emerged as one of the most important and overarching issues facing the big and open data communities. Participants agreed that more experimentation is needed so that new, innovative and more successful models of data sharing can be identified.
How to discover and enable such models? When asked how the international community might foster greater experimentation, participants indicated the need to develop the following:
· A responsible data framework that serves to build trust in sharing data would be based upon existing frameworks but also accommodate emerging technologies and practices. It would also need to be sensitive to public opinion and perception.
· Increased insight into different business models that may facilitate the sharing of data. As experimentation continues, the data community should map emerging practices and models of sharing so that successful cases can be replicated.
· Capacity to tap into the potential value of data. On the demand side, capacity refers to the ability to pose good questions, understand current data limitations, and seek new data sets responsibly. On the supply side, this means seeking shared value in collaboration, thinking creatively about public use of private data, and establishing norms of responsibility around security, privacy, and anonymity.
· A transparent stock of the available data supply, including an inventory of what corporate data exist that could match multiple demands, shared through established networks and new collaborative institutional structures.
· Mapping emerging practices and models of sharing. Corporate data offers value not only for humanitarian action (which was a particular focus at the conference) but also for a variety of other domains, including science, agriculture, health care, urban development, environment, media and arts, and others. Gaining insight into the practices that emerge across sectors could broaden the spectrum of what is feasible and how.
In general, it was felt that understanding the business models underlying data collaboratives is of utmost importance in order to achieve win-win outcomes for both private and public sector players. Moreover, issues of public perception and trust were raised as important concerns of government organizations participating in data collaboratives….(More)”
Mark Gorenberg, Craig Mundie, Eric Schmidt and Marjory Blumenthal at PCAST: “Growing urbanization presents the United States with an opportunity to showcase its innovation strength, grow its exports, and help to improve citizens’ lives – all at once. Seizing this triple opportunity will involve a concerted effort to develop and apply new technologies to enhance the way cities work for the people who live there.
A new report released today by the President’s Council of Advisors on Science and Technology (PCAST), Technology and the Future of Cities, lays out why now is a good time to promote technologies for cities: more (and more diverse) people are living in cities; people are increasingly open to different ways of using space, living, working, and traveling across town; physical infrastructures for transportation, energy, and water are aging; and a wide range of innovations are in reach that can yield better infrastructures and help in the design and operation of city services.
There are also new ways to collect and use information to design and operate systems and services. Better use of information can help make the most of limited resources – whether city budgets or citizens’ time – and help make sure that the neediest as well as the affluent benefit from new technology.
Although the vision of technology’s promise applies city-wide, PCAST suggests that a practical way for cities to adopt infrastructural and other innovation is by starting in a discrete area – a district, the dimensions of which depend on the innovation in question. Experiences in districts can help inform decisions elsewhere in a given city – and in other cities. PCAST urges broader sharing of information about, and tools for, innovation in cities.
Such sharing is already happening in isolated pockets focused on either specific kinds of information or recipients of specific kinds of funding. A more comprehensive City Web, achieved through broader interconnection, could inform and impel urban innovation. A systematic approach to developing open-data resources for cities is recommended, too.
PCAST recommends a variety of steps to make the most of the Federal Government’s engagement with cities. To begin, it calls for more – and more effective – coordination among Federal agencies that are key to infrastructural investments in cities. Coordination across agencies, of course, is the key to place-based policy. Building on the White House Smart Cities Initiative, which promotes not only R&D but also deployment of IT-based approaches to help cities solve challenges, PCAST also calls for expanding research and development coordination to include the physical, infrastructural technologies that are so fundamental to city services.
A new era of city design and city life is emerging. If the United States steers Federal investments in cities in ways that foster innovation, the impacts can be substantial. The rest of the world has also seen the potential, with numerous cities showcasing different approaches to innovation. The time to aim for leadership in urban technologies and urban science is now….(More)”
Open Access Button: “Hidden data is hindering research, and we’re tired of it. Next week we’ll release the Open Data Button beta as part of Open Data Day. The Open Data Button will help people find, release, and share the data behind papers. We need your support to share, test, and improve the Open Data Button. Today, we’re going to provide some in-depth info about the tool.
You’ll be able to download the free Open Data Button on the 29th of February. Follow the launch conversation on Twitter and at #opendatabutton.
How the Open Data Button works
You will be able to download the Open Data Button on Chrome, and later on Firefox. When you need the data supporting a paper (even if it’s behind a paywall), push the Button. If the data has already been made available through the Open Data Button, we’ll give you a link. If it hasn’t, you’ll be able to start a request for the data. Eventually, we want to search a variety of other sources for it – but can’t yet (read on, we need your help with that).
The request will be sent to the author. We know sharing data can be hard and there are sometimes good reasons not to. The author will be able to respond to it by saying how long it’ll take to share the data – or if they can’t. If the data is already available, the author can simply share a URL to the dataset. If it isn’t, they can attach files to a response for us to make available. Files shared with us will be deposited in the Open Science Framework for identification and archiving. The Open Science Framework supports data sharing for all disciplines. As much metadata as possible will be obtained from the paper, the rest we’ll ask the author for.
The progress of this request is tracked through our new “request” pages. On request pages others can support a request and be sent a copy of the data when it’s available. We’ll map requests, and stories will be searchable – both will now be embeddable objects.
Once available, we’ll send data to people who’ve requested it. You can award an Open Data Badge to the author if there’s enough supporting information to reproduce the data’s results.
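To make the request flow described above concrete, here is a minimal Python sketch of a request's lifecycle: push the Button, gather supporters, and record the dataset link once the author responds. The class, field, and function names are illustrative assumptions, not the Open Data Button's actual code or API.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from typing import List, Optional


class RequestStatus(Enum):
    OPEN = "open"            # waiting for the author to respond
    PROMISED = "promised"    # author has said when the data will be shared
    FULFILLED = "fulfilled"  # a dataset link is available
    DECLINED = "declined"    # author has explained why sharing is not possible


@dataclass
class DataRequest:
    """One reader's request for the data behind a paper (illustrative only)."""
    paper_doi: str
    requester_email: str
    status: RequestStatus = RequestStatus.OPEN
    supporters: List[str] = field(default_factory=list)  # others who want a copy
    dataset_url: Optional[str] = None                     # e.g. a repository deposit
    created_at: datetime = field(default_factory=datetime.utcnow)

    def support(self, email: str) -> None:
        """Add a supporter to be notified when the data become available."""
        if email not in self.supporters:
            self.supporters.append(email)

    def fulfil(self, url: str) -> List[str]:
        """Record the shared dataset and return everyone who should be notified."""
        self.status = RequestStatus.FULFILLED
        self.dataset_url = url
        return [self.requester_email, *self.supporters]


# Example: a reader requests data, a colleague supports the request,
# and the author later shares a link to a deposited dataset.
request = DataRequest(paper_doi="10.1234/example.doi", requester_email="reader@example.org")
request.support("colleague@example.org")
to_notify = request.fulfil("https://osf.io/example/")
print(request.status.value, to_notify)
```

In the real tool, requests and their supporters are tracked on the project's request pages and shared files are deposited in the Open Science Framework; the sketch only shows the shape of the data involved.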
At first we’ll only have a Chrome add-on, but support for Firefox will be available from Firefox 46. Support for a bookmarklet will also be provided, but we don’t have a release date yet….(More)”
Fatemeh Ahmadi Zeleti et al in Government Information Quarterly: “Business models for open data have emerged in response to the economic opportunities presented by the increasing availability of open data. However, scholarly efforts providing elaborations, rigorous analysis and comparison of open data models are very limited. This could be partly attributed to the fact that most discussions on Open Data Business Models (ODBMs) are predominantly in the practice community. This shortcoming has resulted in a growing list of ODBMs which, on closer examination, are not clearly delineated and lack clear value orientation. This has made the understanding of value creation and exploitation mechanisms in existing open data businesses difficult and challenging to transfer. Following the Design Science Research (DSR) tradition, we developed a 6-Value (6-V) business model framework as a design artifact to facilitate the explication and detailed analysis of existing ODBMs in practice. Based on the results from the analysis, we identify business model patterns and emerging core value disciplines for open data businesses. Our results not only help streamline existing ODBMs and help in linking them to the overall business strategy, but could also guide governments in developing the required capabilities to support and sustain the business models….(More)”
Larry Peiperl and Peter Hotez at PLOS: “The spreading epidemic of Zika virus, with its putative and alarming associations with Guillain-Barre syndrome and infant microcephaly, has arrived just as several initiatives have come into place to minimize delays in sharing the results of scientific research.
In September 2015, in response to concerns that research publishing practices had delayed access to crucial information in the Ebola crisis, the World Health Organization convened a consultation “[i]n recognition of the need to streamline mechanisms of data dissemination—globally and in as close to real-time as possible” in the context of public health emergencies.
Participating medical journal editors, representing PLOS, BMJ and Nature journals, and NEJM, provided a statement that journals should not act to delay access to data in a public health emergency: “In such scenarios, journals should not penalize, and, indeed, should encourage or mandate public sharing of relevant data…”
In a subsequent Comment in The Lancet, authors from major research funding organizations expressed support for data sharing in public health emergencies. The International Committee of Medical Journal Editors (ICMJE), meeting in November 2015, lent further support to the principles of the WHO consultation by amending ICMJE “Recommendations” to endorse data sharing for public health emergencies of any geographic scope.
Now that WHO has declared Zika to be a Public Health Emergency of International Concern, responses from these groups in recent days appear consistent with their recent declarations.
The ICMJE has announced that “In light of the need to rapidly understand and respond to the global emergency caused by the Zika virus, content in ICMJE journals related to Zika virus is being made free to access. We urge other journals to do the same. Further, as stated in our Recommendations, in the event of a public health emergency (as defined by public health officials), information with immediate implications for public health should be disseminated without concern that this will preclude subsequent consideration for publication in a journal.” (www.icmje.org, accessed 9 February 2016)
WHO has implemented special provisions for research manuscripts relevant to the Zika epidemic that are submitted to WHO Bulletin; such papers “will be assigned a digital object identifier and posted online in the “Zika Open” collection within 24 hours while undergoing peer review. The data in these papers will thus be attributed to the authors while being freely available for reader scrutiny and unrestricted use” under a Creative Commons Attribution License (CC BY IGO 3.0).
At PLOS, where open access and data sharing apply as a matter of course, all PLOS journals aim to expedite peer review evaluation, pre-publication posting, and data sharing from research relevant to the Zika outbreak. PLOS Currents Outbreaks offers an online platform for rapid publication of preliminary results, PLOS Neglected Tropical Diseases has committed to provide priority handling of Zika reports in general, and other PLOS journals will prioritize submissions within their respective scopes. The PLOS Zika Collection page provides central access to relevant and continually updated content from across the PLOS journals, blogs, and collaborating organizations.
Today, the Wellcome Trust has issued a statement urging journals to commit to “make all content concerning the Zika virus free to access,” and funders to “require researchers undertaking work relevant to public health emergencies to set in place mechanisms to share quality-assured interim and final data as rapidly and widely as possible, including with public health and research communities and the World Health Organisation.” Among 31 initial signatories are such journals and publishers as PLOS, Springer Nature, Science journals, The JAMA Network, eLife, the Lancet, and New England Journal of Medicine; and funding organizations including Bill and Melinda Gates Foundation, UK Medical Research Council, US National Institutes of Health, Wellcome Trust, and other major national and international research funders.
This policy shift prompts reconsideration of how we publish urgently needed data during a public health emergency….(More)”
Katy Davis at the Conversation: “Typical approaches to solving problematic finances are either to “educate” people about the need to save more or to “incentivize” savings with monetary rewards.
But when we look at traditional financial education and counseling programs, they have had virtually no long-term impact on behavior. Similarly, matched savings programs are expensive and have shown mixed results on savings rates. Furthermore, these approaches often prioritize the need for savings while treating debt repayment as a secondary concern.
Education and incentives haven’t worked because they are based on problematic assumptions about lower-income consumers that turn out to be false….
The good news is that a range of simple, behaviorally informed solutions can easily be deployed to tackle these problems, from policy innovations to product redesign.
For instance, changing the “suggested payoff” in credit card statements for targeted segments (i.e., those who were already paying in full) could help consumers more effectively pay down debt, as could allowing tax refunds to be directly applied toward debt repayment. Well-designed budgeting tools that leverage financial technology could be integrated into government programs. The state of California, for example, is currently exploring ways to implement such technologies across a variety of platforms.
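A small worked example shows why the payoff amount suggested on a statement matters. The Python sketch below uses hypothetical numbers, not figures from the research, to simulate how long a fixed monthly payment takes to retire a revolving balance and how much interest accrues along the way.

```python
def months_to_payoff(balance: float, apr: float, payment: float) -> tuple:
    """Simulate fixed monthly payments on a revolving balance.

    Returns (months, total_interest). Illustrative only: assumes no new
    charges and a constant APR compounded monthly.
    """
    monthly_rate = apr / 12
    months, total_interest = 0, 0.0
    while balance > 0:
        interest = balance * monthly_rate
        if payment <= interest:
            raise ValueError("This payment never retires the balance.")
        total_interest += interest
        balance = balance + interest - payment
        months += 1
    return months, round(total_interest, 2)


# Hypothetical $3,000 balance at 18% APR: anchoring on a low $60 payment
# takes roughly eight years and costs about $2,600 in interest, while a
# $150 suggested payoff clears the balance in about two years for roughly $590.
print(months_to_payoff(3000, 0.18, 60))
print(months_to_payoff(3000, 0.18, 150))
```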
But the public and private sectors both need to play a role for these tools to be effective. Creating an integrated credit-and-saving product, for example, would require buy-in from regulators along with financial providers.
While these banking solutions may not close the economic inequality gap on their own, behaviorally informed design shifts can be the missing piece of the puzzle in these efforts to fix major problems.
Our research indicates that people already want to be doing a better job with their finances; we just need to make it a little less difficult for them….(More)”
Linda Poon at CityLab: “It’s not only your friends and family who follow your online selfies and group photos. Scientists are starting to look at them, too, though they’re more interested in what’s around you. In bulk, photos can reveal weather patterns across multiple locations, air quality of a place over time, the dynamics of a neighborhood—all sorts of information that helps researchers study cities.
At the Nanyang Technological University in Singapore, a research group is using crowdsourced photos to create a low-cost alternative to air-pollution sensors. Called AirTick, the smartphone app they’ve designed will collect photos from users and analyze how hazy the environment looks. It’ll then check each image against official air quality data, and through machine-learning the app will eventually be able to predict pollution levels based on an image alone.
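The article does not spell out AirTick's algorithm, but the general idea can be sketched in a few lines of Python: derive a crude haze score from each photo, calibrate those scores against official air-quality readings for the same time and place, and then predict a reading from a new photo alone. Every function, constant, and number below is an illustrative assumption, not the app's actual method.

```python
import numpy as np


def haze_score(pixels: np.ndarray) -> float:
    """Crude haze proxy: low contrast in a grayscale image suggests haze.

    `pixels` is an HxW array of grayscale values in [0, 255]; the score is
    1 minus the normalised standard deviation, so hazier images score higher.
    """
    return 1.0 - pixels.std() / 128.0


# Hypothetical training data: haze scores from crowdsourced photos paired
# with the official pollutant index reported for the same place and hour.
scores = np.array([0.20, 0.35, 0.50, 0.65, 0.80])
official_index = np.array([40.0, 75.0, 110.0, 160.0, 210.0])

# Fit a simple linear calibration as a stand-in for the app's learning step.
slope, intercept = np.polyfit(scores, official_index, deg=1)


def predict_index(photo: np.ndarray) -> float:
    """Predict an air-quality index from a photo once the model is calibrated."""
    return slope * haze_score(photo) + intercept


# A synthetic "hazy" photo: low-contrast grayscale noise around mid-gray.
rng = np.random.default_rng(0)
hazy_photo = rng.normal(loc=128, scale=20, size=(100, 100)).clip(0, 255)
print(round(predict_index(hazy_photo), 1))
```

A production system would use far richer image features and a proper learning model, but calibrating image-derived scores against official readings is the core of the crowdsourcing idea described above.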
AirTick creator Pan Zhengziang said in a promotional video last month that the growing concern among the public over air quality can make programs like this a success—especially in Southeast Asia, where smog has gotten so bad that governments have had to shut down schools and suspend outdoor activities. “In Singapore’s recent haze episode, around 250,000 people [have] shared their concerns via Twitter,” he said. “This has made crowdsourcing-based air quality monitoring a possibility.”…(More)”
Ari Beser at National Geographic: “It appears the world-changing event didn’t change anything, and it’s disappointing,” said Pieter Franken, a researcher at Keio University in Japan (Wide Project), the MIT Media Lab (Civic Media Centre), and co-founder of Safecast, a citizen-science network dedicated to the measurement and distribution of accurate levels of radiation around the world, especially in Fukushima. “There was a chance after the disaster for humanity to innovate our thinking about energy, and that doesn’t seem like it’s happened. But what we can change is the way we measure the environment around us.”
Franken and his founding partners found a way to turn their email chain, spurred by the tsunami, into Safecast, an open-source network that allows everyday people to contribute to radiation monitoring.
“We literally started the day after the earthquake happened,” revealed Pieter. “A friend of mine, Joi Ito, the director of MIT Media Lab, and I were basically talking about what Geiger counter to get. He was in Boston at the time and I was here in Tokyo, and like the rest of the world, we were worried, but we couldn’t get our hands on anything. There’s something happening here, we thought. Very quickly as the disaster developed, we wondered how to get the information out. People were looking for information, so we saw that there was a need. Our plan became: get information, put it together and disseminate it.”
An e-mail thread between Franken, Ito, and Sean Bonner (co-founder of CRASH Space, a group that bills itself as Los Angeles’ first hackerspace) evolved into a network of minds, including members of Tokyo Hackerspace, Dan Sythe, who produced high-quality Geiger counters, and Ray Ozzie, Microsoft’s former Chief Technical Officer. On April 15, the group that was to become Safecast sat down together for the first time. Ozzie conceived the plan to strap a Geiger counter to a car and somehow log measurements in motion. This would become the bGeigie, Safecast’s future model of the do-it-yourself Geiger counter kit.
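A rough sense of what logging measurements in motion involves can be sketched in Python: each record pairs a timestamp and GPS fix with a counts-per-minute value, which can then be converted to an approximate dose rate. The record layout and the conversion constant below are illustrative assumptions, not Safecast's actual bGeigie log format or calibration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative conversion factor only: the real value depends on the
# specific Geiger tube and its calibration.
CPM_PER_USV_H = 334.0


@dataclass
class Reading:
    """One mobile measurement: when, where, and counts per minute."""
    taken_at: datetime
    latitude: float
    longitude: float
    cpm: float

    @property
    def usv_per_hour(self) -> float:
        """Rough dose-rate estimate derived from counts per minute."""
        return self.cpm / CPM_PER_USV_H

    def to_log_line(self) -> str:
        """Serialise the reading as a simple CSV line for an in-car log file."""
        return (f"{self.taken_at.isoformat()},"
                f"{self.latitude:.5f},{self.longitude:.5f},{self.cpm:.0f}")


# Example: a single reading taken from a moving car (coordinates are made up).
reading = Reading(datetime(2011, 4, 24, 10, 30, tzinfo=timezone.utc), 37.40045, 140.35972, 105)
print(reading.to_log_line(), f"~{reading.usv_per_hour:.2f} µSv/h")
```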
Armed with a few Geiger counters donated by Sythe, the newly formed team retrofitted their radiation-measuring devices to the outside of a car. Safecast’s first volunteers drove up to the city of Koriyama in Fukushima Prefecture, and took their own readings around all of the schools. Franken explained, “If we measured all of the schools, we covered all the communities; because communities surround schools. It was very granular, the readings changed a lot, and the levels were far from academic, but it was our start. This was April 24, 6 weeks after the disaster. Our thinking changed quite a bit through this process.”
Since their first tour of Koriyama, with the help of a successful Kickstarter campaign, Safecast’s team of volunteers have developed the bGeigie handheld radiation monitor, which anyone can buy on Amazon.com and construct with suggested instructions available online. So far over 350 users have contributed 41 million readings, using around a thousand fixed, mobile, and crowd-sourced devices….(More)
Paper by Andrew Schrock: “We are awash in predictions about our data-driven future. Enthusiasts believe it will offer new ways to research behavior. Critics worry it will enable powerful regimes of institutional control. Both visions, although polar opposites, tend to downplay the importance of communication. As a result, the role of communication in human-centered data science has rarely been considered. This article fills this gap by outlining three perspectives on data that foreground communication. First, I briefly review the common social scientific perspective: “communication as data.” Next, I elaborate on two less explored perspectives. A “data as communication” perspective captures how data imperfectly carry meanings and guide action. “Communication around data” describes communication in organizational and institutional data cultures. I conclude that communication offers nuanced perspectives to inform human-centered data science. Researchers should embrace a robust agenda, particularly when researching the relationship between data and power…(More)”