Stefaan Verhulst
The White House: “Today, the White House is releasing the Privacy and Trust Principles for the President’s Precision Medicine Initiative (PMI). These principles are a foundation for protecting participant privacy and building trust in activities within PMI.
PMI is a bold new research effort to transform how we characterize health and treat disease. PMI will pioneer a new model of patient-powered research that promises to accelerate biomedical discoveries and provide clinicians with new tools, knowledge, and therapies to select which treatments will work best for which patients. The initiative includes development of a new voluntary research cohort by the National Institutes of Health (NIH), a novel regulatory approach to genomic technologies by the Food and Drug Administration, and new cancer clinical trials by the National Cancer Institute at NIH. In addition, PMI includes aligned efforts by the Federal government and private sector collaborators to pioneer a new approach for health research and healthcare delivery that prioritizes patient empowerment through access to information and policies that enable safe, effective, and innovative technologies to be tested and made available to the public.
Following President Obama’s launch of PMI in January 2015, the White House Office of Science and Technology Policy worked with an interagency group to develop the Privacy and Trust Principles that will guide the Precision Medicine effort. The White House convened experts from within and outside of government over the course of many months to discuss their individual viewpoints on the unique privacy challenges associated with large-scale health data collection, analysis, and sharing. This group reviewed the bioethics literature, analyzed privacy policies for large biobanks and research cohorts, and released a draft set of Principles for public comment in July 2015…..
The Privacy and Trust Principles are organized into 6 broad categories:
- Governance that is inclusive, collaborative, and adaptable;
- Transparency to participants and the public;
- Respecting participant preferences;
- Empowering participants through access to information;
- Ensuring appropriate data sharing, access, and use;
- Maintaining data quality and integrity….(More)”
List of winners at Govtech:
1st place // City of Philadelphia, Pa.
A savvy mix of data-driven citizen engagement, tech modernization and outside-the-box thinking powered Philadelphia to its first-place ranking. A new city website launched last year is designed to provide new levels of user convenience. For instance, three navigation options are squeezed into the top of the site — a search bar, a list of common actions like “report a problem” or “pay a bill,” and a menu of city functions arranged topically — giving citizens multiple ways to find what they need. The site was created using agile principles, launching as a work in progress in December and shaped by user feedback. The city also is broadening its use of open data as a citizen-engagement tool. A new generation of civic apps relies on open data sets to give residents easy access to property tax calculations, property ownership information and detailed maps of various city resources. These improvements in customer-facing services have been facilitated by upgrades to back-end systems that are improving reliability and reducing staff support requirements. The city estimates that half of its IT systems now are procured as a service. Finally, an interesting pilot involving the city IT department and a local middle school is aimed at drawing more kids into STEM-related careers. Students met weekly in the city Innovation Lab for a series of hands-on experiences led by members of the Philadelphia Office of Information Technology.
2nd place // City of Los Angeles, Calif.
Second-ranked Los Angeles is developing a new model for funding innovative ideas, leveraging private-sector platforms to improve services, streamlining internal processes and closing the broadband gap. The city established a $1 million innovation fund late last year to seed pilot projects generated by city employees’ suggestions. More than a dozen projects have been launched so far. Through open APIs, the city trades traffic information with Google’s Waze traffic app. The app consumes city traffic data to warn drivers about closed roads, hazards and dangerous intersections, while the city transportation department uses information submitted by Waze users to identify potholes, illegal road construction and traffic patterns. MyPayLA, launched by the LA Controller’s Office and the city Information Technology Agency, is a mobile app that lets city employees view their payroll information on a mobile device. And the CityLinkLA broadband initiative is designed to attract broadband providers to the city with expedited permitting and access to existing assets like streetlights, real estate and fiber.
2nd place // City of Louisville, Ky.
Louisville’s mobile-friendly Web portal garnered the city a second-place finish in the Center for Digital Government’s Best of the Web awards earlier this year. Now, Louisville has a No. 2 ranking in the 2015 Digital Cities Survey to add to its trophy case. Besides running an excellent website — built on the open source Drupal platform and hosted in the cloud — Louisville is equipping its entire police force with body-worn cameras and expects to be finished by the end of 2015. Video from 1,000 officers, as well as footage from Metro Watch cameras placed around the city, will be stored in the cloud. Louisville’s Metro Police Department, one of 21 cities involved in the White House Police Data Initiative, also became one of the first in the nation to release data sets on assaulted officers, arrests and citations, and hate crimes on the city’s open data portal. In addition, a public-private partnership called Code Louisville offers free technology training to local residents. More than 500 people have taken 12-week classes to learn Web or mobile development skills.
3rd place // City of Kansas City, Mo.
Kansas City’s Art of Data initiative may be one of the nation’s most creative attempts to engage citizens through open data. The city selected 10 local artists earlier this year to turn information from its open data site into visual art. The artists pulled information from 10 different data sets, ranging from life expectancy by ZIP code to citizen satisfaction with the safety of their neighborhoods. The exhibit drew a large crowd when it opened in June, according to the city, and more than 3,000 residents eventually viewed the works of art. Kansas City also was chosen to participate in a new HUD digital inclusion program called ConnectHome, which will offer broadband access, training, digital literacy programs and devices for residents in assisted housing units. And the city is working with a local startup business, RFP365, to simplify its RFP process. Through a pilot partnership, Kansas City will use the RFP365 platform — which lets buyers track and receive bids from vendors and suppliers — to make the government purchasing process easier and more transparent.
3rd place // City of Phoenix, Ariz.
The development of a new citywide transportation plan in Phoenix offers a great example of how to use digital engagement tools. Using the MindMixer platform, the city developed a website to let citizens suggest ideas for new transit services and street infrastructure, as well as discuss a range of transportation-related issues. Using polling, mapping, open-ended questions and discussion prompts, residents directly helped to develop the plan. The engagement process reached more than 3,700 residents and generated hundreds of comments online. In addition, a city-led technology summit held late last year brought together big companies, small businesses and citizens to discuss how technology could improve city operations and boost economic development. And new court technology lets attorneys receive hearing notifications on a mobile device and enables Web and interactive voice response (IVR) payments for a variety of cases.
…(More)”
Paper by Jonathan Stoneman at Reuters Institute for Journalism: “The Open Data movement really came into being when President Obama issued his first policy paper, on his first day in office in January 2009. The US government opened up thousands of datasets to scrutiny by the public, by journalists, by policy-makers. Coders and developers were also invited to make the data useful to people and businesses in all manner of ways. Other governments across the globe followed suit, opening up data to their populations.
Opening data in this way has not resulted in genuine openness, save in a few isolated cases. In the USA and a few European countries, developers have created apps and websites which draw on Open Data, but these are not reaching a mass audience.
At the same time, journalists are not seen by government as the end users of these data. Data releases, even in the best cases, are uneven and slow, and do not meet the needs of journalists. Although thousands of journalists have been learning and adopting the new skills of data journalism, they have tended to work with data obtained through Freedom of Information (FOI) legislation.
Stories which have resulted from data journalists’ efforts have rarely been front-page news; in many cases data-driven stories have ended up as lesser stories on inside pages, or as infographics, which relatively few people look at.
In this context, therefore, Open Data remains outside the mainstream of journalism, and out of the consciousness of the electorate, begging the question, “what are Open Data for?”, or as one developer put it – “if Open Data is the answer, what was the question?” Openness is seen as a badge of honour – scores of national governments have signed pledges to make data open, often repeating the same kind of idealistic official language as the previous announcement of a conversion to openness. But these acts are “top down”, and soon run out of momentum, becoming simply openness for its own sake. Looking at specific examples, the United States is the nearest to a success story: there is a rich ecosystem – made up of government departments, interest groups and NGOs, the media, civil society – which allows data driven projects the space to grow and the airtime to make an impact. (It probably helped that the media in the US were facing an existential challenge urgent enough to force them to embrace new, inexpensive, ways of carrying out investigative reporting).
Elsewhere data are making less impact on journalism. In the UK the new openness is being exploited by a small minority. Where data are published on the data.gov.uk website, they are frequently out of date, incomplete, or of limited news value; so where data do drive stories, these tend to be data released under FOI legislation, and the resulting stories take the form of statistics and/or infographics.
In developing countries where Open Data Portals have been launched with a fanfare – such as Kenya, and more recently Burkina Faso – there has been little uptake by coders, journalists, or citizens, and the number of fresh datasets being published soon drops to a trickle. Small, apparently randomly selected datasets quickly go out of date and inertia sets in.
The British Conservative Party, pledging greater openness in its 2010 manifesto, foresaw armies of “Armchair Auditors” who would comb through the data and present the government with ideas for greater efficiency in the use of public funds. Almost needless to say, these armies have never materialised: in countries like Britain, large amounts of data are being published but go (probably) unread and unscrutinised by anybody. At the same time, the journalists who want to make use of data are getting what they need through FOI, or even by gathering data themselves. Open Data is thus being bypassed, and could become an irrelevance. Yet the media could be vital agents in the quest for the release of meaningful, relevant, timely data.
Governments seem in no hurry to expand the “comfort zone” from which they release the data which shows their policies at their most effective, and keeping to themselves data which paints a gloomier picture. Journalists seem likely to remain in their comfort zone, where they make use of FOI and traditional sources of information. For their part, journalists should push for better data and use it more, working in collaboration with open data activists. They need to change the habits of a lifetime and discuss their sources: revealing the source and quality of data used in a story would in itself be as much a part of the advocacy as of the actual reporting.
If Open Data are to be part of a new system of democratic accountability, they need to be more than a gesture of openness. Nor should Open Data remain largely the preserve of companies using them for commercial purposes. Governments should improve the quality and relevance of published data, making them genuinely useful for journalists and citizens alike….(More)”
A white paper by Taylor & Francis: “Within the academic community, peer review is widely recognized as being at the heart of scholarly research. However, faith in peer review’s integrity is of ongoing and increasing concern to many. It is imperative that publishers (and academic editors) of peer-reviewed scholarly research learn from each other, working together to improve practices in areas such as ethical issues, training, and data transparency….Key findings:
- Authors, editors and reviewers all agreed that the most important motivation to publish in peer reviewed journals is making a contribution to the field and sharing research with others.
- Playing a part in the academic process and improving papers are the most important motivations for reviewers. Similarly, 90% of SAS study respondents said that playing a role in the academic community was a motivation to review.
- Most researchers, across the humanities and social sciences (HSS) and science, technology and medicine (STM), rate the benefit of the peer review process towards improving their article as 8 or above out of 10. This was found to be the most important aspect of peer review in both the ideal and the real world, echoing the earlier large-scale peer review studies.
- In an ideal world, there is agreement that peer review should detect plagiarism (with mean ratings of 7.1 for HSS and 7.5 for STM out of 10), but agreement that peer review is currently achieving this in the real world is only 5.7 HSS / 6.3 STM out of 10.
- Researchers thought there was a low prevalence of gender bias but higher prevalence of regional and seniority bias – and suggest that double blind peer review is most capable of preventing reviewer discrimination where it is based on an author’s identity.
- Most researchers wait between one and six months for an article they’ve written to undergo peer review, yet authors (not reviewers / editors) think up to two months is reasonable.
- HSS authors say they are kept less well informed than STM authors about the progress of their article through peer review….(More)”
Book edited by Philip Alston and Sarah Knuckey: “Fact-finding is at the heart of human rights advocacy, and is often at the center of international controversies about alleged government abuses. In recent years, human rights fact-finding has greatly proliferated and become more sophisticated and complex, while also being subjected to stronger scrutiny from governments. Nevertheless, despite the prominence of fact-finding, it remains strikingly under-studied and under-theorized. Too little has been done to bring forth the assumptions, methodologies, and techniques of this rapidly developing field, or to open human rights fact-finding to critical and constructive scrutiny.
The Transformation of Human Rights Fact-Finding offers a multidisciplinary approach to the study of fact-finding with rigorous and critical analysis of the field of practice, while providing a range of accounts of what actually happens. It deepens the study and practice of human rights investigations, and fosters fact-finding as a discretely studied topic, while mapping crucial transformations in the field. The contributions to this book are the result of a major international conference organized by New York University Law School’s Center for Human Rights and Global Justice. Engaging the expertise and experience of the editors and contributing authors, it offers a broad approach encompassing contemporary issues and analysis across the human rights spectrum in law, international relations, and critical theory. This book addresses the major areas of human rights fact-finding such as victim and witness issues; fact-finding for advocacy, enforcement, and litigation; the role of interdisciplinary expertise and methodologies; crowd sourcing, social media, and big data; and international guidelines for fact-finding….(More)”
Book edited by Sherali Zeadally and Mohamad Badra: “This comprehensive textbook/reference presents a focused review of the state of the art in privacy research, encompassing a range of diverse topics. The first book of its kind designed specifically to cater to courses on privacy, this authoritative volume provides technical, legal, and ethical perspectives on privacy issues from a global selection of renowned experts. Features: examines privacy issues relating to databases, P2P networks, big data technologies, social networks, and digital information networks; describes the challenges of addressing privacy concerns in various areas; reviews topics of privacy in electronic health systems, smart grid technology, vehicular ad-hoc networks, mobile devices, location-based systems, and crowdsourcing platforms; investigates approaches for protecting privacy in cloud applications; discusses the regulation of personal information disclosure and the privacy of individuals; presents the tools and the evidence to better understand consumers’ privacy behaviors….(More)”
Marketplace: “Imagine you’re a refugee leaving home for good. You’ll need help. But what you ask for today is much different than it would have been just 10 years ago.
“What people are demanding, more and more, is not classic food, shelter, water, healthcare, but they demand wifi,” said Melita Šunjić, a spokesperson for the United Nations High Commissioner for Refugees.
Šunjić began her work with Syrian refugees in camps in Amman, Jordan. Many were from rural areas with basic cell phones.
“The refugees we’re looking at now, who are coming to Europe – this is a completely different story,” Šunjić said. “They are middle class, urban people. Practically each family has at least one smart phone. We calculated that in each group of 20, they would have three smart phones.”
Refugees use their phones to call home and to map their routes. Even smugglers have their own Facebook pages.
“I don’t remember a crisis or refugee group where modern technology played such a role,” Šunjić said.
As refugees from Syria continue to flow into Europe, aid organizations are gearing up for what promises to be a difficult winter.
Emily Eros, a GIS mapping officer with the American Red Cross, said her organization is working on the basics like providing food, water and shelter, but it’s also helping refugees stay connected. “It’s a little bit difficult because it’s not just a matter of getting a wifi station up, it’s also a matter of having someone there who’s able to fix it if something goes wrong,” she said. …(More)”
Paper by Bendik Bygstad and Francis D’Silva: “A national administration is dependent on its archives and registers, for many purposes, such as tax collection, enforcement of law, economic governance, and welfare services. Today, these services are based on large digital infrastructures, which grow organically in volume and scope. Building on a critical realist approach we investigate a particularly successful infrastructure in Norway called Altinn, and ask: what are the evolutionary mechanisms for a successful “government as a platform”? We frame our study with two perspectives; a historical institutional perspective that traces the roots of Altinn back to the Middle Ages, and an architectural perspective that allows for a more detailed analysis of the consequences of digitalization and the role of platforms. We offer two insights from our study: we identify three evolutionary mechanisms of national registers, and we discuss a future scenario of government platforms as “digital commons”…(More)”
Emiko Jozuka at Motherboard: “Researchers in Britain want to make the first “self-repairing” city by 2035. How will they do this? By creating autonomous repair robots that patrol the streets and drainage systems, making sure your car doesn’t dip into a pothole, and that you don’t experience any gas leaks.
“The idea is to create a city that behaves almost like a living organism,” said Raul Fuentes, a researcher at the School of Civil Engineering at Leeds University, who is working on the project. “The robots will act like white cells that are able to identify bacteria or viruses and attack them. It’s kind of like an immune system.”
The £4.2 million ($6.4 million) national infrastructure project is a collaboration with Leeds City Council and the UK Collaboration for Research in Infrastructures and Cities (UKCRIC). The aim is to create, by 2035, a fleet of robot repair workers that will live in Leeds, spot problems, and sort them out before they become bigger ones, said Fuentes. The project is set to launch officially in January 2016, he added.
For their five-year project—which has a vision that extends until 2050—the researchers will develop robot designs and technologies that focus on three main areas. The first is to create drones that can perch on high structures and repair things like street lamps; the second is to develop drones that can autonomously spot when a pothole is about to form and zone in and patch that up before it worsens; and the third is to develop robots that will live in utility pipes so they can inspect, repair, and report back to humans when they spot an issue.
“The robots will be living permanently in the city, and they’ll be able to identify issues before they become real problems,” explained Fuentes. The researchers are working on making the robots autonomous, and want them to be living in swarms or packs where they can communicate with one another on how best they could get the repair job done….(More)
Beth Mole at ArsTechnica: “With big data comes big noise. Google learned this lesson the hard way with its now kaput Google Flu Trends. The online tracker, which used Internet search data to predict real-life flu outbreaks, emerged amid fanfare in 2008. Then it met a quiet death this August after repeatedly coughing up bad estimates.
But big Internet data isn’t out of the disease tracking scene yet.
With hubris firmly in check, a team of Harvard researchers has come up with a way to tame the unruly data, combine it with other data sets, and continually calibrate it to track flu outbreaks with less error. Their new model, published Monday in the Proceedings of the National Academy of Sciences, outperforms Google Flu Trends and other models with at least double the accuracy. If the model holds up in coming flu seasons, it could reinstate some optimism in using big data to monitor disease and herald a wave of more accurate second-generation models.
Big data has a lot of potential, Samuel Kou, a statistics professor at Harvard University and coauthor on the new study, told Ars. It’s just a question of using the right analytics, he said.
Kou and his colleagues built on Google’s flu tracking model for their new version, called ARGO (AutoRegression with GOogle search data). Google Flu Trends basically relied on trends in Internet search terms, such as headache and chills, to estimate the number of flu cases. Those search terms were correlated with flu outbreak data collected by the Centers for Disease Control and Prevention. The CDC’s data relies on clinical reports from around the country. But compiling and analyzing that data can be slow, leading to a lag time of one to three weeks. The Google data, on the other hand, offered near real-time tracking for health experts to manage and prepare for outbreaks.
At first Google’s tracker appeared to be pretty good, matching the CDC’s late-breaking data somewhat closely. But two notable stumbles led to its ultimate downfall: an underestimate of the 2009 H1N1 swine flu outbreak and an alarming overestimate (almost double the real numbers) of the 2012-2013 flu season’s cases… For ARGO, he and colleagues took the trend data and then designed a model that could self-correct for changes in how people search. The model has a two-year sliding window in which it re-calibrates current search term trends against the CDC’s historical flu data (the gold standard for flu data). They also made sure to exclude winter search terms, such as March Madness and the Oscars, so these didn’t get accidentally correlated with seasonal flu trends. Lastly, they incorporated data on the historical seasonality of flu.
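The sliding-window recalibration described above can be sketched as a regression that is refit each week on only the most recent two years of data, so the coefficients adapt as search behavior drifts. A minimal illustration follows; the function names, the use of two lagged terms, and the plain least-squares fit are assumptions made for this sketch, not the published ARGO specification:

```python
import numpy as np

def fit_argo_window(cdc_history, search_trends, n_lags=2, window=104):
    """Fit a linear model: this week's flu rate ~ lagged CDC rates +
    current search-term frequencies, using only the most recent
    `window` weeks (104 weeks = the two-year sliding window).
    Returns fitted coefficients, intercept first."""
    y, X = [], []
    start = max(n_lags, len(cdc_history) - window)
    for t in range(start, len(cdc_history)):
        lags = cdc_history[t - n_lags:t]          # autoregressive terms
        exog = search_trends[t]                   # same-week query frequencies
        X.append([1.0, *lags, *exog])
        y.append(cdc_history[t])
    X, y = np.asarray(X), np.asarray(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares
    return beta

def predict_next(beta, recent_cdc, current_search, n_lags=2):
    """One-week-ahead estimate from the fitted coefficients."""
    x = np.array([1.0, *recent_cdc[-n_lags:], *current_search])
    return float(x @ beta)
```

Refitting `fit_argo_window` every week is what lets the model self-correct: a search term whose relationship to flu activity changes gradually loses (or gains) weight as old weeks fall out of the window.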
The result was a model that significantly out-competed the Google Flu Trends estimates for the period from March 29, 2009, to July 11, 2015. ARGO also beat out other models, including one based on current and historical CDC data….(More)”
See also Proceedings of the National Academy of Sciences, 2015. DOI: 10.1073/pnas.1515373112