Handbook by Douwe Korff and Marie Georges: “This Handbook was prepared for and is used in the EU-funded “T4DATA” training-of-trainers programme. Part I explains the history and development of European data protection law and provides an overview of European data protection instruments, including the Council of Europe Convention and its “Modernisation” and the various EU data protection instruments relating to Justice and Home Affairs, the CFSP and the EU institutions, before focusing on the GDPR in Part II. The final part (Part III) consists of detailed practical advice on the various tasks of the Data Protection Officer now institutionalised by the GDPR. Although produced for the T4DATA programme, which focuses on DPOs in the public sector, it is hoped that the Handbook will also be useful to anyone else interested in the application of the GDPR, including DPOs in the private sector….(More)”.
Bringing machine learning to the masses
Matthew Hutson at Science: “Artificial intelligence (AI) used to be the specialized domain of data scientists and computer programmers. But companies such as Wolfram Research, which makes Mathematica, are trying to democratize the field, so scientists without AI skills can harness the technology for recognizing patterns in big data. In some cases, they don’t need to code at all. Insights are just a drag-and-drop away. One of the latest systems is software called Ludwig, first made open-source by Uber in February and updated last week. Uber used Ludwig for projects such as predicting food delivery times before releasing it publicly. At least a dozen startups are using it, plus big companies such as Apple, IBM, and Nvidia. And scientists: Tobias Boothe, a biologist at the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden, Germany, uses it to visually distinguish thousands of species of flatworms, a difficult task even for experts. To train Ludwig, he just uploads images and labels….(More)”.
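For readers curious what “just uploading images and labels” looks like in practice: a Ludwig model is described declaratively rather than programmed. A minimal, illustrative configuration for an image-classification task like Boothe’s flatworm work might look as follows (the column names `image_path` and `species` are assumptions for this sketch, not details from his project):

```yaml
# model.yaml — minimal illustrative Ludwig configuration
input_features:
  - name: image_path   # CSV column holding paths to the training images
    type: image
output_features:
  - name: species      # label column Ludwig learns to predict
    type: category
```

Training then amounts to pointing the `ludwig train` command (or Ludwig’s Python API) at this file and a labelled dataset; Ludwig assembles and trains a suitable deep-learning model without further coding.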
Exploring Digital Ecosystems: Organizational and Human Challenges
Proceedings edited by Alessandra Lazazzara, Francesca Ricciardi and Stefano Za: “The recent surge of interest in digital ecosystems is not only transforming the business landscape but also posing several human and organizational challenges. Due to the pervasive effects of the transformation on firms and societies alike, both scholars and practitioners are interested in understanding the key mechanisms behind digital ecosystems, their emergence and evolution. In order to disentangle such factors, this book presents a collection of research papers focusing on the relationship between technologies (e.g. digital platforms, AI, infrastructure) and behaviours (e.g. digital learning, knowledge sharing, decision-making). Moreover, it provides critical insights into how digital ecosystems can shape value creation and benefit various stakeholders. The plurality of perspectives offered makes the book particularly relevant for users, companies, scientists and governments. The content is based on a selection of the best papers – original double-blind peer-reviewed contributions – presented at the annual conference of the Italian chapter of the AIS, which took place in Pavia, Italy in October 2018….(More)”.
Political innovation, digitalisation and public participation in party politics
Paper by Lisa Schmidthuber, Dennis Hilgers and Maximilian Rapp: “Citizen engagement is seen as a way to address a range of societal challenges, fiscal constraints and wicked problems, and increasing public participation in political decisions could help to address low levels of trust in politicians and decreasing satisfaction with political parties. This paper examines the perceived impacts of an experiment by the Austrian People’s Party which, in response to reaching a historic low in the polls, opened up its manifesto process to public participation via digital technology. Analysis of survey data from participants found that self-efficacy is positively associated with participation intensity but negatively related to satisfaction. In contrast, collective efficacy is related to positive perceptions of public participation in party politics but does not influence levels of individual participation. Future research is needed to explore the effects of such digitally enabled political innovations on voting behaviour, party membership and attitudes to representative democracy….(More)”.
Trust in Contemporary Society
Book edited by Masamichi Sasaki: “… deals with conceptual, theoretical and social interaction analyses, historical data on societies, national surveys or cross-national comparative studies, and methodological issues related to trust. The authors are from a variety of disciplines: psychology, sociology, political science, organizational studies, history, and philosophy, and from Britain, the United States, the Czech Republic, the Netherlands, Australia, Germany, and Japan. They bring their vast knowledge from different historical and cultural backgrounds to illuminate contemporary issues of trust and distrust. The socio-cultural perspective of trust is important and increasingly acknowledged as central to trust research. Accordingly, future directions for comparative trust research are also discussed….(More)”.
Understanding our Political Nature: How to put knowledge and reason at the heart of political decision-making
EU report by Rene Van Bavel et al: “Recognising that advances in behavioural, decision and social sciences demonstrate that we are not purely rational beings, this report brings new insights into our political behaviour, an understanding that has the potential to address some of the current crises in our democracies. Sixty experts from across the globe, working in the fields of behavioural and social sciences as well as the humanities, have contributed to the research underpinning this JRC report, which argues that evidence-informed policymaking should not be taken for granted. Each key finding has a dedicated chapter outlining the latest scientific thinking as well as an overview of the possible implications for policymaking. The key findings are:
- Misperception and Disinformation: Our thinking skills are challenged by today’s information environment and make us vulnerable to disinformation. We need to think more about how we think.
- Collective Intelligence: Science can help us re-design the way policymakers work together to take better decisions and prevent policy mistakes.
- Emotions: We can’t separate emotion from reason. Better information about citizens’ emotions and greater emotional literacy could improve policymaking.
- Values and Identities drive political behaviour but are not properly understood or debated.
- Framing, Metaphor and Narrative: Facts don’t speak for themselves. Framing, metaphors and narratives need to be used responsibly if evidence is to be heard and understood.
- Trust and Openness: The erosion of trust in experts and in government can only be addressed by greater honesty and public deliberation about interests and values.
- Evidence-informed policymaking: The principle that policy should be informed by evidence is under attack. Politicians, scientists and civil society need to defend this cornerstone of liberal democracy….(More)”
“Anonymous” Data Won’t Protect Your Identity
Sophie Bushwick at Scientific American: “The world produces roughly 2.5 quintillion bytes of digital data per day, adding to a sea of information that includes intimate details about many individuals’ health and habits. To protect privacy, data brokers must anonymize such records before sharing them with researchers and marketers. But a new study finds it is relatively easy to reidentify a person from a supposedly anonymized data set—even when that set is incomplete.
Massive data repositories can reveal trends that teach medical researchers about disease, demonstrate issues such as the effects of income inequality, coach artificial intelligence into humanlike behavior and, of course, aim advertising more efficiently. To shield people who—wittingly or not—contribute personal information to these digital storehouses, most brokers send their data through a process of deidentification. This procedure involves removing obvious markers, including names and social security numbers, and sometimes taking other precautions, such as introducing random “noise” data to the collection or replacing specific details with general ones (for example, swapping a birth date of “March 7, 1990” for “January–April 1990”). The brokers then release or sell a portion of this information.
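The de-identification steps described above can be sketched in a few lines of Python. This is a toy illustration, not any broker's actual procedure: the record fields, the four-month date window (matching the article's "March 7, 1990" → "January–April 1990" example) and the noise scale are all assumptions.

```python
import random

# Hypothetical record; field names are illustrative only.
record = {
    "name": "Jane Doe",
    "ssn": "123-45-6789",
    "birth_date": "1990-03-07",
    "zip": "10027",
    "income": 52_300,
}

DIRECT_IDENTIFIERS = {"name", "ssn"}  # obvious markers to strip outright

def generalize_birth_date(iso_date: str) -> str:
    """Replace an exact date with a four-month window,
    e.g. '1990-03-07' -> '1990 Jan-Apr'."""
    year, month, _day = iso_date.split("-")
    windows = ["Jan-Apr", "May-Aug", "Sep-Dec"]
    return f"{year} {windows[(int(month) - 1) // 4]}"

def deidentify(rec: dict, noise_scale: float = 0.05, rng=None) -> dict:
    rng = rng or random.Random(0)  # fixed seed so the sketch is reproducible
    # 1. Drop direct identifiers.
    out = {k: v for k, v in rec.items() if k not in DIRECT_IDENTIFIERS}
    # 2. Generalize specific details.
    out["birth_date"] = generalize_birth_date(out["birth_date"])
    # 3. Perturb numeric fields with small random "noise".
    out["income"] = round(out["income"] * (1 + rng.uniform(-noise_scale, noise_scale)))
    return out

print(deidentify(record))
```

Note that the zip code survives untouched here — exactly the kind of remaining quasi-identifier the study shows can be combined with others to re-identify someone.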
“Data anonymization is basically how, for the past 25 years, we’ve been using data for statistical purposes and research while preserving people’s privacy,” says Yves-Alexandre de Montjoye, an assistant professor of computational privacy at Imperial College London and co-author of the new study, published this week in Nature Communications. Many commonly used anonymization techniques, however, originated in the 1990s, before the Internet’s rapid development made it possible to collect such an enormous amount of detail about things such as an individual’s health, finances, and shopping and browsing habits. This discrepancy has made it relatively easy to connect an anonymous line of data to a specific person: if a private detective is searching for someone in New York City and knows the subject is male, is 30 to 35 years old and has diabetes, the sleuth would not be able to deduce the man’s name—but could likely do so quite easily if he or she also knows the target’s birthday, number of children, zip code, employer and car model….(More)”
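The private-detective scenario de Montjoye describes is a linkage attack: each extra attribute the attacker knows shrinks the pool of matching records. A toy simulation on synthetic data (all field names and value ranges invented for this sketch) shows the candidate set collapsing as quasi-identifiers accumulate:

```python
import random

rng = random.Random(42)

# 10,000 synthetic "anonymized" records — no names, just attributes.
people = [
    {
        "id": i,
        "sex": rng.choice(["M", "F"]),
        "age": rng.randrange(18, 80),
        "zip": rng.choice(["10001", "10002", "10003"]),
        "children": rng.randrange(0, 4),
        "car": rng.choice(["sedan", "suv", "truck", "none"]),
    }
    for i in range(10_000)
]

def matches(known: dict) -> list:
    """Records consistent with everything the attacker knows so far."""
    return [p for p in people if all(p[k] == v for k, v in known.items())]

target = people[7]  # the person the attacker is hunting
known, counts = {}, []
for attr in ["sex", "age", "zip", "children", "car"]:
    known[attr] = target[attr]  # attacker learns one more attribute
    counts.append(len(matches(known)))
    print(f"after learning {attr!r}: {counts[-1]} candidates")
```

The candidate count can only shrink with each learned attribute; with enough attributes it typically reaches a handful of records, or just one — which is the study's point about incomplete data sets still being re-identifiable.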
How we can place a value on health care data
Report by E&Y: “Unlocking the power of health care data to fuel innovation in medical research and improve patient care is at the heart of today’s health care revolution. When curated or consolidated into a single longitudinal dataset, patient-level records will trace a complete story of a patient’s demographics, health, wellness, diagnosis, treatments, medical procedures and outcomes. Health care providers need to recognize patient data for what it is: a valuable intangible asset desired by multiple stakeholders, a treasure trove of information.
Among the universe of providers holding significant data assets, the United Kingdom’s National Health Service (NHS) is the single largest integrated health care provider in the world. Its patient records cover the entire UK population from birth to death.
We estimate that the 55 million patient records held by the NHS today may have an indicative market value of several billion pounds to a commercial organization. We also estimate that the curated NHS dataset could be worth as much as £5bn per annum and deliver around £4.6bn of benefit to patients per annum, through potential operational savings for the NHS, enhanced patient outcomes and the generation of wider economic benefits to the UK….(More)”.
The plan to mine the world’s research papers
Priyanka Pulla in Nature: “Carl Malamud is on a crusade to liberate information locked up behind paywalls — and his campaigns have scored many victories. He has spent decades publishing copyrighted legal documents, from building codes to court records, and then arguing that such texts represent public-domain law that ought to be available to any citizen online. Sometimes, he has won those arguments in court. Now, the 60-year-old American technologist is turning his sights on a new objective: freeing paywalled scientific literature. And he thinks he has a legal way to do it.
Over the past year, Malamud has — without asking publishers — teamed up with Indian researchers to build a gigantic store of text and images extracted from 73 million journal articles dating from 1847 up to the present day. The cache, which is still being created, will be kept on a 576-terabyte storage facility at Jawaharlal Nehru University (JNU) in New Delhi. “This is not every journal article ever written, but it’s a lot,” Malamud says. It’s comparable to the size of the core collection in the Web of Science database, for instance. Malamud and his JNU collaborator, bioinformatician Andrew Lynn, call their facility the JNU data depot.
No one will be allowed to read or download work from the repository, because that would breach publishers’ copyright. Instead, Malamud envisages, researchers could crawl over its text and data with computer software, scanning through the world’s scientific literature to pull out insights without actually reading the text.
The unprecedented project is generating much excitement because it could, for the first time, open up vast swathes of the paywalled literature for easy computerized analysis. Dozens of research groups already mine papers to build databases of genes and chemicals, map associations between proteins and diseases, and generate useful scientific hypotheses. But publishers control — and often limit — the speed and scope of such projects, which typically confine themselves to abstracts, not full text. Researchers in India, the United States and the United Kingdom are already making plans to use the JNU store instead. Malamud and Lynn have held workshops at Indian government laboratories and universities to explain the idea. “We bring in professors and explain what we are doing. They get all excited and they say, ‘Oh gosh, this is wonderful’,” says Malamud.
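The kind of non-consumptive mining described above — building databases of genes and chemicals, or mapping associations, without anyone reading the papers — can be sketched minimally as dictionary-based co-occurrence counting. The lexicons and abstracts below are tiny invented stand-ins; real pipelines use curated vocabularies and full NLP toolchains:

```python
import re
from collections import Counter

# Hypothetical mini-lexicons (real projects use curated vocabularies).
GENES = {"BRCA1", "TP53", "EGFR"}
DISEASES = {"breast cancer", "lung cancer"}

# Invented stand-ins for mined article text.
abstracts = [
    "Mutations in BRCA1 are associated with hereditary breast cancer.",
    "EGFR inhibitors show efficacy in lung cancer; TP53 status modulates response.",
    "TP53 is frequently altered in breast cancer and lung cancer alike.",
]

# Count gene-disease co-mentions across the corpus.
cooccurrence = Counter()
for text in abstracts:
    genes = {g for g in GENES if re.search(rf"\b{g}\b", text)}
    diseases = {d for d in DISEASES if d in text.lower()}
    for g in genes:
        for d in diseases:
            cooccurrence[(g, d)] += 1

print(cooccurrence.most_common(3))
```

Run at the scale of 73 million articles, aggregate counts like these are the "insights without actually reading the text" that Malamud envisages researchers pulling from the depot.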
But the depot’s legal status isn’t yet clear. Malamud, who contacted several intellectual-property (IP) lawyers before starting work on the depot, hopes to avoid a lawsuit. “Our position is that what we are doing is perfectly legal,” he says. For the moment, he is proceeding with caution: the JNU data depot is air-gapped, meaning that no one can access it from the Internet. Users have to physically visit the facility, and only researchers who want to mine for non-commercial purposes are currently allowed in. Malamud says his team does plan to allow remote access in the future. “The hope is to do this slowly and deliberately. We are not throwing this open right away,” he says….(More)”.
Applying design science in public policy and administration research
Paper by Sjoerd Romme and Albert Meijer: “There is increasing debate about the role that public policy research can play in identifying solutions to complex policy challenges. Most studies focus on describing and explaining how governance systems operate. However, some scholars argue that because current institutions are often not up to the task, researchers need to rethink this ‘bystander’ approach and engage in experimentation and interventions that can help to change and improve governance systems.
This paper contributes to this discourse by developing a design science framework that integrates retrospective research (scientific validation) and prospective research (creative design). It illustrates the merits and challenges of doing this through two case studies in the Netherlands and concludes that a design science framework provides a way of integrating traditional validation-oriented research with intervention-oriented design approaches. We argue that working at the interface between them will create new opportunities for these complementary modes of public policy research to achieve impact….(More)”