Paper by Lisa Schmidthuber, Dennis Hilgers and Maximilian Rapp: “Citizen engagement is seen as a way to address a range of societal challenges, fiscal constraints, as well as wicked problems, and increasing public participation in political decisions could help to address low levels of trust in politicians and decreasing satisfaction with political parties. This paper examines the perceived impacts of an experiment by the Austrian People’s Party which, in response to reaching a historic low in the polls, opened up its manifesto process to public participation via digital technology. Analysis of survey data from participants found that self-efficacy is positively associated with participation intensity but negatively related to satisfaction. In contrast, collective efficacy is related to positive perceptions of public participation in party politics but does not influence levels of individual participation. Future research is needed to explore the outcomes of political innovations that use digital technologies to enable public participation on voting behaviour, party membership and attitudes to representative democracy….(More)”.
Trust in Contemporary Society
Book edited by Masamichi Sasaki: “… deals with conceptual, theoretical and social interaction analyses, historical data on societies, national surveys or cross-national comparative studies, and methodological issues related to trust. The authors are from a variety of disciplines: psychology, sociology, political science, organizational studies, history, and philosophy, and from Britain, the United States, the Czech Republic, the Netherlands, Australia, Germany, and Japan. They bring their vast knowledge from different historical and cultural backgrounds to illuminate contemporary issues of trust and distrust. The socio-cultural perspective of trust is important and increasingly acknowledged as central to trust research. Accordingly, future directions for comparative trust research are also discussed….(More)”.
Understanding our Political Nature: How to put knowledge and reason at the heart of political decision-making
EU report by Rene Van Bavel et al: “Recognising that advances in behavioural, decision and social sciences demonstrate that we are not purely rational beings, this report brings new insights into our political behaviour, and this understanding has the potential to address some of the current crises in our democracies. Sixty experts from across the globe, working in the fields of behavioural and social sciences as well as the humanities, have contributed to the research that underpins this JRC report, which calls for evidence-informed policymaking not to be taken for granted. There is a chapter dedicated to each key finding, outlining the latest scientific thinking as well as an overview of the possible implications for policymaking. The key findings are:
- Misperception and Disinformation: Our thinking skills are challenged by today’s information environment and make us vulnerable to disinformation. We need to think more about how we think.
- Collective Intelligence: Science can help us re-design the way policymakers work together to take better decisions and prevent policy mistakes.
- Emotions: We can’t separate emotion from reason. Better information about citizens’ emotions and greater emotional literacy could improve policymaking.
- Values and Identities: These drive political behaviour but are not properly understood or debated.
- Framing, Metaphor and Narrative: Facts don’t speak for themselves. Framing, metaphors and narratives need to be used responsibly if evidence is to be heard and understood.
- Trust and Openness: The erosion of trust in experts and in government can only be addressed by greater honesty and public deliberation about interests and values.
- Evidence-informed policymaking: The principle that policy should be informed by evidence is under attack. Politicians, scientists and civil society need to defend this cornerstone of liberal democracy….(More)”
“Anonymous” Data Won’t Protect Your Identity
Sophie Bushwick at Scientific American: “The world produces roughly 2.5 quintillion bytes of digital data per day, adding to a sea of information that includes intimate details about many individuals’ health and habits. To protect privacy, data brokers must anonymize such records before sharing them with researchers and marketers. But a new study finds it is relatively easy to reidentify a person from a supposedly anonymized data set—even when that set is incomplete.
Massive data repositories can reveal trends that teach medical researchers about disease, demonstrate issues such as the effects of income inequality, coach artificial intelligence into humanlike behavior and, of course, aim advertising more efficiently. To shield people who—wittingly or not—contribute personal information to these digital storehouses, most brokers send their data through a process of deidentification. This procedure involves removing obvious markers, including names and social security numbers, and sometimes taking other precautions, such as introducing random “noise” data to the collection or replacing specific details with general ones (for example, swapping a birth date of “March 7, 1990” for “January–April 1990”). The brokers then release or sell a portion of this information.
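The deidentification steps described above can be sketched in a few lines of code. This is an illustrative sketch only, with hypothetical field names; real brokers use more elaborate pipelines:

```python
# Illustrative sketch of the deidentification steps described above:
# drop direct identifiers and generalize specific details.
# All field names here are hypothetical.

def deidentify(record):
    """Return a copy of the record with direct identifiers removed
    and the exact birth date coarsened to a four-month range."""
    cleaned = {k: v for k, v in record.items()
               if k not in ("name", "ssn")}  # remove obvious markers
    year, month, _day = cleaned.pop("birth_date").split("-")
    # Replace the exact date with a coarse bucket,
    # e.g. 1990-03-07 -> "1990 Jan-Apr"
    bucket = ["Jan-Apr", "May-Aug", "Sep-Dec"][(int(month) - 1) // 4]
    cleaned["birth_period"] = f"{year} {bucket}"
    return cleaned

record = {"name": "Jane Doe", "ssn": "123-45-6789",
          "birth_date": "1990-03-07", "zip": "10027",
          "diagnosis": "diabetes"}
print(deidentify(record))
# -> {'zip': '10027', 'diagnosis': 'diabetes', 'birth_period': '1990 Jan-Apr'}
```

Note that the zip code and diagnosis survive intact; as the article goes on to explain, such residual attributes are exactly what makes reidentification possible.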
“Data anonymization is basically how, for the past 25 years, we’ve been using data for statistical purposes and research while preserving people’s privacy,” says Yves-Alexandre de Montjoye, an assistant professor of computational privacy at Imperial College London and co-author of the new study, published this week in Nature Communications. Many commonly used anonymization techniques, however, originated in the 1990s, before the Internet’s rapid development made it possible to collect such an enormous amount of detail about things such as an individual’s health, finances, and shopping and browsing habits. This discrepancy has made it relatively easy to connect an anonymous line of data to a specific person: if a private detective is searching for someone in New York City and knows the subject is male, is 30 to 35 years old and has diabetes, the sleuth would not be able to deduce the man’s name—but could likely do so quite easily if he or she also knows the target’s birthday, number of children, zip code, employer and car model….(More)”
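The private-detective scenario above can be made concrete with a toy example: even with names stripped, a handful of known attributes ("quasi-identifiers") can narrow a dataset down to a single record. The data below is synthetic and purely illustrative:

```python
# Toy illustration of reidentification: count how many records in an
# "anonymized" dataset are consistent with what we know about a target.
# The dataset is synthetic.

records = [
    {"sex": "M", "age": 32, "zip": "10027", "condition": "diabetes"},
    {"sex": "M", "age": 33, "zip": "10027", "condition": "asthma"},
    {"sex": "F", "age": 32, "zip": "10027", "condition": "diabetes"},
    {"sex": "M", "age": 32, "zip": "10025", "condition": "diabetes"},
]

def matches(dataset, **known):
    """Return the records consistent with everything known about a target."""
    return [r for r in dataset
            if all(r.get(k) == v for k, v in known.items())]

# Broad facts narrow the pool only a little...
print(len(matches(records, sex="M", condition="diabetes")))  # 2 candidates
# ...but one more attribute pins down a single record.
print(len(matches(records, sex="M", condition="diabetes", zip="10027")))  # 1
```

In a city-scale dataset the same logic applies: each additional attribute (birthday, number of children, employer, car model) multiplies the discriminating power until only one candidate remains.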
How we can place a value on health care data
Report by E&Y: “Unlocking the power of health care data to fuel innovation in medical research and improve patient care is at the heart of today’s health care revolution. When curated or consolidated into a single longitudinal dataset, patient-level records will trace a complete story of a patient’s demographics, health, wellness, diagnosis, treatments, medical procedures and outcomes. Health care providers need to recognize patient data for what it is: a valuable intangible asset desired by multiple stakeholders, a treasure trove of information.
Among the universe of providers holding significant data assets, the United Kingdom’s National Health Service (NHS) is the single largest integrated health care provider in the world. Its patient records cover the entire UK population from birth to death.
We estimate that the 55 million patient records held by the NHS today may have an indicative market value of several billion pounds to a commercial organization. We estimate also that the value of the curated NHS dataset could be as much as £5bn per annum and deliver around £4.6bn of benefit to patients per annum, in potential operational savings for the NHS, enhanced patient outcomes and generation of wider economic benefits to the UK….(More)”.
The plan to mine the world’s research papers
Priyanka Pulla in Nature: “Carl Malamud is on a crusade to liberate information locked up behind paywalls — and his campaigns have scored many victories. He has spent decades publishing copyrighted legal documents, from building codes to court records, and then arguing that such texts represent public-domain law that ought to be available to any citizen online. Sometimes, he has won those arguments in court. Now, the 60-year-old American technologist is turning his sights on a new objective: freeing paywalled scientific literature. And he thinks he has a legal way to do it.
Over the past year, Malamud has — without asking publishers — teamed up with Indian researchers to build a gigantic store of text and images extracted from 73 million journal articles dating from 1847 up to the present day. The cache, which is still being created, will be kept on a 576-terabyte storage facility at Jawaharlal Nehru University (JNU) in New Delhi. “This is not every journal article ever written, but it’s a lot,” Malamud says. It’s comparable to the size of the core collection in the Web of Science database, for instance. Malamud and his JNU collaborator, bioinformatician Andrew Lynn, call their facility the JNU data depot.
No one will be allowed to read or download work from the repository, because that would breach publishers’ copyright. Instead, Malamud envisages, researchers could crawl over its text and data with computer software, scanning through the world’s scientific literature to pull out insights without actually reading the text.
The unprecedented project is generating much excitement because it could, for the first time, open up vast swathes of the paywalled literature for easy computerized analysis. Dozens of research groups already mine papers to build databases of genes and chemicals, map associations between proteins and diseases, and generate useful scientific hypotheses. But publishers control — and often limit — the speed and scope of such projects, which typically confine themselves to abstracts, not full text. Researchers in India, the United States and the United Kingdom are already making plans to use the JNU store instead. Malamud and Lynn have held workshops at Indian government laboratories and universities to explain the idea. “We bring in professors and explain what we are doing. They get all excited and they say, ‘Oh gosh, this is wonderful’,” says Malamud.
But the depot’s legal status isn’t yet clear. Malamud, who contacted several intellectual-property (IP) lawyers before starting work on the depot, hopes to avoid a lawsuit. “Our position is that what we are doing is perfectly legal,” he says. For the moment, he is proceeding with caution: the JNU data depot is air-gapped, meaning that no one can access it from the Internet. Users have to physically visit the facility, and only researchers who want to mine for non-commercial purposes are currently allowed in. Malamud says his team does plan to allow remote access in the future. “The hope is to do this slowly and deliberately. We are not throwing this open right away,” he says….(More)”.
Applying design science in public policy and administration research
Paper by Sjoerd Romme and Albert Meijer: “There is increasing debate about the role that public policy research can play in identifying solutions to complex policy challenges. Most studies focus on describing and explaining how governance systems operate. However, some scholars argue that because current institutions are often not up to the task, researchers need to rethink this ‘bystander’ approach and engage in experimentation and interventions that can help to change and improve governance systems.
This paper contributes to this discourse by developing a design science framework that integrates retrospective research (scientific validation) and prospective research (creative design). It illustrates the merits and challenges of doing this through two case studies in the Netherlands and concludes that a design science framework provides a way of integrating traditional validation-oriented research with intervention-oriented design approaches. We argue that working at the interface between them will create new opportunities for these complementary modes of public policy research to achieve impact….(More)”
Review into bias in algorithmic decision-making
Interim Report by the Centre for Data Ethics and Innovation (UK): The use of algorithms has the potential to improve the quality of decision-making by increasing the speed and accuracy with which decisions are made. If designed well, they can reduce human bias in decision-making processes. However, as the volume and variety of data used to inform decisions increases, and the algorithms used to interpret the data become more complex, concerns are growing that without proper oversight, algorithms risk entrenching and potentially worsening bias.
The way in which decisions are made, the potential biases which they are subject to and the impact these decisions have on individuals are highly context dependent. Our Review focuses on exploring bias in four key sectors: policing, financial services, recruitment and local government. These have been selected because they all involve significant decisions being made about individuals, there is evidence of the growing uptake of machine learning algorithms in the sectors and there is evidence of historic bias in decision-making within these sectors. This Review seeks to answer three sets of questions:
- Data: Do organisations and regulators have access to the data they require to adequately identify and mitigate bias?
- Tools and techniques: What statistical and technical solutions are available now or will be required in future to identify and mitigate bias and which represent best practice?
- Governance: Who should be responsible for governing, auditing and assuring these algorithmic decision-making systems?
Our work to date has led to some emerging insights that respond to these three sets of questions and will guide our subsequent work….(More)”.
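One of the simplest statistical tools the Review's "tools and techniques" question points at is comparing a system's rate of favourable decisions across groups. Below is a minimal demographic-parity sketch; the data and the 0.8 threshold (the commonly cited "four-fifths rule") are illustrative assumptions, not part of the Review:

```python
# Minimal sketch of a demographic-parity check: compare the rate of
# favourable decisions across two groups. The data and the 0.8
# threshold (the "four-fifths rule") are illustrative only.

decisions = [  # (group, decision) pairs from some decision-making system
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def positive_rate(data, group):
    """Fraction of favourable (1) decisions received by a group."""
    outcomes = [d for g, d in data if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = positive_rate(decisions, "A")  # 0.75
rate_b = positive_rate(decisions, "B")  # 0.25
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33, well below 0.8
```

A check like this only detects one narrow kind of disparity; as the Review stresses, which fairness measure is appropriate is highly context dependent.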
Improving access to information and restoring the public’s faith in democracy through deliberative institutions
Katherine R. Knobloch at Democratic Audit: “Both scholars and citizens have begun to believe that democracy is in decline. Authoritarian power grabs, polarising rhetoric, and increasing inequality can all claim responsibility for democratic systems that feel broken. Democracy depends on a polity who believe that their engagement matters, but evidence suggests democratic institutions have become unresponsive to the will of the public. How can we restore faith in self-government when both research and personal experience tell us that the public is losing power, not gaining it?
Deliberative public engagement
Deliberative democracy offers one solution, and it’s slowly shifting how the public engages in political decision-making. In Oregon, the Citizens’ Initiative Review (CIR) asks a group of randomly selected voters to carefully study public issues and then make policy recommendations based on their collective experience and insight. In Ireland, Citizens’ Assemblies are being used to amend the country’s constitution to better reflect changing cultural norms. In communities across the world, Participatory Budgeting is giving the public control over local government spending. Far from squashing democratic power, these deliberative institutions bolster it. They exemplify a new wave in democratic government, one that aims to bring community members together across political and cultural divides to make decisions about how to govern themselves.
Though the contours of deliberative events vary, most share key characteristics. A diverse body of community members gather together to learn from experts and one another, think through the short- and long-term implications of different policy positions, and discuss how issues affect not only themselves but their wider communities. At the end of those conversations, they make decisions that are representative of the diversity of participants and their ideas and which have been tested through collective reasoning….(More)”.
Digital Serfdom
/ˈdɪʤətəl ˈsɜrfdəm/
A condition where consumers give up their personal and private information in order to be able to use a particular product or service.
Serfdom is a system of forced labor that existed in feudal societies and was very common in medieval Europe. Under this system, serfs, or peasants, performed a variety of labor for their lords in exchange for protection from bandits and a small piece of land that they could cultivate for themselves. Serfs were also required to pay some form of tax, often in the form of chickens or crops yielded from their piece of land.
Hassan Khan in The Next Web argues that the decline of property ownership is a sign that we are living in digital serfdom. In an article he writes:
“The percentage of households without a car is increasing. Ride-hailing services have multiplied. Netflix boasts over 188 million subscribers. Spotify gains ten million paid members every five to six months.
“The model of “impermanence” has become the new normal. But there’s still one place where permanence finds its home, with over two billion active monthly users, Facebook has become a platform of record for the connected world. If it’s not on social media, it may as well have never happened.”
Joshua A. T. Fairfield elaborates on this phenomenon in his book Owned: Property, Privacy, and the New Digital Serfdom. Fairfield discusses the book in an article in The Conversation, stating that:
“The issue of who gets to control property has a long history. In the feudal system of medieval Europe, the king owned almost everything, and everyone else’s property rights depended on their relationship with the king. Peasants lived on land granted by the king to a local lord, and workers didn’t always even own the tools they used for farming or other trades like carpentry and blacksmithing.
[…]
“Yet the expansion of the internet of things seems to be bringing us back to something like that old feudal model, where people didn’t own the items they used every day. In this 21st-century version, companies are using intellectual property law – intended to protect ideas – to control physical objects consumers think they own.”
In other words, Fairfield is suggesting that the devices and services we use—iPhones, Fitbits, Roombas, digital door locks, Spotify, Uber, and many more—are constantly capturing data about our behaviors. To access the full functionality of these devices and services, consumers have no choice but to trade away their personal data. Private corporations then use this data for targeted advertising, among other purposes. This system of digital serfdom binds consumers to private corporations that dictate the terms of use for their products and services.
Janet Burns wrote about Alex Rosenblat’s UBERLAND: How Algorithms Are Rewriting The Rules Of Work and gave some examples of how algorithms use personal data to manipulate consumers’ behaviors:
“For example, algorithms in control of assigning and pricing rides have often surprised drivers and riders, quietly taking into account other traffic in the area, regionally adjusted rates, and data on riders and drivers themselves.
“In recent years, we’ve seen similar adjustments happen behind the scenes in online shopping, as UBERLAND points out: major retailers have tweaked what price different customers see for the same item based on where they live, and how feasibly they could visit a brick-and-mortar store for it.”
To conclude, an excerpt from Fairfield’s book cautions:
“In the coming decade, if we do not take back our ownership rights, the same will be said of our self-driving cars and software-enabled homes. We risk becoming digital peasants, owned by software and advertising companies, not to mention overreaching governments.”
Sources and Further Reading:
- Fairfield, Joshua A. T. Owned: Property, Privacy, and the New Digital Serfdom. Cambridge University Press, 2017.
- Fairfield, Joshua A.T. “The ‘internet of things’ is sending us back to the Middle Ages.” The Conversation, September 6, 2017.
- Burns, Janet. “Gigs And AI Are Driving Us Into Digital Servitude.” Forbes, October 28, 2018.
- Khan, Hassan. “We’re living in digital serfdom — trading privacy for convenience.” The Next Web, November 10, 2018.