Paper by Nicolas Pflanzl, Tadeu Classe, Renata Araujo, and Gottfried Vossen: “One of the challenges envisioned for eGovernment is how to actively involve citizens in the improvement of public services, allowing governments to offer better services. However, citizen involvement in public service design through ICT is not an easy goal. Services have been deployed internally in public organizations, making them difficult for citizens to leverage, especially those without an IT background. This research moves towards narrowing the gap between the opacity and complexity of public service processes and citizens’ lack of interest or competence to understand them. The paper discusses game design as an approach to motivate, engage and change citizens’ behavior with respect to public services improvement. The design of a sample serious game is proposed; benefits and challenges are discussed using a public service delivery scenario from Brazil….(More)”
The risks of relying on robots for fairer staff recruitment
Sarah O’Connor at the Financial Times: “Robots are not just taking people’s jobs away, they are beginning to hand them out, too. Go to any recruitment industry event and you will find the air is thick with terms like “machine learning”, “big data” and “predictive analytics”.
The argument for using these tools in recruitment is simple. Robo-recruiters can sift through thousands of job candidates far more efficiently than humans. They can also do it more fairly. Since they do not harbour conscious or unconscious human biases, they will recruit a more diverse and meritocratic workforce.
This is a seductive idea but it is also dangerous. Algorithms are not inherently neutral just because they see the world in zeros and ones.
For a start, any machine learning algorithm is only as good as the training data from which it learns. Take the PhD thesis of academic researcher Colin Lee, released to the press this year. He analysed data on the success or failure of 441,769 job applications and built a model that could predict with 70 to 80 per cent accuracy which candidates would be invited to interview. The press release plugged this algorithm as a potential tool to screen a large number of CVs while avoiding “human error and unconscious bias”.
But a model like this would absorb any human biases at work in the original recruitment decisions. For example, the research found that age was the biggest predictor of being invited to interview, with the youngest and the oldest applicants least likely to be successful. You might think it fair enough that inexperienced youngsters do badly, but the routine rejection of older candidates seems like something to investigate rather than codify and perpetuate. Mr Lee acknowledges these problems and suggests it would be better to strip the CVs of attributes such as gender, age and ethnicity before using them….(More)”
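Lee’s suggestion can be sketched in a few lines. This is an illustrative example only, not the study’s actual pipeline: the column names, data, and model choice are all invented for the sketch.

```python
# Illustrative sketch: withholding protected attributes before training
# a CV-screening model, as Lee suggests. All columns and values here are
# invented; the real study's features and model are not public in this form.
import pandas as pd
from sklearn.linear_model import LogisticRegression

applications = pd.DataFrame({
    "years_experience": [1, 5, 10, 3, 8, 0],
    "age":              [22, 30, 55, 26, 41, 21],
    "gender":           ["f", "m", "f", "m", "f", "m"],
    "invited":          [0, 1, 1, 1, 1, 0],   # past interview decisions
})

protected = ["age", "gender"]  # attributes to strip before training
X = applications.drop(columns=protected + ["invited"])
y = applications["invited"]

# Caveat: dropping columns does not remove correlated proxies
# (e.g. graduation year can still leak age).
model = LogisticRegression().fit(X, y)
```

Note that the model is still trained on past human decisions, which is exactly how the biases discussed above get absorbed in the first place; stripping attributes mitigates, but does not eliminate, the problem.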
The SAGE Handbook of Digital Journalism
Book edited by Tamara Witschge, C. W. Anderson, David Domingo, and Alfred Hermida: “The production and consumption of news in the digital era is blurring the boundaries between professionals, citizens and activists. Actors producing information are multiplying, but media companies still hold a central position. Journalism research faces important challenges to capture, examine, and understand the current news environment. The SAGE Handbook of Digital Journalism starts from the pressing need for a thorough and bold debate to redefine the assumptions of research in the changing field of journalism. The 38 chapters, written by a team of global experts, are organised into four key areas:
Section A: Changing Contexts
Section B: News Practices in the Digital Era
Section C: Conceptualizations of Journalism
Section D: Research Strategies
By addressing both institutional and non-institutional news production and providing ample attention to the question ‘who is a journalist?’ and the changing practices of news audiences in the digital era, this Handbook shapes the field and defines the roadmap for the research challenges that scholars will face in the coming decades….(More)”
Technology can boost active citizenship – if it’s chosen well
In Taiwan, for instance, tech activists have built online databases to track political contributions and create channels for public participation in parliamentary debates. In South Africa, anti-corruption organisation Corruption Watch has used online and mobile platforms to gather public votes for Public Protector candidates.
But research I recently completed with partners in Africa and Europe suggests that few of these organisations may be choosing the right technological tools to make their initiatives work.
We interviewed people in Kenya and South Africa who are responsible for choosing technologies when implementing transparency and accountability initiatives. In many cases, they’re not choosing their tech well. They often only recognised in retrospect how important their technology choices were. Most would have chosen differently if they were put in the same position again.
Our findings challenge a common mantra which holds that technological failures are usually caused by people or strategies rather than technologies. It’s certainly true that human agency matters. However powerful technologies may seem, choices are made by people – not the machines they invent. But our research supports the idea that technology isn’t neutral. It suggests that sometimes the problem really is the tech….
So what should those working in civic technology do about improving tool selection? From our research, we developed six “rules” for better tool choices. These are:
- first work out what you don’t know;
- think twice before building a new tool;
- get a second opinion;
- try it before you buy it;
- plan for failure; and
- share what you learn.
Possibly the most important of these recommendations is to try or “trial” technologies before making a final selection. This might seem obvious. But it was rarely done in our sample….(More)”
Data and Democracy
(Free) book by Andrew Therriault: “The 2016 US elections will be remembered for many things, but for those who work in politics, 2016 may be best remembered as the year that the use of data in politics reached its maturity. Through a collection of essays from leading experts in the field, this report explores how political data science helps to drive everything from overall strategy and messaging to individual voter contacts and advertising.
Curated by Andrew Therriault, former Director of Data Science for the Democratic National Committee, this illuminating report includes first-hand accounts from Democrats, Republicans, and members of the media. Tech-savvy readers will get a comprehensive account of how data analysis has prevailed over political instinct and experience and examples of the challenges these practitioners face.
Essays include:
- The Role of Data in Campaigns—Andrew Therriault, former Director of Data Science for the Democratic National Committee
- Essentials of Modeling and Microtargeting—Dan Castleman, cofounder and Director of Analytics at Clarity Campaign Labs, a leading modeler in Democratic politics
- Data Management for Political Campaigns—Audra Grassia, Deputy Political Director for the Democratic Governors Association in 2014
- How Technology Is Changing the Polling Industry—Patrick Ruffini, cofounder of Echelon Insights and Founder/Chairman of Engage, was a digital strategist for President Bush in 2004 and for the Republican National Committee in 2006
- Data-Driven Media Optimization—Alex Lundry, cofounder and Chief Data Scientist at Deep Root Analytics, a leading expert on media and voter analytics, electoral targeting, and political data mining
- How (and Why) to Follow the Money in Politics—Derek Willis, ProPublica’s news applications developer, formerly with The New York Times
- Digital Advertising in the Post-Obama Era—Daniel Scarvalone, Associate Director of Research and Data at Bully Pulpit Interactive (BPI), a digital marketer for the Democratic party
- Election Forecasting in the Media—Natalie Jackson, Senior Polling Editor at The Huffington Post…(More)”
Nudges That Fail
Paper by Cass R. Sunstein: “Why are some nudges ineffective, or at least less effective than choice architects hope and expect? Focusing primarily on default rules, this essay emphasizes two reasons. The first involves strong antecedent preferences on the part of choosers. The second involves successful “counternudges,” which persuade people to choose in a way that confounds the efforts of choice architects. Nudges might also be ineffective, and less effective than expected, for five other reasons. (1) Some nudges produce confusion on the part of the target audience. (2) Some nudges have only short-term effects. (3) Some nudges produce “reactance” (though this appears to be rare). (4) Some nudges are based on an inaccurate (though initially plausible) understanding on the part of choice architects of what kinds of choice architecture will move people in particular contexts. (5) Some nudges produce compensating behavior, resulting in no net effect. When a nudge turns out to be insufficiently effective, choice architects have three potential responses: (1) Do nothing; (2) nudge better (or different); and (3) fortify the effects of the nudge, perhaps through counter-counternudges, perhaps through incentives, mandates, or bans….(More)”.
White House, Transportation Dept. want help using open data to prevent traffic crashes
Samantha Ehlinger in FedScoop: “The Transportation Department is looking for public input on how to better interpret and use data on fatal crashes after 2015 data revealed a startling 7.2 percent spike in traffic deaths that year.
Looking for new solutions that could prevent more deaths on the roads, the department released the 2015 open dataset on fatal crashes three months earlier than usual. With it, the department and the White House announced a call to action for people to use the data set as a jumping-off point for a dialogue on how to prevent crashes, as well as to understand what might be causing the spike.
“What we’re ultimately looking for is getting more people engaged in the data … matching this with other publicly available data, or data that the private sector might be willing to make available, to dive in and to tell these stories,” said Bryan Thomas, communications director for the National Highway Traffic Safety Administration, to FedScoop.
One striking statistic was that “pedestrian and pedalcyclist fatalities increased to a level not seen in 20 years,” according to a DOT press release. …
“We want folks to be engaged directly with our own data scientists, so we can help people through the dataset and help answer their questions as they work their way through, bounce ideas off of us, etc.,” Thomas said. “We really want to be accessible in that way.”
He added that as ideas “come to fruition,” there will be opportunities to present what people have learned.
“It’s a very, very rich data set, there’s a lot of information there,” Thomas said. “Our own ability is, frankly, limited to investigate all of the questions that you might have of it. And so we want to get the public really diving in as well.”…
Here are the questions “worth exploring,” according to the call to action:
- How might improving economic conditions around the country change how Americans are getting around? What models can we develop to identify communities that might be at a higher risk for fatal crashes?
- How might climate change increase the risk of fatal crashes in a community?
- How might we use studies of attitudes toward speeding, distracted driving, and seat belt use to better target marketing and behavioral change campaigns?
- How might we monitor public health indicators and behavior risk indicators to target communities that might have a high prevalence of behaviors linked with fatal crashes (drinking, drug use/addiction, etc.)? What countermeasures should we create to address these issues?”…(More)”
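The kind of exploration the call to action invites can be sketched in a few lines of pandas. The column names and figures below are invented for illustration and do not come from the actual NHTSA release, which uses its own file layout and coding scheme.

```python
# Illustrative sketch: year-over-year change in fatalities by road-user
# type, the sort of cut highlighted in the DOT press release.
# All numbers below are made up for the example.
import io
import pandas as pd

csv = io.StringIO("""person_type,year,fatalities
pedestrian,2014,4000
pedestrian,2015,4300
pedalcyclist,2014,700
pedalcyclist,2015,770
""")

df = pd.read_csv(csv)
by_year = df.set_index(["person_type", "year"])["fatalities"]

# Percent change within each road-user type from one year to the next.
pct = by_year.groupby(level="person_type").pct_change() * 100
print(pct)
```

Joining a table like this against other public data (weather, fuel prices, local economic indicators) is the "matching" step Thomas describes, and where most of the interesting questions above would start.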
Questioning Big Data: Crowdsourcing crisis data towards an inclusive humanitarian response
Paper: “The aim of this paper is to critically explore whether crowdsourced Big Data enables an inclusive humanitarian response at times of crisis. We argue that all data, including Big Data, are socially constructed artefacts that reflect the contexts and processes of their creation. To support our argument, we qualitatively analysed the process of ‘Big Data making’ that occurred by way of crowdsourcing through open data platforms, in the context of two specific humanitarian crises, namely the 2010 earthquake in Haiti and the 2015 earthquake in Nepal. We show that the process of creating Big Data from local and global sources of knowledge entails the transformation of information as it moves from one distinct group of contributors to the next. The implication of this transformation is that locally based, affected people, often the original ‘crowd’, are excluded from the information flow and from the interpretation of crowdsourced crisis knowledge as used by formal responding organizations, and are marginalized in their ability to benefit from Big Data in support of their own means. Our paper contributes a critical perspective to the debate on participatory Big Data by explaining the process of inclusion and exclusion during data making, towards more responsive humanitarian relief….(More)”.
Counterterrorism and Counterintelligence: Crowdsourcing Approach
Literature review by Sanket Subhash Khanwalkar: “Despite heavy investment by the United States and several other national governments, terrorism-related problems are rising at an alarming rate. In the last decade, lone-wolf terrorism in particular has caused 70% of all terrorism-related deaths in the US and the West. This literature survey describes lone-wolf terrorism in detail to analyse its structure, characteristics, strengths and weaknesses. It also investigates crowdsourced intelligence, as an unorthodox approach to counter lone-wolf terrorism, by reviewing its current state of the art and identifying areas for improvement….(More)”
Rethinking Nudge: Libertarian paternalism and classical utilitarianism
Hiroaki Itai, Akira Inoue, and Satoshi Kodama in Special Issue on Nudging of The Tocqueville Review/La revue Tocqueville: “Recently, libertarian paternalism has been intensely debated. It recommends employing policies and practices that “nudge” ordinary people to make better choices without forcing them to do so. Nudging policies and practices have penetrated our society, in cases like purchasing life insurance or a residence. They are also used for preventing people from addictive acts that may be harmful to them in the long run, such as having too much sugary or fatty food. In nudging people to act rationally, various kinds of cognitive effects impacting the consumers’ decision-making process should be considered, given the growing influence of consumer advertising. Since libertarian paternalism makes use of such effects in light of the recent development of behavioral economics and cognitive psychology in a principled manner, libertarian paternalism and its justification of nudges attract our attention as an approach providing normative guidance for our action. …
This paper has two aims: the first is to examine whether libertarian paternalism can give an appropriate theoretical foundation to the idea and practice of nudges. The second is to show that utilitarianism, or, more precisely, the classical version of utilitarianism, treats nudges in a more consistent and plausible manner. To achieve these two aims, first of all, we dwell on how Cass Sunstein—one of the founders of libertarian paternalism—misconceives Mill’s harm principle, and on how this may prompt us to see that utilitarianism can reasonably legitimate nudging policies (section one). We then point to two biases that embarrass libertarian paternalism (the scientism bias and the dominant-culture bias), which we believe stem from the fact that libertarian paternalism assumes the informed preference satisfaction view of welfare (section two). We finally argue that classical utilitarianism not only can overcome the two biases, but can also reasonably endorse any system monitoring a choice architect to discharge his or her responsibility (section three)….(More)”