Datafication and empowerment: How the open data movement re-articulates notions of democracy, participation, and journalism


Paper by Stefan Baack in Big Data & Society: “This article shows how activists in the open data movement re-articulate notions of democracy, participation, and journalism by applying practices and values from open source culture to the creation and use of data. Focusing on the Open Knowledge Foundation Germany and drawing from a combination of interviews and content analysis, it argues that this process leads activists to develop new rationalities around datafication that can support the agency of datafied publics. Three modulations of open source are identified: First, by regarding data as a prerequisite for generating knowledge, activists transform the sharing of source code to include the sharing of raw data. Sharing raw data should break the interpretative monopoly of governments and would allow people to make their own interpretation of data about public issues. Second, activists connect this idea to an open and flexible form of representative democracy by applying the open source model of participation to political participation. Third, activists acknowledge that intermediaries are necessary to make raw data accessible to the public. This leads them to an interest in transforming journalism to become an intermediary in this sense. At the same time, they try to act as intermediaries themselves and develop civic technologies to put their ideas into practice. The article concludes by suggesting that the practices and ideas of open data activists are relevant because they illustrate the connection between datafication and open source culture and help to understand how datafication might support the agency of publics and actors outside big government and big business….(More)”

Crowdsourcing: a survey of applications


Paper by Jayshri Namdeorao Ganthade and Sunil R. Gupta: “Crowdsourcing, itself a multidisciplinary field, can be well-served by incorporating theories and methods from affective computing. We present various applications that are based on crowdsourcing. Research on its principles and methods can enable general problems to be solved via human computation systems. Crowdsourcing is the act of outsourcing tasks to a large group of people through an open request via the Internet. It has become popular among social scientists as a way to recruit research participants from the general public for studies. Crowdsourcing has been introduced as a new online distributed problem-solving model in which networked people collaborate to complete a task and produce a result. However, the idea of crowdsourcing is not new, and can be traced back to Charles Darwin. Darwin was interested in studying the universality of facial expressions in conveying emotions. This required a large amount of data, so he had to consider a global population in order to reach more general conclusions.
This paper provides an introduction to crowdsourcing, guidelines for using it, and its applications in various fields. Finally, the article draws conclusions based on these applications….(More)”.


Digital government evolution: From transformation to contextualization


Paper by Tomasz Janowski in the Government Information Quarterly: “The Digital Government landscape is continuously changing to reflect how governments are trying to find innovative digital solutions to social, economic, political and other pressures, and how they transform themselves in the process. Understanding and predicting such changes is important for policymakers, government executives, researchers and all those who prepare, make, implement or evaluate Digital Government decisions. This article argues that the concept of Digital Government evolves toward more complexity and greater contextualization and specialization, similar to evolution-like processes that lead to changes in cultures and societies. To this end, the article presents a four-stage Digital Government Evolution Model comprising Digitization (Technology in Government), Transformation (Electronic Government), Engagement (Electronic Governance) and Contextualization (Policy-Driven Electronic Governance) stages; provides some evidence in support of this model drawing upon the study of the Digital Government literature published in Government Information Quarterly between 1992 and 2014; and presents a Digital Government Stage Analysis Framework to explain the evolution. As the article consolidates a representative body of the Digital Government literature, it could also be used for defining and integrating future research in the area….(More)”

Innovation Experiments: Researching Technical Advance, Knowledge Production and the Design of Supporting Institutions


Paper by Kevin J. Boudreau and Karim Lakhani: “This paper discusses several challenges in designing field experiments to better understand how organizational and institutional design shapes innovation outcomes and the production of knowledge. We proceed to describe the field experimental research program carried out by our Crowd Innovation Laboratory at Harvard University to clarify how we have attempted to address these research design challenges. This program has simultaneously solved important practical innovation problems for partner organizations, like NASA and Harvard Medical School, while contributing research advances, particularly in relation to innovation contests and tournaments….(More)”

Citizen Sensor Data Mining, Social Media Analytics and Applications


Paper by Amit P. Sheth: “With the rapid rise in the popularity of social media (1B+ Facebook users, 200M+ Twitter users), and near ubiquitous mobile access (4+ billion actively-used mobile phones), the sharing of observations and opinions has become commonplace (500M+ tweets a day). This has given us unprecedented access to the pulse of a populace and the ability to perform analytics on social data to support a variety of socially intelligent applications — be it for brand tracking and management, crisis coordination, organizing revolutions or promoting social development in underdeveloped and developing countries. I will review: 1) understanding and analysis of informal text, esp. microblogs (e.g., issues of cultural entity extraction and role of semantic/background knowledge enhanced techniques), and 2) how we built Twitris, a comprehensive social media analytics (social intelligence) platform. I will describe the analysis capabilities along three dimensions: spatio-temporal-thematic, people-content-network, and sentiment-emotion-intent. I will couple technical insights with identification of computational techniques and real-world examples using live demos of Twitris….(More)”
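The "spatio-temporal-thematic" dimension mentioned in the abstract can be illustrated with a minimal sketch: group posts by location, time window, and theme, then count mentions per cell. This is not Twitris's actual implementation — the sample posts and keyword lists below are invented for illustration, and a real system would use entity extraction and background knowledge as the talk describes:

```python
from collections import Counter
from datetime import datetime

# Hypothetical sample posts: (text, city, ISO timestamp)
posts = [
    ("flooding near the river, need help", "Austin", "2013-10-31T09:15"),
    ("power outage downtown", "Austin", "2013-10-31T09:40"),
    ("flooding on main street", "Austin", "2013-10-31T10:05"),
    ("great concert tonight!", "Dallas", "2013-10-31T20:00"),
]

# Illustrative theme keywords (a stand-in for real entity extraction).
THEMES = {"flood": ["flood", "flooding"], "outage": ["outage", "blackout"]}

def spatio_temporal_thematic(posts):
    """Count posts per (city, hour, theme) cell."""
    counts = Counter()
    for text, city, ts in posts:
        hour = datetime.fromisoformat(ts).strftime("%Y-%m-%d %H:00")
        for theme, keywords in THEMES.items():
            if any(k in text.lower() for k in keywords):
                counts[(city, hour, theme)] += 1
    return counts

cells = spatio_temporal_thematic(posts)
print(cells[("Austin", "2013-10-31 09:00", "flood")])  # 1
```

Aggregating into such cells is what lets a platform surface, say, a spike of flood-related posts in one city during one hour.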

The data or the hunch


Ian Leslie at Intelligent Life: “THE GIFT FOR talent-spotting is mysterious, highly prized and celebrated. We love to hear stories about the baseball coach who can spot the raw ability of an erratic young pitcher, the boss who sees potential in the guy in the post room, the director who picks a soloist out of the chorus line. Talent shows are a staple of the TV schedules. We like to believe that certain people—sometimes ourselves—can just sense when a person has something special. But there is another method of spotting talent which doesn’t rely on hunches. In place of intuition, it offers data and analysis. Rather than relying on the gut, it invites us to use our heads. It tends not to make for such romantic stories, but it is effective—which is why, despite our affection, the hunch is everywhere in retreat.

Strike one against the hunch was the publication of “Moneyball” by Michael Lewis (2003), which has attained the status of a management manual for many in sport and beyond. Lewis reported on a cash-strapped major-league baseball team, the Oakland A’s, who enjoyed unlikely success against bigger and better-funded competitors. Their secret sauce was data. Their general manager, Billy Beane, had realised that when it came to evaluating players, the gut instincts of experienced baseball scouts were unreliable, and he employed statisticians to identify talent overlooked by the big clubs….

These days, when a football club is interested in a player, it considers the average distance he runs in a game, the number of passes and tackles or blocks he makes, his shots on goal, the ratio of goals to shots, and many other details nobody thought to measure a generation ago. Sport is far from the only industry in which talent-spotting is becoming a matter of measurement. Prithwijit Mukerji, a postgraduate at the University of Westminster in London, recently published a paper on the way the music industry is being transformed by “the Moneyball approach”. By harvesting data from Facebook and Twitter and music services like Spotify and Shazam, executives can track what we are listening to in far more detail than ever before, and use it as a guide to what we will listen to next….

This is the day of the analyst. In education, academics are working their way towards a reliable method of evaluating teachers, by running data on test scores of pupils, controlled for factors such as prior achievement and raw ability. The methodology is imperfect, but research suggests that it’s not as bad as just watching someone teach. A 2011 study led by Michael Strong at the University of California identified a group of teachers who had raised student achievement and a group who had not. They showed videos of the teachers’ lessons to observers and asked them to guess which were in which group. The judges tended to agree on who was effective and ineffective, but, 60% of the time, they were wrong. They would have been better off flipping a coin. This applies even to experts: the Gates Foundation funded a vast study of lesson observations, and found that the judgments of trained inspectors were highly inconsistent.
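The value-added methodology described above — test scores "controlled for factors such as prior achievement" — amounts, in its simplest form, to regressing end-of-year scores on prior-year scores and ranking teachers by their classes' average residuals. A toy sketch with invented numbers (not the actual model or data from the studies cited):

```python
from collections import defaultdict

# Invented student records: (teacher, prior_score, current_score)
students = [
    ("A", 60, 70), ("A", 70, 82), ("A", 80, 90),
    ("B", 60, 62), ("B", 70, 71), ("B", 80, 83),
]

# Simple least-squares fit: current = a + b * prior
n = len(students)
xs = [s[1] for s in students]
ys = [s[2] for s in students]
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

# "Value added" = mean residual per teacher: how far above or below
# the predicted score that teacher's students land on average.
resid = defaultdict(list)
for teacher, prior, current in students:
    resid[teacher].append(current - (a + b * prior))
value_added = {t: sum(r) / len(r) for t, r in resid.items()}
print(value_added)  # teacher A comes out positive, teacher B negative
```

Real value-added models control for many more factors (demographics, class composition, measurement error), which is exactly why the article calls the methodology imperfect.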

THE LAST STRONGHOLD of the hunch is the interview. Most employers and some universities use interviews when deciding whom to hire or admit. In a conventional, unstructured interview, the candidate spends half an hour or so in a conversation directed at the whim of the interviewer. If you’re the one deciding, this is a reassuring practice: you feel as if you get a richer impression of the person than from the bare facts on their résumé, and that this enables you to make a better decision. The first theory may be true; the second is not.

Decades of scientific evidence suggest that the interview is close to useless as a tool for predicting how someone will do a job. Study after study has found that organisations make better decisions when they go by objective data, like the candidate’s qualifications, track record and performance in tests. “The assumption is, ‘if I meet them, I’ll know’,” says Jason Dana, of Yale School of Management, one of many scholars who have looked into the interview’s effectiveness. “People are wildly over-confident in their ability to do this, from a short meeting.” When employers adopt a holistic approach, combining the data with hunches formed in interviews, they make worse decisions than they do going on facts alone….” (More)

Crowdsourcing Solutions for Disaster Response: Examples and Lessons for the US Government


Paper by David Becker and Samuel Bendett in Procedia Engineering: “Crowdsourcing has become a quick and efficient way to solve a wide variety of problems – technical solutions, social and economic actions, fundraising and troubleshooting of numerous issues that affect both the private and the public sectors. The US government is now actively using crowdsourcing to solve complex problems that previously had to be handled by a limited circle of professionals. This paper outlines several examples of how a Department of Defense project headquartered at the National Defense University is using crowdsourcing for solutions to disaster response problems….(More)”


Democratising the Data Revolution


Jonathan Gray at Open Knowledge: “What will the “data revolution” do? What will it be about? What will it count? What kinds of risks and harms might it bring? Whom and what will it serve? And who will get to decide?

Today we are launching a new discussion paper on “Democratising the Data Revolution”, which is intended to advance thinking and action around civil society engagement with the data revolution. It looks beyond the disclosure of existing information, towards more ambitious and substantive forms of democratic engagement with data infrastructures.

It concludes with a series of questions about what practical steps institutions and civil society organisations might take to change what is measured and how, and how these measurements are put to work.

You can download the full PDF report here, or continue to read on in this blog post.

What Counts?

How might civil society actors shape the data revolution? In particular, how might they go beyond the question of what data is disclosed towards looking at what is measured in the first place? To kickstart discussion around this topic, we will look at three kinds of intervention: changing existing forms of measurement, advocating new forms of measurement and undertaking new forms of measurement.

Changing Existing Forms of Measurement

Rather than just focusing on the transparency, disclosure and openness of public information, civil society groups can argue for changing what is measured with existing data infrastructures. One example of this is recent campaigning around company ownership in the UK. Advocacy groups wanted to unpick networks of corporate ownership and control in order to support their campaigning and investigations around tax avoidance, tax evasion and illicit financial flows.

While the UK company register recorded information about “nominal ownership”, it did not include information about so-called “beneficial ownership”, or who ultimately benefits from the ownership and control of companies. Campaigners undertook an extensive programme of activities to advocate for changes and extensions to existing data infrastructures – including via legislation, software systems, and administrative protocols.

Advocating New Forms of Measurement

As well as changing or recalibrating existing forms of measurement, campaigners and civil society organisations can make the case for the measurement of things which were not previously measured. For example, over the past several decades social and political campaigning has resulted in new indicators about many different issues – such as gender inequality, health, work, disability, pollution or education. In such cases activists aimed to establish a given indicator as important and relevant for public institutions, decision makers, and broader publics – in order to, for example, inform policy development or resource allocation.

Undertaking New Forms of Measurement

Historically, many civil society organisations and advocacy groups have collected their own data to make the case for action on issues that they work on – from human rights abuses to endangered species….(More)”

Mining citizen emotions to estimate the urgency of urban issues


Christian Masdeval and Adriano Veloso in Information Systems: “Crowdsourcing technology offers exciting possibilities for local governments. Specifically, citizens are increasingly taking part in reporting and discussing issues related to their neighborhood and problems they encounter on a daily basis, such as overflowing trash bins, broken footpaths and lifts, illegal graffiti, and potholes. Pervasive citizen participation enables local governments to respond more efficiently to these urban issues. This interaction between citizens and municipalities is largely promoted by civic engagement platforms, such as SeeClickFix, FixMyStreet, CitySourced, and OpenIDEO, which allow citizens to report urban issues by entering free text describing what needs to be done, fixed or changed. In order to develop appropriate action plans and priorities, government officials need to figure out how urgent the reported issues are. In this paper we propose to estimate the urgency of urban issues by mining different emotions that are implicit in the text describing the issue. More specifically, a reported issue is first categorized according to the emotions expressed in it, and then the corresponding emotion scores are combined in order to produce a final urgency level for the reported issue. Our experiments use the SeeClickFix hackathon data and diverse emotion classification algorithms. They indicate that (i) emotions can be categorized efficiently with supervised learning algorithms, and (ii) the use of citizen emotions leads to accurate urgency estimates. Further, using additional features such as the type of issue or its author leads to no further accuracy gains….(More)”
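The two-step pipeline the abstract describes — score the emotions in a report's text, then combine those scores into an urgency level — can be sketched in miniature. The keyword lexicon, weights, and sample report below are hypothetical stand-ins for the paper's supervised emotion classifiers:

```python
# Toy emotion -> urgency pipeline. Keyword matching stands in for
# the supervised classifiers used in the actual paper.
EMOTION_LEXICON = {
    "anger":   ["outrageous", "unacceptable", "furious", "disgrace"],
    "fear":    ["dangerous", "unsafe", "hazard", "injury"],
    "sadness": ["sad", "unfortunate", "shame"],
}

# Hypothetical urgency weights per emotion (fear weighted highest).
WEIGHTS = {"anger": 0.8, "fear": 1.0, "sadness": 0.4}

def emotion_scores(text):
    """Fraction of each emotion's keywords present in the report."""
    words = text.lower()
    return {
        emotion: sum(k in words for k in keywords) / len(keywords)
        for emotion, keywords in EMOTION_LEXICON.items()
    }

def urgency(text):
    """Combine per-emotion scores into a single urgency estimate."""
    scores = emotion_scores(text)
    return sum(WEIGHTS[e] * s for e, s in scores.items())

report = "Broken footpath is dangerous, someone will get an injury. Unacceptable!"
print(round(urgency(report), 3))  # 0.7
```

A municipality could then sort open reports by this score to triage them; the paper's finding is that emotion-derived scores like this track urgency well even without extra features such as issue type or author.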

Using social media in hotel crisis management: the case of bed bugs


“Social media has helped to bridge the communication gap between customers and hotels. Bed bug infestations are a growing health crisis and have attracted increasing attention on social media sites. Without managing this crisis effectively, bed bug infestation can cause economic loss and reputational damage to hotel properties, ranging from negative comments and complaints to possible lawsuits. Thus, it is essential for hoteliers to understand the importance of social media in crisis communication, and to incorporate social media in hotels’ crisis management plans.

This study serves as one of the first attempts in the hospitality field to offer discussions and recommendations on how hotels can manage the bed bug crisis and other crises of this kind by incorporating social media into their crisis management practices….(More)”