Book edited by Btihaj Ajana: “…provides an empirical and philosophical investigation of self-tracking practices. In recent years, there has been an explosion of apps and devices that enable the capture and monitoring of data on everyday activities, behaviours and habits. Encouraged by movements such as the Quantified Self, a growing number of people are embracing this culture of quantification and tracking in the spirit of improving their health and wellbeing.
The aim of this book is to enhance understanding of this fast-growing trend, bringing together scholars who are working at the forefront of the critical study of self-tracking practices. Each chapter provides a different conceptual lens through which one can examine these practices, while grounding the discussion in relevant empirical examples.
From phenomenology to discourse analysis, from questions of identity, privacy and agency to issues of surveillance and tracking at the workplace, this edited collection takes on a wide, and yet focused, approach to the timely topic of self-tracking. It constitutes a useful companion for scholars, students and everyday users interested in the Quantified Self phenomenon…(More)”.
Citizens Coproduction, Service Self-Provision and the State 2.0
Citizens’ engagement and citizens’ participation are rapidly becoming catch-all concepts: buzzwords that recur continually in public policy discourse, not least because of the widespread diffusion and use of social media, which are claimed to have the potential to increase citizens’ participation in public sector processes, including policy development and policy implementation.
Taking co-production as the lens through which to look at citizens’ participation in civic life, the paper shows how, when supported by a real redistribution of power between government and citizens, such participation can have a transformational impact on the very nature of government, up to the so-called ‘Do It Yourself government’ and ‘user-generated state’. Based on a conceptual research approach and with reference to the relevant literature, the paper discusses what such a transformation could amount to and what role ICTs (social media) can play in government transformation processes….(More)”.
Feasibility Study of Using Crowdsourcing to Identify Critical Affected Areas for Rapid Damage Assessment: Hurricane Matthew Case Study
Paper by Faxi Yuan and Rui Liu in the International Journal of Disaster Risk Reduction: “…rapid damage assessment plays a critical role in crisis management. Collecting timely information for rapid damage assessment is particularly challenging during natural disasters. Remote sensing technologies have been used for data collection during disasters; however, due to the large areas affected by major disasters such as Hurricane Matthew, specific data, such as location information, cannot be collected in time.
Social media can serve as a crowdsourcing platform for citizens’ communication and information sharing during natural disasters, providing timely data for identifying affected areas to support rapid damage assessment. Nevertheless, there is very limited existing research on the utility of social media data in damage assessment. Even though the relationship between social media activities and damages has received some investigation, the use of damage-related social media data to explore that relationship remains unexplored.
This paper, for the first time, establishes an index dictionary through semantic analysis for identifying damage-related tweets posted during Hurricane Matthew in Florida. Meanwhile, insurance claim data published by the Florida Office of Insurance Regulation is used as a proxy for real hurricane damage in Florida. The study performs a correlation analysis and a comparative analysis of the geographic distribution of social media data and damage data at the county level in Florida. We find that employing social media data to identify critical affected areas at the county level during disasters is viable, and that damage data has a closer relationship with damage-related tweets than with disaster-related tweets….(More)”.
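The county-level comparison the abstract describes can be sketched in a few lines. The following is a minimal illustration only: the county names and all counts are invented for the example and are not taken from the study or from any real data.

```python
# Minimal sketch of a county-level correlation analysis.
# All counts below are hypothetical, invented purely for illustration.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-county counts of damage-related tweets and insurance claims
damage_tweets = {"Brevard": 120, "Duval": 310, "Flagler": 90,
                 "St. Johns": 250, "Volusia": 190}
claims = {"Brevard": 1500, "Duval": 4100, "Flagler": 1200,
          "St. Johns": 3300, "Volusia": 2600}

counties = sorted(damage_tweets)
r = pearson([damage_tweets[c] for c in counties],
            [claims[c] for c in counties])
print(f"Pearson r across counties: {r:.3f}")
```

A strongly positive coefficient on such aligned county counts is the kind of signal the paper reports between damage-related tweets and claims; in practice the tweet counts would first have to be filtered with the index dictionary and geolocated to counties.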
Dawn of the techlash
Rachel Botsman at the Guardian: “…Once seen as saviours of democracy, those titans are now just as likely to be viewed as threats to truth or, at the very least, impassive billionaires falling down on the job of monitoring their own backyards.
It wasn’t always this way. Remember the early catchy slogans that emerged from those ping-pong-tabled tech temples in Silicon Valley? “A place for friends”, “Don’t be evil” or “You can make money without being evil” (rather poignant, given what was to come). Users were enchanted by the sudden, handheld power of a smartphone to voice anything, access anything; grassroots activist movements revelled in these new tools for spreading their cause. The idealism of social media – democracy, friction-free communication, one-button socialising – proved infectious.
So how did that unbridled enthusiasm for all things digital morph into a critical erosion of trust in technology, particularly in politics? Was 2017 the year of reckoning, when technology suddenly crossed to the dark side, or had it been heading that way for some time? It might be useful to recall how social media first discovered its political muscle….
Technology is only the means. We also need to ask why our political ideologies have become so polarised, and take a hard look at our own behaviour, as well as that of the politicians themselves and the partisan media outlets who use these platforms, with their vast reach, to sow the seeds of distrust. Why are we so easily duped? Are we unwilling or unable to discern what’s true and what isn’t, or to look for the boundaries between opinion, fact and misinformation? And what part are our own prejudices playing?
Luciano Floridi, of the Digital Ethics Lab at Oxford University, points out that technology alone can’t save us from ourselves. “The potential of technology to be a powerful positive force for democracy is huge and is still there. The problems arise when we ignore how technology can accentuate or highlight less attractive sides of human nature,” he says. “Prejudice. Jealousy. Intolerance of different views. Our tendency to play zero sum games. We against them. Saying technology is a threat to democracy is like saying food is bad for you because it causes obesity.”
It’s not enough to blame the messenger. Social media merely amplifies human intent – both good and bad. We need to be honest about our own, age-old appetite for ugly gossip and spreading half-baked information, about our own blindspots.
Is there a solution to it all? Plenty of smart people are working on technical fixes, if for no other reason than the tech companies know it’s in their own best interests to stem the haemorrhaging of trust. Whether they’ll go far enough remains to be seen.
We sometimes forget how uncharted this new digital world remains – it’s a work in progress. We forget that social media, for all its flaws, still brings people together, gives a voice to the voiceless, opens vast wells of information, exposes wrongdoing, sparks activism, allows us to meet up with unexpected strangers. The list goes on. It’s inevitable that there will be falls along the way, deviousness we didn’t foresee. Perhaps the present danger is that in our rush to condemn the corruption of digital technologies, we will unfairly condemn the technologies themselves….(More).
Managing Democracy in the Digital Age
Edited volume: “In light of the increased utilization of information technologies, such as social media and the ‘Internet of Things,’ this book investigates how this digital transformation process creates new challenges and opportunities for political participation, political election campaigns and political regulation of the Internet. Within the context of Western democracies and China, the contributors analyze these challenges and opportunities from three perspectives: the regulatory state, the political use of social media, and the lens of the public sphere.
The first part of the book discusses key challenges for Internet regulation, such as data protection and censorship, while the second addresses the use of social media in political communication and political elections. In turn, the third and last part highlights various opportunities offered by digital media for online civic engagement and protest in the public sphere. Drawing on different academic fields, including political science, communication science, and journalism studies, the contributors raise a number of innovative research questions and provide fascinating theoretical and empirical insights into the topic of digital transformation….(More)”.
A Really Bad Blockchain Idea: Digital Identity Cards for Rohingya Refugees
Wayan Vota at ICTworks: “The Rohingya Project claims to be a grassroots initiative that will empower Rohingya refugees with a blockchain-leveraged financial ecosystem tied to digital identity cards….
What Could Possibly Go Wrong?
Concerns about Rohingya data collection are not new, so Linda Raftree‘s Facebook post about blockchain for biometrics started a spirited discussion on this latest escalation of techno-utopianism. Several people put forth great points about the Rohingya Project’s potential failings. For me, four key questions emerged from the discussion that we should all be debating:
1. Who Determines Ethnicity?
Ethnicity isn’t a scientific way to categorize humans. Ethnic groups are based on human constructs such as common ancestry, language, society, culture, or nationality. Who are the Rohingya Project to be the ones determining who is Rohingya or not? And what is this rigorous assessment they have that will do what science cannot?
Might it be better not to perpetuate the very divisions that cause these issues? Or at the very least, let people self-determine their own ethnicity.
2. Why Digitally Identify Refugees?
Let’s say that we could group a people based on objective metrics. Should we? Especially if that group is persecuted where it currently lives and in many of its surrounding countries? Wouldn’t making a list of who is persecuted be a handy reference for those who seek to persecute more?
Instead, shouldn’t we focus on changing the mindset of the persecutors and stop the persecution?
3. Why Blockchain for Biometrics?
How could linking a highly persecuted people’s biometric information, such as fingerprints, iris scans, and photographs, to a public, universal, and immutable distributed ledger be a good thing?
Might it be highly irresponsible to digitize all that information? Couldn’t that data be used by nefarious actors to perpetuate new and worse exploitation of the Rohingya? India has already lost Aadhaar data and Equifax lost Americans’ data. How will the small, lightly funded Rohingya Project do better?
Could it be possible that old-fashioned paper forms are a better solution than digital identity cards? Maybe laminate them for greater durability, but paper identity cards can be hidden, even destroyed if needed, to conceal information that could be used against the owner.
4. Why Experiment on the Powerless?
Rohingya refugees already suffer from massive power imbalances, and now they’ll be asked to give up their digital privacy, and use experimental technology, as part of an NGO’s experiment, in order to get needed services.
It’s not like they’ll have the agency to say no. They are homeless, often penniless refugees who will probably have no realistic way to opt out of digital identity cards, even if they don’t want to be experimented on while they flee persecution….(More)”
Artificial Intelligence and Foreign Policy
Paper by Ben Scott, Stefan Heumann and Philippe Lorenz: “The plot-lines of the development of Artificial Intelligence (AI) are debated and contested. But it is safe to predict that it will become one of the central technologies of the 21st century. It is fashionable these days to speak of data as the new oil. But if we want to “refine” the vast quantities of data we are collecting today and make sense of them, we will need potent AI. The consequences of the AI revolution could not be more far-reaching. Value chains will be turned upside down, labor markets will be disrupted, and economic power will shift to those who control this new technology. And as AI is deeply embedded in the connectivity of the Internet, the challenge it poses is global in nature. It is therefore striking that AI is almost absent from the foreign policy agenda.
This paper seeks to provide a foundation for planning a foreign policy strategy that responds effectively to the emerging power of AI in international affairs. The developments in AI are so dynamic, and the implications so wide-ranging, that ministries need to begin engaging immediately. That means starting with the assets and resources at hand while planning for more significant changes in the future. Many of the tools of traditional diplomacy can be adapted to this new field. And while the existing toolkit can get us started, this pragmatic approach does not preclude thinking about the more drastic adaptations that technological change might require of our foreign policy institutions and instruments.
The paper approaches this challenge by drawing on the existing foreign policy toolbox and reflecting on past lessons from adapting that toolbox to the Internet revolution. It goes on to suggest how these tools could be applied to the international challenges that the AI revolution will bring about. The toolbox includes policy making, public diplomacy, bilateral and multilateral engagement, action through international and treaty organizations, convenings and partnerships, grant-making, and information-gathering and analysis. The analysis of the international challenges of the AI transformation is divided into three topical areas, and each of the three sections includes concrete suggestions for how instruments from the toolbox could be applied to address the challenges AI will bring about in international affairs….(More)“.
The Entrepreneurial Impact of Open Data
In the context of our collaboration with the GovLab-chaired MacArthur Foundation Research Network on Opening Governance, we sought to dig deeper into the broader impact of open data on entrepreneurship. To do so we combined the OD500 with databases on startup activity from Crunchbase and AngelList. This allowed us to look at the trajectories of open data companies from their founding to the present day. In particular, we compared companies that use open data to similar companies with the same founding year, location and industry to see how well open data companies fare at securing funding along with other indicators of success.
We first looked at the extent to which open data companies have access to investor capital, wondering if open data companies have difficulty gaining funding because their use of public data may be perceived as insufficiently innovative or proprietary. If this is the case, the economic impact of open data may be limited. Instead, we found that open data companies obtain more investors than similar companies that do not use open data. Open data companies have, on average, 1.74 more investors than similar companies founded at the same time. Interestingly, investors in open data companies are not a specific group who specialize in open data startups. Instead, a wide variety of investors put money into these companies. Of the investors who funded open data companies, 59 percent had only invested in one open data company, while 81 percent had invested in one or two. Open data companies appear to be appealing to a wide range of investors….(More)”.
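The matching logic described above can be sketched roughly as follows. Every company record below is fictional and the numbers are invented; the real analysis drew on the OD500, Crunchbase and AngelList.

```python
# Illustrative sketch of a matched comparison: each open-data company is
# compared to peers sharing its founding year, location and industry.
# All records below are fictional, invented purely for illustration.

companies = [
    # (name, uses_open_data, founded, location, industry, n_investors)
    ("MapWell",   True,  2012, "NYC", "geo",     6),
    ("GeoBase",   False, 2012, "NYC", "geo",     4),
    ("HealthSig", True,  2013, "SF",  "health",  5),
    ("MedTrend",  False, 2013, "SF",  "health",  3),
    ("CivicPay",  True,  2012, "NYC", "fintech", 4),
    ("LedgerCo",  False, 2012, "NYC", "fintech", 3),
]

def matched_gap(rows):
    """Mean investor-count gap between each open-data company and the
    average of its matched (year, location, industry) controls."""
    gaps = []
    for name, open_data, year, loc, industry, investors in rows:
        if not open_data:
            continue
        controls = [r[5] for r in rows
                    if not r[1] and r[2:5] == (year, loc, industry)]
        if controls:
            gaps.append(investors - sum(controls) / len(controls))
    return sum(gaps) / len(gaps)

print(f"Average extra investors for open data companies: "
      f"{matched_gap(companies):.2f}")
```

On these toy records the gap works out to about 1.67 extra investors; the 1.74 figure reported above comes from the actual matched data, not from this sketch.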
Big Data, Thick Mediation, and Representational Opacity
Rafael Alvarado and Paul Humphreys in the New Literary History: “In 2008, the phrase “big data” shifted in meaning. It turned from referring to a problem and an opportunity for organizations with very large data sets to being the talisman for an emerging economic and cultural order that is both celebrated and feared for its deep and pervasive effects on the human condition. Economically, the phrase now denotes a data-mediated form of commerce exemplified by Google. Culturally, the phrase stands for a new form of knowledge and knowledge production. In this essay, we explore the connection between these two implicit meanings, considered as dimensions of a real social and scientific transformation with observable properties. We develop three central concepts: the datasphere, thick mediation, and representational opacity. These concepts provide a theoretical framework for making sense of how the economic and cultural dimensions interact to produce a set of effects, problems, and opportunities, not all of which have been addressed by big data’s critics and advocates….(More)”.
Is your software racist?
Li Zhou at Politico: “Late last year, a St. Louis tech executive named Emre Şarbak noticed something strange about Google Translate. He was translating phrases from Turkish — a language that uses a single gender-neutral pronoun “o” instead of “he” or “she.” But when he asked Google’s tool to turn the sentences into English, they seemed to read like a children’s book from the 1950s. The ungendered Turkish sentence “o is a nurse” would become “she is a nurse,” while “o is a doctor” would become “he is a doctor.”
The website Quartz went on to compose a sort-of poem highlighting some of these phrases; Google’s translation program decided that soldiers, doctors and entrepreneurs were men, while teachers and nurses were women. Overwhelmingly, the professions were male. Finnish and Chinese translations had similar problems of their own, Quartz noted.
What was going on? Google’s Translate tool “learns” language from an existing corpus of writing, and the writing often includes cultural patterns regarding how men and women are described. Because the model is trained on data that already has biases of its own, the results that it spits out serve only to further replicate and even amplify them.
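The mechanism can be shown with a toy model: a system that resolves a gender-neutral pronoun by picking whichever gendered pronoun co-occurs more often with a profession in its training text will faithfully reproduce any skew in that text. The counts below are invented for illustration; this is not Google’s actual model or data.

```python
# Toy illustration of corpus-driven pronoun bias.
# The co-occurrence counts are hypothetical, invented for the example.

corpus_counts = {
    # (profession, pronoun) -> occurrences in the training text
    ("doctor", "he"): 90, ("doctor", "she"): 10,
    ("nurse",  "he"): 5,  ("nurse",  "she"): 95,
}

def translate_neutral(profession):
    """Resolve a gender-neutral pronoun by majority vote over the corpus:
    whatever skew the text has, the output reproduces."""
    he = corpus_counts.get((profession, "he"), 0)
    she = corpus_counts.get((profession, "she"), 0)
    pronoun = "he" if he >= she else "she"
    return f"{pronoun} is a {profession}"

print(translate_neutral("doctor"))
print(translate_neutral("nurse"))
```

Note that the skew need not be 90/10: even a 51/49 imbalance in the corpus yields a 100 percent gendered output, which is how statistical models can amplify, not merely mirror, a bias.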
It might seem strange that a seemingly objective piece of software would yield gender-biased results, but the problem is an increasing concern in the technology world. The term is “algorithmic bias”: the idea that artificially intelligent software, the stuff we count on to do everything from powering our Netflix recommendations to determining whether we qualify for a loan, often turns out to perpetuate social bias.
Voice-based assistants, like Amazon’s Alexa, have struggled to recognize different accents. A Microsoft chatbot on Twitter started spewing racist posts after learning from other users on the platform. In a particularly embarrassing example in 2015, a black computer programmer found that Google’s photo-recognition tool labeled him and a friend as “gorillas.”
Sometimes the results of hidden computer bias are insulting, other times merely annoying. And sometimes the effects are potentially life-changing….(More)”.