Citizens Coproduction, Service Self-Provision and the State 2.0


Chapter by Walter Castelnovo in Network, Smart and Open: “Citizens’ engagement and citizens’ participation are rapidly becoming catch-all concepts, buzzwords that recur continuously in public policy discourse, partly due to the widespread diffusion and use of social media, which are claimed to have the potential to increase citizens’ participation in public sector processes, including policy development and policy implementation.

By assuming the concept of co-production as the lens through which to look at citizens’ participation in civic life, the paper shows how, when supported by a real redistribution of power between government and citizens, citizens’ participation can have a transformational impact on the very nature of government, up to the so-called ‘Do It Yourself government’ and ‘user-generated state’. Based on a conceptual research approach and with reference to the relevant literature, the paper discusses what such a transformation could amount to and what role ICTs (social media) can play in government transformation processes….(More)”.

Feasibility Study of Using Crowdsourcing to Identify Critical Affected Areas for Rapid Damage Assessment: Hurricane Matthew Case Study


Paper by Faxi Yuan and Rui Liu at the International Journal of Disaster Risk Reduction: “…rapid damage assessment plays a critical role in crisis management. Collecting timely information for rapid damage assessment is particularly challenging during natural disasters. Remote sensing technologies have been used for data collection during disasters. However, due to the large areas affected by major disasters such as Hurricane Matthew, specific data, such as location information, cannot be collected in time.

Social media can serve as a crowdsourcing platform for citizens’ communication and information sharing during natural disasters, providing timely data for identifying affected areas and supporting rapid damage assessment. Nevertheless, existing research on the utility of social media data in damage assessment is very limited. Although some studies have investigated the relationship between social media activity and damage, the use of damage-related social media data to explore this relationship remains unexplored.

This paper, for the first time, establishes an index dictionary through semantic analysis for identifying damage-related tweets posted during Hurricane Matthew in Florida. Meanwhile, insurance claim data published by the Florida Office of Insurance Regulation is used as a proxy for real hurricane damage in Florida. The study performs a correlation analysis and a comparative analysis of the geographic distribution of social media data and damage data at the county level in Florida. We find that employing social media data to identify critical affected areas at the county level during disasters is viable, and that damage data has a closer relationship with damage-related tweets than with disaster-related tweets….(More)”.
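The county-level comparison described above lends itself to a simple correlation check. The following is a minimal sketch, not the authors’ actual pipeline: it assumes hypothetical CSV files of geotagged damage-related tweets and per-county insurance claims, with made-up column names, and uses Pearson and Spearman correlations as stand-ins for whatever measure the paper employs.

# Hypothetical sketch: correlate county-level counts of damage-related tweets
# with county-level insurance claim counts. File and column names are assumed.
import pandas as pd
from scipy.stats import pearsonr, spearmanr

tweets = pd.read_csv("damage_related_tweets.csv")   # one row per geotagged tweet, with a 'county' column (assumed)
claims = pd.read_csv("insurance_claims.csv")        # one row per county, with 'county' and 'claims' columns (assumed)

# Count damage-related tweets per county and align them with the claim counts.
tweet_counts = tweets.groupby("county").size().rename("tweet_count")
merged = claims.set_index("county").join(tweet_counts).fillna(0)

r, p = pearsonr(merged["claims"], merged["tweet_count"])
rho, p_rho = spearmanr(merged["claims"], merged["tweet_count"])
print(f"Pearson r = {r:.2f} (p = {p:.3f}); Spearman rho = {rho:.2f} (p = {p_rho:.3f})")

A strong positive correlation in such a comparison would support the paper’s finding that damage-related tweets track real damage more closely than generic disaster-related tweets do.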

 

Dawn of the techlash


Rachel Botsman at the Guardian: “…Once seen as saviours of democracy, those titans are now just as likely to be viewed as threats to truth or, at the very least, impassive billionaires falling down on the job of monitoring their own backyards.

It wasn’t always this way. Remember the early catchy slogans that emerged from those ping-pong-tabled tech temples in Silicon Valley? “A place for friends”, “Don’t be evil” or “You can make money without being evil” (rather poignant, given what was to come). Users were enchanted by the sudden, handheld power of a smartphone to voice anything, access anything; grassroots activist movements revelled in these new tools for spreading their cause. The idealism of social media – democracy, friction-free communication, one-button socialising – proved infectious.

So how did that unbridled enthusiasm for all things digital morph into a critical erosion of trust in technology, particularly in politics? Was 2017 the year of reckoning, when technology suddenly crossed to the dark side, or had it been heading that way for some time? It might be useful to recall how social media first discovered its political muscle….

Technology is only the means. We also need to ask why our political ideologies have become so polarised, and take a hard look at our own behaviour, as well as that of the politicians themselves and the partisan media outlets who use these platforms, with their vast reach, to sow the seeds of distrust. Why are we so easily duped? Are we unwilling or unable to discern what’s true and what isn’t, or to look for the boundaries between opinion, fact and misinformation? And what part are our own prejudices playing?

Luciano Floridi, of the Digital Ethics Lab at Oxford University, points out that technology alone can’t save us from ourselves. “The potential of technology to be a powerful positive force for democracy is huge and is still there. The problems arise when we ignore how technology can accentuate or highlight less attractive sides of human nature,” he says. “Prejudice. Jealousy. Intolerance of different views. Our tendency to play zero sum games. We against them. Saying technology is a threat to democracy is like saying food is bad for you because it causes obesity.”

It’s not enough to blame the messenger. Social media merely amplifies human intent – both good and bad. We need to be honest about our own, age-old appetite for ugly gossip and spreading half-baked information, about our own blind spots.

Is there a solution to it all? Plenty of smart people are working on technical fixes, if for no other reason than the tech companies know it’s in their own best interests to stem the haemorrhaging of trust. Whether they’ll go far enough remains to be seen.

We sometimes forget how uncharted this new digital world remains – it’s a work in progress. We forget that social media, for all its flaws, still brings people together, gives a voice to the voiceless, opens vast wells of information, exposes wrongdoing, sparks activism, allows us to meet up with unexpected strangers. The list goes on. It’s inevitable that there will be falls along the way, deviousness we didn’t foresee. Perhaps the present danger is that in our rush to condemn the corruption of digital technologies, we will unfairly condemn the technologies themselves….(More)”.

Managing Democracy in the Digital Age


Book edited by Julia Schwanholz, Todd Graham and Peter-Tobias Stoll: “In light of the increased utilization of information technologies, such as social media and the ‘Internet of Things,’ this book investigates how this digital transformation process creates new challenges and opportunities for political participation, political election campaigns and political regulation of the Internet. Within the context of Western democracies and China, the contributors analyze these challenges and opportunities from three perspectives: the regulatory state, the political use of social media, and through the lens of the public sphere.

The first part of the book discusses key challenges for Internet regulation, such as data protection and censorship, while the second addresses the use of social media in political communication and political elections. In turn, the third and last part highlights various opportunities offered by digital media for online civic engagement and protest in the public sphere. Drawing on different academic fields, including political science, communication science, and journalism studies, the contributors raise a number of innovative research questions and provide fascinating theoretical and empirical insights into the topic of digital transformation….(More)”.

A Really Bad Blockchain Idea: Digital Identity Cards for Rohingya Refugees


Wayan Vota at ICTworks: “The Rohingya Project claims to be a grassroots initiative that will empower Rohingya refugees with a blockchain-leveraged financial ecosystem tied to digital identity cards….

What Could Possibly Go Wrong?

Concerns about Rohingya data collection are not new, so Linda Raftree‘s Facebook post about blockchain for biometrics started a spirited discussion on this escalation of techno-utopia. Several people put forth great points about the Rohingya Project’s potential failings. For me, there were four key questions originating in the discussion that we should all be debating:

1. Who Determines Ethnicity?

Ethnicity isn’t a scientific way to categorize humans. Ethnic groups are based on human constructs such as common ancestry, language, society, culture, or nationality. Who are the Rohingya Project to be the ones determining who is Rohingya or not? And what is this rigorous assessment they have that will do what science cannot?

Might it be better not to perpetuate the very divisions that cause these issues? Or at the very least, let people self-determine their own ethnicity.

2. Why Digitally Identify Refugees?

Let’s say that we could group a people based on objective metrics. Should we? Especially if that group is persecuted where it currently lives and in many of its surrounding countries? Wouldn’t making a list of who is persecuted be a handy reference for those who seek to persecute more?

Instead, shouldn’t we focus on changing the mindset of the persecutors and stop the persecution?

3. Why Blockchain for Biometrics?

How could linking a highly persecuted people’s biometric information, such as fingerprints, iris scans, and photographs, to a public, universal, and immutable distributed ledger be a good thing?

Might it be highly irresponsible to digitize all that information? Couldn’t that data be used by nefarious actors to perpetuate new and worse exploitation of Rohingya? India has already lost Aadhaar data and Equifax lost Americans’ data. How will the small, lightly funded Rohingya Project do better?

Could it be possible that old-fashioned paper forms are a better solution than digital identity cards? Maybe laminate them for greater durability, but paper identity cards can be hidden, even destroyed if needed, to conceal information that could be used against the owner.

4. Why Experiment on the Powerless?

Rohingya refugees already suffer from massive power imbalances, and now they’ll be asked to give up their digital privacy and use experimental technology, as part of an NGO’s experiment, in order to get needed services.

It’s not as if they’ll have the agency to say no. They are homeless, often penniless refugees who will probably have no realistic way to opt out of digital identity cards, even if they don’t want to be experimented on while they flee persecution….(More)”

Is your software racist?


Li Zhou at Politico: “Late last year, a St. Louis tech executive named Emre Şarbak noticed something strange about Google Translate. He was translating phrases from Turkish — a language that uses a single gender-neutral pronoun “o” instead of “he” or “she.” But when he asked Google’s tool to turn the sentences into English, they seemed to read like a children’s book out of the 1950s. The ungendered Turkish sentence “o is a nurse” would become “she is a nurse,” while “o is a doctor” would become “he is a doctor.”

The website Quartz went on to compose a sort-of poem highlighting some of these phrases; Google’s translation program decided that soldiers, doctors and entrepreneurs were men, while teachers and nurses were women. Overwhelmingly, the professions were male. Finnish and Chinese translations had similar problems of their own, Quartz noted.

What was going on? Google’s Translate tool “learns” language from an existing corpus of writing, and the writing often includes cultural patterns regarding how men and women are described. Because the model is trained on data that already has biases of its own, the results that it spits out serve only to further replicate and even amplify them.
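What’s happening can be illustrated with a toy model (this is not Google’s actual system). If the training corpus pairs “doctor” with “he” more often than with “she”, a translator that simply picks the most frequent pronoun for a gender-neutral source word will reproduce, and harden, that skew. The corpus and function below are illustrative assumptions only.

# Toy illustration of corpus-driven gender bias, not a real translation system.
from collections import Counter

corpus = [
    "he is a doctor", "he is a doctor", "he is a doctor",
    "she is a doctor",
    "she is a nurse", "she is a nurse",
    "he is a nurse",
]

def translate_neutral(profession, corpus):
    """Resolve a gender-neutral pronoun by picking whichever gendered pronoun
    co-occurs most often with the profession in the training corpus."""
    counts = Counter(
        sentence.split()[0]                 # 'he' or 'she'
        for sentence in corpus
        if sentence.endswith(profession)
    )
    return f"{counts.most_common(1)[0][0]} is a {profession}"

print(translate_neutral("doctor", corpus))  # -> "he is a doctor"
print(translate_neutral("nurse", corpus))   # -> "she is a nurse"

A neural translation model learns these associations statistically rather than by counting, but the outcome is the same: the majority pattern in the data becomes the default output, which is why a skewed corpus yields skewed translations.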

It might seem strange that a seemingly objective piece of software would yield gender-biased results, but the problem is an increasing concern in the technology world. The term is “algorithmic bias” — the idea that artificially intelligent software, the stuff we count on to do everything from power our Netflix recommendations to determine our qualifications for a loan, often turns out to perpetuate social bias.

Voice-based assistants, like Amazon’s Alexa, have struggled to recognize different accents. A Microsoft chatbot on Twitter started spewing racist posts after learning from other users on the platform. In a particularly embarrassing example in 2015, a black computer programmer found that Google’s photo-recognition tool labeled him and a friend as “gorillas.”

Sometimes the results of hidden computer bias are insulting, other times merely annoying. And sometimes the effects are potentially life-changing….(More)”.

Our Hackable Political Future


Henry J. Farrell and Rick Perlstein at the New York Times: “….A program called Face2Face, developed at Stanford, films one person speaking, then manipulates that person’s image to resemble someone else’s. Throw in voice manipulation technology, and you can literally make anyone say anything — or at least seem to….

Another harrowing potential is the ability to trick the algorithms behind self-driving cars to not recognize traffic signs. Computer scientists have shown that nearly invisible changes to a stop sign can fool algorithms into thinking it says yield instead. Imagine if one of these cars contained a dissident challenging a dictator.
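Those stop-sign attacks rely on adversarial perturbations: tiny, carefully chosen pixel changes that barely alter what a human sees but flip a classifier’s prediction. The sketch below shows one common recipe, the fast gradient sign method (FGSM), assuming a generic PyTorch image classifier; the model, inputs and epsilon value are placeholders, not the setups used in the research the article refers to.

# Minimal FGSM sketch (assumed PyTorch classifier); illustrates how a nearly
# invisible perturbation can change a model's prediction.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, true_labels, epsilon=0.01):
    """Return adversarial copies of `images`, nudged to increase the model's loss."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), true_labels)
    loss.backward()
    # Each pixel shifts by at most epsilon, so the change is visually negligible,
    # yet it can be enough to flip the predicted class (e.g., stop -> yield).
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage with some pretrained sign classifier and a batch of images:
# adv = fgsm_perturb(sign_classifier, stop_sign_batch, labels)
# print(sign_classifier(adv).argmax(dim=1))  # may no longer predict "stop"

Published physical-world attacks on road signs use stickers and more elaborate optimisation than this single gradient step, but the underlying idea is the same: exploit the model’s gradients to find small input changes with outsized effects.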

In 2007, Barack Obama’s political opponents insisted that footage existed of Michelle Obama ranting against “whitey.” In the future, they may not have to worry about whether it actually existed. If someone called their bluff, they may simply be able to invent it, using data from stock photos and pre-existing footage.

The next step would be one we are already familiar with: the exploitation of the algorithms used by social media sites like Twitter and Facebook to spread stories virally to those most inclined to show interest in them, even if those stories are fake.

It might be impossible to stop the advance of this kind of technology. But the relevant algorithms here aren’t only the ones that run on computer hardware. They are also the ones that undergird our too easily hacked media system, where garbage acquires the perfumed scent of legitimacy with all too much ease. Editors, journalists and news producers can play a role here — for good or for bad.

Outlets like Fox News spread stories about the murder of Democratic staff members and F.B.I. conspiracies to frame the president. Traditional news organizations, fearing that they might be left behind in the new attention economy, struggle to maximize “engagement with content.”

This gives them a built-in incentive to spread informational viruses that enfeeble the very democratic institutions that allow a free media to thrive. Cable news shows consider it their professional duty to provide “balance” by giving partisan talking heads free rein to spout nonsense — or amplify the nonsense of our current president.

It already feels as though we are living in an alternative science-fiction universe where no one agrees on what is true. Just think how much worse it will be when fake news becomes fake video. Democracy assumes that its citizens share the same reality. We’re about to find out whether democracy can be preserved when this assumption no longer holds….(More)”.

Artificial intelligence and privacy


Report by the Norwegian Data Protection Authority (DPA): “…If people cannot trust that information about them is being handled properly, it may limit their willingness to share information – for example with their doctor, or on social media. If we find ourselves in a situation in which sections of the population refuse to share information because they feel that their personal integrity is being violated, we will be faced with major challenges to our freedom of speech and to people’s trust in the authorities.

A refusal to share personal information will also represent a considerable challenge with regard to the commercial use of such data in sectors such as the media, retail trade and finance services.

About the report

This report elaborates on the legal opinions and the technologies described in the 2014 report «Big Data – privacy principles under pressure». In this report we will provide greater technical detail in describing artificial intelligence (AI), while also taking a closer look at four relevant AI challenges associated with the data protection principles embodied in the GDPR:

  • Fairness and discrimination
  • Purpose limitation
  • Data minimisation
  • Transparency and the right to information

This represents a selection of the data protection concerns that, in our opinion, are most relevant to the use of AI today.

The target group for this report consists of people who work with, or who for other reasons are interested in, artificial intelligence. We hope that engineers, social scientists, lawyers and other specialists will find this report useful….(More) (Download Report)”.

The Qualified Self: Social Media and the Accounting of Everyday Life


Book by Lee Humphreys: “Social critiques argue that social media have made us narcissistic, that Facebook, Twitter, Instagram, and YouTube are all vehicles for me-promotion. In The Qualified Self, Lee Humphreys offers a different view. She shows that sharing the mundane details of our lives—what we ate for lunch, where we went on vacation, who dropped in for a visit—didn’t begin with mobile devices and social media. People have used media to catalog and share their lives for several centuries. Pocket diaries, photo albums, and baby books are the predigital precursors of today’s digital and mobile platforms for posting text and images. The ability to take selfies has not turned us into needy narcissists; it’s part of a longer story about how people account for everyday life.

Humphreys refers to diaries in which eighteenth-century daily life is documented with the brevity and precision of a tweet, and cites a nineteenth-century travel diary in which a young woman complains that her breakfast didn’t agree with her. Diaries, Humphreys explains, were often written to be shared with family and friends. Pocket diaries were as mobile as smartphones, allowing the diarist to record life in real time. Humphreys calls this chronicling, in both digital and nondigital forms, media accounting. The sense of self that emerges from media accounting is not the purely statistics-driven “quantified self,” but the more well-rounded qualified self. We come to understand ourselves in a new way through the representations of ourselves that we create to be consumed….(More)”.

Social activism: Engaging millennials in social causes


Michelle I. Seelig at First Monday: “Given that young adults consume and interact with digital technologies not only on a daily basis but extensively throughout the day, it stands to reason that they are more actively involved in advocating social change, particularly through social media. However, national surveys of civic engagement indicate that civic and community engagement drops off after high school and while millennials attend college. While past research has compiled evidence about young adults’ social media use and some social media behaviors, limited literature has investigated the audience’s perspective on social activism campaigns conducted through social media.

Research has also focused on the adoption of new technologies based on causal linkages between perceived ease of use and perceived usefulness, yet few studies have considered how these dynamics relate to millennials’ engagement with others using social media for social good. This project builds on past research to investigate the relationship between millennials’ online exposure to information about social causes and their motives to take part in virtual and face-to-face engagement.

Findings suggest that while digital media environments immerse participants in mediated experiences that merge the off-line and online worlds, and have a strong effect on a person’s motivation to do something, it remains unclear to what extent social media and social interactions influence millennials’ willingness to engage both online and in person. Even so, the results of this study indicate that millennials are open to using social media for social causes, and perhaps to increasing their off-line engagement too….(More)”.