Twitter and Tear Gas: The Power and Fragility of Networked Protest


Book by Zeynep Tufekci: “A firsthand account and incisive analysis of modern protest, revealing internet-fueled social movements’ greatest strengths and frequent challenges….
To understand a thwarted Turkish coup, an anti–Wall Street encampment, and a packed Tahrir Square, we must first comprehend the power and the weaknesses of using new technologies to mobilize large numbers of people. An incisive observer, writer, and participant in today’s social movements, Zeynep Tufekci explains in this accessible and compelling book the nuanced trajectories of modern protests—how they form, how they operate differently from past protests, and why they have difficulty persisting in their long-term quests for change.

Tufekci speaks from direct experience, combining on-the-ground interviews with insightful analysis. She describes how the internet helped the Zapatista uprisings in Mexico, the necessity of remote Twitter users to organize medical supplies during the Arab Spring, the refusal to use bullhorns in the Occupy Movement that started in New York, and the empowering effect of tear gas in Istanbul’s Gezi Park. These details from life inside social movements complete a moving investigation of authority, technology, and culture—and offer essential insights into the future of governance….(More)”

Using big data to understand consumer behaviour on ethical issues


Phani Kumar Chintakayala and C. William Young in the Journal of Consumer Ethics: “The Consumer Data Research Centre (CDRC) was established by the UK Economic and Social Research Council and launched its data services in 2015. The project is led by the University of Leeds and UCL, with partners at the Universities of Liverpool and Oxford. It is working with consumer-related organisations and businesses to open up their data resources to trusted researchers, enabling them to carry out important social and economic research….

Over the last few years there has been much talk about how so-called “big data” is the future and if you are not exploiting it, you are losing your competitive advantage. So what is there in the latest wave of enthusiasm on big data to help organisations, researchers and ethical consumers?…

Examples of the types of research being piloted using data from the food sector by CDRC include the consumption of milk and egg products. The results clearly indicate that not all sustainable products are considered the same by consumers, and consumption behaviour varies across sustainable product categories. A linked data analysis was carried out by combining sales data of organic milk and free range eggs from a retailer with over 300 stores across the UK, green and ethical attitude data from CDRC’s data partner, and socio-demographic and deprivation data from open sources. The analysis revealed that, in general, the consumers with deeper green and ethical attitudes are the most likely consumers of sustainable products. Deprivation has a negative effect on the consumption of sustainable products. Price, as expected, has a negative effect but the impact varies across products. Convenience stores have a significant negative effect on the consumption of sustainable products. The influences of socio-demographic characteristics such as gender, age, ethnicity etc. seem to vary by product categories….
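
The kind of linked analysis described above lends itself to a few lines of code. The sketch below is purely illustrative and assumes hypothetical file and column names (store-level sales joined to attitude and deprivation data by a shared area code); it is not the CDRC's actual pipeline.

```python
# Minimal sketch of a linked-data analysis in the spirit of the CDRC example.
# All file names and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

sales = pd.read_csv("store_sales.csv")        # store_id, area_code, organic_share, avg_price, store_format
attitudes = pd.read_csv("attitudes.csv")      # area_code, green_attitude_score
deprivation = pd.read_csv("deprivation.csv")  # area_code, imd_score

# Link the three sources on a shared small-area identifier
linked = (sales
          .merge(attitudes, on="area_code", how="inner")
          .merge(deprivation, on="area_code", how="inner"))

# Regress the share of sustainable products on attitudes, deprivation, price
# and store format, mirroring the kinds of effects reported in the study
model = smf.ols(
    "organic_share ~ green_attitude_score + imd_score + avg_price + C(store_format)",
    data=linked,
).fit()
print(model.summary())
```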

Big data can help organisations, researchers and ethical consumers understand the ethics around consumer behaviour and products. The opportunity to link different types of data is exciting but must be research-question-led to avoid digging for non-existent causal links. The methods and access to data are still a barrier, but open access is key to solving this. Big data will probably only help in filling in the details of our knowledge on ethical consumption and on products, but this can only help our decision making…(More)”.

The rise of the algorithm need not be bad news for humans


 at the Financial Times: “The science and technology committee of the House of Commons published the responses to its inquiry on “algorithms in decision-making” on April 26. They vary in length, detail and approach, but share one important feature — the belief that human intervention may be unavoidable, indeed welcome, when it comes to trusting algorithmic decisions….

In a society in which algorithms and other automated processes are increasingly apparent, the important question, addressed by the select committee, is the extent to which we can trust such brainless technologies, which are regularly taking decisions instead of us. Now that white-collar jobs are being replaced, we may all be at the mercy of algorithmic errors — an unfair attribution of responsibility, say, or some other Kafkaesque computer-generated disaster. The best protection against such misfires is to put human intelligence back into the equation.

Trust depends on delivery, transparency and accountability. You trust your doctor, for instance, if they do what they are supposed to do, if you can see what they are doing and if they take responsibility when things go wrong. The same holds true for algorithms. We trust them when it is clear what they are designed to deliver, when it is transparent whether or not they are delivering it, and, finally, when someone is accountable — or at least morally responsible, if not legally liable — if things go wrong.

This is where humans come in. First, to design the right sorts of algorithms and so to minimise risk. Second, since even the best algorithm can sometimes go wrong, or be fed the wrong data or in some other way misused, we need to ensure that not all decisions are left to brainless machines. Third, while some crucial decisions may indeed be too complex for any human to cope with, we should nevertheless oversee and manage such decision-making processes. And fourth, the fact that a decision is taken by an algorithm is not grounds for disregarding the insight and understanding that only humans can bring when things go awry.

In short, we need a system of design, control, transparency and accountability overseen by humans. And this need not mean spurning the help provided by digital technologies. After all, while a computer may play chess better than a human, a human in tandem with a computer is unbeatable….(More).

Solving a Global Digital Identity Crisis


Seth Berkley at MIT Technology Review: “In developing countries, one in three children under age five has no record of their existence. Technology can help….Digital identities have become an integral part of modern life, but things like e-passports, digital health records, or Apple Pay really only provide faster, easier, or sometimes smarter ways of accessing services that are already available.

In developing countries it’s a different story. There, digital ID technology can have a profound impact on people’s lives by enabling them to access vital and often life-saving services for the very first time….The challenge is that in poor countries, an increasing number of people live under the radar, invisible to the often archaic, paper-based methods used to certify births, deaths, and marriages. One in three children under age five does not officially exist because their birth wasn’t registered. Even when it is, many don’t have proof in the form of birth certificates. This can have a lasting impact on children’s lives, leaving them vulnerable to neglect and abuse.

In light of this, it is difficult to see how we will meet the SDG16 deadline (the UN Sustainable Development Goal target of providing legal identity, including birth registration, for all by 2030) without a radical solution. What we need are new and affordable digital ID technologies capable of working in poorly resourced settings—for example, where there is no reliable electricity—and yet able to leapfrog current approaches to reach everyone, whether they’re living in remote villages or urban slums.

Such technologies are already emerging as part of efforts to increase global childhood vaccination coverage, with small-scale trials across Africa and Asia. With 86 percent of infants now having access to routine immunization—where they receive all three doses of a diphtheria-pertussis-tetanus vaccine—there are obvious advantages to building on an existing system with such a broad reach.

These systems were designed to help the World Health Organization, UNICEF, and my organization, Gavi, the Vaccine Alliance, close the gap on the one in seven infants still missing out. But they can also be used to help us achieve SDG16.

One, called MyChild, helps countries transition from paper to digital. At first glance it looks like a typical paper booklet on which workers can record health-record details about the child, such as vaccinations, deworming, or nutritional supplements. But each booklet contains a unique identification number and tear-out slips that are collected and scanned later. This means that even if a child’s birth hasn’t been registered, a unique digital record will follow them through childhood. Developed by Swedish startup Shifo, this system has been used to register more than 95,000 infants in Uganda, Afghanistan, and the Gambia, enabling health workers to follow up either in person or using text reminders to parents.

Another system, called Khushi Baby, is entirely paperless and involves giving each child a digital necklace that contains a unique ID number on a near-field communication chip. This can be scanned by community health workers using a cell phone, enabling them to update a child’s digital health records even in remote areas with no cell coverage. Trials in the Indian state of Rajasthan have been carried out across 100 villages to track more than 15,000 vaccination events. An organization called ID2020 is exploring the use of blockchain technology to create access to a unique identity for those who currently lack one….(More)”
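
To make the mechanics concrete, here is a minimal sketch of how an offline-first record of the Khushi Baby kind could be structured: the unique ID read from the NFC chip keys locally queued vaccination events, which are pushed to a central registry once connectivity returns. The data model, field names and sync logic are assumptions for illustration, not the actual system.

```python
# Illustrative offline-first vaccination record keyed by an NFC-chip ID.
# Schema and sync behaviour are assumptions, not the real Khushi Baby system.
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class VaccinationEvent:
    child_id: str   # unique ID read from the NFC chip
    vaccine: str    # e.g. "DTP-1"
    given_on: str   # ISO date string

class LocalRecordStore:
    """Holds events on the health worker's phone until a connection is available."""
    def __init__(self) -> None:
        self.pending: list[VaccinationEvent] = []

    def record(self, child_id: str, vaccine: str) -> None:
        self.pending.append(VaccinationEvent(child_id, vaccine, date.today().isoformat()))

    def sync(self, upload) -> None:
        """Push queued events to the central registry, keeping any that fail."""
        still_pending = []
        for event in self.pending:
            try:
                upload(asdict(event))
            except ConnectionError:
                still_pending.append(event)
        self.pending = still_pending

# Usage: scan a child's necklace, record the dose, sync later when online
store = LocalRecordStore()
store.record(child_id="KB-00017", vaccine="DTP-1")
store.sync(upload=lambda payload: print("uploaded", json.dumps(payload)))
```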

Algorithmic accountability


 at TechCrunch: “When Netflix recommends you watch “Grace and Frankie” after you’ve finished “Love,” an algorithm decided that would be the next logical thing for you to watch. And when Google shows you one search result ahead of another, an algorithm made a decision that one page was more important than the other. Oh, and when a photo app decides you’d look better with lighter skin, a seriously biased algorithm that a real person developed made that call.

Algorithms are sets of rules that computers follow in order to solve problems and make decisions about a particular course of action. Whether it’s the type of information we receive, the information people see about us, the jobs we get hired to do, the credit cards we get approved for, or, down the road, the driverless cars that either see us or don’t see us, algorithms are increasingly becoming a big part of our lives.

But there is an inherent problem with algorithms that begins at the most basic level and persists throughout their adoption: human bias that is baked into these machine-based decision-makers.

You may remember that time when Uber’s self-driving car ran a red light in San Francisco, or when Google’s photo app labeled images of black people as gorillas. The Massachusetts Registry of Motor Vehicles’ facial-recognition algorithm mistakenly tagged someone as a criminal and revoked their driver’s license. And Microsoft’s bot Tay went rogue and decided to become a white supremacist. Those were algorithms at their worst. They have also recently been thrust into the spotlight with the troubles around fake news stories surfacing in Google search results and on Facebook.

But algorithms going rogue have much greater implications; they can result in life-altering consequences for unsuspecting people. Think about how scary it could be with algorithmically biased self-driving cars, drones and other sorts of automated vehicles. Consider robots that are algorithmically biased against black people or don’t properly recognize people who are not cisgender white people, and then make a decision on the basis that the person is not human.

Another important element to consider is the role algorithms play in determining what we see in the world, as well as how people see us. Think driverless cars “driven” by algorithms mowing down black people because they don’t recognize black people as human. Or algorithmic software that predicts future criminals, which just so happens to be biased against black people.

A variety of issues can arise as a result of bad or erroneous data, good but biased data because there’s not enough of it, or an inflexible model that can’t account for different scenarios.
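
A toy example makes the second failure mode tangible: a model trained on data that under-represents one group will often perform noticeably worse on that group, even though nothing "malicious" was coded. The data below is entirely synthetic and the groups are abstract; this is a sketch of the mechanism, not of any real system.

```python
# Toy illustration of "good but biased data because there's not enough of it".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic group whose decision rule depends on a group-specific shift."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the training data; group B is barely represented
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# The model looks accurate overall but fails the under-represented group
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(2000, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```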

The dilemma is figuring out what to do about these problematic algorithmic outcomes. Many researchers and academics are actively exploring how to increase algorithmic accountability. What would it mean if tech companies provided their code in order to make these algorithmic decisions more transparent? Furthermore, what would happen if some type of government board were in charge of reviewing them?…(More)”.

The Next Great Experiment


“A collection of essays from technologists and scholars about how machines are reshaping civil society” in the Atlantic: “Technology is changing the way people think about—and participate in—democratic society. What does that mean for democracy?…

We are witnessing, on a massive scale, diminishing faith in institutions of all kinds. People don’t trust the government. They don’t trust banks and other corporations. They certainly don’t trust the news media.

At the same time, we are living through a period of profound technological change. Along with the rise of bioengineering, networked devices, autonomous robots, space exploration, and machine learning, the mobile internet is recontextualizing how we relate to one another, dramatically changing the way people seek and share information, and reconfiguring how we express our will as citizens in a democratic society.

But trust is a requisite for democratic governance. And now, many are growing disillusioned with democracy itself.

Disentangling the complex forces that are driving these changes can help us better understand what ails democracies today, and potentially guide us toward compelling solutions. That’s why we asked more than two dozen people who think deeply about the intersection of technology and civics to reflect on two straightforward questions: Is technology hurting democracy? And can technology help save democracy?

We received an overwhelming response. Our contributors widely view 2017 as a moment of reckoning. They are concerned with many aspects of democratic life and put a spotlight in particular on correcting institutional failures that have contributed most to inequality of access—to education, information, and voting—as well as to ideological divisiveness and the spread of misinformation. They also offer concrete solutions for how citizens, corporations, and governmental bodies can improve the free flow of reliable information, pull one another out of ever-deepening partisan echo chambers, rebuild spaces for robust and civil discourse, and shore up the integrity of the voting process itself.

Despite the unanimous sense of urgency, the authors of these essays are cautiously optimistic, too. Everyone who participated in this series believes there is hope yet—for democracy, and for the institutions that support it. They also believe that technology can help, though it will take time and money to make it so. Democracy can still thrive in this uncertain age, they argue, but not without deliberate and immediate action from the people who believe it is worth protecting.

We’ll publish a new essay every day for the next several weeks, beginning with Shannon Vallor’s “Lessons From Isaac Asimov’s Multivac.”…(More)”

Community-based app gets Londoners walking


Springwise: “Apps that measure a user’s exercise have been ten a penny for some years, but Go Jauntly is set to offer something brand new and leans much more into crowdsourcing than its rivals. Launched by a new start-up of nature-loving digital experts, and co-developed with Transport for London, Go Jauntly is a community-based initiative that’s as much about exploration and sharing with fellow jaunt-lovers as it is about exercise. It also had £10,000 backing from the Ordnance Survey’s Geovation fund that helps start-ups using geo-based technology. Big players are involved.

It’s directly tapped into TfL’s dynamic open data, and keeps users informed of everything from congestion to pollution. According to statistics, some 3.6 million journeys a day are made in London using cars and public transport, all of which could have been walked.
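
For readers curious what tapping into TfL's open data looks like in practice, the sketch below queries the air-quality feed of TfL's public Unified API. The endpoint is documented by TfL, but the exact response fields can change and how Go Jauntly actually consumes the data is an assumption here, so the fields are read defensively; production use should register for an application key.

```python
# Minimal sketch of querying Transport for London's Unified API for the
# current air-quality forecast -- the kind of feed a walking app could surface.
import requests

resp = requests.get("https://api.tfl.gov.uk/AirQuality", timeout=10)
resp.raise_for_status()
data = resp.json()

for forecast in data.get("currentForecast", []):
    band = forecast.get("forecastBand", "unknown")
    summary = forecast.get("forecastSummary", "")
    print(f"{forecast.get('forecastType', 'Forecast')}: {band} - {summary}")
```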

“We’re hoping that with Go Jauntly we’re creating technology for good that has a positive impact on society from a health, wellness and environmental perspective,” explains Hana Sutch, CEO and co-founder. “We wanted to start something that would get people out of the house and more active. Our team at Go Jauntly are all nature-loving city dwellers who spend too much of our time deskbound and wanted to be a bit more active.”

Go Jauntly is available now on the App Store with a variety of walks including Richmond and Regent’s Parks, plus a selection of South East London’s cemeteries. This isn’t just a London-centric innovation: anyone in the UK can download it, walk the walk, and share their jaunt. The company is hoping to get an Android version out by the end of the year.

Other apps that encourage walking include Norway’s Traffic Agent, and the UK’s Walkability was also designed to get users on the hoof….(More)”

Wiki-journalism may be part of the answer to fake news


 at the Financial Times: “During the Iraq war, the Los Angeles Times attempted to harness the collective wisdom of its readers by crowdsourcing an editorial, called a wikitorial, on the conflict. It was a disaster. The arguments between the hawks and doves quickly erupted into a ranting match. The only way to salvage the mess was to “fork” the debate, inviting the two sides to refine separate arguments.

If it is impossible to crowdsource an opinion column, is it any more realistic to do so with news in our hyper-partisan age? We are about to find out as Jimmy Wales, the founder of Wikipedia, is launching Wikitribune in an attempt to do just that. Declaring that “news is broken”, Mr Wales said his intention was to combine the radical community spirit of Wikipedia with the best practices of journalism. His crowdfunded news site, free of advertising and paywalls, will initially be staffed by 10 journalists working alongside volunteer contributors.

Mr Wales is right that the news business desperately needs to regain credibility given the erosion of trusted media organisations, the proliferation of fake news and the howling whirlwind of social media. It is doubly problematic in an era in which unscrupulous politicians, governments and corporations can now disintermediate the media by providing their own “alternative facts” direct to the public.

Unlikely as it is that Wikitribune has stumbled upon the answer, it should be applauded for asking the right questions. How can the media invent sustainable new models that combine credibility, relevance and reach? One thing to note is that Wikipedia has for years been producing crowdsourced news in the Wikinews section of its site, with little impact. Wikinews invites anyone to write the news. But the service is slow, clunky and dull.

As a separate project, Wikitribune is breaking with Wikipedia’s core philosophy by entrusting experts with authority. As a journalist, I warm to the idea that Mr Wales thinks we serve some useful purpose. But it will surely take time for his new site to create a viable hybrid culture….(More)”.

DIY gun control: The people taking matters into their own hands


Legislators have always struggled to address gun violence in the US. But in the first 100 days of Donald Trump’s administration, new gun legislation has only expanded, not restricted, gun rights. In short order, lawmakers made it easier for certain people with mental illness to buy guns, and pushed to expand the locations where people can carry firearms.

Over the past few years, however, gun owners and sellers have started taking matters into their own hands and have come up with creative solutions to reduce the threat from guns.

From working with public health organisations so gun sellers can recognise the signs of depression in a prospective buyer to developing biometric gun locks, citizen scientists are cobbling together measures they hope will stave off the worst aspects of US gun culture.

The Federation of American Scientists estimates that 320 million firearms circulate in the US – about enough for every man, woman and child. According to the independent policy group Gun Violence Archive, there were 385 mass shootings in 2016, and it looks as if the numbers for 2017 will not differ wildly.

Although the number of these incidents is alarming, it is dwarfed by the number of suicides, which account for more than half of all firearms deaths. And last year, a report from the Associated Press and the USA Today Network showed that accidental shootings kill almost twice as many children as is shown in US government data.

In just one week in 2009, New Hampshire gun shop owner Ralph Demicco sold three guns that were ultimately used by their new owners to end their own lives. Demicco’s horror and dismay that he had inadvertently contributed to their deaths led him to start what has become known as the Gun Shop Project.

The project uses insights from the study of suicide to teach gun sellers to recognise signs of suicidal intent in buyers, and know when to avoid selling a gun. To do this, Demicco teamed up with Catherine Barber, an epidemiologist at the Harvard T. H. Chan School of Public Health.

Part of what the project does is challenge myths. With suicide, the biggest myth is that people plan suicides over a long period. But empirical evidence shows that people usually act in a moment of brief but extreme emotion. One study has found that nearly half of people who attempted suicide contemplated their attempt for less than 10 minutes. In the time it takes to find another method, a suicidal crisis often passes, so even a small delay in obtaining a gun could make a difference….Another myth that Demicco and Barber are seeking to dispel is that if you take away someone’s gun, they’ll just find another way to hurt themselves. While that’s sometimes true, Barber says, alternatives are less likely to be fatal. Gun attempts result in death more than 80 per cent of the time; only 2 per cent of pill-based suicide attempts are lethal.

Within a year of its launch in 2009, half of all gun sellers in New Hampshire had hung posters about the warning signs of suicide by the cash registers in their stores. The programme has expanded to 21 states, and Barber is now analysing data to see how well it is working.

Another grass-roots project is trying to prevent children from accidentally shooting themselves. Kai Kloepfer, an undergraduate at the Massachusetts Institute of Technology, has been working on a fingerprint lock to prevent anyone other than the owner from using a gun. He has founded a start-up called Biofire Technologies to improve the lock’s reliability and bring it into production….

Grass-roots schemes like the Gun Shop Project have a better chance of being successful, because gun users are already buying in. But it may take years for the project to become big enough to have a significant effect on national statistics.

Regulatory changes might be needed to make any improvements stick in the long term. At the very least, new regulations shouldn’t block the gun community’s efforts at self-governance.

Change will not come quickly, regardless. Barber sees parallels between the Gun Shop Project and campaigns against drink driving in the 1980s and 90s.

“One commercial didn’t change rates of drunk driving. It was an ad on TV, a scene in a movie, repeated over and over, that ultimately had an impact,” she says….(More)

Citizen-generated data in the information ecosystem


Ssanyu Rebecca at Making All Voices Count: “The call for a data revolution by the UN Secretary-General’s High Level Panel in the run-up to Agenda 2030 stimulated debate and action in terms of innovative ways of generating and sharing data.

Since then, technological advances have supported increased access to data and information through initiatives such as open data platforms and SMS-based citizen reporting systems. The main driving force for these advances is for data to be timely and usable in decision-making. Among the several actors in the data field are the proponents of citizen-generated data (CGD) who assert its potential in the context of the sustainable development agenda.

Nevertheless, there is a need for more evidence on the potential of CGD in influencing policy and service delivery, and contributing to the achievement of the sustainable development goals. Our study on Citizen-generated data in the information ecosystem: exploring links for sustainable development sought to obtain answers. Using case studies on the use of CGD in two different scenarios in Uganda and Kenya, Development Research and Training (DRT) and Development Initiatives (DI) collaborated to carry out this one-year study.

In Uganda, we focused on a process of providing unsolicited citizen feedback to duty-bearers and service providers in communities. This was based on the work of Community Resource Trackers, a group of volunteers supported by DRT in five post-conflict districts (Gulu, Kitgum, Pader, Katakwi and Kotido) to identify and track community resources and provide feedback on their use. These included financial and in-kind resources, allocated through central and local government, NGOs and donors.

In Kenya, we focused on a formalised process of CGD involving the Ministry of Education and National Taxpayers Association. The School Report Card (SRC) is an effort to increase parental participation in schooling. SRC is a scorecard for parents to assess the performance of their school each year in ten areas relating to the quality of education.

What were the findings?

The two processes provided insights into the changes CGD influences in the areas of accountability, resource allocation, service delivery and government response.

Both cases demonstrated the relevance of CGD in improving service delivery. They showed that the uptake of CGD and response by government depends significantly on the quality of relationships that CGD producers create with government, and whether the initiatives relate to existing policy priorities and interests of government.

The study revealed important effects on improving citizen behaviours. Community members who participated in CGD processes understood their role as citizens and participated fully in development processes, with strong skills, knowledge and confidence.

The Kenya case study revealed that CGD can influence policy change if it is generated and used at large scale, and in direct linkage with a specific sector; but it also revealed that this is difficult to measure.

In Uganda we observed distinct improvements in service delivery and accessibility at the local level – which was the motivation for engaging in CGD in the first instance….(More) (Full Report)”.