Stefaan Verhulst

 at the Financial Times: “The science and technology committee of the House of Commons published the responses to its inquiry on “algorithms in decision-making” on April 26. They vary in length, detail and approach, but share one important feature — the belief that human intervention may be unavoidable, indeed welcome, when it comes to trusting algorithmic decisions….

In a society in which algorithms and other automated processes are increasingly apparent, the important question, addressed by the select committee, is the extent to which we can trust such brainless technologies, which are regularly taking decisions instead of us. Now that white-collar jobs are being replaced, we may all be at the mercy of algorithmic errors — an unfair attribution of responsibility, say, or some other Kafkaesque computer-generated disaster. The best protection against such misfires is to put human intelligence back into the equation.

Trust depends on delivery, transparency and accountability. You trust your doctor, for instance, if they do what they are supposed to do, if you can see what they are doing and if they take responsibility in the event that things go wrong. The same holds true for algorithms. We trust them when it is clear what they are designed to deliver, when it is transparent whether or not they are delivering it, and, finally, when someone is accountable — or at least morally responsible, if not legally liable — if things go wrong.

This is where humans come in. First, to design the right sorts of algorithms and so to minimise risk. Second, since even the best algorithm can sometimes go wrong, or be fed the wrong data or in some other way misused, we need to ensure that not all decisions are left to brainless machines. Third, while some crucial decisions may indeed be too complex for any human to cope with, we should nevertheless oversee and manage such decision-making processes. And fourth, the fact that a decision is taken by an algorithm is not grounds for disregarding the insight and understanding that only humans can bring when things go awry.

In short, we need a system of design, control, transparency and accountability overseen by humans. And this need not mean spurning the help provided by digital technologies. After all, while a computer may play chess better than a human, a human in tandem with a computer is unbeatable….(More).

The rise of the algorithm need not be bad news for humans

Seth Berkley at MIT Technology Review: “In developing countries, one in three children under age five has no record of their existence. Technology can help….Digital identities have become an integral part of modern life, but things like e-passports, digital health records, or Apple Pay really only provide faster, easier, or sometimes smarter ways of accessing services that are already available.

In developing countries it’s a different story. There, digital ID technology can have a profound impact on people’s lives by enabling them to access vital and often life-saving services for the very first time….The challenge is that in poor countries, an increasing number of people live under the radar, invisible to the often archaic, paper-based methods used to certify births, deaths, and marriages. One in three children under age five does not officially exist because their birth wasn’t registered. Even when it is, many don’t have proof in the form of birth certificates. This can have a lasting impact on children’s lives, leaving them vulnerable to neglect and abuse.

In light of this, it is difficult to see how we will meet the SDG16 deadline without a radical solution. What we need are new and affordable digital ID technologies capable of working in poorly resourced settings—for example, where there is no reliable electricity—and yet able to leapfrog current approaches to reach everyone, whether they’re living in remote villages or urban slums.

Such technologies are already emerging as part of efforts to increase global childhood vaccination coverage, with small-scale trials across Africa and Asia. With 86 percent of infants now having access to routine immunization—where they receive all three doses of a diphtheria-pertussis-tetanus vaccine—there are obvious advantages of building on an existing system with such a broad reach.

These systems were designed to help the World Health Organization, UNICEF, and my organization, Gavi, the Vaccine Alliance, close the gap on the one in seven infants still missing out. But they can also be used to help us achieve SDG16.

One, called MyChild, helps countries transition from paper to digital. At first glance it looks like a typical paper booklet on which workers can record health-record details about the child, such as vaccinations, deworming, or nutritional supplements. But each booklet contains a unique identification number and tear-out slips that are collected and scanned later. This means that even if a child’s birth hasn’t been registered, a unique digital record will follow them through childhood. Developed by Swedish startup Shifo, this system has been used to register more than 95,000 infants in Uganda, Afghanistan, and the Gambia, enabling health workers to follow up either in person or using text reminders to parents.

Another system, called Khushi Baby, is entirely paperless and involves giving each child a digital necklace that contains a unique ID number on a near-field communication chip. This can be scanned by community health workers using a cell phone, enabling them to update a child’s digital health records even in remote areas with no cell coverage. Trials in the Indian state of Rajasthan have been carried out across 100 villages to track more than 15,000 vaccination events. An organization called ID2020 is exploring the use of blockchain technology to create access to a unique identity for those who currently lack one….(More)”

Solving a Global Digital Identity Crisis

 at TechCrunch: “When Netflix recommends you watch “Grace and Frankie” after you’ve finished “Love,” an algorithm decided that would be the next logical thing for you to watch. And when Google shows you one search result ahead of another, an algorithm made a decision that one page was more important than the other. Oh, and when a photo app decides you’d look better with lighter skin, a seriously biased algorithm that a real person developed made that call.

Algorithms are sets of rules that computers follow in order to solve problems and make decisions about a particular course of action. Whether it’s the type of information we receive, the information people see about us, the jobs we get hired to do, the credit cards we get approved for, and, down the road, the driverless cars that either see us or don’t see us, algorithms are increasingly becoming a big part of our lives.

But there is an inherent problem with algorithms that begins at the most basic level and persists throughout their adoption: human bias that is baked into these machine-based decision-makers.

You may remember that time when Uber’s self-driving car ran a red light in San Francisco, or when Google’s photo app labeled images of black people as gorillas. The Massachusetts Registry of Motor Vehicles’ facial-recognition algorithm mistakenly tagged someone as a criminal and revoked their driver’s license. And Microsoft’s bot Tay went rogue and decided to become a white supremacist. Those were algorithms at their worst. They have also recently been thrust into the spotlight with the troubles around fake news stories surfacing in Google search results and on Facebook.

But algorithms going rogue have much greater implications; they can result in life-altering consequences for unsuspecting people. Think about how scary it could be with algorithmically biased self-driving cars, drones and other sorts of automated vehicles. Consider robots that are algorithmically biased against black people or don’t properly recognize people who are not cisgender white people, and then make a decision on the basis that the person is not human.

Another important element to consider is the role algorithms play in determining what we see in the world, as well as how people see us. Think driverless cars “driven” by algorithms mowing down black people because they don’t recognize black people as human. Or algorithmic software that predicts future criminals, which just so happens to be biased against black people.

A variety of issues can arise as a result of bad or erroneous data, good but biased data because there’s not enough of it, or an inflexible model that can’t account for different scenarios.

The dilemma is figuring out what to do about these problematic algorithmic outcomes. Many researchers and academics are actively exploring how to increase algorithmic accountability. What would it mean if tech companies provided their code in order to make these algorithmic decisions more transparent? Furthermore, what would happen if some type of government board were in charge of reviewing them?…(More)”.

Algorithmic accountability

A collection of essays from technologists and scholars about how machines are reshaping civil society, in The Atlantic: “Technology is changing the way people think about—and participate in—democratic society. What does that mean for democracy?…

We are witnessing, on a massive scale, diminishing faith in institutions of all kinds. People don’t trust the government. They don’t trust banks and other corporations. They certainly don’t trust the news media.

At the same time, we are living through a period of profound technological change. Along with the rise of bioengineering, networked devices, autonomous robots, space exploration, and machine learning, the mobile internet is recontextualizing how we relate to one another, dramatically changing the way people seek and share information, and reconfiguring how we express our will as citizens in a democratic society.

But trust is a requisite for democratic governance. And now, many are growing disillusioned with democracy itself.

Disentangling the complex forces that are driving these changes can help us better understand what ails democracies today, and potentially guide us toward compelling solutions. That’s why we asked more than two dozen people who think deeply about the intersection of technology and civics to reflect on two straightforward questions: Is technology hurting democracy? And can technology help save democracy?

We received an overwhelming response. Our contributors widely view 2017 as a moment of reckoning. They are concerned with many aspects of democratic life and put a spotlight in particular on correcting institutional failures that have contributed most to inequality of access—to education, information, and voting—as well as to ideological divisiveness and the spread of misinformation. They also offer concrete solutions for how citizens, corporations, and governmental bodies can improve the free flow of reliable information, pull one another out of ever-deepening partisan echo chambers, rebuild spaces for robust and civil discourse, and shore up the integrity of the voting process itself.

Despite the unanimous sense of urgency, the authors of these essays are cautiously optimistic, too. Everyone who participated in this series believes there is hope yet—for democracy, and for the institutions that support it. They also believe that technology can help, though it will take time and money to make it so. Democracy can still thrive in this uncertain age, they argue, but not without deliberate and immediate action from the people who believe it is worth protecting.

We’ll publish a new essay every day for the next several weeks, beginning with Shannon Vallor’s “Lessons From Isaac Asimov’s Multivac.”…(More)”

The Next Great Experiment

Springwise: “Apps that measure a user’s exercise have been ten-a-penny for some years, but Go Jauntly is set to offer something brand new and leans much more into crowdsourcing than its rivals. Launched by a new start-up of nature-loving digital experts, and co-developed with Transport for London, Go Jauntly is a community-based initiative that’s as much about exploration and sharing with fellow jaunt-lovers. It also had £10,000 backing from the Ordnance Survey’s Geovation fund, which helps start-ups using geo-based technology. Big players are involved.

It’s directly tapped into TfL’s dynamic open data, and keeps users informed of everything from congestion to pollution. According to statistics, some 3.6 million journeys a day are made in London using cars and public transport, all of which could have been walked.

“We’re hoping that with Go Jauntly we’re creating technology for good that has a positive impact on society from a health, wellness and environmental perspective,” explains Hana Sutch, CEO and co-founder. “We wanted to start something that would get people out of the house and more active. Our team at Go Jauntly are all nature-loving city dwellers who spend too much of our time deskbound and wanted to be a bit more active.”

Go Jauntly is available now on the App Store with a variety of walks including Richmond and Regent’s Parks, plus a selection of South East London’s cemeteries. This isn’t just a London-centric innovation, anyone in the UK can download it, walk-the-walk, and share their jaunt. The company is hoping to get an Android version out by the end of the year.

Other apps that encourage walking include Norway’s Traffic Agent, and the UK’s Walkability was also designed to get users on the hoof….(More)”

Community-based app gets Londoners walking

 at The Conversation: “There’s a problem with the wisdom of crowds. Market economies and democracies rely on the idea that whole populations know more about what is best for them than a small elite group. This knowledge is potentially so powerful it can even predict the future through stock markets, betting exchanges and special investment vehicles called prediction markets.

These markets allow people to trade “shares” in possible future outcomes, such as the winner of upcoming elections. Anyone with new information about the future has a financial incentive to spread it by buying these shares. Prediction markets now routinely inform bookmakers’ odds and are quoted in news coverage of elections alongside more traditional opinion polls.

But prediction markets are having a crisis of confidence in the abilities of the crowd. They have been systematically wrong about a series of high profile political decisions, including the UK general election of 2015, the Brexit referendum and the US presidential election of 2016.

We shouldn’t expect perfect accuracy on every occasion, just as we know opinion polls are often flawed. But to be wrong so consistently about such prominent events points to possible flaws in the assumptions we make about crowd intelligence. For example, people don’t always act on the information they have and so it might never become part of the crowd’s decision. The dynamics of crowds and markets might also stop people from paying attention to some sources of information at all.

However, there might be a way forward. My colleagues and I have come up with a model that overcomes this problem by giving people an incentive to seek out new sources of information, and an extra reason to share it.

An important question for markets is “where do individuals get their information?” Research shows that our opinions and activities very often match those of our peers. We also tend to look for information in the most obvious places, in line with everyone else.

To give an example, if you look around on any public transport in the City of London you’ll probably see people holding copies of the Financial Times. This is a problem because if everyone has the same information, the crowd is no smarter than a single individual. Studies show that having a diverse collection of opinions, especially including minority views, is crucial for creating a smart group.

So why do we tend to narrow the sources of our opinions? One reason is because we have an innate desire to imitate our peers, to behave in ways that are safe and acceptable within our community. But it may also be because of a rational, profit-seeking motivation.

We studied how theoretical profit-motivated people behave when faced with the types of rewards seen in market-like situations. To do this, we created a computer simulation of a prediction market, where people received a reward for making correct predictions. Rewards were larger when fewer people guessed the right answer, just like in a prediction market or a betting exchange.

The reward an individual received was a fixed amount divided by the number of other people who made a correct prediction. This was supposed to give people an incentive to look for right answers that other people wouldn’t find. But we found that people still gravitated towards a very small subset of the available information – just like London bankers with their copies of the Financial Times.
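The reward rule described above (a fixed pot split equally among whoever predicted correctly) can be sketched as a toy simulation. This is a minimal illustration, not the authors' actual model: the agent count, source qualities, and imitation rule are all assumptions added here. Even though the payout favours unpopular-but-correct answers, simple peer imitation still concentrates agents on a small subset of sources:

```python
import random
from collections import Counter

def simulate(n_agents=100, n_sources=10, n_rounds=2000,
             total_reward=100.0, seed=0):
    """Toy prediction market: each round every agent backs one information
    source, and the agents whose source was right split a fixed pot equally,
    so rewards are larger when fewer people are correct."""
    rng = random.Random(seed)
    # Source quality = probability the source gives the right answer.
    quality = [0.5 + 0.4 * k / (n_sources - 1) for k in range(n_sources)]
    choice = [rng.randrange(n_sources) for _ in range(n_agents)]
    earnings = [0.0] * n_agents
    for _ in range(n_rounds):
        correct = [rng.random() < q for q in quality]
        winners = [i for i, c in enumerate(choice) if correct[c]]
        payout = total_reward / len(winners) if winners else 0.0
        for i in winners:
            earnings[i] += payout
        # Imitation: each agent compares itself with a random peer and
        # copies the peer's source if the peer has earned more so far.
        for i in range(n_agents):
            j = rng.randrange(n_agents)
            if earnings[j] > earnings[i]:
                choice[i] = choice[j]
    return Counter(choice)  # how many agents ended up on each source

followers = simulate()
print(followers.most_common(3))
```

In runs of this sketch, the most popular source attracts far more than the ten or so agents a uniform spread would give it, mirroring the herding the study describes: the anti-coordination payout alone is not enough to keep information-seeking diverse.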

The more complex the situation was, the smaller the percentage of available information people actually used. The problem was that the more niche, unused information, though it might be useful to the group, was so rarely useful to the individual that possessed it that there was no incentive for them to seek it out….(More)”

A simple reward system could make crowds a whole lot wiser

 at VICE News: “The Chinese government is recruiting 20,000 people to create an online encyclopedia that will be the country’s own, China-centric version of Wikipedia, or as one official put it, like “a Great Wall of culture.”

Known as the “Chinese Encyclopedia,” the country’s national encyclopedia will go online for the first time in 2018, and the government has employed tens of thousands of scholars from universities and research institutes who will contribute articles in more than 100 disciplines. The end result will be a knowledge base with more than 300,000 entries, each of which will be about 1,000 words long.

“The Chinese Encyclopaedia is not a book, but a Great Wall of culture,” Yang Muzhi, the editor-in-chief of the project and the chairman of the Book and Periodicals Distribution Association of China, said. He added that China was under pressure from the international community to produce an encyclopedia that will “guide and lead the public and society.”

The need for an online reference encyclopedia is in part a result of the Chinese government blocking access to Wikipedia. Chinese internet companies like Baidu and Qihoo 360 operate their own online encyclopedias, but none are capable of matching Wikipedia in terms of scale and breadth of information.

The aim of the new version of the Chinese encyclopedia is to showcase China’s latest science and technology developments, promote historical heritage, increase cultural soft power, and strengthen the core values of socialism, according to Yang, who stressed that the goal isn’t to mimic Wikipedia: “We have the biggest, most high-quality author team in the world. Our goal is not to catch up, but overtake.”…(More)”

China is recruiting 20,000 people to write its own Wikipedia

 at the Financial Times: “During the Iraq war, the Los Angeles Times attempted to harness the collective wisdom of its readers by crowdsourcing an editorial, called a wikitorial, on the conflict. It was a disaster. The arguments between the hawks and doves quickly erupted into a ranting match. The only way to salvage the mess was to “fork” the debate, inviting the two sides to refine separate arguments.

If it is impossible to crowdsource an opinion column, is it any more realistic to do so with news in our hyper-partisan age? We are about to find out as Jimmy Wales, the founder of Wikipedia, is launching Wikitribune in an attempt to do just that. Declaring that “news is broken”, Mr Wales said his intention was to combine the radical community spirit of Wikipedia with the best practices of journalism. His crowdfunded news site, free of advertising and paywalls, will initially be staffed by 10 journalists working alongside volunteer contributors.

Mr Wales is right that the news business desperately needs to regain credibility given the erosion of trusted media organisations, the proliferation of fake news and the howling whirlwind of social media. It is doubly problematic in an era in which unscrupulous politicians, governments and corporations can now disintermediate the media by providing their own “alternative facts” direct to the public.

Unlikely as it is that Wikitribune has stumbled upon the answer, it should be applauded for asking the right questions. How can the media invent sustainable new models that combine credibility, relevance and reach? One thing to note is that Wikipedia has for years been producing crowdsourced news in the Wikinews section of its site, with little impact. Wikinews invites anyone to write the news. But the service is slow, clunky and dull.

As a separate project, Wikitribune is breaking with Wikipedia’s core philosophy by entrusting experts with authority. As a journalist, I warm to the idea that Mr Wales thinks we serve some useful purpose. But it will surely take time for his new site to create a viable hybrid culture….(More)”.

Wiki-journalism may be part of the answer to fake news

Legislators have always struggled to address this problem. But in the first 100 days of Donald Trump’s administration, new gun legislation has only expanded, not restricted, gun rights. In short order, lawmakers made it easier for certain people with mental illness to buy guns, and pushed to expand the locations where people can carry firearms.

Over the past few years, however, gun owners and sellers have started taking matters into their own hands and have come up with creative solutions to reduce the threat from guns.

From working with public health organisations so gun sellers can recognise the signs of depression in a prospective buyer to developing biometric gun locks, citizen scientists are cobbling together measures they hope will stave off the worst aspects of US gun culture.

The Federation of American Scientists estimates that 320 million firearms circulate in the US – about enough for every man, woman and child. According to the independent policy group Gun Violence Archive, there were 385 mass shootings in 2016, and it looks as if the numbers for 2017 will not differ wildly.

In the absence of regulations against guns, individual gun sellers and owners are trying to help.

Although the number of these incidents is alarming, it is dwarfed by the number of suicides, which account for more than half of all firearms deaths. And last year, a report from the Associated Press and the USA Today Network showed that accidental shootings kill almost twice as many children as is shown in US government data.

In just one week in 2009, New Hampshire gun shop owner Ralph Demicco sold three guns that were ultimately used by their new owners to end their own lives. Demicco’s horror and dismay that he had inadvertently contributed to their deaths led him to start what has become known as the Gun Shop Project.

The project uses insights from the study of suicide to teach gun sellers to recognise signs of suicidal intent in buyers, and know when to avoid selling a gun. To do this, Demicco teamed up with Catherine Barber, an epidemiologist at the Harvard T. H. Chan School of Public Health.

Part of what the project does is challenge myths. With suicide, the biggest is that people plan suicides over a long period. But empirical evidence shows that people usually act in a moment of brief but extreme emotion. One study has found that nearly half of people who attempted suicide contemplated their attempt for less than 10 minutes. In the time it takes to find another method, a suicidal crisis often passes, so even a small delay in obtaining a gun could make a difference….Another myth that Demicco and Barber are seeking to dispel is that if you take away someone’s gun, they’ll just find another way to hurt themselves. While that’s sometimes true, Barber says, alternatives are less likely to be fatal. Gun attempts result in death more than 80 per cent of the time; only 2 per cent of pill-based suicide attempts are lethal.
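The substitution argument above can be made concrete with back-of-envelope arithmetic using the case-fatality rates the article cites (80 per cent or more for firearm attempts, about 2 per cent for pill-based attempts); the 1,000-attempt figure below is purely illustrative:

```python
# Case-fatality rates cited in the article.
gun_fatality = 0.80    # lower bound for firearm attempts
pill_fatality = 0.02   # pill-based attempts

# Even under full substitution (every blocked gun attempt becomes a
# pill attempt), expected deaths per 1,000 attempts fall sharply.
attempts = 1000
deaths_guns = attempts * gun_fatality    # 800.0
deaths_pills = attempts * pill_fatality  # 20.0

print(deaths_guns / deaths_pills)  # 40.0: roughly forty times fewer deaths
```

So even if every person denied a gun went on to attempt by another method, the cited rates imply an expected death toll dozens of times lower, which is why delaying access matters.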

Within a year of its launch in 2009, half of all gun sellers in New Hampshire had hung posters about the warning signs of suicide by the cash registers in their stores. The programme has expanded to 21 states, and Barber is now analysing data to see how well it is working.

Another grass-roots project is trying to prevent children from accidentally shooting themselves. Kai Kloepfer, an undergraduate at Massachusetts Institute of Technology, has been working on a fingerprint lock to prevent anyone other than the owner using a gun. He has founded a start-up called Biofire Technologies to improve the lock’s reliability and bring it into production….

Grass-roots schemes like the Gun Shop Project have a better chance of being successful, because gun users are already buying in. But it may take years for the project to become big enough to have a significant effect on national statistics.

Regulatory changes might be needed to make any improvements stick in the long term. At the very least, new regulations shouldn’t block the gun community’s efforts at self-governance.

Change will not come quickly, regardless. Barber sees parallels between the Gun Shop Project and campaigns against drink driving in the 1980s and 90s.

“One commercial didn’t change rates of drunk driving. It was an ad on TV, a scene in a movie, repeated over and over, that ultimately had an impact,” she says….(More)

DIY gun control: The people taking matters into their own hands

Ssanyu Rebecca at Making All Voices Count: “The call for a data revolution by the UN Secretary General’s High Level Panel in the run up to Agenda 2030 stimulated debate and action in terms of innovative ways of generating and sharing data.

Since then, technological advances have supported increased access to data and information through initiatives such as open data platforms and SMS-based citizen reporting systems. The main driving force for these advances is for data to be timely and usable in decision-making. Among the several actors in the data field are the proponents of citizen-generated data (CGD) who assert its potential in the context of the sustainable development agenda.

Nevertheless, there is need for more evidence on the potential of CGD in influencing policy and service delivery, and contributing to the achievement of the sustainable development goals. Our study, “Citizen-generated data in the information ecosystem: exploring links for sustainable development”, sought to obtain answers. Using case studies on the use of CGD in two different scenarios in Uganda and Kenya, Development Research and Training (DRT) and Development Initiatives (DI) collaborated to carry out this one-year study.

In Uganda, we focused on a process of providing unsolicited citizen feedback to duty-bearers and service providers in communities. This was based on the work of Community Resource Trackers, a group of volunteers supported by DRT in five post-conflict districts (Gulu, Kitgum, Pader, Katakwi and Kotido) to identify and track community resources and provide feedback on their use. These included financial and in-kind resources, allocated through central and local government, NGOs and donors.

In Kenya, we focused on a formalised process of CGD involving the Ministry of Education and National Taxpayers Association. The School Report Card (SRC) is an effort to increase parental participation in schooling. SRC is a scorecard for parents to assess the performance of their school each year in ten areas relating to the quality of education.

What were the findings?

The two processes provided insights into the changes that CGD can influence in the areas of accountability, resource allocation, service delivery and government response.

Both cases demonstrated the relevance of CGD in improving service delivery. They showed that the uptake of CGD and response by government depends significantly on the quality of relationships that CGD producers create with government, and whether the initiatives relate to existing policy priorities and interests of government.

The study revealed important effects on improving citizen behaviours. Community members who participated in CGD processes understood their role as citizens and participated fully in development processes, with strong skills, knowledge and confidence.

The Kenya case study revealed that CGD can influence policy change if it is generated and used at large scale, and in direct linkage with a specific sector; but it also revealed that this is difficult to measure.

In Uganda we observed distinct improvements in service delivery and accessibility at the local level – which was the motivation for engaging in CGD in the first instance….(More) (Full Report)”.

Citizen-generated data in the information ecosystem
