Shutting down the internet doesn’t work – but governments keep doing it


George Ogola in The Conversation: “As the internet continues to gain considerable power and agency around the world, many governments have moved to regulate it. And where regulation fails, some states resort to internet shutdowns or deliberate disruptions.

The statistics are staggering. In India alone, there were 154 internet shutdowns between January 2016 and May 2018. This is the most of any country in the world.

But similar shutdowns are becoming common on the African continent. Already in 2019 there have been shutdowns in Cameroon, the Democratic Republic of Congo, Republic of Congo, Chad, Sudan and Zimbabwe. Last year there were 21 such shutdowns on the continent. This was the case in Togo, Sierra Leone, Sudan and Ethiopia, among others.

The justifications for such shutdowns are usually relatively predictable. Governments often claim that internet access is blocked in the interest of public security and order. In some instances, however, their reasoning borders on the curious if not downright absurd, like the case of Ethiopia in 2017 and Algeria in 2018 when the internet was shut down apparently to curb cheating in national examinations.

Whatever their reasons, governments have three general approaches to controlling citizens’ access to the web.

How they do it

Internet shutdowns or disruptions usually take three forms. The first, and probably the most serious, is where the state completely blocks access to the internet on all platforms. It’s arguably the most punitive, with significant socioeconomic and political costs.

The financial costs can run into millions of dollars for each day the internet is blocked. A Deloitte report on the issue estimates that a country with average connectivity could lose at least 1.9% of its daily GDP for each day all internet services are shut down.

For countries with average to medium level connectivity the loss is 1% of daily GDP, and for countries with average to low connectivity it’s 0.4%. It’s estimated that Ethiopia, for example, could lose up to US$500,000 a day whenever there is a shutdown. These shutdowns, then, damage businesses, discourage investments, and hinder economic growth.
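
As a rough back-of-the-envelope sketch of how those Deloitte tiers translate into dollar figures, the calculation below applies the quoted percentages to a country’s daily GDP. The percentages come from the excerpt above; the function, tier names and example figures are purely illustrative.

```python
# Illustrative daily GDP loss rates per connectivity tier, as quoted above.
DAILY_GDP_LOSS_RATE = {
    "high": 0.019,    # ~1.9% of daily GDP per day of full shutdown
    "medium": 0.010,  # ~1.0%
    "low": 0.004,     # ~0.4%
}

def estimated_shutdown_cost(annual_gdp_usd: float, tier: str, days: int = 1) -> float:
    """Rough cost of a total internet shutdown lasting `days` days."""
    daily_gdp = annual_gdp_usd / 365
    return daily_gdp * DAILY_GDP_LOSS_RATE[tier] * days

# Hypothetical example: an $80 billion economy, medium connectivity, 3-day shutdown.
print(f"${estimated_shutdown_cost(80e9, 'medium', days=3):,.0f}")  # ~ $6.6 million
```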

The second way that governments restrict internet access is by applying content blocking techniques. They restrict access to particular sites or applications. This is the most common strategy and it’s usually targeted at social media platforms. The idea is to stop or limit conversations on these platforms.

Online spaces have become the platform for various forms of political expression that many states, especially those with authoritarian leanings, consider subversive. Governments argue, for example, that social media platforms encourage the spread of rumours which can trigger public unrest.

This was the case in 2016 in Uganda during the country’s presidential elections. The government restricted access to social media, describing the shutdown as a “security measure to avert lies … intended to incite violence and illegal declaration of election results”.

In Zimbabwe, the government blocked social media following demonstrations over an increase in fuel prices. It argued that the January 2019 ban was because the platforms were being “used to coordinate the violence”.

The third strategy, done almost by stealth, is the use of what is generally known as “bandwidth throttling”. In this case telecom operators or internet service providers are forced to lower the quality of their cell signals or internet speed. This makes the internet too slow to use. “Throttling” can also target particular online destinations such as social media sites….(More)”

Democracy Beyond Voting and Protests


Sasha Fisher at Project Syndicate: “For over a decade now, we have witnessed more elections and, simultaneously, less democracy. According to Bloomberg, elections have been occurring more frequently around the world. Yet Freedom House finds that some 110 countries have experienced declines in political and civil rights over the past 13 years.

As democracy declines, so does our sense of community. In the United States, this is evidenced by a looming loneliness epidemic and the rapid disappearance of civic institutions such as churches, eight of which close every day. And though these trends are global in nature, the US exemplifies them in the extreme.

This is no coincidence. As Alexis de Tocqueville pointed out in the 1830s, America’s founders envisioned a country governed not by shared values, but by self-interest. That vision has since defined America’s institutions, and fostered a hyper-individualistic society.

Growing distrust in governing institutions has fueled a rise in authoritarian populist movements around the world. Citizens are demanding individual economic security and retreating into an isolationist mentality. ...

And yet we know that “user engagement” works, as shown by countless studies and human experiences. For example, an evaluation conducted in Uganda found that the more citizens participated in the design of health programs, the more the perception of the health-care system improved. And in Indonesia, direct citizen involvement in government decision-making has led to higher satisfaction with government services....

While the Western world suffers from over-individualization, the most notable governance and economic innovations are taking place in the Global South. In Rwanda, for example, the government has introduced policies to encourage grassroots solutions that strengthen citizens’ sense of community and shared accountability. Through monthly community-service meetings, families and individuals work together to build homes for the needy, fix roads, and pool funds to invest in better farming practices and equipment.

Imagine if over 300 million Americans convened every month for a similar purpose. There would suddenly be billions more citizen hours invested in neighbor-to-neighbor interaction and citizen action.

This was one of the main effects of the Village Savings and Loan Associations that originated in the Democratic Republic of Congo. Within communities, members have access to loans to start small businesses and save for a rainy day. The model works because it leverages neighbor-to-neighbor accountability. Likewise, from Haiti to Liberia to Burundi and beyond, community-based health systems have proven effective precisely because health workers know their neighbors and their needs. Community health workers go from home to home, checking in on pregnant mothers and making sure they are cared for. Each of these solutions uses and strengthens communal accountability through shared engagement – not traditional vertical accountability lines.

If we believe in the democratic principle that governments must be accountable to citizens, we should build systems that hold us accountable to each other – and we must engage beyond elections and protests. We must usher in a new era of community-driven democracy – power must be decentralized and placed in the hands of families and communities.

When we achieve community-driven democracy, we will engage with one another and with our governments – not just on special occasions, but continuously, because our democracy and freedom depend on us….(More)” (See also Index on Trust in Institutions)

7 things we’ve learned about computer algorithms


Aaron Smith at Pew Research Center: “Algorithms are all around us, using massive stores of data and complex analytics to make decisions with often significant impacts on humans – from choosing the content people see on social media to judging whether a person is a good credit risk or job candidate. Pew Research Center released several reports in 2018 that explored the role and meaning of algorithms in people’s lives today. Here are some of the key themes that emerged from that research.

  1. Algorithmically generated content platforms play a prominent role in Americans’ information diets. Sizable shares of U.S. adults now get news on sites like Facebook or YouTube that use algorithms to curate the content they show to their users. A study by the Center found that 81% of YouTube users say they at least occasionally watch the videos suggested by the platform’s recommendation algorithm, and that these recommendations encourage users to watch progressively longer content as they click through the videos suggested by the site.
  2. The inner workings of even the most common algorithms can be confusing to users. Facebook is among the most popular social media platforms, but roughly half of Facebook users – including six-in-ten users ages 50 and older – say they do not understand how the site’s algorithmically generated news feed selects which posts to show them. And around three-quarters of Facebook users are not aware that the site automatically estimates their interests and preferences based on their online behaviors in order to deliver them targeted advertisements and other content.
  3. The public is wary of computer algorithms being used to make decisions with real-world consequences. The public expresses widespread concern about companies and other institutions using computer algorithms in situations with potential impacts on people’s lives. More than half (56%) of U.S. adults think it is unacceptable to use automated criminal risk scores when evaluating people who are up for parole. And 68% think it is unacceptable for companies to collect large quantities of data about individuals for the purposes of offering them deals or other financial incentives. When asked to elaborate about their worries, many feel that these programs violate people’s privacy, are unfair, or simply will not work as well as decisions made by humans….(More)”.

Congress needs your input (but don’t call it crowdsourcing)


Lorelei Kelly at TechCrunch: “As it stands, Congress does not have the technical infrastructure to ingest all this new input in any systematic way. Individual members lack a method to sort and filter signal from noise or trusted credible knowledge from malicious falsehood and hype.

What Congress needs is curation, not just more information

Curation means discovering, gathering and presenting content. This word is commonly thought of as the job of librarians and museums, places we go to find authentic and authoritative knowledge. Similarly, Congress needs methods to sort and filter information as required within the workflow of lawmaking. From personal offices to committees, members and their staff need context and informed judgement based on broadly defined expertise. The input can come from individuals or institutions. It can come from the wisdom of colleagues in Congress or across the federal government. Most importantly, it needs to be rooted in local constituents and it needs to be trusted.

This is not to say that crowdsourcing is unimportant for our governing system. But digital input methods must demonstrate informed and accountable deliberation over time. Governing is the curation part of democracy. Governing requires public review, understanding of context, explanation and measurements of value for the nation as a whole. We are already thinking about how to create an ethical blockchain. Why not the same attention for our most important democratic institution?

Governing requires trade-offs that elicit emotion and sometimes anger. But as in life, emotions require self-regulation. In Congress, this means compromise and negotiation. In fact, one of the reasons Congress is so stuck is that its own deliberative process has declined at every level. Besides the official committee process stalling out, members have few opportunities to be together as colleagues, and public space is increasingly antagonistic and dangerous.

With so few options, members are left with blunt communications objects like clunky mail management systems and partisan talking points. This means that lawmakers don’t use public input for policy formation as much as to surveil public opinion.

Any path forward to the 21st century must include new methods to (1) curate and hear from the public in a way that informs policy AND (2) incorporate real data into a results-driven process.

While our democracy is facing unprecedented stress, there are bright spots. Congress is again dedicating resources to an in-house technology assessment capacity. Earlier this month, the new 116th Congress created a Select Committee on the Modernization of Congress. It will be chaired by Rep. Derek Kilmer (D-WA). Then the Open Government Data Act became law. This law could expand access to government data to unprecedented levels. It will require that all public-facing federal data be machine-readable and reusable. This is a move in the right direction, and now comes the hard part.

Marci Harris, the CEO of civic startup Popvox, put it well, “The Foundations for Evidence-Based Policymaking (FEBP) Act, which includes the OPEN Government Data Act, lays groundwork for a more effective, accountable government. To realize the potential of these new resources, Congress will need to hire tech literate staff and incorporate real data and evidence into its oversight and legislative functions.”

In forsaking its own capacity for complex problem solving, Congress has become non-competitive in the creative process that moves society forward. During this same time period, all eyes turned toward Silicon Valley to fill the vacuum. With mass connection platforms and unlimited personal freedom, it seemed direct democracy had arrived. But that’s proved a bust. If we go by current trends, entrusting democracy to Silicon Valley will give us perfect laundry and fewer voting rights. Fixing democracy is a whole-of-nation challenge that Congress must lead.

Finally, we “the crowd” want a more effective governing body that incorporates our experience and perspective into the lawmaking process, not just feel-good form letters thanking us for our input. We also want a political discourse grounded in facts. A “modern” Congress will provide both, and now we have the institutional foundation in place to make it happen….(More)”.

Decoding Algorithms


Macalester University: “Ada Lovelace probably didn’t foresee the impact of the mathematical formula she published in 1843, now considered the first computer algorithm.

Nor could she have anticipated today’s widespread use of algorithms, in applications as different as the 2016 U.S. presidential campaign and Mac’s first-year seminar registration. “Over the last decade algorithms have become embedded in every aspect of our lives,” says Shilad Sen, professor in Macalester’s Math, Statistics, and Computer Science (MSCS) Department.

How do algorithms shape our society? Why is it important to be aware of them? And for readers who don’t know, what is an algorithm, anyway?…(More)”.

Digital mile-markers provide navigation in cities


Springwise: “UK-based Maynard Design Consultancy has developed a system to help people navigate the changing landscape of city neighbourhoods. A prototype of a wayfinding solution for districts in London combines smart physical markers and navigational apps. The physical markers, inspired by traditional mile markers, include a digital screen. They provide real-time information, including daily news and messages from local businesses. The markers also track how people use the park, providing valuable information to the city and urban planners. The partnering apps provide up-to-date information about the changing environment in the city, such as on-going construction and delays due to large-scale events.

Unlike traditional smartphone-based navigational apps, this concept uses technology to help us reconnect with our surroundings, Maynard Design said.

The proposal won the Smart London District Challenge competition set by the Institute for Sustainability. Maynard is currently looking for partner companies to pilot its concept.

Takeaway: The Maynard design represents the latest efforts to use smartphones to amplify public safety announcements, general information and local businesses. The concept moves past traditional wayfinding markers to link people to a smart-city grid. By tracking how people use parks and other urban spaces, the markers will provide valuable insight for city officials. We expect more innovations like this as cities increasingly move toward seamless communication between services and city residents, aided by smart technologies. Over the past several months, we have seen technology to connect drivers to parking spaces and a prototype pavement that can change functions based on people’s needs….(More)”

How Tech Utopia Fostered Tyranny


Jon Askonas at The New Atlantis: “The rumors spread like wildfire: Muslims were secretly lacing a Sri Lankan village’s food with sterilization drugs. Soon, a video circulated that appeared to show a Muslim shopkeeper admitting to drugging his customers — he had misunderstood the question that was angrily put to him. Then all hell broke loose. Over a several-day span, dozens of mosques and Muslim-owned shops and homes were burned down across multiple towns. In one home, a young journalist was trapped, and perished.

Mob violence is an old phenomenon, but the tools encouraging it, in this case, were not. As the New York Times reported in April, the rumors were spread via Facebook, whose newsfeed algorithm prioritized high-engagement content, especially videos. “Designed to maximize user time on site,” as the Times article describes, the newsfeed algorithm “promotes whatever wins the most attention. Posts that tap into negative, primal emotions like anger or fear, studies have found, produce the highest engagement, and so proliferate.” On Facebook in Sri Lanka, posts with incendiary rumors had among the highest engagement rates, and so were among the most highly promoted content on the platform. Similar cases of mob violence have taken place in India, Myanmar, Mexico, and elsewhere, with misinformation spread mainly through Facebook and the messaging tool WhatsApp.
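
To make that ranking dynamic concrete, here is a deliberately simplified toy model of engagement-weighted feed ranking. It is not Facebook’s actual newsfeed algorithm, which is proprietary; it only illustrates why, when time on site is the objective, high-arousal content tends to rise to the top. All posts and weights are invented.

```python
# Toy engagement-weighted ranking -- NOT any real platform's algorithm.
posts = [
    {"text": "Local clinic opens new maternity ward",   "reactions": 40,  "shares": 5,   "comments": 10},
    {"text": "RUMOUR: shopkeeper caught drugging food", "reactions": 900, "shares": 700, "comments": 450},
    {"text": "Council publishes road-repair schedule",  "reactions": 15,  "shares": 2,   "comments": 3},
]

def engagement_score(post, w_react=1.0, w_share=3.0, w_comment=2.0):
    # Shares and comments are weighted more heavily because they keep users on the site longer.
    return (w_react * post["reactions"]
            + w_share * post["shares"]
            + w_comment * post["comments"])

feed = sorted(posts, key=engagement_score, reverse=True)
for post in feed:
    print(f"{engagement_score(post):7.0f}  {post['text']}")
```

The incendiary rumour tops the feed, even though nothing in the scoring function knows or cares whether it is true.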

This is in spite of Facebook’s decision in January 2018 to tweak its algorithm, apparently to prevent the kind of manipulation we saw in the 2016 U.S. election, when posts and election ads originating from Russia reportedly showed up in newsfeeds of up to 126 million American Facebook users. The company explained that the changes to its algorithm will mean that newsfeeds will be “showing more posts from friends and family and updates that spark conversation,” and “less public content, including videos and other posts from publishers or businesses.” But these changes, which Facebook had tested out in countries like Sri Lanka in the previous year, may actually have exacerbated the problem — which is that incendiary content, when posted by friends and family, is guaranteed to “spark conversation” and therefore to be prioritized in newsfeeds. This is because “misinformation is almost always more interesting than the truth,” as Mathew Ingram provocatively put it in the Columbia Journalism Review.

How did we get here, from Facebook’s mission to “give people the power to build community and bring the world closer together”? Riot-inducing “fake news” and election meddling are obviously far from what its founders intended for the platform. Likewise, Google’s founders surely did not build their search engine with the intention of its being censored in China to suppress free speech, and yet, after years of refusing this demand from Chinese leadership, Google has recently relented rather than pull its search engine from China entirely. And YouTube’s creators surely did not intend their feature that promotes “trending” content to help clickbait conspiracy-theory videos go viral.

These outcomes — not merely unanticipated by the companies’ founders but outright opposed to their intentions — are not limited to social media. So far, Big Tech companies have presented issues of incitement, algorithmic radicalization, and “fake news” as merely bumps on the road of progress, glitches and bugs to be patched over. In fact, the problem goes deeper, to fundamental questions of human nature. Tools based on the premise that access to information will only enlighten us and social connectivity will only make us more humane have instead fanned conspiracy theories, information bubbles, and social fracture. A tech movement spurred by visions of libertarian empowerment and progressive uplift has instead fanned a global resurgence of populism and authoritarianism.

Despite the storm of criticism, Silicon Valley has still failed to recognize in these abuses a sharp rebuke of its sunny view of human nature. It remains naïvely blind to how its own aspirations for social engineering are on a spectrum with the tools’ “unintended” uses by authoritarian regimes and nefarious actors….(More)”.

How to keep good research from dying a bad death: Strategies for co-creating research with impact


Blog post by Bridget Konadu Gyamfi and Bethany Park…: “Researchers are often invested in disseminating the results of their research to the practitioners and policymakers who helped enable it—but disseminating a paper, developing a brief, or even holding an event may not truly empower decision-makers to make changes based on the research.

Disseminate results in stages and determine next steps

Mapping evidence to real-world decisions and processes in order to determine the right course of action can be complex. Together with our partners, we gather the troops—researchers, implementers, and IPA’s research and policy team—and have a discussion around what the implications of the research are for policy and practice.

This staged dissemination is critically important: having private discussions first helps partners digest the results and think through their reactions in a lower-stakes setting. We help the partners think about not only the results, but how their stakeholders will respond to the results, and how we can support their ongoing learning, whether results are “good” or not as hoped. Later, we hold larger dissemination events to inform the public. But we try to work closely with researchers and implementers to think through next steps right after results are available—before the window of opportunity passes.

Identify & prioritize policy opportunities

Many of our partners have already written smart advice about how to identify policy opportunities (windows, openings… etc.), so there’s no need for us to restate all that great thinking (go read it!). However, we get asked frequently how we prioritize policy opportunities, and we do have a clear internal process for making that decision. Here are our criteria:

[Figure: High Impact Policy Activities]
  1. A body of evidence to build on: One single study doesn’t often present the best policy opportunities. This is a generalization, of course, and there are exceptions, but typically our policy teams pay the most attention to bodies of evidence that are coming to a consensus. These are the opportunities for which we feel most able to recommend next steps related to policy and practice—there is a clearer message to communicate and research conclusions we can state with greater confidence.
  2. Relationships to open doors: Our long-term in-country presence and deep involvement with partners through research projects means that we have many relationships and doors open to us. Yet some of these relationships are stronger than others, and some partners are more influential in the processes we want to impact. We use stakeholder mapping tools to clarify who is invested and who has influence. We also track our stakeholder outreach to make sure our relationships stay strong and mutually beneficial.
  3. A concrete decision or process that we can influence: This is the typical understanding of a “policy opening,” and it’s an important one. What are the partner’s priorities, felt needs, and open questions? Where do those create opportunities for our influence? If the evidence would indicate one course of action, but that course isn’t even an option our partner would consider or be able to consider (for cost or other practical reasons), we have to give the opportunity a pass.
  4. Implementation funding: In the countries where we work, even when we have strong relationships, strong evidence, and the partner is open to influence, there is still one crucial ingredient missing: implementation funding. Addressing this constraint means getting evidence-based programming onto the agenda of major donors.

Get partners on board

Forming a coalition of partners and funders who will work with us as we move forward is crucial. As a research and policy organization, we can’t scale effective solutions alone—nor is that the specialty that we want to develop, since there are others to fill that role. We need partners like Evidence Action Beta to help us pressure test solutions as they move towards scale, or partners like Living Goods who already have nationwide networks of community health workers who can reach communities efficiently and effectively. And we need governments who are willing to make public investments and decisions based on evidence….(More)”.

This is how AI bias really happens—and why it’s so hard to fix


Karen Hao at MIT Technology Review: “Over the past few months, we’ve documented how the vast majority of AI’s applications today are based on the category of algorithms known as deep learning, and how deep-learning algorithms find patterns in data. We’ve also covered how these technologies affect people’s lives: how they can perpetuate injustice in hiring, retail, and security and may already be doing so in the criminal legal system.

But it’s not enough just to know that this bias exists. If we want to be able to fix it, we need to understand the mechanics of how it arises in the first place.

How AI bias happens

We often shorthand our explanation of AI bias by blaming it on biased training data. The reality is more nuanced: bias can creep in long before the data is collected as well as at many other stages of the deep-learning process. For the purposes of this discussion, we’ll focus on three key stages.

Framing the problem. The first thing computer scientists do when they create a deep-learning model is decide what they actually want it to achieve. A credit card company, for example, might want to predict a customer’s creditworthiness, but “creditworthiness” is a rather nebulous concept. In order to translate it into something that can be computed, the company must decide whether it wants to, say, maximize its profit margins or maximize the number of loans that get repaid. It could then define creditworthiness within the context of that goal. The problem is that “those decisions are made for various business reasons other than fairness or discrimination,” explains Solon Barocas, an assistant professor at Cornell University who specializes in fairness in machine learning. If the algorithm discovered that giving out subprime loans was an effective way to maximize profit, it would end up engaging in predatory behavior even if that wasn’t the company’s intention.
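
As a hypothetical sketch of that framing step, the same customer records can yield two different prediction targets depending on the business goal chosen. Every column name, figure and threshold below is invented for illustration.

```python
import pandas as pd

# Invented customer records: did the loan get repaid, and how much profit
# (fees + interest) did the customer generate?
customers = pd.DataFrame({
    "repaid_loan": [1, 1, 0, 1, 0],
    "profit_usd":  [120, 900, 450, 60, 700],
})

# Goal A: maximise repayment -> the label is simply "did the loan get repaid?"
customers["label_repayment"] = customers["repaid_loan"]

# Goal B: maximise profit -> the label is "was this customer profitable?",
# which can reward high-fee subprime lending even when the loan defaults.
customers["label_profit"] = (customers["profit_usd"] > 300).astype(int)

# Rows where the two labels disagree are exactly where the framing choice matters.
print(customers)
```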

Collecting the data. There are two main ways that bias shows up in training data: either the data you collect is unrepresentative of reality, or it reflects existing prejudices. The first case might occur, for example, if a deep-learning algorithm is fed more photos of light-skinned faces than dark-skinned faces. The resulting face recognition system would inevitably be worse at recognizing darker-skinned faces. The second case is precisely what happened when Amazon discovered that its internal recruiting tool was dismissing female candidates. Because it was trained on historical hiring decisions, which favored men over women, it learned to do the same.
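
A minimal synthetic illustration of the first failure mode: when one group is heavily underrepresented in the training data, a simple classifier tends to score noticeably worse on that group. The groups, features and numbers below are entirely made up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each synthetic group has its own feature distribution and decision boundary.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Training set: 950 examples from group A, only 50 from group B.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced held-out sets: the underrepresented group fares much worse.
Xa_test, ya_test = make_group(1000, shift=0.0)
Xb_test, yb_test = make_group(1000, shift=2.0)
print("accuracy on group A:", round(model.score(Xa_test, ya_test), 2))
print("accuracy on group B:", round(model.score(Xb_test, yb_test), 2))
```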

Preparing the data. Finally, it is possible to introduce bias during the data preparation stage, which involves selecting which attributes you want the algorithm to consider. (This is not to be confused with the problem-framing stage. You can use the same attributes to train a model for very different goals or use very different attributes to train a model for the same goal.) In the case of modeling creditworthiness, an “attribute” could be the customer’s age, income, or number of paid-off loans. In the case of Amazon’s recruiting tool, an “attribute” could be the candidate’s gender, education level, or years of experience. This is what people often call the “art” of deep learning: choosing which attributes to consider or ignore can significantly influence your model’s prediction accuracy. But while its impact on accuracy is easy to measure, its impact on the model’s bias is not.
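
And a small sketch of the data-preparation point: which columns the modeller lets the algorithm see is a design choice, and dropping a sensitive attribute does not remove its influence if other attributes act as proxies for it. All column names here are hypothetical.

```python
# Hypothetical attributes available for a creditworthiness model.
candidate_columns = ["age", "income", "paid_off_loans", "zip_code", "gender"]

# Version 1: let the model see everything, including a sensitive attribute.
features_v1 = list(candidate_columns)

# Version 2: drop the sensitive attribute -- but a correlated column such as
# zip_code may still encode much of the same information.
features_v2 = [c for c in candidate_columns if c != "gender"]

# The accuracy impact of this choice is easy to measure (compare validation
# scores for the two feature sets); the impact on bias is much harder to
# measure, because the labels themselves may reflect historical discrimination.
print(features_v1)
print(features_v2)
```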

Why AI bias is hard to fix

Given that context, some of the challenges of mitigating bias may already be apparent to you. Here we highlight four main ones….(More)”

Institutions as Social Theory


Blogpost by Titus Alexander: “The natural sciences comprise a set of institutions and methods designed to improve our understanding of the physical world. One of the most powerful things science does is to produce theories – models of reality – that are used by others to change the world. The benefits of using science are so great that societies have created many channels to develop and use research to improve the human condition.

Social scientists also seek to improve the human condition. However, the channels from research to application are often weak and most social research is buried in academic papers and books. Some will inform policy via think tanks, civil servants or pressure groups but practitioners and politicians often prefer their own judgement and prejudices, using research only when it suits them. But a working example – the institution as the method – has more influence than a research paper. The evidence is tangible, like an experiment in natural science, and includes all the complexities of real life. It demonstrates its reliability over time and provides proof of what works.

Reflexivity is key to social science

In the physical sciences the investigator is separate from the subject of investigation and she or he has no influence on what they observe. Generally, theories in the human sciences cannot provide this kind of detached explanation, because societies are reflexive. When we study human behaviour we also influence it. People change what they do in response to being studied. They use theories to change their own behaviour or the behaviour of others. Many scholars and practitioners have explored reflexivity, including Albert Bandura, Pierre Bourdieu and the financier George Soros. Anthony Giddens called it the ‘double hermeneutic’.

The fact that society is reflexive is the key to effective social science. Like scientists, societies create systematic detachment to increase objectivity in decision-making, through advisers, boards, regulators, opinion polls and so on. Peer-reviewed social science research is a form of detachment, but it is often so detached as to be irrelevant….(More)”.