Why Taking a Step Back From Social Impact Assessment Can Lead to Better Results


Article by Anita Fuzi, Lidia Gryszkiewicz, & Dariusz Sikora: “Over the years, many social sector leaders have written about the difficulties of measuring social impact. Over the past few decades, they’ve called for more skilled analysts, the embedding of impact measurement in the broader investment process, and the development of impact measurement roadmaps. Yet measurement remains a constant challenge for the sector.

For once, let’s take a step back instead of looking further forward.

Impact assessments are important tools for learning about effective solutions to social challenges, but do they really make sense when an organization is not fully leveraging its potential to address those challenges and deliver positive impact in the first place? Should well-done impact assessment remain the holy grail, or should we focus on organizations’ ability to deliver impact? We believe that before diving into measurement, organizations must establish awareness of and readiness for impact in every aspect of their operations. In other words, they need to assess their social impact capability system before they can even attempt to measure any impact they have generated. We call this the “capability approach to social impact,” and it rests on an evaluation of seven different organizational areas….

The Social Impact Capability Framework

When organizations do not have the right support system and resources in place to create positive social impact, it is unlikely that actual attempts at impact assessment will succeed. For example, measuring an organization’s impact on the local community will not bear much fruit if the organization’s strategy, mission, vision, processes, resources, and values are not designed to support local community involvement in the first place. It is better to focus on assessing impact readiness level—whether an organization is capable of delivering the impact it wishes to deliver—rather than jumping into the impact assessment itself. Examining these seven capability areas can help organizations determine their readiness for creating impact.

To help assess this, we created a diagnostic tool—based on an extensive literature review and our advisory experience—that evaluates seven capability areas: strategic framework, process, culture and leadership, structure and system, resources, innovation, and the external environment. Organizations rate each area on a scale from one to five, with one being very low/not important and five being very high/essential. Ideally, representatives from all departments complete the assessment collectively to ensure that everyone is on the same page….(More)”.
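The authors do not publish the diagnostic itself, so the following is only a rough sketch of how such a seven-area rating exercise might be tabulated: each respondent scores every area from one to five, scores are averaged per area, and areas falling below a chosen threshold are flagged as potential gaps. The area names come from the article; the simple averaging rule and the threshold are assumptions for illustration, not part of the authors’ framework.

```python
# Rough sketch of tabulating a seven-area capability assessment.
# The averaging rule and the 3.0 "gap" threshold are illustrative assumptions.

CAPABILITY_AREAS = [
    "strategic framework", "process", "culture and leadership",
    "structure and system", "resources", "innovation", "external environment",
]

def readiness_profile(responses, gap_threshold=3.0):
    """Average each area's 1-5 ratings across respondents and flag gaps.

    responses: list of dicts mapping area name -> rating (1-5),
               one dict per participating department representative.
    """
    profile = {}
    for area in CAPABILITY_AREAS:
        ratings = [r[area] for r in responses if area in r]
        avg = sum(ratings) / len(ratings) if ratings else None
        profile[area] = {"average": avg,
                         "gap": avg is not None and avg < gap_threshold}
    return profile

# Example: three departments score the organization.
responses = [
    {"strategic framework": 4, "process": 2, "culture and leadership": 5,
     "structure and system": 3, "resources": 2, "innovation": 4, "external environment": 3},
    {"strategic framework": 5, "process": 3, "culture and leadership": 4,
     "structure and system": 3, "resources": 2, "innovation": 3, "external environment": 4},
    {"strategic framework": 4, "process": 2, "culture and leadership": 4,
     "structure and system": 2, "resources": 3, "innovation": 4, "external environment": 3},
]

for area, result in readiness_profile(responses).items():
    flag = "  <- potential gap" if result["gap"] else ""
    print(f"{area:25s} {result['average']:.2f}{flag}")
```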

Trusting Intelligent Machines: Deepening Trust Within Socio-Technical Systems


Peter Andras et al. in IEEE Technology and Society Magazine: “Intelligent machines have reached capabilities that go beyond a level that a human being can fully comprehend without sufficiently detailed understanding of the underlying mechanisms. The choice of moves in the game Go (generated by DeepMind’s AlphaGo Zero) is an impressive example of an artificial intelligence system calculating results that even a human expert for the game can hardly retrace. But this is, quite literally, a toy example. In reality, intelligent algorithms are encroaching more and more into our everyday lives, be it through algorithms that recommend products for us to buy, or whole systems such as driverless vehicles. We are delegating ever more aspects of our daily routines to machines, and this trend looks set to continue in the future. Indeed, continued economic growth is set to depend on it. The nature of human-computer interaction in the world that the digital transformation is creating will require (mutual) trust between humans and intelligent, or seemingly intelligent, machines. But what does it mean to trust an intelligent machine? How can trust be established between human societies and intelligent machines?…(More)”.

We Need an FDA For Algorithms


Interview with Hannah Fry on the promise and danger of an AI world by Michael Segal: “…Why do we need an FDA for algorithms?

It used to be the case that you could just put any old colored liquid in a glass bottle and sell it as medicine and make an absolute fortune. And then not worry about whether or not it’s poisonous. We stopped that from happening because, well, for starters it’s kind of morally repugnant. But also, it harms people. We’re in that position right now with data and algorithms. You can harvest any data that you want, on anybody. You can infer any data that you like, and you can use it to manipulate them in any way that you choose. And you can roll out an algorithm that genuinely makes massive differences to people’s lives, both good and bad, without any checks and balances. To me that seems completely bonkers. So I think we need something like the FDA for algorithms. A regulatory body that can protect the intellectual property of algorithms, but at the same time ensure that the benefits to society outweigh the harms.

Why is the regulation of medicine an appropriate comparison?

If you swallow a bottle of colored liquid and then you keel over the next day, then you know for sure it was poisonous. But there are much more subtle things in pharmaceuticals that require expert analysis to be able to weigh up the benefits and the harms. To study the chemical profile of these drugs that are being sold and make sure that they actually are doing what they say they’re doing. With algorithms it’s the same thing. You can’t expect the average person in the street to study Bayesian inference or be totally well read in random forests, and have the kind of computing prowess to look up a code and analyze whether it’s doing something fairly. That’s not realistic. Simultaneously, you can’t have some code of conduct that every data science person signs up to, and agrees that they won’t tread over some lines. It has to be a government, really, that does this. It has to be government that analyzes this stuff on our behalf and makes sure that it is doing what it says it does, and in a way that doesn’t end up harming people.

How did you come to write a book about algorithms?

Back in 2011, we had these really bad riots in London. I’d been working on a project with the Metropolitan Police, trying mathematically to look at how these riots had spread and to use algorithms to ask how could the police have done better. I went to go and give a talk in Berlin about this paper we’d published about our work, and they completely tore me apart. They were asking questions like, “Hang on a second, you’re creating this algorithm that has the potential to be used to suppress peaceful demonstrations in the future. How can you morally justify the work that you’re doing?” I’m kind of ashamed to say that it just hadn’t occurred to me at that point in time. Ever since, I have really thought a lot about the point that they made. And started to notice around me that other researchers in the area weren’t necessarily treating the data that they were working with, and the algorithms that they were creating, with the ethical concern they really warranted. We have this imbalance where the people who are making algorithms aren’t talking to the people who are using them. And the people who are using them aren’t talking to the people who are having decisions made about their lives by them. I wanted to write something that united those three groups….(More)”.

Harnessing Digital Tools to Revitalize European Democracy


Article by Elisa Lironi: “…Information and communication technology (ICT) can be used to implement more participatory mechanisms and foster democratic processes. Often referred to as e-democracy, there is a large range of very different possibilities for online engagement, including e-initiatives, e-consultations, crowdsourcing, participatory budgeting, and e-voting. Many European countries have started exploring ICT’s potential to reach more citizens at a lower cost and to tap into the so-called wisdom of the crowd, as governments attempt to earn citizens’ trust and revitalize European democracy by developing more responsive, transparent, and participatory decisionmaking processes.

For instance, when Anne Hidalgo was elected mayor of Paris in May 2014, one of her priorities was to make the city more collaborative by allowing Parisians to propose policy and develop projects together. In order to build a stronger relationship with the citizens, she immediately started to implement a citywide participatory budgeting project for the whole of Paris, including all types of policy issues. It started as a small pilot, with the city of Paris putting forward fifteen projects that could be funded with up to about 20 million euros and letting citizens vote on which projects to invest in, via ballot box or online. Parisians and local authorities deemed this experiment successful, so Hidalgo decided it was worth taking further, with more ideas and a bigger pot of money. Within two years, the level of participation grew significantly—from 40,000 voters in 2014 to 92,809 in 2016, representing 5 percent of the total urban population. Today, Paris Budget Participatif is an official platform that lets Parisians decide how to spend 5 percent of the investment budget from 2014 to 2020, amounting to around 500 million euros. In addition, the mayor also introduced two e-democracy platforms—Paris Petitions, for e-petitions, and Idée Paris, for e-consultations. Citizens in the French capital now have multiple channels to express their opinions and contribute to the development of their city.

In Latvia, civil society has played a significant role in changing how legislative procedures are organized. ManaBalss (My Voice) is a grassroots NGO that creates tools for better civic participation in decisionmaking processes. Its online platform, ManaBalss.lv, is a public e-participation website that lets Latvian citizens propose, submit, and sign legislative initiatives to improve policies at both the national and municipal level. …

In Finland, the government itself introduced an element of direct democracy into the Finnish political system, through the 2012 Citizens’ Initiative Act (CI-Act) that allows citizens to submit initiatives to the parliament. …

Other civic tech NGOs across Europe have been developing and experimenting with a variety of digital tools to reinvigorate democracy. These include initiatives like Science For You (SCiFY) in Greece, Netwerk Democratie in the Netherlands, and the Citizens Foundation in Iceland, which got its start when citizens were asked to crowdsource their constitution in 2010.

Outside of civil society, several private tech companies are developing digital platforms for democratic participation, mainly at the local government level. One example is the Belgian start-up CitizenLab, an online participation platform that has been used by more than seventy-five municipalities around the world. The young founders of CitizenLab have used technology to innovate the democratic process by listening to what politicians need and including a variety of functions, such as crowdsourcing mechanisms, consultation processes, and participatory budgeting. Numerous other European civic tech companies have been working on similar concepts—Cap Collectif in France, Delib in the UK, and Discuto in Austria, to name just a few. Many of these digital tools have proven useful to elected local or national representatives….

While these initiatives are making a real impact on the quality of European democracy, most of the EU’s formal policy focus is on constraining the power of the tech giants rather than positively aiding digital participation….(More)”

Bad Landlord? These Coders Are Here to Help


Luis Ferré-Sadurní in the New York Times: “When Dan Kass moved to New York City in 2013 after graduating from college in Boston, his introduction to the city was one that many New Yorkers are all too familiar with: a bad landlord….

Examples include an app called Heatseek, created by students at a coding academy, that allows tenants to record and report the temperature in their homes to ensure that landlords don’t skimp on the heat. There’s also the Displacement Alert Project, built by a coalition of affordable housing groups, that maps out buildings and neighborhoods at risk of displacement.

Now, many of these civic coders are trying to band together and formalize a community.

For more than a year, Mr. Kass and other housing-data wonks have met each month at a shared work space in Brooklyn to exchange ideas about projects and talk about data sets over beer and snacks. Some come from prominent housing advocacy groups; others work unrelated day jobs. They informally call themselves the Housing Data Coalition.

“The real estate industry has many more programmers, many more developers, many more technical tools at their disposal,” said Ziggy Mintz, 30, a computer programmer who is part of the coalition. “It never quite seems fair that the tenant side of the equation doesn’t have the same tools.”

“Our collaboration is a counteracting force to that,” said Lucy Block, a research and policy associate at the Association for Neighborhood & Housing Development, the group behind the Displacement Alert Project. “We are trying to build the capacity to fight the displacement of low-income people in the city.”

This week, Mr. Kass and his team at JustFix.nyc, a nonprofit technology start-up, launched a new database for tenants that was built off ideas raised during those monthly meetings.

The tool, called Who Owns What, allows tenants to punch in an address and look up other buildings associated with the landlord or management company. It might sound inconsequential, but the tool goes a long way in piercing the veil of secrecy that shrouds the portfolios of landlords….(More)”.
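The article does not describe how Who Owns What works internally, but the core lookup can be sketched under a simple assumption: a table of building registrations lists a landlord or management contact per building, contact names are normalized, and buildings are grouped by contact so that one address leads to the rest of the portfolio. The sample data, normalization rule, and function names below are hypothetical illustrations, not JustFix.nyc’s actual code or data model.

```python
# Hypothetical sketch of a "who owns what" lookup: group buildings by the
# landlord/management contact listed on their registrations, then return every
# building sharing a contact with a queried address. Sample data is invented.
from collections import defaultdict

registrations = [
    {"address": "123 Example Ave",  "contact": "Acme Realty LLC"},
    {"address": "45 Sample St",     "contact": "ACME REALTY, LLC"},
    {"address": "9 Placeholder Rd", "contact": "Acme Realty LLC"},
    {"address": "77 Other Blvd",    "contact": "Different Mgmt Co"},
]

def normalize(name):
    # Crude normalization so spelling/punctuation variants of a name match.
    return "".join(ch for ch in name.lower() if ch.isalnum())

portfolio_index = defaultdict(list)
for reg in registrations:
    portfolio_index[normalize(reg["contact"])].append(reg["address"])

def who_owns_what(address):
    """Return all addresses sharing a registration contact with `address`."""
    for contact, addresses in portfolio_index.items():
        if address in addresses:
            return addresses
    return [address]

print(who_owns_what("123 Example Ave"))
# -> ['123 Example Ave', '45 Sample St', '9 Placeholder Rd']
```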

To Reduce Privacy Risks, the Census Plans to Report Less Accurate Data


Mark Hansen at the New York Times: “When the Census Bureau gathered data in 2010, it made two promises. The form would be “quick and easy,” it said. And “your answers are protected by law.”

But mathematical breakthroughs, easy access to more powerful computing, and widespread availability of large and varied public data sets have made the bureau reconsider whether the protection it offers Americans is strong enough. To preserve confidentiality, the bureau’s directors have determined they need to adopt a “formal privacy” approach, one that adds uncertainty to census data before it is published and achieves privacy assurances that are provable mathematically.

The census has always added some uncertainty to its data, but a key innovation of this new framework, known as “differential privacy,” is a numerical value describing how much privacy loss a person will experience. It determines the amount of randomness — “noise” — that needs to be added to a data set before it is released, and sets up a balancing act between accuracy and privacy. Too much noise would mean the data would not be accurate enough to be useful — in redistricting, in enforcing the Voting Rights Act or in conducting academic research. But too little, and someone’s personal data could be revealed.
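The bureau’s production system is far more elaborate, but the balancing act described above can be illustrated with the textbook Laplace mechanism: a count has sensitivity 1 (one person changes it by at most 1), so noise is drawn from a Laplace distribution with scale 1/ε, and a smaller privacy-loss parameter ε means stronger privacy but a noisier published count. This is a generic sketch, not the Census Bureau’s actual algorithm.

```python
# Minimal illustration of the Laplace mechanism used in differential privacy.
# A block-level count has sensitivity 1 (one person can change it by at most 1),
# so noise is drawn from Laplace(scale = sensitivity / epsilon).
# Smaller epsilon -> stronger privacy guarantee -> noisier published count.
import numpy as np

rng = np.random.default_rng(seed=0)

def laplace_count(true_count, epsilon, sensitivity=1.0):
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

true_count = 1_203  # e.g., voting-age residents of a small geography (invented)
for epsilon in (0.1, 1.0, 10.0):
    noisy = [laplace_count(true_count, epsilon) for _ in range(5)]
    print(f"epsilon={epsilon:>4}: " + ", ".join(f"{x:7.1f}" for x in noisy))
```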

On Thursday, the bureau will announce the trade-off it has chosen for data publications from the 2018 End-to-End Census Test it conducted in Rhode Island, the only dress rehearsal before the actual census in 2020. The bureau has decided to enforce stronger privacy protections than companies like Apple or Google had when they each first took up differential privacy….

In presentation materials for Thursday’s announcement, special attention is paid to lessening any problems with redistricting: the potential complications of using noisy counts of voting-age people to draw district lines. (By contrast, in 2000 and 2010 the swapping mechanism produced exact counts of potential voters down to the block level.)

The Census Bureau has been an early adopter of differential privacy. Still, instituting the framework on such a large scale is not an easy task, and even some of the big technology firms have had difficulties. For example, shortly after Apple’s announcement in 2016 that it would use differential privacy for data collected from its macOS and iOS operating systems, it was revealed that the actual privacy loss of their systems was much higher than advertised.

Some scholars question the bureau’s abandonment of techniques like swapping in favor of differential privacy. Steven Ruggles, Regents Professor of history and population studies at the University of Minnesota, has relied on census data for decades. Through the Integrated Public Use Microdata Series, he and his team have regularized census data dating to 1850, providing consistency between questionnaires as the forms have changed, and enabling researchers to analyze data across years.

“All of the sudden, Title 13 gets equated with differential privacy — it’s not,” he said, adding that if you make a guess about someone’s identity from looking at census data, you are probably wrong. “That has been regarded in the past as protection of privacy. They want to make it so that you can’t even guess.”

“There is a trade-off between usability and risk,” he added. “I am concerned they may go far too far on privileging an absolutist standard of risk.”

In a working paper published Friday, he said that with the number of private services offering personal data, a prospective hacker would have little incentive to turn to public data such as the census “in an attempt to uncover uncertain, imprecise and outdated information about a particular individual.”…(More)”.

Chatbots Are a Danger to Democracy


Jamie Susskind in the New York Times: “As we survey the fallout from the midterm elections, it would be easy to miss the longer-term threats to democracy that are waiting around the corner. Perhaps the most serious is political artificial intelligence in the form of automated “chatbots,” which masquerade as humans and try to hijack the political process.

Chatbots are software programs that are capable of conversing with human beings on social media using natural language. Increasingly, they take the form of machine learning systems that are not painstakingly “taught” vocabulary, grammar and syntax but rather “learn” to respond appropriately using probabilistic inference from large data sets, together with some human guidance.

Some chatbots, like the award-winning Mitsuku, can hold passable levels of conversation. Politics, however, is not Mitsuku’s strong suit. When asked “What do you think of the midterms?” Mitsuku replies, “I have never heard of midterms. Please enlighten me.” Reflecting the imperfect state of the art, Mitsuku will often give answers that are entertainingly weird. Asked, “What do you think of The New York Times?” Mitsuku replies, “I didn’t even know there was a new one.”

Most political bots these days are similarly crude, limited to the repetition of slogans like “#LockHerUp” or “#MAGA.” But a glance at recent political history suggests that chatbots have already begun to have an appreciable impact on political discourse. In the buildup to the midterms, for instance, an estimated 60 percent of the online chatter relating to “the caravan” of Central American migrants was initiated by chatbots.

In the days following the disappearance of the columnist Jamal Khashoggi, Arabic-language social media erupted in support for Crown Prince Mohammed bin Salman, who was widely rumored to have ordered his murder. On a single day in October, the phrase “we all have trust in Mohammed bin Salman” featured in 250,000 tweets. “We have to stand by our leader” was posted more than 60,000 times, along with 100,000 messages imploring Saudis to “Unfollow enemies of the nation.” In all likelihood, the majority of these messages were generated by chatbots.

Chatbots aren’t a recent phenomenon. Two years ago, around a fifth of all tweets discussing the 2016 presidential election are believed to have been the work of chatbots. And a third of all traffic on Twitter before the 2016 referendum on Britain’s membership in the European Union was said to come from chatbots, principally in support of the Leave side….

We should also be exploring more imaginative forms of regulation. Why not introduce a rule, coded into platforms themselves, that bots may make only up to a specific number of online contributions per day, or a specific number of responses to a particular human? Bots peddling suspect information could be challenged by moderator-bots to provide recognized sources for their claims within seconds. Those that fail would face removal.
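As a thought experiment only, the per-bot quota Susskind imagines might look something like the sketch below: a platform-side check that rejects a bot’s post once it exceeds a daily cap or a per-human reply cap. The caps, identifiers, and data structures are invented for illustration and do not correspond to any real platform’s systems.

```python
# Illustrative-only sketch of the per-bot quota idea: cap a bot's total
# contributions per day and its replies to any single human. All limits
# and structures are hypothetical.
from collections import defaultdict
from datetime import date

DAILY_CAP = 50           # max contributions per bot per day (invented number)
PER_HUMAN_REPLY_CAP = 5  # max replies to one human per day (invented number)

daily_posts = defaultdict(int)       # (bot_id, day) -> count
replies_to_human = defaultdict(int)  # (bot_id, human_id, day) -> count

def allow_contribution(bot_id, reply_to_human=None, today=None):
    """Return True if the bot may post, updating its counters if so."""
    today = today or date.today()
    if daily_posts[(bot_id, today)] >= DAILY_CAP:
        return False
    if reply_to_human is not None:
        key = (bot_id, reply_to_human, today)
        if replies_to_human[key] >= PER_HUMAN_REPLY_CAP:
            return False
        replies_to_human[key] += 1
    daily_posts[(bot_id, today)] += 1
    return True

# Example: the sixth reply to the same person in one day is rejected.
for i in range(6):
    print(i + 1, allow_contribution("bot-42", reply_to_human="user-7"))
```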

We need not treat the speech of chatbots with the same reverence that we treat human speech. Moreover, bots are too fast and tricky to be subject to ordinary rules of debate. For both those reasons, the methods we use to regulate bots must be more robust than those we apply to people. There can be no half-measures when democracy is at stake….(More)”.

New methods help identify what drives sensitive or socially unacceptable behaviors


Mary Guiden at Physorg: “Conservation scientists and statisticians at Colorado State University have teamed up to solve a key problem for the study of sensitive behaviors like poaching, harassment, bribery, and drug use.

Sensitive behaviors—defined as socially unacceptable or not compliant with rules and regulations—are notoriously hard to study, researchers say, because people often do not want to answer direct questions about them.

To overcome this challenge, scientists have developed indirect questioning approaches that protect responders’ identities. However, these methods also make it difficult to predict which sectors of a population are more likely to participate in sensitive behaviors, and which factors, such as knowledge of laws, education, or income, influence the probability that an individual will engage in a sensitive behavior.

Assistant Professor Jennifer Solomon and Associate Professor Michael Gavin of the Department of Human Dimensions of Natural Resources at CSU, and Abu Conteh from MacEwan University in Alberta, Canada, have teamed up with Professor Jay Breidt and doctoral student Meng Cao in the CSU Department of Statistics to develop a new method to solve the problem.

The study, “Understanding the drivers of sensitive behavior using Poisson regression from quantitative randomized response technique data,” was published recently in PLOS One.

Conteh, who, as a doctoral student, worked with Gavin in New Zealand, used a specific technique, known as quantitative randomized response, to elicit confidential answers to questions on behaviors related to non-compliance with natural resource regulations from a protected area in Sierra Leone.

In this technique, the researcher conducting interviews has a large container of pingpong balls, some with numbers and some without. The interviewer asks the respondent to pick a ball at random, without revealing it to the interviewer. If the ball has a number, the respondent tells the interviewer the number. If the ball does not have a number, the respondent reveals how many times he illegally hunted animals in a given time period….
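The mechanics of this design, often called the quantitative randomized response technique, can be made concrete with a small simulation: with probability p the respondent reports the number printed on a drawn ball (whose distribution is known), and with probability 1 − p they report their true count. Since the expected report equals p·E[ball] + (1 − p)·E[true count], the average level of the sensitive behavior can be recovered from the masked reports. The numbers below are invented, and the study’s Poisson regression step is not reproduced; this only illustrates the masking and the basic estimator.

```python
# Simulate the quantitative randomized response technique described above and
# recover the mean of the sensitive count from the masked reports.
# The ball distribution, probabilities, and "true" behavior are all invented.
import random

random.seed(1)

BALL_NUMBERS = [0, 1, 2, 3, 4, 5]   # numbers printed on the numbered balls
P_NUMBERED = 0.5                    # chance of drawing a numbered ball
MU_BALL = sum(BALL_NUMBERS) / len(BALL_NUMBERS)

def respond(true_count):
    """One interview: report a ball number or the true sensitive count."""
    if random.random() < P_NUMBERED:
        return random.choice(BALL_NUMBERS)   # identity-protecting answer
    return true_count                        # truthful answer, indistinguishable

# Hypothetical population: counts of a sensitive behavior per respondent.
true_counts = [random.choice([0, 0, 0, 1, 1, 2, 3]) for _ in range(5000)]
reports = [respond(c) for c in true_counts]

# E[report] = p * E[ball] + (1 - p) * E[true count], so invert for the mean:
mean_report = sum(reports) / len(reports)
estimated_mean = (mean_report - P_NUMBERED * MU_BALL) / (1 - P_NUMBERED)

print(f"true mean:      {sum(true_counts) / len(true_counts):.3f}")
print(f"estimated mean: {estimated_mean:.3f}")
```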

Armed with the new computer program, the scientists found that people from rural communities with less access to jobs in urban centers were more likely to hunt in the reserve. People in communities with a greater proportion of people displaced by Sierra Leone’s 10-year civil war were also more likely to hunt illegally….

The researchers said that collaborating across disciplines was and is key to addressing complex problems like this one. It is commonplace for people to be noncompliant with rules and regulations, and equally important for social scientists to analyze these behaviors….(More)”

The Constitution of Knowledge


Jonathan Rauch at National Affairs: “America has faced many challenges to its political culture, but this is the first time we have seen a national-level epistemic attack: a systematic attack, emanating from the very highest reaches of power, on our collective ability to distinguish truth from falsehood. “These are truly uncharted waters for the country,” wrote Michael Hayden, former CIA director, in the Washington Post in April. “We have in the past argued over the values to be applied to objective reality, or occasionally over what constituted objective reality, but never the existence or relevance of objective reality itself.” To make the point another way: Trump and his troll armies seek to undermine the constitution of knowledge….

The attack, Hayden noted, is on “the existence or relevance of objective reality itself.” But what is objective reality?

In everyday vernacular, reality often refers to the world out there: things as they really are, independent of human perception and error. Reality also often describes those things that we feel certain about, things that we believe no amount of wishful thinking could change. But, of course, humans have no direct access to an objective world independent of our minds and senses, and subjective certainty is in no way a guarantee of truth. Philosophers have wrestled with these problems for centuries, and today they have a pretty good working definition of objective reality. It is a set of propositions: propositions that have been validated in some way, and have thereby been shown to be at least conditionally true — true, that is, unless debunked. Some of these propositions reflect the world as we perceive it (e.g., “The sky is blue”). Others, like claims made by quantum physicists and abstract mathematicians, appear completely removed from the world of everyday experience.

It is worth noting, however, that the locution “validated in some way” hides a cheat. In what way? Some Americans believe Elvis Presley is alive. Should we send him a Social Security check? Many people believe that vaccines cause autism, or that Barack Obama was born in Africa, or that the murder rate has risen. Who should decide who is right? And who should decide who gets to decide?

This is the problem of social epistemology, which concerns itself with how societies come to some kind of public understanding about truth. It is a fundamental problem for every culture and country, and the attempts to resolve it go back at least to Plato, who concluded that a philosopher king (presumably someone like Plato himself) should rule over reality. Traditional tribal communities frequently use oracles to settle questions about reality. Religious communities use holy texts as interpreted by priests. Totalitarian states put the government in charge of objectivity.

There are many other ways to settle questions about reality. Most of them are terrible because they rely on authoritarianism, violence, or, usually, both. As the great American philosopher Charles Sanders Peirce said in 1877, “When complete agreement could not otherwise be reached, a general massacre of all who have not thought in a certain way has proved a very effective means of settling opinion in a country.”

As Peirce implied, one way to avoid a massacre would be to attain unanimity, at least on certain core issues. No wonder we hanker for consensus. Something you often hear today is that, as Senator Ben Sasse put it in an interview on CNN, “[W]e have a risk of getting to a place where we don’t have shared public facts. A republic will not work if we don’t have shared facts.”

But that is not quite the right answer, either. Disagreement about core issues and even core facts is inherent in human nature and essential in a free society. If unanimity on core propositions is not possible or even desirable, what is necessary to have a functional social reality? The answer is that we need an elite consensus, and hopefully also something approaching a public consensus, on the method of validating propositions. We needn’t and can’t all agree that the same things are true, but a critical mass needs to agree on what it is we do that distinguishes truth from falsehood, and more important, on who does it.

Who can be trusted to resolve questions about objective truth? The best answer turns out to be no one in particular….(More)”.

Library of Congress Launches Crowdsourcing Platform


Matt Enis at the Library Journal: “The Library of Congress (LC) last month launched crowd.loc.gov, a new crowdsourcing platform that will improve discovery and access to the Library’s digital collections with the help of volunteer transcription and tagging. The project kicked off with the “Letters to Lincoln Challenge,” a campaign encouraging volunteers to transcribe 10,000 digitized versions of documents written by or to Abraham Lincoln, which will make these materials full-text searchable for the first time….

The new project is the earliest example of LC’s new Digital Strategy, which complements the library’s new 2019–23 strategic plan. Announced in October, the strategic plan, “Enriching the User Experience,” outlines four high-level goals—expanding access, enhancing services, optimizing resources, and measuring results—while the digital strategy outlines how LC plans to accomplish these goals with its digital resources, described as “throwing open the treasure chest, connecting, and investing in our future”…

LC aims to use crowdsourcing to enrich the user experience in two key ways, Zwaard said.

“First, it helps with the legibility of our collections,” she explained. “The Library of Congress is home to so many historic treasures, but the handwriting can be hard to read…. For example, we have this amazing letter from Abraham Lincoln to his first fiancée. It’s really quite lovely, but at a glance, if you’re not familiar with historic handwriting, it’s hard to read.”…

Second, crowdsourcing “invites people into the collections,” she added. “The library is very optimized around answering specific research questions. One of the things we’re thinking about is how to serve users who don’t have a specific research question—who just want to see all of the cool stuff. We have so much cool stuff! But it can be hard for people to find purchase when they are just browsing and don’t have anything specific in mind. One of the ways we can [showcase interesting content] is by offering them a window into the collections by asking for their help.”…

To facilitate ongoing engagement with these varied projects, LC has set up an online forum on History Hub, a site hosted by the National Archives, to encourage crowd.loc.gov participants to ask questions, discuss projects, and meet other volunteers. …

Crowd.loc.gov is not LC’s first crowdsourcing project. Followers of the library’s official Flickr account have added tens of thousands of descriptive tags to digitized historical photos since the account debuted in 2007. And last year, the debut of labs.loc.gov—which aims to encourage creative use of LC’s digital collections—included the Beyond Words crowdsourcing project developed by LC software developer Tong Wang….(More)”