
Stefaan Verhulst

Paper by Susan Stout, Vinisha Bhatia, and Paige Kirby: “We know that Monitoring and Evaluation (M&E) aims to support accountability and learning, in order to drive better outcomes… The paper, Understanding Data Use: Building M&E Systems that Empower Users, emphasizes how critical it is for decision makers to consider users’ decision space – from the institutional all the way to the technical level – in achieving data uptake.

Specifically, we call for smart mapping of this decision space – what do intended M&E users need, and what institutional factors shape those needs? With this understanding, we can better anticipate what types of data are most useful, and invest in systems to support data-driven decision making and better outcomes.

Mapping decision space is essential to understanding M&E data use. And as we’ve explored before, the development community has the opportunity to unlock existing resources to access more and better data that fits the needs of development actors working to meet the SDGs….(More)”.

Understanding Data Use: Building M&E Systems that Empower Users

Paper by Regina Lenart-Gansiniec and Łukasz Sułkowski: “Crowdsourcing is one of the new themes that have appeared in the last decade. Considering its potential, more and more organisations are reaching for it. It is perceived as an innovative method that can be used for problem solving, improving business processes, creating open innovations, building a competitive advantage, and increasing the transparency and openness of the organisation. Crowdsourcing is also conceptualised as a source of knowledge for the knowledge-based organisation. The importance of crowdsourcing for organisational learning is seen as one of the key themes in the latest crowdsourcing literature. Since 2008, public organisations have shown growing interest in crowdsourcing and in including it in their activities.

This article responds to recommendations in the subject literature, which state that crowdsourcing in public organisations is a new and exciting research area. The aim of the article is to present a new paradigm that combines the levels of crowdsourcing with the levels of learning. The research methodology is based on an analysis of the subject literature and on examples of organisations that have introduced crowdsourcing. The article presents a cross-sectional study of four Polish municipal offices that use the four types of crowdsourcing in J. Howe’s typology: collective intelligence, crowd creation, crowd voting, and crowdfunding. Semi-structured interviews were conducted with the management personnel of those municipal offices. The results show that knowledge acquired from virtual communities allows a public organisation to anticipate the changes, expectations, and needs of citizens and to adapt to them. It can therefore be argued that crowdsourcing is a new and rapidly developing paradigm of organisational learning….(More)”

Crowdsourcing – a New Paradigm of Organisational Learning of Public Organisation

Alessandro Mantelero in Computer Law & Security Review: “The use of algorithms in modern data processing techniques, as well as data-intensive technological trends, suggests the adoption of a broader view of the data protection impact assessment. This will force data controllers to go beyond the traditional focus on data quality and security, and consider the impact of data processing on fundamental rights and collective social and ethical values.

Building on studies of the collective dimension of data protection, this article sets out to embed this new perspective in an assessment model centred on human rights (the Human Rights, Ethical and Social Impact Assessment, or HRESIA). This self-assessment model aims to overcome the limitations of existing assessment models, which are either too narrowly focused on data processing or so broad and granular that evaluating the consequences of a given use of data becomes too complicated. In terms of architecture, the HRESIA has two main elements: a self-assessment questionnaire and an ad hoc expert committee. As a blueprint, this contribution focuses mainly on the nature of the proposed model, its architecture, and its challenges; a more detailed description of the model and the content of the questionnaire will be discussed in a future publication drawing on the ongoing research….(More)”.
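As a loose illustration of that two-element architecture, consider the sketch below: a weighted self-assessment questionnaire whose high-scoring cases are escalated to the expert committee. The questions, weights, and threshold are invented for illustration and are not drawn from the paper.

```python
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    weight: int  # hypothetical contribution to rights/ethical/social risk if answered "yes"

QUESTIONNAIRE = [
    Question("Does the processing profile individuals at scale?", 3),
    Question("Could outputs affect access to essential services?", 4),
    Question("Does the data cover vulnerable groups?", 2),
]

def assess(answers: list[bool], threshold: int = 5) -> str:
    """Score the self-assessment; high-risk cases go to the expert committee."""
    score = sum(q.weight for q, yes in zip(QUESTIONNAIRE, answers) if yes)
    return "refer to expert committee" if score >= threshold else "document and proceed"

print(assess([True, True, False]))  # -> refer to expert committee
```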

AI and Big Data: A Blueprint for a Human Rights, Social and Ethical Impact Assessment

Book edited by Dirk Helbing: “This new collection of essays follows in the footsteps of the successful volume Thinking Ahead – Essays on Big Data, Digital Revolution, and Participatory Market Society, published at a time when our societies were on a path to technological totalitarianism, as exemplified by mass surveillance reported by Edward Snowden and others.

Meanwhile the threats have diversified and tech companies have gathered enough data to create detailed profiles of almost everyone living in the modern world – profiles that can predict our behavior better than our friends, families, or even partners can. These profiles are used not only to manipulate people’s opinions and voting behavior but, more generally, to influence consumer behavior at all levels. It is becoming increasingly clear that we are rapidly heading towards a cybernetic society, in which algorithms and social bots aim to control both societal dynamics and individual behaviors….(More)”.

Towards Digital Enlightenment: Essays on the Dark and Light Sides of the Digital Revolution

Long Now Foundation Seminar by Juan Benet: “We live in a spectacular time,”…”We’re a century into our computing phase transition. The latest stages have created astonishing powers for individuals, groups, and our species as a whole. We are also faced with accumulating dangers — the capabilities to end the whole humanity experiment are growing and are ever more accessible. In light of the Promethean fire that is computing, we must prevent bad outcomes and lock in good ones to build robust foundations for our knowledge and a safe future. There is much we can do in the short term to secure the long term.”

“I come from the front lines of computing platform design to share a number of new super-powers at our disposal, some old challenges that are now soluble, and some new open problems. In this next decade, we’ll need to leverage peer-to-peer networks, crypto-economics, blockchains, Open Source, Open Services, decentralization, incentive-structure engineering, and so much more to ensure short-term safety and the long-term flourishing of humanity.”

Juan Benet is the inventor of the InterPlanetary File System (IPFS)—a new protocol that uses content addressing to make the web faster, safer, and more open—and the creator of Filecoin, a cryptocurrency-incentivized storage market….(More + Video)”
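For readers curious about the mechanics, here is a toy sketch of content addressing: a block is stored and retrieved by the hash of its bytes, so its address doubles as an integrity check. IPFS itself goes much further (multihash content identifiers, Merkle-linked objects, and a peer-to-peer distributed hash table rather than a local dictionary), so treat this only as the core idea.

```python
import hashlib

store: dict[str, bytes] = {}  # toy stand-in for a distributed block store

def put(content: bytes) -> str:
    """Store a block under the SHA-256 hash of its bytes: the address is derived from the content."""
    address = hashlib.sha256(content).hexdigest()
    store[address] = content
    return address

def get(address: str) -> bytes:
    """Retrieve a block and re-verify it, so tampering is detectable."""
    content = store[address]
    assert hashlib.sha256(content).hexdigest() == address, "corrupted block"
    return content

addr = put(b"hello, permanent web")
print(addr[:16], get(addr))
```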

Long Term Info-structure

Book by Steven Johnson: “Big, life-altering decisions matter so much more than the decisions we make every day, and they’re also the most difficult: where to live, whom to marry, what to believe, whether to start a company, how to end a war. There’s no one-size-fits-all approach for addressing these kinds of conundrums.

Steven Johnson’s classic Where Good Ideas Come From inspired creative people all over the world with new ways of thinking about innovation. In Farsighted, he uncovers powerful tools for honing the important skill of complex decision-making. While you can’t model a once-in-a-lifetime choice, you can model the deliberative tactics of expert decision-makers. These experts aren’t just the master strategists running major companies or negotiating high-level diplomacy. They’re the novelists who draw out the complexity of their characters’ inner lives, the city officials who secure long-term water supplies, and the scientists who reckon with future challenges most of us haven’t even imagined. The smartest decision-makers don’t go with their guts. Their success relies on having a future-oriented approach and the ability to consider all their options in a creative, productive way.

Through compelling stories that reveal surprising insights, Johnson explains how we can most effectively approach the choices that can chart the course of a life, an organization, or a civilization. Farsighted will help you imagine your possible futures and appreciate the subtle intelligence of the choices that shaped our broader social history….(More)”.

Farsighted

Open Letter: “To everyone who shapes technology today

We live in a world where technology is consuming society, ethics, and our core existence.

It is time to take responsibility for the world we are creating. Time to put humans before business. Time to replace the empty rhetoric of “building a better world” with a commitment to real action. It is time to organize, and to hold each other accountable.

Tech is not above us. It should be governed by all of us, by our democratic institutions. It should play by the rules of our societies. It should serve our needs, both individual and collective, as much as our wants.

Progress is more than innovation. We are builders at heart. Let us create a new Renaissance. We will open and nourish honest public conversation about the power of technology. We are ready to serve our societies. We will apply the means at our disposal to move our societies and their institutions forward.

Let us build from trust. Let us build for true transparency. We need digital citizens, not mere consumers. We all depend on transparency to understand how technology shapes us, which data we share, and who has access to it. Treating each other as commodities from which to extract maximum economic value is bad, not only for society as a complex, interconnected whole but for each and every one of us.

Design open to scrutiny. We must encourage a continuous, public, and critical reflection on our definition of success as it defines how we build and design for others. We must seek to design with those for whom we are designing. We will not tolerate design for addiction, deception, or control. We must design tools that we would love our loved ones to use. We must question our intent and listen to our hearts.

Let us move from human-centered design to humanity-centered design.
We are a community that exerts great influence. We must protect and nurture the potential to do good with it. We must do this with attention to inequality, with humility, and with love. In the end, our reward will be to know that we have done everything in our power to leave our garden patch a little greener than we found it….(More)”.

The Copenhagen Letter (on ethical technology)

Case study by Gabriel Kuris and Steven S. Strauss at Innovations for Successful Societies: “In 2011, voters in Chicago elected Rahm Emanuel, a 51-year-old former Chicago congressman, as their new mayor. Emanuel inherited a city on the upswing after years of decline but still marked by high rates of crime and poverty, racial segregation, and public distrust in government. The Emanuel administration hoped to harness the city’s trove of digital data to improve Chicagoans’ health, safety, and quality of life. During the next several years, Chief Data Officer Brett Goldstein and his successor Tom Schenk led innovative uses of city data, ranging from crisis management to the statistical targeting of restaurant inspections and pest extermination. As their teams took on more-sophisticated projects that predicted lead-poisoning risks and Escherichia coli outbreaks and created a citywide network of ambient sensors, the two faced new concerns about normative issues like privacy, ethics, and equity. By 2018, Chicago had won acclaim as a smarter city, but was it a fairer city? This case study discusses some of the approaches the city developed to address those challenges and manage the societal implications of cutting-edge technologies….(More)”.
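As a rough sketch of the statistical-targeting idea mentioned above: score establishments by predicted risk and inspect the riskiest first. Chicago’s actual food-inspection model (which the city open-sourced) uses different features and tooling; everything below, from the features to the data, is synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic history: three hypothetical features per establishment,
# e.g. days since last inspection, past violations, nearby complaints.
X_history = rng.normal(size=(500, 3))
y_history = (X_history @ np.array([0.8, 1.2, 0.5]) + rng.normal(size=500)) > 0

model = LogisticRegression().fit(X_history, y_history)

# Rank current establishments by predicted violation risk.
X_current = rng.normal(size=(10, 3))
risk = model.predict_proba(X_current)[:, 1]
print("inspection order:", np.argsort(-risk))
```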

Making a Smart City a Fairer City: Chicago’s Technologists Address Issues of Privacy, Ethics, and Equity, 2011-2018

John Abowd at US Census: “…Throughout our history, we have been leaders in statistical data protection, which we call disclosure avoidance. Other statistical agencies use the terms “disclosure limitation” and “disclosure control.” These terms are all synonymous. Disclosure avoidance methods have evolved since the censuses of the early 1800s, when the only protection used was simply removing names. Executive orders and a series of laws modified the legal basis for these protections, which were finally codified in the 1954 Census Act (13 U.S.C. Sections 8(b) and 9). We have continually added better and stronger protections to keep the data we publish anonymous and the underlying records confidential.

However, historical methods cannot completely defend against the threats posed by today’s technology. Growth in computing power, advances in mathematics, and easy access to large public databases pose a significant threat to confidentiality. These forces have made it possible for sophisticated users to ferret out common data points between databases using only our published statistics. If left unchecked, those users might be able to stitch together these common threads to identify the people or businesses behind the statistics, as was done in the case of the Netflix Challenge.
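To make the linkage threat concrete, here is a toy illustration (every record below is fabricated): joining a “de-identified” release with a public roster on a few shared quasi-identifiers is enough to put names back on the records.

```python
import pandas as pd

# A "de-identified" release: names removed, quasi-identifiers kept.
released = pd.DataFrame({
    "zip":        ["60614", "60615"],
    "birth_year": [1980, 1975],
    "sex":        ["F", "M"],
    "diagnosis":  ["diabetes", "asthma"],
})

# A public dataset (say, a voter roll) sharing the same quasi-identifiers.
public = pd.DataFrame({
    "name":       ["A. Jones", "B. Smith"],
    "zip":        ["60614", "60615"],
    "birth_year": [1980, 1975],
    "sex":        ["F", "M"],
})

# Stitching the common threads together re-identifies the records.
print(public.merge(released, on=["zip", "birth_year", "sex"]))
```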

The Census Bureau has been addressing these issues from every feasible angle and changing rapidly with the times to ensure that we protect the data our census and survey respondents provide us. We are doing this by moving to a new, advanced, and far more powerful confidentiality protection system, which uses a rigorous mathematical process that protects respondents’ information and identity in all of our publications.

The new tool is based on the concept known in scientific and academic circles as “differential privacy.” It is also called “formal privacy” because it provides provable mathematical guarantees, similar to those found in modern cryptography, about the confidentiality protections that can be independently verified without compromising the underlying protections.

“Differential privacy” is based on the cryptographic principle that an attacker should not be able to learn any more about you from the statistics we publish using your data than from statistics that did not use your data. After tabulating the data, we apply carefully constructed algorithms to modify the statistics in a way that protects individuals while continuing to yield accurate results. We assume that everyone’s data are vulnerable and provide the same strong, state-of-the-art protection to every record in our database.
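The canonical building block here is the Laplace mechanism, sketched below for a simple counting query. This is only a minimal illustration of the principle; the Census Bureau’s production system is far more elaborate.

```python
import numpy as np

def laplace_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    Adding or removing one person changes a count by at most 1 (its
    sensitivity), so Laplace noise with scale 1/epsilon suffices.
    """
    sensitivity = 1.0
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: publish a block-level population count of 137.
# Smaller epsilon means stronger privacy but noisier statistics.
print(laplace_count(137, epsilon=0.5))
```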

The Census Bureau did not invent the science behind differential privacy. However, we were the first organization anywhere to use it when we incorporated differential privacy into the OnTheMap application in 2008, where it protected block-level residential population data. Recently, Google, Apple, Microsoft, and Uber have all followed the Census Bureau’s lead, adopting differentially private systems as the standard for protecting user data confidentiality inside their browsers (Chrome), products (iPhones), operating systems (Windows 10), and apps (Uber)….(More)”.

Protecting the Confidentiality of America’s Statistics: Adopting Modern Disclosure Avoidance Methods at the Census Bureau

Paper by Helen Nissenbaum, Sebastian Benthall, Anupam Datta, Michael Carl Tschantz, and Piotr Mardziel: “Machine learning over big data poses challenges for our conceptualization of privacy. Such techniques can discover surprising and counterintuitive associations that turn innocent-looking data into important inferences about a person. For example, the purchase of carbon monoxide monitors has been linked to paying credit card bills, while buying chrome-skull car accessories predicts not doing so. Also, Target may have used the buying of scent-free hand lotion and vitamins as a sign that the buyer is pregnant. If we take pregnancy status to be private and assume that we should prohibit the sharing of information that can reveal that fact, then we have created an unworkable notion of privacy, one in which sharing any scrap of data may violate privacy.

Prior technical specifications of privacy depend on the classification of certain types of information as private or sensitive; privacy policies in these frameworks limit access to data that allow inference of this sensitive information. As the above examples show, today’s data-rich world creates a new kind of problem: it is difficult, if not impossible, to guarantee that information does not allow inference of sensitive topics. This makes information flow rules based on information topic unstable.

We address the problem of providing a workable definition of private data that takes into account emerging threats to privacy from large-scale data collection systems. We build on Contextual Integrity and its claim that privacy is appropriate information flow, or flow according to socially or legally specified rules.

As in other adaptations of Contextual Integrity (CI) to computer science, the parameterization of social norms in CI is translated into a logical specification. In this work, we depart from CI by considering rules that restrict information flow based on its origin and provenance, instead of on its type, topic, or subject.

We call this concept of privacy as adherence to origin-based rules Origin Privacy. Origin Privacy rules can be found in some existing data protection laws. This motivates the computational implementation of origin-based rules for the simple purpose of compliance engineering. We also formally model origin privacy to determine what security properties it guarantees relative to the concerns that motivate it….(More)”.
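A minimal sketch of what an origin-based flow rule could look like in code is below; the provenance labels and policy are hypothetical, not taken from the paper’s formal model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Datum:
    value: str
    origin: str  # provenance label recorded at collection time

# Hypothetical policy: flows are judged by where the data came from,
# not by what sensitive topic it might allow someone to infer.
BLOCKED_FLOWS = {
    ("purchase_history", "advertiser"),
    ("medical_record", "advertiser"),
}

def flow_permitted(datum: Datum, recipient: str) -> bool:
    """Permit a flow unless its (origin, recipient) pair is prohibited."""
    return (datum.origin, recipient) not in BLOCKED_FLOWS

lotion = Datum("scent-free hand lotion", origin="purchase_history")
print(flow_permitted(lotion, "advertiser"))   # False: blocked by origin
print(flow_permitted(lotion, "researcher"))   # True
```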

Origin Privacy: Protecting Privacy in the Big-Data Era
