
Stefaan Verhulst

Paper by Gianluca Elia and Alessandro Margherita describing "A conceptual framework and a collective intelligence system to support problem analysis and solution design for complex social issues… Wicked problems are complex and multifaceted issues that have no single solution, and are perceived by different stakeholders through contrasting views. Examples in the social context include climate change, poverty, energy production, sanitation, sustainable cities, pollution and homeland security.

Extant research has sought to support open discussion and collaborative decision making in wicked scenarios, but complexities derive from the difficulty of leveraging multiple contributions, from both experts and non-experts, through a structured approach.

In this view, we present a conceptual framework for the study of wicked problem solving as a complex and multi-stakeholder process. We then describe an integrated system of tools and associated operational guidelines aimed at supporting collective problem analysis and solution design. The main value of the article is to highlight the relevance of collective approaches in the endeavor of wicked problem resolution, and to provide an integrated framework of activities, actors and purposeful tools….(More)".

 

Can we solve wicked problems?

Book by Gordon C.C. Douglas: “When local governments neglect public services or community priorities, how do concerned citizens respond? In The Help-Yourself City, Gordon Douglas looks closely at people who take urban planning into their own hands with homemade signs and benches, guerrilla bike lanes and more. Douglas explores the frustration, creativity, and technical expertise behind these interventions, but also the position of privilege from which they often come. Presenting a needed analysis of this growing trend from vacant lots to city planning offices, The Help-Yourself City tells a street-level story of people’s relationships to their urban surroundings and the individualization of democratic responsibility…(More)”.

The Help-Yourself City: Legitimacy and Inequality in DIY Urbanism

Report by the Web Foundation: "The exponential growth of data provides powerful new ways for governments and companies to understand and respond to challenges and opportunities. This report, Data for Development: What's next, investigates how organisations working in international development can leverage the growing quantity and variety of data to improve their investments and projects so that they better meet people's needs.

Investigating the state of data for development and identifying emerging data trends, the study provides recommendations to support German development cooperation actors seeking to integrate data strategies and investments in their work. These insights can guide any organisation seeking to use data to enhance their development work.

The research considers four types of data: (1) big data, (2) open data, (3) citizen-generated data and (4) real-time data, and examines how they are currently being used in development-related policy-making and how they might lead to better development outcomes….(More)”.

Data for Development: What’s next? Concepts, trends and recommendations

Paper by Eric Forbush and Nicol Turner-Lee: "In June 2017, Mark Zuckerberg proclaimed a new mission for Facebook during the company's first Community Summit: to "[g]ive people the power to build community and bring the world closer together." Yet his declaration comes against the backdrop of a politically polarized America. While research has indicated that ideological polarization (the alignment and divergence of ideologies) has remained relatively unchanged, affective polarization (the degree to which Democrats and Republicans dislike each other) has skyrocketed (Gentzkow, 2016; Lelkes, 2016). This dislike for members of the opposite party may be amplified on social media platforms.

Social media have been accused of making our social networks increasingly insular, resulting in "echo chambers," wherein individuals select information and friends who support their already held beliefs (Quattrociocchi, Scala, and Sunstein, 2016; Williams, McMurray, Kurz, and Lambert, 2015). The implicit message in Zuckerberg's comments, and those of other leaders in this space, is that social media can instead broker relationships among users who hold different values and beliefs. However, little is known about the extent to which social media platforms enable these opportunities.

Theories of prejudice reduction (Paluck and Green, 2009) partially explain an idealistic outcome of improved online relationships. In his seminal contact theory, Gordon Allport (1954) argued that under certain optimal conditions, all that is needed to reduce prejudice is for members of different groups to spend more time interacting with each other. However, contemporary social media platforms may not be doing enough to increase intergroup engagements, especially between politically polarized communities on issues of importance.

In this paper, we use Twitter data collected over a 20-day period following the Day of Action for Net Neutrality on July 12, 2017. Centered on a highly polarized regulatory issue, the Day of Action was organized by advocacy groups and corporations in support of an open internet, which does not discriminate against online users when accessing their preferred content. Analyzing 81,316 tweets about #netneutrality from 40,502 distinct users, we use social network analysis to develop network visualizations and conduct discrete content analysis of central tweets. Our research also divides the content by those in support of, and those opposed to, any type of repeal of net neutrality rules by the FCC.

Our analysis of this particular issue reveals that social media is merely replicating, and potentially strengthening, polarization on issues by party affiliations and online associations. Mediators able to bridge online conversations or beliefs on charged issues appear to be nonexistent on both sides of the issue. Consequently, our findings suggest that social media companies may not be doing enough to bring communities together through meaningful conversations on their platforms….(More)".
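
The within-camp clustering the authors report can be illustrated with a few lines of Python. This is a toy sketch, not the paper's method: the users, stances, and retweet edges below are invented, and the measure is simply the share of retweet edges that stay inside one camp.

```python
# Toy sketch of echo-chamber measurement in a retweet network.
# All users, stances, and edges are invented for illustration.

camp = {  # each user's stance on net neutrality
    "alice": "support", "bob": "support", "carol": "support",
    "dan": "oppose", "erin": "oppose", "frank": "oppose",
}

edges = [  # (retweeter, retweeted) pairs
    ("alice", "bob"), ("bob", "carol"), ("carol", "alice"),
    ("dan", "erin"), ("erin", "frank"),
    ("alice", "dan"),  # the lone cross-camp interaction
]

within = sum(1 for u, v in edges if camp[u] == camp[v])
homophily = within / len(edges)
print(f"{homophily:.2f} of retweets stay within the same camp")  # 0.83
```

A homophily share near 1.0 would indicate the insularity the paper describes; a real analysis would also weight by tweet volume and detect the two camps algorithmically rather than by hand-labeling.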

Can Social Media Help Build Communities?

Julia Apostle in the Financial Times: "The unsettling revelations about how data firm Cambridge Analytica surreptitiously exploited the personal information of Facebook users are yet another demoralising reminder of how much data has been amassed about us, and of how little control we have over it.

Unfortunately, the General Data Protection Regulation privacy laws that are coming into force across Europe — with more demanding consent, transparency and accountability requirements, backed by huge fines — may improve practices, but they will not change the governing paradigm: the law labels those who gather our data as “controllers”. We are merely “subjects”.

But if the past 20 years have taught us anything, it is that when business and legislators have been too slow to adapt to public demand — for goods and services that we did not even know we needed, such as Amazon, Uber and bitcoin — computer scientists have stepped in to fill the void. And so it appears that the realms of data privacy and security are deserving of some disruption. This might come in the form of “self-sovereign identity” systems.

The theory behind self-sovereign identity is that individuals should control the data elements that form the basis of their digital identities, and not centralised authorities such as governments and private companies. In the current online environment, we all have multiple log-ins, usernames, customer IDs and personal data spread across countless platforms and stored in myriad repositories.

Instead of this scattered approach, we should each possess the digital equivalent of a wallet that contains verified pieces of our identities. We can then choose which identification to share, with whom, and when. Self-sovereign identity systems are currently being developed.

They involve the creation of a unique and persistent identifier attributed to an individual (called a decentralised identity), which cannot be taken away. The systems use public/private key cryptography, which enables a user with a private key (a string of numbers) to share information with unlimited recipients who can access the encrypted data if they possess a corresponding public key.
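
As a concrete, deliberately simplified illustration of the public/private key mechanics described above, the sketch below builds an RSA-style key pair from tiny primes (the numbers are chosen for readability; real systems use keys thousands of bits long and audited cryptographic libraries, and nothing here should be reused for actual security):

```python
# RSA-style toy key pair with tiny primes -- illustration only.

p, q = 61, 53                 # two small primes (real keys: ~2048 bits)
n = p * q                     # public modulus, shared in both keys
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent: modular inverse (Python 3.8+)

message = 42                  # a message encoded as an integer < n
ciphertext = pow(message, e, n)    # anyone with the public key encrypts
recovered = pow(ciphertext, d, n)  # only the private key holder decrypts

assert recovered == message
```

Signing works in the reverse direction: the key holder raises a message digest to the private exponent, and anyone with the public key can verify the result, which is the property self-sovereign identity systems rely on to prove who issued a credential.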

The systems also rely on decentralised ledger applications like blockchain. While key cryptography has been around for a long time, it is the development of decentralised ledger technology, which also supports the trading of cryptocurrencies without the involvement of intermediaries, that will allow self-sovereign identity systems to take off. The potential uses for decentralised identity are legion and small-scale implementation is already happening. The Swiss municipality of Zug started using a decentralised identity system called uPort last year, to allow residents access to certain government services. The municipality announced it will also use the system for voting this spring….

Decentralised identity data is more difficult to access, so there is less financial incentive for hackers to try. Self-sovereign identity systems could eliminate many of our data privacy concerns while empowering individuals in the online world and turning the established data order on its head. But the success of the technology depends on its widespread adoption….(More)

Lessons from Cambridge Analytica: one way to protect your data

Michael Wade at The Conversation: “Much of the discussion has been on how Cambridge Analytica was able to obtain data on more than 50m Facebook users – and how it allegedly failed to delete this data when told to do so. But there is also the matter of what Cambridge Analytica actually did with the data. In fact the data crunching company’s approach represents a step change in how analytics can today be used as a tool to generate insights – and to exert influence.

For example, pollsters have long used segmentation to target particular groups of voters, such as through categorising audiences by gender, age, income, education and family size. Segments can also be created around political affiliation or purchase preferences. The data analytics machine that presidential candidate Hillary Clinton used in her 2016 campaign – named Ada after the 19th-century mathematician and early computing pioneer – used state-of-the-art segmentation techniques to target groups of eligible voters in the same way that Barack Obama had done four years previously.

Cambridge Analytica was contracted to the Trump campaign and provided an entirely new weapon for the election machine. While it also used demographic segments to identify groups of voters, as Clinton’s campaign had, Cambridge Analytica also segmented using psychographics. As definitions of class, education, employment, age and so on, demographics are informational. Psychographics are behavioural – a means to segment by personality.

This makes a lot of sense. It’s obvious that two people with the same demographic profile (for example, white, middle-aged, employed, married men) can have markedly different personalities and opinions. We also know that adapting a message to a person’s personality – whether they are open, introverted, argumentative, and so on – goes a long way to help getting that message across….

There have traditionally been two routes to ascertaining someone’s personality. You can either get to know them really well – usually over an extended time. Or you can get them to take a personality test and ask them to share it with you. Neither of these methods is realistically open to pollsters. Cambridge Analytica found a third way, with the assistance of two University of Cambridge academics.

The first, Aleksandr Kogan, sold them access to 270,000 personality tests completed by Facebook users through an online app he had created for research purposes. Providing the data to Cambridge Analytica was, it seems, against Facebook’s internal code of conduct, but only now in March 2018 has Kogan been banned by Facebook from the platform. In addition, Kogan’s data also came with a bonus: he had reportedly collected Facebook data from the test-takers’ friends – and, at an average of 200 friends per person, that added up to some 50m people.

However, these 50m people had not all taken personality tests. This is where the second Cambridge academic, Michal Kosinski, came in. Kosinski – who is said to believe that micro-targeting based on online data could strengthen democracy – had figured out a way to reverse engineer a personality profile from Facebook activity such as likes. Whether you choose to like pictures of sunsets, puppies or people apparently says a lot about your personality. So much, in fact, that on the basis of 300 likes, Kosinski’s model is able to predict someone’s personality profile with the same accuracy as a spouse….(More)”
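
A heavily simplified sketch of how "likes" might be turned into a personality estimate follows. The pages and weights below are invented for illustration; Kosinski's actual models learned weights for real pages from hundreds of thousands of profiles.

```python
import math

# Invented weights: how strongly liking a page is associated with
# extraversion (positive) vs. introversion (negative).
weights = {
    "beach parties": 0.9,
    "stand-up comedy": 0.6,
    "chess": -0.7,
    "poetry": -0.4,
}

def extraversion_probability(likes):
    """Logistic model: sum the weights of liked pages, squash to (0, 1)."""
    score = sum(weights.get(page, 0.0) for page in likes)
    return 1 / (1 + math.exp(-score))

p = extraversion_probability(["beach parties", "stand-up comedy"])
print(f"estimated probability of extraversion: {p:.2f}")  # 0.82
```

With 300 such signals per person, even small per-page weights accumulate into a confident estimate, which is how a model of this general shape could rival a spouse's judgment.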

Psychographics: the behavioural analysis that helped Cambridge Analytica know voters’ minds

At The Conversation: "The scandal that has erupted around Cambridge Analytica's alleged harvesting of 50m Facebook profiles assembled from data provided by a UK-based academic and his company is a worrying development for legitimate researchers.

Political data analytics company Cambridge Analytica – which is affiliated with Strategic Communication Laboratories (SCL) – reportedly used Facebook data, after it was handed over by Aleksandr Kogan, a lecturer at the University of Cambridge’s department of psychology.

Kogan, through his company Global Science Research (GSR) – separate from his university work – gleaned the data from a personality test app named “thisisyourdigitallife”. Roughly 270,000 US-based Facebook users voluntarily responded to the test in 2014. But the app also collected data on those participants’ Facebook friends without their consent.

This was possible due to Facebook rules at the time that allowed third-party apps to collect data about a Facebook user’s friends. The Mark Zuckerberg-run company has since changed its policy to prevent such access to developers….

Social media data is a rich source of information for many areas of research in psychology, technology, business and humanities. Some recent examples include using Facebook to predict riots, comparing the use of Facebook with body image concern in adolescent girls and investigating whether Facebook can lower levels of stress responses, with research suggesting that it may both enhance and undermine psycho-social constructs related to well-being.

It is right to believe that researchers and their employers value research integrity. But instances where trust has been betrayed by an academic – even if data used for university research purposes wasn't caught in the crossfire – will have a negative impact on whether participants will continue to trust researchers. It also has implications for research governance and for companies' willingness to share data with researchers in the first place.

Universities, research organisations and funders govern the integrity of research with clear and strict ethics procedures designed to protect participants in studies, such as where social media data is used. The harvesting of data without permission from users is considered an unethical activity under commonly understood research standards.

The fallout from the Cambridge Analytica controversy is potentially huge for researchers who rely on social networks for their studies, where data is routinely shared with them for research purposes. Tech companies could become more reluctant to share data with researchers. Facebook is already extremely protective of its data – the worry is that it could become doubly difficult for researchers to legitimately access this information in light of what has happened with Cambridge Analytica….(More)”.

Cambridge Analytica scandal: legitimate researchers using Facebook data could be collateral damage

Jessi Hempel at Wired: “Though best known for underpinning volatile cryptocurrencies, like Bitcoin and Ethereum, blockchain technology has a number of qualities which make it appealing for record-keeping. A distributed ledger doesn’t depend on a central authority to verify its existence, or to facilitate transactions within it, which makes it less vulnerable to tampering. By using applications that are built on the ‘chain, individuals may be able to build up records over time, use those records across borders as a form of identity—essentially creating the trust they need to interact with the world, without depending on a centralized authority, like a government or a bank, to vouch for them.

For now, these efforts are small experiments. In Finland, the Finnish Immigration Service offers refugees a prepaid Mastercard developed by the Helsinki-based startup MONI that also links to a digital identity, composed of the record of one's financial transactions, which is stored on the blockchain. In Moldova, the government is working with digital identification experts from the United Nations Office for Project Services (UNOPS) to brainstorm ways to use blockchain to provide children living in rural areas with a digital identity, so it's more difficult for traffickers to smuggle them across borders.

Among the more robust programs is a pilot the United Nations World Food Program (WFP) launched in Jordan last May. Syrian refugees at the Azraq Refugee Camp receive vouchers to shop at the local grocery store. The WFP integrated blockchain into its biometric authentication technology, so Syrian refugees can cash in their vouchers at the supermarket by staring into a retina scanner. These transactions are recorded on a private Ethereum-based blockchain, called Building Blocks. Because the blockchain eliminates the need for WFP to pay banks to facilitate transactions, Building Blocks could save the WFP as much as $150,000 each month in bank fees in Jordan alone. The program has been so successful that by the end of the year, the WFP plans to expand the technology throughout Jordan. Blockchain enthusiasts imagine a future in which refugees can access more than just food vouchers, accumulating a transaction history that could stand in as a credit history when they attempt to resettle….

But in the rush to apply blockchain technology to every problem, many point out that relying on the ledger may have unintended consequences. As the Blockchain for Social Impact chief technology officer at ConsenSys, Robert Greenfeld IV writes, blockchain-based identity “isn’t a silver bullet, and if we don’t think about it/build it carefully, malicious actors could still capitalize on it as an element of control.” If companies rely on private blockchains, he warns, there’s a danger that the individual permissions will prevent these identity records from being used in multiple places. (Many of these projects, like the UNWFP project, are built on private blockchains so that organizations can exert more control over their development.) “If we don’t start to collaborate together with populations, we risk ending up with a bunch of siloed solutions,” says Greenfeld.

For his part, Greenfeld suggests governments could easily use state-sponsored machine learning algorithms to monitor public blockchain activity. But as bitcoin enthusiasts branch out of their get-rich-quick schemes to wrestle with how to make the web more equitable for everyone, they have the power to craft a world of their own devising. The early web should be a lesson to the bitcoin enthusiasts as they promote the blockchain’s potential. Right now we have the power to determine its direction; the dangers exist, but the potential is enormous….(More)”

How Refugees Are Helping Create Blockchain’s Brand New World

Medium blog by Yasodara Cordova: "…The data collected by industry represents AI opportunities for governments, to improve their services through innovation. Data-based intelligence promises to increase the efficiency of resource management by improving transparency, logistics, social welfare distribution — and virtually every government service. E-government enthusiasm took off with the realization of the possible applications, such as using AI to fight corruption by automating the fraud-tracking capabilities of cost-control tools. Controversially, the AI enthusiasm has spread to the distribution of social benefits, optimization of tax oversight and control, credit scoring systems, crime prediction systems, and other applications based on personal and sensitive data collection, especially in countries that do not have comprehensive privacy protections.

There are so many potential applications, society may operate very differently in ten years when the “datafixation” has advanced beyond citizen data and into other applications such as energy and natural resource management. However, many countries in the Global South are not being given necessary access to their countries’ own data.

Useful data are everywhere, but only some can take advantage. Beyond smartphones, data can be collected from IoT components in common spaces. Not restricted to urban spaces, data collection includes rural technology like sensors installed in tractors. However, even when the information is related to issues of public importance in developing countries — like data taken from road mesh or vital resources like water and land — it stays hidden under contract rules, and public citizens cannot access, and therefore benefit from, it. This arrangement keeps the public uninformed about their country's operations. The data collection and distribution frameworks are not built towards healthy partnerships between industry and government, preventing countries from realizing the potential outlined above.

The data necessary to the development of better cities, public policies, and common interest cannot be leveraged if kept in closed silos, yet access often costs more than is justifiable. Data are a primordial resource to all stages of new technology, especially tech adoption and integration, so the necessary long term investment in innovation needs a common ground to start with. The mismatch between the pace of the data collection among big established companies and small, new, and local businesses will likely increase with time, assuming no regulation is introduced for equal access to collected data….

Currently, data independence remains restricted to discussions on the technological infrastructure that supports data extraction. Privacy discussions focus on personal data rather than the digital accumulation of strategic data in closed silos — a necessary discussion not yet addressed. The national interest of data is not being addressed in a framework of economic and social fairness. Access to data, from a policy-making standpoint, needs to find a balance between the extremes of public, open access and limited, commercial use.

A final, but important note: the vast majority of social media act like silos. APIs play an important role in corporate business models, where industry controls the data it collects without reward, let alone user transparency. Negotiation of the specification of APIs to make data a common resource should be considered, for such an effort may align with the citizens’ interest….(More)”.

Artificial Intelligence and the Need for Data Fairness in the Global South

Yogesh Rajkotia at the Stanford Social Innovation Review: “In 2013, in southern Mozambique, foreign NGO workers searched for a man whom the local health facility reported as diagnosed with HIV. The workers aimed to verify that the health facility did indeed diagnose and treat him. When they could not find him, they asked the village chief for help. Together with an ever-growing crowd of onlookers, the chief led them to the man’s home. After hesitating and denying, he eventually admitted, in front of the crowd, that he had tested positive and received treatment. With his status made public, he now risked facing stigma, discrimination, and social marginalization. The incident undermined both his health and his ability to live a dignified life.

Similar privacy violations were documented in Burkina Faso in 2016, where community workers asked partners, in the presence of each other, to disclose what individual health services they had obtained.

Why was there such a disregard for the privacy and dignity of these citizens?

As it turns out, unbeknownst to these Mozambican and Burkinabé patients, their local health centers were participating in performance-based financing (PBF) programs financed by foreign assistance agencies. Implemented in more than 35 countries, PBF programs offer health workers financial bonuses for delivering priority health interventions. To ensure that providers do not cheat the system, PBF programs often send verifiers to visit patients’ homes to confirm that they have received specific health services. These verifiers are frequently community members (the World Bank callously notes in its “Performance-Based Financing Toolkit” that even “a local soccer club” can play this role), and this practice, known as “patient tracing,” is common among PBF programs. In World Bank-funded PBF programs alone, 19 out of the 25 PBF programs implement patient tracing. Yet the World Bank’s toolkit never mentions patient privacy or confidentiality. In patient tracing, patients’ rights and dignity are secondary to donor objectives.

Patient tracing within PBF programs is just one example of a bigger problem: Privacy violations are pervasive in global health. Some researchers and policymakers have raised privacy concerns about tuberculosis (TB), human immunodeficiency virus (HIV), family planning, post-abortion care, and disease surveillance programs. A study conducted by the Asia-Pacific Network of People Living with HIV/AIDS found that 34 percent of people living with HIV in India, Indonesia, the Philippines, and Thailand reported that health workers breached confidentiality. In many programs, sensitive information about people's sexual and reproductive health, disease status, and other intimate health details are often collected to improve health system effectiveness and efficiency. Usually, households have no way to opt out, nor any control over how health care programs use, store, and disseminate this data. At the same time, most programs do not have systems to enforce health workers' non-disclosure of private information.

In societies with strong stigma around certain health topics—especially sexual and reproductive health—the disclosure of confidential patient information can destroy lives. In contexts where HIV is highly stigmatized, people living with HIV are 2.4 times more likely to delay seeking care until they are seriously ill. In addition to stigma’s harmful effects on people’s health, it can limit individuals’ economic opportunities, cause them to be socially marginalized, and erode their psychological wellbeing….(More)”.

International Development Doesn’t Care About Patient Privacy
