Stefaan Verhulst
Julia M. Puaschunder in the International Robotics & Automation Journal: “Since the end of the 1970s, a wide range of psychological, economic and sociological laboratory and field experiments has shown that human beings deviate from rational choices, and that standard neo-classical profit-maximization axioms fail to explain how humans actually behave. Behavioral economists proposed to nudge and wink citizens into making better choices for themselves, with many different applications. While the motivation behind nudging appears to be a noble endeavor to improve people’s lives around the world, the nudging approach raises questions of social hierarchy and class division. The motivating force of the nudgital society may open a gate to exploitation of the populace and – based on privacy infringements – strip them involuntarily of their own decision-making power, in the shadow of legally permitted libertarian paternalism and under the cloak of the noble goal of welfare-improving global governance. Nudging enables nudgers to plunder the simple, uneducated citizen, who is neither aware of the nudging strategies nor able to see through the tactics used by the nudgers.
The nudgers are thereby legally protected by the democratically assigned positions they hold or by the outsourcing strategies they use, in which social media plays a crucial role. Social media forces are portrayed as unfolding a class-dividing nudgital society, in which the providers of social communication tools can reap surplus value from the information shared by social media users. The social media provider thereby becomes a capitalist-industrialist who benefits from the information shared by social media users, or so-called consumer-workers, who share private information in their wish to interact with friends and communicate with the public. The social media capitalist-industrialist reaps surplus value from the consumer-workers’ information sharing, which stems from nudging social media users. For one, social media space can be sold to marketers, who can constantly penetrate the consumer-worker in a subliminal way with advertisements. Nudging also occurs as the big data compiled about the social media consumer-worker is resold to marketers and technocrats to draw inferences about consumer choices, contemporary market trends or individual personality cues used for governance control, such as, for instance, border protection and tax compliance purposes.
The law of motion of the nudging society entails an unequal concentration of power among those who have access to compiled data and who abuse their position under the cloak of hidden persuasion and in the shadow of paternalism. In the nudgital society, information, education and differing social classes determine who the nudgers and who the nudged are. Humans end up in different silos or bubbles that differ in who has power and control and who is deceived and being ruled. The owners of the means of governance are able to reap surplus value through hidden persuasion, protected by the legal vacuum around curbing libertarian paternalism, in the moral shadow of unnoticeable guidance and under the cloak of the presumption that some know better than others what is rational. All these features lead to an unprecedented contemporary class struggle between the nudgers (those who nudge) and the nudged (those who are nudged), who are divided by the implicit means of governance in the digital scenery. In this light, governing our common welfare through deceptive means and outsourced governance on social media appears problematic. In combination with the underlying assumption that the nudgers know better what is right, just and fair within society, the digital age and social media tools hold potentially unprecedented ethical challenges….(More)”
Paper by Robert Brauneis and Ellen P. Goodman: “Emerging across many disciplines are questions about algorithmic ethics – about the values embedded in artificial intelligence and big data analytics that increasingly replace human decisionmaking. Many are concerned that an algorithmic society is too opaque to be accountable for its behavior. An individual can be denied parole or denied credit, fired or not hired for reasons she will never know and that cannot be articulated. In the public sector, the opacity of algorithmic decisionmaking is particularly problematic both because governmental decisions may be especially weighty, and because democratically-elected governments bear special duties of accountability. Investigative journalists have recently exposed the dangerous impenetrability of algorithmic processes used in the criminal justice field – dangerous because the predictions they make can be both erroneous and unfair, with none the wiser.
We set out to test the limits of transparency around governmental deployment of big data analytics, focusing our investigation on local and state government use of predictive algorithms. It is here, in local government, that algorithmically-determined decisions can be most directly impactful. And it is here that stretched agencies are most likely to hand over the analytics to private vendors, which may make design and policy choices out of sight of the client agencies, the public, or both. To see just how impenetrable the resulting “black box” algorithms are, we filed 42 open records requests in 23 states seeking essential information about six predictive algorithm programs. We selected the most widely-used and well-reviewed programs, including those developed by for-profit companies, nonprofits, and academic/private sector partnerships. The goal was to see if, using the open records process, we could discover what policy judgments these algorithms embody, and could evaluate their utility and fairness.
To do this work, we identified what meaningful “algorithmic transparency” entails. We found that in almost every case, it wasn’t provided. Over-broad assertions of trade secrecy were a problem. But contrary to conventional wisdom, they were not the biggest obstacle. It will not usually be necessary to release the code used to execute predictive models in order to dramatically increase transparency. We conclude that publicly-deployed algorithms will be sufficiently transparent only if (1) governments generate appropriate records about their objectives for algorithmic processes and subsequent implementation and validation; (2) government contractors reveal to the public agency sufficient information about how they developed the algorithm; and (3) public agencies and courts treat trade secrecy claims as the limited exception to public disclosure that the law requires. Although it would require a multi-stakeholder process to develop best practices for record generation and disclosure, we present what we believe are eight principal types of information that such records should ideally contain….(More)”.
Katie Fletcher, Tesfay Woldemariam and Fred Stolle at EcoWatch: “No single person could ever hope to count the world’s trees. But a crowd of them just counted the world’s drylands forests—and, in the process, charted forests never before mapped, cumulatively adding up to an area equivalent in size to the Amazon rainforest.
Current technology enables computers to automatically detect forest area through satellite data in order to adequately map most of the world’s forests. But drylands, where trees are fewer and farther apart, stymied these modern methods. To measure the extent of forests in drylands, which make up more than 40 percent of the land surface on Earth, researchers from the UN Food and Agriculture Organization, the World Resources Institute and several universities and organizations had to come up with unconventional techniques. Foremost among these was turning to residents, who contributed their expertise through local map-a-thons….
Google Earth collects satellite data from several satellites with a variety of resolutions and technical capacities. The dryland satellite imagery collection compiled by Google from various providers, including Digital Globe, is of particularly high quality, as desert areas have little cloud cover to obstruct the views. So while it is difficult for algorithms to detect non-dominant land cover, the human eye has no problem distinguishing trees in these landscapes. Using this advantage, the scientists decided to visually count trees in hundreds of thousands of high-resolution images to determine overall dryland tree cover….
Armed with the quality images from Google that allowed researchers to see objects as small as half a meter (about 20 inches) across, the team divided the global dryland images into 12 regions, each with a regional partner to lead the counting assessment. The regional partners in turn recruited local residents with practical knowledge of the landscape to identify content in the sample imagery. These volunteers would come together in participatory mapping workshops, known colloquially as “map-a-thons.”…
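To make the estimation step concrete, here is a minimal sketch, assuming hypothetical plot fractions and region areas, of how visually interpreted sample plots could be aggregated into a regional tree-cover estimate; the actual FAO/WRI assessment uses a more elaborate sampling design and per-plot protocols than this toy example.

```python
# Illustrative sketch only: aggregating visually interpreted sample plots into a
# regional tree-cover estimate. The plot values, region names and areas below are
# hypothetical, and the real assessment uses a more elaborate sampling design.

from statistics import mean, stdev
from math import sqrt

# Each sample plot records the fraction of its area interpreters marked as tree-covered.
sample_plots = {
    "Sahel":           [0.12, 0.05, 0.22, 0.08, 0.15, 0.03, 0.18],
    "Southern Africa": [0.35, 0.28, 0.41, 0.30, 0.25, 0.38, 0.33],
}

# Hypothetical region areas, in square kilometres.
region_areas = {"Sahel": 3_000_000, "Southern Africa": 1_500_000}

def estimate_tree_cover(plot_fractions, region_area_km2):
    """Estimate total tree-covered area and a rough standard error for one region."""
    p = mean(plot_fractions)                                # mean tree-cover fraction
    se = stdev(plot_fractions) / sqrt(len(plot_fractions))  # standard error of that mean
    return p * region_area_km2, se * region_area_km2

for region, fractions in sample_plots.items():
    area, err = estimate_tree_cover(fractions, region_areas[region])
    print(f"{region}: ~{area:,.0f} km² tree cover (±{err:,.0f} km²)")
```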
Utilizing local landscape knowledge not only improved the map quality but also created a sense of ownership within each region. The map-a-thon participants have access to the open source tools and can now use these data and results to better engage around land use changes in their communities. Local experts, including forestry offices, can also use this easily accessible application to continue monitoring in the future.
Global Forest Watch uses medium resolution satellites (30 meters or about 98 feet) and sophisticated algorithms to detect near-real time deforestation in densely forested areas. The dryland tree cover maps complement Global Forest Watch by providing the capability to monitor non-dominant tree cover and small-scale, slower-moving events like degradation and restoration. Mapping forest change at this level of detail is critical both for guiding land decisions and enabling government and business actors to demonstrate that their pledges are being fulfilled, even over short periods of time.
The data documented by local participants will enable scientists to do many more analyses on both natural and man-made land changes including settlements, erosion features and roads. Mapping the tree cover in drylands is just the beginning….(More)”.
Nishan Degnarain and Steve Adler at WEF: “We have collected more data on our oceans in the past two years than in the history of the planet.
There has been a proliferation of remote and near sensors above, on, and beneath the oceans. New low-cost micro satellites ring the earth and can record what happens below daily. Thousands of tidal buoys follow currents, transmitting ocean temperature, salinity, acidity and current speed every minute. Undersea autonomous drones photograph and map the continental shelf and seabed, explore deep sea volcanic vents, and can help discover mineral and rare earth deposits.
The volume, diversity and frequency of data are increasing as the cost of sensors falls, new low-cost satellites are launched, and an emerging drone sector begins to offer new insights into our oceans. In addition, new processing capabilities are enhancing the value we receive from such data on the biological, physical and chemical properties of our oceans.
Yet it is not enough.
We need much more data at higher frequency, quality, and variety to understand our oceans to the degree we already understand the land. Less than 5% of the oceans are comprehensively monitored. We need more data collection capacity to unlock the sustainable development potential of the oceans and protect critical ecosystems.
More data from satellites will help identify illegal fishing activity, track plastic pollution, and detect whales and prevent vessel collisions. More data will help speed the placement of offshore wind and tide farms, improve vessel telematics, develop smart aquaculture, protect urban coastal zones, and enhance coastal tourism.
Unlocking the ocean data market
But we’re not there yet.
This new wave of data innovation is constrained by inadequate data supply, demand, and governance. The supply of existing ocean data is locked by paper records, old formats, proprietary archives, inadequate infrastructure, and scarce ocean data skills and capacity.
The market for ocean observation is driven by science, and science isn’t adequately funded.
To unlock future commercial potential, new financing mechanisms are needed to create market demand that will stimulate greater investments in new ocean data collection, innovation and capacity.
Efforts such as the Financial Stability Board’s Taskforce on Climate-related Financial Disclosure have gone some way to raise awareness and create demand for such ocean-related climate risk data.
Much of the data produced is collected by nations, universities and research organizations, NGOs, and the private sector, but only a small percentage is Open Data and widely available.
Data creates more value when it is widely utilized and well governed. Helping to organize and improve data infrastructure, quality, integrity, and availability is a requirement for achieving new ocean data-driven business models and markets. New Ocean Data Governance models, standards, platforms, and skills are urgently needed to stimulate new market demand for innovation and sustainable development….(More)”.
Joshua Howgego at New Scientist: “KwaNdengezi is a beguiling neighbourhood on the outskirts of Durban. Its ramshackle dwellings are spread over rolling green hills, with dirt roads winding in between. Nothing much to put it on the map. Until last year, that is, when weird signs started sprouting, nailed to doors, stapled to fences or staked in front of houses. Each consisted of three seemingly random words. Cutaway.jazz.wording said one; tokens.painted.enacted read another.
In a neighbourhood where houses have no numbers and the dirt roads no names, these signs are the fastest way for ambulances to locate women going into labour who need ferrying to the nearest hospital. The hope is that signs like this will save lives and be adopted elsewhere. For the residents of KwaNdengezi in South Africa aren’t alone – recent estimates suggest that only 80 or so countries worldwide have an up-to-date addressing system. And even where one exists, it isn’t always working as well as it could.
Poor addresses aren’t simply confusing: they frustrate businesses and can shave millions of dollars off economic output. That’s why there’s a growing feeling that we need to reinvent the address – and those makeshift three-word signs are just the beginning.
In itself, an address is a simple thing: its purpose is to unambiguously identify a point on Earth’s surface. But it also forms a crucial part of the way societies are managed. Governments use lists of addresses to work out how many people they need to serve; without an address by your name, you can’t apply for a passport…(More)”.
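As a toy illustration of the general idea behind such three-word addresses (not the system behind the signs described above, which the article does not detail), the sketch below divides the Earth's surface into small grid cells and names each cell with a word triple; the word list and cell size are stand-ins chosen only for demonstration.

```python
# Toy illustration of three-word addressing: divide the world into small grid cells
# and give each cell a word triple. NOT the algorithm behind the signs in the article;
# the word list and cell size below are stand-ins. A real system needs tens of
# thousands of words so that triples uniquely cover every cell; with only ten words
# this toy encoding is not unique.

WORDS = ["cutaway", "jazz", "wording", "tokens", "painted", "enacted",
         "river", "lamp", "orbit", "pebble"]

CELL_DEG = 0.0001  # grid cell edge in degrees, roughly 10 m at the equator

def cell_index(lat: float, lon: float) -> int:
    """Map a latitude/longitude to the integer index of its grid cell."""
    row = int((lat + 90) / CELL_DEG)
    col = int((lon + 180) / CELL_DEG)
    cols_per_row = int(360 / CELL_DEG)
    return row * cols_per_row + col

def three_words(lat: float, lon: float) -> str:
    """Encode the cell index as three 'digits' over the word list."""
    n = len(WORDS)
    idx = cell_index(lat, lon)
    w1, w2, w3 = (idx // (n * n)) % n, (idx // n) % n, idx % n
    return f"{WORDS[w1]}.{WORDS[w2]}.{WORDS[w3]}"

print(three_words(-29.8587, 30.6800))  # a point near Durban, South Africa
```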
Courtney Kennedy and George Elliott Morris at Pew Research: “People polled by telephone are slightly less likely than those interviewed online to say their personal finances are in “poor shape” (14% versus 20%, respectively), a Pew Research Center survey experiment has found.
The experiment, conducted in February and March, is part of a line of research at the Center looking into “mode effects” – in this case, whether findings from self-administered web surveys differ from those of interviewer-administered phone surveys.
In particular, survey researchers have long known that Americans may be more likely to give a “socially desirable” response (and less likely to give a stigmatized or undesirable answer) in an interviewer-administered survey than in one that is self-administered. Mode effects can also result from other differences in survey design, such as seeing the answer choices visually on the web versus hearing them over the phone.
The Center’s experiment randomly assigned respondents to a survey method (online or telephone). Although it found that political questions, such as whether respondents approve of President Donald Trump, don’t elicit significant mode effects, some other, more personal items clearly do. When asked whether or not they had received financial assistance from a family member in the past year, for instance, just 15% of phone respondents say yes. That share is significantly higher (26%) among web respondents….
While the findings from this experiment suggest that self-administered surveys may be more accurate than interviewer-administered approaches as a way to measure financial stress (all else being equal), this does not mean that past telephone-based research arrived at erroneous conclusions regarding financial stress – for example, what predicts it or how the likelihood varies across subgroups. That said, researchers studying financial stress should consider that phone surveys have, at least to some degree, been understating the share of Americans experiencing economic hardship….(More)”.
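For readers who want to gauge whether a gap of this size is more than noise, here is a minimal sketch of a two-proportion z-test applied to the reported 20% versus 14% figures; the sample sizes are assumptions made purely for illustration, and Pew's actual design and weighting are more involved than this simple comparison.

```python
# Rough check of whether a web/phone gap like the one reported (20% vs 14% saying
# their finances are in "poor shape") would be statistically meaningful. The sample
# sizes below are assumptions for illustration only.

from math import sqrt, erf

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# Assume ~1,500 respondents per mode.
z, p = two_proportion_z(0.20, 1500, 0.14, 1500)
print(f"z = {z:.2f}, p = {p:.4f}")
```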
Note: Survey methodology can be found here, and the topline is available here.
White Paper Series at the WebFoundation: “To achieve our vision of digital equality, we need to understand how new technologies are shaping society: where they present opportunities to make people’s lives better, and indeed where they threaten to create harm. To this end, we have commissioned a series of white papers examining three key digital trends: artificial intelligence, algorithms and control of personal data. The papers focus on low and middle-income countries, which are all too often overlooked in debates around the impacts of emerging technologies.
The series addresses each of these three digital issues, looking at how they are impacting people’s lives and identifying steps that governments, companies and civil society organisations can take to limit the harms, and maximise benefits, for citizens.
Download the white papers
- Artificial Intelligence: The road ahead in low and middle-income countries
- Algorithmic Accountability: Applying the concept to different country contexts
- Personal data: An overview of low and middle-income countries
We will use these white papers to refine our thinking and set our work agenda on digital equality in the years ahead. We are sharing them openly with the hope they benefit others working towards our goals and to amplify the limited research currently available on digital issues in low and middle-income countries. We intend the papers to foster discussion about the steps we can take together to ensure emerging digital technologies are used in ways that benefit people’s lives, whether they are in Los Angeles or Lagos….(More)”.
Blog by Stefaan G. Verhulst: “At a time of open and big data, data-led and evidence-based policy making has great potential to improve problem solving but will have limited, if not harmful, effects if the underlying components are riddled with bad data.
Why should we care about bad data? What do we mean by bad data? And what are the determining factors contributing to bad data that if understood and addressed could prevent or tackle bad data? These questions were the subject of my short presentation during a recent webinar on Bad Data: The Hobgoblin of Effective Government, hosted by the American Society for Public Administration and moderated by Richard Greene (Partner, Barrett and Greene Inc.). Other panelists included Ben Ward (Manager, Information Technology Audits Unit, California State Auditor’s Office) and Katherine Barrett (Partner, Barrett and Greene Inc.). The webinar was a follow-up to the excellent Special Issue of Governing on Bad Data written by Richard and Katherine….(More)”
Paper by Susan Nevelow Mart: “…examines the legal bases of the public’s right to access government information, reviews the types of information that have recently been removed from the Internet, and analyzes the rationales given for the removals. She suggests that the concerted use of the Freedom of Information Act by public interest groups and their constituents is a possible method of returning the information to the Internet….(More)”.
Thomas Hardjono and Pete Teigen providing “A Blueprint Discussion on Identity“: Data breaches, identity theft, and trust erosion are all identity-related issues that citizens and government organizations face with increased frequency and magnitude. The rise of blockchain technology, and related distributed ledger technology, is generating significant interest in how a blockchain infrastructure can enable better identity management across a variety of industries. Historically, governments have taken the primary role in issuing certain types of identities (e.g. social security numbers, driver licenses, and passports) based on strong authentication proofing of individuals using government-vetted documentation – a process often referred to as on-boarding. This identity proofing and on-boarding process presents a challenge to government because it is still heavily paper-based, making it cumbersome, time-consuming and dependent on siloed, decades-old, and inefficient systems.
Another aspect of the identity challenge is the risk of compromising an individual’s digital identifiers and government-issued credentials through identity theft. With so many vital services (e.g. banking, health services, transport, residency) dependent on trusted, government-vetted credentials, any compromise of that identity can result in a significant negative impact to the individual and be difficult to repair. Compounding the problem, many instances of identity theft go undetected and are only discovered after the damage is done.
Increasing the efficiency of the identity vetting process while also enhancing transparency would help mitigate those identity challenges. Blockchain technology promises to do just that. Through the use of multiple computer systems (nodes) interconnected in a peer-to-peer (P2P) network, a shared common view of the information in the network keeps agreed data synchronized. A trusted ledger then exists in a distributed manner across the network, one that is inherently accountable to all network participants, thereby providing transparency and trustworthiness.
Using that trusted distributed ledger, identity-related data vetted by one government entity, together with that data’s location (producing a link in the chain), can be shared with other members of the network as needed, allowing members to instantaneously accept an identity without having to duplicate the identity vetting process. The more sophisticated blockchain systems possess this “record-link-fetch” feature as one of their inherent building blocks. Additional efficiency-enhancing features allow downstream processes to use that identity assertion as automated input to enable “smart contracts”, discussed below.
Thus, the combination of Government vetting of individual data, together with the embedded transparency and accountability capabilities of blockchain systems, allow relying parties (e.g. businesses, online merchants, individuals, etc.) to obtain higher degrees of assurance regarding the identity of other parties with whom they are conducting transactions…..
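To illustrate the general “record-link-fetch” pattern described above, here is a minimal sketch of a hash-linked attestation ledger; the class, field names and agencies are hypothetical, and a production blockchain adds consensus, digital signatures and permissioning that this toy deliberately omits.

```python
# Minimal sketch of the "record-link-fetch" idea: an agency records a hash-linked
# attestation that it has vetted an identity, and a relying party verifies the chain
# rather than re-running the vetting. Field names and structure are illustrative,
# not a real blockchain implementation (no consensus, signatures or permissioning).

import hashlib, json, time

def sha256(data: dict) -> str:
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()

class AttestationLedger:
    def __init__(self):
        self.chain = []

    def record(self, issuer: str, subject_id_hash: str, claim: str) -> dict:
        """Append an attestation block linked to the previous block's hash."""
        block = {
            "index": len(self.chain),
            "timestamp": time.time(),
            "issuer": issuer,
            "subject": subject_id_hash,   # only a hash of the identifier, not raw PII
            "claim": claim,
            "prev_hash": self.chain[-1]["hash"] if self.chain else "0" * 64,
        }
        block["hash"] = sha256({k: v for k, v in block.items() if k != "hash"})
        self.chain.append(block)
        return block

    def verify(self) -> bool:
        """Check that every block's hash and back-link are intact."""
        for i, block in enumerate(self.chain):
            body = {k: v for k, v in block.items() if k != "hash"}
            if block["hash"] != sha256(body):
                return False
            if i > 0 and block["prev_hash"] != self.chain[i - 1]["hash"]:
                return False
        return True

ledger = AttestationLedger()
subject = hashlib.sha256(b"passport:X1234567").hexdigest()
ledger.record("Passport Office", subject, "identity vetted against source documents")
ledger.record("Tax Authority", subject, "taxpayer registration confirmed")
print("ledger intact:", ledger.verify())
```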
Identity and membership management solutions already exist and can be applied to private (permissioned) blockchain systems. Features within these solutions should be evaluated for their suitability for blockchain systems. Specifically, these four steps can enable government to start using blockchain to address identity challenges:
- Evaluate existing identity and membership management solutions in order to identify features that apply to permissioned blockchain systems in the short term.
- Experiment with integrating these existing solutions with open source blockchain implementations.
- Create a roadmap (with a 2-3 year horizon) for identity and membership management for smart contracts within permissioned blockchains.
- Develop a long term plan (a 5 year horizon) for addressing identity and membership management for permissionless (public) blockchain systems. Here again, use open source blockchain implementations as the basis to understand the challenges in the identity space for permissionless blockchains….(More)”.