Stefaan Verhulst
Leslie K. John at Harvard Business Review: “…People are bad at making decisions about their private data. They misunderstand both costs and benefits. Moreover, natural human biases interfere with their judgment. And whether by design or accident, major platform companies and data aggregators have structured their products and services to exploit those biases, often in subtle ways.
Impatience. People tend to overvalue immediate costs and benefits and underweight those that will occur in the future. They want $9 today rather than $10 tomorrow. On the internet, this tendency manifests itself in a willingness to reveal personal information for trivial rewards. Free quizzes and surveys are prime examples. …
The endowment effect. In theory people should be willing to pay the same amount to buy a good as they’d demand when selling it. In reality, people typically value a good less when they have to buy it. A similar dynamic can be seen when people make decisions about privacy….
Illusion of control. People share a misapprehension that they can control chance processes. This explains why, for example, study subjects valued lottery tickets that they had personally selected more than tickets that had been randomly handed to them. People also confuse the superficial trappings of control with real control….
Desire for disclosure. This is not a decision-making bias. Rather, humans have what appears to be an innate desire, or even need, to share with others. After all, that’s how we forge relationships — and we’re inherently social creatures…
False sense of boundaries. In off-line contexts, people naturally understand and comply with social norms about discretion and interpersonal communication. Though we may be tempted to gossip about someone, the norm “don’t talk behind people’s backs” usually checks that urge. Most of us would never tell a trusted confidant our secrets when others are within earshot. And people’s reactions in the moment can make us quickly scale back if we disclose something inappropriate….(More)”.
Micah Lee at The Intercept: “The United Nations accidentally published passwords, internal documents, and technical details about websites when it misconfigured popular project management service Trello, issue tracking app Jira, and office suite Google Docs.
The mistakes made sensitive material available online to anyone with the proper link, rather than only to specific users who should have access. Affected data included credentials for a U.N. file server, the video conferencing system at the U.N.’s language school, and a web development environment for the U.N.’s Office for the Coordination of Humanitarian Affairs. Security researcher Kushagra Pathak discovered the accidental leak and notified the U.N. about what he found a little over a month ago. As of today, much of the material appears to have been taken down.
In an online chat, Pathak said he found the sensitive information by running searches on Google. The searches, in turn, produced public Trello pages, some of which contained links to the public Google Docs and Jira pages.
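To make the technique concrete: searches of this kind typically combine Google's `site:` and `inurl:` operators to surface public boards tied to an organization. The queries below are hypothetical illustrations, not the ones Pathak actually ran:

```python
from urllib.parse import quote_plus

# Hypothetical "Google dorking" queries for public Trello boards,
# illustrative only, not the researcher's actual searches.
queries = [
    'site:trello.com "United Nations"',    # public boards mentioning the U.N.
    'site:trello.com inurl:/b/ "un.org"',  # board URLs referencing U.N. domains
]
for q in queries:
    print("https://www.google.com/search?q=" + quote_plus(q))
```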
Trello projects are organized into “boards” that contain lists of tasks called “cards.” Boards can be public or private. After finding one public Trello board run by the U.N., Pathak found additional public U.N. boards by using “tricks like by checking if the users of one Trello board are also active on some other boards and so on.” One U.N. Trello board contained links to an issue tracker hosted on Jira, which itself contained even more sensitive information. Pathak also discovered links to documents hosted on Google Docs and Google Drive that were configured to be accessible to anyone who knew their web addresses. Some of these documents contained passwords….Here is just some of the sensitive information that the U.N. accidentally made accessible to anyone who Googled for it:
- A social media team promoting the U.N.’s “peace and security” efforts published credentials to access a U.N. remote file access, or FTP, server in a Trello card coordinating promotion of the International Day of United Nations Peacekeepers. It is not clear what information was on the server; Pathak said he did not connect to it.
- The U.N.’s Language and Communication Programme, which offers language courses at U.N. Headquarters in New York City, published credentials for a Google account and a Vimeo account. The program also exposed, on a publicly visible Trello board, credentials for a test environment for a human resources web app. It also made public a Google Docs spreadsheet, linked from a public Trello board, that included a detailed meeting schedule for 2018, along with passwords to remotely access the program’s video conference system to join these meetings.
- One public Trello board used by the developers of Humanitarian Response and ReliefWeb, both websites run by the U.N.’s Office for the Coordination of Humanitarian Affairs, included sensitive information like internal task lists and meeting notes. One public card from the board had a PDF, marked “for internal use only,” that contained a map of all U.N. buildings in New York City. …(More)”.
We have developed here a broad policy framework to address the digital threat to democracy, building upon basic principles to recommend a set of specific proposals.
Transparency: As citizens, we have the right to know who is trying to influence our political views and how they are doing it. We must have explicit disclosure about the operation of dominant digital media platforms — including:
- Real-time and archived information about targeted political advertising;
- Clear accountability for the social impact of automated decision-making;
- Explicit indicators for the presence of non-human accounts in digital media.
Privacy: As individuals with the right to personal autonomy, we must be given more control over how our data is collected, used, and monetized — especially when it comes to sensitive information that shapes political decision-making. A baseline data privacy law must include:
- Consumer control over data through stronger rights to access and removal;
- Transparency for users about the full extent of data usage, and meaningful consent;
- Stronger enforcement with resources and authority for agency rule-making.
Competition: As consumers, we must have meaningful options to find, send and receive information over digital media. The rise of dominant digital platforms demonstrates how market structure influences social and political outcomes. A new competition policy agenda should include:
- Stronger oversight of mergers and acquisitions;
- Antitrust reform including new enforcement regimes, levies, and essential services regulation;
- Robust data portability and interoperability between services.
No single-solution approach to the problem of digital disinformation is likely to change outcomes. … Awareness and education are the first steps toward organizing and action to build a new social contract for digital democracy….(More)”
Announcement by Bob Schultz at IBM: “The talent economy is one of the great outcomes of the digital era — and the ability to attract and develop the right talent has become a competitive advantage in most industries. According to a recent IBM study, which surveyed over 2,100 Chief Human Resource Officers, 33 percent of CHROs believe AI will revolutionize the way they do business over the next few years. In that same study, 65 percent of CEOs expect that people skills will have a strong impact on their businesses over the next several years. At IBM, we see AI as a tremendous untapped opportunity to transform the way companies attract, develop, and build the workforce for the decades ahead.
Consider this: The average hiring manager has hundreds of applicants a day for key positions and spends approximately six seconds on each resume. The ability to make the right decision without analytics and AI’s predictive abilities is limited and has the potential to create unconscious bias in hiring.
That is why today, I am pleased to announce the rollout of IBM Watson Recruitment’s Adverse Impact Analysis capability, which identifies instances of bias related to age, gender, race, education, or previous employer by assessing an organization’s historical hiring data and highlighting potential unconscious biases. This capability empowers HR professionals to take action against potentially biased hiring trends — and in the future, choose the most promising candidate based on the merit of their skills and experience alone. This announcement is part of IBM’s largest-ever AI toolset release, tailor-made for nine industries and professions where AI will play a transformational role….(More)”.
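IBM has not published how Adverse Impact Analysis works internally, but a standard heuristic for this kind of check is the “four-fifths rule” from U.S. EEOC guidance: flag any group whose selection rate falls below 80 percent of the highest group's rate. The sketch below illustrates that general technique, not IBM's implementation; the data and group labels are invented:

```python
from collections import Counter

def selection_rates(candidates):
    """candidates: list of (group, hired) pairs, with hired being 0 or 1."""
    applied, hired = Counter(), Counter()
    for group, was_hired in candidates:
        applied[group] += 1
        hired[group] += was_hired
    return {g: hired[g] / applied[g] for g in applied}

def flag_adverse_impact(candidates, threshold=0.8):
    rates = selection_rates(candidates)
    top = max(rates.values())
    # Flag any group whose selection rate is below 80% of the highest group's.
    return {g: round(r / top, 3) for g, r in rates.items() if r / top < threshold}

history = [("A", 1), ("A", 1), ("A", 0),
           ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
print(flag_adverse_impact(history))  # {'B': 0.375} -> group B flagged
```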
David Scharfenberg at the Boston Globe: “Years of research have shown that teenagers need their sleep. Yet high schools often start very early in the morning. Starting them later in Boston would require tinkering with elementary and middle school schedules, too — a Gordian knot of logistics, pulled tight by the weight of inertia, that proved impossible to untangle.
Until the computers came along.
Last year, the Boston Public Schools asked MIT graduate students Sébastien Martin and Arthur Delarue to build an algorithm that could do the enormously complicated work of changing start times at dozens of schools — and rerouting the hundreds of buses that serve them….
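The MIT model is far more sophisticated, but a toy version conveys the shape of the problem: assign start-time slots to schools so that a limited bus fleet can serve them all, while favoring later starts for high schools. Everything below (school names, slots, fleet size) is invented for illustration:

```python
from itertools import product

schools = {"HS-1": "high", "HS-2": "high", "ES-1": "elementary", "ES-2": "elementary"}
slots = ["7:30", "8:30", "9:30"]
buses_available = 2  # each bus can serve one school per time slot

def score(assignment):
    # Reward high schools that avoid the earliest start (teen sleep research).
    return sum(1 for school, slot in assignment.items()
               if schools[school] == "high" and slot != "7:30")

best = None
for combo in product(slots, repeat=len(schools)):
    assignment = dict(zip(schools, combo))
    # Feasible only if no slot needs more buses than the fleet has.
    demand = max(list(combo).count(s) for s in slots)
    if demand <= buses_available and (best is None or score(assignment) > score(best)):
        best = assignment

print(best)  # e.g. both high schools at 8:30, elementary schools filling 7:30
```

Real districts add bell-time constraints, routing distances, and equity targets on top of this, which is exactly why the full problem needed an algorithmic approach.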
The algorithm was poised to put Boston on the leading edge of a digital transformation of government. In New York, officials were using a regression analysis tool to focus fire inspections on the most vulnerable buildings. And in Allegheny County, Pa., computers were churning through thousands of health, welfare, and criminal justice records to help identify children at risk of abuse….
While elected officials tend to legislate by anecdote and oversimplify the choices that voters face, algorithms can chew through huge amounts of complicated information. The hope is that they’ll offer solutions we’ve never imagined — much as Google Maps, when you’re stuck in traffic, puts you on an alternate route, down streets you’ve never traveled.
Dataphiles say algorithms may even allow us to filter out the human biases that run through our criminal justice, social service, and education systems. And the MIT algorithm offered a small window into that possibility. The data showed that schools in whiter, better-off sections of Boston were more likely to have the school start times that parents prize most — between 8 and 9 a.m. The mere act of redistributing start times, if aimed at solving the sleep deprivation problem and saving money, could bring some racial equity to the system, too.
Or, the whole thing could turn into a political disaster.
District officials expected some pushback when they released the new school schedule on a Thursday night in December, with plans to implement in the fall of 2018. After all, they’d be messing with the schedules of families all over the city.
But no one anticipated the crush of opposition that followed. Angry parents signed an online petition and filled the school committee chamber, turning the plan into one of the biggest crises of Mayor Marty Walsh’s tenure. The city summarily dropped it. The failure would eventually play a role in the superintendent’s resignation.
It was a sobering moment for a public sector increasingly turning to computer scientists for help in solving nagging policy problems. What had gone wrong? Was it a problem with the machine? Or was it a problem with the people — both the bureaucrats charged with introducing the algorithm to the public, and the public itself?…(More)”
Paper by Paul W. Mungai: “Open data—including open government data (OGD)—has become a topic of prominence during the last decade. However, most governments have not realised the desired value streams or outcomes from OGD. The Kenya Open Data Initiative (KODI), a Government of Kenya initiative, is no exception with some moments of success but also sustainability struggles. Therefore, the focus for this paper is to understand the causal mechanisms that either enable or constrain institutionalisation of OGD initiatives. Critical realism is ideally suited as a paradigm to identify such mechanisms, but guides to its operationalisation are few. This study uses the operational approach of Bygstad, Munkvold & Volkoff’s six‐step framework, a hybrid approach that melds concepts from existing critical realism models with the idea of affordances. The findings suggest that data demand and supply mechanisms are critical in institutionalising KODI and that, underpinning basic data‐related affordances, are mechanisms engaging with institutional capacity, formal policy, and political support. It is the absence of such elements in the Kenya case which explains why it has experienced significant delays…(More)”.
This book illustrates various aspects and dimensions of cognitive cities. Following a comprehensive introduction, the first part of the book explores conceptual considerations for the design of cognitive cities, while the second part focuses on concrete applications. The contributions provide an overview of the wide diversity of cognitive city conceptualizations and help readers to better understand why it is important to think about the design of our cities. The book adopts a transdisciplinary approach since the cognitive city concept can only be achieved through cooperation across different academic disciplines (e.g., economics, computer science, mathematics) and between research and practice. More and more people live in a growing number of ever-larger cities. As such, it is important to reflect on how cities need to be designed to provide their inhabitants with the means and resources for a good life. The cognitive city is an emerging, innovative approach to address this need….(More)”.
Medium Article by Stefaan G. Verhulst: “…Yet even as we see more data steward-type roles defined within companies, there exists considerable confusion about just what they should be doing. In particular, we have noticed a tendency to conflate the roles of data stewards with those of individuals or groups who might be better described as chief privacy, chief data or security officers. This slippage is perhaps understandable, but our notion of the role is somewhat broader. While privacy and security are of course key components of trusted and effective data collaboratives, the real goal is to leverage private data for broader social goals — while preventing harm.
So what are the necessary attributes of data stewards? What are their roles, responsibilities, and goals? And how can they be most effective, both as champions of sharing within organizations and as facilitators for leveraging data with external entities? These are some of the questions we seek to address in our current research, and below we outline some key preliminary findings.
The following “Three Goals” and “Five Functions” can help define the aspirations of data stewards and what is needed to achieve them. While clearly only a start, these attributes can help guide companies currently considering setting up sharing initiatives or establishing data steward-like roles.
The Three Goals of Data Stewards

- Collaborate: Data stewards are committed to working and collaborating with others, with the goal of unlocking the inherent value of data when a clear case exists that it serves the public good and that it can be used in a responsible manner.
- Protect: Data stewards are committed to managing private data ethically, which means sharing information responsibly, and preventing harm to potential customers, users, corporate interests, the wider public and of course those individuals whose data may be shared.
- Act: Data stewards are committed to acting proactively to identify partners who may be in a better position to unlock value and insights contained within privately held data.
…(More)”.
Darrell M. West at Brookings: “In this paper, I examine five AI ethical dilemmas: weapons and military-related applications, law and border enforcement, government surveillance, issues of racial bias, and social credit systems. I discuss how technology companies are handling these issues and the importance of having principles and processes for addressing these concerns. I close by noting ways to strengthen ethics in AI-related corporate decisions.
Briefly, I argue it is important for firms to undertake several steps in order to ensure that AI ethics are taken seriously:
- Hire ethicists who work with corporate decisionmakers and software developers
- Develop a code of AI ethics that lays out how various issues will be handled
- Have an AI review board that regularly addresses corporate ethical questions
- Develop AI audit trails that show how various coding decisions have been made (a minimal sketch of such a record follows this excerpt)
- Implement AI training programs so staff operationalizes ethical considerations in their daily work, and
- Provide a means for remediation when AI solutions inflict harm or damages on people or organizations….(More)”.
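To give the audit-trail recommendation some texture, one possible shape for a decision record is sketched below; it captures the model version, the inputs the model saw, its output, and a short rationale. This is one illustrative design, not a standard, and all field names are invented:

```python
import json
from datetime import datetime, timezone

def audit_record(model_id, model_version, inputs, output, rationale):
    # One possible shape for an AI audit-trail entry (illustrative only).
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,        # the features the model actually saw
        "output": output,        # the decision or score it produced
        "rationale": rationale,  # e.g., top feature attributions
    }, indent=2)

print(audit_record(
    model_id="resume-screener", model_version="2.3.1",
    inputs={"years_experience": 4, "degree": "BS"},
    output={"shortlisted": False, "score": 0.41},
    rationale={"top_factors": ["years_experience"]},
))
```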
Sarah Krouse at the Wall Street Journal: “Emergency call operators will soon have an easier time pinpointing the whereabouts of Android phone users.
Google has struck a deal with T-Mobile US to pipe location data from cellphones with Android operating systems in the U.S. to emergency call centers, said Fiona Lee, who works on global partnerships for Android emergency location services.
The move is a sign that smartphone operating system providers and carriers are taking steps to improve the quality of location data they send when customers call 911. Locating callers has become a growing problem for 911 operators as cellphone usage has proliferated. Wireless devices now make 80% or more of the 911 calls placed in some parts of the U.S., according to the trade group National Emergency Number Association. There are roughly 240 million calls made to 911 annually.
While landlines deliver an exact address, cellphones typically register only an estimated location provided by wireless carriers that can be as wide as a few hundred yards and imprecise indoors.
That has meant that while many popular applications like Uber can pinpoint users, 911 call takers can’t always do so. Technology giants such as Google and Apple Inc. that run phone operating systems need a direct link to the technology used within emergency call centers to transmit precise location data….
Google currently offers emergency location services in 14 countries around the world by partnering with carriers and companies that are part of local emergency communications infrastructure. Its location data is based on a combination of inputs, from Wi-Fi and GPS to device sensors and mobile network information.
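Google has not detailed how those inputs are weighted, but a common fusion approach is to weight each position estimate inversely to its reported accuracy radius, so a tight GPS fix counts for more than a coarse cell-tower estimate. The sketch below is a simplified illustration of that idea, not Android's actual algorithm:

```python
def fuse_location(estimates):
    """estimates: list of (lat, lon, accuracy_m); smaller accuracy is better."""
    # Inverse-variance weighting: treat the accuracy radius as a standard
    # deviation, so each source's weight is 1 / accuracy^2.
    weights = [1.0 / (acc ** 2) for _, _, acc in estimates]
    total = sum(weights)
    lat = sum(w * e[0] for w, e in zip(weights, estimates)) / total
    lon = sum(w * e[1] for w, e in zip(weights, estimates)) / total
    return lat, lon

# GPS (good outdoors), Wi-Fi (good indoors), cell tower (coarse):
print(fuse_location([(40.7128, -74.0060, 8.0),
                     (40.7130, -74.0058, 15.0),
                     (40.7100, -74.0100, 300.0)]))
```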
Jim Lake, director at the Charleston County Consolidated 9-1-1 Center, participated in a pilot of Google’s emergency location services and said it made it easier to find people who didn’t know their location, particularly because the area draws tourists.
“On a day-to-day basis, most people know where they are, but when they don’t, usually those are the most horrifying calls and we need to know right away,” Mr. Lake said.
In June, Apple said it had partnered with RapidSOS to send iPhone users’ location information to 911 call centers….(More)”