Resource Guide to Data Governance and Security


National Neighborhood Indicators Partnership (NNIP): “Any organization that collects, analyzes, or disseminates data should establish formal systems to manage data responsibly, protect confidentiality, and document data files and procedures. In doing so, organizations will build a reputation for integrity and facilitate appropriate interpretation and data sharing, factors that contribute to an organization’s long-term sustainability.

To help groups improve their data policies and practices, this guide assembles lessons from the experiences of partners in the National Neighborhood Indicators Partnership network and similar organizations. The guide presents advice and annotated resources for the three parts of a data governance program: protecting privacy and human subjects, ensuring data security, and managing the data life cycle. While applicable to non-sensitive data, the guide is geared toward managing confidential data, such as data used in integrated data systems or Pay-for-Success programs….(More)”.

The Promise and Peril of the Digital Knowledge Loop


Excerpt of Albert Wenger’s draft book World After Capital: “The zero marginal cost and universality of digital technologies are already impacting the three phases of learning, creating and sharing, giving rise to a Digital Knowledge Loop. This Digital Knowledge Loop holds both amazing promise and great peril, as can be seen in the example of YouTube.

YouTube has experienced astounding growth since its release in beta form in 2005. People around the world now upload over 100 hours of video content to YouTube every minute. It is difficult to grasp just how much content that is. If you were to spend 100 years watching YouTube twenty-four hours a day, you still wouldn’t be able to watch all the video that people upload in the course of a single week. YouTube contains amazing educational content on topics as diverse as gardening and theoretical math. Many of those videos show the promise of the Digital Knowledge Loop. Consider, for example, Destin Sandlin, the creator of the Smarter Every Day series of videos. Destin is interested in all things science. When he learns something new, such as the make-up of butterfly wings, he creates an engaging new video sharing it with the world. But the peril of the Digital Knowledge Loop is right there as well: YouTube is also full of videos that peddle conspiracies, spread misinformation, and even incite outright hate.

Both the promise and the peril are made possible by the same characteristics of YouTube: All of the videos are available for free to anyone in the world (except for those countries in which YouTube is blocked). They are also available 24×7. And they become available globally the second someone publishes a new one. Anybody can publish a video. All you need to access these videos is an Internet connection and a smartphone—you don’t even need a laptop or other traditional computer. That means already today two to three billion people, almost half of the world’s population, have access to YouTube and can participate in the Digital Knowledge Loop for good and for bad.

These characteristics, which draw on the underlying capabilities of digital technology, are also found in other systems that similarly show the promise and peril of the Digital Knowledge Loop.

Wikipedia, the collectively produced online encyclopedia, is another great example. Here is how it works at its most promising: Someone reads an entry and learns the method used by Pythagoras to approximate the number pi. They then go off and create an animation that illustrates this method. Finally, they share the animation by publishing it back to Wikipedia, thus making it easier for more people to learn. Wikipedia entries result from a large collaboration and ongoing revision process, with only a single entry per topic visible at any given time (although you can examine both the history of the page and the conversations about it). What makes this possible is a piece of software known as a wiki that keeps track of all the historical edits [58]. When that process works well, it raises the quality of entries over time. But when there is a coordinated effort at manipulation, or insufficient editing resources, Wikipedia too can spread misinformation instantly and globally.

Wikipedia illustrates another important aspect of the Digital Knowledge Loop: it allows individuals to participate in extremely small or minor ways. If you wish, you can contribute to Wikipedia by fixing a single typo. In fact, the minimal contribution unit is just one letter! I have not yet contributed anything of length to Wikipedia, but I have fixed probably a dozen or so typos. That doesn’t sound like much, but if you get ten thousand people to fix a typo every day, that’s 3.65 million typos a year. Let’s assume that a single person takes two minutes on average to discover and fix a typo. It would take nearly fifty people working full time for a year (2500 hours) to fix 3.65 million typos.
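Wenger’s back-of-envelope estimate can be checked in a few lines. This is a quick sketch of the arithmetic the excerpt describes, using only the assumptions stated in the text (10,000 people fixing one typo a day, two minutes per fix, and a 2,500-hour full-time year):

```python
# Back-of-envelope check of the Wikipedia typo arithmetic in the excerpt.
# All figures come from the text itself; nothing here is measured data.

people = 10_000
typos_per_year = people * 365                  # one fix per person per day
minutes_per_fix = 2
total_hours = typos_per_year * minutes_per_fix / 60
full_time_year_hours = 2_500
full_time_equivalents = total_hours / full_time_year_hours

print(typos_per_year)                     # 3650000
print(round(full_time_equivalents, 1))    # 48.7
```

The result, roughly 49 full-time person-years, matches the “nearly fifty people working full time for a year” figure in the excerpt.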

Small contributions by many that add up are only possible in the Digital Knowledge Loop. The Wikipedia spelling correction example shows the power of such contributions. Their peril can be seen in systems such as Twitter and Facebook, where the smallest contributions are Likes and Retweets or Reposts to one’s friends or followers. While these tiny actions can amplify high quality content, they can just as easily spread mistakes, rumors and propaganda. The impact of these information cascades ranges from viral jokes to swaying the outcomes of elections and has even led to major outbreaks of violence.

Some platforms even make it possible for people to passively contribute to the Digital Knowledge Loop. The app Waze is a good example. …The promise of the Digital Knowledge Loop is broad access to a rapidly improving body of knowledge. The peril is a fragmented post-truth society constantly in conflict. Both of these possibilities are enabled by the same fundamental characteristics of digital technologies. And once again we see clearly that technology by itself does not determine the future…(More).

Is the Government More Entrepreneurial Than You Think?


 Freakonomics Radio (Podcast): We all know the standard story: our economy would be more dynamic if only the government would get out of the way. The economist Mariana Mazzucato says we’ve got that story backward. She argues that the government, by funding so much early-stage research, is hugely responsible for big successes in tech, pharma, energy, and more. But the government also does a terrible job of claiming credit — and, more important, of getting a return on its investment….

Quote:

MAZZUCATO: “…And I’ve been thinking about this especially around the big data and the kind of new questions around privacy with Facebook, etc. Instead of having a situation where all the data basically gets captured, which is citizens’ data, by companies which then, in some way, we have to pay into in terms of accessing these great new services — whether they’re free or not, we’re still indirectly paying. We should have the data in some sort of public repository because it’s citizens’ data. The technology itself was funded by the citizens. What would Uber be without GPS, publicly financed? What would Google be without the Internet, publicly financed? So, the tech was financed from the state, the citizens; it’s their data. Why not completely reverse the current relationship and have that data in a public repository which companies actually have to pay into to get access to it under certain strict conditions which could be set by an independent advisory council?… (More)”

Constitutional Democracy and Technology in the Age of Artificial Intelligence


Paul Nemitz at Royal Society Philosophical Transactions: “Given the foreseeable pervasiveness of Artificial Intelligence in modern societies, it is legitimate and necessary to ask the question of how this new technology must be shaped to support the maintenance and strengthening of constitutional democracy.

This paper first describes the four core elements of today’s digital power concentration, which need to be seen in cumulation and which, taken together, are a threat both to democracy and to functioning markets. It then recalls the experience of the lawless internet, the relationship between technology and the law as it has developed in the internet economy, and the experience with the GDPR. From there it moves on to the key question for AI in democracy: which of the challenges of AI can safely and in good conscience be left to ethics, and which need to be addressed by rules that are enforceable and carry the legitimacy of the democratic process, that is, by law.

The paper closes with a call for a new culture of incorporating the principles of Democracy, Rule of law and Human Rights by design in AI and a three level technological impact assessment for new technologies like AI as a practical way forward for this purpose….(More).

Technology Run Amok: Crisis Management in the Digital Age


Book by Ian I. Mitroff: “The recent data controversy with Facebook highlights that the tech industry as a whole was utterly unprepared for the backlash it faced as a result of its business model of selling user data to third parties. Despite the predominant role that technology plays in all of our lives, the controversy also revealed that many tech companies are reactive, rather than proactive, in addressing crises.

This book examines society’s failure to manage technology and its resulting negative consequences. Mitroff argues that the “technological mindset” is responsible for society’s unbridled obsession with technology and, unless confronted, will cause one tech crisis after another. This trans-disciplinary text, edgy in its approach, will appeal to academics, students, and practitioners through its discussion of the modern technological crisis…(More)”.

How Smart Should a City Be? Toronto Is Finding Out


Laura Bliss at CityLab: “A data-driven “neighborhood of the future” masterminded by a Google corporate sibling, the Quayside project could be a milestone in digital-age city-building. But after a year of scandal in Silicon Valley, questions about privacy and security remain…

Quayside was billed as “the world’s first neighborhood built from the internet up,” according to Sidewalk Labs’ vision plan, which won the RFP to develop this waterfront parcel. The startup’s pitch married “digital infrastructure” with a utopian promise: to make life easier, cheaper, and happier for Torontonians.

Everything from pedestrian traffic and energy use to the fill-height of a public trash bin and the occupancy of an apartment building could be counted, geo-tagged, and put to use by a wifi-connected “digital layer” undergirding the neighborhood’s physical elements. It would sense movement, gather data, and send information back to a centralized map of the neighborhood. “With heightened ability to measure the neighborhood comes better ways to manage it,” stated the winning document. “Sidewalk expects Quayside to become the most measurable community in the world.”

That somewhat Orwellian vision of city management had privacy advocates and academics concerned from the start. Bianca Wylie, the co-founder of the technology advocacy group Tech Reset Canada, has been perhaps the most outspoken of the project’s local critics. For the last year, she’s spoken up at public fora, written pointed op-eds and Medium posts, and warned city officials of what she sees as the “Trojan horse” of smart city marketing: private companies that stride into town promising better urban governance, but are really there to sell software and monetize citizen data.

“Smart cities are largely an invention of the private sector—an effort to create a market within government,” Wylie wrote in Canada’s Globe and Mail newspaper in December 2017. “The business opportunities are clear. The risks inherent to residents, less so.” A month later, at a Toronto City Council meeting, Wylie gave a deputation asking officials to “ensure that the data and data infrastructure of this project are the property of the city of Toronto and its residents.”

In this case, the unwary Trojans would be Waterfront Toronto, the nonprofit corporation appointed by three levels of Canadian government to own, manage, and build on the Port Lands, 800 largely undeveloped acres between downtown and Lake Ontario. When Waterfront Toronto gave Sidewalk Labs a green light for Quayside in October, the startup committed $50 million to a one-year consultation, which was recently extended by several months. The plan is to submit a final “Master Innovation and Development Plan” by the end of this year.

But there has been no guarantee about who would own the data at the core of its proposal—much of which would ostensibly be gathered in public space. Also unresolved is the question of whether this data could be sold. With little transparency about what that means from the company or its partner, some Torontonians are wondering what Waterfront Toronto—and by extension, the public—is giving away….(More)”.

Decentralisation: the next big step for the world wide web


Zoë Corbyn at The Observer: “The decentralised web, or DWeb, could be a chance to take control of our data back from the big tech firms. So how does it work and when will it be here?...What is the decentralised web? 
It is supposed to be like the web you know but without relying on centralised operators. In the early days of the world wide web, which came into existence in 1989, you connected directly with your friends through desktop computers that talked to each other. But from the early 2000s, with the advent of Web 2.0, we began to communicate with each other and share information through centralised services provided by big companies such as Google, Facebook, Microsoft and Amazon. It is now on Facebook’s platform, in its so-called “walled garden”, that you talk to your friends. “Our laptops have become just screens. They cannot do anything useful without the cloud,” says Muneeb Ali, co-founder of Blockstack, a platform for building decentralised apps. The DWeb is about re-decentralising things – so we aren’t reliant on these intermediaries to connect us. Instead users keep control of their data and connect and interact and exchange messages directly with others in their network.

Why do we need an alternative? 
With the current web, all that user data concentrated in the hands of a few creates risk that our data will be hacked. It also makes it easier for governments to conduct surveillance and impose censorship. And if any of these centralised entities shuts down, your data and connections are lost. Then there are privacy concerns stemming from the business models of many of the companies, which use the private information we provide freely to target us with ads. “The services are kind of creepy in how much they know about you,” says Brewster Kahle, the founder of the Internet Archive. The DWeb, say proponents, is about giving people a choice: the same services, but decentralised and not creepy. It promises control and privacy, and things can’t all of a sudden disappear because someone decides they should. On the DWeb, it would be harder for the Chinese government to block a site it didn’t like, because the information can come from other places.

How does the DWeb work that is different? 

There are two big differences in how the DWeb works compared to the world wide web, explains Matt Zumwalt, the programme manager at Protocol Labs, which builds systems and tools for the DWeb. First, there is this peer-to-peer connectivity, where your computer not only requests services but provides them. Second, how information is stored and retrieved is different. Currently we use http and https links to identify information on the web. Those links point to content by its location, telling our computers to find and retrieve things from those locations using the http protocol. By contrast, DWeb protocols use links that identify information based on its content – what it is rather than where it is. This content-addressed approach makes it possible for websites and files to be stored and passed around in many ways from computer to computer rather than always relying on a single server as the one conduit for exchanging information. “[In the traditional web] we are pointing to this location and pretending [the information] exists in only one place,” says Zumwalt. “And from this comes this whole monopolisation that has followed… because whoever controls the location controls access to the information.”…(More)”.
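The location-versus-content distinction Zumwalt describes can be sketched in a few lines. This is a deliberately simplified toy, not IPFS or any real DWeb protocol: the “address” of a piece of content is simply a hash of the content itself, so any peer that holds the bytes can serve them, and the requester can verify what it received without trusting a single server.

```python
import hashlib

def content_address(data: bytes) -> str:
    """Derive an identifier from WHAT the content is, not WHERE it lives."""
    return hashlib.sha256(data).hexdigest()

# Two independent "peers" (plain dicts here) holding copies of the same page.
peer_a: dict[str, bytes] = {}
peer_b: dict[str, bytes] = {}

page = b"<html>Hello, DWeb</html>"
addr = content_address(page)
peer_a[addr] = page
peer_b[addr] = page

# A request for `addr` can be satisfied by either peer; the retrieved bytes
# are verified by re-hashing, so no single location is the trusted conduit.
retrieved = peer_b.get(addr) or peer_a.get(addr)
assert retrieved is not None
assert content_address(retrieved) == addr  # integrity check by re-hashing
```

Because the identifier is derived from the content, “whoever controls the location” no longer controls access: any copy anywhere satisfies the request, which is the monopolisation point Zumwalt makes above.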

What if technologies had their own ethical standards?


European Parliament: “Technologies are often seen either as objects of ethical scrutiny or as challenging traditional ethical norms. The advent of autonomous machines, deep learning and big data techniques, blockchain applications and ‘smart’ technological products raises the need to introduce ethical norms into these devices. The very act of building new and emerging technologies has also become the act of creating specific moral systems within which human and artificial agents will interact through transactions with moral implications. But what if technologies introduced and defined their own ethical standards?…(More)”.

The Known Known


Book Review by Sue Halpern in The New York Review of Books of The Known Citizen: A History of Privacy in Modern America by Sarah E. Igo; Habeas Data: Privacy vs. the Rise of Surveillance Tech by Cyrus Farivar;  Beyond Abortion: Roe v. Wade and the Battle for Privacy by Mary Ziegler; Privacy’s Blueprint: The Battle to Control the Design of New Technologies by Woodrow Hartzog: “In 1999, when Scott McNealy, the founder and CEO of Sun Microsystems, declared, “You have zero privacy…get over it,” most of us, still new to the World Wide Web, had no idea what he meant. Eleven years later, when Mark Zuckerberg said that “the social norms” of privacy had “evolved” because “people [had] really gotten comfortable not only sharing more information and different kinds, but more openly and with more people,” his words expressed what was becoming a common Silicon Valley trope: privacy was obsolete.

By then, Zuckerberg’s invention, Facebook, had 500 million users, was growing 4.5 percent a month, and had recently surpassed its rival, MySpace. Twitter had overcome skepticism that people would be interested in a zippy parade of 140-character posts; at the end of 2010 it had 54 million active users. (It now has 336 million.) YouTube was in its fifth year, the micro-blogging platform Tumblr was into its third, and Instagram had just been created. Social media, which encouraged and relied on people to share their thoughts, passions, interests, and images, making them the Web’s content providers, were ascendant.

Users found it empowering to bypass, and even supersede, the traditional gatekeepers of information and culture. The social Web appeared to bring to fruition the early promise of the Internet: that it would democratize the creation and dissemination of knowledge. If, in the process, individuals were uploading photos of drunken parties, and discussing their sexual fetishes, and pulling back the curtain on all sorts of previously hidden personal behaviors, wasn’t that liberating, too? How could anyone argue that privacy had been invaded or compromised or effaced when these revelations were voluntary?

The short answer is that they couldn’t. And they didn’t. Users, who in the early days of social media were predominantly young, were largely guileless and unconcerned about privacy. In a survey of sixty-four of her students at Rochester Institute of Technology in 2006, Susan Barnes found that they “wanted to keep information private, but did not seem to realize that Facebook is a public space.” When a random sample of young people was asked in 2007 by researchers from the Pew Research Center if “they had any concerns about publicly posted photos, most…said they were not worried about risks to their privacy.” (This was largely before Facebook and other tech companies began tracking and monetizing one’s every move on- and offline.)

In retrospect, the tendencies toward disclosure and prurience online should not have been surprising….(More)”.

Attempting the Impossible: A Thoughtful Meditation on Technology


Book review by Akash Kapur of A Life in Code By David Auerbach in the New York Times: “What began as a vague apprehension — unease over the amount of time we spend on our devices, a sense that our children are growing up distracted — has, since the presidential election of 2016, transformed into something like outright panic. Pundits and politicians debate the perils of social media; technology is vilified as an instigator of our social ills, rather than a symptom. Something about our digital life seems to inspire extremes: all that early enthusiasm, the utopian fervor over the internet, now collapsed into fear and recriminations.

“Bitwise: A Life in Code,” David Auerbach’s thoughtful meditation on technology and its place in society, is a welcome effort to reclaim the middle ground. Auerbach, a former professional programmer, now a journalist and writer, is “cautiously positive toward technology.” He recognizes the very real damage it is causing to our political, cultural and emotional lives. But he also loves computers and data, and is adept at conveying the awe that technology can summon, the bracing sense of discovery that Arthur C. Clarke memorably compared to touching magic. “Much joy and satisfaction can be found in chasing after the secrets and puzzles of the world,” Auerbach writes. “I felt that joy first with computers.”

The book is a hybrid of memoir, technical primer and social history. It is perhaps best characterized as a survey not just of technology, but of our recent relationship to technology. Auerbach is in a good position to conduct this survey. He has spent much of his life on the front lines, playing around as a kid with Turtle graphics, working on Microsoft’s Messenger Service after college, and then reveling in Google’s oceans of data. (Among his lasting contributions, for which he does not express adequate contrition, is being the first, while at Microsoft, to introduce smiley face emoticons to America.) He writes well about databases and servers, but what’s really distinctive about this book is his ability to dissect Joyce and Wittgenstein as easily as C++ code. One of Auerbach’s stated goals is to break down barriers, or at least initiate a conversation, between technology and the humanities, two often irreconcilable domains. He suggests that we need to be bitwise (i.e., understand the world through the lens of computers) as well as worldwise. We must “be able to translate our ideas between the two realms.”…(More).