Safe artificial intelligence requires cultural intelligence


Gillian Hadfield at TechCrunch: “Knowledge, to paraphrase British journalist Miles Kington, is knowing a tomato is a fruit; wisdom is knowing there’s a norm against putting it in a fruit salad.

Any kind of artificial intelligence clearly needs to possess great knowledge. But if we are going to deploy AI agents widely in society at large — on our highways, in our nursing homes and schools, in our businesses and governments — we will need machines to be wise as well as smart.

Researchers who focus on a problem known as AI safety or AI alignment define narrow artificial intelligence as machines that can meet or beat human performance at a specific cognitive task. Today’s self-driving cars and facial recognition algorithms fall into this narrow type of AI.

But some researchers are working to develop artificial general intelligence (AGI) — machines that can outperform humans at any cognitive task. We don’t know yet when or even if AGI will be achieved, but it’s clear that the research path is leading to ever more powerful and autonomous AI systems performing more and more tasks in our economies and societies.

Building machines that can perform any cognitive task means figuring out how to build AI that can learn not only about things like the biology of tomatoes, but also about our highly variable and changing systems of norms concerning things like what we do with tomatoes.

Humans live lives populated by a multitude of norms, from how we eat, dress and speak to how we share information, treat one another and pursue our goals.

For AI to be truly powerful, machines will need to comprehend that norms vary tremendously from group to group, so much so that they can seem arbitrary, and yet that following them within a given community can be critical.

Tomatoes in fruit salads may seem odd to the Brits for whom Kington was writing, but they are perfectly fine if you are cooking for Koreans or members of the culinary avant-garde. And while it may seem minor, serving them the wrong way to a particular guest can cause confusion, disgust, even anger. That’s not a recipe for healthy future relationships….(More)”.

The Qualified Self: Social Media and the Accounting of Everyday Life


Book by Lee H. Humphreys: “How sharing the mundane details of daily life did not start with Facebook, Twitter, and YouTube but with pocket diaries, photo albums, and baby books.

Social critiques argue that social media have made us narcissistic, that Facebook, Twitter, Instagram, and YouTube are all vehicles for me-promotion. In The Qualified Self, Lee Humphreys offers a different view. She shows that sharing the mundane details of our lives—what we ate for lunch, where we went on vacation, who dropped in for a visit—didn’t begin with mobile devices and social media. People have used media to catalog and share their lives for several centuries. Pocket diaries, photo albums, and baby books are the predigital precursors of today’s digital and mobile platforms for posting text and images. The ability to take selfies has not turned us into needy narcissists; it’s part of a longer story about how people account for everyday life.

Humphreys refers to diaries in which eighteenth-century daily life is documented with the brevity and precision of a tweet, and cites a nineteenth-century travel diary in which a young woman complains that her breakfast didn’t agree with her. Diaries, Humphreys explains, were often written to be shared with family and friends. Pocket diaries were as mobile as smartphones, allowing the diarist to record life in real time. Humphreys calls this chronicling, in both digital and nondigital forms, media accounting. The sense of self that emerges from media accounting is not the purely statistics-driven “quantified self,” but the more well-rounded qualified self. We come to understand ourselves in a new way through the representations of ourselves that we create to be consumed…(More)”.

Resource Guide to Data Governance and Security


National Neighborhood Indicators Partnership (NNIP): “Any organization that collects, analyzes, or disseminates data should establish formal systems to manage data responsibly, protect confidentiality, and document data files and procedures. In doing so, organizations will build a reputation for integrity and facilitate appropriate interpretation and data sharing, factors that contribute to an organization’s long-term sustainability.

To help groups improve their data policies and practices, this guide assembles lessons from the experiences of partners in the National Neighborhood Indicators Partnership network and similar organizations. The guide presents advice and annotated resources for the three parts of a data governance program: protecting privacy and human subjects, ensuring data security, and managing the data life cycle. While applicable for non-sensitive data, the guide is geared for managing confidential data, such as data used in integrated data systems or Pay-for-Success programs….(More)”.

The Promise and Peril of the Digital Knowledge Loop


Excerpt of Albert Wenger’s draft book World After Capital: “The zero marginal cost and universality of digital technologies are already impacting the three phases of learning, creating and sharing, giving rise to a Digital Knowledge Loop. This Digital Knowledge Loop holds both amazing promise and great peril, as can be seen in the example of YouTube.

YouTube has experienced astounding growth since its release in beta form in 2005. People around the world now upload over 100 hours of video content to YouTube every minute. It is difficult to grasp just how much content that is: if you were to spend 100 years watching YouTube twenty-four hours a day, you still wouldn’t be able to watch all the video that people upload in the course of a single week. YouTube contains amazing educational content on topics as diverse as gardening and theoretical math, and many of those videos show the promise of the Digital Knowledge Loop. Consider Destin Sandlin, the creator of the Smarter Every Day series of videos. Destin is interested in all things science. When he learns something new, such as the make-up of butterfly wings, he creates an engaging video to share it with the world. But the peril of the Digital Knowledge Loop is right there as well: YouTube is also full of videos that peddle conspiracies, spread misinformation, and even incite outright hate.
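The week-versus-century comparison can be checked with quick arithmetic, assuming only the upload rate stated above (100 hours of video per minute):

```python
# Assumption from the text: 100 hours of video are uploaded every minute.
upload_rate_hours_per_minute = 100

# Hours of video uploaded to YouTube in a single week.
minutes_per_week = 60 * 24 * 7
uploaded_per_week = upload_rate_hours_per_minute * minutes_per_week  # 1,008,000 hours

# Hours you could watch in 100 years of round-the-clock viewing.
watchable_in_a_century = 100 * 365 * 24  # 876,000 hours

# A century of nonstop viewing still falls short of one week of uploads.
print(uploaded_per_week, watchable_in_a_century)
```

At the stated rate, a week of uploads (just over a million hours) exceeds a full century of viewing (876,000 hours), exactly as the text claims.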

Both the promise and the peril are made possible by the same characteristics of YouTube: All of the videos are available for free to anyone in the world (except in those countries where YouTube is blocked). They are also available 24×7. And they become available globally the second someone publishes a new one. Anybody can publish a video. All you need to access these videos is an Internet connection and a smartphone—you don’t even need a laptop or other traditional computer. That means that already today two to three billion people, almost half of the world’s population, have access to YouTube and can participate in the Digital Knowledge Loop, for good and for bad.

These characteristics, which draw on the underlying capabilities of digital technology, are also found in other systems that similarly show the promise and peril of the Digital Knowledge Loop.

Wikipedia, the collectively produced online encyclopedia, is another great example. Here is how it works at its most promising: Someone reads an entry and learns the method used by Pythagoras to approximate the number pi. They then go off and create an animation that illustrates this method. Finally, they share the animation by publishing it back to Wikipedia, thus making it easier for more people to learn. Wikipedia entries result from a large collaboration and ongoing revision process, with only a single entry per topic visible at any given time (although you can examine both the history of the page and the conversations about it). What makes this possible is a piece of software known as a wiki that keeps track of all the historical edits [58]. When that process works well, it raises the quality of entries over time. But when there is a coordinated effort at manipulation, or insufficient editing resources, Wikipedia too can spread misinformation instantly and globally.

Wikipedia illustrates another important aspect of the Digital Knowledge Loop: it allows individuals to participate in extremely small or minor ways. If you wish, you can contribute to Wikipedia by fixing a single typo. In fact, the minimal contribution unit is just one letter! I have not yet contributed anything of length to Wikipedia, but I have fixed probably a dozen or so typos. That doesn’t sound like much, but if you get ten thousand people to fix a typo every day, that’s 3.65 million typos a year. Let’s assume that a single person takes two minutes on average to discover and fix a typo. It would take nearly fifty people working full time for a year (2500 hours) to fix 3.65 million typos.
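The back-of-the-envelope numbers in this paragraph can be reproduced directly, using the text’s own assumptions of two minutes per typo and a 2,500-hour working year:

```python
# Assumptions taken from the text.
typos_fixed_per_day = 10_000   # ten thousand people fix one typo each day
minutes_per_typo = 2           # average time to discover and fix a typo
hours_per_person_year = 2500   # one person working full time for a year

typos_per_year = typos_fixed_per_day * 365            # 3,650,000 typos
total_hours = typos_per_year * minutes_per_typo / 60  # ~121,667 hours of work
people_needed = total_hours / hours_per_person_year   # ~48.7, i.e. nearly fifty

print(typos_per_year, round(people_needed, 1))
```

The result, about 48.7 person-years, matches the “nearly fifty people working full time for a year” in the text.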

Small contributions by many that add up are only possible in the Digital Knowledge Loop. The Wikipedia spelling correction example shows the power of such contributions. Their peril can be seen in systems such as Twitter and Facebook, where the smallest contributions are Likes and Retweets or Reposts to one’s friends or followers. While these tiny actions can amplify high quality content, they can just as easily spread mistakes, rumors and propaganda. The impact of these information cascades ranges from viral jokes to swaying the outcomes of elections and has even led to major outbreaks of violence.

Some platforms even make it possible for people to passively contribute to the Digital Knowledge Loop. The app Waze is a good example. …The promise of the Digital Knowledge Loop is broad access to a rapidly improving body of knowledge. The peril is a fragmented post-truth society constantly in conflict. Both of these possibilities are enabled by the same fundamental characteristics of digital technologies. And once again we see clearly that technology by itself does not determine the future…(More)”.

Is the Government More Entrepreneurial Than You Think?


 Freakonomics Radio (Podcast): We all know the standard story: our economy would be more dynamic if only the government would get out of the way. The economist Mariana Mazzucato says we’ve got that story backward. She argues that the government, by funding so much early-stage research, is hugely responsible for big successes in tech, pharma, energy, and more. But the government also does a terrible job in claiming credit — and, more important, getting a return on its investment….

Quote:

MAZZUCATO: “…And I’ve been thinking about this especially around the big data and the kind of new questions around privacy with Facebook, etc. Instead of having a situation where all the data basically gets captured, which is citizens’ data, by companies which then, in some way, we have to pay into in terms of accessing these great new services — whether they’re free or not, we’re still indirectly paying. We should have the data in some sort of public repository because it’s citizens’ data. The technology itself was funded by the citizens. What would Uber be without GPS, publicly financed? What would Google be without the Internet, publicly financed? So, the tech was financed from the state, the citizens; it’s their data. Why not completely reverse the current relationship and have that data in a public repository which companies actually have to pay into to get access to it under certain strict conditions which could be set by an independent advisory council?… (More)”

Constitutional Democracy and Technology in the Age of Artificial Intelligence


Paul Nemitz at Royal Society Philosophical Transactions: “Given the foreseeable pervasiveness of Artificial Intelligence in modern societies, it is legitimate and necessary to ask how this new technology must be shaped to support the maintenance and strengthening of constitutional democracy.

This paper first describes the four core elements of today’s digital power concentration, which need to be seen in cumulation and which, taken together, are a threat both to democracy and to functioning markets. It then recalls the experience of the lawless internet, the relationship between technology and the law as it has developed in the internet economy, and the experience with the GDPR. It then moves on to the key question for AI in democracy: which of the challenges of AI can safely and in good conscience be left to ethics, and which need to be addressed by rules that are enforceable and carry the legitimacy of the democratic process, that is, by laws.

The paper closes with a call for a new culture of incorporating the principles of Democracy, Rule of Law and Human Rights by design in AI, and a three-level technological impact assessment for new technologies like AI as a practical way forward for this purpose….(More)”.

Technology Run Amok: Crisis Management in the Digital Age


Book by Ian I. Mitroff: “The recent data controversy with Facebook highlights how the tech industry as a whole was utterly unprepared for the backlash it faced as a result of its business model of selling user data to third parties. Despite the predominant role that technology plays in all of our lives, the controversy also revealed that many tech companies are reactive, rather than proactive, in addressing crises.

This book examines society’s failure to manage technology and its resulting negative consequences. Mitroff argues that the “technological mindset” is responsible for society’s unbridled obsession with technology and unless confronted, will cause one tech crisis after another. This trans-disciplinary text, edgy in its approach, will appeal to academics, students, and practitioners through its discussion of the modern technological crisis…(More)”.

How Smart Should a City Be? Toronto Is Finding Out


Laura Bliss at CityLab: “A data-driven “neighborhood of the future” masterminded by a Google corporate sibling, the Quayside project could be a milestone in digital-age city-building. But after a year of scandal in Silicon Valley, questions about privacy and security remain…

Quayside was billed as “the world’s first neighborhood built from the internet up,” according to Sidewalk Labs’ vision plan, which won the RFP to develop this waterfront parcel. The startup’s pitch married “digital infrastructure” with a utopian promise: to make life easier, cheaper, and happier for Torontonians.

Everything from pedestrian traffic and energy use to the fill-height of a public trash bin and the occupancy of an apartment building could be counted, geo-tagged, and put to use by a wifi-connected “digital layer” undergirding the neighborhood’s physical elements. It would sense movement, gather data, and send information back to a centralized map of the neighborhood. “With heightened ability to measure the neighborhood comes better ways to manage it,” stated the winning document. “Sidewalk expects Quayside to become the most measurable community in the world.”

That somewhat Orwellian vision of city management had privacy advocates and academics concerned from the start. Bianca Wylie, the co-founder of the technology advocacy group Tech Reset Canada, has been perhaps the most outspoken of the project’s local critics. For the last year, she’s spoken up at public fora, written pointed op-eds and Medium posts, and warned city officials of what she sees as the “Trojan horse” of smart city marketing: private companies that stride into town promising better urban governance, but are really there to sell software and monetize citizen data.

“Smart cities are largely an invention of the private sector—an effort to create a market within government,” Wylie wrote in Canada’s Globe and Mail newspaper in December 2017. “The business opportunities are clear. The risks inherent to residents, less so.” A month later, at a Toronto City Council meeting, Wylie gave a deputation asking officials to “ensure that the data and data infrastructure of this project are the property of the city of Toronto and its residents.”

In this case, the unwary Trojans would be Waterfront Toronto, the nonprofit corporation appointed by three levels of Canadian government to own, manage, and build on the Port Lands, 800 largely undeveloped acres between downtown and Lake Ontario. When Waterfront Toronto gave Sidewalk Labs a green light for Quayside in October, the startup committed $50 million to a one-year consultation, which was recently extended by several months. The plan is to submit a final “Master Innovation and Development Plan” by the end of this year.

But there has been no guarantee about who would own the data at the core of Sidewalk Labs’ proposal—much of which would ostensibly be gathered in public space. Also unresolved is the question of whether this data could be sold. With little transparency from the company or its partner about what that means, some Torontonians are wondering what Waterfront Toronto—and by extension, the public—is giving away….(More)”.

Decentralisation: the next big step for the world wide web


Zoë Corbyn at The Observer: “The decentralised web, or DWeb, could be a chance to take control of our data back from the big tech firms. So how does it work and when will it be here?...What is the decentralised web? 
It is supposed to be like the web you know but without relying on centralised operators. In the early days of the world wide web, which came into existence in 1989, you connected directly with your friends through desktop computers that talked to each other. But from the early 2000s, with the advent of Web 2.0, we began to communicate with each other and share information through centralised services provided by big companies such as Google, Facebook, Microsoft and Amazon. It is now on Facebook’s platform, in its so-called “walled garden”, that you talk to your friends. “Our laptops have become just screens. They cannot do anything useful without the cloud,” says Muneeb Ali, co-founder of Blockstack, a platform for building decentralised apps. The DWeb is about re-decentralising things, so that we aren’t reliant on these intermediaries to connect us. Instead, users keep control of their data and connect, interact and exchange messages directly with others in their network.

Why do we need an alternative? 
With the current web, all that user data concentrated in the hands of a few creates the risk that our data will be hacked. It also makes it easier for governments to conduct surveillance and impose censorship. And if any of these centralised entities shuts down, your data and connections are lost. Then there are privacy concerns stemming from the business models of many of the companies, which use the private information we provide freely to target us with ads. “The services are kind of creepy in how much they know about you,” says Brewster Kahle, the founder of the Internet Archive. The DWeb, say proponents, is about giving people a choice: the same services, but decentralised and not creepy. It promises control and privacy, and things can’t all of a sudden disappear because someone decides they should. On the DWeb, it would be harder for the Chinese government to block a site it didn’t like, because the information can come from other places.

How does the DWeb work that is different? 

There are two big differences in how the DWeb works compared to the world wide web, explains Matt Zumwalt, the programme manager at Protocol Labs, which builds systems and tools for the DWeb. First, there is this peer-to-peer connectivity, where your computer not only requests services but provides them. Second, how information is stored and retrieved is different. Currently we use http and https links to identify information on the web. Those links point to content by its location, telling our computers to find and retrieve things from those locations using the http protocol. By contrast, DWeb protocols use links that identify information based on its content – what it is rather than where it is. This content-addressed approach makes it possible for websites and files to be stored and passed around in many ways from computer to computer rather than always relying on a single server as the one conduit for exchanging information. “[In the traditional web] we are pointing to this location and pretending [the information] exists in only one place,” says Zumwalt. “And from this comes this whole monopolisation that has followed… because whoever controls the location controls access to the information.”…(More)”.
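The content-addressing idea Zumwalt describes can be sketched in a few lines. This is an illustration only, not any particular DWeb protocol: real systems such as IPFS use multihashes and Merkle-linked blocks, but the core idea of deriving a link from the content itself can be shown with a plain SHA-256 digest:

```python
import hashlib

def content_address(data: bytes) -> str:
    # The link is derived from what the content is, not where it lives.
    return hashlib.sha256(data).hexdigest()

page = b"<p>Hello, decentralised web</p>"
addr = content_address(page)

# Any peer can serve the bytes; the requester verifies them against the
# address, so no single server has to be the one trusted conduit.
received_from_some_peer = page
assert content_address(received_from_some_peer) == addr

# Change one byte and the address no longer matches: content-addressed
# links cannot silently point at altered information.
assert content_address(b"<p>Hello, decentralised Web</p>") != addr
```

Because the address is a pure function of the bytes, the same file fetched from any computer in the network can be checked for integrity, which is what frees the link from pointing at a single location.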

What if technologies had their own ethical standards?


European Parliament: “Technologies are often seen either as objects of ethical scrutiny or as challenging traditional ethical norms. The advent of autonomous machines, deep learning and big data techniques, blockchain applications and ‘smart’ technological products raises the need to introduce ethical norms into these devices. The very act of building new and emerging technologies has also become the act of creating specific moral systems within which human and artificial agents will interact through transactions with moral implications. But what if technologies introduced and defined their own ethical standards?…(More)”.