Our shared reality is fraying


Arie Kruglanski at The Conversation: “The concept of truth is under assault, but our troubles with truth aren’t exactly new.

What’s different is that in the past, debates about the status of truth primarily took place in intellectual cafes and academic symposia among philosophers. These days, uncertainty about what to believe is endemic – a pervasive feature of everyday life for everyday people.

“Truth isn’t truth,” Rudy Giuliani, President Donald Trump’s lawyer, famously said in August. His statement wasn’t as paradoxical as it might have appeared. It means that our beliefs, what we hold as true, are ultimately unprovable rather than objectively verifiable.

Many philosophers would agree. Nevertheless, voluminous research in psychology, my own field of study, has shown that the idea of truth is key to humans interacting normally with the world and other people in it. Humans need to believe that there is truth in order to maintain relationships, institutions and society.

Truth’s indispensability

Beliefs about what is true are typically shared by others in one’s society: fellow members of one’s culture, one’s nation or one’s profession.

Psychological research in a forthcoming book by Tory Higgins, “Shared Reality: What Makes Us Strong and Tears Us Apart,” attests that shared beliefs help us collectively understand how the world works and provide a moral compass for living in it together.

Cue our current crisis of confidence.

Distrust of the U.S. government, which has been growing since the 1960s, has spread to nearly all other societal institutions, even those once held as beyond reproach.

From the media to the medical and scientific communities to the Catholic Church, there is a gnawing sense that none of the once hallowed information sources can be trusted.

When we can no longer make sense of the world together, a crippling insecurity ensues. The internet inundates us with a barrage of conflicting advice about nutrition, exercise, religion, politics and sex. People develop anxiety and confusion about their purpose and direction.

In the extreme, a lost sense of reality is a defining feature of psychosis, a major mental illness.

A society that has lost its shared reality is also unwell. In the past, people turned to widely respected societal institutions for information: the government, major news outlets, trusted communicators like Walter Cronkite, David Brinkley or Edward R. Murrow. Those days are gone, alas. Now, just about every source is suspected of bias and of serving interests other than the truth. In consequence, people increasingly believe what they wish to believe, or what they find pleasing and reassuring….(More)”.

The Qualified Self: Social Media and the Accounting of Everyday Life


Book by Lee Humphreys: “How sharing the mundane details of daily life did not start with Facebook, Twitter, and YouTube but with pocket diaries, photo albums, and baby books.

Social critics argue that social media have made us narcissistic, that Facebook, Twitter, Instagram, and YouTube are all vehicles for me-promotion. In The Qualified Self, Lee Humphreys offers a different view. She shows that sharing the mundane details of our lives—what we ate for lunch, where we went on vacation, who dropped in for a visit—didn’t begin with mobile devices and social media. People have used media to catalog and share their lives for several centuries. Pocket diaries, photo albums, and baby books are the predigital precursors of today’s digital and mobile platforms for posting text and images. The ability to take selfies has not turned us into needy narcissists; it’s part of a longer story about how people account for everyday life.

Humphreys refers to diaries in which eighteenth-century daily life is documented with the brevity and precision of a tweet, and cites a nineteenth-century travel diary in which a young woman complains that her breakfast didn’t agree with her. Diaries, Humphreys explains, were often written to be shared with family and friends. Pocket diaries were as mobile as smartphones, allowing the diarist to record life in real time. Humphreys calls this chronicling, in both digital and nondigital forms, media accounting. The sense of self that emerges from media accounting is not the purely statistics-driven “quantified self,” but the more well-rounded qualified self. We come to understand ourselves in a new way through the representations of ourselves that we create to be consumed…(More)”.

Resource Guide to Data Governance and Security


National Neighborhood Indicators Partnership (NNIP): “Any organization that collects, analyzes, or disseminates data should establish formal systems to manage data responsibly, protect confidentiality, and document data files and procedures. In doing so, organizations will build a reputation for integrity and facilitate appropriate interpretation and data sharing, factors that contribute to an organization’s long-term sustainability.

To help groups improve their data policies and practices, this guide assembles lessons from the experiences of partners in the National Neighborhood Indicators Partnership network and similar organizations. The guide presents advice and annotated resources for the three parts of a data governance program: protecting privacy and human subjects, ensuring data security, and managing the data life cycle. While applicable to non-sensitive data, the guide is geared toward managing confidential data, such as data used in integrated data systems or Pay-for-Success programs….(More)”.

Ethics and Data Science


(Open) Ebook by Mike Loukides, Hilary Mason and DJ Patil: “As the impact of data science on society continues to grow, there is an increased need to discuss how data is appropriately used and how to address misuse. Yet ethical principles for working with data have been available for decades. The real issue today is how to put those principles into action. With this report, authors Mike Loukides, Hilary Mason, and DJ Patil examine practical ways for making ethical data standards part of your work every day.

To help you consider all of the possible ramifications of your work on data projects, this report includes:

  • A sample checklist that you can adapt for your own procedures
  • Five framing guidelines (the Five C’s) for building data products: consent, clarity, consistency, control, and consequences
  • Suggestions for building ethics into your data-driven culture

Now is the time to invest in a deliberate practice of data ethics, for better products, better teams, and better outcomes….(More)”.
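
The report’s own sample checklist is not reproduced in this excerpt, but as a rough illustration, a team adapting the Five C’s above into a reviewable checklist might sketch something like the following. The questions here are paraphrases for illustration, not the checklist published in the report.

```python
# Illustrative only: a minimal review checklist organized around the Five C's
# named above. The questions are paraphrases, not the report's sample checklist.
FIVE_CS_CHECKLIST = {
    "consent": [
        "Have users agreed to this use of their data?",
        "Can they withdraw that consent later?",
    ],
    "clarity": [
        "Is the intended data use explained in plain language?",
    ],
    "consistency": [
        "Does this use match what users were originally told?",
    ],
    "control": [
        "Can users view, correct, or delete their data?",
    ],
    "consequences": [
        "Who could be harmed if this product fails or is misused?",
    ],
}

def open_items(answers: dict) -> list:
    """Return every checklist question not yet answered 'yes'."""
    return [question
            for questions in FIVE_CS_CHECKLIST.values()
            for question in questions
            if not answers.get(question, False)]

# Example: a project review that has only settled the consent questions so far.
answers = {q: True for q in FIVE_CS_CHECKLIST["consent"]}
print(f"{len(open_items(answers))} questions still open")
```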

The Promise and Peril of the Digital Knowledge Loop


Excerpt from Albert Wenger’s draft book World After Capital: “The zero marginal cost and universality of digital technologies are already impacting the three phases of learning, creating and sharing, giving rise to a Digital Knowledge Loop. This Digital Knowledge Loop holds both amazing promise and great peril, as can be seen in the example of YouTube.

YouTube has experienced astounding growth since its release in beta form in 2005. People around the world now upload over 100 hours of video content to YouTube every minute. It is difficult to grasp just how much content that is. If you were to spend 100 years watching YouTube twenty-four hours a day, you still wouldn’t be able to watch all the video that people upload in the course of a single week. YouTube contains amazing educational content on topics as diverse as gardening and theoretical math. Many of those videos show the promise of the Digital Knowledge Loop. Take, for example, Destin Sandlin, the creator of the Smarter Every Day series of videos. Destin is interested in all things science. When he learns something new, such as the make-up of butterfly wings, he creates an engaging new video to share it with the world. But the peril of the Digital Knowledge Loop is right there as well: YouTube is also full of videos that peddle conspiracies, spread misinformation, and even incite outright hate.
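
For scale, here is a quick back-of-the-envelope check of those figures in Python, assuming the 100-hours-per-minute upload rate quoted above (the actual rate has since grown):

```python
# Back-of-the-envelope check of the figures quoted above.
HOURS_UPLOADED_PER_MINUTE = 100  # rate quoted in the excerpt

minutes_per_week = 60 * 24 * 7
hours_uploaded_per_week = HOURS_UPLOADED_PER_MINUTE * minutes_per_week  # 1,008,000

hours_in_100_years = 100 * 365 * 24  # 876,000 hours of round-the-clock viewing

print(f"Uploaded in one week:   {hours_uploaded_per_week:,} hours")
print(f"Watchable in 100 years: {hours_in_100_years:,} hours")
print("A century of nonstop viewing covers one week of uploads:",
      hours_in_100_years >= hours_uploaded_per_week)  # False
```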

Both the promise and the peril are made possible by the same characteristics of YouTube: All of the videos are available for free to anyone in the world (except in those countries where YouTube is blocked). They are also available 24×7. And they become available globally the second someone publishes a new one. Anybody can publish a video. All you need to access these videos is an Internet connection and a smartphone—you don’t even need a laptop or other traditional computer. That means that already today two to three billion people, almost half of the world’s population, have access to YouTube and can participate in the Digital Knowledge Loop, for good and for bad.

These characteristics, which draw on the underlying capabilities of digital technology, are also found in other systems that similarly show the promise and peril of the Digital Knowledge Loop.

Wikipedia, the collectively produced online encyclopedia, is another great example. Here is how it works at its most promising: Someone reads an entry and learns the method used by Archimedes to approximate the number pi. They then go off and create an animation that illustrates this method. Finally, they share the animation by publishing it back to Wikipedia, thus making it easier for more people to learn. Wikipedia entries result from a large collaboration and ongoing revision process, with only a single entry per topic visible at any given time (although you can examine both the history of the page and the conversations about it). What makes this possible is a piece of software known as a wiki that keeps track of all the historical edits [58]. When that process works well, it raises the quality of entries over time. But when there is a coordinated effort at manipulation, or insufficient editing resources, Wikipedia too can spread misinformation instantly and globally.
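
The core idea of a wiki that retains every historical edit while showing a single current entry can be sketched in a few lines of Python. This is an illustration only, not Wikipedia’s actual MediaWiki software:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Revision:
    author: str
    text: str
    timestamp: datetime

@dataclass
class WikiPage:
    """One entry per topic: every edit is retained, only the latest is shown."""
    title: str
    revisions: list = field(default_factory=list)

    def edit(self, author: str, new_text: str) -> None:
        # Append a new revision rather than overwriting the previous text.
        self.revisions.append(Revision(author, new_text, datetime.now(timezone.utc)))

    @property
    def current_text(self) -> str:
        # The single version of the entry that readers see.
        return self.revisions[-1].text if self.revisions else ""

    def history(self) -> list:
        # The full record of historical edits, available for review or rollback.
        return list(self.revisions)

page = WikiPage("Pi")
page.edit("alice", "Pi is the ratio of a circle's circumference to its diameter.")
page.edit("bob", "Pi is that ratio, approximately 3.14159.")
print(page.current_text)    # latest revision only
print(len(page.history()))  # 2 revisions retained
```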

Wikipedia illustrates another important aspect of the Digital Knowledge Loop: it allows individuals to participate in extremely small or minor ways. If you wish, you can contribute to Wikipedia by fixing a single typo. In fact, the minimal contribution unit is just one letter! I have not yet contributed anything of length to Wikipedia, but I have fixed probably a dozen or so typos. That doesn’t sound like much, but if you get ten thousand people to fix a typo every day, that’s 3.65 million typos a year. Let’s assume that a single person takes two minutes on average to discover and fix a typo. It would take nearly fifty people working full time for a year (2,500 hours each) to fix 3.65 million typos.
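
The arithmetic behind that estimate, spelled out in Python using the assumptions stated above (ten thousand daily fixes, two minutes per fix, 2,500 hours per full-time year):

```python
# The estimate above, spelled out with the stated assumptions.
people_fixing_daily = 10_000      # contributors each fixing one typo per day
days_per_year = 365
minutes_per_fix = 2               # average time to find and fix one typo
full_time_hours_per_year = 2_500  # one full-time year, per the text

typos_per_year = people_fixing_daily * days_per_year          # 3,650,000
total_hours = typos_per_year * minutes_per_fix / 60           # ~121,667
full_timers_needed = total_hours / full_time_hours_per_year   # ~48.7

print(f"{typos_per_year:,} typos fixed per year")
print(f"{total_hours:,.0f} hours of work")
print(f"about {full_timers_needed:.0f} people working full time for a year")
```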

Small contributions by many that add up are only possible in the Digital Knowledge Loop. The Wikipedia spelling correction example shows the power of such contributions. Their peril can be seen in systems such as Twitter and Facebook, where the smallest contributions are Likes and Retweets or Reposts to one’s friends or followers. While these tiny actions can amplify high quality content, they can just as easily spread mistakes, rumors and propaganda. The impact of these information cascades ranges from viral jokes to swaying the outcomes of elections and has even led to major outbreaks of violence.

Some platforms even make it possible for people to passively contribute to the Digital Knowledge Loop. The app Waze is a good example. …The promise of the Digital Knowledge Loop is broad access to a rapidly improving body of knowledge. The peril is a fragmented post-truth society constantly in conflict. Both of these possibilities are enabled by the same fundamental characteristics of digital technologies. And once again we see clearly that technology by itself does not determine the future…(More).

The Use of Regulatory Sandboxes in Europe and Asia


Claus Christensen at Regulation Asia: “Global attention to money laundering, terrorism financing and financial criminal practices has grown exponentially in recent years. As criminals constantly come up with new tactics, global regulations in the financial world are evolving all the time to try to keep up. At the same time, end users’ expectations are putting companies at commercial risk if they are not prepared to deliver outstanding, digital-first customer experiences through innovative solutions.

Among the many initiatives introduced by global regulators to address these two seemingly contradictory needs, regulatory sandboxes – closed environments that allow live testing of innovations by tech companies under the regulator’s supervision – are among the most popular. As the CEO of a fast-growing regtech company working across both Asia and Europe, I have identified a few differences in how regulators across different jurisdictions are engaging with the industry in general, and with regulatory sandboxes in particular.

Since the launch of ‘Project Innovate’ in 2014, the UK’s FCA (Financial Conduct Authority) has won recognition for the success of its sandbox, where fintech companies can test innovative products, services and business models in a live market environment, while ensuring that appropriate safeguards are in place through temporary authorisation. The FCA advises companies, whether fintech startups or established banks, on which existing regulations might apply to their cutting-edge products.

So far, the sandbox has helped more than 500 companies, with 40+ firms receiving regulatory authorisation. Project Innovate has bolstered the FCA’s reputation for supporting initiatives that boost competition within financial services, which was part of the regulator’s post-financial-crisis agenda. The success of the initiative in fostering a fertile fintech environment is reflected in the growing number of UK-based challenger banks that are expanding their client bases across Europe. Following this success, the sandbox approach has gone global, with regulators around the world adopting a similar strategy for fintech innovation.

Across Europe, regulators are directly working with financial services providers and taking proactive measures to not only encourage the use of innovative technology in improving their systems, but also to boost adoption by others within the ecosystem…(More)”.

Is the Government More Entrepreneurial Than You Think?


 Freakonomics Radio (Podcast): We all know the standard story: our economy would be more dynamic if only the government would get out of the way. The economist Mariana Mazzucato says we’ve got that story backward. She argues that the government, by funding so much early-stage research, is hugely responsible for big successes in tech, pharma, energy, and more. But the government also does a terrible job in claiming credit — and, more important, getting a return on its investment….

Quote:

MAZZUCATO: “…And I’ve been thinking about this especially around the big data and the kind of new questions around privacy with Facebook, etc. Instead of having a situation where all the data basically gets captured, which is citizens’ data, by companies which then, in some way, we have to pay into in terms of accessing these great new services — whether they’re free or not, we’re still indirectly paying. We should have the data in some sort of public repository because it’s citizens’ data. The technology itself was funded by the citizens. What would Uber be without GPS, publicly financed? What would Google be without the Internet, publicly financed? So, the tech was financed from the state, the citizens; it’s their data. Why not completely reverse the current relationship and have that data in a public repository which companies actually have to pay into to get access to it under certain strict conditions which could be set by an independent advisory council?… (More)”

Pick your poison: How a crowdsourcing app helped identify and reduce food poisoning


Alex Papas at LATimes: “At some point in life, almost everyone will have experienced the debilitating effects of a foodborne illness. Whether the culprit is an under-cooked chicken kebab, an E. coli-infested salad or some toxic fish, a good day can quickly become a loathsome frenzy of vomiting and diarrhoea caused by poorly prepared or poorly kept food.

Since 2009, the website iwaspoisoned.com has allowed victims of food poisoning to help others avoid such an ordeal by crowdsourcing reports of foodborne illness on one easy-to-use, consumer-led platform.

Whereas previously a consumer struck down by food poisoning may have been limited to complaining to the offending food outlet, IWasPoisoned allows users to submit detailed reports of food-poisoning incidents – including symptoms, location and space to describe the exact effects and duration of the incident. The information is then transferred in real time to public health organisations and food industry groups, who use the data to flag potentially dangerous foodborne illness before a serious outbreak occurs.
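
As a rough illustration of the kind of record such a platform collects and forwards, a report might look something like the sketch below; the field names are hypothetical, not IWasPoisoned’s actual schema or API.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class FoodPoisoningReport:
    # Hypothetical fields mirroring the details described above;
    # not IWasPoisoned's actual data model.
    location: str          # where the suspect meal was purchased or eaten
    symptoms: list         # e.g. nausea, vomiting, diarrhoea
    description: str       # free-text account of the effects and their duration
    onset: str             # when symptoms began (ISO 8601)
    submitted_at: str      # when the report was filed (ISO 8601)

report = FoodPoisoningReport(
    location="Example Deli, New York, NY",
    symptoms=["nausea", "vomiting"],
    description="Became ill about six hours after lunch; symptoms lasted two days.",
    onset="2018-11-01T18:00:00Z",
    submitted_at=datetime.now(timezone.utc).isoformat(),
)

# A serialized form of the kind that could be forwarded in real time
# to public health organisations and industry groups.
print(json.dumps(asdict(report), indent=2))
```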

In the United States alone, where food safety standards are among the highest in the world, there are still 48 million cases of food poisoning per year. Of those cases, 128,000 result in hospitalisation and 3,000 in death, according to data from the U.S. Food and Drug Administration.

Back in 2008, the site’s founder, Patrick Quade, himself fell victim to food poisoning after eating a BLT from a New York deli that left him violently ill. Concerned by the lack of options for reporting such incidents, he set up the novel crowdsourcing platform, which also aims to improve transparency in the food monitoring industry.

The emergence of IWasPoisoned is part of the wider trend of consumers taking revenge against companies via digital platforms, which spans various industries. In the case of IWasPoisoned, reports of foodborne illness have seriously tarnished the reputations of several major food retailers….(More)”.

Citizen Innovations


Introduction by Jean-Claude Ruano-Borbalan and Bertrand Bocquet to a Special Issue of Technology and Innovation (in French): “The last half century has seen the considerable development of institutional interfaces participating in the “great standardization” of science and innovation systems. The limits of this model have become apparent for many economic, political and cultural reasons. Significant developments are now taking place in the context of a deliberative democracy that affects scientific and technical institutions and their output, and therefore the nature of innovation and of innovation policy. The open question about this emerging “technical democracy” is whether it will prove to be a long-term movement. We dedicate this issue to citizen participatory innovations, more or less closely related to technical and scientific questions. It highlights the various scales and focal points of “social and citizen innovation”, drawing on examples of ongoing transformations…. (More)

Constitutional Democracy and Technology in the Age of Artificial Intelligence


Paul Nemitz at Royal Society Philosophical Transactions: “Given the foreseeable pervasiveness of Artificial Intelligence in modern societies, it is legitimate and necessary to ask how this new technology must be shaped to support the maintenance and strengthening of constitutional democracy.

This paper first describes the four core elements of today’s digital power concentration, which need to be seen cumulatively and which, taken together, are a threat both to democracy and to functioning markets. It then recalls the experience with the lawless internet, the relationship between technology and the law as it has developed in the internet economy, and the experience with the GDPR, before moving on to the key question for AI in democracy: which of the challenges of AI can safely, and with good conscience, be left to ethics, and which need to be addressed by rules that are enforceable and carry the legitimacy of the democratic process – that is, by laws.

The paper closes with a call for a new culture of incorporating the principles of democracy, the rule of law and human rights by design in AI, and for a three-level technological impact assessment for new technologies like AI as a practical way forward for this purpose….(More)”.