The Discrete Charm of the Machine: Why the World Became Digital


Book by Ken Steiglitz: “A few short decades ago, we were informed by the smooth signals of analog television and radio; we communicated using our analog telephones; and we even computed with analog computers. Today our world is digital, built with zeros and ones. Why did this revolution occur? The Discrete Charm of the Machine explains, in an engaging and accessible manner, the varied physical and logical reasons behind this radical transformation.

The spark of individual genius shines through this story of innovation: the stored program of Jacquard’s loom; Charles Babbage’s logical branching; Alan Turing’s brilliant abstraction of the discrete machine; Harry Nyquist’s foundation for digital signal processing; Claude Shannon’s breakthrough insights into the meaning of information and bandwidth; and Richard Feynman’s prescient proposals for nanotechnology and quantum computing. Ken Steiglitz follows the progression of these ideas in the building of our digital world, from the internet and artificial intelligence to the edge of the unknown. Are questions like the famous traveling salesman problem truly beyond the reach of ordinary digital computers? Can quantum computers transcend these barriers? Does a mysterious magical power reside in the analog mechanisms of the brain? Steiglitz concludes by confronting the moral and aesthetic questions raised by the development of artificial intelligence and autonomous robots.

The Discrete Charm of the Machine examines why our information technology, the lifeblood of our civilization, became digital, and challenges us to think about where its future trajectory may lead….(More)”.

Achieving Digital Permanence


Raymond Blum with Betsy Beyer at ACM Queue: “Digital permanence has become a prevalent issue in society. This article focuses on the forces behind it and some of the techniques to achieve a desired state in which “what you read is what was written.” While techniques that can be imposed as layers above basic data stores—blockchains, for example—are valid approaches to achieving a system’s information assurance guarantees, this article won’t discuss them.

First, let’s define digital permanence and the more basic concept of data integrity.

Data integrity is the maintenance of the accuracy and consistency of stored information. Accuracy means that the data is stored as the set of values that were intended. Consistency means that these stored values remain the same over time—they do not unintentionally waver or morph as time passes.

Digital permanence refers to the techniques used to anticipate and then meet the expected lifetime of data stored in digital media. Digital permanence not only considers data integrity, but also targets guarantees of relevance and accessibility: the ability to recall stored data and to recall it with predicted latency and at a rate acceptable to the applications that require that information.

To illustrate the aspects of relevance and accessibility, consider two counterexamples: journals that were safely stored redundantly on Zip drives or punch cards may as well not exist if the hardware required to read the media into a current computing system isn’t available. Nor is it very useful to have receipts and ledgers stored on a tape medium that will take eight days to read in when you need the information for an audit on Thursday.
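The article’s working definition of integrity, “what you read is what was written,” is commonly enforced with cryptographic checksums recorded at write time and re-verified at read time. A minimal sketch in Python (the record contents and helper names are invented for illustration, not taken from the article):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest of the stored bytes."""
    return hashlib.sha256(data).hexdigest()

def is_intact(data: bytes, expected_digest: str) -> bool:
    """Verify at read time that the bytes match the write-time digest."""
    return fingerprint(data) == expected_digest

# Record the fingerprint when the data is written...
record = b"ledger entry: invoice #1042, $310.00"
stored_digest = fingerprint(record)

# ...and check it when the data is read back: any unintended
# change, however small, produces a different digest.
assert is_intact(record, stored_digest)
assert not is_intact(record + b"!", stored_digest)
```

Note that a checksum addresses only integrity; the relevance and accessibility guarantees discussed above (readable hardware, acceptable read latency) are separate concerns.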

The Multiple Facets of Digital Permanence

Human memory is the most subjective record imaginable. Common adages and clichés such as “He said, she said,” “IIRC (if I remember correctly),” and “You might recall” acknowledge a truth about memories—that they are based only on fragments of a one-time subjective perception of some objective state of affairs. What’s more, research indicates that people alter their memories over time. Over the years, as the need to provide a common ground for actions based on past transactions arises, so does the need for an objective record of fact—an independent “true” past. These records must be both immutable to a reasonable degree and durable. Media such as clay tablets, parchment, photographic prints, and microfiche became popular because they satisfied the “write once, read many” requirement of society’s record keepers.

Information storage in the digital age has evolved to fit the scale of access (frequent) and volume (high) by moving to storage media that record and deliver information in an almost intangible state. Such media have distinct advantages: electrical impulses and the polarity of magnetized ferric compounds can be moved around at great speed and density. These media, unfortunately, also score higher in another measure: fragility. Paper and clay can survive large amounts of neglect and punishment, but a stray electromagnetic discharge or microscopic rupture can render a digital library inaccessible or unrecognizable.

It stands to reason that storing permanent records in some immutable and indestructible medium would be ideal—something that, once altered to encode information, could never be altered again, either by an overwrite or destruction. Experience shows that such ideals are rarely realized; with enough force and will, the hardest stone can be broken and the most permanent markings defaced.

In considering and ensuring digital permanence, you want to guard against two different failures: the destruction of the storage medium, and a loss of the integrity or “truthfulness” of the records….(More)”.

Decoding Algorithms


Macalester College: “Ada Lovelace probably didn’t foresee the impact of the mathematical formula she published in 1843, now considered the first computer algorithm.

Nor could she have anticipated today’s widespread use of algorithms, in applications as different as the 2016 U.S. presidential campaign and Mac’s first-year seminar registration. “Over the last decade algorithms have become embedded in every aspect of our lives,” says Shilad Sen, professor in Macalester’s Math, Statistics, and Computer Science (MSCS) Department.

How do algorithms shape our society? Why is it important to be aware of them? And for readers who don’t know, what is an algorithm, anyway?…(More)”.
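For readers who don’t know: an algorithm is simply a finite, unambiguous recipe for computing a result from an input. Euclid’s method for the greatest common divisor, some two millennia older than Lovelace’s formula, is a classic illustration (sketched here in Python; the example is ours, not from the Macalester piece):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b).

    Each step is mechanical and unambiguous, and the remainder
    strictly shrinks, so the procedure is guaranteed to halt --
    the defining traits of an algorithm.
    """
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```

The same definition covers the far more consequential algorithms discussed above, from seminar registration to newsfeed ranking; only the inputs and the stakes differ.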

Leveraging and Sharing Data for Urban Flourishing


Testimony by Stefaan Verhulst before New York City Council Committee on Technology and the Commission on Public Information and Communication (COPIC): “We live in challenging times. From climate change to economic inequality, the difficulties confronting New York City, its citizens, and decision-makers are unprecedented in their variety, and also in their complexity and urgency. Our standard policy toolkit increasingly seems stale and ineffective. Existing governance institutions and mechanisms seem outdated and distrusted by large sections of the population.

To tackle today’s problems we need not only new solutions but also new methods for arriving at solutions. Data can play a central role in this task. Access to and the use of data in a trusted and responsible manner is central to meeting the challenges we face and enabling public innovation.

This hearing, called by the Technology Committee and the Commission on Public Information and Communication, is therefore timely and very important. It is my firm belief that rapid progress on developing an effective data-sharing framework is among the most important steps our New York City leaders can take to tackle the myriad of 21st-century challenges....

I am joined today by some of my distinguished NYU colleagues, Prof. Julia Lane and Prof. Julia Stoyanovich, who have worked extensively on the technical and privacy challenges associated with data sharing. To avoid duplicating our testimonies, I won’t focus on issues of privacy, trust, and how to establish a responsible data-sharing infrastructure, even though these are central considerations for the type of data-driven approaches I will discuss. I am, of course, happy to elaborate on these topics during the question and answer session.

Instead, I want to focus on four core issues associated with data collaboration. I phrase these issues as answers to four questions. For each of these questions, I also provide a set of recommended actions that this Committee could consider undertaking or studying.

The four core questions are:

  • First, why should NYC care about data and data sharing?
  • Second, if you build a data-sharing framework, will they come?
  • Third, how can we best engage the private sector when it comes to sharing and using their data?
  • And fourth, is technology the main (or best) answer?…(More)”.

Digital mile-markers provide navigation in cities


Springwise: “UK-based Maynard Design Consultancy has developed a system to help people navigate the changing landscape of city neighbourhoods. A prototype of a wayfinding solution for districts in London combines smart physical markers and navigational apps. The physical markers, inspired by traditional mile markers, include a digital screen. They provide real-time information, including daily news and messages from local businesses. The markers also track how people use the park, providing valuable information to the city and urban planners. The partnering apps provide up-to-date information about the changing environment in the city, such as on-going construction and delays due to large-scale events.

Unlike traditional smartphone-based navigational apps, this concept uses technology to help us reconnect with our surroundings, Maynard Design said.

The proposal won the Smart London District Challenge competition set by the Institute for Sustainability. Maynard is currently looking for partner companies to pilot its concept.

Takeaway: The Maynard design represents the latest efforts to use smartphones to amplify public safety announcements, general information and local businesses. The concept moves past traditional wayfinding markers to link people to a smart-city grid. By tracking how people use parks and other urban spaces, the markers will provide valuable insight for city officials. We expect more innovations like this as cities increasingly move toward seamless communication between services and city residents, aided by smart technologies. Over the past several months, we have seen technology to connect drivers to parking spaces and a prototype pavement that can change functions based on people’s needs….(More)”

The Future of FOIA in an Open Government World: Implications of the Open Government Agenda for Freedom of Information Policy and Implementation


Paper by Daniel Berliner, Alex Ingrams and Suzanne J. Piotrowski: “July 4, 2016 marked the fiftieth anniversary of the 1966 Freedom of Information Act of the United States. Freedom of Information (FOI) has become a vital element of the American political process, become recognized as a core value of democracy, and helped to inspire similar laws and movements around the world. FOI has always faced myriad challenges, including resistance, evasion, and poor implementation and enforcement. Yet the last decade has brought a change of a very different form to the evolution of FOI policy—the emergence of another approach to transparency that is in some ways similar to FOI, and in other ways distinct: open government. The open government agenda, driven by technological developments and motivated by a broader conception of transparency, today rivals, or by some measures, even eclipses FOI in terms of political attention and momentum. What have been the consequences of these trends? How does the advent of new technologies and new agendas shape the transparency landscape?

The political and policy contexts for FOI have fundamentally shifted due to the rise of the open government reform agenda. FOI was at one point the primary tool used to promote governance transparency. FOI is now just one good governance tool in an increasingly crowded field of transparency policy areas. Focus is increasingly shifting toward technology-enabled open data reforms. While many open government reformers see these as positive developments, many traditional FOI proponents have raised concerns. With a few notable exceptions, the academic literature has been silent on this issue. We offer a systematic framework for understanding the potential consequences—both positive and negative—of the open government agenda for FOI policy and implementation….(More)”.

Assessing the Legitimacy of “Open” and “Closed” Data Partnerships for Sustainable Development


Paper by Andreas Rasche, Mette Morsing and Erik Wetter in Business and Society: “This article examines the legitimacy attached to different types of multi-stakeholder data partnerships occurring in the context of sustainable development. We develop a framework to assess the democratic legitimacy of two types of data partnerships: open data partnerships (where data and insights are mainly freely available) and closed data partnerships (where data and insights are mainly shared within a network of organizations). Our framework specifies criteria for assessing the legitimacy of relevant partnerships with regard to their input legitimacy as well as their output legitimacy. We demonstrate which particular characteristics of open and closed partnerships can be expected to influence an analysis of their input and output legitimacy….(More)”.

2018 Global Go To Think Tank Index Report


Report by James G. McGann: “The Think Tanks and Civil Societies Program (TTCSP) of the Lauder Institute at the University of Pennsylvania conducts research on the role policy institutes play in governments and civil societies around the world. Often referred to as the “think tanks’ think tank,” TTCSP examines the evolving role and character of public policy research organizations. Over the last 27 years, the TTCSP has developed and led a series of global initiatives that have helped bridge the gap between knowledge and policy in critical policy areas such as international peace and security, globalization and governance, international economics, environmental issues, information and society, poverty alleviation, and healthcare and global health. These international collaborative efforts are designed to establish regional and international networks of policy institutes and communities that improve policy making while strengthening democratic institutions and civil societies around the world.

The TTCSP works with leading scholars and practitioners from think tanks and universities in a variety of collaborative efforts and programs, and produces the annual Global Go To Think Tank Index that ranks the world’s leading think tanks in a variety of categories. This is achieved with the help of a panel of over 1,796 peer institutions and experts from the print and electronic media, academia, public and private donor institutions, and governments around the world. We have strong relationships with leading think tanks around the world, and our annual Think Tank Index is used by academics, journalists, donors and the public to locate and connect with the leading centers of public policy research around the world. Our goal is to increase the profile and performance of think tanks and raise the public awareness of the important role think tanks play in governments and civil societies around the globe.”…(More)”.

Evaluation and accurate diagnoses of pediatric diseases using artificial intelligence


Paper by Huimin Xia et al. in Nature Medicine: “Artificial intelligence (AI)-based methods have emerged as powerful tools to transform medical care. Although machine learning classifiers (MLCs) have already demonstrated strong performance in image-based diagnoses, analysis of diverse and massive electronic health record (EHR) data remains challenging. Here, we show that MLCs can query EHRs in a manner similar to the hypothetico-deductive reasoning used by physicians and unearth associations that previous statistical methods have not found. Our model applies an automated natural language processing system using deep learning techniques to extract clinically relevant information from EHRs. In total, 101.6 million data points from 1,362,559 pediatric patient visits presenting to a major referral center were analyzed to train and validate the framework.

Our model demonstrates high diagnostic accuracy across multiple organ systems and is comparable to experienced pediatricians in diagnosing common childhood diseases. Our study provides a proof of concept for implementing an AI-based system as a means to aid physicians in tackling large amounts of data, augment diagnostic evaluations, and provide clinical decision support in cases of diagnostic uncertainty or complexity. Although this impact may be most evident in areas where healthcare providers are in relative shortage, the benefits of such an AI system are likely to be universal….(More)”.
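The paper’s actual pipeline (deep-learning NLP over millions of EHR data points, feeding a machine-learning classifier) is far beyond a short example, but the core idea of mapping clinical text to a diagnosis via learned word associations can be caricatured with a toy bag-of-words scorer. Everything here, including the vocabulary, weights, and diagnosis labels, is invented for illustration and is not from the study:

```python
from collections import Counter

# Toy "learned" associations: word -> per-diagnosis weights.
# In the study these come from deep-learning NLP over EHRs;
# here they are hand-invented to show the scoring mechanic.
WEIGHTS = {
    "cough":    {"bronchitis": 2.0, "gastroenteritis": 0.0},
    "fever":    {"bronchitis": 1.0, "gastroenteritis": 1.0},
    "vomiting": {"bronchitis": 0.0, "gastroenteritis": 2.0},
}

def diagnose(note: str) -> str:
    """Score each candidate diagnosis by summing word weights."""
    words = Counter(note.lower().split())
    scores = {"bronchitis": 0.0, "gastroenteritis": 0.0}
    for word, count in words.items():
        for dx, weight in WEIGHTS.get(word, {}).items():
            scores[dx] += weight * count
    return max(scores, key=scores.get)

print(diagnose("persistent cough and mild fever"))  # bronchitis
```

A real system would learn these weights from labeled visits and handle negation, misspellings, and free-text structure, which is precisely where the paper’s NLP extraction layer does its work.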

How Tech Utopia Fostered Tyranny


Jon Askonas at The New Atlantis: “The rumors spread like wildfire: Muslims were secretly lacing a Sri Lankan village’s food with sterilization drugs. Soon, a video circulated that appeared to show a Muslim shopkeeper admitting to drugging his customers — he had misunderstood the question that was angrily put to him. Then all hell broke loose. Over a several-day span, dozens of mosques and Muslim-owned shops and homes were burned down across multiple towns. In one home, a young journalist was trapped, and perished.

Mob violence is an old phenomenon, but the tools encouraging it, in this case, were not. As the New York Times reported in April, the rumors were spread via Facebook, whose newsfeed algorithm prioritized high-engagement content, especially videos. “Designed to maximize user time on site,” as the Times article describes, the newsfeed algorithm “promotes whatever wins the most attention. Posts that tap into negative, primal emotions like anger or fear, studies have found, produce the highest engagement, and so proliferate.” On Facebook in Sri Lanka, posts with incendiary rumors had among the highest engagement rates, and so were among the most highly promoted content on the platform. Similar cases of mob violence have taken place in India, Myanmar, Mexico, and elsewhere, with misinformation spread mainly through Facebook and the messaging tool WhatsApp.
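The dynamic the Times describes, a feed that “promotes whatever wins the most attention,” can be sketched as a simple engagement-weighted ranking. The posts, counts, and weights below are invented for illustration; Facebook’s actual ranking model is proprietary and far more complex:

```python
# Each post carries raw engagement counts; the feed sorts by a
# weighted score, so high-arousal content rises regardless of accuracy.
posts = [
    {"text": "local charity raises funds", "likes": 40, "shares": 2,  "comments": 5},
    {"text": "incendiary rumor (false)",   "likes": 90, "shares": 60, "comments": 120},
]

def engagement_score(post: dict) -> float:
    # Comments and shares weighted above likes: they signal
    # "sparking conversation," the very behavior being optimized.
    return post["likes"] + 3 * post["shares"] + 5 * post["comments"]

feed = sorted(posts, key=engagement_score, reverse=True)
print(feed[0]["text"])  # the rumor outranks the accurate post
```

Nothing in the objective rewards accuracy, which is the structural point: a ranker optimized purely for engagement will surface incendiary misinformation whenever it outperforms sober content on those metrics.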

This is in spite of Facebook’s decision in January 2018 to tweak its algorithm, apparently to prevent the kind of manipulation we saw in the 2016 U.S. election, when posts and election ads originating from Russia reportedly showed up in newsfeeds of up to 126 million American Facebook users. The company explained that the changes to its algorithm will mean that newsfeeds will be “showing more posts from friends and family and updates that spark conversation,” and “less public content, including videos and other posts from publishers or businesses.” But these changes, which Facebook had tested out in countries like Sri Lanka in the previous year, may actually have exacerbated the problem — which is that incendiary content, when posted by friends and family, is guaranteed to “spark conversation” and therefore to be prioritized in newsfeeds. This is because “misinformation is almost always more interesting than the truth,” as Mathew Ingram provocatively put it in the Columbia Journalism Review.

How did we get here, from Facebook’s mission to “give people the power to build community and bring the world closer together”? Riot-inducing “fake news” and election meddling are obviously far from what its founders intended for the platform. Likewise, Google’s founders surely did not build their search engine with the intention of its being censored in China to suppress free speech, and yet, after years of refusing this demand from Chinese leadership, Google has recently relented rather than pull their search engine from China entirely. And YouTube’s creators surely did not intend their feature that promotes “trending” content to help clickbait conspiracy-theory videos go viral.

These outcomes — not merely unanticipated by the companies’ founders but outright opposed to their intentions — are not limited to social media. So far, Big Tech companies have presented issues of incitement, algorithmic radicalization, and “fake news” as merely bumps on the road of progress, glitches and bugs to be patched over. In fact, the problem goes deeper, to fundamental questions of human nature. Tools based on the premise that access to information will only enlighten us and social connectivity will only make us more humane have instead fanned conspiracy theories, information bubbles, and social fracture. A tech movement spurred by visions of libertarian empowerment and progressive uplift has instead fanned a global resurgence of populism and authoritarianism.

Despite the storm of criticism, Silicon Valley has still failed to recognize in these abuses a sharp rebuke of its sunny view of human nature. It remains naïvely blind to how its own aspirations for social engineering are on a spectrum with the tools’ “unintended” uses by authoritarian regimes and nefarious actors….(More)”.