Humanizing technology


Kaliya Young at Open Democracy: “Can we use the internet to enhance deep human connection and support the emergence of thriving communities in which everyone’s needs are met and people’s lives are filled with joy and meaning?….

Our work on ‘technical’ technologies won’t generate broad human gains unless we invest an equal amount of time, energy and resources in the development of social and emotional technologies that drive how our whole society is organized and how we work together. I think we are actually on the cusp of having the tools, understanding and infrastructure to make that happen, without all our ideas and organizing being intermediated by giant corporations. But what does that mean in practice?

I think two things are absolutely vital.

First of all, how do we connect all the people and all the groups that want to align their goals in pursuit of social justice, deep democracy, and the development of new economies that share wealth and protect the environment? How are people supported to protect their own autonomy while also working with multiple other groups in processes of joint work and collective action?

One key element of the answer to that question is to generate a digital identity that is not under the control of a corporation, an organization or a government.

I have been co-leading the community surrounding the Internet Identity Workshop for the last 12 years. After many explorations of the techno-possibility landscape we have finally made some breakthroughs that will lay the foundations of a real internet-scale infrastructure to support what are called ‘user-centric’ or ‘self-sovereign’ identities.

This infrastructure consists of a network with two different types of nodes—people and organizations—with each individual being able to join lots of different groups. But regardless of how many groups they join, people will need a digital identity that is not owned by Twitter, Amazon, Apple, Google or Facebook. That’s the only way they will be able to control their own autonomous interactions on the internet. If open standards are not created for this critical piece of infrastructure then we will end up in a future where giant corporations control all of our identities. In many ways we are in this future now.

This is where something called ‘Shared Ledger Technology’ or SLT comes in—more commonly known as ‘blockchain’ or ‘distributed ledger technology.’ SLT represents a huge innovation in terms of databases that can be read by anyone and which are highly resistant to tampering—meaning that data cannot be erased or changed once entered. At the moment there’s a lot of work going on to design the encryption key management that’s necessary to support the creation and operation of these unique private channels of connection and communication between individuals and organizations. The Sovrin Foundation has built an SLT specifically for digital identity key management, and has donated the code required to the Hyperledger Foundation under ‘Project Indy.’…
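
The tamper resistance Young describes comes from hash-chaining: each entry commits to every entry before it, so altering an old record invalidates everything that follows. The sketch below is purely illustrative (it is not the Sovrin or Indy implementation, and the `did:example` identifiers are made up):

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous entry's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class Ledger:
    """Append-only ledger: each entry commits to everything before it."""
    def __init__(self):
        self.chain = []  # list of (record, hash) pairs

    def append(self, record: dict) -> None:
        prev = self.chain[-1][1] if self.chain else "0" * 64
        self.chain.append((record, block_hash(record, prev)))

    def verify(self) -> bool:
        """Recompute every hash from the start; any edit breaks the chain."""
        prev = "0" * 64
        for record, h in self.chain:
            if block_hash(record, prev) != h:
                return False
            prev = h
        return True

ledger = Ledger()
ledger.append({"id": "did:example:alice", "key": "abc123"})
ledger.append({"id": "did:example:bob", "key": "def456"})
assert ledger.verify()

# Tampering with an earlier record breaks every later hash check.
ledger.chain[0] = ({"id": "did:example:mallory", "key": "abc123"},
                   ledger.chain[0][1])
assert not ledger.verify()
```

A real identity ledger adds digital signatures, distributed consensus, and replication across many nodes; this toy only shows why data, once entered, cannot be silently changed.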

To put it simply, technical technologies are easier to turn in the direction of democracy and social justice if they are developed and applied with social and emotional intelligence. Combining all three together is the key to using technology for liberating ends….(More)”.

Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation


Paper by Jack Balkin: “We have now moved from the early days of the Internet to the Algorithmic Society. The Algorithmic Society features the use of algorithms, artificial intelligence agents, and Big Data to govern populations. It also features digital infrastructure companies, large multi-national social media platforms, and search engines that sit between traditional nation states and ordinary individuals, and serve as special-purpose governors of speech.

The Algorithmic Society presents two central problems for freedom of expression. First, Big Data allows new forms of manipulation and control, which private companies will attempt to legitimate and insulate from regulation by invoking free speech principles. Here First Amendment arguments will likely be employed to forestall digital privacy guarantees and prevent consumer protection regulation. Second, privately owned digital infrastructure companies and online platforms govern speech much as nation states once did. Here the First Amendment, as normally construed, is simply inadequate to protect the practical ability to speak.

The first part of the essay describes how to regulate online businesses that employ Big Data and algorithmic decision making consistent with free speech principles. Some of these businesses are “information fiduciaries” toward their end-users; they must exercise duties of good faith and non-manipulation. Other businesses that are not information fiduciaries have a duty not to engage in “algorithmic nuisance”: they may not externalize the costs of their analysis and use of Big Data onto innocent third parties.

The second part of the essay turns to the emerging pluralist model of online speech regulation. This pluralist model contrasts with the traditional dyadic model in which nation states regulated the speech of their citizens.

In the pluralist model, territorial governments continue to regulate speech directly. But they also attempt to coerce or co-opt owners of digital infrastructure to regulate the speech of others. This is “new school” speech regulation….(More)”.

Data’s big moment? Here’s what you need to know


Opinion piece by Claire Melamed and Mahamudu Bawumia: “Knowledge is power, and knowledge will empower humanity to tackle the most serious challenges of our time. We are all reliant on accurate knowledge to achieve the collective ambition of the Sustainable Development Goals, and the clock is ticking.

So let’s test your knowledge: A) Are boys or girls under 2 years old more likely to be stunted? B) Are slum-dwellers more likely to be young or old? C) What proportion of disabled people are unemployed? D) What proportion of migrants have birth certificates?

Answer — no one knows.

Data is the story of people’s lives in numbers. Data allows researchers, campaigners, and policymakers to understand how societies work, who gains, and who loses from changes and crises. If you’re not in the data, you’re not in the picture — and too many people are still uncounted. There’s a huge need for concerted efforts to uncover the realities of life for the “left behind” in 2017, and a coalition of partners have been working to disaggregate data on gender, race, age, disabilities, migratory status, and more.
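
Disaggregation itself is mechanically simple; the hard part, as the authors note, is collecting the rows in the first place. A minimal sketch with invented records and illustrative field names (no real survey is being quoted here):

```python
from collections import defaultdict

# Hypothetical survey rows; fields and values are illustrative only.
records = [
    {"sex": "F", "age_group": "0-2", "stunted": True},
    {"sex": "M", "age_group": "0-2", "stunted": False},
    {"sex": "F", "age_group": "0-2", "stunted": False},
    {"sex": "M", "age_group": "0-2", "stunted": True},
    {"sex": "M", "age_group": "0-2", "stunted": True},
]

def disaggregate(rows, key, outcome):
    """Rate of `outcome` within each value of `key`."""
    totals, hits = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row[key]] += 1
        hits[row[key]] += bool(row[outcome])
    return {group: hits[group] / totals[group] for group in totals}

rates = disaggregate(records, "sex", "stunted")
# 1 of 2 girls and 2 of 3 boys in the toy data are stunted.
```

Question A above is unanswerable precisely because, for much of the world, the `records` list does not exist.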

But data has a PR problem. Much as it used to be fine to say “I hate mathematics,” today we all encounter people who think numbers are a distraction from the real business of helping people. But we cannot turn our backs on the greatest renewable resource of our time — the resource that will inform and guide humanity to both define and solve our problems….(More)”.

Building the Learning City


Daniel Castro at GovTech: “…Like other technologies, smart cities will evolve and mature over time. The earliest will provide basic insights from data and enable local leaders to engage in evidence-based governance. These efforts will be important, but they will represent only incremental change from what cities have already been doing. For example, Baltimore created its CitiStat program in 1999 to measure all municipal functions and improve oversight and accountability of city agencies. Early smart cities will have substantially more data at their disposal, but they will not necessarily use this data in fundamentally new ways.

The second stage of smart cities will use predictive analytics to identify patterns and forecast trends. These types of insights will be especially valuable to city planners and local officials responsible for improving municipal services and responding to changing demands. These cities will reduce downtime on critical municipal infrastructure by performing preventive maintenance on vehicles, bridges and buildings, and more quickly intervene when public health and safety issues arise. This stage will rely on powerful data-driven technologies, such as the systems that enable Netflix to offer movie recommendations and Amazon to suggest additional products for customers.

The third stage of smart cities will focus on using “prescriptive analytics” to use data to optimize processes automatically. Whereas the second stage of smart cities will be primarily about using data to supply insights about the future that will allow city leaders to evaluate different choices, this third stage will be about relying on algorithms to make many of these decisions independently. Much like a system of smart traffic signals uses real-time data to optimize traffic flow, these algorithms will help to automate more government functions and increase the productivity of municipal employees.
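
The step from predictive to prescriptive analytics can be sketched with Castro's own traffic-signal example: stage two forecasts arrival rates, and stage three lets an algorithm choose the green-time split itself. All numbers and names below are illustrative assumptions, not a real signal-control system:

```python
def expected_queue(green_ns: float, arrivals_ns: float, arrivals_ew: float,
                   capacity: float = 1.0, cycle: float = 1.0) -> float:
    """Leftover queue per cycle if `green_ns` of the cycle goes north-south."""
    served_ns = green_ns * capacity
    served_ew = (cycle - green_ns) * capacity
    return max(arrivals_ns - served_ns, 0) + max(arrivals_ew - served_ew, 0)

def best_split(arrivals_ns: float, arrivals_ew: float, steps: int = 100) -> float:
    """Prescriptive step: search candidate splits, return the least-queue one."""
    candidates = [i / steps for i in range(1, steps)]
    return min(candidates, key=lambda g: expected_queue(g, arrivals_ns, arrivals_ew))

# Stage-two forecast: north-south arrivals will be triple east-west.
split = best_split(arrivals_ns=0.6, arrivals_ew=0.2)
```

The decision (the split) comes out of the algorithm rather than a human weighing options, which is the distinction the paragraph above draws between stages two and three.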

At all three stages of smart city development, there is an opportunity for city leaders to look beyond local needs and consider how they can design a smart city that will be part of a larger network of cities that share and learn from one another. On its own, a smart city can use data to track local trends, but as part of a network, a smart city can benchmark itself against a set of similar peers. For example, water and waste management departments can compare metrics to assess their relative performance and identify opportunities for change.

If cities hope to develop into learning cities, they can begin working jointly with their peers by participating in forums such as the Global City Teams Challenge, an initiative to bring together government and industry stakeholders working on common smart city problems. But longer-term change will require city leaders to reorient their planning to consider not only the needs of their city, but also how they fit into the larger network….(More)”

The stuff and nonsense of open data in government


Ian L. Boyd at Nature: “….Government is mostly involved in creating or implementing policies concerning how people live their lives. Making government data open could be seen as the equivalent of open government itself because data are increasingly the ‘stuff’ of government. But there are moral and ethical issues concerning the data owned by government and there is an issue of trust about how government handles data. The more obvious issues concerning data that are tagged specifically with the identity of an individual are probably well covered, but we all know that this is not sufficient. We need to be aware of the pitfalls about opening government without appropriate assurances around the ethical and moral use of the data it holds.

It would be wrong, however, to always see the risks presented by open data and not also see the benefits. Like all innovations it is important to design the application of the innovation to maximise benefits and minimise risks. The intelligent use of data could revolutionise government and place a lot more control in the hands of individuals by, for example, ensuring that everybody can have instant access to all the information that government holds about them and can make their own decisions about who should be allowed to see that information and what uses can be made of it. There is a big drive in government towards this kind of model of data control. Commercial operators, such as major retailers, often hold a lot of data about individuals. They should also have to move towards the empowerment of individuals to say what should or should not be done with data that concerns them….(More)”.

A framework for the free flow of non-personal data in the EU


European Commission Press Release: “To unlock the full potential of the EU data economy, the Commission is proposing a new set of rules to govern the free flow of non-personal data in the EU. Together with the already existing rules for personal data, the new measures will enable the storage and processing of non-personal data across the Union to boost the competitiveness of European businesses and to modernise public services in an effective EU single market for data services. Removing data localisation restrictions is considered the most important factor for the data economy to double its value to 4% of GDP in 2020….

The framework proposes:

  1. The principle of free flow of non-personal data across borders: Member States can no longer oblige organisations to locate the storage or processing of data within their borders. Restrictions will only be justified for reasons of public security. Member States will have to notify the Commission of new or existing data localisation requirements. The free flow of non-personal data will make it easier and cheaper for businesses to operate across borders without having to duplicate IT systems or to save the same data in different places.
  2. The principle of data availability for regulatory control: Competent authorities will be able to exercise their rights of access to data wherever it is stored or processed in the EU. The free flow of non-personal data will not affect the obligations for businesses and other organisations to provide certain data for regulatory control purposes.
  3. The development of EU codes of conduct to remove obstacles to switching between service providers of cloud storage and to porting data back to users’ own IT systems…. (Full press release and all documents related to the package)”

Data Sharing Vital in Fight Against Childhood Obesity


Digit: “The Data Lab is teaming up with UNICEF in a bid to encourage data sharing in public and private organisations to help solve pressing public problems. The first collaborative project aims to tackle the issue of childhood obesity in Scotland, with between 29% and 33% of children aged 2-15 at risk of serious obesity-related health complications.

According to UNICEF, solving some of the most complex problems affecting children around the world will require access to different data sets and expertise from diverse sectors. The rapid rise in the availability of quality data offers a wealth of information to address complex problems affecting children. The charity has identified an opportunity to tap into this potential through collaborative working, prompting the development of DataCollaboratives.org in partnership with The Governance Lab at the NYU Tandon School of Engineering, and the Omidyar Network.  The aim for DataCollaboratives is to encourage organisations from different sectors, including private companies, research institutions, government agencies and others, to exchange and share data to help solve public problems.

The initiative is now being promoted in Scotland through UNICEF’s partnership with The Data Lab, who will work together to deliver a Data Collaboratives hub in Scotland where data scientists and strategists will work on some of the most important issues facing children around the world. Finding solutions to these problems has the potential to transform the lives of some of the most disadvantaged children in Scotland, the UK, and around the globe….(More)”.

Blockchain: Blueprint for a New Economy


Book by Melanie Swan: “Bitcoin is starting to come into its own as a digital currency, but the blockchain technology behind it could prove to be much more significant. This book takes you beyond the currency (“Blockchain 1.0”) and smart contracts (“Blockchain 2.0”) to demonstrate how the blockchain is in position to become the fifth disruptive computing paradigm after mainframes, PCs, the Internet, and mobile/social networking.

Author Melanie Swan, Founder of the Institute for Blockchain Studies, explains that the blockchain is essentially a public ledger with potential as a worldwide, decentralized record for the registration, inventory, and transfer of all assets—not just finances, but property and intangible assets such as votes, software, health data, and ideas.

Topics include:

  • Concepts, features, and functionality of Bitcoin and the blockchain
  • Using the blockchain for automated tracking of all digital endeavors
  • Enabling censorship-resistant organizational models
  • Creating a decentralized digital repository to verify identity
  • Possibility of cheaper, more efficient services traditionally provided by nations
  • Blockchain for science: making better use of the data-mining network
  • Personal health record storage, including access to one’s own genomic data
  • Open access academic publishing on the blockchain…(More)”.
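
Swan's framing of the blockchain as a worldwide registry for the "registration, inventory, and transfer of all assets" can be sketched as an append-only event log from which ownership is derived by replay rather than overwritten in place. This is a toy, single-node illustration (a real blockchain adds signatures, consensus, and replication), and the deed text is invented:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Content-address an asset so the registry stores a hash, not the asset."""
    return hashlib.sha256(content).hexdigest()

class Registry:
    """Toy public registry: an ordered log of register/transfer events."""
    def __init__(self):
        self.log = []

    def register(self, content: bytes, owner: str) -> str:
        asset_id = fingerprint(content)
        self.log.append(("register", asset_id, owner))
        return asset_id

    def transfer(self, asset_id: str, new_owner: str) -> None:
        self.log.append(("transfer", asset_id, new_owner))

    def owner_of(self, asset_id: str):
        """Replay the full log: ownership is derived, never edited in place."""
        current = None
        for event, aid, who in self.log:
            if aid == asset_id:
                current = who
        return current

registry = Registry()
deed = registry.register(b"deed to 12 Elm St", owner="alice")
registry.transfer(deed, new_owner="bob")
```

Because the log is append-only, the full provenance of every asset (not just its current owner) remains auditable, which is what makes the same pattern plausible for votes, health records, or identity claims.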

The Promise of Evidence-Based Policymaking


Final Report by the Commission on Evidence-Based Policymaking: “…There are many barriers to the effective use of government data to generate evidence. Better access to these data holds the potential for substantial gains for society. The Commission’s recommendations recognize that the country’s laws and practices are not currently optimized to support the use of data for evidence building, nor in a manner that best protects privacy. To correct these problems, the Commission makes the following recommendations:

  • Establish a National Secure Data Service to facilitate access to data for evidence building while ensuring privacy and transparency in how those data are used. As a state-of-the-art resource for improving government’s capacity to use the data it already collects, the National Secure Data Service will be able to temporarily link existing data and provide secure access to those data for exclusively statistical purposes in connection with approved projects. The National Secure Data Service will do this without creating a data clearinghouse or warehouse.
  • Require stringent privacy qualifications for acquiring and combining data for statistical purposes at the National Secure Data Service to ensure that data continue to be effectively protected while improving the government’s ability to understand the impacts of programs on a wider range of outcomes. At the same time, consider additional statutory changes to enable ongoing statistical production that, under the same stringent privacy qualifications, may make use of combined data.
  • Review and, where needed, revise laws authorizing Federal data collection and use to ensure that limited access to administrative and survey data is possible to return benefits to the public through improved programs and policies, but only under strict privacy controls.
  • Ensure state-collected quarterly earnings data are available for statistical purposes, including to support the many evidence-building activities for which earnings are an important outcome.
  • Make additional state-collected data about Federal programs available for evidence building. Where appropriate, states that administer programs with substantial Federal investment should in return provide the data necessary for evidence building.
  • Develop a uniform process for external researchers to apply and qualify for secure access to confidential government data for evidence-building purposes while protecting privacy by carefully restricting data access to qualified and approved researchers…(More)”

Big Data: A New Empiricism and its Epistemic and Socio-Political Consequences


Chapter by Gernot Rieder and Judith Simon in Berechenbarkeit der Welt?: “The paper investigates the rise of Big Data in contemporary society. It examines the most prominent epistemological claims made by Big Data proponents, calls attention to the potential socio-political consequences of blind data trust, and proposes a possible way forward. The paper’s main focus is on the interplay between an emerging new empiricism and an increasingly opaque algorithmic environment that challenges democratic demands for transparency and accountability. It concludes that a responsible culture of quantification requires epistemic vigilance as well as a greater awareness of the potential dangers and pitfalls of an ever more data-driven society….(More)”.