Nick Paumgarten at the New Yorker: “…Parcak is a pioneer in the use of remote sensing, via satellite, to find and map potential locations that would otherwise be invisible to us. Variations in the chemical composition of the earth reveal the ghost shadows of ancient walls and citadels, watercourses and planting fields. The nifty kid-friendly name for all this is “archeology from space,” which also happens to be the title of Parcak’s new book. That’s a bit of a misnomer, because, technically, the satellites in question are in the mid-troposphere, and also the archeology still happens on, or under, the ground. In spite of the whiz-bang abracadabra of the multispectral imagery, Parcak is, at heart, a shovel bum… Another estimate of Parcak’s, based on satellite data: there are roughly fifty million unmapped archeological sites around the world. Many, if not most, will be gone or corrupted by 2040, she says, the threats being not just looting but urban development, illegal construction, and climate change. In 2016, Parcak won the TED Prize, a grant of a million dollars; she used it to launch a project called GlobalXplorer, a crowdsourcing platform by which citizen Indiana Joneses can scrutinize satellite maps and identify potential new sites, adding these to a database without publicly revealing the coördinates. The idea is to deploy more eyeballs (and, ultimately, more benevolent shovel bums) in the race against carbon and greed…(More)”.
The “Tokenization” of the eParticipation in Public Governance: An Opportunity to Hack Democracy
Chapter by Francisco Luis Benítez Martínez, María Visitación Hurtado Torres and Esteban Romero Frías: “Currently, Distributed Ledger Technologies (DLTs), and especially blockchain technology, present an excellent opportunity for public institutions to transform the channels of citizen participation and reinvigorate democratic processes. These technologies simplify processes and make it possible to safely and securely manage the data stored in their records. This guarantees the transmission and public transparency of information, and thus leads to the development of a new citizen governance model using technology such as a BaaS (Blockchain as a Service) platform. G-Cloud solutions would facilitate faster deployment in cities and provide scalability to foster the creation of Smart Citizens within the philosophy of Open Government. The development of an eParticipation model that can tokenize the actions and processes that citizens currently exercise in democratic environments is an opportunity to guarantee greater participation and thus manage more effective local democratic spaces. Therefore, a Blockchain solution in eDemocracy platforms is an exciting new opportunity to claim a new pattern of management amongst the agents that participate in the public sphere….(More)”.
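The chapter's idea of recording citizens' participation actions as tokens can be sketched, very loosely, as a hash-linked log. This is a toy illustration only: the chapter describes real BaaS/DLT platforms, and every name below is hypothetical, not taken from the authors' design.

```python
import hashlib
import json
import time

class ParticipationLedger:
    """Toy hash-linked log of citizen participation 'tokens'.

    Illustrative sketch only -- a real eParticipation platform would run
    on an actual blockchain service, not an in-memory list.
    """

    def __init__(self):
        self.blocks = []

    def record(self, citizen_id: str, action: str) -> dict:
        # Each entry commits to the previous one via its hash,
        # which is what makes tampering detectable.
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = {"citizen": citizen_id, "action": action,
                   "ts": time.time(), "prev": prev_hash}
        payload["hash"] = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()
        self.blocks.append(payload)
        return payload

    def verify(self) -> bool:
        # Recompute every hash and check the chain of prev pointers.
        prev = "0" * 64
        for block in self.blocks:
            body = {k: v for k, v in block.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if block["prev"] != prev or block["hash"] != expected:
                return False
            prev = block["hash"]
        return True
```

The append-only, verifiable structure is what would let participation records stay transparent without a single administrator being able to rewrite them.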
Identity in the Decentralized Web
Blog by Jim Nelson: “The idea is that web sites will verify you much as a bartender checks your ID before pouring a drink. The bar doesn’t store a copy of your card and the bartender doesn’t look at your name or address; only your age is pertinent to receive service. The next time you enter the bar the bartender once again asks for proof of age, which you may or may not relinquish. That’s the promise of self-sovereign identity.
At the Decentralized Web Summit, questions and solutions were bounced around in the hopes of solving this fundamental problem. Developers spearheading the next web hashed out the criteria for decentralized identity, including:
- secure: to prevent fraud, maintain privacy, and ensure trust between all parties
- self-sovereign: individual ownership of private information
- consent: fine-tuned control over what information third-parties are privy to
- directed identity: manage multiple identities for different contexts (for example, your doctor can access certain aspects while your insurance company accesses others)
- and, of course, decentralized: no central authority or governing body holds private keys or generates identifiers
One problem with decentralized identity is that these criteria often compete, pulling in opposite directions.

For example, while security seems like a no-brainer, with self-sovereign identity the end-user is in control (and not Facebook, Google, or Twitter). It’s incumbent on them to secure their information. This raises questions of key management, data storage practices, and so on. Facebook, Google, and Twitter pay full-time engineers to do this job; handing that responsibility to end-users shifts the burden to someone who may not be so technically savvy. The inconvenience of key management and such also creates more hurdles for widespread adoption of the decentralized web.
The good news is, there are many working proposals today attempting to solve the above problems. One of the more promising is DID (Decentralized Identifier).
A DID is simply a URI, a familiar piece of text to most people nowadays. Each DID references a record stored in a blockchain. DIDs are not tied to any particular blockchain, and so they’re interoperable with existing and future technologies. DIDs are cryptographically secure as well.
DIDs require no central authority to produce or validate. If you want a DID, you can generate one yourself, or as many as you want. In fact, you should generate lots of them. Each unique DID gives the user fine-grained control over what personal information is revealed when interacting with a myriad of services and people.
If you’re interested to learn more, I recommend reading Michiel Mulders’ article on DIDs, “the Internet’s ‘missing identity layer’.” The DID working technical specification is being developed by the W3C. And those looking for code and community, check out the Decentralized Identity Foundation…(More)”.
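As a rough illustration of the "generate as many as you want" idea: a DID is just a URI of the form `did:<method>:<identifier>`. The sketch below uses the `did:example` placeholder method from the W3C spec and random identifiers; a conformant method such as `did:key` would instead derive the identifier from a public key.

```python
import base64
import secrets

def generate_did(method: str = "example") -> str:
    # Toy illustration only: real DID methods derive the identifier
    # deterministically (e.g. from a public key) per their method spec.
    raw = secrets.token_bytes(16)
    identifier = base64.urlsafe_b64encode(raw).decode().rstrip("=")
    return f"did:{method}:{identifier}"

# One DID per context, as the article suggests: your doctor, your
# insurer, and your bar each see a different identifier.
for _ in range(3):
    print(generate_did())
```

Because each identifier is independent, no third party can correlate your activity across contexts by the DID alone.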
Soon, satellites will be able to watch you everywhere all the time
Christopher Beam at MIT Technology Review: “In 2013, police in Grants Pass, Oregon, got a tip that a man named Curtis W. Croft had been illegally growing marijuana in his backyard. So they checked Google Earth. Indeed, the four-month-old satellite image showed neat rows of plants growing on Croft’s property. The cops raided his place and seized 94 plants.
In 2018, Brazilian police in the state of Amapá used real-time satellite imagery to detect a spot where trees had been ripped out of the ground. When they showed up, they discovered that the site was being used to illegally produce charcoal, and arrested eight people in connection with the scheme.
Chinese government officials have denied or downplayed the existence of Uighur reeducation camps in Xinjiang province, portraying them as “vocational schools.” But human rights activists have used satellite imagery to show that many of the “schools” are surrounded by watchtowers and razor wire.
Every year, commercially available satellite images are becoming sharper and taken more frequently. In 2008, there were 150 Earth observation satellites in orbit; by now there are 768. Satellite companies don’t offer 24-hour real-time surveillance, but if the hype is to be believed, they’re getting close. Privacy advocates warn that innovation in satellite imagery is outpacing the US government’s (to say nothing of the rest of the world’s) ability to regulate the technology. Unless we impose stricter limits now, they say, one day everyone from ad companies to suspicious spouses to terrorist organizations will have access to tools previously reserved for government spy agencies. Which would mean that at any given moment, anyone could be watching anyone else.
The images keep getting clearer
Commercial satellite imagery is currently in a sweet spot: powerful enough to see a car, but not enough to tell the make and model; collected frequently enough for a farmer to keep tabs on crops’ health, but not so often that people could track the comings and goings of a neighbor. This anonymity is deliberate. US federal regulations limit images taken by commercial satellites to a resolution of 25 centimeters, or about the length of a man’s shoe….(More)”.
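For a sense of why commercial imagery hovers near that 25-centimeter mark, the physical floor on resolution can be estimated from the Rayleigh diffraction criterion. The numbers below are illustrative assumptions (green light, a roughly meter-class aperture in low Earth orbit), not figures from the article.

```python
def diffraction_limited_gsd(wavelength_m: float,
                            altitude_m: float,
                            aperture_m: float) -> float:
    """Rayleigh-criterion ground resolution for an ideal telescope."""
    return 1.22 * wavelength_m * altitude_m / aperture_m

# Assumed values: 550 nm green light, ~500 km orbit, 1.1 m aperture
gsd = diffraction_limited_gsd(550e-9, 500e3, 1.1)
print(f"{gsd:.2f} m")  # roughly 0.3 m -- on the order of the 25 cm limit
```

In other words, physics and the regulatory cap currently sit close together; sharper imagery requires bigger optics or lower orbits, which is why the regulation, not just engineering, matters going forward.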
How Ideas and Institutions Shape the Politics of Public Policy
Book by Daniel Béland: “…provides a critical review of existing literature on the role of ideas and institutions in the politics of public policy with the aim of contributing to the study of the politics of public policy. Because most policy scholars deal with the role of ideas or institutions in their research, such a critical review should help them improve their knowledge of crucial analytical issues in policy and political analysis. The following discussion brings together insights from both the policy studies literature and new institutionalism in sociology and political science, and stresses the explanatory role of ideas and institutions….(More)”.
Information, Technology and Control in a Changing World: Understanding Power Structures in the 21st Century
Book edited by Blayne Haggart, Kathryn Henne, and Natasha Tusikov: “This book explores the interconnected ways in which the control of knowledge has become central to the exercise of political, economic, and social power. Building on the work of International Political Economy scholar Susan Strange, this multidisciplinary volume features experts from political science, anthropology, law, criminology, women’s and gender studies, and Science and Technology Studies, who consider how the control of knowledge is shaping our everyday lives. From “weaponised copyright” as a censorship tool, to the battle over control of the internet’s “guts,” to the effects of state surveillance at the Mexico–U.S. border, this book offers a coherent way to understand the nature of power in the twenty-first century…(More)”.
We Need a Data-Rich Picture of What’s Killing the Planet
Clive Thompson at Wired: “…Marine litter isn’t the only hazard whose contours we can’t fully see. The United Nations has 93 indicators to measure the environmental dimensions of “sustainable development,” and amazingly, the UN found that we have little to no data on 68 percent of them—like how rapidly land is being degraded, the rate of ocean acidification, or the trade in poached wildlife. Sometimes this is because we haven’t collected it; in other cases some data exists but hasn’t been shared globally, or it’s in a myriad of incompatible formats. No matter what, we’re flying blind. “And you can’t manage something if you can’t measure it,” says David Jensen, the UN’s head of environmental peacebuilding.
In other words, if we’re going to help the planet heal and adapt, we need a data revolution. We need to build a “digital ecosystem for the environment,” as Jensen puts it.
The good news is that we’ve got the tools. If there’s one thing tech excels at (for good and ill), it’s surveillance, right? We live in a world filled with cameras and pocket computers, titanic cloud computing, and the eerily sharp insights of machine learning. And this stuff can be used for something truly worthwhile: studying the planet.
There are already some remarkable cases of tech helping to break through the fog. Consider Global Fishing Watch, a nonprofit that tracks the world’s fishing vessels, looking for overfishing. They use everything from GPS-like signals emitted by ships to satellite infrared imaging of ship lighting, plugged into neural networks. (It’s massive, cloud-scale data: over 60 million data points per day, making the AI more than 90 percent accurate at classifying what type of fishing activity a boat is engaged in.)
“If a vessel is spending its time in an area that has little tuna and a lot of sharks, that’s questionable,” says Brian Sullivan, cofounder of the project and a senior program manager at Google Earth Outreach. Crucially, Global Fishing Watch makes its data open to anyone—so now the National Geographic Society is using it to lobby for new marine preserves, and governments and nonprofits use it to target illicit fishing.
If we want better environmental data, we’ll need for-profit companies with the expertise and high-end sensors to pitch in too. Planet, a firm with an array of 140 satellites, takes daily snapshots of the entire Earth. Customers like insurance and financial firms love that sort of data. (It helps them understand weather and climate risk.) But Planet also offers it to services like Global Forest Watch, which maps deforestation and makes the information available to anyone (like activists who help bust illegal loggers). Meanwhile, Google’s skill in cloud-based data crunching helps illuminate the state of surface water: Google digitized 30 years of measurements from around the globe—extracting some from ancient magnetic tapes—then created an easy-to-use online tool that lets resource-poor countries figure out where their water needs protecting….(More)”.
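Global Fishing Watch's actual system is a cloud-scale neural network over tens of millions of daily data points. As a loose, hypothetical sketch of the kind of signal it starts from, vessel speed alone already hints at activity type (the thresholds below are invented for illustration, not the project's real features or model):

```python
from statistics import mean

def classify_activity(speeds_knots: list[float]) -> str:
    # Crude heuristic, NOT Global Fishing Watch's classifier:
    # trawlers tend to move slowly and steadily while fishing,
    # while transiting vessels hold higher speeds.
    avg = mean(speeds_knots)
    if avg < 1.0:
        return "anchored"
    if avg < 5.0:
        return "possibly fishing"
    return "transiting"

print(classify_activity([3.2, 3.5, 3.1, 3.4]))  # possibly fishing
```

A real pipeline would combine such track features with heading changes, time of day, and satellite imagery before a trained model makes the call.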
Open Urban Data and the Sustainable Development Goals
Conference Paper by Christine Meschede and Tobias Siebenlist: “Since the adoption of the United Nations’ Sustainable Development Goals (SDGs) in 2015 – an ambitious agenda to end poverty, combat environmental threats and ensure prosperity for everyone – some effort has been made regarding the adequate measuring of the progress on its targets. As the crucial point is the availability of sufficient, comparable information, open data can play a key role. The coverage of open data, i.e., data that is machine-readable, freely available and reusable for everyone, is assessed by several measurement tools. We propose the use of open governmental data to make the achievement of SDGs easy and transparent to measure. For this purpose, a mapping of the open data categories to the SDGs is presented. Further, we argue that the SDGs need to be tackled in particular at the city level. For analyzing the current applicability of open data for measuring progress on the SDGs, we provide a small-scale case study on German open data portals and the embedded data categories and datasets. The results suggest that further standardization is needed in order to be able to use open data for comparing cities and their progress towards the SDGs….(More)”.
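The paper's core move, mapping open data portal categories onto the SDGs they could help measure, can be sketched as a simple lookup. The categories and SDG assignments below are hypothetical examples, not the authors' actual mapping.

```python
# Hypothetical mapping of portal categories to SDG numbers
# (illustrative only; the paper develops its own scheme).
CATEGORY_TO_SDGS = {
    "environment": [6, 13, 14, 15],  # water, climate, oceans, land
    "transport": [9, 11],            # infrastructure, sustainable cities
    "health": [3],
    "education": [4],
}

def sdgs_covered(portal_categories: list[str]) -> list[int]:
    """Which SDGs a city's open data portal could help measure."""
    covered = set()
    for category in portal_categories:
        covered.update(CATEGORY_TO_SDGS.get(category, []))
    return sorted(covered)

print(sdgs_covered(["environment", "transport"]))  # [6, 9, 11, 13, 14, 15]
```

Run across many city portals, such a mapping would expose exactly the standardization gap the authors find: two cities labeling the same datasets differently appear to cover different goals.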
The Public Domain
Article by Ilanah Simon Fhima: “…explores whether it is possible to identify and delineate the public domain in intellectual property law.
The article begins with a review of existing IP scholarship on the public domain. Identifying that the prevailing model of the public domain relies upon analogies to the commons, it questions how well a model based upon medieval real property holdings can be expected to capture the nuances of intangible property in the internet age. Additionally, academic focus is predominantly directed at copyright law at the expense of other fields of intellectual endeavour, and adopts a US-centric viewpoint which may not be reflective of the different settlement reached in the EU-influenced UK. The article then explores whether an alternative rhetoric in ‘no property’ – tangible objects which cannot be propertised – or a potential analogy with public rights of access to land might provide more suitable alternatives.
Ultimately, the article concludes that it is impossible to draw a complete map of the public domain in terms of what should not be propertised. Instead, it advocates for a positive conception of the public domain, based on individual uses that should always remain free, and the general public interests underlying those uses. This more flexible approach allows the law to evolve in response to specific changes in technology and social conditions, while at the same time maintaining a focus on core non-negotiable freedoms. It also considers whether the same methodology can be applied to tangible property….(More)”.
Which Countries Have More Open Governments? Assessing Structural Determinants of Openness
Paper by Sabina Schnell and Suyeon Jo: “An increasing number of countries are adopting open government reforms, driven, in part, by the Open Government Partnership (OGP), a global effort dedicated to advancing such initiatives. Yet, there is still wide variation in openness across countries. We investigate the political, administrative, and civic factors that explain this variation, using countries’ fulfillment of OGP eligibility criteria as a proxy for minimum standards of openness. We find that countries with strong constraints on the executive and high levels of citizen education have governments that are more open. A dense network of civil society organizations is associated with more budget transparency and higher civil liberties, but not with access to information or asset disclosure laws. The results suggest that if the value of openness is to be translated in practice, it is not enough to have capable bureaucracies—countries also need informed citizens and strong oversight of executive agencies….(More)”.