Technology and satellite companies open up a world of data


Gabriel Popkin at Nature: “In the past few years, technology and satellite companies’ offerings to scientists have increased dramatically. Thousands of researchers now use high-resolution data from commercial satellites for their work. Thousands more use cloud-computing resources provided by big Internet companies to crunch data sets that would overwhelm most university computing clusters. Researchers use the new capabilities to track and visualize forest and coral-reef loss; monitor farm crops to boost yields; and predict glacier melt and disease outbreaks. Often, they are analysing much larger areas than has ever been possible — sometimes even encompassing the entire globe. Such studies are landing in leading journals and grabbing media attention.

Commercial data and cloud computing are not panaceas for all research questions. NASA and the European Space Agency carefully calibrate the spectral quality of their imagers and test them with particular types of scientific analysis in mind, whereas the aim of many commercial satellites is to take good-quality, high-resolution pictures for governments and private customers. And no company can compete with Landsat’s free, publicly available, 46-year archive of images of Earth’s surface. For commercial data, scientists must often request images of specific regions taken at specific times, and agree not to publish raw data. Some companies reserve cloud-computing assets for researchers with aligned interests such as artificial intelligence or geospatial-data analysis. And although companies publicly make some funding and other resources available for scientists, getting access to commercial data and resources often requires personal connections. Still, by choosing the right data sources and partners, scientists can explore new approaches to research problems.

Mapping poverty

Joshua Blumenstock, an information scientist at the University of California, Berkeley (UCB), is always on the hunt for data he can use to map wealth and poverty, especially in countries that do not conduct regular censuses. “If you’re trying to design policy or do anything to improve living conditions, you generally need data to figure out where to go, to figure out who to help, even to figure out if the things you’re doing are making a difference.”

In a 2015 study, he used records from mobile-phone companies to map Rwanda’s wealth distribution (J. Blumenstock et al. Science 350, 1073–1076; 2015). But to track wealth distribution worldwide, patching together data-sharing agreements with hundreds of these companies would have been impractical. Another potential information source — high-resolution commercial satellite imagery — could have cost him upwards of US$10,000 for data from just one country….
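
The study’s basic recipe, fitting a supervised model that maps call-detail-record features to survey-measured wealth and then predicting for every subscriber, is easy to sketch. Below is a minimal sketch on synthetic data; the feature names and model choice are illustrative assumptions, not the paper’s actual pipeline:

```python
# Minimal sketch of phone-record wealth prediction, in the spirit of
# Blumenstock et al. (2015). Features and data here are synthetic/hypothetical.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subscribers = 1000

# Hypothetical per-subscriber features derived from call-detail records:
# e.g., call volume, share of international calls, number of distinct contacts.
X = rng.normal(size=(n_subscribers, 3))

# Synthetic stand-in for the wealth index that household surveys would provide.
y = 0.6 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(scale=0.5, size=n_subscribers)

# Fit a regularized linear model and check out-of-sample predictive accuracy.
model = Ridge(alpha=1.0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f}")

# Predictions for the full subscriber base can then be aggregated by district
# to map wealth in regions the survey never reached.
```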

Use of commercial images can also be restricted. Scientists are free to share or publish most government data or data they have collected themselves. But they are typically limited to publishing only the results of studies of commercial data, and at most a limited number of illustrative images.

Many researchers are moving towards a hybrid approach, combining public and commercial data, and running analyses locally or in the cloud, depending on need. Weiss still uses his tried-and-tested ArcGIS software from Esri for studies of small regions, and jumps to Earth Engine for global analyses.
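
On the cloud side of that hybrid workflow, Earth Engine’s Python API keeps the imagery server-side and returns only reduced results. A minimal sketch, assuming an authenticated earthengine-api installation; the collection ID and band names follow Landsat 8 Collection 2 conventions and may need adjusting:

```python
# Minimal sketch of a server-side Earth Engine analysis. Assumes the
# earthengine-api package is installed and the account is authenticated.
import ee

ee.Initialize()

# A small region of interest (lon/lat rectangle); swap in any geometry.
region = ee.Geometry.Rectangle([-122.6, 37.6, -122.3, 37.9])

# One year of Landsat 8 surface-reflectance imagery, cloud-filtered.
collection = (
    ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
    .filterDate("2017-01-01", "2017-12-31")
    .filterBounds(region)
    .filter(ee.Filter.lt("CLOUD_COVER", 20))
)

# Compute NDVI per image, then take the annual median composite.
def add_ndvi(image):
    return image.normalizedDifference(["SR_B5", "SR_B4"]).rename("NDVI")

ndvi = collection.map(add_ndvi).median()

# Reduce to a single regional mean; only this small number leaves the cloud.
mean_ndvi = ndvi.reduceRegion(
    reducer=ee.Reducer.mean(), geometry=region, scale=30
).get("NDVI")
print("Mean annual NDVI:", mean_ndvi.getInfo())
```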

The new offerings herald a shift from an era when scientists had to spend much of their time gathering and preparing data to one in which they’re thinking about how to use them. “Data isn’t an issue any more,” says Roy. “The next generation is going to be about what kinds of questions are we going to be able to ask?”…(More)”.

Bonding with Your Algorithm


Conversation with Nicolas Berggruen at the Edge: “The relationship between parents and children is the most important relationship. It gets more complicated in this case because, beyond the children being our natural children, we can influence them even beyond. We can influence them biologically, and we can use artificial intelligence as a new tool. I’m not a scientist or a technologist whatsoever, but the tools of artificial intelligence, in theory, are algorithm- or computer-based. In reality, I would argue that even an algorithm is biological because it comes from somewhere. It doesn’t come from itself. If it’s related to us as creators or as the ones who are, let’s say, enabling the algorithms, well, we’re the parents.

Who are those children that we are creating? What do we want them to be like as part of the earth, compared to us as a species and, frankly, compared to us as parents? They are our children. We are the parents. How will they treat us as parents? How do we treat our own parents? How do we treat our children? We have to think of these in the exact same way. Separating technology and humans the way we often think about these issues is almost wrong. If it comes from us, it’s the same thing. We have a responsibility. We have the power and the imagination to shape this future generation. It’s exciting, but let’s just make sure that they view us as their parents. If they view us as their parents, we will have a connection….(More)”

New Technologies Won’t Reduce Scarcity, but Here’s Something That Might


Vasilis Kostakis and Andreas Roos at the Harvard Business Review: “In a book titled Why Can’t We All Just Get Along?, MIT scientists Henry Lieberman and Christopher Fry discuss why we have wars, mass poverty, and other social ills. They argue that we cannot cooperate with each other to solve our major problems because our institutions and businesses are saturated with a competitive spirit. But Lieberman and Fry have some good news: modern technology can address the root of the problem. They believe that we compete when there is scarcity, and that recent technological advances, such as 3D printing and artificial intelligence, will end widespread scarcity. Thus, a post-scarcity world, premised on cooperation, would emerge.

But can we really end scarcity?

We believe that the post-scarcity vision of the future is problematic because it reflects an understanding of technology and the economy that could worsen the problems it seeks to address. This is the bad news. Here’s why:

New technologies come to consumers as finished products that can be exchanged for money. What consumers often don’t understand is that the monetary exchange hides the fact that many of these technologies exist at the expense of other humans and local environments elsewhere in the global economy….

The good news is that there are alternatives. The wide availability of networked computers has allowed new community-driven and open-source business models to emerge. For example, consider Wikipedia, a free and open encyclopedia that has displaced the Encyclopedia Britannica and Microsoft Encarta. Wikipedia is produced and maintained by a community of dispersed enthusiasts driven primarily by motives other than profit maximization. Furthermore, in the realm of software, see the case of GNU/Linux, on which the top 500 supercomputers and the majority of websites run, or the example of the Apache Web Server, the leading software in the web-server market. Wikipedia, Apache, and GNU/Linux demonstrate how non-coercive cooperation around globally shared resources (i.e., a commons) can produce artifacts as innovative as, if not more innovative than, those produced by industrial capitalism.

In the same way, the emergence of networked micro-factories is giving rise to new open-source business models in the realm of design and manufacturing. Such spaces can be makerspaces, fab labs, or other co-working spaces, equipped with local manufacturing technologies such as 3D printers and CNC machines or traditional low-tech tools and crafts. Moreover, such spaces often offer collaborative environments where people can meet in person, socialize, and co-create.

This is the context in which a new mode of production is emerging. This mode builds on the confluence of the digital commons of knowledge, software, and design with local manufacturing technologies. It can be codified as “design global, manufacture local,” following the logic that what is light (knowledge, design) becomes global, while what is heavy (machinery) is local, and ideally shared. Design global, manufacture local (DGML) demonstrates how a technology project can leverage the digital commons to engage the global community in its development, celebrating new forms of cooperation. Unlike large-scale industrial manufacturing, the DGML model emphasizes applications that are small-scale, decentralized, resilient, and locally controlled. DGML could recognize the scarcities posed by finite resources and organize material activities accordingly. First, it minimizes the need to ship materials over long distances, because a considerable part of the manufacturing takes place locally. Local manufacturing also makes maintenance easier and encourages manufacturers to design products to last as long as possible. Finally, DGML optimizes the sharing of knowledge and design, as there are no patent costs to pay for….(More)”

The Slippery Math of Causation


Pradeep Mutalik for Quanta Magazine: “You often hear the admonition “correlation does not imply causation.” But what exactly is causation? Unlike correlation, which has a specific mathematical meaning, causation is a slippery concept that has been debated by philosophers for millennia. It seems to get conflated with our intuitions or preconceived notions about what it means to cause something to happen. One common-sense definition might be to say that causation is what connects one prior process or agent — the cause — with another process or state — the effect. This seems reasonable, except that it is useful only when the cause is a single factor, and the connection is clear. But reality is rarely so simple.
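
For contrast with causation’s slipperiness, the “specific mathematical meaning” of correlation is typically the Pearson coefficient of two paired samples:

```latex
% Pearson correlation coefficient for paired samples (x_i, y_i):
r_{xy} = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}
              {\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\,
               \sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}
```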

Although we tend to credit or blame things on a single major cause, in nature and in science there are almost always multiple factors that have to be exactly right for an event to take place. For example, we might attribute a forest fire to a carelessly thrown cigarette butt, but what about the grassy tract leading to the forest, the dryness of the vegetation, the direction of the wind and so on? All of these factors had to be exactly right for the fire to start. Even though many tossed cigarette butts don’t start fires, we zero in on human actions as causes, ignoring other possibilities, such as sparks from branches rubbing together or lightning strikes, or acts of omission, such as failing to trim the grassy path short of the forest. And we tend to focus on things that can be manipulated: We overlook the direction of the wind because it is not something we can control. Our scientifically incomplete intuitive model of causality is nevertheless very useful in practice, and helps us execute remedial actions when causes are clearly defined. In fact, artificial intelligence pioneer Judea Pearl has published a new book about why it is necessary to teach cause and effect to intelligent machines.

However, clearly defined causes may not always exist. Complex, interdependent multifactorial causes arise often in nature and therefore in science. Most scientific disciplines focus on different aspects of causality in a simplified manner. Physicists may talk about causal influences being unable to propagate faster than the speed of light, while evolutionary biologists may discuss proximate and ultimate causes as mentioned in our previous puzzle on triangulation and motion sickness. But such simple situations are rare, especially in biology and the so-called “softer” sciences. In the world of genetics, the complex multifactorial nature of causality was highlighted in a recent Quanta article by Veronique Greenwood that described the intertwined effects of genes.

One well-known approach to understanding causality is to separate it into two types: necessary and sufficient….(More)”
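
The forest-fire example above makes that distinction concrete: in a toy model where the fire requires all three factors, the cigarette is a necessary cause but not a sufficient one. A deliberately simplified sketch:

```python
# Toy Boolean model of the forest-fire example: a cause can be necessary
# without being sufficient. The three-factor model is a deliberate simplification.
from itertools import product

def fire(cigarette, dry_vegetation, wind_toward_forest):
    # In this toy model the fire occurs only when all three factors line up.
    return cigarette and dry_vegetation and wind_toward_forest

other_factors = list(product([False, True], repeat=2))

# Sufficient cause: the cigarette alone would guarantee a fire. (False here:
# it also needs dry vegetation and the right wind.)
sufficient = all(fire(True, dry, wind) for dry, wind in other_factors)

# Necessary cause: no fire happens without the cigarette. (True here.)
necessary = all(not fire(False, dry, wind) for dry, wind in other_factors)

print("cigarette is sufficient:", sufficient)  # False
print("cigarette is necessary:", necessary)    # True
```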

How Citizens Can Hack EU Democracy


Stephen Boucher at Carnegie Europe: “…To connect citizens with the EU’s decisionmaking center, European politicians will need to provide ways to effectively hack this complex system. These democratic hacks need to be visible and accessible, easily and immediately implementable, viable without requiring changes to existing European treaties, and capable of having a traceable impact on policy. Many such devices could be imagined around these principles. Here are three ideas to spur debate.

Hack 1: A Citizens’ Committee for the Future in the European Parliament

The European Parliament has proposed that twenty-seven of the seventy-three seats left vacant by Brexit should be redistributed among the remaining member states. According to one concept, the other forty-six unassigned seats could be used to recruit a contingent of ordinary citizens from around the EU to examine legislation from the long-term perspective of future generations. Such a “Committee for the Future” could be given the power to draft a response to a yearly report on the future produced by the president of the European Parliament, initiate debates on important political themes of their own choosing, make submissions on future-related issues to other committees, and be consulted by members of the European Parliament (MEPs) on longer-term matters.

MEPs could decide to use these forty-six vacant seats to invite this Committee for the Future to sit, at least on a trial basis, with yearly evaluations. This arrangement would have real benefits for EU politics, acting as an antidote to the union’s existential angst and helping the EU think systemically and for the longer term on matters such as artificial intelligence, biodiversity, climate concerns, demography, mobility, and energy.

Hack 2: An EU Participatory Budget

In 1989, the city of Porto Alegre, Brazil, decided to cede control of a share of its annual budget to its citizens. This practice, known as participatory budgeting, has since spread globally: as of 2015, over 1,500 participatory budgets had been implemented across five continents. These processes generally have had a positive impact, with people proving that they take public spending matters seriously.

To replicate these experiences at the European level, the complex realities of EU budgeting would require specific features. First, participative spending probably would need to be both local and related to wider EU priorities in order to ensure that citizens see its relevance and its wider European implications. Second, significant resources would need to be allocated to help citizens come up with and promote projects. For instance, the city of Paris has ensured that each suggested project that meets the eligibility requirements has a desk officer within its administration to liaise with the idea’s promoters. It dedicates significant resources to reach out to citizens, in particular in the poorer neighborhoods of Paris, both online and face-to-face. Similar efforts would need to be deployed across Europe. And third, in order to overcome institutional complexities, the European Parliament would need to work with citizens as part of its role in negotiating the budget with the European Council.

Hack 3: An EU Collective Intelligence Forum

Many ideas have been put forward to address popular dissatisfaction with representative democracy by developing new forums such as policy labs, consensus conferences, and stakeholder facilitation groups. Yet many citizens still feel disenchanted with representative democracy, including at the EU level, where they also strongly distrust lobby groups. They need to be involved more purposefully in policy discussions.

A yearly Deliberative Poll could be run on a matter of significance, ahead of key EU summits and possibly around the European Commission president’s State of the Union address. On the model of the first EU-wide Deliberative Poll, Tomorrow’s Europe, this event would bring together in Brussels a random sample of citizens from all twenty-seven EU member states and enable them to discuss various social, economic, and foreign policy issues affecting the EU and its member states. This concept would have a number of advantages in terms of promoting democratic participation in EU affairs. By inviting a truly representative sample of citizens to deliberate on complex EU matters over a weekend, within the premises of the European Parliament, the European Parliament would be the focus of a high-profile event that would draw media attention. This would be especially beneficial if—unlike Tomorrow’s Europe—the poll were not held at arm’s length by EU policymakers, but with high-level national officials attending to witness good-quality deliberation remolding citizens’ views….(More)”.

Data Governance in the Digital Age


Centre for International Governance Innovation: “Data is being hailed as “the new oil.” The analogy seems appropriate given the growing amount of data being collected, and the advances made in its gathering, storage, manipulation and use for commercial, social and political purposes.

Big data and its application in artificial intelligence, for example, promise to transform the way we live and work — and will generate considerable wealth in the process. But data’s transformative nature also raises important questions about how the benefits are shared, about privacy, public security, openness and democracy, and about the institutions that will govern the data revolution.

The delicate interplay between these considerations means that they have to be treated jointly, and at every level of the governance process, from local communities to the international arena. This series of essays by leading scholars and practitioners, which is also published as a special report, will explore topics including the rationale for a data strategy, the role of a data strategy for Canadian industries, and policy considerations for domestic and international data governance…

RATIONALE OF A DATA STRATEGY

THE ROLE OF A DATA STRATEGY FOR CANADIAN INDUSTRIES

BALANCING PRIVACY AND COMMERCIAL VALUES

DOMESTIC POLICY FOR DATA GOVERNANCE

INTERNATIONAL POLICY CONSIDERATIONS

EPILOGUE

Public Policy in an AI Economy


NBER Working Paper by Austan Goolsbee: “This paper considers the role of policy in an AI-intensive economy (interpreting AI broadly). It emphasizes the speed of adoption of the technology for the impact on the job market and the implications for inequality across people and across places. It also discusses the challenges of enacting a Universal Basic Income as a response to widespread AI adoption; discusses pricing, privacy, and competition policy; and considers the question of whether AI could improve policymaking itself….(More)”.

How Policymakers Can Foster Algorithmic Accountability


Report by Joshua New and Daniel Castro: “Increased automation with algorithms, particularly through the use of artificial intelligence (AI), offers opportunities for the public and private sectors to complete increasingly complex tasks with a level of productivity and effectiveness far beyond that of humans, generating substantial social and economic benefits in the process. However, many believe an increased use of algorithms will lead to a host of harms, including exacerbating existing biases and inequalities, and have therefore called for new public policies, such as establishing an independent commission to regulate algorithms or requiring companies to explain publicly how their algorithms make decisions. Unfortunately, all of these proposals would lead to less AI use, thereby hindering social and economic progress.

Policymakers should reject these proposals and instead support algorithmic decision-making by promoting policies that ensure its robust development and widespread adoption. Like any new technology, there are strong incentives among both developers and adopters to improve algorithmic decision-making and ensure its applications do not contain flaws, such as bias, that reduce their effectiveness. Thus, rather than establish a master regulatory framework for all algorithms, policymakers should do what they have always done with regard to technology regulation: enact regulation only where it is required, targeting specific harms in particular application areas through dedicated regulatory bodies that are already charged with oversight of that particular sector. To accomplish this, regulators should pursue algorithmic accountability—the principle that an algorithmic system should employ a variety of controls to ensure the operator (i.e., the party responsible for deploying the algorithm) can verify it acts in accordance with its intentions, as well as identify and rectify harmful outcomes. Adopting this framework would both promote the vast benefits of algorithmic decision-making and minimize harmful outcomes, while also ensuring laws that apply to human decisions can be effectively applied to algorithmic decisions….(More)”.
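
As one illustration of such a control, an operator might routinely audit a model’s decisions for disparate impact across groups. A minimal, hypothetical sketch follows; the four-fifths threshold is a common screening heuristic borrowed from employment-selection guidance, not a rule proposed in the report:

```python
# Minimal sketch of one algorithmic-accountability control: a disparate-impact
# check on a model's approval decisions across groups. The threshold and group
# labels are illustrative assumptions.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group_label, approved_bool) pairs."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / total[g] for g in total}

def passes_four_fifths_rule(rates, threshold=0.8):
    # "Four-fifths rule": each group's selection rate should be at least 80%
    # of the highest group's rate. A screening heuristic, not a legal standard.
    best = max(rates.values())
    return all(rate / best >= threshold for rate in rates.values())

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)
print(rates)                           # approx {'A': 0.67, 'B': 0.33}
print(passes_four_fifths_rule(rates))  # False -> flag for review and remedy
```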

This is your office on AI


Article by Jeffrey Brown at a Special Issue of the Wilson Quarterly on AI: “The future has arrived and it’s your first day at your new job. You step across the threshold sporting a nervous smile and harboring visions of virtual handshakes and brain-computer interfaces. After all, this is one of those newfangled, modern offices that science-fiction writers have been dreaming up for ages. Then you bump up against something with a thud. No, it’s not one of the ubiquitous glass walls, but the harsh reality of an office that, at first glance, doesn’t appear much different from what you’re accustomed to. Your new colleagues shuffle between meetings clutching phones and laptops. A kitchenette stocked with stale donuts lurks in the background. And, by the way, you were fifteen minutes late because the commute is still hell.

So where is the fabled “office of the future”? After all, many of us have only ever fantasized about the ways in which technology – and especially artificial intelligence – might transform our working lives for the better. In fact, the AI-enabled office will usher in far more than next-generation desk supplies. It’s only over subsequent weeks that you come to appreciate how the office of the future feels, operates, and yes, senses. It also slowly dawns on you that work itself has changed and that what it means to be a worker has undergone a similar retrofit.

With AI already deployed in everything from the fight against ISIS to the hunt for exoplanets and your cat’s Alexa-enabled Friskies order, its application to the office should come as no surprise. As workers pretty much everywhere can attest, today’s office has issues: It can’t intuitively crack a window when your officemate decides to microwave leftover catfish. It seems to willfully disregard your noise, temperature, light, and workflow preferences. And it certainly doesn’t tell its designers – or your manager – what you are really thinking as you plop down in your annoyingly stiff chair to sip your morning cup of mud.

Now, you may be thinking to yourself, “These seem like trivial issues that can be worked out simply by chatting with another human being, so why do we even need AI in my office?” If so, read on. In your lifetime, companies and workers will channel AI to unlock new value – and immense competitive advantage….(More)”.

Tech Platforms and the Knowledge Problem


Frank Pasquale at American Affairs: “Friedrich von Hayek, the preeminent theorist of laissez-faire, called the “knowledge problem” an insuperable barrier to central planning. Knowledge about the price of supplies and labor, and consumers’ ability and willingness to pay, is so scattered and protean that even the wisest authorities cannot access all of it. No person knows everything about how goods and services in an economy should be priced. No central decision-maker can grasp the idiosyncratic preferences, values, and purchasing power of millions of individuals. That kind of knowledge, Hayek said, is distributed.

In an era of artificial intelligence and mass surveillance, however, the possibility of central planning has reemerged—this time in the form of massive firms. Having logged and analyzed billions of transactions, Amazon knows intimate details about all its customers and suppliers. It can carefully calibrate screen displays to herd buyers toward certain products or shopping practices, or to copy sellers with its own, cheaper, in-house offerings. Mark Zuckerberg aspires to omniscience of consumer desires, by profiling nearly everyone on Facebook, Instagram, and WhatsApp, and then leveraging that data trove to track users across the web and into the real world (via mobile usage and device fingerprinting). You don’t even have to use any of those apps to end up in Facebook/Instagram/WhatsApp files—profiles can be assigned to you. Google’s “database of intentions” is legendary, and antitrust authorities around the world have looked with increasing alarm at its ability to squeeze out rivals from search results once it gains an interest in their lines of business. Google knows not merely what consumers are searching for, but also what other businesses are searching, buying, emailing, planning—a truly unparalleled matching of data-processing capacity to raw communication flows.

Nor is this logic limited to the online context. Concentration is paying dividends for the largest banks (widely assumed to be too big to fail), and major health insurers (now squeezing and expanding the medical supply chain like an accordion). Like the digital giants, these finance and insurance firms not only act as middlemen, taking a cut of transactions, but also aspire to capitalize on the knowledge they have gained from monitoring customers and providers in order to supplant them and directly provide services and investment. If it succeeds, the CVS-Aetna merger betokens intense corporate consolidations that will see more vertical integration of insurers, providers, and a baroque series of middlemen (from pharmaceutical benefit managers to group purchasing organizations) into gargantuan health providers. A CVS doctor may eventually refer a patient to a CVS hospital for a CVS surgery, to be followed up by home health care workers employed by CVS who bring CVS pharmaceuticals—all covered by a CVS/Aetna insurance plan, which might penalize the patient for using any providers outside the CVS network. While such a panoptic firm may sound dystopian, it is a logical outgrowth of health services researchers’ enthusiasm for “integrated delivery systems,” which are supposed to provide “care coordination” and “wraparound services” more efficiently than America’s current, fragmented health care system.

The rise of powerful intermediaries like search engines and insurers may seem like the next logical step in the development of capitalism. But a growing chorus of critics questions the size and scope of leading firms in these fields. The Institute for Local Self-Reliance highlights Amazon’s manipulation of both law and contracts to accumulate unfair advantages. International antitrust authorities have taken Google down a peg, questioning the company’s aggressive use of its search engine and Android operating system to promote its own services (and demote rivals). They also question why Google and Facebook have for years been acquiring companies at a pace of more than two per month. Consumer advocates complain about manipulative advertising. Finance scholars lambaste megabanks for taking advantage of the implicit subsidies that too-big-to-fail status confers….(More)”.