Tech groups cannot be allowed to hide from scrutiny


Marietje Schaake at the Financial Times: “Technology companies have governments over a barrel. Whether they are maximising traffic flow efficiency, matching pupils with their school preferences, or trying to anticipate drought based on satellite and soil data, most governments heavily rely on critical infrastructure and artificial intelligence developed by the private sector. This growing dependence has profound implications for democracy.

An unprecedented information asymmetry is growing between companies and governments. We can see this in the long-running investigation into interference in the 2016 US presidential elections. Companies build voter registries, voting machines and tallying tools, while social media companies sell precisely targeted advertisements using information gleaned by linking data on friends, interests, location, shopping and search.

This has big privacy and competition implications, yet oversight is minimal. Governments, researchers and citizens risk being blindsided by the machine room that powers our lives and vital aspects of our democracies. Governments and companies have fundamentally different incentives on transparency and accountability.

While openness is the default and secrecy the exception for democratic governments, companies resist providing transparency about their algorithms and business models. Many of them actively prevent accountability, citing rules that protect trade secrets.

We must revisit these protections when they shield companies from oversight. There is a place for protecting proprietary information from commercial competitors, but the scope and context need to be clarified and balanced when they have an impact on democracy and the rule of law.

Regulators must act to ensure that those designing and running algorithmic processes do not abuse trade secret protections. Tech groups also use the EU’s General Data Protection Regulation to deny access to company information. Although the regulation was enacted to protect citizens against the mishandling of personal data, it is now being wielded cynically to deny scientists access to data sets for research. The European Data Protection Supervisor has intervened, but problems could recur. To mitigate concerns about the power of AI, provider companies routinely promise that the applications will be understandable, explainable, accountable, reliable, contestable, fair and — don’t forget — ethical.

Yet there is no way to test these subjective notions without access to the underlying data and information. Without clear benchmarks and information to match, proper scrutiny of the way vital data is processed and used will be impossible….(More)”.

How digital sleuths unravelled the mystery of Iran’s plane crash


Chris Stokel-Walker at Wired: “The video shows a faint glow in the distance, zig-zagging like a piece of paper caught in an updraft, slowly meandering towards the horizon. Then there’s a bright flash and the trees in the foreground are thrown into shadow as Ukraine International Airlines flight PS752 hits the ground early on the morning of January 8, killing all 176 people on board.

At first, it seemed like an accident – engine failure was fingered as the cause – until the first video showing the plane seemingly on fire as it weaved to the ground surfaced. United States officials started to investigate, and a more complicated picture emerged. It appeared that the plane had been hit by a missile, corroborated by a second video that appears to show the moment the missile ploughs into the Boeing 737-800. While military and intelligence officials at governments around the world were conducting their inquiries in secret, a team of investigators were using open-source intelligence (OSINT) techniques to piece together the puzzle of flight PS752.

It’s not unusual nowadays for OSINT to lead the way in decoding key news events. When Sergei Skripal was poisoned, Bellingcat, an open-source intelligence website, tracked and identified his killers as they traipsed across London and Salisbury. They delved into military records to blow the cover of agents sent to kill. And in the days after the Ukraine Airlines plane crashed into the ground outside Tehran, Bellingcat and The New York Times have blown a hole in the supposition that the downing of the aircraft was an engine failure. The pressure – and the weight of public evidence – compelled Iranian officials to admit overnight on January 10 that the country had shot down the plane “in error”.

So how do they do it? “You can think of OSINT as a puzzle. To get the complete picture, you need to find the missing pieces and put everything together,” says Loránd Bodó, an OSINT analyst at Tech Against Terrorism, a campaign group. The team at Bellingcat and other open-source investigators pore over publicly available material. Thanks to our propensity to reach for our cameraphones at the sight of any newsworthy incident, video and photos are often available, posted to social media in the immediate aftermath of events. (The person who shot and uploaded the second video in this incident, of the missile appearing to hit the Boeing plane, was a perfect example: they grabbed their phone after they heard “some sort of shot fired”.) “Open source investigations essentially involve the collection, preservation, verification, and analysis of evidence that is available in the public domain to build a picture of what happened,” says Yvonne McDermott Rees, a lecturer at Swansea University….(More)”.

Technology Can't Fix Algorithmic Injustice


Annette Zimmermann, Elena Di Rosa and Hochan Kim at Boston Review: “A great deal of recent public debate about artificial intelligence has been driven by apocalyptic visions of the future. Humanity, we are told, is engaged in an existential struggle against its own creation. Such worries are fueled in large part by tech industry leaders and futurists, who anticipate systems so sophisticated that they can perform general tasks and operate autonomously, without human control. Stephen Hawking, Elon Musk, and Bill Gates have all publicly expressed their concerns about the advent of this kind of “strong” (or “general”) AI—and the associated existential risk that it may pose for humanity. In Hawking’s words, the development of strong AI “could spell the end of the human race.”

These are legitimate long-term worries. But they are not all we have to worry about, and placing them center stage distracts from ethical questions that AI is raising here and now. Some contend that strong AI may be only decades away, but this focus obscures the reality that “weak” (or “narrow”) AI is already reshaping existing social and political institutions. Algorithmic decision making and decision support systems are currently being deployed in many high-stakes domains, from criminal justice, law enforcement, and employment decisions to credit scoring, school assignment mechanisms, health care, and public benefits eligibility assessments. Never mind the far-off specter of doomsday; AI is already here, working behind the scenes of many of our social systems.

What responsibilities and obligations do we bear for AI’s social consequences in the present—not just in the distant future? To answer this question, we must resist the learned helplessness that has come to see AI development as inevitable. Instead, we should recognize that developing and deploying weak AI involves making consequential choices—choices that demand greater democratic oversight not just from AI developers and designers, but from all members of society….(More)”.

Paging Dr. Google: How the Tech Giant Is Laying Claim to Health Data


Wall Street Journal: “Roughly a year ago, Google offered health-data company Cerner Corp. an unusually rich proposal.

Cerner was interviewing Silicon Valley giants to pick a storage provider for 250 million health records, one of the largest collections of U.S. patient data. Google dispatched former chief executive Eric Schmidt to personally pitch Cerner over several phone calls and offered around $250 million in discounts and incentives, people familiar with the matter say. 

Google had a bigger goal in pushing for the deal than dollars and cents: a way to expand its effort to collect, analyze and aggregate health data on millions of Americans. Google representatives were vague in answering questions about how Cerner’s data would be used, making the health-care company’s executives wary, the people say. Eventually, Cerner struck a storage deal with Amazon.com Inc. instead.

The failed Cerner deal reveals an emerging challenge to Google’s move into health care: gaining the trust of health care partners and the public. So far, that has hardly slowed the search giant.

Google has struck partnerships with some of the country’s largest hospital systems and most-renowned health-care providers, many of them vast in scope and few of their details previously reported. In just a few years, the company has achieved the ability to view or analyze tens of millions of patient health records in at least three-quarters of U.S. states, according to a Wall Street Journal analysis of contractual agreements. 

In certain instances, the deals allow Google to access personally identifiable health information without the knowledge of patients or doctors. The company can review complete health records, including names, dates of birth, medications and other ailments, according to people familiar with the deals.

The prospect of tech giants’ amassing huge troves of health records has raised concerns among lawmakers, patients and doctors, who fear such intimate data could be used without individuals’ knowledge or permission, or in ways they might not anticipate. 

Google is developing a search tool, similar to its flagship search engine, in which patient information is stored, collated and analyzed by the company’s engineers, on its own servers. The portal is designed for use by doctors and nurses, and eventually perhaps patients themselves, though some Google staffers would have access sooner. 

Google executives and some health systems say that detailed data sharing has the potential to improve health outcomes. Large troves of data help fuel algorithms Google is creating to detect lung cancer, eye disease and kidney injuries. Hospital executives have long sought better electronic record systems to reduce error rates and cut down on paperwork….

Legally, the information gathered by Google can be used for purposes beyond diagnosing illnesses, under laws enacted during the dial-up era. U.S. federal privacy laws make it possible for health-care providers, with little or no input from patients, to share data with certain outside companies. That applies to partners, like Google, with significant presences outside health care. The company says its intentions in health are unconnected with its advertising business, which depends largely on data it has collected on users of its many services, including email and maps.

Medical information is perhaps the last bounty of personal data yet to be scooped up by technology companies. The health data-gathering efforts of other tech giants such as Amazon and International Business Machines Corp. face skepticism from physician and patient advocates. But Google’s push in particular has set off alarm bells in the industry, including over privacy concerns. U.S. senators, as well as health-industry executives, are questioning Google’s expansion and its potential for commercializing personal data….(More)”.

Navigation Apps Changed the Politics of Traffic


Essay by Laura Bliss: “There might not be much “weather” to speak of in Los Angeles, but there is traffic. It’s the de facto small talk upon arrival at meetings or cocktail parties, comparing journeys through the proverbial storm. And in certain ways, traffic does resemble the daily expressions of climate. It follows diurnal and seasonal patterns; it shapes, and is shaped, by local conditions. There are unexpected downpours: accidents, parades, sports events, concerts.

Once upon a time, if you were really savvy, you could steer around the thunderheads—that is, evade congestion almost entirely.

Now, everyone can do that, thanks to navigation apps like Waze, which was launched in 2009 by a startup based in suburban Tel Aviv with the aspiration to save drivers five minutes on every trip by outsmarting traffic jams. Ten years later, the navigation app’s current motto is to “eliminate traffic”—to untie the knots of urban congestion once and for all. Like Google Maps, Apple Maps, Inrix, and other smartphone-based navigation tools, its routing algorithm weaves user locations with other sources of traffic data, quickly identifying the fastest routes available at any given moment.
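Waze’s actual routing algorithm is proprietary, but the idea described above — shortest-path search over a road network whose travel times shift with live traffic — can be sketched in a few lines. The following is a minimal illustration, not the real system: the graph, node names and congestion function are invented for the example, with each edge carrying a function that returns its travel time at the moment it is entered.

```python
import heapq

def fastest_route(graph, source, dest, depart_time):
    """Quickest route when edge travel times vary with the clock.

    graph maps node -> list of (neighbor, travel_time_fn), where
    travel_time_fn(t) gives the minutes needed to traverse the edge
    if it is entered at time t (a stand-in for live traffic data).
    """
    # Frontier of (arrival_time, node, path), ordered by earliest arrival.
    frontier = [(depart_time, source, [source])]
    best = {}  # earliest known arrival time per node
    while frontier:
        t, node, path = heapq.heappop(frontier)
        if node == dest:
            return t, path
        if node in best and best[node] <= t:
            continue  # already reached this node sooner
        best[node] = t
        for neighbor, travel_time_fn in graph.get(node, []):
            heapq.heappush(frontier,
                           (t + travel_time_fn(t), neighbor, path + [neighbor]))
    return None

# Two routes from A to D: the highway (A-B-D) is fast off-peak but
# congested after time 8; the side streets (A-C-D) take a constant 20.
graph = {
    "A": [("B", lambda t: 5 if t < 8 else 30), ("C", lambda t: 10)],
    "B": [("D", lambda t: 5)],
    "C": [("D", lambda t: 10)],
}
print(fastest_route(graph, "A", "D", 0))  # off-peak: highway, arrives at 10
print(fastest_route(graph, "A", "D", 9))  # rush hour: rerouted via C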

Waze often describes itself in terms of the social goods it promotes. It likes to highlight the dedication of its active participants, who pay it forward to less-informed drivers behind them, as well as its willingness to share incident reports with city governments so that, for example, traffic engineers can rejigger stop lights or crack down on double parking. “Over the last 10 years, we’ve operated from a sense of civic responsibility within our means,” wrote Waze’s CEO and founder Noam Bardin in April 2018.

But Waze is a business, not a government agency. The goal is to be an indispensable service for its customers, and to profit from that. And it isn’t clear that those objectives align with a solution for urban congestion as a whole. This gets to the heart of the problem with any navigation app—or, for that matter, any traffic fix that prioritizes the needs of independent drivers over what’s best for the broader system. Managing traffic requires us to work together. Apps tap into our selfish desires….(More)”.

This essay is adapted from SOM Thinkers: The Future of Transportation, published by Metropolis Books.

The Case for an Institutionally Owned Knowledge Infrastructure


Article by James W. Weis, Amy Brand and Joi Ito: “Science and technology are propelled forward by the sharing of knowledge. Yet despite their vital importance in today’s innovation-driven economy, our knowledge infrastructures have failed to scale with today’s rapid pace of research and discovery.

For example, academic journals, the dominant dissemination platforms of scientific knowledge, have not been able to take advantage of the linking, transparency, dynamic communication and decentralized authority and review that the internet enables. Many other knowledge-driven sectors, from journalism to law, suffer from a similar bottleneck — caused not by a lack of technological capacity, but rather by an inability to design and implement efficient, open and trustworthy mechanisms of information dissemination.

Fortunately, growing dissatisfaction with current knowledge-sharing infrastructures has led to a more nuanced understanding of the requisite features that such platforms must provide. With such an understanding, higher education institutions around the world can begin to recapture the control and increase the utility of the knowledge they produce.

When the World Wide Web emerged in the 1990s, an era of robust scholarship based on open sharing of scientific advancements appeared inevitable. The internet — initially a research network — promised a democratization of science, universal access to the academic literature and a new form of open publishing that supported the discovery and reuse of knowledge artifacts on a global scale. Unfortunately, however, that promise was never realized. Universities, researchers and funding agencies, for the most part, failed to organize and secure the investment needed to build scalable knowledge infrastructures, and publishing corporations moved in to solidify their position as the purveyors of knowledge.

In the subsequent decade, such publishers have consolidated their hold. By controlling the most prestigious journals, they have been able to charge for access — extracting billions of dollars in subscription fees while barring much of the world from the academic literature. Indeed, some of the world’s wealthiest academic institutions are no longer able or willing to pay the subscription costs required.

Further, by controlling many of the most prestigious journals, publishers have also been able to position themselves between the creation and consumption of research, and so wield enormous power over peer review and metrics of scientific impact. Thus, they are able to significantly influence academic reputation, hirings, promotions, career progressions and, ultimately, the direction of science itself.

But signs suggest that the bright future envisioned in the early days of the internet is still within reach. Increasing awareness of, and dissatisfaction with, the many bottlenecks that the commercial monopoly on research information has imposed are stimulating new strategies for developing the future’s knowledge infrastructures. One of the most promising is the shift toward infrastructures created and supported by academic institutions, the original creators of the information being shared, and nonprofit consortia like the Collaborative Knowledge Foundation and the Center for Open Science.

Those infrastructures should fully exploit the technological capabilities of the World Wide Web to accelerate discovery, encourage more research support and better structure and transmit knowledge. By aligning academic incentives with socially beneficial outcomes, such a system could enrich the public while also amplifying the technological and societal impact of investment in research and innovation.

We’ve outlined below the three areas in which a shift to academically owned platforms would yield the highest impact.

  • Truly Open Access
  • Meaningful Impact Metrics
  • Trustworthy Peer Review….(More)”.

Icelandic Citizen Engagement Tool Offers Tips for U.S.


Zack Quaintance at Government Technology: “The world of online discourse was vastly different one decade ago. This was before foreign election meddling, before social media execs were questioned by Congress, and before fighting with cantankerous uncles became an online trope. The world was perhaps more naïve, with a wide-eyed belief in some circles that Internet forums would amplify the voiceless within democracy.

This was the world in which Róbert Bjarnason and his collaborators lived. Based in Iceland, Bjarnason and his team developed a platform in 2010 for digital democracy. It was called Shadow Parliament, and its aim was simply to connect Iceland’s people with its governmental leadership. The platform launched one morning that year, with a comments section for debate. By evening, two users were locked in a deeply personal argument.

“We just looked at each other and thought, this is not going to be too much fun,” Bjarnason recalled recently. “We had just created one more platform for people to argue on.”

Sure, the engagement level was quite high, bringing furious users back to the site repeatedly to launch vitriol, but Shadow Parliament was not fostering the helpful discourse for which it was designed. So, developers scrapped it, pulling from the wreckage lessons to inform future work.

Bjarnason and team, officially a nonprofit called Citizens Foundation, worked for roughly a year, and, eventually, a new platform called Better Reykjavik was born. Better Reykjavik had key differences, chief among them a new debate system with simple tweaks: Citizens must list arguments for and against ideas, and instead of replying to each other directly, they can only down-vote things with which they disagree. This is a design that essentially forces users to create standalone points, rather than volley combative responses at one another, threaded in the fashion of Facebook or Twitter.
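The debate design described above lends itself to a simple data model. The sketch below is an illustration of that structure, not Citizens Foundation’s code: the class and method names are invented, but they capture the two constraints the article names — every contribution must stand alone as an argument for or against an idea, and the only negative action available is a down-vote, never a threaded reply.

```python
from dataclasses import dataclass, field

@dataclass
class Argument:
    text: str
    side: str           # "for" or "against" -- every point must pick one
    down_votes: int = 0

@dataclass
class Idea:
    title: str
    arguments: list = field(default_factory=list)

    def add_argument(self, text, side):
        if side not in ("for", "against"):
            raise ValueError("arguments must be for or against the idea")
        arg = Argument(text, side)
        self.arguments.append(arg)
        return arg

    def down_vote(self, arg):
        # The only response to a point you dislike: no direct rebuttals,
        # so there is no thread for a flame war to grow in.
        arg.down_votes += 1

    def ranked(self, side):
        # Least-down-voted standalone points surface first.
        return sorted((a for a in self.arguments if a.side == side),
                      key=lambda a: a.down_votes)

idea = Idea("Car-free Sundays downtown")
idea.add_argument("Safer streets for children", "for")
con = idea.add_argument("Harder deliveries for shops", "against")
idea.down_vote(con)
print([a.text for a in idea.ranked("for")])
```

Removing the reply edge from the data model is the whole trick: users who disagree must either down-vote or write a better standalone argument.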

“With this framing of it,” Bjarnason said, “we’re not asking people to write the first comment they think of. We’re actually asking people to evaluate the idea.”

One tradeoff is that fury has proven itself to be an incredible driver of traffic, and the site loses that. But what the platform sacrifices in irate engagement, it gains in thoughtful debate. It’s essentially trading anger clicks for coherent discourse, and it’s seen tremendous success within Iceland — where some municipalities report 20 percent citizen usage — as well as throughout the international community, primarily in Europe. All told, Citizens Foundation has now built like-minded projects in 20 countries. And now, it is starting to build platforms for communities in the U.S….(More)”.

The Starving State


Article by Joseph E. Stiglitz, Todd N. Tucker, and Gabriel Zucman at Foreign Affairs: “For millennia, markets have not flourished without the help of the state. Without regulations and government support, the nineteenth-century English cloth-makers and Portuguese winemakers whom the economist David Ricardo made famous in his theory of comparative advantage would have never attained the scale necessary to drive international trade. Most economists rightly emphasize the role of the state in providing public goods and correcting market failures, but they often neglect the history of how markets came into being in the first place. The invisible hand of the market depended on the heavier hand of the state.

The state requires something simple to perform its multiple roles: revenue. It takes money to build roads and ports, to provide education for the young and health care for the sick, to finance the basic research that is the wellspring of all progress, and to staff the bureaucracies that keep societies and economies in motion. No successful market can survive without the underpinnings of a strong, functioning state.

That simple truth is being forgotten today. In the United States, total tax revenues paid to all levels of government shrank by close to four percent of national income over the last two decades, from about 32 percent in 1999 to approximately 28 percent today, a decline unique in modern history among wealthy nations. The direct consequences of this shift are clear: crumbling infrastructure, a slowing pace of innovation, a diminishing rate of growth, booming inequality, shorter life expectancy, and a sense of despair among large parts of the population. These consequences add up to something much larger: a threat to the sustainability of democracy and the global market economy….(More)”.

Philosophy Is a Public Service


Jonathon Keats at Nautilus: “…One of my primary techniques, adapted from philosophy, is to undertake large-scale thought experiments. In these experiments, I create alternative realities that provide perspectives on our own society, and provoke dialogue about who and what we want to become. Another of my techniques is to create philosophical instruments: tools and devices with which people can collectively investigate the places they inhabit.

The former technique is exemplified by Centuries of the Bristlecone, and other environmentally-calibrated clocks I’m developing in other cities, such as a timepiece modulated by the flow of rivers in Alaska, currently in planning at the Anchorage Museum.

The latter is exemplified by a project I initiated in Berlin in 2014, which I’ve now instigated in cities around the world. It’s a new kind of camera that produces a single exposure over a span of 100 years. People hide these cameras throughout their city, providing a means for the next generation to observe the decisions that citizens make about their urban environment: decisions about development and gentrification and sustainability. In a sense, these devices are intergenerational surveillance cameras. They prompt people to consider the long-term impact of their actions. They encourage people to act in ways that will change the picture to reflect what they want the next generation to see.

But the truth is that most of my projects—perhaps even the two I’ve just mentioned—combine techniques from philosophy and many other disciplines. In order to map out possible futures for society, especially while navigating the shifting terrain of climate change, the philosopher-explorer needs to be adaptable. And most likely you won’t have all the skills and tools you need. I believe that anyone can become a philosopher-explorer. The practice benefits from more practitioners. No particular abilities are needed, except a capacity for collaboration.

A year ago, I was invited by the Fraunhofer Institute for Building Physics to envision the city of the future. Through Fraunhofer’s artist-in-lab program, I had the opportunity to work with leading scientists and engineers, and to run computer simulations and physical experiments on state-of-the-art equipment in Stuttgart and Holzkirchen, Germany.

My starting point was to consider one of the most serious problems faced by cities today: sea level rise. Global sea levels are expected to increase by two-and-a-half meters by the end of the century, and as much as 15 meters in the next 300 years. With 11 percent of the world population living less than 10 meters above the current sea level, many cities will probably be submerged in the future: mega-cities including New York and Shanghai. One likely response is that people will migrate inland, seeking ever higher elevations.

The question I asked myself was this: Would it make more sense to stay put?…(More)”.

Too much information? The new challenge for decision-makers


Daniel Winter at the Financial Times: “…Concern over technology’s capacity both to shrink the world and complicate it has grown steadily since the second world war — little wonder, perhaps, when the existential threats it throws up have expanded from nuclear weapons to encompass climate change (and any consequent geoengineering), gene editing and AI as well. The financial crisis of 2008, in which poorly understood investment instruments made economies totter, has added to the unease over our ability to make sense of things.

From preoccupying cold war planners, attempts to codify best practice in sense-making have gone on to exercise (often profitably) business academics and management consultants, and now draw large audiences online.

Blogs, podcasts and YouTube channels such as Rebel Wisdom and Future Thinkers aim to arm their followers with the tools they need to understand the world, and make the right decisions. Daniel Schmachtenberger is one such voice, whose interviews on YouTube and his podcast Civilization Emerging have reached hundreds of thousands of people.

“Due to increasing technological capacity — increasing population multiplied by increasing impact per person — we’re making more and more consequential choices with worse and worse sense-making to inform those choices,” he says in one video. “Exponential tech is leading to exponential disinformation.” Strengthening individuals’ ability to handle and filter information would go a long way towards improving the “information ecology”, Mr Schmachtenberger argues. People need to get used to handling complex information and should train themselves to be less distracted. “The impulse to say, ‘hey, make it really simple so everyone can get it’ and the impulse to say ‘[let’s] help people actually make sense of the world well’ are different things,” he says. Of course, societies have long been accustomed to handling complexity. No one person can possibly memorise the entirety of US law or be an expert in every field of medicine. Libraries, databases, and professional and academic networks exist to aggregate expertise.

The increasing bombardment of data — the growing amount of evidence that can inform any course of action — pushes such systems to the limit, prompting people to offload the work to computers. Yet this only defers the problem. As AI becomes more sophisticated, its decision-making processes become more opaque. The choice as to whether to trust it — to let it run a self-driving car in a crowded town, say — still rests with us.

Far from being able to outsource all complex thinking to the cloud, Prof Guillén warns that leaders will need to be as skilled as ever at handling and critically evaluating information. It will be vital, he suggests, to build flexibility into the policymaking process.

“The feedback loop between the effects of the policy and how you need to recalibrate the policy in real time becomes so much faster and so much more unpredictable,” he says. “That’s the effect that complex policies produce.” A more piecemeal approach could better suit regulation in fast-moving fields, he argues, with shorter “bursts” of rulemaking, followed by analysis of the effects and then adjustments or additions where necessary.

Yet however adept policymakers become at dealing with a complex world, their task will at some point always resist simplification. That point is where the responsibility resides. Much as we may wish it otherwise, governance will always be as much an art as a science….(More)”.