The Technopolar Moment


Ian Bremmer at Foreign Affairs: “…States have been the primary actors in global affairs for nearly 400 years. That is starting to change, as a handful of large technology companies rival them for geopolitical influence. The aftermath of the January 6 riot serves as the latest proof that Amazon, Apple, Facebook, Google, and Twitter are no longer merely large companies; they have taken control of aspects of society, the economy, and national security that were long the exclusive preserve of the state. The same goes for Chinese technology companies, such as Alibaba, ByteDance, and Tencent. Nonstate actors are increasingly shaping geopolitics, with technology companies in the lead. And although Europe wants to play, its companies do not have the size or geopolitical influence to compete with their American and Chinese counterparts….(More)”.

We Need a New Economic Category


Article by Anne-Marie Slaughter and Hilary Cottam: “Recognizing the true value and potential of care, socially as well as economically, depends on a different understanding of what care actually is: not a service but a relationship that depends on human connection. It is the essence of what Jamie Merisotis, the president of the nonprofit Lumina Foundation, calls “human work”: the “work only people can do.” This makes it all the more essential in an age when workers face the threat of being replaced by machines.

When we use the word in an economic sense, care is a bundle of services: feeding, dressing, bathing, toileting, and assisting. Robots could perform all of those functions; in countries such as Japan, sometimes they already do. But that work is best described as caretaking, comparable to what the caretaker of a property provides by watering a garden or fixing a gate.

What transforms those services into caregiving, the support we want for ourselves and for those we love, is the existence of a relationship between the person providing care and the person being cared for. Not just any relationship, but one that is affectionate, or at least considerate and respectful. Most human beings cannot thrive without connection to others, a point underlined by the depression and declining mental capacities of many seniors who have been isolated during the pandemic….

One of us, Hilary, has worked in Britain to expand caregiving networks. In 2007 she co-designed a program called Circle, which is part social club, part concierge service. Members pay a small monthly fee, and in return get access to fun activities and practical support from members and helpers in the community. More than 10,000 people have participated, and evaluations show that members feel less lonely and more capable. The program has also reduced the money spent on formal services; Circle members are less likely, for example, to be readmitted to the hospital….

The mutual-aid societies that mushroomed into existence across the United States during the pandemic reflect the same philosophy. The core of a mutual-aid network is the principle of “solidarity not charity”: a group of community members coming together on an equal basis for the common good. These societies draw on a long tradition of “collective care” developed by African American, Indigenous, and immigrant groups as far back as the 18th century….

Care jobs help humans flourish, and, properly understood and compensated, they can power a growing sector of the economy, strengthen our society, and increase our well-being. Goods are things that people buy and own; services are functions that people pay for. Relationships require two people and a connection between them. We don’t really have an economic category for that, but we should….(More)”.

The Future is not a Solution


Essay by Laura Forlano: “The future is a particular kind of speaker,” explains communication scholar James W. Carey, “who tells us where we are going before we know it ourselves.” But in discussions about the nature of the future, the future as an experience never appears. This is because “the future is always offstage and never quite makes its entrance into history; the future is a time that never arrives but is always awaited.” Perhaps this is why, in the American context, there is a widespread tendency to “discount the present for the future,” and see the “future as a solvent” for existing social problems.

Abstract discussions of “the future” miss the mark. That is because experience changes us. Anyone who has lived through the last 18 months of the COVID-19 pandemic would surely agree. While health experts are well aware of the ongoing global risks posed by pandemics, no one—not even an algorithm—can predict exactly when, where, and how they might come to be. And yet, since spring 2020, there has been a global desire to understand precisely what is next: how to navigate uncertain futures as well as adapt to long-term changes. The pandemic, according to the writer Arundhati Roy, is “a portal, a gateway between one world and the next.”

In order to understand the choices that we are facing, it is necessary to understand the ways in which technologies and futures are often linked—socially, politically, and commercially—through their promises of a better tomorrow, one just beyond our grasp. Computer scientist Paul Dourish and anthropologist Genevieve Bell refer to these as “technovisions,” or the stories that technologists and technology companies tell about the role of computational technologies in the future. Technovisions portray technological progress as inevitable—becoming cultural mythologies and self-fulfilling prophecies. They explain that the “proximate future,” a future that is “indefinitely postponed,” is a key feature of research and practice in the field of computing that allows technology companies to “absolve themselves of the responsibilities of the present” by assuming that “certain problems will simply disappear of their own accord—questions of usability, regulation, resistance, adoption barriers, sociotechnical backlashes, and other concerns are erased.”…(More)”

Why designers should embrace ‘weird data’


Article by Mimi Ọnụọha: “My interest in missing things began with what I could see. For a long time, I have kept a small piece of paper taped to the bottom right corner of my desk. This paper comes and goes, at times becoming wrinkled, discolored by tea stains, or hidden under a stack of books. But it always serves the same purpose: listing the most eccentric datasets that I can find online.

Before the score and lyrics for the hit American musical Hamilton had been released, a group of obsessed fans created a shared document of every word in the show. This dataset made my list. In 2016, a Reddit user published a post linking to a download of the metadata of every story ever published on fanfiction.net, a popular site for fan fiction. This, too, made the list.

Other things that have graced the list: the daily count of footballs produced by the Wilson Sporting Goods football factory in Ada, Ohio (4,000 as of 2008); an estimation of the number of hot dogs eaten by Americans on the Fourth of July every year (most recently: 150 million); the locations of every public toilet in Australia (of which there are more than 17,000).

Australian academic Mitchell Whitelaw defines data as measurements extracted from the flux of the real. When we typically think of collecting data, we think of big, important things: census information, UN data about health and diseases, data mined by large companies like Google, Amazon, or Facebook….(More)”.

Reboot AI with human values


Book Review by Reema Patel of “In AI We Trust: Power, Illusion and Control of Predictive Algorithms” by Helga Nowotny (Polity, 2021): “In the 1980s, a plaque at NASA’s Johnson Space Center in Houston, Texas, declared: “In God we trust. All others must bring data.” Helga Nowotny’s latest book, In AI We Trust, is more than a play on the first phrase in this quote attributed to statistician W. Edwards Deming. It is most occupied with the second idea.

What happens, Nowotny asks, when we deploy artificial intelligence (AI) without interrogating its effectiveness, simply trusting that it ‘works’? What happens when we fail to take a data-driven approach to things that are themselves data-driven? And what about when AI is shaped and influenced by human bias? Data can be inaccurate, of poor quality or missing. And technologies are, Nowotny reminds us, “intrinsically intertwined with conscious or unconscious bias since they reflect existing inequalities and discriminatory practices in society”.

Nowotny, a founding member and former president of the European Research Council, has a track record of trenchant thought on how society should handle innovation. Here, she offers a compelling analysis of the risks and challenges of the AI systems that pervade our lives. She makes a strong case for digital humanism: “Human values and perspectives ought to be the starting point” for the design of systems that “claim to serve humanity”….(More)”.

How to Fix Social Media


Essay by Nicholas Carr: “Arguments over whether and how to control the information distributed through social media go to the heart of America’s democratic ideals.

It’s a mistake, though, to assume that technological changes, even profound ones, render history irrelevant. The arrival of broadcast media at the start of the last century set off an information revolution just as tumultuous as the one we are going through today, and the way legislators, judges, and the public responded to the earlier upheaval can illuminate our current situation. Particularly pertinent are the distinctions between different forms of communication that informed the Supreme Court’s decision in the Carlin case — and that had guided legal and regulatory policy-making throughout the formative years of the mass media era. Digitization has blurred those distinctions at a technical level — all forms of communication can now be transmitted through a single computer network — but it has not erased them.

By once again making such distinctions, particularly between personal speech and public speech, we have an opportunity to break out of our current ideological bind and create a democratic framework for governing social media that is consistent with the country’s values and traditions….(More)”.

Bring American cities into the 21st century by funding urban innovation


Article by Dan Doctoroff and Richard Florida: “The U.S. is on the verge of the fourth revolution in urban technology. Where railroads, the electric grid, and the automobile defined previous eras, today, strategies that integrate new technologies into our cities can unlock striking possibilities.

Our buildings can be dramatically more sustainable, adaptable, and affordable. Energy systems and physical infrastructure can fulfill the promise of “climate-positive” development. Secure digital infrastructure can connect people and improve services while safeguarding privacy. We can deploy mobility solutions that regulate the flow of people and vehicles in real time, ease traffic, and cut carbon emissions. Innovative social infrastructure can enable new service models to build truly inclusive communities. 

Congress and the administration are currently negotiating a reconciliation package that is intended to put the U.S. on a path to a sustainable and equitable future. However, this mission will not succeed without meaningful investments in technical solutions that recognize the frontline role of cities and urban counties in so many national priorities.

U.S. cities are still built, connected, powered, heated, and run much as they have been for the past 75 years. Cities continue to generally rely on “dumb” infrastructure, such as the classic traffic light, which can direct traffic and do little else. 

When Detroit deployed the first red-yellow-green automatic traffic light in the 1920s, it pioneered state-of-the-art traffic management. Soon, there was a traffic light at every major intersection in America, and it has remained an icon of urban technology ever since. Relying on 100-year-old technology isn’t all that unusual in our cities. If you look closely at any American city, you will see it’s rather the rule. While our policy needs and technical capabilities have changed dramatically, the urban systems U.S. cities rely on have remained essentially frozen in time since the Second World War.  

We must leverage today’s technology and use artificial intelligence, machine learning, data analytics, connected infrastructure, cloud computing, and automation to run our cities. That is why we have come together to help forge a new initiative, the Coalition for Urban Innovation, to reimagine urban infrastructure for the future. Consisting of leading urban thinkers, businesses, and nonprofits, the coalition is calling on Congress and the administration to seize this generational opportunity to finally unlock the potential of cities as powerful levers for tackling climate change, promoting inclusion, and otherwise addressing our thorniest challenges…(More)”.

Americans Need a Bill of Rights for an AI-Powered World


Article by Eric Lander and Alondra Nelson: “…Soon after ratifying our Constitution, Americans adopted a Bill of Rights to guard against the powerful government we had just created—enumerating guarantees such as freedom of expression and assembly, rights to due process and fair trials, and protection against unreasonable search and seizure. Throughout our history we have had to reinterpret, reaffirm, and periodically expand these rights. In the 21st century, we need a “bill of rights” to guard against the powerful technologies we have created.

Our country should clarify the rights and freedoms we expect data-driven technologies to respect. What exactly those are will require discussion, but here are some possibilities: your right to know when and how AI is influencing a decision that affects your civil rights and civil liberties; your freedom from being subjected to AI that hasn’t been carefully audited to ensure that it is accurate, unbiased, and trained on sufficiently representative data sets; your freedom from pervasive or discriminatory surveillance and monitoring in your home, community, and workplace; and your right to meaningful recourse if the use of an algorithm harms you. 

Of course, enumerating the rights is just a first step. What might we do to protect them? Possibilities include the federal government refusing to buy software or technology products that fail to respect these rights, requiring federal contractors to use technologies that adhere to this “bill of rights,” or adopting new laws and regulations to fill gaps. States might choose to adopt similar practices….(More)”.

Using location data responsibly in cities and local government


Article by Ben Hawes: “City and local governments increasingly recognise the power of location data to help them deliver more and better services, exactly where and when they are needed. The use of this data is going to grow, with more pressure to manage resources and emerging challenges including responding to extreme weather events and other climate impacts.

But using location data to target and manage local services comes with risks to the equitable delivery of services, privacy and accountability. To make the best use of these growing data resources, city leaders and their teams need to understand those risks and address them, and to be able to explain their uses of data to citizens.

The Locus Charter, launched earlier this year, is a set of common principles to promote responsible practice when using location data. The Charter could be very valuable to local governments, to help them navigate the challenges and realise the rewards offered by data about the places they manage….

Compared to private companies, local public bodies already have special responsibilities to ensure transparency and fairness. New sources of data can help, but can also generate new questions. Local governments have generally been able to improve services as they learned more about the people they served. Now they must manage the risks of knowing too much about people, and acting intrusively. They can also risk distorting service provision because their data about people in places is uneven or unrepresentative.

Many city and local governments fully recognise that data-driven delivery comes with risks, and are developing specific local data ethics frameworks to guide their work. Some of these, like Kansas City’s, are specifically aimed at managing data privacy. Others cover broader uses of data, like Greater Manchester’s Declaration for Intelligent and Responsible Data Practice. Another example is DTPR (Digital Transparency in the Public Realm), an open-source communication standard that helps people understand how data is being used in public places.

London is engaging citizens on an Emerging Technology Charter, to explore new and ethically charged questions around data. The GovLab supports an AI Localism repository of actions taken by local decision-makers to address the use of AI within a city or community. The EU SHERPA programme (Shaping the Ethical Dimensions of Smart Information Systems) includes a smart cities strand, and has published a case study on the Ethics of Using Smart City AI and Big Data.

Smart city applications make it possible to collect data in many ways and for many purposes, but the technologies cannot answer questions about what is appropriate. In The Smart Enough City: Putting Technology in its Place to Reclaim Our Urban Future (2019), author Ben Green describes examples of cities that have failed, and others that have succeeded, in judging which smart applications should be used.

Attention to what constitutes ethical practice with location data can give additional help to leaders making that kind of judgement….(More)”

Licensure as Data Governance


Essay by Frank Pasquale: “…A licensure regime for data and the AI it powers would enable citizens to democratically shape data’s scope and proper use, rather than resigning ourselves to being increasingly influenced and shaped by forces beyond our control. To ground the case for more ex ante regulation, Part I describes the expanding scope of data collection, analysis, and use, and the threats that scope poses to data subjects. Part II critiques consent-based models of data protection, while Part III examines the substantive foundation of licensure models. Part IV addresses a key challenge to my approach: the free expression concerns raised by the licensure of large-scale personal data collection, analysis, and use. Part V concludes with reflections on the opportunities created by data licensure frameworks and potential limitations upon them….(More)”.