No app, no entry: How the digital world is failing the non-tech-savvy


Article by Andrew Anthony: “Whatever the word is for the opposite of heartwarming, it certainly applies to the story of Ruth and Peter Jaffe. The elderly couple from Ealing, west London, made headlines last week after being charged £110 by Ryanair for printing out their tickets at Stansted airport.

Even allowing for the exorbitant cost of inkjet printer ink, 55 quid for each sheet of paper is a shockingly creative example of punitive pricing.

The Jaffes, aged 79 and 80, said they had become confused on the Ryanair website and accidentally printed out their return tickets instead of their outbound ones to Bergerac. It was the kind of error anyone could make, although octogenarians, many of whom struggle with the tech demands of digitalisation, are far more likely to make it.

But as the company explained in a characteristically charmless justification of the charge: “We regret that these passengers ignored their email reminder and failed to check-in online.”…

The shiny, bright future of full computerisation looks very much like a dystopia to someone who either doesn’t understand it or doesn’t have the means to access it. And almost by definition, the people who can’t access the digitalised world are seldom visible, because absence is not easy to see. What is apparent is that improved efficiency doesn’t necessarily lead to greater wellbeing.

From a technological and economic perspective, the case for removing railway station ticket offices is hard to refute. A public consultation process is under way by train operators, who present the proposed closures as a means of bringing “station staff closer to customers”.

The RMT union, by contrast, believes it’s a means of bringing the staff closer to unemployment and has mounted a campaign heralding the good work done by ticket offices across the network. Whatever the truth, human interaction is in danger of being undervalued in the digital landscape…(More)”.

The Urgent Need to Reimagine Data Consent


Article by Stefaan G. Verhulst, Laura Sandor & Julia Stamm: “Given the significant benefits that can arise from the use and reuse of data to tackle contemporary challenges such as migration, it is worth exploring new approaches to collect and utilize data that empower individuals and communities, granting them the ability to determine how their data can be utilized for various personal, community, and societal causes. This need is not specific to migrants alone. It applies to various regions, populations, and fields, ranging from public health and education to urban mobility. There is a pressing demand to involve communities, often already vulnerable, in establishing responsible access to their data that aligns with their expectations, while simultaneously serving the greater public good.

We believe the answer lies in a reimagining of the concept of consent. Traditionally, consent has been the tool of choice for securing agency and individual rights, but that concept, we would suggest, is no longer sufficient for today’s era of datafication. Instead, we should strive to establish a new standard of social license. Here, we’ll define what we mean by a social license and outline some of the limitations of consent (as it is typically defined and practiced today). Then we’ll describe one possible means of securing social license—through participatory decision-making…(More)”.

Should Computers Decide How Much Things Cost?


Article by Colin Horgan: “In the summer of 2012, the Wall Street Journal reported that the travel booking website Orbitz had, in some cases, been suggesting to Apple users hotel rooms that cost more per night than those it was showing to Windows users. The company found that people who used Mac computers spent as much as 30 percent more a night on hotels. It was one of the first high-profile instances where the predictive capabilities of algorithms were shown to impact consumer-facing prices.

Since then, the pool of data available to corporations about each of us (the information we’ve either volunteered or that can be inferred from our web browsing and buying histories) has expanded significantly, helping companies build ever more precise purchaser profiles. Personalized pricing is now widespread, even if many consumers are only just realizing what it is. Recently, other algorithm-driven pricing models, like Uber’s surge or Ticketmaster’s dynamic pricing for concerts, have surprised users and fans. In the past few months, dynamic pricing—which is based on factors such as quantity—has pushed up prices of some concert tickets even before they hit the resale market, including for artists like Drake and Taylor Swift. And while personalized pricing is slightly different, these examples of computer-driven pricing have spawned headlines and social media posts that reflect a growing frustration with data’s role in how prices are dictated.

The marketplace is said to be a realm of assumed fairness, dictated by the rules of competition, an objective environment where one consumer is the same as any other. But this idea is being undermined by the same opaque and confusing programmatic data profiling that’s slowly encroaching on other parts of our lives—the algorithms. The Canadian government is currently considering new consumer-protection regulations, including what to do to control algorithm-based pricing. While strict market regulation is considered by some to be a political risk, another solution may exist—not at the point of sale but at the point where our data is gathered in the first place.

In theory, pricing algorithms aren’t necessarily bad…(More)”.

The Case Against AI Everything, Everywhere, All at Once


Essay by Judy Estrin: “The very fact that the evolution of technology feels so inevitable is evidence of an act of manipulation, an authoritarian use of narrative brilliantly described by historian Timothy Snyder. He calls out the politics of inevitability: “...a sense that the future is just more of the present, … that there are no alternatives, and therefore nothing really to be done.” There is no discussion of underlying values. Facts that don’t fit the narrative are disregarded.

Here in Silicon Valley, this top-down authoritarian technique is amplified by a bottom-up culture of inevitability. An orchestrated frenzy begins when the next big thing to fuel the Valley’s economic and innovation ecosystem is heralded by companies, investors, media, and influencers.

They surround us with language coopted from common values—democratization, creativity, open, safe. In behavioral psych classes, product designers are taught to eliminate friction—removing any resistance to our acting on impulse.

The promise of short-term efficiency, convenience, and productivity lures us. Any semblance of pushback is decried as ignorance, or a threat to global competition. No one wants to be called a Luddite. Tech leaders, seeking to look concerned about the public interest, call for limited, friendly regulation, and the process moves forward until the tech is fully enmeshed in our society.

We bought into this narrative before, when social media, smartphones and cloud computing came on the scene. We didn’t question whether the only way to build community, find like-minded people, or be heard, was through one enormous “town square,” rife with behavioral manipulation, pernicious algorithmic feeds, amplification of pre-existing bias, and the pursuit of likes and follows.

It’s now obvious that it was a path towards polarization, toxicity of conversation, and societal disruption. Big Tech was the main beneficiary as industries and institutions jumped on board, accelerating their own disruption, and civic leaders were focused on how to use these new tools to grow their brands and not on helping us understand the risks.

We are at the same juncture now with AI…(More)”.

Changing Facebook’s algorithm won’t fix polarization, new study finds


Article by Naomi Nix, Carolyn Y. Johnson, and Cat Zakrzewski: “For years, regulators and activists have worried that social media companies’ algorithms were dividing the United States with politically toxic posts and conspiracies. The concern was so widespread that in 2020, Meta flung open troves of internal data for university academics to study how Facebook and Instagram would affect the upcoming presidential election.

The first results of that research show that the company’s platforms play a critical role in funneling users to partisan information with which they are likely to agree. But the results cast doubt on assumptions that the strategies Meta could use to discourage virality and engagement on its social networks would substantially affect people’s political beliefs.

“Algorithms are extremely influential in terms of what people see on the platform, and in terms of shaping their on-platform experience,” Joshua Tucker, co-director of the Center for Social Media and Politics at New York University and one of the leaders on the research project, said in an interview.

“Despite the fact that we find this big impact in people’s on-platform experience, we find very little impact in changes to people’s attitudes about politics and even people’s self-reported participation around politics.”

The first four studies, which were released on Thursday in the journals Science and Nature, are the result of a unique partnership between university researchers and Meta’s own analysts to study how social media affects political polarization and people’s understanding and opinions about news, government and democracy. The researchers, who relied on Meta for data and the ability to run experiments, analyzed those issues during the run-up to the 2020 election. The studies were peer-reviewed before publication, a standard procedure in science in which papers are sent out to other experts in the field who assess the work’s merit.

As part of the project, researchers altered the feeds of thousands of people using Facebook and Instagram in fall of 2020 to see if that could change political beliefs, knowledge or polarization by exposing them to different information than they might normally have received. The researchers generally concluded that such changes had little impact.

The collaboration, which is expected to produce over a dozen studies, also will examine data collected after the Jan. 6, 2021, attack on the U.S. Capitol, Tucker said…(More)”.

AI By the People, For the People


Article by Billy Perrigo/Karnataka: “…To create an effective English-speaking AI, it is enough to simply collect data from where it has already accumulated. But for languages like Kannada, you need to go out and find more.

This has created huge demand for datasets—collections of text or voice data—in languages spoken by some of the poorest people in the world. Part of that demand comes from tech companies seeking to build out their AI tools. Another big chunk comes from academia and governments, especially in India, where English and Hindi have long held outsize precedence in a nation of some 1.4 billion people with 22 official languages and at least 780 more indigenous ones. This rising demand means that hundreds of millions of Indians are suddenly in control of a scarce and newly valuable asset: their mother tongue.

Data work—creating or refining the raw material at the heart of AI—is not new in India. The economy that did so much to turn call centers and garment factories into engines of productivity at the end of the 20th century has quietly been doing the same with data work in the 21st. And, like its predecessors, the industry is once again dominated by labor arbitrage companies, which pay wages close to the legal minimum even as they sell data to foreign clients for a hefty mark-up. The AI data sector, worth over $2 billion globally in 2022, is projected to rise in value to $17 billion by 2030. Little of that money has flowed down to data workers in India, Kenya, and the Philippines.

These conditions may cause harms far beyond the lives of individual workers. “We’re talking about systems that are impacting our whole society, and workers who make those systems more reliable and less biased,” says Jonas Valente, an expert in digital work platforms at Oxford University’s Internet Institute. “If you have workers with basic rights who are more empowered, I believe that the outcome—the technological system—will have a better quality as well.”

In the neighboring villages of Alahalli and Chilukavadi, one Indian startup is testing a new model. Chandrika works for Karya, a nonprofit launched in 2021 in Bengaluru (formerly Bangalore) that bills itself as “the world’s first ethical data company.” Like its competitors, it sells data to big tech companies and other clients at the market rate. But instead of keeping much of that cash as profit, it covers its costs and funnels the rest toward the rural poor in India. (Karya partners with local NGOs to ensure that access to its jobs goes first to the poorest of the poor, as well as historically marginalized communities.) In addition to its $5 hourly minimum, Karya gives workers de-facto ownership of the data they create on the job, so whenever it is resold, the workers receive the proceeds on top of their past wages. It’s a model that doesn’t exist anywhere else in the industry…(More)”.

Corporate Responsibility in the Age of AI


Essay by Maria Eitel: “In the past year, a cacophony of conversations about artificial intelligence has erupted. Depending on whom you listen to, AI is either carrying us into a shiny new world of endless possibilities or propelling us toward a grim dystopia. Call them the Barbie and Oppenheimer scenarios – as attention-grabbing and different as the Hollywood blockbusters of the summer. But one conversation is getting far too little attention: the one about corporate responsibility.

I joined Nike as its first Vice President of Corporate Responsibility in 1998, landing right in the middle of the hyper-globalization era’s biggest corporate crisis: the iconic sports and fitness company had become the face of labor exploitation in developing countries. In dealing with that crisis and setting up corporate responsibility for Nike, we learned hard-earned lessons, which can now help guide our efforts to navigate the AI revolution.

There is a key difference today. Taking place in the late 1990s, the Nike drama played out relatively slowly. When it comes to AI, however, we don’t have the luxury of time. This time last year, most people had not heard about generative AI. The technology entered our collective awareness like a lightning strike in late 2022, and we have been trying to make sense of it ever since…

Our collective future now hinges on whether companies – in the privacy of their board rooms, executive meetings, and closed-door strategy sessions – decide to do what is right. Companies need a clear North Star to which they can always refer as they pursue innovation. Google had it right in its early days, when its corporate credo was, “Don’t Be Evil.” No corporation should knowingly harm people in the pursuit of profit.

It will not be enough for companies simply to say that they have hired former regulators and propose possible solutions. Companies must devise credible and effective AI action plans that answer five key questions:

  • What are the potential unanticipated consequences of AI?
  • How are you mitigating each identified risk?
  • What measures can regulators use to monitor companies’ efforts to mitigate potential dangers and hold them accountable?
  • What resources do regulators need to carry out this task?
  • How will we know that the guardrails are working?

The AI challenge needs to be treated like any other corporate sprint. Requiring companies to commit to an action plan in 90 days is reasonable and realistic. No excuses. Missed deadlines should result in painful fines. The plan doesn’t have to be perfect – and it will likely need to be adapted as we continue to learn – but committing to it is essential…(More)”.

Journalism Is a Public Good and Should Be Publicly Funded


Essay by Patrick Walters: “News deserts” have proliferated across the U.S. Half of the nation’s more than 3,140 counties now have only one newspaper—and nearly 200 of them have no paper at all. Of the publications that survive, researchers have found many are “ghosts” of their former selves.

Journalism has problems nationally: CNN announced hundreds of layoffs at the end of 2022, and National Geographic laid off the last of its staff writers this June. That same month, the Los Angeles Times cut 13 percent of its newsroom staff. But the crisis is even more acute at the local level, with jobs in local news plunging from 71,000 in 2008 to 31,000 in 2020. Closures and cutbacks often leave people without reliable sources that can provide them with what the American Press Institute has described as “the information they need to make the best possible decisions about their daily lives.”

Americans need to understand that journalism is a vital public good—one that, like roads, bridges and schools, is worthy of taxpayer support. We are already seeing the disastrous effects of otherwise allowing news to disintegrate in the free market: namely, a steady supply of misinformation, often masquerading as legitimate news, and too many communities left without a quality source of local news. Former New York Times public editor Margaret Sullivan has called this a “crisis of American democracy.”

The terms “crisis” and “collapse” have become nearly ubiquitous in the past decade when describing the state of American journalism, which has been based on a for-profit commercial model since the rise of the “penny press” in the 1830s. Now that commercial model has collapsed amid the near disappearance of print advertising. Digital ads have not come close to closing the gap because Google and other platforms have “hoovered up everything,” as Emily Bell, founding director of the Tow Center for Digital Journalism at Columbia University, told the Nieman Journalism Lab in a 2018 interview. In June the newspaper chain Gannett sued Google’s parent company, alleging it has created an advertising monopoly that has devastated the news industry.

Other journalism models—including nonprofits such as MinnPost, collaborative efforts such as Broke in Philly and citizen journalism—have had some success in fulfilling what Lewis Friedland of the University of Wisconsin–Madison called “critical community information needs” in a chapter of the 2016 book The Communication Crisis in America, and How to Fix It. Friedland classified those needs as falling into eight areas: emergencies and risks, health and welfare, education, transportation, economic opportunities, the environment, civic information and political information. Nevertheless, these models have proven incapable of fully filling the void, as shown by the dearth of quality information during the early years of the COVID pandemic. Scholar Michelle Ferrier and others have worked to bring attention to how news deserts leave many rural and urban areas “impoverished by the lack of fresh, daily local news and information,” as Ferrier wrote in a 2018 article. A recent study also found evidence that U.S. judicial districts with lower newspaper circulation were likely to see fewer public corruption prosecutions.

A growing chorus of voices is now calling for government-funded journalism, a model that many in the profession have long seen as problematic…(More)”.

Why This AI Moment May Be the Real Deal


Essay by Ari Schulman: “For many years, those in the know in the tech world have known that “artificial intelligence” is a scam. It’s been true for so long in Silicon Valley that it was true before there even was a Silicon Valley.

That’s not to say that AI hadn’t done impressive things, solved real problems, generated real wealth and worthy endowed professorships. But peek under the hood of Tesla’s “Autopilot” mode and you would find odd glitches, frustrated promise, and, well, still quite a lot of people hidden away in backrooms manually plugging gaps in the system, often in real time. Study Deep Blue’s 1997 defeat of world chess champion Garry Kasparov, and your excitement about how quickly this technology would take over other cognitive work would wane as you learned just how much brute human force went into fine-tuning the software specifically to beat Kasparov. Read press release after press release of Facebook, Twitter, and YouTube promising to use more machine learning to fight hate speech and save democracy — and then find out that the new thing was mostly a handmaid to armies of human grunts, and for many years relied on a technological paradigm that was decades old.

Call it AI’s man-behind-the-curtain effect: What appear at first to be dazzling new achievements in artificial intelligence routinely lose their luster and seem limited, one-off, jerry-rigged, with nothing all that impressive happening behind the scenes aside from sweat and tears, certainly nothing that deserves the name “intelligence” even by loose analogy.

So what’s different now? What follows in this essay is an attempt to contrast some of the most notable features of the new transformer paradigm (the T in ChatGPT) with what came before. It is an attempt to articulate why the new AIs that have garnered so much attention over the past year seem to defy some of the major lines of skepticism that have rightly applied to past eras — why this AI moment might, just might, be the real deal…(More)”.

Wikipedia’s Moment of Truth


Article by Jon Gertner at the New York Times: “In early 2021, a Wikipedia editor peered into the future and saw what looked like a funnel cloud on the horizon: the rise of GPT-3, a precursor to the new chatbots from OpenAI. When this editor — a prolific Wikipedian who goes by the handle Barkeep49 on the site — gave the new technology a try, he could see that it was untrustworthy. The bot would readily mix fictional elements (a false name, a false academic citation) into otherwise factual and coherent answers. But he had no doubts about its potential. “I think A.I.’s day of writing a high-quality encyclopedia is coming sooner rather than later,” he wrote in “Death of Wikipedia,” an essay that he posted under his handle on Wikipedia itself. He speculated that a computerized model could, in time, displace his beloved website and its human editors, just as Wikipedia had supplanted the Encyclopaedia Britannica, which in 2012 announced it was discontinuing its print publication.

Recently, when I asked this editor — he asked me to withhold his name because Wikipedia editors can be the targets of abuse — if he still worried about his encyclopedia’s fate, he told me that the newer versions made him more convinced that ChatGPT was a threat. “It wouldn’t surprise me if things are fine for the next three years,” he said of Wikipedia, “and then, all of a sudden, in Year 4 or 5, things drop off a cliff.”…(More)”.