What Big Tech Knows About Your Body

Article by Yael Grauer: “If you were seeking online therapy from 2017 to 2021—and a lot of people were—chances are good that you found your way to BetterHelp, which today describes itself as the world’s largest online-therapy purveyor, with more than 2 million users. Once you were there, after a few clicks, you would have completed a form—an intake questionnaire, not unlike the paper one you’d fill out at any therapist’s office: Are you new to therapy? Are you taking any medications? Having problems with intimacy? Experiencing overwhelming sadness? Thinking of hurting yourself? BetterHelp would have asked you if you were religious, if you were LGBTQ, if you were a teenager. These questions were just meant to match you with the best counselor for your needs, small text would have assured you. Your information would remain private.

Except BetterHelp isn’t exactly a therapist’s office, and your information may not have been completely private. In fact, according to a complaint brought by federal regulators, for years, BetterHelp was sharing user data—including email addresses, IP addresses, and questionnaire answers—with third parties, including Facebook and Snapchat, for the purposes of targeting ads for its services. It was also, according to the Federal Trade Commission, poorly regulating what those third parties did with users’ data once they got them. In July, the company finalized a settlement with the FTC and agreed to refund $7.8 million to consumers whose privacy, regulators claimed, had been compromised. (In a statement, BetterHelp admitted no wrongdoing and described the alleged sharing of user information as an “industry-standard practice.”)

We leave digital traces about our health everywhere we go: by completing forms like BetterHelp’s. By requesting a prescription refill online. By clicking on a link. By asking a search engine about dosages or directions to a clinic or pain in chest dying. By shopping, online or off. By participating in consumer genetic testing. By stepping on a smart scale or using a smart thermometer. By joining a Facebook group or a Discord server for people with a certain medical condition. By using internet-connected exercise equipment. By using an app or a service to count your steps or track your menstrual cycle or log your workouts. Even demographic and financial data unrelated to health can be aggregated and analyzed to reveal or infer sensitive information about people’s physical or mental-health conditions…(More)”.

The Man Who Trapped Us in Databases

McKenzie Funk in The New York Times: “One of Asher’s innovations — or more precisely one of his companies’ innovations — was what is now known as the LexID. My LexID, I learned, is 000874529875. This unique string of digits is a kind of shadow Social Security number, one of many such “persistent identifiers,” as they are called, that have been issued not by the government but by data companies like Acxiom, Oracle, Thomson Reuters, TransUnion — or, in this case, LexisNexis.

My LexID was created sometime in the early 2000s in Asher’s computer room in South Florida, as many still are, and without my consent it began quietly stalking me. One early data point on me would have been my name; another, my parents’ address in Oregon. From my birth certificate or my driver’s license or my teenage fishing license — and from the fact that the three confirmed one another — it could get my sex and my date of birth. At the time, it would have been able to collect the address of the college I attended, Swarthmore, which was small and expensive, and it would have found my first full-time employer, the National Geographic Society, quickly amassing more than enough data to let someone — back then, a human someone — infer quite a bit more about me and my future prospects…(More)”

Get a rabbit: Don’t trust the numbers

Article by John Lanchester: “At a dinner​ with the American ambassador in 2007, Li Keqiang, future premier of China, said that when he wanted to know what was happening to the country’s economy, he looked at the numbers for electricity use, rail cargo and bank lending. There was no point using the official GDP statistics, Li said, because they are ‘man-made’. That remark, which we know about thanks to WikiLeaks, is fascinating for two reasons. First, because it shows a sly, subtle, worldly humour – a rare glimpse of the sort of thing Chinese Communist Party leaders say in private. Second, because it’s true. A whole strand in contemporary thinking about the production of knowledge is summed up there: data and statistics, all of them, are man-made.

They are also central to modern politics and governance, and the ways we talk about them. That in itself represents a shift. Discussions that were once about values and beliefs – about what a society wants to see when it looks at itself in the mirror – have increasingly turned to arguments about numbers, data, statistics. It is a quirk of history that the politician who introduced this style of debate wasn’t Harold Wilson, the only prime minister to have had extensive training in statistics, but Margaret Thatcher, who thought in terms of values but argued in terms of numbers. Even debates that are ultimately about national identity, such as the referendums about Scottish independence and EU membership, now turn on numbers.

Given the ubiquity of this style of argument, we are nowhere near as attentive to its misuses as we should be. As the House of Commons Treasury Committee said dryly in a 2016 report on the economic debate about EU membership, ‘many of these claims sound factual because they use numbers.’ The best short book about the use and misuse of statistics is Darrell Huff’s How to Lie with Statistics, first published in 1954, a devil’s-advocate guide to the multiple ways in which numbers are misused in advertising, commerce and politics. (Single best tip: ‘up to’ is always a fib. It means somebody did a range of tests and has artfully chosen the most flattering number.) For all its virtues, though, even Huff’s book doesn’t encompass the full range of possibilities for statistical deception. In politics, the numbers in question aren’t just man-made but are often contentious, tendentious or outright fake.

Two fake numbers have been decisively influential in British politics over the baleful last thirteen years. The first was an outright lie: Vote Leave’s assertion that £350 million a week extra ‘for the NHS’ would be available if we left the EU. The real number for the UK’s net contribution to the EU was £110 million, but that didn’t matter, since the crucial thing for the Leave campaign was to make the number the focus of debate. The Treasury Committee said the number was fake, and so did the UK Statistics Authority. This had no, or perhaps even a negative, effect. In politics it doesn’t really matter what the numbers are, so much as whose they are. If people are arguing about your numbers, you’re winning…(More)”.
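Huff’s “up to” tip lends itself to a toy calculation. The sketch below uses hypothetical numbers (not drawn from the article or from Huff’s book) to show how a range of test results yields both an honest typical figure and a far more flattering “up to” headline:

```python
from statistics import mean

# Hypothetical results from a series of tests of an advertised product,
# e.g. percentage savings measured in five separate trials.
# Illustrative numbers only.
savings_percent = [2.0, 3.5, 1.0, 4.2, 19.0]

# The honest summary: the typical result across all trials.
typical = mean(savings_percent)

# The advertiser's summary: "save up to 19%!" -- the single most
# flattering number artfully chosen from the whole range of tests.
up_to_claim = max(savings_percent)

print(f"typical saving: {typical:.2f}%")   # typical saving: 5.94%
print(f"'up to' claim:  {up_to_claim:.1f}%")  # 'up to' claim:  19.0%
```

The gap between the two figures is the fib: both are “true,” but only one describes what a typical customer should expect.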

These Prisoners Are Training AI

Article by Morgan Meaker: “…Around the world, millions of so-called “clickworkers” train artificial intelligence models, teaching machines the difference between pedestrians and palm trees, or what combination of words describe violence or sexual abuse. Usually these workers are stationed in the global south, where wages are cheap. OpenAI, for example, uses an outsourcing firm that employs clickworkers in Kenya, Uganda, and India. That arrangement works for American companies, operating in the world’s most widely spoken language, English. But there are not a lot of people in the global south who speak Finnish.

That’s why Metroc turned to prison labor. The company gets cheap, Finnish-speaking workers, while the prison system can offer inmates employment that, it says, prepares them for the digital world of work after their release. Using prisoners to train AI creates uneasy parallels with the kind of low-paid and sometimes exploitative labor that has often existed downstream in technology. But in Finland, the project has received widespread support.

“There’s this global idea of what data labor is. And then there’s what happens in Finland, which is very different if you look at it closely,” says Tuukka Lehtiniemi, a researcher at the University of Helsinki, who has been studying data labor in Finnish prisons.

For four months, Marmalade has lived here, in Hämeenlinna prison. The building is modern, with big windows. Colorful artwork tries to enforce a sense of cheeriness on otherwise empty corridors. If it wasn’t for the heavy gray security doors blocking every entry and exit, these rooms could easily belong to a particularly soulless school or university complex.

Finland might be famous for its open prisons—where inmates can work or study in nearby towns—but this is not one of them. Instead, Hämeenlinna is the country’s highest-security institution housing exclusively female inmates. Marmalade has been sentenced to six years. Under privacy rules set by the prison, WIRED is not able to publish Marmalade’s real name, exact age, or any other information that could be used to identify her. But in a country where prisoners serving life terms can apply to be released after 12 years, six years is a heavy sentence. And like the other 100 inmates who live here, she is not allowed to leave…(More)”.

Satellite Internet Companies Could Help Break Authoritarianism

Article: “In 2022, when Iran’s notorious “morality police” killed 22-year-old Kurdish-Iranian Mahsa Amini, the act triggered nationwide protests around police brutality and women’s rights. The government tried to quell the unrest by shutting down mobile data communication and hampering the flow of information through social media channels. Iranian officials cut off Internet access entirely to Kurdistan.

With the first anniversary of her death in mid-September, the issue remains urgent. There were multiple protests around the country. More than 200 people were confirmed arrested. There have been reports of shots fired by police. The Iranian government has increased Internet restrictions to stem protests and remembrances and to reduce interest in the “Woman, Life, Freedom” movement Amini’s death sparked.

Internet access can be a matter of life or death under authoritarian leadership. When people lose Internet access, they lose freedom of thought, freedom of movement, freedom of knowledge and much more. In the face of shutdowns and government monitoring, access to satellite Internet can preserve both autonomy and freedoms. To preserve both democratic ideals and basic human rights, Western governments and nongovernmental organizations should incentivize and insist that satellite providers establish simple Internet access for people undergoing communications shutdowns.

During the unrest after Amini’s killing, protesters in Iran and their supporters elsewhere asked for help from Internet providers like Starlink, the low-Earth orbit satellite communications company. Owner Elon Musk had given the company’s services to Ukraine during the early days of the Russian invasion before asking the U.S. government to reimburse him. In response, the Biden administration announced negotiations with Musk about one year ago to provide Internet access for the Iranian people. Those talks do not seem to have yielded results.

That Internet access in Iran became a top priority in the wake of Amini’s death is not surprising. The Islamic Republic embraces new technologies when it can exercise complete control and shuns them when it cannot. As a journalist who has spent the past two decades covering science and technology in Iran, I have seen this firsthand. When I was a kid, owning a VCR was a crime. Owning a fax machine required government approval. In 2009, during the Green Movement, I watched the government cut text messaging services for months, ban social media platforms, and monitor and record citizens’ communications to intimidate them.

The issue of Internet access extends well beyond Iran. According to Access Now, an Internet freedom advocacy group, 2021 alone saw 182 Internet shutdowns in 34 countries. According to Freedom House’s latest report on Internet freedom, out of the 70 countries the report assessed, only 17 are truly free based on criteria related to access, censorship and user rights. Unsurprisingly, these are mostly democracies…(More)”.

More Companies Are Disclosing Their ESG Data, but Confusion on How Persists

Article by David Breg: “Public companies in the U.S. are increasingly disclosing sustainability information, but many say they find it a challenge to report fundamental climate data that many regulators around the globe likely will require under incoming mandatory reporting standards.

Nearly two-thirds of respondents said their company was disclosing environmental, social and governance information, up from 56% in the prior year, according to the annual survey of sustainability officials that WSJ Pro conducted this spring.

However, there was little consensus on which framework to use, and respondents highlighted three fundamental types of information as their biggest environmental reporting challenges: greenhouse-gas emissions, climate-change risk and energy management.

The proportion of companies disclosing sustainability and ESG information was 63%, up from 56% last year. Those that don’t yet report this data but plan to was 16%, down from 25% last year. About one-fifth of respondents said their organization had no plans to report their progress, virtually unchanged from last year. Breaking that down, a quarter of private companies don’t plan any ESG reporting, while only 7% of public companies felt the same.

Regulators around the globe are finalizing rules that would require companies to publish standardized information after years of patchy voluntary ESG reporting based on a host of frameworks. California’s governor has said he would soon sign that state’s requirements into law. The U.S. Securities and Exchange Commission’s rules are expected later this year. European regulations are already in place and many other countries are also working on standards. The International Sustainability Standards Board hopes its climate framework, completed this past summer, becomes the global baseline.

While it is mostly public companies that face mandatory requirements, even private businesses face increased scrutiny of their sustainability and ESG policies from stakeholders including shareholders, eco-conscious consumers, suppliers, insurers and lenders…(More)”.

Surveys Provide Insight Into Three Factors That Encourage Open Data and Science

Article by Joshua Borycz, Alison Specht and Kevin Crowston: “Open Science is a game changer for researchers and the research community. The UNESCO Open Science recommendations in 2021 suggest that the practice of Open Science is a win-win for researchers as they gain from others’ work while making contributions, which in turn benefits the community, as transparency of conclusions and hence confidence in new knowledge improves.

Over a 10-year period, Carol Tenopir of DataONE and her team conducted a global survey of scientists, managers and government workers involved in broad environmental science activities about their willingness to share data and their opinion of the resources available to do so (Tenopir et al., 2011, 2015, 2018, 2020). Comparing the responses over that time shows a general increase in the willingness to share data (and thus engage in open science).

The most surprising result was that a higher willingness to share data corresponded with a decrease in satisfaction with data sharing resources across nations (e.g., skills, tools, training) (Fig.1). That is, researchers who did not want to share data were satisfied with the available resources, and those that did want to share data were dissatisfied. Researchers appear to only discover that the tools are insufficient when they begin the hard work of engaging in open science practices. This indicates that a cultural shift in the attitudes of researchers needs to precede the development of support and tools for data management…(More)”.

Fig.1: Correlation between the factors of willingness to share and satisfaction with resources for data sharing for six groups of nations.

Doing more good: three trends tech companies should consider in supporting humanitarian response

Article by Jessie End: “On 6 February 2023, a 7.8 magnitude earthquake struck Turkey and Syria, leaving at least 56,000 dead and more than 20 million impacted. Since April, renewed conflict in Sudan has left hundreds of thousands displaced. Ukraine. COVID. Contemplating an increasingly complex and besieged humanitarian landscape, I asked our partners: how can the technology sector better meet these growing needs?

To mark World Humanitarian Day last month, here are three trends with which tech companies can align to ensure our work is doing the most good…

Climate change has been in the public narrative for decades. For much of that time it was the territory of environmental nonprofits. Today, it is recognised as an intersectional issue impacting the work of every humanitarian organisation. The effects of climate change on food security, livelihoods, migration and conflict require organisations such as Mercy Corps and the International Committee of the Red Cross to incorporate mitigation, resilience and climate-savvy response across their programs.

Early warning systems (EWS) are a promising development in this area, and one well-aligned with the expertise of the tech sector. An effective climate early warning system addresses the complex network of factors contributing to and resulting from climate change. It provides event detection, analysis, prediction, communication and decision-making tools. An effective EWS includes the communities and sectors most at risk, incorporating all relevant risk factors, from geography to social vulnerabilities.

There are many ways for tech firms to engage with this work. Companies working on remote sensing technologies can improve risk detection and provide predictive modeling. Dataminr’s own AI platform detects the earliest signals of high-impact events and emerging risks from within publicly available data, including environmental sensors. Market insight platforms can lend their strengths to participatory mapping and data collection for climate risk analysis. And two-way, geolocated messaging can help with targeted dissemination of warnings to impacted communities, as well as with coordinating response efforts.

The key to success is integration. No single tech company can address all parts of a robust EWS, but working together and with partners like MIT’s CREWSnet we can build seamless systems that help humanitarian partners protect the lives and livelihoods of the world’s most vulnerable…(More)”.

AI and the next great tech shift

Book review by John Thornhill: “When the South Korean political activist Kim Dae-jung was jailed for two years in the early 1980s, he powered his way through some 600 books in his prison cell, such was his thirst for knowledge. One book that left a lasting impression was The Third Wave by the renowned futurist Alvin Toffler, who argued that an imminent information revolution was about to transform the world as profoundly as the preceding agricultural and industrial revolutions.

“Yes, this is it!” Kim reportedly exclaimed. When later elected president, Kim referred to the book many times in his drive to turn South Korea into a technological powerhouse.

Forty-three years after the publication of Toffler’s book, another work of sweeping futurism has appeared with a similar theme and a similar name. Although the stock in trade of futurologists is to highlight the transformational and the unprecedented, it is remarkable how much of their output appears the same.

The chief difference is that The Coming Wave by Mustafa Suleyman focuses more narrowly on the twin revolutions of artificial intelligence and synthetic biology. But the author would surely be delighted if his book were to prove as influential as Toffler’s in prompting politicians to action.

As one of the three co-founders of DeepMind, the London-based AI research company founded in 2010, and now chief executive of the AI start-up Inflection, Suleyman has been at the forefront of the industry for more than a decade. The Coming Wave bristles with breathtaking excitement about the extraordinary possibilities that the revolutions in AI and synthetic biology could bring about.

AI, we are told, could unlock the secrets of the universe, cure diseases and stretch the bounds of imagination. Biotechnology can enable us to engineer life and transform agriculture. “Together they will usher in a new dawn for humanity, creating wealth and surplus unlike anything ever seen,” he writes.

But what is striking about Suleyman’s heavily promoted book is how the optimism of his will is overwhelmed by the pessimism of his intellect, to borrow a phrase from the Marxist philosopher Antonio Gramsci. For most of history, the challenge of technology has been to unleash its power, Suleyman writes. Now the challenge has flipped.

In the 21st century, the dilemma will be how to contain technology’s power given the capabilities of these new technologies have exploded and the costs of developing them have collapsed. “Containment is not, on the face of it, possible. And yet for all our sakes, containment must be possible,” he writes…(More)”.

Unlocking AI’s Potential for Everyone

Article by Diane Coyle: “…But while some policymakers do have deep knowledge about AI, their expertise tends to be narrow, and most other decision-makers simply do not understand the issue well enough to craft sensible policies. Owing to this relatively low knowledge base and the inevitable asymmetry of information between regulators and regulated, policy responses to specific issues are likely to remain inadequate, heavily influenced by lobbying, or highly contested.

So, what is to be done? Perhaps the best option is to pursue more of a principles-based policy. This approach has already gained momentum in the context of issues like misinformation and trolling, where many experts and advocates believe that Big Tech companies should have a general duty of care (meaning a default orientation toward caution and harm reduction).

In some countries, similar principles already apply to news broadcasters, who are obligated to pursue accuracy and maintain impartiality. Although enforcement in these domains can be challenging, the upshot is that we do already have a legal basis for eliciting less socially damaging behavior from technology providers.

When it comes to competition and market dominance, telecoms regulation offers a serviceable model with its principle of interoperability. People with competing service providers can still call each other because telecom companies are all required to adhere to common technical standards and reciprocity agreements. The same is true of ATMs: you may incur a fee, but you can still withdraw cash from a machine at any bank.

In the case of digital platforms, a lack of interoperability has generally been established by design, as a means of locking in users and creating “moats.” This is why policy discussions about improving data access and ensuring access to predictable APIs have failed to make any progress. But there is no technical reason why some interoperability could not be engineered back in. After all, Big Tech companies do not seem to have much trouble integrating the new services that they acquire when they take over competitors.

In the case of LLMs, interoperability probably could not apply at the level of the models themselves, since not even their creators understand their inner workings. However, it can and should apply to interactions between LLMs and other services, such as cloud platforms…(More)”.