Big Tech Goes to War


Article by Christine H. Fox and Emelia S. Probasco: “Even before he made a bid to buy Twitter, Elon Musk was an avid user of the site. It is one reason Ukraine’s Minister of Digital Transformation Mykhailo Fedorov took to the social media platform to prod the SpaceX CEO to activate Starlink, a SpaceX division that provides satellite internet, to help his country in the aftermath of Russia’s invasion. “While you try to colonize Mars—Russia try [sic] to occupy Ukraine!” Fedorov wrote on February 26. “We ask you to provide Ukraine with Starlink stations.”

“Starlink service is now active in Ukraine,” Musk tweeted that same day. This was a coup for Ukraine: it facilitated Ukrainian communications in the conflict. Starlink later helped fend off Russian jamming attacks against its service to Ukraine with a quick and relatively simple code update. Now, however, Musk has gone back and forth on whether the company will continue funding the Starlink satellite service that has kept Ukraine and its military online during the war.

The tensions and uncertainty Musk is injecting into the war effort demonstrate the challenges that can emerge when companies play a key role in military conflict. Technology companies ranging from Microsoft to Silicon Valley start-ups have provided cyberdefense, surveillance, and reconnaissance services—not at the direction of a government contract or even as part of a government plan but through the independent decision-making of individual companies. These companies’ efforts have rightly garnered respect and recognition; their involvement, after all, was often pro bono and could have provoked Russian attacks on their networks, or even their people, in retaliation…(More)”.

How Do You Prove a Secret?


Essay by Sheon Han: “Imagine you had some useful knowledge — maybe a secret recipe, or the key to a cipher. Could you prove to a friend that you had that knowledge, without revealing anything about it? Computer scientists proved over 30 years ago that you could, if you used what’s called a zero-knowledge proof.

For a simple way to understand this idea, let’s suppose you want to show your friend that you know how to get through a maze, without divulging any details about the path. You could simply traverse the maze within a time limit, while your friend was forbidden from watching. (The time limit is necessary because given enough time, anyone can eventually find their way out through trial and error.) Your friend would know you could do it, but they wouldn’t know how.

Zero-knowledge proofs are helpful to cryptographers, who work with secret information, but also to researchers of computational complexity, which deals with classifying the difficulty of different problems. “A lot of modern cryptography relies on complexity assumptions — on the assumption that certain problems are hard to solve, so there has always been some connections between the two worlds,” said Claude Crépeau, a computer scientist at McGill University. “But [these] proofs have created a whole world of connection.”

Zero-knowledge proofs belong to a category known as interactive proofs, so to learn how the former work, it helps to understand the latter. First described in a 1985 paper by the computer scientists Shafi Goldwasser, Silvio Micali and Charles Rackoff, interactive proofs work like an interrogation: Over a series of messages, one party (the prover) tries to convince the other (the verifier) that a given statement is true. An interactive proof must satisfy two properties. First, a true statement will always eventually convince an honest verifier. Second, if the given statement is false, no prover — even one pretending to possess certain knowledge — can convince the verifier, except with negligibly small probability…(More)”
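
The excerpt stops short of a construction, but the prover-verifier exchange it describes can be made concrete with a toy example. Below is a minimal sketch of a well-known interactive proof of knowledge (a Schnorr-style protocol) in Python: the prover convinces the verifier that it knows x with y = g^x mod p without revealing x. The parameters, function names, and challenge size are illustrative assumptions, not the construction discussed in the essay.

```python
# A minimal sketch of a Schnorr-style interactive proof of knowledge.
# Toy parameters, chosen for readability rather than security.
import secrets

p = 2**127 - 1                 # a Mersenne prime (toy size, not secure for real use)
g = 3                          # fixed public base
x = secrets.randbelow(p - 1)   # the prover's secret
y = pow(g, x, p)               # public value derived from the secret

def prover_commit():
    """Prover picks a random nonce and sends a commitment."""
    r = secrets.randbelow(p - 1)
    return r, pow(g, r, p)

def prover_respond(r, challenge):
    """Prover answers the challenge; the response alone reveals nothing about x."""
    return (r + challenge * x) % (p - 1)

def verifier_check(commitment, challenge, response):
    """Verifier accepts iff g^response == commitment * y^challenge (mod p)."""
    return pow(g, response, p) == (commitment * pow(y, challenge, p)) % p

# One round of the interactive protocol.
r, t = prover_commit()
c = secrets.randbelow(2**64)   # verifier's random challenge
s = prover_respond(r, c)
print("verifier accepts:", verifier_check(t, c, s))
```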

The Exploited Labor Behind Artificial Intelligence


Essay by Adrienne Williams, Milagros Miceli, and Timnit Gebru: “The public’s understanding of artificial intelligence (AI) is largely shaped by pop culture — by blockbuster movies like “The Terminator” and their doomsday scenarios of machines going rogue and destroying humanity. This kind of AI narrative is also what grabs the attention of news outlets: a Google engineer claiming that its chatbot was sentient was among the most discussed AI-related news in recent months, even reaching Stephen Colbert’s millions of viewers. But the idea of superintelligent machines with their own agency and decision-making power is not only far from reality — it distracts us from the real risks to human lives surrounding the development and deployment of AI systems. While the public is distracted by the specter of nonexistent sentient machines, an army of precarized workers stands behind the supposed accomplishments of artificial intelligence systems today.

Many of these systems are developed by multinational corporations located in Silicon Valley, which have been consolidating power at a scale that, journalist Gideon Lewis-Kraus notes, is likely unprecedented in human history. They are striving to create autonomous systems that can one day perform all of the tasks that people can do and more, without the required salaries, benefits or other costs associated with employing humans. While this utopia of corporate executives is far from reality, the march to attempt its realization has created a global underclass, performing what anthropologist Mary L. Gray and computational social scientist Siddharth Suri call ghost work: the downplayed human labor driving “AI”.

Tech companies that have branded themselves “AI first” depend on heavily surveilled gig workers like data labelers, delivery drivers and content moderators. Startups are even hiring people to impersonate AI systems like chatbots, under pressure from venture capitalists to incorporate so-called AI into their products. In fact, London-based venture capital firm MMC Ventures surveyed 2,830 AI startups in the EU and found that 40% of them didn’t use AI in a meaningful way…(More)”.

Leveraging Data for the Public Good


Article by Christopher Pissarides, Fadi Farra and Amira Bensebaa: “…Yet data are simply too important to be entrusted to either governments or large corporations that treat them as their private property. Instead, governments should collaborate with companies on joint-governance frameworks that recognize both the opportunities and the risks of big data.

Businesses – which are best positioned to understand big data’s true value – must move beyond short‐sighted efforts to prevent regulation. Instead, they need to initiate a dialogue with policymakers on how to design viable solutions that can leverage the currency of our era to benefit the public good. Doing so would help them regain public trust.

Governments, for their part, must avoid top‐down regulatory strategies. To win the support they need from businesses, they need to create incentives for data sharing and privacy protection and help develop new analytical tools through advanced modeling. Governments should also rethink and renew deeply-rooted frameworks inherited from the industrial era, such as those for taxation and social welfare.

In the digital age, governments should recognize the centrality of data to policymaking and develop tools to reward businesses that contribute to the public good by sharing it. True, governments require taxes to raise revenues, but they must recognize that a better understanding of individuals enables more efficient policies. By recognizing companies’ ability to save public money and create social value, governments could encourage companies to share data as a matter of social responsibility…(More)”.

Simple Writing Pays Off (Literally)


Article by Bill Birchard: “When SEC Chairman Arthur Levitt championed “plain English” writing in the 1990s, he argued that simpler financial disclosures would help investors make more informed decisions. Since then, we’ve also learned that it can help companies make more money. 

Researchers have confirmed that if you write simply and directly in disclosures like 10-Ks you can attract more investors, cut the cost of debt and equity, and even save money and time on audits.  

A landmark experiment by Kristina Rennekamp, an accounting professor at Cornell, documented some of the consequences of poor corporate writing. Working with readers of corporate press releases, she showed that companies stand to lose readers owing to the lousy “processing fluency” of their documents. “Processing fluency” is a measure of readability used by psychologists and neuroscientists.

Rennekamp asked people in an experiment to evaluate two versions of financial press releases. One was the actual release, from a soft drink company. The other was an edit using simple language advocated by the SEC’s Plain English Handbook. The handbook, essentially a guide to better fluency, contains principles that now serve as a standard by which researchers measure readability. 

Published under Levitt, the handbook clarified the requirements of Rule 421, which, starting in 1998, required all prospectuses (and in 2008 all mutual fund summary prospectuses) to adhere to the handbook’s principles. Among them: Use short sentences. Stick to active voice. Seek concrete words. Shun boilerplate. Minimize jargon. And avoid multiple negatives. 

Rennekamp’s experiment, using the so-called Fog Index, a measure of readability based on handbook standards, provided evidence that companies would do better at hooking readers if they simply made their writing easier to read. “Processing fluency from a more readable disclosure,” she wrote in 2012 after measuring the greater trust readers put in well-written releases, “acts as a heuristic cue and increases investors’ beliefs that they can rely on the information in the disclosure…(More)”.
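
For readers curious how the Fog Index Rennekamp relied on is computed, the standard formula is 0.4 × (average sentence length + percentage of words with three or more syllables). Here is a rough sketch; the syllable counter is a crude vowel-group heuristic, so treat the scores as approximate rather than as the measure used in the study.

```python
# A rough sketch of the Gunning Fog Index: grade level ~=
# 0.4 * (average sentence length + percentage of words with 3+ syllables).
import re

def count_syllables(word):
    """Approximate syllables as runs of vowels (good enough for a rough score)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fog_index(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if count_syllables(w) >= 3]
    avg_sentence_len = len(words) / max(1, len(sentences))
    pct_complex = 100 * len(complex_words) / max(1, len(words))
    return 0.4 * (avg_sentence_len + pct_complex)

plain = "We cut costs. Sales rose. We will hire more staff next year."
dense = ("Notwithstanding the aforementioned considerations, the corporation "
         "anticipates materially advantageous developments despite "
         "macroeconomic uncertainties.")
print(f"plain: {fog_index(plain):.1f}, dense: {fog_index(dense):.1f}")
```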

Four ways that AI and robotics are helping to transform other research fields


Article by Michael Eisenstein: “Artificial intelligence (AI) is already proving a revolutionary tool for bioinformatics; the AlphaFold database set up by London-based company DeepMind, owned by Google, is allowing scientists to predict the structures of 200 million proteins across 1 million species. But other fields are benefiting too. Here, we describe the work of researchers pursuing cutting-edge AI and robotics techniques to better anticipate the planet’s changing climate, uncover the hidden history behind artworks, understand deep sea ecology and develop new materials.

Marine biology with a soft touch

It takes a tough organism to withstand the rigours of deep-sea living. But these resilient species are also often remarkably delicate, ranging from soft and squishy creatures such as jellyfish and sea cucumbers, to firm but fragile deep-sea fishes and corals. Their fragility makes studying these organisms a complex task.

The rugged metal manipulators found on many undersea robots are more likely to harm such specimens than to retrieve them intact. But ‘soft robots’ based on flexible polymers are giving marine biologists such as David Gruber, of the City University of New York, a gentler alternative for interacting with these enigmatic denizens of the deep…(More)”.

Eliminate data asymmetries to democratize data use


Article by Rahul Matthan: “Anyone who possesses a large enough store of data can reasonably expect to glean powerful insights from it. These insights are more often than not used to enhance advertising revenues or ensure greater customer stickiness. In other instances, they’ve been subverted to alter our political preferences and manipulate us into taking decisions we otherwise may not have.

The ability to generate insights places those who have access to these data sets at a distinct advantage over those whose data is contained within them. It allows the former to benefit from the data in ways that the latter may not even have thought possible when they consented to provide it. Given how easily these insights can be used to harm those to whom they pertain, there is a need to mitigate the effects of this data asymmetry.

Privacy law attempts to do this by providing data principals with tools they can use to exert control over their personal data. It requires data collectors to obtain informed consent from data principals before collecting their data and forbids them from using it for any purpose other than that which has been previously notified. This is why, even if that consent has been obtained, data fiduciaries cannot collect more data than is absolutely necessary to achieve the stated purpose and are only allowed to retain that data for as long as is necessary to fulfil the stated purpose.

In India, we’ve gone one step further and built techno-legal solutions to help reduce this data asymmetry. The Data Empowerment and Protection Architecture (DEPA) framework makes it possible to extract data from the silos in which they reside and transfer it on the instructions of the data principal to other entities, which can then use it to provide other services to the data principal. This data micro-portability dilutes the historical advantage that incumbents enjoy on account of collecting data over the entire duration of their customer engagement. It eliminates data asymmetries by establishing the infrastructure that creates a competitive market for data-based services, allowing data principals to choose from a range of options as to how their data could be used for their benefit by service providers.
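
As a purely illustrative sketch of the consent-mediated micro-portability the framework describes, the flow might look like this: the data principal authorizes a scoped, time-bound transfer, and the data provider releases only the consented fields. The classes and field names below are hypothetical and simplified, not DEPA's actual schema.

```python
# Toy sketch of a consent-mediated data transfer, with hypothetical field names.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ConsentArtefact:
    principal_id: str   # whose data is being moved
    provider: str       # where the data currently sits (the silo)
    consumer: str       # the service the principal wants to use
    purpose: str        # the notified purpose the data may be used for
    fields: list        # only the data items strictly needed for that purpose
    expires: datetime   # consent is time-bound

def release_data(silo: dict, artefact: ConsentArtefact, now: datetime) -> dict:
    """Data provider releases only the consented fields, only while consent is live."""
    if now > artefact.expires:
        raise PermissionError("consent has expired")
    return {k: v for k, v in silo[artefact.principal_id].items() if k in artefact.fields}

silo = {"user-42": {"balance": 120_000, "transactions": ["..."], "id_number": "XXXX"}}
artefact = ConsentArtefact("user-42", "BankA", "LoanAppB", "credit assessment",
                           fields=["balance", "transactions"],
                           expires=datetime.now() + timedelta(days=30))
print(release_data(silo, artefact, datetime.now()))
```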

This, however, is not the only type of asymmetry we have to deal with in this age of big data. In a recent article, Stefaan Verhulst of GovLab at New York University pointed out that it is no longer enough to possess large stores of data—you need to know how to effectively extract value from it. Many businesses might have vast stores of data that they have accumulated over the years they have been in operation, but very few of them are able to effectively extract useful signals from that noisy data.

Without the know-how to translate data into actionable information, merely owning a large data set is of little value.

Unlike data asymmetries, which can be mitigated by making data more widely available, information asymmetries can only be addressed by radically democratizing the techniques and know-how that are necessary for extracting value from data. This know-how is largely proprietary and hard to access even in a fully competitive market. What’s more, in many instances, the computation power required far exceeds the capacity of entities for whom data analysis is not the main purpose of their business…(More)”.

Can Smartphones Help Predict Suicide?


Ellen Barry in The New York Times: “In March, Katelin Cruz left her latest psychiatric hospitalization with a familiar mix of feelings. She was, on the one hand, relieved to leave the ward, where aides took away her shoelaces and sometimes followed her into the shower to ensure that she would not harm herself.

But her life on the outside was as unsettled as ever, she said in an interview, with a stack of unpaid bills and no permanent home. It was easy to slide back into suicidal thoughts. For fragile patients, the weeks after discharge from a psychiatric facility are a notoriously difficult period, with a suicide rate around 15 times the national rate, according to one study.

This time, however, Ms. Cruz, 29, left the hospital as part of a vast research project that attempts to use advances in artificial intelligence to do something that has eluded psychiatrists for centuries: to predict who is likely to attempt suicide and when that person is likely to attempt it, and then to intervene.

On her wrist, she wore a Fitbit programmed to track her sleep and physical activity. On her smartphone, an app was collecting data about her moods, her movement and her social interactions. Each device was providing a continuous stream of information to a team of researchers on the 12th floor of the William James Building, which houses Harvard’s psychology department.

In the field of mental health, few new areas generate as much excitement as machine learning, which uses computer algorithms to better predict human behavior. There is, at the same time, exploding interest in biosensors that can track a person’s mood in real time, factoring in music choices, social media posts, facial expression and vocal expression.

Matthew K. Nock, a Harvard psychologist who is one of the nation’s top suicide researchers, hopes to knit these technologies together into a kind of early-warning system that could be used when an at-risk patient is released from the hospital…(More)”.
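
The article does not describe the Harvard team's model, but the general idea of an early-warning system built on sensor streams can be sketched: aggregate daily features (sleep, activity, mood check-ins) and train a classifier that flags elevated risk for clinician follow-up. Everything below, including the synthetic data and the feature set, is a hypothetical illustration, not the researchers' method.

```python
# Toy sketch of a risk-flagging classifier over daily sensor features.
# All data is synthetic; feature choices and weights are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_days = 500
X = np.column_stack([
    rng.normal(6.5, 1.5, n_days),   # hours slept
    rng.normal(5.0, 2.0, n_days),   # step count (thousands)
    rng.integers(1, 6, n_days),     # self-reported mood (1-5)
])
# Synthetic labels: worse sleep and mood nudge risk upward (illustration only).
risk = 1 / (1 + np.exp(-(-0.6 * X[:, 0] - 0.2 * X[:, 1] - 0.8 * X[:, 2] + 7)))
y = (rng.random(n_days) < risk).astype(int)

model = LogisticRegression().fit(X, y)
today = np.array([[4.0, 1.5, 1]])   # a short-sleep, inactive, low-mood day
print("flag for follow-up:", model.predict_proba(today)[0, 1] > 0.5)
```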

Hurricane Ian Destroyed Their Homes. Algorithms Sent Them Money


Article by Chris Stokel-Walker: “The algorithms that power Skai’s damage assessments are trained by manually labeling satellite images of a couple of hundred buildings in a disaster-struck area that are known to have been damaged. The software can then, at speed, detect damaged buildings across the whole affected area. A research paper on the underlying technology presented at a 2020 academic workshop on AI for disaster response claimed the auto-generated damage assessments match those of human experts with between 85 and 98 percent accuracy.

In Florida this month, GiveDirectly sent its push notification offering $700 to any user of the Providers app with a registered address in neighborhoods of Collier, Charlotte, and Lee Counties where Google’s AI system deemed more than 50 percent of buildings had been damaged. So far, 900 people have taken up the offer, and half of those have been paid. If every recipient takes up GiveDirectly’s offer, the organization will pay out $2.4 million in direct financial aid.
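
The eligibility rule described above (a fixed payment offered to enrolled residents of neighborhoods where the model flags more than half of buildings as damaged) is simple enough to sketch. The neighborhood figures below are made up, and the real pipeline belongs to GiveDirectly and Google; this is only an illustration of the threshold logic.

```python
# Simplified sketch of a damage-threshold eligibility rule with made-up data.
PAYMENT_USD = 700
DAMAGE_THRESHOLD = 0.5

def eligible_neighborhoods(assessments: dict[str, tuple[int, int]]) -> set[str]:
    """assessments maps neighborhood -> (buildings_damaged, buildings_total)."""
    return {name for name, (damaged, total) in assessments.items()
            if total > 0 and damaged / total > DAMAGE_THRESHOLD}

def total_payout(residents_by_neighborhood: dict[str, int], eligible: set[str]) -> int:
    return PAYMENT_USD * sum(n for hood, n in residents_by_neighborhood.items()
                             if hood in eligible)

assessments = {"Neighborhood A": (812, 1200), "Neighborhood B": (140, 900)}
residents = {"Neighborhood A": 430, "Neighborhood B": 510}
eligible = eligible_neighborhoods(assessments)
print(eligible, total_payout(residents, eligible))
```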

Some may be skeptical of automated disaster response. But in the chaos after an event like a hurricane making landfall, the conventional, human response can be far from perfect. Diaz points to an analysis GiveDirectly conducted looking at their work after Hurricane Harvey, which hit Texas and Louisiana in 2017, before the project with Google. Two out of the three areas that were most damaged and economically depressed were initially overlooked. A data-driven approach is “much better than what we’ll have from boots on the ground and word of mouth,” Diaz says.

GiveDirectly and Google’s hands-off, algorithm-led approach to aid distribution has been welcomed by some disaster assistance experts—with caveats. Reem Talhouk, a research fellow at Northumbria University’s School of Design and Centre for International Development in the UK, says that the system appears to offer a more efficient way of delivering aid. And it protects the dignity of recipients, who don’t have to queue up for handouts in public…(More)”.

‘Dark data’ is killing the planet – we need digital decarbonisation


Article by Tom Jackson and Ian R. Hodgkinson: “More than half of the digital data firms generate is collected, processed and stored for single-use purposes. Often, it is never re-used. This could be your multiple near-identical images held on Google Photos or iCloud, a business’s outdated spreadsheets that will never be used again, or data from internet of things sensors that have no purpose.

This “dark data” is anchored to the real world by the energy it requires. Even data that is stored and never used again takes up space on servers – typically huge banks of computers in warehouses. Those computers and those warehouses all use lots of electricity.

This is a significant energy cost that is hidden in most organisations. Maintaining an effective organisational memory is a challenge, but at what cost to the environment?

In the drive towards net zero many organisations are trying to reduce their carbon footprints. Guidance has generally centred on reducing traditional sources of carbon production, through mechanisms such as carbon offsetting via third parties (planting trees to make up for emissions from using petrol, for instance).

While most climate change activists are focused on limiting emissions from the automotive, aviation and energy industries, the processing of digital data is already comparable to these sectors and is still growing. In 2020, digitisation was purported to generate 4% of global greenhouse gas emissions. Production of digital data is increasing fast – this year the world is expected to generate 97 zettabytes (that is: 97 trillion gigabytes) of data. By 2025, it could almost double to 181 zettabytes. It is therefore surprising that little policy attention has been paid to reducing the digital carbon footprint of organisations…(More)”.
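
A back-of-the-envelope calculation shows the kind of hidden cost the authors describe. Every constant below is a placeholder assumption rather than a figure from the article; an organisation would substitute its own storage volume, per-terabyte energy cost, and grid carbon intensity.

```python
# Back-of-the-envelope estimate of the energy and carbon cost of "dark" data.
# All constants are placeholder assumptions, not figures from the article.
DARK_DATA_TB = 500        # assumed volume of single-use data kept on servers
KWH_PER_TB_YEAR = 10      # assumed annual storage energy per terabyte (placeholder)
KG_CO2_PER_KWH = 0.4      # assumed grid carbon intensity (placeholder)

energy_kwh = DARK_DATA_TB * KWH_PER_TB_YEAR
emissions_kg = energy_kwh * KG_CO2_PER_KWH
print(f"~{energy_kwh:,.0f} kWh/year, ~{emissions_kg:,.0f} kg CO2/year")
```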