Simple Writing Pays Off (Literally)


Article by Bill Birchard: “When SEC Chairman Arthur Levitt championed “plain English” writing in the 1990s, he argued that simpler financial disclosures would help investors make more informed decisions. Since then, we’ve also learned that it can help companies make more money. 

Researchers have confirmed that if you write simply and directly in disclosures like 10-Ks, you can attract more investors, cut the cost of debt and equity, and even save money and time on audits.

A landmark experiment by Kristina Rennekamp, an accounting professor at Cornell, documented some of the consequences of poor corporate writing. Working with readers of corporate press releases, she showed that companies stand to lose readers owing to the lousy “processing fluency” of their documents. “Processing fluency” is a measure of readability used by psychologists and neuroscientists. 

Rennekamp asked people in an experiment to evaluate two versions of financial press releases. One was the actual release, from a soft drink company. The other was an edit using simple language advocated by the SEC’s Plain English Handbook. The handbook, essentially a guide to better fluency, contains principles that now serve as a standard by which researchers measure readability. 

Published under Levitt, the handbook clarified the requirements of Rule 421, which, starting in 1998, required all prospectuses (and in 2008 all mutual fund summary prospectuses) to adhere to the handbook’s principles. Among them: Use short sentences. Stick to active voice. Seek concrete words. Shun boilerplate. Minimize jargon. And avoid multiple negatives. 

Rennekamp’s experiment, using the so-called Fog Index, a measure of readability based on handbook standards, provided evidence that companies would do better at hooking readers if they simply made their writing easier to read. “Processing fluency from a more readable disclosure,” she wrote in 2012 after measuring the greater trust readers put in well-written releases, “acts as a heuristic cue and increases investors’ beliefs that they can rely on the information in the disclosure…(More)”.
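The Fog Index mentioned above has a simple arithmetic form: 0.4 × (average sentence length + percentage of “complex” words of three or more syllables). A minimal sketch in Python, where the syllable counter is a rough vowel-group heuristic rather than the formal definition:

```python
import re

def fog_index(text):
    """Gunning Fog Index: 0.4 * (avg words per sentence + % complex words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word):
        # crude heuristic: count runs of vowels as syllables
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    complex_words = [w for w in words if syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))
```

On this scale, lower is more readable: short, plain sentences score far below jargon-heavy boilerplate, which is the gap Rennekamp's two press-release versions exploited.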

Four ways that AI and robotics are helping to transform other research fields


Article by Michael Eisenstein: “Artificial intelligence (AI) is already proving a revolutionary tool for bioinformatics; the AlphaFold database set up by London-based company DeepMind, owned by Google, is allowing scientists to predict the structures of 200 million proteins across 1 million species. But other fields are benefiting too. Here, we describe the work of researchers pursuing cutting-edge AI and robotics techniques to better anticipate the planet’s changing climate, uncover the hidden history behind artworks, understand deep sea ecology and develop new materials.

Marine biology with a soft touch

It takes a tough organism to withstand the rigours of deep-sea living. But these resilient species are also often remarkably delicate, ranging from soft and squishy creatures such as jellyfish and sea cucumbers, to firm but fragile deep-sea fishes and corals. Their fragility makes studying these organisms a complex task.

The rugged metal manipulators found on many undersea robots are more likely to harm such specimens than to retrieve them intact. But ‘soft robots’ based on flexible polymers are giving marine biologists such as David Gruber, of the City University of New York, a gentler alternative for interacting with these enigmatic denizens of the deep…(More)”.

Eliminate data asymmetries to democratize data use


Article by Rahul Matthan: “Anyone who possesses a large enough store of data can reasonably expect to glean powerful insights from it. These insights are more often than not used to enhance advertising revenues or ensure greater customer stickiness. In other instances, they’ve been subverted to alter our political preferences and manipulate us into taking decisions we otherwise may not have.

The ability to generate insights places those who have access to these data sets at a distinct advantage over those whose data is contained within them. It allows the former to benefit from the data in ways that the latter may not even have thought possible when they consented to provide it. Given how easily these insights can be used to harm those to whom it pertains, there is a need to mitigate the effects of this data asymmetry.

Privacy law attempts to do this by providing data principals with tools they can use to exert control over their personal data. It requires data collectors to obtain informed consent from data principals before collecting their data and forbids them from using it for any purpose other than that which has been previously notified. This is why, even if that consent has been obtained, data fiduciaries cannot collect more data than is absolutely necessary to achieve the stated purpose and are only allowed to retain that data for as long as is necessary to fulfil the stated purpose.

In India, we’ve gone one step further and built techno-legal solutions to help reduce this data asymmetry. The Data Empowerment and Protection Architecture (DEPA) framework makes it possible to extract data from the silos in which they reside and transfer it on the instructions of the data principal to other entities, which can then use it to provide other services to the data principal. This data micro-portability dilutes the historical advantage that incumbents enjoy on account of collecting data over the entire duration of their customer engagement. It eliminates data asymmetries by establishing the infrastructure that creates a competitive market for data-based services, allowing data principals to choose from a range of options as to how their data could be used for their benefit by service providers.

This, however, is not the only type of asymmetry we have to deal with in this age of big data. In a recent article, Stefaan Verhulst of GovLab at New York University pointed out that it is no longer enough to possess large stores of data—you need to know how to effectively extract value from it. Many businesses might have vast stores of data that they have accumulated over the years they have been in operation, but very few of them are able to effectively extract useful signals from that noisy data.

Without the know-how to translate data into actionable information, merely owning a large data set is of little value.

Unlike data asymmetries, which can be mitigated by making data more widely available, information asymmetries can only be addressed by radically democratizing the techniques and know-how that are necessary for extracting value from data. This know-how is largely proprietary and hard to access even in a fully competitive market. What’s more, in many instances, the computation power required far exceeds the capacity of entities for whom data analysis is not the main purpose of their business…(More)”.

Can Smartphones Help Predict Suicide?


Ellen Barry in The New York Times: “In March, Katelin Cruz left her latest psychiatric hospitalization with a familiar mix of feelings. She was, on the one hand, relieved to leave the ward, where aides took away her shoelaces and sometimes followed her into the shower to ensure that she would not harm herself.

But her life on the outside was as unsettled as ever, she said in an interview, with a stack of unpaid bills and no permanent home. It was easy to slide back into suicidal thoughts. For fragile patients, the weeks after discharge from a psychiatric facility are a notoriously difficult period, with a suicide rate around 15 times the national rate, according to one study.

This time, however, Ms. Cruz, 29, left the hospital as part of a vast research project which attempts to use advances in artificial intelligence to do something that has eluded psychiatrists for centuries: to predict who is likely to attempt suicide and when that person is likely to attempt it, and then, to intervene.

On her wrist, she wore a Fitbit programmed to track her sleep and physical activity. On her smartphone, an app was collecting data about her moods, her movement and her social interactions. Each device was providing a continuous stream of information to a team of researchers on the 12th floor of the William James Building, which houses Harvard’s psychology department.

In the field of mental health, few new areas generate as much excitement as machine learning, which uses computer algorithms to better predict human behavior. There is, at the same time, exploding interest in biosensors that can track a person’s mood in real time, factoring in music choices, social media posts, facial expression and vocal expression.

Matthew K. Nock, a Harvard psychologist who is one of the nation’s top suicide researchers, hopes to knit these technologies together into a kind of early-warning system that could be used when an at-risk patient is released from the hospital…(More)”.

Hurricane Ian Destroyed Their Homes. Algorithms Sent Them Money


Article by Chris Stokel-Walker: “The algorithms that power Skai’s damage assessments are trained by manually labeling satellite images of a couple of hundred buildings in a disaster-struck area that are known to have been damaged. The software can then, at speed, detect damaged buildings across the whole affected area. A research paper on the underlying technology presented at a 2020 academic workshop on AI for disaster response claimed the auto-generated damage assessments match those of human experts with between 85 and 98 percent accuracy.

In Florida this month, GiveDirectly sent its push notification offering $700 to any user of the Providers app with a registered address in neighborhoods of Collier, Charlotte, and Lee Counties where Google’s AI system deemed more than 50 percent of buildings had been damaged. So far, 900 people have taken up the offer, and half of those have been paid. If every recipient takes up GiveDirectly’s offer, the organization will pay out $2.4 million in direct financial aid.
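The targeting rule described here is essentially a threshold filter over model output. A hedged sketch of that logic (the neighborhood names and damage fractions below are invented for illustration; only the $700 payment and the 50 percent threshold come from the article):

```python
PAYMENT = 700      # dollars offered per eligible recipient
THRESHOLD = 0.5    # fraction of buildings the model must flag as damaged

def eligible_neighborhoods(assessments, threshold=THRESHOLD):
    """Keep neighborhoods where the AI flagged more than `threshold` of buildings as damaged."""
    return [name for name, damaged_frac in assessments if damaged_frac > threshold]

# Hypothetical damage assessments: (neighborhood, fraction flagged damaged)
assessments = [
    ("Neighborhood A", 0.82),
    ("Neighborhood B", 0.31),
    ("Neighborhood C", 0.64),
]
targets = eligible_neighborhoods(assessments)

# Committed payout so far, per the article's figure of 900 acceptances
recipients = 900
print(targets, recipients * PAYMENT)
```

The design choice worth noting is that eligibility is decided entirely upstream, by the damage model and a registered address, so no per-household claims process is needed.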

Some may be skeptical of automated disaster response. But in the chaos after an event like a hurricane making landfall, the conventional, human response can be far from perfect. Diaz points to an analysis GiveDirectly conducted looking at their work after Hurricane Harvey, which hit Texas and Louisiana in 2017, before the project with Google. Two out of the three areas that were most damaged and economically depressed were initially overlooked. A data-driven approach is “much better than what we’ll have from boots on the ground and word of mouth,” Diaz says.

GiveDirectly and Google’s hands-off, algorithm-led approach to aid distribution has been welcomed by some disaster assistance experts—with caveats. Reem Talhouk, a research fellow at Northumbria University’s School of Design and Centre for International Development in the UK, says that the system appears to offer a more efficient way of delivering aid. And it protects the dignity of recipients, who don’t have to queue up for handouts in public…(More)”.

‘Dark data’ is killing the planet – we need digital decarbonisation


Article by Tom Jackson and Ian R. Hodgkinson: “More than half of the digital data firms generate is collected, processed and stored for single-use purposes. Often, it is never re-used. This could be your multiple near-identical images held on Google Photos or iCloud, a business’s outdated spreadsheets that will never be used again, or data from internet of things sensors that have no purpose.

This “dark data” is anchored to the real world by the energy it requires. Even data that is stored and never used again takes up space on servers – typically huge banks of computers in warehouses. Those computers and those warehouses all use lots of electricity.

This is a significant energy cost that is hidden in most organisations. Maintaining an effective organisational memory is a challenge, but at what cost to the environment?

In the drive towards net zero many organisations are trying to reduce their carbon footprints. Guidance has generally centred on reducing traditional sources of carbon production, through mechanisms such as carbon offsetting via third parties (planting trees to make up for emissions from using petrol, for instance).

While most climate change activists are focused on limiting emissions from the automotive, aviation and energy industries, the processing of digital data is already comparable to these sectors and is still growing. In 2020, digitisation was purported to generate 4% of global greenhouse gas emissions. Production of digital data is increasing fast – this year the world is expected to generate 97 zettabytes (that is: 97 trillion gigabytes) of data. By 2025, it could almost double to 181 zettabytes. It is therefore surprising that little policy attention has been placed on reducing the digital carbon footprint of organisations…(More)”.

Is This the Beginning of the End of the Internet?


Article by Charlie Warzel: “…occasionally, something happens that is so blatantly and obviously misguided that trying to explain it rationally makes you sound ridiculous. Such is the case with the Fifth Circuit Court of Appeals’s recent ruling in NetChoice v. Paxton. Earlier this month, the court upheld a preposterous Texas law stating that online platforms with more than 50 million monthly active users in the United States no longer have First Amendment rights regarding their editorial decisions. Put another way, the law tells big social-media companies that they can’t moderate the content on their platforms. YouTube purging terrorist-recruitment videos? Illegal. Twitter removing a violent cell of neo-Nazis harassing people with death threats? Sorry, that’s censorship, according to Andy Oldham, a judge of the United States Court of Appeals and the former general counsel to Texas Governor Greg Abbott.

A state compelling social-media companies to host all user content without restrictions isn’t merely, as the First Amendment litigation lawyer Ken White put it on Twitter, “the most angrily incoherent First Amendment decision I think I’ve ever read.” It’s also the type of ruling that threatens to blow up the architecture of the internet. To understand why requires some expertise in First Amendment law and content-moderation policy, and a grounding in what makes the internet a truly transformational technology. So I called up some legal and tech-policy experts and asked them to explain the Fifth Circuit ruling—and its consequences—to me as if I were a precocious 5-year-old with a strange interest in jurisprudence…(More)”

Google’s new AI can hear a snippet of song—and then keep on playing


Article by Tammy Xu: “The new AI system can generate natural sounds and voices after being prompted with a few seconds of audio.

AudioLM, developed by Google researchers, produces audio that matches the style of the prompt, including complex sounds like piano music or human voices, in a way that is nearly indistinguishable from the original recording. The technique shows promise for speeding up the training of AI to generate audio, and it could eventually be used to automatically generate music to accompany videos.

AI-generated audio is already commonplace: voices on home assistants like Alexa use natural language processing. AI music systems like OpenAI’s Jukebox have produced impressive results, but most existing techniques require people to prepare transcriptions and label text-based training data, which takes considerable time and human labor. Jukebox, for example, uses text-based data to generate song lyrics.

AudioLM, described in a non-peer-reviewed paper last month, is different: it doesn’t require transcription or labeling. Instead, audio databases are fed into the program, and machine learning is used to compress the audio files into sound snippets, called “tokens,” without losing too much information. This tokenized training data is then fed into a machine-learning model that uses natural language processing to learn the patterns of the sound.

To generate audio, a few seconds of sound are fed into AudioLM, which then predicts what comes next. The process is similar to the way language models like GPT-3 predict which sentences and words typically follow one another.
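That predict-what-comes-next loop can be illustrated with a toy model over integer “audio tokens.” This is not AudioLM’s actual architecture, which uses neural language models over learned tokens; it is just a minimal bigram predictor showing the continue-the-sequence idea:

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count, for each token, which tokens most often follow it."""
    follows = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        follows[a][b] += 1
    return follows

def continue_sequence(follows, prompt, n=5):
    """Greedily extend a prompt by repeatedly predicting the likeliest next token."""
    seq = list(prompt)
    for _ in range(n):
        candidates = follows.get(seq[-1])
        if not candidates:
            break  # never saw this token during training
        seq.append(candidates.most_common(1)[0][0])
    return seq
```

Trained on a repeating token stream, the model extends a short prompt with the continuation it has seen most often; AudioLM does the analogous thing over compressed audio tokens, then decodes the extended token sequence back into sound.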

Sound clips released by the team sound quite natural. In particular, piano music generated with AudioLM sounds more fluid than piano music generated with existing AI techniques, which tends to sound chaotic…(More)”.

The Transformations of Science


Essay by Geoff Anders: “In November of 1660, at Gresham College in London, an invisible college of learned men held their first meeting after 20 years of informal collaboration. They chose their coat of arms: the royal crown’s three lions of England set against a white backdrop. Their motto: “Nullius in verba,” or “take no one’s word for it.” Three years later, they received a charter from King Charles II and became what was and remains the world’s preeminent scientific institution: the Royal Society.

Three and a half centuries later, in July of 2021, even respected publications began to grow weary of a different, now constant refrain: “Trust the science.” It was a mantra everyone was supposed to accept, repeated again and again, ad nauseam.

This new motto was the latest culmination of a series of transformations science has undergone since the founding of the Royal Society, reflecting the changing nature of science on one hand, and its expanding social role on the other. 

The present world’s preeminent system of thought now takes science as a central pillar and wields its authority to great consequence. But the story of how that came to be is, as one might expect, only barely understood…

There is no essential conflict between the state’s use of the authority of science and the health of the scientific enterprise itself. It is easy to imagine a well-funded and healthy scientific enterprise whose authority is deployed appropriately for state purposes without undermining the operation of science itself.

In practice, however, there can be a tension between state aims and scientific aims, where the state wants actionable knowledge and the imprimatur of science, often far in advance of the science getting settled. This is especially likely in response to a disruptive phenomenon that is too new for the science to have settled yet—for example, a novel pathogen with unknown transmission mechanisms and health effects.

Our recent experience of the pandemic put this tension on display, with state recommendations moving against masks, and then for masks, as the state had to make tactical decisions about a novel threat with limited information. In each case, politicians sought to adorn the recommendations with the authority of settled science; an unfortunate, if understandable, choice.

This joint partnership of science and the state is relatively new. One question worth asking is whether the development was inevitable. Science had an important flaw in its epistemic foundation, dating back to Boyle and the Royal Society—its failure to determine the proper conditions and use of scientific authority. “Nullius in verba” made some sense in 1660, before much science was settled and when the enterprise was small enough that most natural philosophers could personally observe or replicate the experiments of the others. It came to make less sense as science itself succeeded, scaled up, and acquired intellectual authority. Perhaps a better answer to the question of scientific authority would have led science to take a different course.

Turning from the past to the future, we now face the worrying prospect that the union of science and the state may have weakened science itself. Some time ago, commentators raised the specter of scientific slowdown, and more recent analysis has provided further justification for these fears. Why is science slowing? To put it simply, it may be difficult to have science be both authoritative and exploratory at the same time.

When scientists are meant to be authoritative, they’re supposed to know the answer. When they’re exploring, it’s okay if they don’t. Hence, encouraging scientists to reach authoritative conclusions prematurely may undermine their ability to explore—thereby yielding scientific slowdown. Such a dynamic may be difficult to detect, since the people who are supposed to detect it might themselves be wrapped up in a premature authoritative consensus…(More)”.

How one group of ‘fellas’ is winning the meme war in support of Ukraine


Article by Suzanne Smalley: “The North Atlantic Fella Organization, or NAFO, has arrived.

Ukraine’s Defense Ministry celebrated the group on Twitter for waging a “fierce fight” against Kremlin trolls. And Rep. Adam Kinzinger, R-Ill., tweeted that he was “self-declaring as a proud member of #NAFO” and “the #fellas shall prevail.”

The brainchild of former Marine Matt Moores, NAFO launched in May and quickly blew up on Twitter. It’s become something of a movement, drawing support in military and cybersecurity circles who circulate its meme backing Ukraine in its war against Russia.

“The power of what we’re doing is that instead of trying to come in and point-by-point refute, and argue about what’s true and what isn’t, it’s coming and saying, ‘Hey, that’s dumb,’” Moores said during a panel on Wednesday at the Center for Strategic and International Studies in Washington. “And the moment somebody’s replying to a cartoon dog online, you’ve lost if you work for the government of Russia.”

Memes have figured heavily in the information war following the Russian invasion. The Ukrainian government has proven eager to highlight memes on agency websites and officials have been known to personally thank online communities that spread anti-Russian memes. The NAFO meme shared by the defense ministry in August showed a Shiba Inu dog in a military uniform appearing to celebrate a missile launch.

The Shiba Inu has long been a motif in internet culture. According to Vice’s Motherboard, the use of Shiba Inu to represent a “fella” waging online war against the Russians dates to at least May when an artist started rewarding fellas who donated money to the Georgian Legion by creating customized fella art for online use…(More)”.