Open Secrets: Ukraine and the Next Intelligence Revolution


Article by Amy Zegart: “Russia’s invasion of Ukraine has been a watershed moment for the world of intelligence. For weeks before the shelling began, Washington publicly released a relentless stream of remarkably detailed findings about everything from Russian troop movements to false-flag attacks the Kremlin would use to justify the invasion. 

This disclosure strategy was new: spy agencies are accustomed to concealing intelligence, not revealing it. But it was very effective. By getting the truth out before Russian lies took hold, the United States was able to rally allies and quickly coordinate hard-hitting sanctions. Intelligence disclosures set Russian President Vladimir Putin on his back foot, wondering who and what in his government had been penetrated so deeply by U.S. agencies, and made it more difficult for other countries to hide behind Putin’s lies and side with Russia.

The disclosures were just the beginning. The war has ushered in a new era of intelligence sharing between Ukraine, the United States, and other allies and partners, which has helped counter false Russian narratives, defend digital systems from cyberattacks, and assist Ukrainian forces in striking Russian targets on the battlefield. And it has brought to light a profound new reality: intelligence isn’t just for government spy agencies anymore…

The explosion of open-source information online, the spread of commercial satellite capabilities, and the rise of AI are enabling all sorts of individuals and private organizations to collect, analyze, and disseminate intelligence.

In the past several years, for instance, the amateur investigators of Bellingcat—a volunteer organization that describes itself as “an intelligence agency for the people”—have made all kinds of discoveries. Bellingcat identified the Russian hit team that tried to assassinate former Russian intelligence officer Sergei Skripal in the United Kingdom and located supporters of the Islamic State (also known as ISIS) in Europe. It also proved that Russians were behind the shootdown of Malaysia Airlines Flight 17 over Ukraine.

Bellingcat is not the only civilian intelligence initiative. When the Iranian government claimed in 2020 that a small fire had broken out in an industrial shed, two U.S. researchers working independently and using nothing more than their computers and the Internet proved within hours that Tehran was lying…(More)”.

The Signal App and the Danger of Privacy at All Costs


Article by Reid Blackman: “…One should always worry when a person or an organization places one value above all. The moral fabric of our world is complex. It’s nuanced. Sensitivity to moral nuance is difficult, but unwavering support of one principle to rule them all is morally dangerous.

The way Signal wields the word “surveillance” reflects its coarse-grained understanding of morality. To the company, surveillance covers everything from a server holding encrypted data that no one looks at, to a law enforcement agent reading data after obtaining a warrant, to East Germany randomly tapping citizens’ phones. One cannot think carefully about the value of privacy — including its relative importance to other values in particular contexts — with such a broad brush.

What’s more, the company’s proposition that if anyone has access to data, then many unauthorized people probably will have access to that data is false. This response reflects a lack of faith in good governance, which is essential to any well-functioning organization or community seeking to keep its members and society at large safe from bad actors. There are some people who have access to the nuclear launch codes, but “Mission Impossible” movies aside, we’re not particularly worried about a slippery slope leading to lots of unauthorized people having access to those codes.

I am drawing attention to Signal, but there’s a bigger issue here: Small groups of technologists are developing and deploying applications of their technologies for explicitly ideological reasons, with those ideologies baked into the technologies. To use those technologies is to use a tool that comes with an ethical or political bent.

Signal is pushing against businesses like Meta that turn users of their social media platforms into the product by selling user data. But Signal embeds within itself a rather extreme conception of privacy, and scaling its technology is scaling its ideology. Signal’s users may not be the product, but they are the witting or unwitting advocates of the moral views of the 40 or so people who operate Signal.

There’s something somewhat sneaky in all this (though I don’t think the owners of Signal intend to be sneaky). Usually advocates know that they’re advocates. They engage in some level of deliberation and reach the conclusion that a set of beliefs is for them…(More)”.

Data drives media coverage of climate refugees


Case study by Sherry Ricchiardi: “Data has become a springboard for journalists on the frontlines of the climate refugee crisis. It points them to weather emergencies in hot zones like South Asia and Central America and to humans facing misery and despair.

Jorge A., a Guatemalan farmer, lost his corn crop to floods. He planted okra, but a drought killed it off. He feared if he didn’t get his family out, they, too, might die.

Jorge’s story was told in gripping detail in a data-driven investigation by ProPublica in partnership with The New York Times Magazine, exploring how changes in population patterns could lead to catastrophe. The piece, “The Great Climate Migration Has Begun,” presented as a visual essay, cited scenarios of how this crisis might play out.

The joint venture, supported by the Pulitzer Center, had an overarching strategy: to model, for the first time, how climate refugees might move across international borders. The modeling informed the journalists’ findings and “possible general pathways for the future.”

“Should the flight away from hot climates reach the scale that current research suggests is likely, it will amount to a vast remapping of the world’s population,” wrote ProPublica’s Abrahm Lustgarten, lead author for the 2020 series…

Journalists have taken a stand on how they cover the climate beat. Their view of what constitutes a “balanced news report” has shifted from “he said, she said” objectivity toward a “weight of evidence” approach. Mainstream media are giving climate skeptics less airtime, and for good reason.

Researchers had long raised concerns that the media distorted the scientific consensus on climate change through “false balance” reporting, or “bothsidesism,” which gives climate deniers too much say. Research by Northwestern University psychology professor David Rapp sheds light on the controversy.

In the co-authored study, Rapp and his colleagues conducted experiments to test how people respond when two views about climate change are presented as equally valid, even though one side is based on scientific consensus and the other on denial. Among the conclusions: “When both sides of an argument are presented, people tend to have lower estimates about scientific consensus and seem to be less likely to believe climate change is something to worry about.” A campus publication touted, “Northwestern research finds ‘bothsidesism’ in journalism undermines science.”…(More)”.

How the algorithm tipped the balance in Ukraine


David Ignatius at The Washington Post: “Two Ukrainian military officers peer at a laptop computer operated by a Ukrainian technician using software provided by the American technology company Palantir. On the screen are detailed digital maps of the battlefield at Bakhmut in eastern Ukraine, overlaid with other targeting intelligence — most of it obtained from commercial satellites.

As we lean closer, we can see jagged trenches on the Bakhmut front, where Russian and Ukrainian forces are separated by a few hundred yards in one of the bloodiest battles of the war. A click of the computer mouse displays thermal images of Russian and Ukrainian artillery fire; another click shows a Russian tank marked with a “Z,” seen through a picket fence, an image uploaded by a Ukrainian spy on the ground.

If this were a working combat operations center, rather than a demonstration for a visiting journalist, the Ukrainian officers could use a targeting program to select a missile, artillery piece or armed drone to attack the Russian positions displayed on the screen. Then drones could confirm the strike, and a damage assessment would be fed back into the system.
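The workflow described here is, at bottom, a sense-decide-strike-assess feedback loop. Below is a purely notional sketch of that loop in Python; every name, class, and decision rule is invented for illustration, and none of it reflects Palantir’s actual software.

```python
# Notional sketch of the sense -> decide -> strike -> assess loop described
# above. All names and rules are invented for illustration; this models the
# flow of information only, not any real targeting system's interface.
from dataclasses import dataclass

@dataclass
class Target:
    grid: str   # map coordinate from fused satellite and ground intelligence
    kind: str   # e.g. "artillery" or "tank"

def select_weapon(target: Target) -> str:
    """Toy decision rule pairing a weapon with a target type."""
    options = {"artillery": "counter-battery fire", "tank": "armed drone"}
    return options.get(target.kind, "hold fire")

def battle_damage_assessment(target: Target) -> bool:
    """Stand-in for the drone imagery check fed back into the system."""
    return True  # this sketch simply assumes the strike is confirmed

# One pass through the loop: detect, decide, strike, reassess.
target_list = [Target("grid-1234", "tank"), Target("grid-5678", "artillery")]
for t in list(target_list):
    print(f"engaging {t.kind} at {t.grid} with {select_weapon(t)}")
    if battle_damage_assessment(t):
        target_list.remove(t)  # the assessment updates the shared picture
print(f"targets remaining: {len(target_list)}")
```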

This is the “wizard war” in the Ukraine conflict — a secret digital campaign that has never been reported before in detail — and it’s a big reason David is beating Goliath here. The Ukrainians are fusing their courageous fighting spirit with the most advanced intelligence and battle-management software ever seen in combat.

“Tenacity, will and harnessing the latest technology give the Ukrainians a decisive advantage,” Gen. Mark A. Milley, chairman of the Joint Chiefs of Staff, told me last week. “We are witnessing the ways wars will be fought, and won, for years to come.”

I think Milley is right about the transformational effect of technology on the Ukraine battlefield. And for me, here’s the bottom line: With these systems aiding brave Ukrainian troops, the Russians probably cannot win this war…(More)” See also Part 2.

We need data infrastructure as well as data sharing – conflicts of interest in video game research


Article by David Zendle & Heather Wardle: “Industry data sharing has the potential to revolutionise evidence on video gaming and mental health, as well as a host of other critical topics. However, collaborative data sharing agreements between academics and industry partners may also afford industry enormous power in steering the development of this evidence base. In this paper, we outline how nonfinancial conflicts of interest may emerge when industry share data with academics. We then go on to describe ways in which such conflicts may affect the quality of the evidence base. Finally, we suggest strategies for mitigating this impact and preserving research independence. We focus on the development of data infrastructure: technological, social, and educational architecture that facilitates unfettered and free access to the kinds of high-quality data that industry hold, but without industry involvement…(More)”.

Public sector innovation has a “first mile” problem


Article by Catarina Tully and Giulio Quaggiotto: “Even if progress has been uneven, the palette of innovation approaches adopted by the public sector has considerably expanded in the last few years: from new sources of data to behavioural science, from foresight to user-centred design, from digital transformation to systems thinking. And yet, the frustration of many innovation champions within government is palpable. We are all familiar with innovation graveyards and, in our learning journeys, have probably contributed to them in spite of the best intentions:

  • Dashboards that look very “smart” and are carefully tended by a few specialists but never used by their intended audience: decision-makers.
  • Prototypes or experiments that were developed by an innovation unit and meant to be handed over to a line ministry or city department but never were.
  • Beautifully crafted scenarios and horizon-scanning reports that last the length of a press conference or a ribbon-cutting event and are quickly shelved afterward.

The list could go on and on.

Innovation theatre is a well-known malaise (paraphrasing Sean McDonald: “the use of [technology] interventions that make people feel as if a government—and, more often, a specific group of political leaders—is solving a problem, without it doing anything to actually solve that problem.”)

In the current climate, the pressure to “scale” quick fixes in the face of multiple crises (as opposed to the hard work of addressing root causes, building trust, and pursuing structural transformation) is only increasing the appetite for performative theatre. Eventually, public intrapreneurs learn to use the theatre to their advantage: let the photo op with the technology gadget or the “futuristic” scenario take centre stage so as to create goodwill with the powers that be, while you work quietly backstage to do the “right” thing…(More)”.

How to spot AI-generated text


Article by Melissa Heikkilä: “This sentence was written by an AI—or was it? OpenAI’s new chatbot, ChatGPT, presents us with a problem: How will we know whether what we read online is written by a human or a machine?

Since it was released in late November, ChatGPT has been used by over a million people. It has the AI community enthralled, and it is clear the internet is increasingly being flooded with AI-generated text. People are using it to come up with jokes, write children’s stories, and craft better emails. 

ChatGPT is OpenAI’s spin-off of its large language model GPT-3, which generates remarkably human-sounding answers to questions that it’s asked. The magic—and danger—of these large language models lies in the illusion of correctness. The sentences they produce look right—they use the right kinds of words in the correct order. But the AI doesn’t know what any of it means. These models work by predicting the most likely next word in a sentence. They haven’t a clue whether something is true or false, and they confidently present information as true even when it is not.
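As a concrete illustration of that next-word loop, here is a minimal sketch in Python; the hand-built bigram table is a toy stand-in for the learned probabilities of a neural network.

```python
# Minimal sketch of next-word prediction, the core loop behind large
# language models. Real models use neural networks over subword tokens;
# this toy uses a hand-built bigram table purely for illustration.
import random

# Toy "model": for each word, plausible next words and their weights.
BIGRAMS = {
    "the":    {"cat": 0.5, "dog": 0.3, "answer": 0.2},
    "cat":    {"sat": 0.7, "ran": 0.3},
    "dog":    {"ran": 0.6, "sat": 0.4},
    "sat":    {"down": 1.0},
    "ran":    {"away": 1.0},
    "answer": {"is": 1.0},
}

def generate(start: str, max_words: int = 6) -> str:
    """Repeatedly sample a likely next word, as an LLM samples tokens."""
    words = [start]
    while len(words) < max_words and words[-1] in BIGRAMS:
        options = BIGRAMS[words[-1]]
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat down"
```

Nothing in the table encodes whether “the cat sat down” is true; the sketch, like the models it caricatures, tracks only which words tend to follow which.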

In an already polarized, politically fraught online world, these AI tools could further distort the information we consume. If they are rolled out into the real world in real products, the consequences could be devastating. 

We’re in desperate need of ways to differentiate between human- and AI-written text in order to counter potential misuses of the technology, says Irene Solaiman, policy director at AI startup Hugging Face, who used to be an AI researcher at OpenAI and studied AI output detection for the release of GPT-3’s predecessor GPT-2. 
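One family of techniques from that line of research relies on a statistical tell: text sampled from a language model tends to look more predictable to a language model than human prose does. Here is a hedged sketch of that idea using the openly available GPT-2 model through Hugging Face’s transformers library; the threshold is an invented placeholder, and real detectors are considerably more sophisticated yet still unreliable.

```python
# Sketch of a perplexity heuristic for flagging machine-written text.
# Idea: score a passage under an open language model; unusually low
# perplexity (the model finds the text very predictable) is one weak
# signal of machine generation. A heuristic, not a real detector.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

# The cutoff of 40 is an arbitrary illustration, not a validated value.
sample = "The quick brown fox jumps over the lazy dog."
if perplexity(sample) < 40:
    print("suspiciously predictable - possibly machine-generated")
else:
    print("no statistical red flag from this crude test")
```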

New tools will also be crucial to enforcing bans on AI-generated text and code, like the one recently announced by Stack Overflow, a website where coders can ask for help. ChatGPT can confidently regurgitate answers to software problems, but it’s not foolproof. Getting code wrong can lead to buggy and broken software, which is expensive and potentially chaotic to fix…(More)”.

How AI That Powers Chatbots and Search Queries Could Discover New Drugs


Karen Hao at The Wall Street Journal: “In their search for new disease-fighting medicines, drug makers have long employed a laborious trial-and-error process to identify the right compounds. But what if artificial intelligence could predict the makeup of a new drug molecule the way Google figures out what you’re searching for, or email programs anticipate your replies—like “Got it, thanks”?

That’s the aim of a new approach that uses an AI technique known as natural language processing—the same technology that enables OpenAI’s ChatGPT to generate human-like responses—to analyze and synthesize proteins, which are the building blocks of life and of many drugs. The approach exploits the fact that biological codes have something in common with search queries and email texts: Both are represented by a series of letters.

Proteins are made up of dozens to thousands of small chemical subunits known as amino acids, and scientists use special notation to document the sequences. With each amino acid corresponding to a single letter of the alphabet, proteins are represented as long, sentence-like combinations.

Natural language algorithms, which quickly analyze language and predict the next step in a conversation, can also be applied to this biological data to create protein-language models. The models encode what might be called the grammar of proteins—the rules that govern which amino acid combinations yield specific therapeutic properties—to predict the sequences of letters that could become the basis of new drug molecules. As a result, the time required for the early stages of drug discovery could shrink from years to months.
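As a rough, concrete illustration of the analogy, here is a toy sketch in Python; the protein fragments and the counts-based scoring are invented stand-ins for the deep networks and massive sequence databases that real protein-language models rely on.

```python
# Toy illustration of treating proteins as text. Each amino acid is one
# letter (e.g., M = methionine, K = lysine), so a protein reads like a
# sentence. The counts-based "model" below is a stand-in for a trained
# neural network; the fragments are invented for this example.
from collections import Counter, defaultdict

corpus = ["MKTAYIAK", "MKTAYLAK", "MKSAYIAK", "MKTVYIAK"]  # made-up data

# "Train": count which amino acid tends to follow each one.
follow = defaultdict(Counter)
for seq in corpus:
    for a, b in zip(seq, seq[1:]):
        follow[a][b] += 1

def score(seq: str) -> float:
    """Crude likelihood: fraction of adjacent pairs seen in training."""
    pairs = list(zip(seq, seq[1:]))
    return sum(1 for a, b in pairs if follow[a][b] > 0) / len(pairs)

# A sequence that follows the learned "grammar" scores higher than a
# scrambled one - the intuition behind generating plausible new proteins.
print(score("MKTAYIAK"), score("KAYTIAKM"))  # 1.0 vs roughly 0.43
```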

“Nature has provided us with tons of examples of proteins that have been designed exquisitely with a variety of functions,” says Ali Madani, founder of ProFluent Bio, a Berkeley, Calif.-based startup focused on language-based protein design. “We’re learning the blueprint from nature.”…(More)”.

Storytelling Will Save the Earth


Article by Bella Lack: “…The environmental crisis is one of overconsumption, carbon emissions, and corporate greed. But it’s also a crisis of miscommunication. For too long, hard data buried environmentalists in an echo chamber, but in 2023, storytelling will finally enable a united global response to the environmental crisis. As this crisis worsens, we will stop communicating the climate crisis with facts and stats—instead we will use stories like Timothy’s.

Unlike numbers or facts, stories can trigger an emotional response, harnessing the power of motivation, imagination, and personal values, which drive the most powerful and permanent forms of social change. For instance, in 2019, we all saw the images of Notre Dame cathedral erupting in flames. Three minutes after the fire began, images of the incident were being broadcast globally, eliciting an immediate response from world leaders. That same year, the Amazon forest also burned, spewing smoke that spread over 2,000 miles and burning more than one and a half football fields’ worth of rain forest every minute of every day—it took three weeks for the mainstream media to report that story. Why did the burning of Notre Dame warrant such rapid responses globally, when the Amazon fires did not? Although it is just a beautiful assortment of limestone, lead, and wood, we attach personal significance to Notre Dame, because it has a story we know and can relate to. That is what propelled people to react to it, while the fact that the Amazon was on fire elicited nothing…(More)”.

The Risks of Empowering “Citizen Data Scientists”


Article by Reid Blackman and Tamara Sipes: “New tools are enabling organizations to invite and leverage non-data scientists — say, domain data experts, team members very familiar with the business processes, or heads of various business units — to propel their AI efforts. There are advantages to empowering these internal “citizen data scientists,” but also risks. Organizations considering implementing these tools should take five steps: 1) provide ongoing education, 2) provide visibility into similar use cases throughout the organization, 3) create an expert mentor program, 4) have all projects verified by AI experts, and 5) provide resources for inspiration outside your organization…(More)”.