To measure social impact, we could start by using the tools we already have


Article by Shamina Singh: “…To measure social impact, we could start by using the tools we already have.

In the environmental context, companies have adopted the Greenhouse Gas (GHG) Protocol, which tracks the full spectrum of a company’s carbon emissions. The first scope accounts for direct emissions from its operations, the second relates to indirect emissions from energy purchased by the company, and the third tracks indirect emissions from a company’s entire value chain.

At the Center for Inclusive Growth, we have been thinking about how to capture social impact in a similarly methodical way. Just as the environmental framework is tied to the level of control over the source of emissions, we could account for the level of control in social impact. I’ll offer up the following framework to show how our team is thinking about this challenge, in the hope of sparking a dialogue around a conceptual starting point.

The first scope could cover each company’s approach toward its own employees, since companies have a direct influence on this stakeholder group through workplace investments, programs, and corporate culture. This category could assess pay equity, diversity within leadership ranks, talent development and career progression for underrepresented groups, labor standards, and more. Many companies already track these metrics.

Then, the second scope could look at how companies leverage their core competencies, deploy their products and services and work within their supply chains to help address societal challenges. Companies have skills, technologies, and capital that can create widespread social benefits, and many are already leading the way. The activity in this second category involves stakeholders at a level of control that is less direct than the first, such as customers and suppliers.

Finally, philanthropic giving, volunteering, and other community investments would comprise the third scope. This level of control is distinct from the second scope because company resources are entrusted to other entities that make decisions about how they’re spent. These efforts, while indirect, can strengthen a company’s brand and reputation, cultivate innovation and opportunity, and generate significant societal value.
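The three proposed scopes above can be captured in a simple data model. This is a hypothetical illustration paraphrasing the article, not an official standard; the labels and metric names are mine:

```python
from dataclasses import dataclass


@dataclass
class Scope:
    level: int
    control: str              # how direct the company's control is
    stakeholders: list        # who the scope primarily involves
    example_metrics: list     # illustrative measures drawn from the article


# Illustrative encoding of the three proposed social-impact scopes.
SOCIAL_SCOPES = [
    Scope(1, "direct", ["employees"],
          ["pay equity", "leadership diversity", "labor standards"]),
    Scope(2, "indirect via core business", ["customers", "suppliers"],
          ["products and services addressing societal challenges"]),
    Scope(3, "indirect via third parties", ["communities"],
          ["philanthropic giving", "volunteering", "community investment"]),
]

for scope in SOCIAL_SCOPES:
    print(scope.level, scope.control)
```

The point of the encoding is the parallel with the GHG Protocol: each scope is distinguished by its level of control, not by the size of the investment.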

From there, it’s about measuring the outputs of our investments in all three scopes. A system of accountability for follow-through is vital because when it comes to improving people’s lives, communities, and futures, outcomes matter, not just effort.

There is so much good work happening in the social impact space, but much more work to be done to measure it. To incentivize continued progress, we have to start quantifying the impact, even if the best way to do that looks different across companies or industries…(More)”

The Worst People Run for Office. It’s Time for a Better Way.


Article by Adam Grant: “On the eve of the first debate of the 2024 presidential race, trust in government is rivaling historic lows. Officials have been working hard to safeguard elections and assure citizens of their integrity. But if we want public office to have integrity, we might be better off eliminating elections altogether.

If you think that sounds anti-democratic, think again. The ancient Greeks invented democracy, and in Athens many government officials were selected through sortition — a random lottery from a pool of candidates. In the United States, we already use a version of a lottery to select jurors. What if we did the same with mayors, governors, legislators, justices and even presidents?

People expect leaders chosen at random to be less effective than those picked systematically. But in multiple experiments led by the psychologist Alexander Haslam, the opposite held true. Groups actually made smarter decisions when leaders were chosen at random than when they were elected by a group or chosen based on leadership skill.

Why were randomly chosen leaders more effective? They led more democratically. “Systematically selected leaders can undermine group goals,” Dr. Haslam and his colleagues suggest, because they have a tendency to “assert their personal superiority.” When you’re anointed by the group, it can quickly go to your head: I’m the chosen one.

When you know you’re picked at random, you don’t experience enough power to be corrupted by it. Instead, you feel a heightened sense of responsibility: I did nothing to earn this, so I need to make sure I represent the group well. And in one of the Haslam experiments, when a leader was picked at random, members were more likely to stand by the group’s decisions.

Over the past year I’ve floated the idea of sortition with a number of current members of Congress. Their immediate concern is ability: How do we make sure that citizens chosen randomly are capable of governing?

In ancient Athens, people had a choice about whether to participate in the lottery. They also had to pass an examination of their capacity to exercise public rights and duties. In America, imagine that anyone who wants to enter the pool has to pass a civics test — the same standard as immigrants applying for citizenship. We might wind up with leaders who understand the Constitution…(More)”.
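The selection mechanism described above — a random draw from a pool of volunteers who pass an eligibility test — can be sketched in a few lines. The names and the civics-test check are hypothetical stand-ins, not anything proposed in the article:

```python
import random


def sortition(pool, passed_civics_test, seed=None):
    """Select one officeholder at random from qualified volunteers.

    `pool` lists everyone who opted in; `passed_civics_test` is the
    subset who met the eligibility bar.
    """
    rng = random.Random(seed)  # seeded only to make the example repeatable
    eligible = [person for person in pool if person in passed_civics_test]
    if not eligible:
        raise ValueError("no eligible candidates in the pool")
    return rng.choice(eligible)


# Example: only volunteers who passed the test can be drawn.
pool = ["Ada", "Ben", "Cleo", "Dev"]
qualified = {"Ada", "Cleo"}
print(sortition(pool, qualified, seed=42))
```

Jury selection in the United States already works roughly this way: random draw first, then screening for eligibility.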

The Early History of Counting


Essay by Keith Houston: “Figuring out when humans began to count systematically, with purpose, is not easy. Our first real clues are a handful of curious, carved bones dating from the final few millennia of the three-million-year expanse of the Old Stone Age, or Paleolithic era. Those bones are humanity’s first pocket calculators: For the prehistoric humans who carved them, they were mathematical notebooks and counting aids rolled into one. For the anthropologists who unearthed them thousands of years later, they were proof that our ability to count had manifested itself no later than 40,000 years ago.

In 1973, while excavating a cave in the Lebombo Mountains, near South Africa’s border with Swaziland, Peter Beaumont found a small, broken bone with twenty-nine notches carved across it. The so-called Border Cave had been known to archaeologists since 1934, but the discovery during World War II of skeletal remains dating to the Middle Stone Age heralded a site of rare importance. It was not until Beaumont’s dig in the 1970s, however, that the cave gave up its most significant treasure: the earliest known tally stick, in the form of a notched, three-inch-long baboon fibula.

On the face of it, the numerical instrument known as the tally stick is exceedingly mundane. Used since before recorded history—still used, in fact, by some cultures—to mark the passing days, or to account for goods or monies given or received, most tally sticks are no more than wooden rods incised with notches along their length. They help their users to count, to remember, and to transfer ownership. All of which is reminiscent of writing, except that writing did not arrive until a scant 5,000 years ago—and so, when the Lebombo bone was determined to be some 42,000 years old, it instantly became one of the most intriguing archaeological artifacts ever found. Not only does it put a date on when Homo sapiens started counting, it also marks the point at which we began to delegate our memories to external devices, thereby unburdening our minds so that they might be used for something else instead. Writing in 1776, the German historian Justus Möser knew nothing of the Lebombo bone, but his musings on tally sticks in general are strikingly apposite:

The notched tally stick itself testifies to the intelligence of our ancestors. No invention is simpler and yet more significant than this…(More)”.

What if You Knew What You Were Missing on Social Media?


Article by Julia Angwin: “Social media can feel like a giant newsstand, with more choices than any newsstand ever. It contains news not only from journalism outlets, but also from your grandma, your friends, celebrities and people in countries you have never visited. It is a bountiful feast.

But so often you don’t get to pick from the buffet. On most social media platforms, algorithms use your behavior to home in on the posts you are shown. If you send a celebrity’s post to a friend but breeze past your grandma’s, the algorithm may display more posts like the celebrity’s in your feed. Even when you choose which accounts to follow, the algorithm still decides which posts to show you and which to bury.

There are a lot of problems with this model. There is the possibility of being trapped in filter bubbles, where we see only news that confirms our existing beliefs. There are rabbit holes, where algorithms can push people toward more extreme content. And there are engagement-driven algorithms that often reward content that is outrageous or horrifying.

Yet not one of those problems is as damaging as the problem of who controls the algorithms. Never has the power to control public discourse been so completely in the hands of a few profit-seeking corporations with no requirements to serve the public good.

Elon Musk’s takeover of Twitter, which he renamed X, has shown what can happen when an individual pushes a political agenda by controlling a social media company.

Since Mr. Musk bought the platform, he has repeatedly declared that he wants to defeat the “woke mind virus” — which he has struggled to define but largely seems to mean Democratic and progressive policies. He has reinstated accounts that were banned because of the white supremacist and antisemitic views they espoused. He has banned journalists and activists. He has promoted far-right figures such as Tucker Carlson and Andrew Tate, who were kicked off other platforms. He has changed the rules so that users can pay to have some posts boosted by the algorithm, and has purportedly changed the algorithm to boost his own posts. The result, as Charlie Warzel said in The Atlantic, is that the platform is now a “far-right social network” that “advances the interests, prejudices and conspiracy theories of the right wing of American politics.”

The Twitter takeover has been a public reckoning with algorithmic control, but any tech company could do something similar. To prevent those who would hijack algorithms for power, we need a pro-choice movement for algorithms. We, the users, should be able to decide what we read at the newsstand…(More)”.

An AI Model Tested In The Ukraine War Is Helping Assess Damage From The Hawaii Wildfires


Article by Irene Benedicto: “On August 7, 2023, the day before the Maui wildfires started in Hawaii, a constellation of earth-observing satellites took multiple pictures of the island at noon, local time. Everything was quiet, still. The next day, at the same time, the same satellites captured images of fires consuming the island. Planet, a San Francisco-based company that owns the largest fleet of satellites taking pictures of the Earth daily, provided this raw imagery to Microsoft engineers, who used it to train an AI model designed to analyze the impact of disasters. Comparing photographs from before and after the fire, the AI model created maps that highlighted the most devastated areas of the island.
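The before-and-after comparison at the heart of this workflow can be illustrated with a toy change-detection sketch. The real system uses a trained model on full satellite imagery; the fixed brightness threshold and tiny grids here are purely illustrative assumptions:

```python
def damage_mask(before, after, threshold=0.3):
    """Flag cells whose brightness changed sharply between satellite passes.

    `before` and `after` are same-sized grids of brightness values in [0, 1].
    A cell is flagged when its absolute change exceeds `threshold` — a crude
    stand-in for the learned damage classifier described in the article.
    """
    return [[abs(a - b) > threshold for b, a in zip(brow, arow)]
            for brow, arow in zip(before, after)]


before = [[0.9, 0.9], [0.8, 0.9]]  # intact, bright rooftops
after = [[0.9, 0.1], [0.2, 0.9]]   # burned areas read dark
mask = damage_mask(before, after)
print(sum(cell for row in mask for cell in row))  # 2 cells flagged as changed
```

Mapping the flagged cells back onto geography is what lets responders rank neighborhoods by severity of change.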

With this information, the Red Cross rearranged its work in the field that same day to respond to the most urgent priorities first, helping evacuate thousands of people who had been affected by one of the deadliest fires in over a century. The Hawaii wildfires have already killed over a hundred people, a hundred more remain missing and at least 11,000 people have been displaced. The relief efforts are ongoing 10 days after the start of the fire, which burned over 3,200 acres. Hawaii Governor Josh Green estimated the recovery efforts could cost $6 billion.

Planet and Microsoft AI were able to pull and analyze the satellite imagery so quickly because they’d struggled to do so the last time they deployed their system: during the Ukraine war. The successful response in Maui is the result of a year and a half of building a new AI tool that corrected fundamental flaws in the previous system, which didn’t accurately recognize collapsed buildings against a background of concrete.

“When Ukraine happened, all the AI models failed miserably,” Juan Lavista, chief scientist at Microsoft AI, told Forbes.

The problem was that the company’s previous AI models were mainly trained with natural disasters in the U.S. and Africa. But devastation doesn’t look the same when it is caused by war and in an Eastern European city. “We learned that having one single model that would adapt to every single place on earth was likely impossible,” Lavista said…(More)”.

No app, no entry: How the digital world is failing the non tech-savvy


Article by Andrew Anthony: “Whatever the word is for the opposite of heartwarming, it certainly applies to the story of Ruth and Peter Jaffe. The elderly couple from Ealing, west London, made headlines last week after being charged £110 by Ryanair for printing out their tickets at Stansted airport.

Even allowing for the exorbitant cost of inkjet printer ink, 55 quid for each sheet of paper is a shockingly creative example of punitive pricing.

The Jaffes, aged 79 and 80, said they had become confused on the Ryanair website and accidentally printed out their return tickets instead of their outbound ones to Bergerac. It was the kind of error anyone could make, although octogenarians, many of whom struggle with the tech demands of digitalisation, are far more likely to make it.

But as the company explained in a characteristically charmless justification of the charge: “We regret that these passengers ignored their email reminder and failed to check-in online.”…

The shiny, bright future of full computerisation looks very much like a dystopia to someone who either doesn’t understand it or lacks the means to access it. And almost by definition, the people who can’t access the digitalised world are seldom visible, because absence is not easy to see. What is apparent is that improved efficiency doesn’t necessarily lead to greater wellbeing.

From a technological and economic perspective, the case for removing railway station ticket offices is hard to refute. A public consultation process is under way by train operators, who present the proposed closures as a means of bringing “station staff closer to customers”.

The RMT union, by contrast, believes it’s a means of bringing the staff closer to unemployment and has mounted a campaign heralding the good work done by ticket offices across the network. Whatever the truth, human interaction is in danger of being undervalued in the digital landscape…(More)”.

The Urgent Need to Reimagine Data Consent


Article by Stefaan G. Verhulst, Laura Sandor & Julia Stamm: “Given the significant benefits that can arise from the use and reuse of data to tackle contemporary challenges such as migration, it is worth exploring new approaches to collect and utilize data that empower individuals and communities, granting them the ability to determine how their data can be utilized for various personal, community, and societal causes. This need is not specific to migrants alone. It applies to various regions, populations, and fields, ranging from public health and education to urban mobility. There is a pressing demand to involve communities, often already vulnerable, to establish responsible access to their data that aligns with their expectations, while simultaneously serving the greater public good.

We believe the answer lies through a reimagination of the concept of consent. Traditionally, consent has been the tool of choice to secure agency and individual rights, but that concept, we would suggest, is no longer sufficient for today’s era of datafication. Instead, we should strive to establish a new standard of social license. Here, we’ll define what we mean by a social license and outline some of the limitations of consent (as it is typically defined and practiced today). Then we’ll describe one possible means of securing social license—through participatory decision-making…(More)”.

Should Computers Decide How Much Things Cost?


Article by Colin Horgan: “In the summer of 2012, the Wall Street Journal reported that the travel booking website Orbitz had, in some cases, been suggesting to Apple users hotel rooms that cost more per night than those it was showing to Windows users. The company found that people who used Mac computers spent as much as 30 percent more a night on hotels. It was one of the first high-profile instances where the predictive capabilities of algorithms were shown to impact consumer-facing prices.

Since then, the pool of data available to corporations about each of us (the information we’ve either volunteered or that can be inferred from our web browsing and buying histories) has expanded significantly, helping companies build ever more precise purchaser profiles. Personalized pricing is now widespread, even if many consumers are only just realizing what it is. Recently, other algorithm-driven pricing models, like Uber’s surge or Ticketmaster’s dynamic pricing for concerts, have surprised users and fans. In the past few months, dynamic pricing—which is based on factors such as quantity—has pushed up prices of some concert tickets even before they hit the resale market, including for artists like Drake and Taylor Swift. And while personalized pricing is slightly different, these examples of computer-driven pricing have spawned headlines and social media posts that reflect a growing frustration with data’s role in how prices are dictated.
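The quantity-driven dynamic pricing described above can be sketched as a toy surge model in which the price multiplier rises with how much of the available supply has been claimed. This is a simplified illustration of the general idea, with made-up parameters; real platforms use far more inputs and do not disclose their formulas:

```python
def dynamic_price(base_price, demand, capacity, surge_cap=3.0):
    """Toy quantity-based pricing: price climbs as remaining supply shrinks.

    `demand` is units requested, `capacity` is units available, and
    `surge_cap` caps the multiplier (all hypothetical parameters).
    """
    utilization = min(demand / capacity, 1.0)
    multiplier = 1.0 + (surge_cap - 1.0) * utilization
    return round(base_price * multiplier, 2)


print(dynamic_price(100, 20, 100))   # light demand  -> 140.0
print(dynamic_price(100, 95, 100))   # near sellout  -> 290.0
```

Personalized pricing differs in that the multiplier would depend on a profile of the buyer rather than on scarcity, which is precisely what makes it harder for consumers to observe.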

The marketplace is said to be a realm of assumed fairness, dictated by the rules of competition, an objective environment where one consumer is the same as any other. But this idea is being undermined by the same opaque and confusing programmatic data profiling that’s slowly encroaching on other parts of our lives—the algorithms. The Canadian government is currently considering new consumer-protection regulations, including what to do to control algorithm-based pricing. While strict market regulation is considered by some to be a political risk, another solution may exist—not at the point of sale but at the point where our data is gathered in the first place.

In theory, pricing algorithms aren’t necessarily bad…(More)”.

The Case Against AI Everything, Everywhere, All at Once


Essay by Judy Estrin: “The very fact that the evolution of technology feels so inevitable is evidence of an act of manipulation, an authoritarian use of narrative brilliantly described by historian Timothy Snyder. He calls out the politics of inevitability “...a sense that the future is just more of the present, … that there are no alternatives, and therefore nothing really to be done.” There is no discussion of underlying values. Facts that don’t fit the narrative are disregarded.

Here in Silicon Valley, this top-down authoritarian technique is amplified by a bottom-up culture of inevitability. An orchestrated frenzy begins when the next big thing to fuel the Valley’s economic and innovation ecosystem is heralded by companies, investors, media, and influencers.

They surround us with language coopted from common values—democratization, creativity, open, safe. In behavioral psych classes, product designers are taught to eliminate friction—removing any resistance to acting on impulse.

The promise of short-term efficiency, convenience, and productivity lures us. Any semblance of pushback is decried as ignorance, or a threat to global competition. No one wants to be called a Luddite. Tech leaders, seeking to look concerned about the public interest, call for limited, friendly regulation, and the process moves forward until the tech is fully enmeshed in our society.

We bought into this narrative before, when social media, smartphones and cloud computing came on the scene. We didn’t question whether the only way to build community, find like-minded people, or be heard, was through one enormous “town square,” rife with behavioral manipulation, pernicious algorithmic feeds, amplification of pre-existing bias, and the pursuit of likes and follows.

It’s now obvious that it was a path towards polarization, toxicity of conversation, and societal disruption. Big Tech was the main beneficiary as industries and institutions jumped on board, accelerating their own disruption, and civic leaders were focused on how to use these new tools to grow their brands and not on helping us understand the risks.

We are at the same juncture now with AI…(More)”.

Changing Facebook’s algorithm won’t fix polarization, new study finds


Article by Naomi Nix, Carolyn Y. Johnson, and Cat Zakrzewski: “For years, regulators and activists have worried that social media companies’ algorithms were dividing the United States with politically toxic posts and conspiracies. The concern was so widespread that in 2020, Meta flung open troves of internal data for university academics to study how Facebook and Instagram would affect the upcoming presidential election.

The first results of that research show that the company’s platforms play a critical role in funneling users to partisan information with which they are likely to agree. But the results cast doubt on assumptions that the strategies Meta could use to discourage virality and engagement on its social networks would substantially affect people’s political beliefs.

“Algorithms are extremely influential in terms of what people see on the platform, and in terms of shaping their on-platform experience,” Joshua Tucker, co-director of the Center for Social Media and Politics at New York University and one of the leaders on the research project, said in an interview.

“Despite the fact that we find this big impact in people’s on-platform experience, we find very little impact in changes to people’s attitudes about politics and even people’s self-reported participation around politics.”

The first four studies, which were released on Thursday in the journals Science and Nature, are the result of a unique partnership between university researchers and Meta’s own analysts to study how social media affects political polarization and people’s understanding and opinions about news, government and democracy. The researchers, who relied on Meta for data and the ability to run experiments, analyzed those issues during the run-up to the 2020 election. The studies were peer-reviewed before publication, a standard procedure in science in which papers are sent out to other experts in the field who assess the work’s merit.

As part of the project, researchers altered the feeds of thousands of people using Facebook and Instagram in fall of 2020 to see if that could change political beliefs, knowledge or polarization by exposing them to different information than they might normally have received. The researchers generally concluded that such changes had little impact.

The collaboration, which is expected to produce over a dozen studies, also will examine data collected after the Jan. 6, 2021, attack on the U.S. Capitol, Tucker said…(More)”.