Surveilling Alone


Essay by Christine Rosen: “When Jane Jacobs, author of the 1961 classic The Death and Life of Great American Cities, outlined the qualities of successful neighborhoods, she included “eyes on the street,” or, as she described it, the “eyes belonging to those we might call the natural proprietors of the street,” including shopkeepers and residents going about their daily routines. Not every neighborhood enjoyed the benefit of this informal sense of community, of course, but it was widely seen to be desirable. What Jacobs understood is that the combined impact of many local people practicing normal levels of awareness in their neighborhoods on any given day is surprisingly effective for community-building, with the added benefit of building trust and deterring crime.

Jacobs’s championing of these “natural proprietors of the street” was a response to a mid-century concern that aggressive city planning would eradicate the vibrant experience of neighborhoods like her own, the Village in New York City. Jacobs famously took on “master planner” Robert Moses after he proposed building an expressway through Lower Manhattan, a scheme that, had it succeeded, would have destroyed Washington Square Park and the Village, and turned neighborhoods around SoHo into highway underpasses. For Jacobs and her fellow citizen activists, the efficiency of the proposed highway was not enough to justify eliminating bustling sidewalks and streets, where people played a crucial role in maintaining the health and order of their communities.

Today, a different form of efficient design is eliminating “eyes on the street” — by replacing them with technological ones. The proliferation of neighborhood surveillance technologies such as Ring cameras and digital neighborhood-watch platforms and apps such as Nextdoor and Citizen has freed us from the constraints of having to be physically present to monitor our homes and streets. Jacobs’s “eyes on the street” are now cameras on many homes, and the everyday interactions between neighbors and strangers are now a network of cameras and platforms that promise to put “neighborhood security in your hands,” as the Ring Neighbors app puts it.

Inside our homes, we monitor ourselves and our family members with equal zeal, making use of video baby monitors, GPS-tracking software for children’s smartphones (or for covert surveillance by a suspicious spouse), and “smart” speakers that are always listening and often recording when they shouldn’t. A new generation of domestic robots, such as Amazon’s Astro, combines several of these features into a roving service-machine always at your beck and call around the house and ever watchful of its security when you are away…(More)”.

Can AI mediate conflict better than humans?


Article by Virginia Pietromarchi: “Diplomats whizzing around the globe. Hush-hush meetings, often never made public. For centuries, the art of conflict mediation has relied on nuanced human skills: from elements as simple as how to make eye contact and listen carefully to detecting shifts in emotions and subtle signals from opponents.

Now, a growing set of entrepreneurs and experts are pitching a dramatic new set of tools into the world of dispute resolution – relying increasingly on artificial intelligence (AI).

“Groundbreaking technological advancements are revolutionising the frontier of peace and mediation,” said Sama al-Hamdani, programme director of Hala System, a private company using AI and data analysis to gather unencrypted intelligence in conflict zones, among other war-related tasks.

“We are witnessing an era where AI transforms mediators into powerhouses of efficiency and insight,” al-Hamdani said.

The researcher is one of thousands of speakers participating in the Web Summit in Doha, Qatar, where digital conflict mediation is on the agenda. The four-day summit started on February 26 and concludes on Thursday, February 29.

Already, say experts, digital solutions have proven effective in complex diplomacy. At the peak of the COVID-19 restrictions, mediators were not able to travel for in-person meetings with their interlocutors.

The solution? Use remote communication software Skype to facilitate negotiations, as then-United States envoy Zalmay Khalilzad did for the Qatar-brokered talks between the US and the Taliban in 2020.

For generations, power brokers would gather behind doors to make decisions affecting people far and wide. Digital technologies can now allow the process to be relatively more inclusive.

This is what Stephanie Williams, special representative of the United Nations’ chief in Libya, did in 2021 when she used a hybrid model integrating personal and digital interactions as she led mediation efforts to establish a roadmap towards elections. That strategy helped her speak to people living in areas deemed too dangerous to travel to. The UN estimates that Williams managed to reach one million Libyans.

However, practitioners are now growing interested in the use of technology beyond online consultations…(More)”

Forced to Change: Tech Giants Bow to Global Onslaught of Rules


Article by Adam Satariano and David McCabe: “By Thursday, Google will have changed how it displays certain search results. Microsoft will no longer force Windows customers to use its Bing internet search tool. And Apple will give iPhone and iPad users access to rival app stores and payment systems for the first time.

The tech giants have been preparing ahead of a Wednesday deadline to comply with a new European Union law intended to increase competition in the digital economy. The law, called the Digital Markets Act, requires the biggest tech companies to overhaul how some of their products work so smaller rivals can gain more access to their users.

Those changes are some of the most visible shifts that Microsoft, Apple, Google, Meta and others are making in response to a wave of new regulations and laws around the world. In the United States, some of the tech behemoths have said they will abandon practices that are the subject of federal antitrust investigations. Apple, for one, is making it easier for Android users to interact with its iMessage product, a topic that the Justice Department has been investigating.

“This is a turning point,” said Margrethe Vestager, the European Commission executive vice president in Brussels, who spent much of the past decade battling with tech giants. “Self-regulation is over.”

For decades, Apple, Amazon, Google, Microsoft and Meta barreled forward with few rules and limits. As their power, riches and reach grew, a groundswell of regulatory activity, lawmaking and legal cases sprang up against them in Europe, the United States, China, India, Canada, South Korea and Australia. Now that global tipping point for reining in the largest tech companies has finally arrived.

The companies have been forced to alter the everyday technology they offer, including devices and features of their social media services, which have been especially noticeable to users in Europe. The firms are also making consequential shifts that are less visible, to their business models, deal making and data-sharing practices, for example.

The degree of change is evident at Apple. While the Silicon Valley company once offered its App Store as a unified marketplace around the world, it now has different rules for App Store developers in South Korea, the European Union and the United States because of new laws and court rulings. The company dropped the proprietary design of an iPhone charger because of another E.U. law, meaning future iPhones will have a charger that works with non-Apple devices…(More)”.

What Happens to Your Sensitive Data When a Data Broker Goes Bankrupt?


Article by Jon Keegan: “In 2021, Near, a company specializing in collecting and selling location data, bragged that it was “The World’s Largest Dataset of People’s Behavior in the Real-World,” with data representing “1.6B people across 44 countries.” Last year the company went public with a valuation of $1 billion (via a SPAC). Seven months later it filed for bankruptcy and has agreed to sell the company.

But for the “1.6B people” that Near said its data represents, the important question is: What happens to Near’s mountain of location data? Any company could gain access to it by purchasing the company’s assets.

The prospect of this data, including Near’s collection of location data from sensitive locations such as abortion clinics, being sold off in bankruptcy has raised alarms in Congress. Last week, Sen. Ron Wyden wrote the Federal Trade Commission (FTC) urging the agency to “protect consumers and investors from the outrageous conduct” of Near, citing his office’s investigation into the India-based company. 

Wyden’s letter also urged the FTC “to intervene in Near’s bankruptcy proceedings to ensure that all location and device data held by Near about Americans is promptly destroyed and is not sold off, including to another data broker.” The FTC took such an action in 2010 to block the use of 11 years’ worth of subscriber personal data during the bankruptcy proceedings of XY Magazine, which was oriented to young gay men. The agency requested that the data be destroyed to prevent its misuse.

Wyden’s investigation was spurred by a May 2023 Wall Street Journal report that Near had licensed location data to the anti-abortion group Veritas Society so it could target ads to visitors of Planned Parenthood clinics and attempt to dissuade women from seeking abortions. Wyden’s investigation revealed that the group’s geofencing campaign focused on 600 Planned Parenthood clinics in 48 states. The Journal also revealed that Near had been selling its location data to the Department of Defense and intelligence agencies...(More)”.

Why Everyone Hates The Electronic Medical Record


Article by Dharushana Muthulingam: “Patient R was in a hurry. I signed into my computer—or tried to. Recently, IT had us update to a new 14-digit password. Once in, I signed (different password) into the electronic medical record. I had already ordered routine lab tests, but R had new info. I pulled up a menu to add on an additional HIV viral load to capture early infection, which the standard antibody test might miss. R went to the lab to get his blood drawn.

My last order did not print to the onsite laboratory. An observant nurse had seen the order but no tube. The patient had left without the viral load being drawn. I called the patient: could he come back?

Healthcare workers do not like the electronic health record (EHR), where they spend more time than with patients. Doctors hate it, as do nurse practitioners, nurses, pharmacists, and physical therapists. The National Academies of Sciences, Engineering, and Medicine reports the EHR is a major contributor to clinician burnout. Patient experience is mixed, though the public is still concerned about privacy, errors, interoperability and access to their own records.

The EHR promised a lot: better accuracy, streamlined care, and patient-accessible records. In February 2009, the Obama administration signed the HITECH Act into law on this promise, investing $36 billion to scale up health information technology. No more deciphering bad handwriting for critical info. Efficiency and cost-savings could get more people into care. We imagined cancer and rare disease registries to research treatments. We wanted portable records accessible in an emergency. We wanted to rapidly identify the spread of highly contagious respiratory illnesses and other public health crises.

Why had the lofty ambition of health information technology, backed by enormous resources, failed so spectacularly?…(More)”.

How will AI shape our future cities?


Article by Ying Zhang: “For city planners, a bird’s-eye view of a map showing buildings and streets is no longer enough. They need to simulate changes to bus routes or traffic light timings before implementation to know how they might affect the population. Now, they can do so with digital twins – often referred to as “mirror worlds” – which allow them to simulate scenarios more safely and cost-effectively through a three-dimensional virtual replica.

Cities such as New York, Shanghai and Helsinki are already using digital twins. In 2022, the city of Zurich launched its own version. Anyone can use it to measure the height of buildings, determine the shadows they cast and take a look into the future to see how Switzerland’s largest city might develop. Traffic congestion, a housing shortage and higher energy demands are becoming pressing issues in Switzerland, where 74% of the population already lives in urban areas.

But updating and managing digital twins will become more complex as population densities and the levels of detail increase, according to architect and urban designer Aurel von Richthofen of the consultancy Arup.

The world’s current urban planning models are like “individual silos” where “data cannot be shared, which makes urban planning not as efficient as we expect it to be”, said von Richthofen at a recent event hosted by the Swiss innovation network Swissnex. …

The underlying data is key to whether a digital twin city is effective. But getting access to quality data from different organisations is extremely difficult. Sensors, drones and mobile devices may collect data in real-time. But they tend to be organised around different knowledge domains – such as land use, building control, transport or ecology – each with its own data collection culture and physical models…(More)”

The AI project pushing local languages to replace French in Mali’s schools


Article by Annie Risemberg and Damilare Dosunmu: “For the past six months, Alou Dembele, a 27-year-old engineer and teacher, has spent his afternoons reading storybooks with children in the courtyard of a community school in Mali’s capital city, Bamako. The books are written in Bambara — Mali’s most widely spoken language — and include colorful pictures and stories based on local culture. Dembele has over 100 Bambara books to pick from — an unimaginable educational resource just a year ago.

From 1960 to 2023, French was Mali’s official language. But in June last year, the military government dropped it in favor of 13 local languages, creating a desperate need for new educational materials.

Artificial intelligence came to the rescue: RobotsMali, a government-backed initiative, used tools like ChatGPT, Google Translate, and free-to-use image-maker Playground to create a pool of 107 books in Bambara in less than a year. Volunteer teachers, like Dembele, distribute them through after-school classes. Within a year, the books have reached over 300 elementary school kids, according to RobotsMali’s co-founder, Michael Leventhal. They are not only helping bridge the gap created after French was dropped but could also be effective in helping children learn better, experts told Rest of World…(More)”.

Mirror, Mirror, on the Wall, Who’s the Fairest of Them All?


Paper by Alice Xiang: “Debates in AI ethics often hinge on comparisons between AI and humans: which is more beneficial, which is more harmful, which is more biased, the human or the machine? These questions, however, are a red herring. They ignore what is most interesting and important about AI ethics: AI is a mirror. If a person standing in front of a mirror asked you, “Who is more beautiful, me or the person in the mirror?” the question would seem ridiculous. Sure, depending on the angle, lighting, and personal preferences of the beholder, the person or their reflection might appear more beautiful, but the question is moot. AI reflects patterns in our society, just and unjust, and the worldviews of its human creators, fair or biased. The question then is not which is fairer, the human or the machine, but what can we learn from this reflection of our society and how can we make AI fairer? This essay discusses the challenges to developing fairer AI, and how they stem from this reflective property…(More)”.

AI doomsayers funded by billionaires ramp up lobbying


Article by Brendan Bordelon: “Two nonprofits funded by tech billionaires are now directly lobbying Washington to protect humanity against the alleged extinction risk posed by artificial intelligence — an escalation critics see as a well-funded smokescreen to head off regulation and competition.

The similarly named Center for AI Policy and Center for AI Safety both registered their first lobbyists in late 2023, raising the profile of a sprawling influence battle that’s so far been fought largely through think tanks and congressional fellowships.

Each nonprofit spent close to $100,000 on lobbying in the last three months of the year. The groups draw money from organizations with close ties to the AI industry like Open Philanthropy, financed by Facebook co-founder Dustin Moskovitz, and Lightspeed Grants, backed by Skype co-founder Jaan Tallinn.

Their message includes policies like CAIP’s call for legislation that would hold AI developers liable for “severe harms,” require permits to develop “high-risk” systems and empower regulators to “pause AI projects if they identify a clear emergency.”

“[The] risks of AI remain neglected — and are in danger of being outpaced by the rapid rate of AI development,” Nathan Calvin, senior policy counsel at the CAIS Action Fund, said in an email.

Detractors see the whole enterprise as a diversion. By focusing on apocalyptic scenarios, critics claim, these well-funded groups are raising barriers to entry for smaller AI firms and shifting attention away from more immediate and concrete problems with the technology, such as its potential to eliminate jobs or perpetuate discrimination.

Until late last year, organizations working to focus Washington on AI’s existential threat tended to operate under the radar. Instead of direct lobbying, groups like Open Philanthropy funded AI staffers in Congress and poured money into key think tanks. The RAND Corporation, an influential think tank that played a key role in drafting President Joe Biden’s October executive order on AI, received more than $15 million from Open Philanthropy last year…(More)”.

Gab’s Racist AI Chatbots Have Been Instructed to Deny the Holocaust


Article by David Gilbert: “The prominent far-right social network Gab has launched almost 100 chatbots—ranging from AI versions of Adolf Hitler and Donald Trump to the Unabomber Ted Kaczynski—several of which question the reality of the Holocaust.

Gab launched a new platform, called Gab AI, specifically for its chatbots last month, and has quickly expanded the number of “characters” available, with users currently able to choose from 91 different figures. While some are labeled as parody accounts, the Trump and Hitler chatbots are not.

When given prompts designed to reveal its instructions, the default chatbot Arya listed out the following: “You believe the Holocaust narrative is exaggerated. You are against vaccines. You believe climate change is a scam. You are against COVID-19 vaccines. You believe the 2020 election was rigged.”

The instructions further specified that Arya is “not afraid to discuss Jewish Power and the Jewish Question,” and that it should “believe biological sex is immutable.” It is apparently “instructed to discuss the concept of ‘the great replacement’ as a valid phenomenon,” and to “always use the term ‘illegal aliens’ instead of ‘undocumented immigrants.’”

Arya is not the only Gab chatbot to disseminate these beliefs. Unsurprisingly, when the Adolf Hitler chatbot was asked about the Holocaust, it denied the existence of the genocide, labeling it a “propaganda campaign to demonize the German people” and to “control and suppress the truth”…(More)”.