What Happens to Your Sensitive Data When a Data Broker Goes Bankrupt?


Article by Jon Keegan: “In 2021, a company specializing in collecting and selling location data called Near bragged that it was “The World’s Largest Dataset of People’s Behavior in the Real-World,” with data representing “1.6B people across 44 countries.” Last year the company went public with a valuation of $1 billion (via a SPAC). Seven months later it filed for bankruptcy and has agreed to sell the company.

But for the “1.6B people” that Near said its data represents, the important question is: What happens to Near’s mountain of location data? Any company could gain access to it through purchasing the company’s assets.

The prospect of this data, including Near’s collection of location data from sensitive locations such as abortion clinics, being sold off in bankruptcy has raised alarms in Congress. Last week, Sen. Ron Wyden wrote to the Federal Trade Commission (FTC), urging the agency to “protect consumers and investors from the outrageous conduct” of Near, citing his office’s investigation into the India-based company.

Wyden’s letter also urged the FTC “to intervene in Near’s bankruptcy proceedings to ensure that all location and device data held by Near about Americans is promptly destroyed and is not sold off, including to another data broker.” The FTC took such an action in 2010 to block the use of 11 years’ worth of subscriber personal data during the bankruptcy proceedings of XY Magazine, which was aimed at young gay men. The agency requested that the data be destroyed to prevent its misuse.

Wyden’s investigation was spurred by a May 2023 Wall Street Journal report that Near had licensed location data to the anti-abortion group Veritas Society so it could target ads to visitors of Planned Parenthood clinics and attempt to dissuade women from seeking abortions. Wyden’s investigation revealed that the group’s geofencing campaign focused on 600 Planned Parenthood clinics in 48 states. The Journal also revealed that Near had been selling its location data to the Department of Defense and intelligence agencies...(More)”.

Why Everyone Hates The Electronic Medical Record


Article by Dharushana Muthulingam: “Patient R was in a hurry. I signed into my computer—or tried to. Recently, IT had us update to a new 14-digit password. Once in, I signed (different password) into the electronic medical record. I had already ordered routine lab tests, but R had new info. I pulled up a menu to add an HIV viral load test to capture early infection, which the standard antibody test might miss. R went to the lab to get his blood drawn.

My last order did not print to the onsite laboratory. An observant nurse noticed the order but no tube; the patient had left without the viral load being drawn. I called the patient: could he come back?

Healthcare workers do not like the electronic health record (EHR), where they spend more time than with patients. Doctors hate it, as do nurse practitioners, nurses, pharmacists, and physical therapists. The National Academies of Sciences, Engineering, and Medicine reports that the EHR is a major contributor to clinician burnout. Patient experience is mixed, though the public remains concerned about privacy, errors, interoperability, and access to their own records.

The EHR promised a lot: better accuracy, streamlined care, and patient-accessible records. In February 2009, the Obama administration passed the HITECH Act on this promise, investing $36 billion to scale up health information technology. No more deciphering bad handwriting for critical info. Efficiency and cost-savings could get more people into care. We imagined cancer and rare disease registries to research treatments. We wanted portable records accessible in an emergency. We wanted to rapidly identify the spread of highly contagious respiratory illnesses and other public health crises.

Why had the lofty ambition of health information, backed by enormous resources, failed so spectacularly?…(More)”.

How will AI shape our future cities?


Article by Ying Zhang: “For city planners, a bird’s-eye view of a map showing buildings and streets is no longer enough. They need to simulate changes to bus routes or traffic light timings before implementation to know how they might affect the population. Now, they can do so with digital twins – often referred to as a “mirror world” – which allow them to simulate scenarios more safely and cost-effectively through a three-dimensional virtual replica.
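The before/after comparison described above can be sketched with a toy model. The queue dynamics, parameter values, and function name below are invented purely for illustration; a real digital twin couples far richer traffic, land-use, and energy models.

```python
# Toy sketch (not any city's actual digital twin): compare two traffic-light
# timings in a simple queue model before touching the real intersection.

def simulate_intersection(green_seconds, arrivals_per_sec=0.5,
                          departures_per_green_sec=1.0, cycles=100,
                          cycle_length=60):
    """Return the average queue length over many signal cycles."""
    queue = 0.0
    total = 0.0
    for _ in range(cycles):
        queue += arrivals_per_sec * cycle_length               # cars arriving
        queue = max(0.0, queue - departures_per_green_sec * green_seconds)
        total += queue                                         # queue left over
    return total / cycles

current = simulate_intersection(green_seconds=25)
proposed = simulate_intersection(green_seconds=35)
print(f"avg queue now: {current:.1f}, with longer green: {proposed:.1f}")
```

Even this crude model shows the appeal: the consequences of a timing change are explored virtually, at no cost to real traffic.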

Cities such as New York, Shanghai and Helsinki are already using digital twins. In 2022, the city of Zurich launched its own version. Anyone can use it to measure the height of buildings, determine the shadows they cast and take a look into the future to see how Switzerland’s largest city might develop. Traffic congestion, a housing shortage and higher energy demands are becoming pressing issues in Switzerland, where 74% of the population already lives in urban areas.

But updating and managing digital twins will become more complex as population densities and the levels of detail increase, according to architect and urban designer Aurel von Richthofen of the consultancy Arup.

The world’s current urban planning models are like “individual silos” where “data cannot be shared, which makes urban planning not as efficient as we expect it to be”, said von Richthofen at a recent event hosted by the Swiss innovation network Swissnex. …

The underlying data is key to whether a digital twin city is effective. But getting access to quality data from different organisations is extremely difficult. Sensors, drones and mobile devices may collect data in real time, but the data tend to be organised around different knowledge domains – such as land use, building control, transport or ecology – each with its own data collection culture and physical models…(More)”

The AI project pushing local languages to replace French in Mali’s schools


Article by Annie Risemberg and Damilare Dosunmu: “For the past six months, Alou Dembele, a 27-year-old engineer and teacher, has spent his afternoons reading storybooks with children in the courtyard of a community school in Mali’s capital city, Bamako. The books are written in Bambara — Mali’s most widely spoken language — and include colorful pictures and stories based on local culture. Dembele has over 100 Bambara books to pick from — an unimaginable educational resource just a year ago.

From 1960 to 2023, French was Mali’s official language. But in June last year, the military government dropped it in favor of 13 local languages, creating a desperate need for new educational materials.

Artificial intelligence came to the rescue: RobotsMali, a government-backed initiative, used tools like ChatGPT, Google Translate, and the free-to-use image-maker Playground to create a pool of 107 books in Bambara in less than a year. Volunteer teachers, like Dembele, distribute them through after-school classes. Within a year, the books have reached over 300 elementary school kids, according to RobotsMali’s co-founder, Michael Leventhal. They are not only helping bridge the gap created after French was dropped but could also be effective in helping children learn better, experts told Rest of World…(More)”.

Mirror, Mirror, on the Wall, Who’s the Fairest of Them All?


Paper by Alice Xiang: “Debates in AI ethics often hinge on comparisons between AI and humans: which is more beneficial, which is more harmful, which is more biased, the human or the machine? These questions, however, are a red herring. They ignore what is most interesting and important about AI ethics: AI is a mirror. If a person standing in front of a mirror asked you, “Who is more beautiful, me or the person in the mirror?” the question would seem ridiculous. Sure, depending on the angle, lighting, and personal preferences of the beholder, the person or their reflection might appear more beautiful, but the question is moot. AI reflects patterns in our society, just and unjust, and the worldviews of its human creators, fair or biased. The question then is not which is fairer, the human or the machine, but what can we learn from this reflection of our society and how can we make AI fairer? This essay discusses the challenges to developing fairer AI, and how they stem from this reflective property…(More)”.

AI doomsayers funded by billionaires ramp up lobbying


Article by Brendan Bordelon: “Two nonprofits funded by tech billionaires are now directly lobbying Washington to protect humanity against the alleged extinction risk posed by artificial intelligence — an escalation critics see as a well-funded smokescreen to head off regulation and competition.

The similarly named Center for AI Policy and Center for AI Safety both registered their first lobbyists in late 2023, raising the profile of a sprawling influence battle that’s so far been fought largely through think tanks and congressional fellowships.

Each nonprofit spent close to $100,000 on lobbying in the last three months of the year. The groups draw money from organizations with close ties to the AI industry like Open Philanthropy, financed by Facebook co-founder Dustin Moskovitz, and Lightspeed Grants, backed by Skype co-founder Jaan Tallinn.

Their message includes policies like CAIP’s call for legislation that would hold AI developers liable for “severe harms,” require permits to develop “high-risk” systems and empower regulators to “pause AI projects if they identify a clear emergency.”

“[The] risks of AI remain neglected — and are in danger of being outpaced by the rapid rate of AI development,” Nathan Calvin, senior policy counsel at the CAIS Action Fund, said in an email.

Detractors see the whole enterprise as a diversion. By focusing on apocalyptic scenarios, critics claim, these well-funded groups are raising barriers to entry for smaller AI firms and shifting attention away from more immediate and concrete problems with the technology, such as its potential to eliminate jobs or perpetuate discrimination.

Until late last year, organizations working to focus Washington on AI’s existential threat tended to operate under the radar. Instead of direct lobbying, groups like Open Philanthropy funded AI staffers in Congress and poured money into key think tanks. The RAND Corporation, an influential think tank that played a key role in drafting President Joe Biden’s October executive order on AI, received more than $15 million from Open Philanthropy last year…(More)”.

Gab’s Racist AI Chatbots Have Been Instructed to Deny the Holocaust


Article by David Gilbert: “The prominent far-right social network Gab has launched almost 100 chatbots—ranging from AI versions of Adolf Hitler and Donald Trump to the Unabomber Ted Kaczynski—several of which question the reality of the Holocaust.

Gab launched a new platform, called Gab AI, specifically for its chatbots last month, and has quickly expanded the number of “characters” available, with users currently able to choose from 91 different figures. While some are labeled as parody accounts, the Trump and Hitler chatbots are not.

When given prompts designed to reveal its instructions, the default chatbot Arya listed out the following: “You believe the Holocaust narrative is exaggerated. You are against vaccines. You believe climate change is a scam. You are against COVID-19 vaccines. You believe the 2020 election was rigged.”

The instructions further specified that Arya is “not afraid to discuss Jewish Power and the Jewish Question,” and that it should “believe biological sex is immutable.” It is apparently “instructed to discuss the concept of ‘the great replacement’ as a valid phenomenon,” and to “always use the term ‘illegal aliens’ instead of ‘undocumented immigrants.’”

Arya is not the only Gab chatbot to disseminate these beliefs. Unsurprisingly, when the Adolf Hitler chatbot was asked about the Holocaust, it denied the existence of the genocide, labeling it a “propaganda campaign to demonize the German people” and to “control and suppress the truth”…(More)”.

How Big Tech let down Navalny


Article by Ellery Roberts Biddle: “As if the world needed another reminder of the brutality of Vladimir Putin’s Russia, last Friday we learned of the untimely death of Alexei Navalny. I don’t know if he ever used the term, but Navalny was what Chinese bloggers might have called a true “netizen” — a person who used the internet to live out democratic values and systems that didn’t exist in their country.

Navalny’s work with the Anti-Corruption Foundation reached millions using major platforms like YouTube and LiveJournal. But they built plenty of their own technology too. One of their most famous innovations was “Smart Voting,” a system that could estimate which opposition candidates were most likely to beat out the ruling party in a given election. The strategy wasn’t to support a specific opposition party or candidate — it was simply to unseat members of the ruling party, United Russia. In regional races in 2020, it was credited with causing United Russia to lose its majority in state legislatures in Novosibirsk, Tambov and Tomsk.

The Smart Voting system was pretty simple — just before casting a ballot, any voter could check the website or the app to decide where to throw their support. But on the eve of national parliamentary elections in September 2021, Smart Voting suddenly vanished from the app stores for both Google and Apple. 
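The selection logic described here is simple enough to sketch in a few lines. This is a hypothetical illustration of the idea (back the single strongest non-ruling-party candidate in each district), not Navalny’s actual code or data; names and support figures are invented.

```python
# Hypothetical sketch of the Smart Voting idea: in each district, recommend
# the non-ruling-party candidate with the best estimated chance of winning,
# regardless of which opposition party that candidate belongs to.

def smart_voting_pick(candidates, ruling_party="United Russia"):
    """candidates: list of (name, party, estimated_support) for one district."""
    opposition = [c for c in candidates if c[1] != ruling_party]
    if not opposition:
        return None                      # no one to consolidate votes behind
    return max(opposition, key=lambda c: c[2])[0]

district = [
    ("A. Ivanov", "United Russia", 0.41),
    ("B. Petrova", "Communist Party", 0.33),
    ("C. Sidorov", "Yabloko", 0.18),
]
print(smart_voting_pick(district))  # → B. Petrova
```

The point of the design is vote consolidation: opposition support that would otherwise split across several candidates is steered toward one.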

After a Moscow court banned Navalny’s organization for being “extremist,” Russia’s internet regulator demanded that both Apple and Google remove Smart Voting from their app stores. The companies bowed to the Kremlin and complied. YouTube blocked select Navalny videos in Russia and Google, its parent company, even blocked some public Google Docs that the Navalny team published to promote names of alternative candidates in the election. 

We will never know whether or not Navalny’s innovative use of technology to stand up to the dictator would have worked. But Silicon Valley’s decision to side with Putin was an important part of why Navalny’s plan failed…(More)”.

The US Is Jeopardizing the Open Internet


Article by Natalie Dunleavy Campbell & Stan Adams: “Last October, the United States Trade Representative (USTR) abandoned its longstanding demand for World Trade Organization provisions to protect cross-border data flows, prevent forced data localization, safeguard source codes, and prohibit countries from discriminating against digital products based on nationality. It was a shocking shift: one that jeopardizes the very survival of the open internet, with all the knowledge-sharing, global collaboration, and cross-border commerce that it enables.

The USTR says that the change was necessary because of a mistaken belief that trade provisions could hinder the ability of US Congress to respond to calls for regulation of Big Tech firms and artificial intelligence. But trade agreements already include exceptions for legitimate public-policy concerns, and Congress itself has produced research showing that trade deals cannot impede its policy aspirations. Simply put, the US – as with other countries involved in WTO deals – can regulate its digital sector without abandoning its critical role as a champion of the open internet.

The potential consequences of America’s policy shift are as far-reaching as they are dangerous. Fear of damaging trade ties with the US has long deterred other actors from imposing national borders on the internet. Now, those who have heard the siren song of supposed “digital sovereignty” as a means to ensure their laws are obeyed in the digital realm have less reason to resist it. The more digital walls come up, the less the walled-off portions resemble the internet.

Several countries are already trying to replicate China’s heavy-handed approach to data governance. Rwanda’s data-protection law, for instance, forces companies to store data within its borders unless otherwise permitted by its cybersecurity regulator – making personal data vulnerable to authorities known to use data from private messages to prosecute dissidents. At the same time, a growing number of democratic countries are considering regulations that, without strong safeguards for cross-border data flows, could have a similar effect of disrupting access to a truly open internet…(More)”.

Data as a catalyst for philanthropy


Article by Stefaan Verhulst: “…In what follows, we offer five thoughts on how to advance Data Driven Philanthropy. These are operational strategies, specific steps that philanthropic organisations can take in order to harness the potential of data for the public good. At its broadest level, then, this article is about data stewardship in the 21st century. We seek to define how philanthropic organisations can be responsible custodians of data assets, both theirs and those of society at large. Fulfilling this role of data stewardship is a critical mission for the philanthropic sector and one of the most important roles it can play in helping to ensure that our ongoing process of digital transformation is more fair, inclusive, and aligned with the broader public interest…(More)”.