Understanding and Measuring Hype Around Emergent Technologies


Article by Swaptik Chowdhury and Timothy Marler: “Inaccurate or excessive hype surrounding emerging technologies can have several negative effects, including poor decisionmaking by both private companies and the U.S. government. The United States needs a comprehensive approach to understanding and assessing public discourse–driven hype surrounding emerging technologies, but current methods for measuring technology hype are insufficient for developing policies to manage it. The authors of this paper describe an approach to analyzing technology hype…(More)”.

Automakers Are Sharing Consumers’ Driving Behavior With Insurance Companies


Article by Kashmir Hill: “Kenn Dahl says he has always been a careful driver. The owner of a software company near Seattle, he drives a leased Chevrolet Bolt. He’s never been responsible for an accident.

So Mr. Dahl, 65, was surprised in 2022 when the cost of his car insurance jumped by 21 percent. Quotes from other insurance companies were also high. One insurance agent told him his LexisNexis report was a factor.

LexisNexis is a New York-based global data broker with a “Risk Solutions” division that caters to the auto insurance industry and has traditionally kept tabs on car accidents and tickets. Upon Mr. Dahl’s request, LexisNexis sent him a 258-page “consumer disclosure report,” which it must provide per the Fair Credit Reporting Act.

What it contained stunned him: more than 130 pages detailing each time he or his wife had driven the Bolt over the previous six months. It included the dates of 640 trips, their start and end times, the distance driven, and an accounting of any speeding, hard braking or sharp accelerations. The only thing it didn’t have was where they had driven the car.

On a Thursday morning in June, for example, the car had been driven 7.33 miles in 18 minutes; there had been two rapid accelerations and two incidents of hard braking.

According to the report, the trip details had been provided by General Motors — the manufacturer of the Chevy Bolt. LexisNexis analyzed that driving data to create a risk score “for insurers to use as one factor of many to create more personalized insurance coverage,” according to a LexisNexis spokesman, Dean Carney. Eight insurance companies had requested information about Mr. Dahl from LexisNexis over the previous month.
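For illustration only, here is a minimal sketch of how per-trip telematics events like the ones in Mr. Dahl’s report (hard braking, rapid acceleration, speeding) could be rolled up into a driving risk score. The data fields mirror the trip details described above; the scoring formula is a hypothetical stand-in, not LexisNexis’s actual model.

```python
from dataclasses import dataclass

@dataclass
class Trip:
    miles: float
    minutes: float
    hard_brakes: int
    rapid_accels: int
    speeding_events: int

def risk_score(trips: list[Trip]) -> float:
    """Hypothetical score: risky events per 100 miles, capped at 100.
    Purely illustrative; not LexisNexis's actual methodology."""
    total_miles = sum(t.miles for t in trips) or 1.0
    events = sum(t.hard_brakes + t.rapid_accels + t.speeding_events for t in trips)
    return min(100.0, 100.0 * events / total_miles)

# The June trip described in the report: 7.33 miles in 18 minutes,
# two rapid accelerations and two hard-braking incidents.
trips = [Trip(miles=7.33, minutes=18, hard_brakes=2, rapid_accels=2, speeding_events=0)]
print(round(risk_score(trips), 1))  # 54.6
```

An insurer could then weight a score like this alongside traditional factors, which is roughly what the “one factor of many” language above describes.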

“It felt like a betrayal,” Mr. Dahl said. “They’re taking information that I didn’t realize was going to be shared and screwing with our insurance.”…(More)”.

What Happens to Your Sensitive Data When a Data Broker Goes Bankrupt?


Article by Jon Keegan: “In 2021, a company specializing in collecting and selling location data called Near bragged that it was “The World’s Largest Dataset of People’s Behavior in the Real-World,” with data representing “1.6B people across 44 countries.” Last year the company went public with a valuation of $1 billion (via a SPAC). Seven months later it filed for bankruptcy and has agreed to sell the company.

But for the “1.6B people” that Near said its data represents, the important question is: What happens to Near’s mountain of location data? Any company could gain access to it through purchasing the company’s assets.

The prospect of this data, including Near’s collection of location data from sensitive locations such as abortion clinics, being sold off in bankruptcy has raised alarms in Congress. Last week, Sen. Ron Wyden wrote the Federal Trade Commission (FTC) urging the agency to “protect consumers and investors from the outrageous conduct” of Near, citing his office’s investigation into the India-based company. 

Wyden’s letter also urged the FTC “to intervene in Near’s bankruptcy proceedings to ensure that all location and device data held by Near about Americans is promptly destroyed and is not sold off, including to another data broker.” The FTC took such an action in 2010 to block the use of 11 years’ worth of subscriber personal data during the bankruptcy proceedings of XY Magazine, which was oriented to young gay men. The agency requested that the data be destroyed to prevent its misuse.

Wyden’s investigation was spurred by a May 2023 Wall Street Journal report that Near had licensed location data to the anti-abortion group Veritas Society so it could target ads to visitors of Planned Parenthood clinics and attempt to dissuade women from seeking abortions. Wyden’s investigation revealed that the group’s geofencing campaign focused on 600 Planned Parenthood clinics in 48 states. The Journal also revealed that Near had been selling its location data to the Department of Defense and intelligence agencies...(More)”.

AI doomsayers funded by billionaires ramp up lobbying


Article by Brendan Bordelon: “Two nonprofits funded by tech billionaires are now directly lobbying Washington to protect humanity against the alleged extinction risk posed by artificial intelligence — an escalation critics see as a well-funded smokescreen to head off regulation and competition.

The similarly named Center for AI Policy and Center for AI Safety both registered their first lobbyists in late 2023, raising the profile of a sprawling influence battle that’s so far been fought largely through think tanks and congressional fellowships.

Each nonprofit spent close to $100,000 on lobbying in the last three months of the year. The groups draw money from organizations with close ties to the AI industry like Open Philanthropy, financed by Facebook co-founder Dustin Moskovitz, and Lightspeed Grants, backed by Skype co-founder Jaan Tallinn.

Their message includes policies like CAIP’s call for legislation that would hold AI developers liable for “severe harms,” require permits to develop “high-risk” systems and empower regulators to “pause AI projects if they identify a clear emergency.”

“[The] risks of AI remain neglected — and are in danger of being outpaced by the rapid rate of AI development,” Nathan Calvin, senior policy counsel at the CAIS Action Fund, said in an email.

Detractors see the whole enterprise as a diversion. By focusing on apocalyptic scenarios, critics claim, these well-funded groups are raising barriers to entry for smaller AI firms and shifting attention away from more immediate and concrete problems with the technology, such as its potential to eliminate jobs or perpetuate discrimination.

Until late last year, organizations working to focus Washington on AI’s existential threat tended to operate under the radar. Instead of direct lobbying, groups like Open Philanthropy funded AI staffers in Congress and poured money into key think tanks. The RAND Corporation, an influential think tank that played a key role in drafting President Joe Biden’s October executive order on AI, received more than $15 million from Open Philanthropy last year…(More)”.

How Mental Health Apps Are Handling Personal Information


Article by Erika Solis: “…Before diving into the privacy policies of mental health apps, it’s necessary to distinguish between “personal information” and “sensitive information,” which are both collected by such apps. Personal information can be defined as information that is “used to distinguish or trace an individual’s identity.” Sensitive information, however, can be any data that, if lost, misused, or illegally modified, may negatively affect an individual’s privacy rights. While health information not covered by HIPAA has previously been treated as general personal information, states like Washington are implementing strong legislation that will treat a wide range of health data as sensitive, with attendant stricter guidelines.

Legislation addressing the treatment of personal information and sensitive information varies around the world. Regulations like the General Data Protection Regulation (GDPR) in the EU, for example, require all types of personal information to be treated as being of equal importance, with certain special categories, including health data, having slightly elevated levels of protection. Meanwhile, U.S. federal laws are limited in addressing applicable protections of information provided to a third party, so mental health app companies based in the United States can approach personal information in all sorts of ways. For instance, Mindspa, an app with chatbots that are only intended to be used when a user is experiencing an emergency, and Elomia, a mental health app that’s meant to be used at any time, don’t make distinctions between these contexts in their privacy policies. They also don’t distinguish between the potentially different levels of sensitivity associated with ordinary and crisis use.

Wysa, on the other hand, clearly indicates how it protects personal information. Making a distinction between personal and sensitive data, its privacy policy notes that all health-based information receives additional protection. Similarly, Limbic labels everything as personal information but notes that health, genetic, and biometric data fall within a “special category” that requires more explicit consent than other personal information before it can be used…(More)”.
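As a rough illustration of the distinction these policies draw, the sketch below tags each collected field as ordinary personal data or GDPR-style special-category data and gates processing of the latter on explicit consent, roughly as Wysa’s and Limbic’s policies describe. The field names and logic are assumptions for illustration, not any app’s actual implementation.

```python
from enum import Enum

class Category(Enum):
    PERSONAL = "personal"          # e.g., name, email, device ID
    SPECIAL = "special category"   # e.g., health, genetic, biometric data

# Hypothetical field tagging, loosely mirroring the personal vs.
# special-category split described above.
FIELD_CATEGORIES = {
    "email": Category.PERSONAL,
    "device_id": Category.PERSONAL,
    "mood_log": Category.SPECIAL,
    "therapy_chat": Category.SPECIAL,
}

def can_process(field: str, has_explicit_consent: bool) -> bool:
    """Special-category fields require explicit consent; ordinary personal
    data is assumed covered by the baseline consent flow."""
    if FIELD_CATEGORIES[field] is Category.SPECIAL:
        return has_explicit_consent
    return True

print(can_process("mood_log", has_explicit_consent=False))  # False
print(can_process("email", has_explicit_consent=False))     # True
```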

The U.S. Census Is Wrong on Purpose


Blog by David Friedman: “This is a story about data manipulation. But it begins in a small Nebraska town called Monowi that has only one resident, 90-year-old Elsie Eiler.

[Photo: the town sign reading “Monowi 1,” via Google Street View.]

There used to be more people in Monowi. But little by little, the other residents of Monowi left or died. That’s what happened to Elsie’s own family — her children grew up and moved out, and her husband passed away in 2004, leaving her as the sole resident. Now she votes for herself for mayor and pays herself taxes. Her husband Rudy’s old book collection became the town library, with Elsie as librarian.

But despite what you might imagine, Elsie is far from lonely. She runs a tavern that’s been in her family for 50 years, and has plenty of regulars from the town next door who come by every day to dine and chat.

I first read about Elsie more than 10 years ago. At the time, it wasn’t as well-known a story, but Elsie has since gotten a lot of coverage and become a bit of a minor celebrity. Now and then I still come across a new article, including a lovely photo essay in the New York Times and a short video on the BBC Travel site.

A Google search reveals many, many similar articles that all tell more or less the same story.

But then suddenly in 2021, there was a new wrinkle: According to the just-published 2020 U.S. Census data, Monowi now had 2 residents, doubling its population.

This came as a surprise to Elsie, who told a local newspaper, “Then someone’s been hiding from me, and there’s nowhere to live but my house.”

It turns out that nobody new had actually moved to Monowi without Elsie realizing. And the Census Bureau didn’t make a mistake. They intentionally changed the census data, adding one resident.

Why would they do that? Well, it turns out the Census Bureau sometimes moves residents around on paper in order to protect people’s privacy.

Full census data is only made available 72 years after the census takes place, in accordance with the creatively named “72-year rule.” Until then, it is only available as aggregated data with individual identifiers removed. Still, if the population of a town is small enough, and census data for that town indicates, for example, that there is just one 90-year-old woman and she lives alone, someone could conceivably figure out who that individual is.

So the Census Bureau sometimes moves people around to create noise in the data that makes that sort of identification a little bit harder…(More)”.
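A toy sketch of the idea, using made-up microdata: a town below a small population threshold gets one record “moved in” on paper, so the published count no longer singles out a lone resident. This only illustrates record swapping in miniature; the Census Bureau’s actual disclosure-avoidance methods (historically swapping, and formal differential privacy for the 2020 count) are far more involved.

```python
import random

# Made-up microdata: one record per resident.
records = [
    {"town": "Monowi", "age": 90, "sex": "F"},
    {"town": "Lynch",  "age": 47, "sex": "M"},
    {"town": "Lynch",  "age": 52, "sex": "F"},
]

def swap_into_small_towns(recs, threshold=2, seed=0):
    """Illustrative record swap: any town with fewer residents than
    `threshold` gets one record reassigned from another town, so the
    published count no longer pinpoints a lone resident.
    Not the Census Bureau's actual algorithm."""
    rng = random.Random(seed)
    out = [dict(r) for r in recs]
    counts = {}
    for r in out:
        counts[r["town"]] = counts.get(r["town"], 0) + 1
    for town, n in counts.items():
        if n < threshold:
            donors = [r for r in out if r["town"] != town]
            if donors:
                rng.choice(donors)["town"] = town  # moved "on paper" only
    return out

swapped = swap_into_small_towns(records)
print(sum(r["town"] == "Monowi" for r in swapped))  # 2 -- like the 2020 count
```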

Air Canada chatbot promised a discount. Now the airline has to pay it


Article by Kyle Melnick: “After his grandmother died in Ontario a few years ago, British Columbia resident Jake Moffatt visited Air Canada’s website to book a flight for the funeral. He received assistance from a chatbot, which told him the airline offered reduced rates for passengers booking last-minute travel due to tragedies.

Moffatt bought a nearly $600 ticket for a next-day flight after the chatbot said he would get some of his money back under the airline’s bereavement policy as long as he applied within 90 days, according to a recent civil-resolutions tribunal decision.

But when Moffatt later attempted to receive the discount, he learned that the chatbot had been wrong. Air Canada only awarded bereavement fees if the request had been submitted before a flight. The airline later argued the chatbot was a separate legal entity “responsible for its own actions,” the decision said.

Moffatt filed a claim with the Canadian tribunal, which ruled Wednesday that Air Canada owed Moffatt more than $600 in damages and tribunal fees after failing to provide “reasonable care.”

As companies have added artificial intelligence-powered chatbots to their websites in hopes of providing faster service, the Air Canada dispute sheds light on issues associated with the growing technology and how courts could approach questions of accountability. The Canadian tribunal in this case came down on the side of the customer, ruling that Air Canada did not ensure its chatbot was accurate…(More)”.

To Design Cities Right, We Need to Focus on People


Article by Tim Keane: “Our work in the U.S. to make better neighborhoods, towns and cities is a hapless and obdurate mess. If you’ve attended a planning meeting anywhere, you have probably witnessed the miserable process in action—unrestrainedly selfish fighting about false choices and seemingly inane procedures. Rather than designing places for people, we see cities as a collection of mechanical problems with technical and legal solutions. We distract ourselves with the latest rebranded ideas about places—smart growth, resilient cities, complete streets, just cities, 15-minute cities, happy cities—rather than getting down to the actual work of designing the physical place. This lacks a fundamental vision. And it’s not succeeding.

Our flawed approach to city planning started a century ago. The first modern city plan was produced for Cincinnati in 1925 by the Technical Advisory Corporation, founded in 1913 by George Burdett Ford and E.P. Goodrich in New York City. New York adopted the country’s first comprehensive zoning ordinance in 1916, an effort Ford led. Not coincidentally, the advent of zoning, and then comprehensive planning, corresponded directly with the great migration of six million Black people from the South to Northern, Midwestern and Western cities. New city planning practices were a technical means to discriminate and exclude.

This first comprehensive plan also ushered in another type of dehumanization: city planning by formula. To justify widening downtown streets by cutting into sidewalks, engineers used a calculation that reflected the cost to operate an automobile in a congested area—including the cost of a human life, because crashes killed people. Engineers also calculated the value of a sidewalk through a formula based on how many people the elevators in adjoining buildings could deliver at peak times. In the end, Cincinnati’s planners recommended widening the streets for cars, which were becoming more common, by shrinking sidewalks. City planning became an engineering equation, and one focused on separating people and spreading the city out to the maximum extent possible…(More)”.

University of Michigan Sells Recordings of Study Groups and Office Hours to Train AI


Article by Joseph Cox: “The University of Michigan is selling hours of audio recordings of study groups, office hours, lectures, and more to outside third parties for tens of thousands of dollars for the purpose of training large language models (LLMs). 404 Media has downloaded a sample of the data, which includes a one hour and 20 minute long audio recording of what appears to be a lecture.

The news highlights how some LLMs may ultimately be trained on data with an unclear level of consent from the source subjects…(More)”.

Toward a 21st Century National Data Infrastructure: Managing Privacy and Confidentiality Risks with Blended Data


Report by the National Academies of Sciences, Engineering, and Medicine: “Protecting privacy and ensuring confidentiality in data is a critical component of modernizing our national data infrastructure. The use of blended data – combining previously collected data sources – presents new considerations for responsible data stewardship. Toward a 21st Century National Data Infrastructure: Managing Privacy and Confidentiality Risks with Blended Data provides a framework for managing disclosure risks that accounts for the unique attributes of blended data and poses a series of questions to guide considered decision-making.

Technical approaches to manage disclosure risk have advanced. Recent federal legislation, regulation, and guidance have broadly described the roles and responsibilities for stewardship of blended data. The report, drawing from the panel’s review of both technical and policy approaches, addresses these emerging opportunities and the new challenges and responsibilities they present. The report underscores that trade-offs in disclosure risks, disclosure harms, and data usefulness are unavoidable and are central considerations when planning data-release strategies, particularly for blended data…(More)”.
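One common technical lens on why blending raises disclosure risk is a k-anonymity check: the size of the smallest group of records sharing the same quasi-identifiers. The sketch below uses made-up data to show how merging an administrative variable onto a survey can shrink those groups; it illustrates the general point, not the report’s specific framework.

```python
import pandas as pd

# Two previously separate, made-up sources, blended on a shared person ID.
survey = pd.DataFrame({
    "pid": [1, 2, 3, 4],
    "age_band": ["30-39", "30-39", "30-39", "30-39"],
    "county": ["A", "A", "A", "A"],
})
admin = pd.DataFrame({
    "pid": [1, 2, 3, 4],
    "benefit": ["none", "none", "disability", "none"],
})
blended = survey.merge(admin, on="pid")

def k_anonymity(df: pd.DataFrame, quasi_identifiers: list[str]) -> int:
    """Smallest group size over the quasi-identifier combination; a value
    of 1 means at least one record is unique and potentially re-identifiable."""
    return int(df.groupby(quasi_identifiers).size().min())

# Blending adds 'benefit' to the attributes an attacker might know,
# which can shrink group sizes and raise disclosure risk.
print(k_anonymity(survey, ["age_band", "county"]))               # 4
print(k_anonymity(blended, ["age_band", "county", "benefit"]))   # 1
```

In practice, stewards would weigh risk metrics like this against the usefulness lost to suppression or added noise, which is the trade-off the report highlights.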