New York City to Require Food Delivery Services to Share Customer Data with Restaurants


Hunton Privacy Blog: “On August 29, 2021, a New York City Council bill amending the New York City Administrative Code to address customer data collected by food delivery services from online orders became law after the 30-day period for the mayor to sign or veto lapsed. Effective December 27, 2021, the law will permit restaurants to request customer data from third-party food delivery services and require delivery services to provide, on at least a monthly basis, such customer data until the restaurant “requests to no longer receive such customer data.” Customer data includes name, phone number, email address, delivery address and contents of the order.

Although customers are permitted to request that their customer data not be shared, the presumption under the law is that “customers have consented to the sharing of such customer data applicable to all online orders, unless the customer has made such a request in relation to a specific online order.” Food delivery services are required to provide on their websites a way for customers to request that their data not be shared “in relation to such online order.” To “assist its customers with deciding whether their data should be shared,” a delivery service must disclose to the customer (1) the data that may be shared with the restaurant and (2) the restaurant fulfilling the order as the recipient of the data.

The law will permit restaurants to use the customer data for marketing and other purposes, and prohibit delivery apps from restricting such activities by restaurants. Restaurants that receive the customer data, however, must allow customers to request the deletion of their customer data. In addition, restaurants are not permitted to sell, rent or disclose customer data to any other party in exchange for financial benefit, except with the express consent of the customer….(More)”.

The Battle for Digital Privacy Is Reshaping the Internet


Brian X. Chen at The New York Times: “Apple introduced a pop-up window for iPhones in April that asks people for their permission to be tracked by different apps.

Google recently outlined plans to disable a tracking technology in its Chrome web browser.

And Facebook said last month that hundreds of its engineers were working on a new method of showing ads without relying on people’s personal data.

The developments may seem like technical tinkering, but they were connected to something bigger: an intensifying battle over the future of the internet. The struggle has entangled tech titans, upended Madison Avenue and disrupted small businesses. And it heralds a profound shift in how people’s personal information may be used online, with sweeping implications for the ways that businesses make money digitally.

At the center of the tussle is what has been the internet’s lifeblood: advertising.

More than 20 years ago, the internet drove an upheaval in the advertising industry. It eviscerated newspapers and magazines that had relied on selling classified and print ads, and threatened to dethrone television advertising as the prime way for marketers to reach large audiences….

If personal information is no longer the currency that people give for online content and services, something else must take its place. Media publishers, app makers and e-commerce shops are now exploring different paths to surviving a privacy-conscious internet, in some cases overturning their business models. Many are choosing to make people pay for what they get online by levying subscription fees and other charges instead of using their personal data.

Jeff Green, the chief executive of the Trade Desk, an ad-technology company in Ventura, Calif., that works with major ad agencies, said the behind-the-scenes fight was fundamental to the nature of the web…(More)”

Harms of AI


Paper by Daron Acemoglu: “This essay discusses several potential economic, political and social costs of the current path of AI technologies. I argue that if AI continues to be deployed along its current trajectory and remains unregulated, it may produce various social, economic and political harms. These include: damaging competition, consumer privacy and consumer choice; excessively automating work, fueling inequality, inefficiently pushing down wages, and failing to improve worker productivity; and damaging political discourse, democracy’s most fundamental lifeblood. Although there is no conclusive evidence suggesting that these costs are imminent or substantial, it may be useful to understand them before they are fully realized and become harder or even impossible to reverse, precisely because of AI’s promising and wide-reaching potential. I also suggest that these costs are not inherent to the nature of AI technologies, but are related to how they are being used and developed at the moment – to empower corporations and governments against workers and citizens. As a result, efforts to limit and reverse these costs may need to rely on regulation and policies to redirect AI research. Attempts to contain them just by promoting competition may be insufficient….(More)”.

Government Lawyers: Technicians, Policy Shapers and Organisational Brakes


Paper by Philip S.C. Lewis and Linda Mulcahy: “Government lawyers have been rather neglected by scholars interested in the workings of the legal profession and the role of professional groups in contemporary society. This is surprising given their potential to influence the internal workings of an increasingly legalistic and centralized state. This article aims to partly fill that gap by looking at the way that lawyers employed by the government, and the administrators they work with, talk about their day-to-day practices. It draws on the findings of a large-scale empirical study of government lawyers in seven departments, funded by the ESRC. The study was undertaken between 2002 and 2003 by Philip Lewis, and is reported for the first time here. By looking at lawyers in bureaucracies, the interviews sought to explore what government lawyers do, how they talk about their work, and what distinguishes them from the administrative-grade clients and colleagues they work with….(More)”.

Enrollment algorithms are contributing to the crises of higher education


Paper by Alex Engler: “Hundreds of higher education institutions are procuring algorithms that strategically allocate scholarships to convince more students to enroll. In doing so, these enrollment management algorithms help colleges vary the cost of attendance according to students’ willingness to pay, a crucial aspect of competition in the higher education market. This paper elaborates on the specific two-stage process by which these algorithms first predict how likely prospective students are to enroll, and second help decide how to disburse scholarships to convince more of those prospective students to attend the college. These algorithms are valuable to colleges for institutional planning and financial stability, as well as for reaching their preferred financial, demographic, and scholastic outcomes for the incoming student body.
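
To make the two-stage process concrete, here is a minimal sketch of what such a pipeline could look like. It is an illustration only: the model form, feature names, coefficients, synthetic data, and scholarship menu below are assumptions for exposition, not details from the paper or from any vendor’s actual product.

```python
# Illustrative two-stage enrollment management sketch (hypothetical data and model).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stage 1: fit an enrollment-probability model on (hypothetical) historical admits.
# Features: high-school GPA, family income ($k), scholarship offered ($k).
X_hist = rng.normal(loc=[3.3, 80, 10], scale=[0.3, 40, 5], size=(500, 3))
# Hypothetical ground truth: larger scholarships and lower incomes raise enrollment odds.
logits = -2.0 + 0.3 * X_hist[:, 0] - 0.01 * X_hist[:, 1] + 0.15 * X_hist[:, 2]
y_hist = rng.random(500) < 1 / (1 + np.exp(-logits))
model = LogisticRegression().fit(X_hist, y_hist)

# Stage 2: for each new admit, offer the smallest scholarship (from a fixed menu)
# that pushes the predicted enrollment probability above a target yield.
def cheapest_offer(gpa, income, target_prob=0.5, menu=(0, 5, 10, 15, 20)):
    for award in menu:  # menu in $k, smallest first
        prob = model.predict_proba([[gpa, income, award]])[0, 1]
        if prob >= target_prob:
            return award, prob
    return menu[-1], prob  # cap at the largest award if the target is never reached

print(cheapest_offer(gpa=3.6, income=60))
```

Even this toy version shows where the concerns arise: the second stage is optimizing for enrollment, not affordability, so the “cheapest” award that secures a yes may still leave the student with an unmanageable bill.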

Unfortunately, the widespread use of enrollment management algorithms may also be hurting students, especially due to their narrow focus on enrollment. The prevailing evidence suggests that these algorithms generally reduce the amount of scholarship funding offered to students. Further, algorithms excel at identifying a student’s exact willingness to pay, meaning they may drive enrollment while also reducing students’ chances to persist and graduate. The use of this two-step process also opens many subtle channels for algorithmic discrimination to perpetuate unfair financial aid practices. Higher education is already suffering from low graduation rates, high student debt, and stagnant inequality for racial minorities—crises that enrollment algorithms may be making worse.

This paper offers a range of recommendations to ameliorate the risks of enrollment management algorithms in higher education. Categorically, colleges should not use predicted likelihood to enroll in either the admissions process or in awarding need-based aid—these determinations should only be made based on the applicant’s merit and financial circumstances, respectively. When colleges do use algorithms to distribute scholarships, they should proceed cautiously and document their data, processes, and goals. Colleges should also examine how scholarship changes affect students’ likelihood to graduate, or whether they may deepen inequities between student populations. Colleges should also ensure an active role for humans in these processes, such as exclusively using people to evaluate application quality and hiring internal data scientists who can challenge algorithmic specifications. State policymakers should consider the expanding role of these algorithms too, and should try to create more transparency about their use in public institutions. More broadly, policymakers should consider enrollment management algorithms as a concerning symptom of pre-existing trends towards higher tuition, more debt, and reduced accessibility in higher education….(More)”.

The Future of Citizen Engagement: Rebuilding the Democratic Dialogue


Report by the Congressional Management Foundation: “The Future of Citizen Engagement: Rebuilding the Democratic Dialogue” explores the current challenges to engagement and trust between Senators and Representatives and their constituents; proposes principles for rebuilding that fundamental democratic relationship; and describes innovative practices in federal, state, local, and international venues that Congress could look to for modernizing the democratic dialogue.

The report answers the following questions:

  • What factors have contributed to the deteriorating state of communications between citizens and Congress?
  • What principles should guide Congress as it tries to transform its communications systems and practices from administrative transactions to substantive interactions with the People it represents?
  • What models at the state and international level can Congress follow as it modernizes and rebuilds the democratic dialogue?

The findings and recommendations in this report are based on CMF’s long history of researching the relationship between Members of Congress and their constituents…(More)”.

The State of Consumer Data Privacy Laws in the US (And Why It Matters)


Article by Thorin Klosowski at the New York Times: “With more of the things people buy being internet-connected, more of our reviews and recommendations at Wirecutter are including lengthy sections detailing the privacy and security features of such products, everything from smart thermostats to fitness trackers. As the data these devices collect is sold and shared—and hacked—deciding what risks you’re comfortable with is a necessary part of making an informed choice. And those risks vary widely, in part because there’s no single, comprehensive federal law regulating how most companies collect, store, or share customer data.

Most of the data economy underpinning common products and services is invisible to shoppers. As your data gets passed around between countless third parties, there aren’t just more companies profiting from your data, but also more possibilities for your data to be leaked or breached in a way that causes real harm. In just the past year, we’ve seen a news outlet use pseudonymous app data, allegedly leaked from an advertiser associated with the dating app Grindr, to out a priest. We’ve read about the US government buying location data from a prayer app. Researchers have found opioid-addiction treatment apps sharing sensitive data. And T-Mobile recently suffered a data breach that affected at least 40 million people, some of whom had never even had a T-Mobile account.

“We have these companies that are amassing just gigantic amounts of data about each and every one of us, all day, every day,” said Kate Ruane, senior legislative counsel for the First Amendment and consumer privacy at the American Civil Liberties Union. Ruane also pointed out how data ends up being used in surprising ways—intentionally or not—such as in targeting ads or adjusting interest rates based on race. “Your data is being taken and it is being used in ways that are harmful.”

Consumer data privacy laws can give individuals rights to control their data, but if poorly implemented such laws could also maintain the status quo. “We can stop it,” Ruane continued. “We can create a better internet, a better world, that is more privacy protective.”…(More)”

The geography of AI


Report by Mark Muro and Sifan Liu: “Much of the U.S. artificial intelligence (AI) discussion revolves around futuristic dreams of both utopia and dystopia. From extreme to extreme, the promises range from solutions to global climate change to a “robot apocalypse.”

However, it bears remembering that AI is also becoming a real-world economic fact with major implications for national and regional economic development as the U.S. crawls out of the COVID-19 pandemic.

Based on advanced uses of statistics, algorithms, and fast computer processing, AI has become a focal point of U.S. innovation debates. Even more, AI is increasingly viewed as the next great “general purpose technology”—one that has the power to boost the productivity of sector after sector of the economy.

All of which is why state and city leaders are increasingly assessing AI for its potential to spur economic growth. Such leaders are analyzing where their regions stand and what they need to do to ensure their locations are not left behind.

In response to such questions, this analysis examines the extent, location, and concentration of AI technology creation and business activity in U.S. metropolitan areas.

Employing seven basic measures of AI capacity, the report benchmarks regions on the basis of their core AI assets and capabilities as they relate to two basic dimensions: AI research and AI commercialization. In doing so, the assessment categorizes metro areas into five tiers of regional AI involvement and extracts four main findings reflecting that involvement…(More)”.
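This excerpt does not spell out the seven measures or the tier cutoffs, but the general benchmarking approach — standardizing a set of capacity measures, combining them into research and commercialization scores, and ranking metros into tiers — can be sketched roughly as below. All column names, values, weights, and the quintile-based tiering are assumptions for illustration, not the report’s actual methodology.

```python
# Rough sketch of benchmarking metros on AI research vs. commercialization capacity.
# Column names, values, and the quintile-based tiers are illustrative assumptions.
import pandas as pd

metros = pd.DataFrame({
    "metro": ["Metro A", "Metro B", "Metro C", "Metro D", "Metro E"],
    # research-side measures (hypothetical values)
    "ai_papers": [1200, 300, 150, 80, 20],
    "federal_ai_rd": [90, 40, 25, 10, 2],
    # commercialization-side measures (hypothetical values)
    "ai_job_postings": [5000, 2000, 900, 400, 100],
    "ai_startups": [220, 60, 25, 12, 3],
})

def zscore(s):
    return (s - s.mean()) / s.std(ddof=0)

# Average standardized measures within each of the two dimensions, then combine.
research = metros[["ai_papers", "federal_ai_rd"]].apply(zscore).mean(axis=1)
commercial = metros[["ai_job_postings", "ai_startups"]].apply(zscore).mean(axis=1)
metros["ai_score"] = (research + commercial) / 2

# Group metros into five tiers by ranked composite score (quintiles, purely illustrative).
metros["tier"] = pd.qcut(metros["ai_score"].rank(method="first"), 5,
                         labels=["Tier 5", "Tier 4", "Tier 3", "Tier 2", "Tier 1"])
print(metros[["metro", "ai_score", "tier"]].sort_values("ai_score", ascending=False))
```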

The Mobility Data Sharing Assessment


New Tool from the Mobility Data Collaborative (MDC): “…released a set of resources to support transparent and accountable decision making about how and when to share mobility data between organizations. …The Mobility Data Sharing Assessment (MDSA) is a practical and customizable assessment that provides operational guidance to support an organization’s existing processes when sharing or receiving mobility data. It consists of a collection of resources:

  1. A Tool that provides a practical, customizable and open-source assessment for organizations to conduct a self-assessment.
  2. An Operator’s Manual that provides detailed instructions, guidance and additional resources to assist organizations as they complete the tool.
  3. An Infographic that provides a visual overview of the MDSA process.

“We were excited to work with the MDC to create a practical set of resources to support mobility data sharing between organizations,” said Chelsey Colbert, policy counsel at FPF. “Through collaboration, we designed version one of a technology-neutral tool, which is consistent and interoperable with leading industry frameworks. The MDSA was designed to be a flexible and scalable approach that enables mobility data sharing initiatives by encouraging organizations of all sizes to assess the legal, privacy, and ethical considerations.”

New mobility options, such as shared cars and e-scooters, have rapidly emerged in cities over the past decade. Data generated by these mobility services offers an exciting opportunity to provide valuable and timely insight to effectively develop transportation policy and infrastructure. As the world becomes more data-driven, tools like the MDSA help remove barriers to safe data sharing without compromising consumer trust….(More)”.

Kansas City expands civic engagement with data stories, virtual ‘lunch-and-learns’


Ryan Johnston at Statescoop: “…The city is currently running a series of virtual “lunch-and-learns,” as well as publishing data-driven “stories” using Socrata software to improve civic engagement, said Kate Bender, a senior management analyst in the city’s data division.

The work is especially important in reaching residents who aren’t equipped with digital literacy or data analysis skills, Bender said. The free lunch-and-learns — managed under the new Office of Citizen Engagement — teach residents how to use digital tools like the city’s open data portal and 311 mobile app.

New data stories, meanwhile, published on the city’s open data portal, allow residents to see the context behind raw data around COVID-19, 311 requests or city hiring practices that they might not otherwise be able to parse themselves. They’re both part of an effort to reach residents who aren’t already plugged into the city’s digital channels, Bender said.
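
Because the portal runs on Socrata, the raw data behind stories like these is also reachable programmatically through the SODA API. A minimal sketch of what that could look like follows; the dataset identifier and field names are hypothetical placeholders (the real ones are listed on the portal itself), so treat this as a pattern rather than a working query against the city’s actual 311 dataset.

```python
# Sketch: summarizing 311 service-request records from a Socrata open data portal.
# The dataset ID ("xxxx-xxxx") and field names are hypothetical placeholders;
# actual identifiers are published on the city's portal (data.kcmo.org).
import requests

BASE = "https://data.kcmo.org/resource/xxxx-xxxx.json"  # hypothetical 311 dataset
params = {
    "$select": "request_type, count(*) AS total",        # SoQL aggregation
    "$where": "creation_date > '2021-01-01T00:00:00'",
    "$group": "request_type",
    "$order": "total DESC",
    "$limit": 10,
}

resp = requests.get(BASE, params=params, timeout=30)
resp.raise_for_status()
for row in resp.json():
    print(row["request_type"], row["total"])
```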

“Knowing that we have more digital options and we have good engagement, how can we open up residents’ exposure to other things, and specifically tools, that we make available, that we put on our website or that we tweet about?” Bender said. “Unless you’re already pretty engaged, you might not know or think to download the city’s 311 app, or you might have heard of open data, but not really know how it pertains to you. So that was our concept.”

Bender’s office, DataKC, has “always been pretty closely aligned in working with 311 and advising on citizen engagement,” Bender said. But when COVID-19 hit and people could no longer gather in-person for citizen engagement events, like the city’s “Community Engagement University,” a free, 8-week, in-person program that taught residents about how various city agencies work, Bender and her team decided to take the education component virtual….(More)”.