A Parent-To-Parent Campaign To Get Vaccine Rates Up


Alex Olgin at NPR: “In 2017, Kim Nelson had just moved her family back to her hometown in South Carolina. Boxes were still scattered around the apartment, and while her two young daughters played, Nelson scrolled through a newspaper article on her phone. It said religious exemptions for vaccines had jumped nearly 70 percent in recent years in the Greenville area — the part of the state she had just moved to.

She remembers yelling to her husband in the other room, “David, you have to get in here! I can’t believe this.”

Up until that point, Nelson hadn’t run into mom friends who didn’t vaccinate….

Nelson started her own group, South Carolina Parents for Vaccines. She began posting scientific articles online. She started responding to private messages from concerned parents with specific questions. She also found that positive reinforcement was important and would roam around the mom groups, sprinkling affirmations.

“If someone posts, ‘My child got their two-months shots today,’ ” Nelson says, she’d quickly post a follow-up comment: “Great job, mom!”

Nelson was inspired by peer-focused groups around the country doing similar work. Groups with national reach like Voices for Vaccines and regional groups like Vax Northwest in Washington state take a similar approach, encouraging parents to get educated and share facts about vaccines with other parents….

Public health specialists are raising concerns about the need to improve vaccination rates. But efforts to reach vaccine-hesitant parents often fail. When presented with facts about vaccine safety, parents often remain entrenched in their decision not to vaccinate.

Pediatricians could play a role — and many do — but they’re not compensated to have lengthy discussions with parents, and some of them find it a frustrating task. That has left an opening for alternative approaches, like Nelson’s.

Nelson thought it would be best to zero in on moms who were still on the fence about vaccines.

“It’s easier to pull a hesitant parent over than it is somebody who is firmly anti-vax,” Nelson says. She explains that parents who oppose vaccination often feel so strongly about it that they won’t engage in a discussion. “They feel validated by that choice — it’s part of community, it’s part of their identity.”…(More)”.

Data Fiduciary


/ˈdeɪtə fəˈduʃiˌɛri/

A person or a business that manages individual data in a trustworthy manner. Also ‘information fiduciary’, ‘data trust’, or ‘data steward’.

‘Fiduciary’ is an old concept in the legal world. It derives from the Latin fidere, which means to trust. In the legal context, a fiduciary is usually a person who is trusted to make decisions about how to manage an asset or information, within constraints set by another person who owns that asset or information. Examples of fiduciary relationships include homeowner and property manager, patient and doctor, and client and attorney. In each case, the latter has the ability to make decisions about the entrusted asset that fall within the conditions agreed upon by the former.

Jack M. Balkin and Jonathan Zittrain made the case for the “information fiduciary”, pointing out the urgency of adopting fiduciary practices in the data space. In The Atlantic, they wrote:

“The information age has created new kinds of entities that have many of the trappings of fiduciaries—huge online businesses, like Facebook, Google, and Uber, that collect, analyze, and use our personal information—sometimes in our interests and sometimes not. Like older fiduciaries, these businesses have become virtually indispensable. Like older fiduciaries, these companies collect a lot of personal information that could be used to our detriment. And like older fiduciaries, these businesses enjoy a much greater ability to monitor our activities than we have to monitor theirs. As a result, many people who need these services often shrug their shoulders and decide to trust them. But the important question is whether these businesses, like older fiduciaries, have legal obligations to be trustworthy. The answer is that they should.”

The recent controversy involving Facebook data and Cambridge Analytica provides another reason why companies collecting data from users need to act as fiduciaries. Within this framework, individuals would have a say over how and where their data can be used.

Another call for a form of data fiduciary comes from Google’s Sidewalk Labs project in Canada. After collecting data to inform urban planning in the Quayside area in Toronto, Sidewalk Labs announced that they would not be claiming ownership over the data that they collected and that the data should be “under the control of an independent Civic Data Trust.”

In a blog post, Sidewalk Labs wrote that:

“Sidewalk Labs believes an independent Civic Data Trust should become the steward of urban data collected in the physical environment. This Trust would approve and control the collection of, and manage access to, urban data originating in Quayside. The Civic Data Trust would be guided by a charter ensuring that urban data is collected and used in a way that is beneficial to the community, protects privacy, and spurs innovation and investment.”

Realizing the potential of creating new public value through exchanges of data, or data collaboratives, the GovLab “is advancing the concept and practice of Data Stewardship to promote responsible data leadership that can address the challenges of the 21st century.” A Data Steward mirrors some of the responsibilities of a data fiduciary, in that they are “responsible for determining what, when, how and with whom to share private data for public good.”

Balkin and Zittrain suggest that there is an asymmetry of power between companies that collect user-generated data and the users themselves: these companies are becoming indispensable and gaining ever more control over individuals’ data. Yet they are currently not legally obligated to be trustworthy, meaning there is no legal consequence when they use this data in a way that breaches privacy or runs against the interests of their customers.

Under a data fiduciary framework, those who are entrusted with data take on legal rights and responsibilities regarding its use. If a breach of trust occurs, the trustee faces legal consequences.

Nudging Citizens through Technology in Smart Cities


Sofia Ranchordas in the International Review of Law, Computers & Technology: “In the last decade, several smart cities throughout the world have started employing Internet of Things, big data, and algorithms to nudge citizens to save more water and energy, live healthily, use public transportation, and participate more actively in local affairs. Thus far, the potential and implications of data-driven nudges and behavioral insights in smart cities have remained an overlooked subject in the legal literature. Nevertheless, combining technology with behavioral insights may allow smart cities to nudge citizens more systematically and help these urban centers achieve their sustainability goals and promote civic engagement. For example, in Boston, real-time feedback on driving has increased road safety and in Eindhoven, light sensors have been used to successfully reduce nightlife crime and disturbance. While nudging tends to be well-intended, data-driven nudges raise a number of legal and ethical issues. This article offers a novel and interdisciplinary perspective on nudging which delves into the legal, ethical, and trust implications of collecting and processing large amounts of personal and impersonal data to influence citizens’ behavior in smart cities….(More)”.

Twentieth Century Town Halls: Architecture of Democracy


Book by Jon Stewart: “This is the first book to examine the development of the town hall during the twentieth century and the way in which these civic buildings have responded to the dramatic political, social and architectural changes which took place during the period. Following an overview of the history of the town hall as a building type, it examines the key themes, variations and lessons which emerged during the twentieth century. This is followed by 20 case studies from around the world which include plans, sections and full-colour illustrations. Each of the case studies examines the town hall’s procurement, the selection of its architect and the building design, and critically analyses its success and contribution to the type’s development. The case studies include:

Copenhagen Town Hall, Denmark, Martin Nyrop

Stockholm City Hall, Sweden, Ragnar Ostberg

Hilversum Town Hall, the Netherlands, Willem M. Dudok

Walthamstow Town Hall, Britain, Philip Dalton Hepworth

Oslo Town Hall, Norway, Arnstein Arneberg and Magnus Poulsson

Casa del Fascio, Como, Italy, Giuseppe Terragni

Aarhus Town Hall, Denmark, Arne Jacobsen with Erik Moller

Saynatsalo Town Hall, Finland, Alvar Aalto

Kurashiki City Hall, Japan, Kenzo Tange

Toronto City Hall, Canada, Viljo Revell

Boston City Hall, USA, Kallmann, McKinnell and Knowles

Dallas City Hall, USA, IM Pei

Mississauga City Hall, Canada, Ed Jones and Michael Kirkland

Borgoricco Town Hall, Italy, Aldo Rossi

Reykjavik City Hall, Iceland, Studio Granda

Valdelaguna Town Hall, Spain, Victor Lopez Cotelo and Carlos Puente Fernandez

The Hague City Hall, the Netherlands, Richard Meier

Iragna Town Hall, Switzerland, Raffaele Cavadini

Murcia City Hall, Spain, Jose Rafael Moneo

London City Hall, UK, Norman Foster…(More)”.

Weather Service prepares to launch prediction model many forecasters don’t trust


Jason Samenow in the Washington Post: “In a month, the National Weather Service plans to launch its “next generation” weather prediction model with the aim of “better, more timely forecasts.” But many meteorologists familiar with the model fear it is unreliable.

The introduction of a model in which forecasters lack confidence matters, considering the enormous impact that weather has on the economy, estimated at around $485 billion annually.

The Weather Service announced Wednesday that the model, known as the GFS-FV3 (FV3 stands for Finite-Volume Cubed-Sphere dynamical core), is “tentatively” set to become the United States’ primary forecast model on March 20, pending tests. It is an update to the current version of the GFS (Global Forecast System), popularly known as the American model, which has existed in various forms for more than 30 years….

A concern is that if forecasters cannot rely on the FV3, they will be left to rely only on the European model for their predictions, without a credible alternative for comparison. They will also have to pay large fees for European model data: whereas model data from the Weather Service is free, the European Centre for Medium-Range Weather Forecasts, which produces the European model, charges for access.

But there is an alternative perspective, which is that forecasters will just need to adjust to the new model and learn to account for its biases. That is, a little short-term pain is worth the long-term potential benefits as the model improves….

The Weather Service’s parent agency, the National Oceanic and Atmospheric Administration, recently entered an agreement with the National Center for Atmospheric Research to increase collaboration between forecasters and researchers in improving forecast modeling.

In addition, President Trump recently signed into law the Weather Research and Forecast Innovation Act Reauthorization, which establishes the NOAA Earth Prediction Innovation Center, aimed at further enhancing prediction capabilities. But even while NOAA develops relationships and infrastructure to improve the Weather Service’s modeling, the question remains whether the FV3 can meet the forecasting needs of the moment. Until the problems identified are addressed, its introduction could represent a step back in U.S. weather prediction despite a well-intended effort to leap forward….(More)”.

Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice


Paper by Rashida Richardson, Jason Schultz, and Kate Crawford: “Law enforcement agencies are increasingly using algorithmic predictive policing systems to forecast criminal activity and allocate police resources. Yet in numerous jurisdictions, these systems are built on data produced within the context of flawed, racially fraught and sometimes unlawful practices (‘dirty policing’). This can include systemic data manipulation, falsifying police reports, unlawful use of force, planted evidence, and unconstitutional searches. These policing practices shape the environment and the methodology by which data is created, which leads to inaccuracies, skews, and forms of systemic bias embedded in the data (‘dirty data’). Predictive policing systems informed by such data cannot escape the legacy of unlawful or biased policing practices that they are built on. Nor do claims by predictive policing vendors that these systems provide greater objectivity, transparency, or accountability hold up. While some systems offer the ability to see the algorithms used and even occasionally access to the data itself, there is no evidence to suggest that vendors independently or adequately assess the impact that unlawful and biased policing practices have on their systems, or otherwise assess how broader societal biases may affect their systems.

In our research, we examine the implications of using dirty data with predictive policing, and look at jurisdictions that (1) have utilized predictive policing systems and (2) have done so while under government commission investigations or federal court monitored settlements, consent decrees, or memoranda of agreement stemming from corrupt, racially biased, or otherwise illegal policing practices. In particular, we examine the link between unlawful and biased police practices and the data used to train or implement these systems across thirteen case studies. We highlight three of these: (1) Chicago, an example where dirty data was ingested directly into the city’s predictive system; (2) New Orleans, an example where the extensive evidence of dirty policing practices suggests an extremely high risk that dirty data was or will be used in any predictive policing application; and (3) Maricopa County, where, despite extensive evidence of dirty policing practices, a lack of transparency and public accountability surrounding predictive policing inhibits the public from assessing the risks of dirty data within such systems. The implications of these findings have widespread ramifications for predictive policing writ large. Deploying predictive policing systems in jurisdictions with extensive histories of unlawful police practices presents elevated risks that dirty data will lead to flawed, biased, and unlawful predictions, which in turn risk perpetuating additional harm via feedback loops throughout the criminal justice system. Thus, for any jurisdiction where police have been found to engage in such practices, the use of predictive policing in any context must be treated with skepticism and mechanisms for the public to examine and reject such systems are imperative….(More)”.

Should Libraries Be the Keepers of Their Cities’ Public Data?


Linda Poon at CityLab: “In recent years, dozens of U.S. cities have released pools of public data. It’s an effort to improve transparency and drive innovation, and done well, it can succeed at both: Governments, nonprofits, and app developers alike have eagerly gobbled up that data, hoping to improve everything from road conditions to air quality to food delivery.

But what often gets lost in the conversation is the idea of how public data should be collected, managed, and disseminated so that it serves everyone—rather than just a few residents—and so that people’s privacy and data rights are protected. That’s where librarians come in.

“As far as how private and public data should be handled, there isn’t really a strong model out there,” says Curtis Rogers, communications director for the Urban Libraries Council (ULC), an association of leading libraries across North America. “So to have the library as the local institution that is the most trusted, and to give them that responsibility, is a whole new paradigm for how data could be handled in a local government.”

In fact, librarians have long been advocates of digital inclusion and literacy. That’s why, last month, ULC launched a new initiative to give public libraries a leading role in a future with artificial intelligence. They kicked it off with a working group meeting in Washington, D.C., where representatives from libraries in cities like Baltimore, Toronto, Toledo, and Milwaukee met to exchange ideas on how to achieve that through education and by taking on a larger role in data governance.

It’s a broad initiative, and Rogers says they are still in the beginning stages of determining what that role will ultimately look like. But the group will discuss how data should be organized and managed, hash out the potential risks of artificial intelligence, and eventually develop a field-wide framework for how libraries can help drive equitable public data policies in cities.

Already, individual libraries are involved with their city’s data. Chattanooga Public Library (which wasn’t part of the working group, but is a member of ULC) began hosting the city’s open data portal in 2014, turning a traditionally print-centered institution into a community data hub. Since then, the portal has added more than 280 data sets and garnered hundreds of thousands of page views, according to a report for the 2018 fiscal year….

The Toronto Public Library is also in a unique position because it may soon sit inside one of North America’s “smartest” cities. Last month, the city’s board of trade published a 17-page report titled “BiblioTech,” calling for the library to oversee data governance for all smart city projects.

It’s a grand example of just how big the potential is for public libraries. Ryan says the proposal remains just that at the moment, and there are no details yet on what such a model would even look like. She adds that they were not involved in drafting the proposal, and were only asked to provide feedback. But the library is willing to entertain the idea.

Such ambitions would be a large undertaking in the U.S., however, especially for smaller libraries that are already understaffed and under-resourced. According to ULC’s survey of its members, only 23 percent of respondents said they have a staff person designated as the AI lead. A little over a quarter said they even have AI-related educational programming, and just 15 percent report being part of any local or national initiative.

Debbie Rabina, a professor of library science at Pratt Institute in New York, also cautions that putting libraries in charge of data governance has to be carefully thought out. It’s one thing for libraries to teach data literacy and privacy, and to help cities disseminate data. But to go further than that—to have libraries collecting and owning data and to have them assessing who can and can’t use the data—can lead to ethical conflicts and unintended consequences that could erode the public’s trust….(More)”.

Democracy Beyond Voting and Protests


Sasha Fisher at Project Syndicate: “For over a decade now, we have witnessed more elections and, simultaneously, less democracy. According to Bloomberg, elections have been occurring more frequently around the world. Yet Freedom House finds that some 110 countries have experienced declines in political and civil rights over the past 13 years.

As democracy declines, so does our sense of community. In the United States, this is evidenced by a looming loneliness epidemic and the rapid disappearance of civic institutions such as churches, eight of which close every day. And though these trends are global in nature, the US exemplifies them in the extreme.

This is no coincidence. As Alexis de Tocqueville pointed out in the 1830s, America’s founders envisioned a country governed not by shared values, but by self-interest. That vision has since defined America’s institutions, and fostered a hyper-individualistic society.

Growing distrust in governing institutions has fueled a rise in authoritarian populist movements around the world. Citizens are demanding individual economic security and retreating into an isolationist mentality….

And yet we know that “user engagement” works, as shown by countless studies and human experiences. For example, an evaluation conducted in Uganda found that the more citizens participated in the design of health programs, the more the perception of the health-care system improved. And in Indonesia, direct citizen involvement in government decision-making has led to higher satisfaction with government services….

While the Western world suffers from over-individualization, the most notable governance and economic innovations are taking place in the Global South. In Rwanda, for example, the government has introduced policies to encourage grassroots solutions that strengthen citizens’ sense of community and shared accountability. Through monthly community-service meetings, families and individuals work together to build homes for the needy, fix roads, and pool funds to invest in better farming practices and equipment.

Imagine if over 300 million Americans convened every month for a similar purpose. There would suddenly be billions more citizen hours invested in neighbor-to-neighbor interaction and citizen action.

This was one of the main effects of the Village Savings and Loan Associations that originated in the Democratic Republic of Congo. Within communities, members have access to loans to start small businesses and save for a rainy day. The model works because it leverages neighbor-to-neighbor accountability. Likewise, from Haiti to Liberia to Burundi and beyond, community-based health systems have proven effective precisely because health workers know their neighbors and their needs. Community health workers go from home to home, checking in on pregnant mothers and making sure they are cared for. Each of these solutions uses and strengthens communal accountability through shared engagement – not traditional vertical accountability lines.

If we believe in the democratic principle that governments must be accountable to citizens, we should build systems that hold us accountable to each other – and we must engage beyond elections and protests. We must usher in a new era of community-driven democracy – power must be decentralized and placed in the hands of families and communities.

When we achieve community-driven democracy, we will engage with one another and with our governments – not just on special occasions, but continuously, because our democracy and freedom depend on us….(More)” (See also Index on Trust in Institutions)

7 things we’ve learned about computer algorithms


Aaron Smith at Pew Research Center: “Algorithms are all around us, using massive stores of data and complex analytics to make decisions with often significant impacts on humans – from choosing the content people see on social media to judging whether a person is a good credit risk or job candidate. Pew Research Center released several reports in 2018 that explored the role and meaning of algorithms in people’s lives today. Here are some of the key themes that emerged from that research.

  1. Algorithmically generated content platforms play a prominent role in Americans’ information diets. Sizable shares of U.S. adults now get news on sites like Facebook or YouTube that use algorithms to curate the content they show to their users. A study by the Center found that 81% of YouTube users say they at least occasionally watch the videos suggested by the platform’s recommendation algorithm, and that these recommendations encourage users to watch progressively longer content as they click through the videos suggested by the site.
  2. The inner workings of even the most common algorithms can be confusing to users. Facebook is among the most popular social media platforms, but roughly half of Facebook users – including six-in-ten users ages 50 and older – say they do not understand how the site’s algorithmically generated news feed selects which posts to show them. And around three-quarters of Facebook users are not aware that the site automatically estimates their interests and preferences based on their online behaviors in order to deliver them targeted advertisements and other content.
  3. The public is wary of computer algorithms being used to make decisions with real-world consequences. The public expresses widespread concern about companies and other institutions using computer algorithms in situations with potential impacts on people’s lives. More than half (56%) of U.S. adults think it is unacceptable to use automated criminal risk scores when evaluating people who are up for parole. And 68% think it is unacceptable for companies to collect large quantities of data about individuals for the purposes of offering them deals or other financial incentives. When asked to elaborate about their worries, many feel that these programs violate people’s privacy, are unfair, or simply will not work as well as decisions made by humans….(More)”.

Technology and National Security


Book from the Aspen Strategy Group: “This edition is a collection of papers commissioned for the 2018 Aspen Strategy Group Summer Workshop, a bipartisan meeting of national security experts, academics, private sector leaders, and technologists. The chapters in this volume evaluate the disruptive nature of technological change on the US military, economic power, and democratic governance. They highlight possible avenues for US defense modernization, the impact of disinformation tactics and hybrid warfare on democratic institutions, and the need for a reinvigorated innovation triangle comprised of the US government, academia, and private corporations. The executive summary offers practical recommendations to meet the daunting challenges this technological era imposes….(More)”.