Digital Pro Bono: Leveraging Technology to Provide Access to Justice
Paper by Kathleen Elliott Vinson and Samantha A. Moppett: “…While individuals have the constitutional right to legal assistance in criminal cases, the same does not hold true for civil matters. Low-income Americans are unable to gain access to meaningful help for basic legal needs. Although legal aid organizations exist to help low-income Americans who cannot afford legal representation, the resources available are insufficient to meet current civil legal needs. Studies show more than 80 percent of the legal needs of low-income Americans go unaddressed every year.
This article examines how law students, law schools, the legal profession, legal services agencies, and low-income individuals who need assistance all have a shared interest—access to justice—and can work together to reach the elusive goal in the Pledge of Allegiance of “justice for all.” It illustrates how their collaborative leveraging of technology in innovative ways, like digital pro bono services, is one way to provide access to justice. It discusses ABA Free Legal Answers Online, the program the ABA pioneered to help confront the justice gap in the United States. The program provides a “virtual legal advice clinic” where attorneys answer civil legal questions that low-income residents post on free, secure, and confidential state-specific websites. The article provides a helpful resource for how law schools can leverage this technology to increase access to justice for low-income communities while providing pro bono opportunities for attorneys and students in their state…(More)”.
Visualizing where rich and poor people really cross paths—or don’t
Ben Paynter at Fast Company: “…It’s an idea that’s hard to visualize unless you can see it on a map. So MIT Media Lab collaborated with the location intelligence firm Cuebiq to build one. The result is called the Atlas of Inequality and harvests the anonymized location data from 150,000 people who opted in…

The result is an interactive view of just how filtered, sheltered, or sequestered many people’s lives really are. That’s an important thing to be reminded of at a time when the U.S. feels increasingly ideologically and economically divided. “Economic inequality isn’t just limited to neighborhoods, it’s part of the places you visit every day,” the researchers say in a mission statement about the Atlas….(More)”.
Public Interest Technology University Network
About: “The Public Interest Technology Universities Network is a partnership that fosters collaboration between 21 universities and colleges committed to building the nascent field of public interest technology and growing a new generation of civic-minded technologists. Through the development of curricula, research agendas, and experiential learning programs in the public interest technology space, these universities are trying innovative tactics to produce graduates with multiple fluencies at the intersection of technology and policy. By joining PIT-UN, members commit to field building on campus. Members may choose to focus on some or all of these elements, in addition to other initiatives they deem relevant to establishing public interest technology on campus:
- Support curriculum and faculty development to enable interdisciplinary and cross-disciplinary education of students, so they can critically assess the ethical, political, and societal implications of new technologies, and design technologies in service of the public good.
- Develop experiential learning opportunities such as clinics, fellowships, apprenticeships, and internships with public and private sector partners in the public interest technology space.
- Find ways to support graduates who pursue careers working in public interest technology, recognizing that financial considerations may make careers in this area unaffordable to many.
- Create mechanisms for faculty to receive recognition for the research, curriculum development, teaching, and service work needed to build public interest technology as an arena of inquiry.
- Provide institutional data that will allow us to measure the effectiveness of our interventions in helping to develop the field of public interest technology.
…. (More)”.
Identifying commonly used and potentially unsafe transit transfers with crowdsourcing
Paper by Elizabeth J. Traut and Aaron Steinfeld: “Public transit is an important contributor to sustainable transportation as well as a public service that makes necessary travel possible for many. Poor transit transfers can lead to both a real and perceived reduction in convenience and safety, especially for people with disabilities. Poor transfers can expose riders to inclement weather and crime, and they can reduce transit ridership by motivating riders who have the option of driving or using paratransit to elect a more expensive and inefficient travel mode. Unfortunately, knowledge about inconvenient, missed, and unsafe transit transfers is sparse and incomplete.
We show that crowdsourced public transit ridership data, which is more scalable than conducting traditional surveys, can be used to analyze transit transfers. The Tiramisu Transit app merges open transit data with information contributed by users about which trips they take. We use Tiramisu data to do origin-destination analysis and identify connecting trips to create a better understanding of where and when poor transfers are occurring in the Pittsburgh region. We merge the results with data from other open public data sources, including crime data, to create a data resource that can be used for planning and identification of locations where bus shelters and other infrastructure improvements may facilitate safer and more comfortable waits and more accessible transfers. We use generalizable methods to ensure broader value to both science and practitioners.
We present a case study of the Pittsburgh region, in which we identified and characterized 338 transfers from 142 users. We found that 66.6% of transfers were within 0.4 km (0.25 mi.) and 44.1% of transfers were less than 10 min. We identified the geographical distribution of transfers and found several highly-utilized transfer locations that were not identified by the Port Authority of Allegheny County as recommended transfer points, and so might need more planning attention. We cross-referenced transfer location and wait time data with crime levels to provide additional planning insight….(More)”.
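The origin-destination analysis described above pairs consecutive trips by the same rider whose alighting point is close, in both space and time, to the next boarding point. The sketch below illustrates that idea using the study’s reported thresholds (0.4 km, 10 min); the record field names and the haversine helper are illustrative assumptions, not the Tiramisu codebase.

```python
import math
from datetime import datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def find_transfers(trips, max_km=0.4, max_wait_min=10):
    """Pair each rider's consecutive trips; flag pairs where the next
    boarding is within max_km of the previous alighting and the wait
    is at most max_wait_min minutes."""
    by_user = {}
    for t in trips:
        by_user.setdefault(t["user"], []).append(t)
    transfers = []
    for user, user_trips in by_user.items():
        user_trips.sort(key=lambda t: t["board_time"])
        for prev, nxt in zip(user_trips, user_trips[1:]):
            dist = haversine_km(prev["alight_lat"], prev["alight_lon"],
                                nxt["board_lat"], nxt["board_lon"])
            wait = (nxt["board_time"] - prev["alight_time"]).total_seconds() / 60
            if dist <= max_km and 0 <= wait <= max_wait_min:
                transfers.append({"user": user, "dist_km": dist, "wait_min": wait})
    return transfers
```

Cross-referencing each detected transfer’s location with open crime data is then a straightforward spatial join on the alighting coordinates.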
Toward an Open Data Bias Assessment Tool Measuring Bias in Open Spatial Data
Working Paper by Ajjit Narayanan and Graham MacDonald: “Data is a critical resource for government decisionmaking, and in recent years, local governments, in a bid for transparency, community engagement, and innovation, have released many municipal datasets on publicly accessible open data portals. More recently, advocates, reporters, and others have voiced concerns about the bias of the algorithms used to guide public decisions and the data that power them.
Although significant progress is being made in developing tools for algorithmic bias and transparency, we could not find any standardized tools available for assessing bias in open data itself. In other words, how can policymakers, analysts, and advocates systematically measure the level of bias in the data that power city decisionmaking, whether an algorithm is used or not?
To fill this gap, we present a prototype of an automated bias assessment tool for geographic data. This new tool will allow city officials, concerned residents, and other stakeholders to quickly assess the bias and representativeness of their data. The tool allows users to upload a file with latitude and longitude coordinates and receive simple metrics of spatial and demographic bias across their city.
The tool is built on geographic and demographic data from the Census and assumes that the population distribution in a city represents the “ground truth” of the underlying distribution in the data uploaded. To provide an illustrative example of the tool’s use and output, we test our bias assessment on three datasets—bikeshare station locations, 311 service request locations, and Low Income Housing Tax Credit (LIHTC) building locations—across a few, hand-selected example cities….(More)”
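The tool’s core comparison, checking where uploaded data points fall against where people actually live, can be sketched as follows. This is a minimal illustration under the paper’s “population as ground truth” assumption; the function names, the per-area representation ratio, and the use of a dissimilarity-style index are this sketch’s assumptions, not the Urban Institute’s actual implementation.

```python
def spatial_bias(point_counts, pop_counts):
    """Compare the share of data points in each area (e.g., Census tract)
    with that area's share of the city's population.

    Returns:
      ratios  - per-area representation ratio (1.0 = proportional,
                >1 over-represented, <1 under-represented)
      dissim  - a 0-1 mismatch index (0 = data mirrors population,
                1 = maximal mismatch), half the total absolute
                difference between the two share distributions
    """
    total_pts = sum(point_counts.values())
    total_pop = sum(pop_counts.values())
    ratios = {}
    dissim = 0.0
    for area, pop in pop_counts.items():
        pop_share = pop / total_pop
        pt_share = point_counts.get(area, 0) / total_pts
        ratios[area] = pt_share / pop_share if pop_share else float("inf")
        dissim += abs(pt_share - pop_share)
    return ratios, dissim / 2
```

For example, if 80% of bikeshare stations sit in tracts holding 50% of residents, those tracts get a ratio of 1.6 while the rest get 0.4, and the mismatch index is 0.3.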
The trouble with informed consent in smart cities
Blog Post by Emilie Scott: “…Lilian Edwards, a U.K.-based academic in internet law, points out that public spaces like smart cities further dilute the level of consent in the IoT: “While consumers may at least have theoretically had a chance to read the privacy policy of their Nest thermostat before signing the contract, they will have no such opportunity in any real sense when their data is collected by the smart road or smart tram they go to work on, or as they pass the smart dustbin.”
If citizens have expectations that their interactions in smart cities will resemble the technological interactions they have become familiar with, they will likely be sadly misinformed about the level of control they will have over what personal information they end up sharing.
The typical citizen understands that “choosing convenience” when engaging with technology can correspond to a decrease in personal privacy. On at least some level, this is intended to be a choice. Most users may not choose to carefully read a privacy policy on a smartphone application or a website; however, if that policy is well written and compliant, the user can exercise a right to decide whether they consent to the terms and wish to engage with the company.
The right to choose what personal information you exchange for services is lost in the smart city.
Theoretically, the smart city can bypass this right because municipal government services are subject to provincial public-sector privacy legislation, which can ultimately entail informing citizens their personal information is being collected by way of a notice.
However, the assumption that smart city projects are solely controlled by the public sector is questionable and verges on problematic. Most smart-city projects in Canada are run via public-private partnerships as municipal governments lack both the budget and the expertise to implement the technology system. Private companies can have leading roles in designing, building, financing, operating and maintaining smart-city projects. In the process, they can also have a large degree of control over the data that is created and used.
In some countries, these partnerships can even result in an unprecedented level of privatization. For example, Cisco Systems debatably has a larger claim over Songdo’s development than the South Korean government. Smart-city public-private partnership can have complex implications for data control even when both partners are highly engaged. Trapeze, a private-sector company in transportation software, cautions the public sector on the unintended transfer of data control when electing private providers to operate data systems in a partnership….
When the typical citizen enters a smart city, they will not know (1) what personal information is being collected, nor will they know (2) who is collecting it. The former is an established requirement of informed consent, and the latter has debatably never been an issue until the development of smart cities.
While similar privacy issues are playing out in smart cities all around the world, Canada must take steps to determine how its own specific privacy legal structure is going to play a role in responding to these privacy issues in our own emerging smart-city projects…
Is Ethical A.I. Even Possible?
Cade Metz at The New York Times: “When a news article revealed that Clarifai was working with the Pentagon and some employees questioned the ethics of building…
“Clarifai’s mission is to accelerate the progress of humanity with continually improving A.I.,” read a blog post from Matt Zeiler, the company’s founder and chief executive, and a prominent A.I. researcher. Later, in a news media interview, Mr. Zeiler announced a new management position that would ensure all company projects were ethically sound.
As activists, researchers, and journalists voice concerns over the rise of artificial intelligence, warning against biased, deceptive and malicious applications, the companies building this technology are responding. From tech giants like Google and Microsoft to scrappy A.I. start-ups, many are creating corporate principles meant to ensure their systems are designed and deployed in an ethical way. Some set up ethics officers or review boards to oversee these principles.
But tensions continue to rise as some question whether these promises will ultimately be kept. Companies can change course. Idealism can bow to financial pressure. Some activists — and even some companies — are beginning to argue that the only way to ensure ethical practices is through government regulation.
As companies and governments deploy these A.I. technologies, researchers are also realizing that some systems are woefully biased. Facial recognition services, for instance, can be significantly less accurate when trying to identify women or someone with darker skin. Other systems may include security holes unlike any seen in the past. Researchers have shown that driverless cars can be fooled into seeing things that are not really there.
All this means that building ethical artificial intelligence is an enormously complex task. It gets even harder when stakeholders realize that ethics are in the eye of the beholder.
As some Microsoft employees protest the company’s military contracts, Microsoft’s president, Brad Smith, said that American tech companies had long supported the military and that they must continue to do so. “The U.S. military is charged with protecting the freedoms of this country,” he told the conference. “We have to stand by the people who are risking their lives.”
Though some Clarifai employees draw an ethical line at autonomous weapons, others do not. Mr. Zeiler argued that autonomous weapons will ultimately save lives because they would be more accurate than weapons controlled by human operators. “A.I. is an essential tool in helping weapons become more accurate, reducing collateral damage, minimizing civilian casualties and friendly fire incidents,” he said in a statement.
Google worked on the same Pentagon project as Clarifai, and after a protest from company employees, the tech giant ultimately ended its involvement. But like Clarifai, as many as 20 other companies have worked on the project without bowing to ethical concerns.
After the controversy over its Pentagon work, Google laid down a set of “A.I. principles” meant as a guide for future projects. But even with the corporate rules in place, some employees left the company in protest. The new principles are open to interpretation. And they are overseen by executives who must also protect the company’s financial interests….
In their open letter, the Clarifai employees said they were unsure whether…
EU Data Protection Rules and U.S. Implications
In Focus by the Congressional Research Service: “
Both the United States and the 28-member EU assert that they are committed to upholding individual privacy rights and ensuring the protection of personal data, including electronic data. However, data privacy and protection issues have long been sticking points in U.S.-EU economic and security relations, in part because of differences in U.S. and EU legal regimes and approaches to data privacy.
The GDPR highlights some of those differences and poses challenges for U.S. companies doing business in the EU. The United States does not broadly restrict cross-border data flows and has traditionally regulated privacy at a sectoral level to cover certain types of data. The EU considers the privacy of communications and the protection of personal data to be fundamental rights, which are codified in EU law. Europe’s history with fascist and totalitarian regimes informs the EU’s views on data protection and contributes to the demand for strict data privacy controls. The EU regards current U.S. data protection safeguards as inadequate; this has complicated the conclusion of U.S.-EU information-sharing agreements and raised concerns about U.S.-EU data flows…(More)”.