Article by Lucrezia Lozza: “Marta has lived with a bad smell lingering in her hometown in central Spain, Villanueva del Pardillo, for a long time. Fed up, in 2017 she and her neighbors decided to pursue the issue. “The smell is disgusting,” Marta says, pointing a finger at a local yeast factory.
Originally, she thought of recording the “bad smell days” in a spreadsheet. When this didn’t work out, after some research she found Odour Collect, a crowdsourced map that lets users log geolocated, time-stamped reports of bad smells in their neighborhood.
Odor nuisances are the second most common cause of environmental complaints, after noise. Odor regulations vary among countries, and there’s little legislation about how to manage smells. For instance, in Spain some municipalities regulate odors, but others do not. In the United States, the Environmental Protection Agency does not regulate odor as a pollutant, so states and local jurisdictions are in charge of the issue.
Only after Marta started using Odour Collect to record the unpleasant smells in her town did she discover that the map was part of ‘D-NOSES’, a European project aimed at bringing citizens, industries and local authorities together to monitor and minimize odor nuisances. D-NOSES relies heavily on citizen science: Affected communities gather odor observations through two maps — Odour Collect and Community Maps — with the goal of implementing new policies in their area. D-NOSES launched several pilots in Europe — in Spain, Greece, Bulgaria, and Portugal — and two outside the continent, in Uganda and Chile.
“Citizen science promotes transparency between all the actors,” said Nora Salas Seoane, Social Sciences Researcher at Fundación Ibercivis, one of the partners of D-NOSES…(More)”.
Jennifer Yokoyama at Microsoft: “…The biggest takeaway from our work this past year – and the one thing I hope any reader of this post will take away – is that data collaboration is a spectrum. From the presence (or absence) of data, to how open that data is, to the trust level of the collaboration participants, these factors may call for different configurations and different goals, but they can all lead to more open data and innovative insights and discoveries.
Here are a few other lessons we have learned over the last year:
- Principles set the foundation for stakeholder collaboration: When we launched the Open Data Campaign, we adopted five principles that guide our contributions and commitments to trusted data collaborations: Open, Usable, Empowering, Secure and Private. These principles underpin our participation, but importantly, organizations can build on them to establish responsible ways to share and collaborate around their data. The London Data Commission, for example, established a set of data sharing principles for public- and private-sector organizations to ensure alignment and to guide the participating groups in how they share data.
- There is value in pilot projects: Traditionally, data collaborations with several stakeholders require time – often including a long runway for building the collaboration, plus the time needed to execute on the project and learn from it. However, our experience shows that short-term projects that experiment with and test data collaborations can provide valuable insights. The London Data Commission did exactly that with the launch of four short-term pilot projects. Due to the success of the pilots, the partners are exploring how they can be expanded.
- Open data doesn’t require new data: Sharing data does not always mean sharing newly created data; sometimes data that was shared narrowly can be shared more broadly, made more accessible, or analyzed for a different purpose. Microsoft’s environmental indicator data is an example: it had already been disclosed in certain venues, but was then made available to the Linux Foundation’s OS-Climate Initiative to be consumed through analytics, thereby extending its reach and impact…
To get started, we suggest that emerging data collaborations make use of the wealth of existing resources. When embarking on data collaborations, we leveraged many of the definitions, toolkits and guides from leading organizations in this space. As examples, resources such as the Open Data Institute’s Data Ethics Canvas are extremely useful as a framework to develop ethical guidance. Additionally, The GovLab’s Open Data Policy Lab and Executive Course on Data Stewardship, both supported by Microsoft, highlight important case studies, governance considerations and frameworks when sharing data. If you want to learn more about the exciting work our partners are doing, check out the latest posts from the Open Data Institute and GovLab…(More)”. See also Open Data Policy Lab.
Book by Alison B. Powell: “City life has been reconfigured by our use—and our expectations—of communication, data, and sensing technologies. This book examines the civic use, regulation, and politics of these technologies, looking at how governments, planners, citizens, and activists expect them to enhance life in the city. Alison Powell argues that the de facto forms of citizenship that emerge in relation to these technologies represent sites of contention over how governance and civic power should operate. These become more significant in an increasingly urbanized and polarized world facing new struggles over local participation and engagement. The author moves past the usual discussion of top-down versus bottom-up civic action and instead explains how citizenship shifts in response to technological change and particularly in response to issues related to pervasive sensing, big data, and surveillance in “smart cities.”…(More)”.
Paper by Beatriz Botero Arcila: “Cities in the US have started to enact data-sharing rules and programs to access some of the data that technology companies operating under their jurisdiction – like short-term rental or ride-hailing companies – collect. This information allows cities to adapt to the challenges and benefits of the digital information economy. It allows them to understand what these companies’ impact is on congestion, the housing market, the local job market and even the use of public spaces. It also empowers them to act accordingly by, for example, setting vehicle caps or mandating a tailored minimum pay for gig workers. These companies, however, sometimes argue that sharing this information infringes on their users’ privacy rights and on their own privacy rights, because this information is theirs; it’s part of their business records. The question is thus what those rights are, and whether it should and could be possible for local governments to access that information to advance equity and sustainability, without harming the legitimate privacy interests of both individuals and companies. This Article argues that within current Fourth Amendment doctrine and privacy law there is space for data-sharing programs. Privacy law, however, is being mobilized to alter the distribution of power and welfare between local governments, companies, and citizens within current digital information capitalism, to extend those rights beyond their fair share and preempt permissible data-sharing requests. The Article warns that if the companies succeed in their challenges, privacy law will have helped shield corporate power from regulatory oversight, while still leaving individuals largely unprotected and subordinating local governments further to corporate interests….(More)”.
Book by Claudio Scardovi: “Global cities are facing an almost unprecedented challenge of change. As they re-emerge from the COVID-19 pandemic and get ready to face climate change and other, potentially existential threats, they need to look for new ways to support wealth and wellbeing creation – leveraging Big Data and AI and using them in their physical reality to become greener, more inclusive and resilient, hence sustainable. This book describes how new digital technologies could be used to design digital and physical twins of cities that feed into each other to optimize how cities work and their ability to create new wealth and wellbeing. The book also describes how to increase cities’ social and economic resilience in times of crisis and address their almost fatal weaknesses – as became all too obvious during the recent COVID-19 crisis. The book also presents a framework for a critical discussion of the concept of the “smart city”, suggesting its development into a “cyber” and “meta” one – meaning that not only can digital systems allow physical ones (e.g. cities, citizens, households and companies) to become “smarter”, but the reverse is also true, as offline data and real-life behaviours can support the optimization and development of virtual brains as a sum of big data and artificial intelligence apps all sitting “over the cloud”.
An analysis of the fundamental dynamics of this emerging “info-telligence” economy, and of the potential role of big digital players like Amazon, Google and Facebook, then paves the way for a few strategic forays into how traditional sectors such as financial services, real estate, TMT or health could also evolve, leveraging Big Data and AI in a cyber-physical integrated setting. Finally, a number of thought-provoking use cases that could be designed around individuals, and to improve the success and resilience of households and companies living and working in urban areas, are discussed as an example of one of the most exciting future markets to come: that of global, sustainable cities…(More)”.
Paper by Filippo Candela and Paolo Mulassano: “The paper presents and discusses the method adopted by Compagnia di San Paolo, one of the largest European philanthropic institutions, to monitor developments despite the COVID-19 situation and provide specific input to the decision-making process for dedicated projects. An innovative approach based on the use of daily open data was adopted to monitor the metropolitan area from a multidimensional perspective. Several open data indicators related to the economy, society, culture, environment, and climate were identified and incorporated into the decision support system dashboard. Indicators are presented and discussed to highlight how open data could be integrated into the foundation’s strategic approach and potentially replicated on a large scale by local institutions. Moreover, starting from the lessons learned from this experience, the paper analyzes the opportunities and critical issues surrounding the use of open data, not only to improve the quality of life during the COVID-19 epidemic but also for the effective regulation of society, the participation of citizens, and their well-being….(More)”
Julie Stoner at Library of Congress: “Whether you’ve used an online map to check traffic conditions, a fitness app to track your jogging route, or found photos tagged by location on social media, many of us rely on geospatial data more and more each day. So what are the most common ways geospatial data is created and stored, and how does it differ from how we have stored geographic information in the past?
A primary method for creating geospatial data is to digitize directly from scanned analog maps. After maps are georeferenced, GIS software allows a data creator to manually digitize boundaries, place points, or define areas using the georeferenced map image as a reference layer. The goal of digitization is to capture information carefully stored in the original map and translate it into a digital format. As an example, let’s explore and then digitize a section of this 1914 Sanborn Fire Insurance Map from Eatonville, Washington.
Sanborn Fire Insurance Map from Eatonville, Pierce County, Washington. Sanborn Map Company, October 1914. Geography & Map Division, Library of Congress.
Sanborn Fire Insurance Maps were created to detail the built environment of American towns and cities through the late 19th and early 20th centuries. The creation of these information-dense maps allowed the Sanborn Fire Insurance Company to underwrite insurance agreements without needing to inspect each building in person. Sanborn maps have become incredibly valuable sources of historic information because of the rich geographic detail they store on each page.
When extracting information from analog maps, the digitizer must decide which features will be digitized and how information about those features will be stored. Behind the geometric features created through the digitization process, a table is utilized to store information about each feature on the map. Using the table, we can store information gleaned from the analog map, such as the name of a road or the purpose of a building. We can also quickly calculate new data, such as the length of a road segment. The data in the table can then be put to work in the visual display of the new digital information that has been created. This is often done through symbolization and map labels….(More)”.
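The feature-plus-attribute-table pattern described above can be sketched in a few lines of plain Python (a schematic illustration of the data model, not actual GIS software; the street name, field names, and coordinates are hypothetical, not read from the Eatonville sheet):

```python
import math

# A digitized road feature: geometry (map coordinates, e.g. meters) plus an
# attribute-table row holding information gleaned from the analog map.
road = {
    "geometry": [(0.0, 0.0), (30.0, 40.0), (30.0, 100.0)],  # digitized vertices
    "attributes": {"name": "Mashell Ave", "surface": "unpaved"},  # hand-entered
}

def segment_length(coords):
    """Derive new data from the geometry: total polyline length."""
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(coords, coords[1:]))

# "Quickly calculated" data stored back into the attribute table
# alongside the values transcribed from the scanned map.
road["attributes"]["length_m"] = segment_length(road["geometry"])
print(road["attributes"]["length_m"])  # 110.0
```

A real GIS stores many such rows, one per digitized feature, and drives symbolization and labeling from the attribute columns.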
Paper by Alexandra Albert: “The growth of citizen science and participatory science, where non-professional scientists voluntarily participate in scientific activities, raises questions around the ownership and interpretation of data, issues of data quality and reliability, and new kinds of data literacy. Citizen social science (CSS), as an approach that bridges these fields, calls into question the way in which research is undertaken, as well as who can collect data, what data can be collected, and what such data can be used for. This article outlines a case study—the Empty Houses Project—to explore how CSS plays out in practice, and to reflect on the opportunities and challenges it presents. The Empty Houses Project was set up to investigate how citizens could be mobilised to collect data about empty houses in their local area, so as to potentially contribute towards tackling a pressing policy issue. The study shows how the possibilities of CSS exceed the dominant view of it as a new means of creating data repositories. Rather, it considers how the data produced in CSS is an epistemology, and a politics, not just a realist tool for analysis….(More)”.
Book edited by Grazia Concilio, Paola Pucci, Lieven Raes and Geert Mareels: “This open access book represents one of the key milestones of PoliVisu, an H2020 research and innovation project funded by the European Commission under the call “Policy-development in the age of big data: data-driven policy-making, policy-modelling and policy-implementation”.
It investigates the operational and organizational implications of using the growing amount of available data in policy-making processes, highlighting the experimental dimension of policy making, which, thanks to data, can increasingly be exploited to reach more effective and sustainable decisions.
The first section of the book introduces the key questions highlighted by the PoliVisu project, which still represent operational and strategic challenges in the exploitation of data potentials in urban policy making. The second section explores how data and data visualisations can assume different roles in the different stages of a policy cycle and profoundly transform policy making….(More)”.
Paper by Gabriel Kreindler and Yuhei Miyauchi: “We show how to use commuting flows to infer the spatial distribution of income within a city. A simple workplace choice model predicts a gravity equation for commuting flows whose destination fixed effects correspond to wages. We implement this method with cell phone transaction data from Dhaka and Colombo. Model-predicted income predicts separate income data, at the workplace and residential level, and by skill group. Unlike machine learning approaches, our method does not require training data, yet achieves comparable predictive power. We show that hartals (transportation strikes) in Dhaka reduce commuting more for high model-predicted wage and high-skill commuters….(More)”.
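The gravity step can be sketched with a toy simulation (an illustrative reconstruction of the idea, not the authors' code or data; for clarity the origin effects and distance elasticity are treated as known here, whereas the paper estimates everything jointly from the commuting flows):

```python
import math

# Toy gravity model of commuting: N_ij = exp(alpha_i + beta_j - gamma * d_ij),
# where the destination fixed effects beta_j correspond to workplace wages.
alpha = [0.5, 1.0, 0.2]      # origin (residence) fixed effects
beta  = [2.0, 0.7, 1.4]      # destination (workplace) fixed effects ~ log wages
gamma = 0.8                  # distance-decay parameter
dist  = [[1.0, 2.0, 3.0],
         [2.0, 1.0, 2.5],
         [3.0, 2.5, 1.0]]    # commuting distances d_ij

# Simulated commuting flows between 3 origins and 3 destinations.
flows = [[math.exp(alpha[i] + beta[j] - gamma * dist[i][j]) for j in range(3)]
         for i in range(3)]

# Invert the gravity equation: log N_ij - alpha_i + gamma * d_ij = beta_j,
# then average over origins i to recover each destination fixed effect.
beta_hat = [sum(math.log(flows[i][j]) - alpha[i] + gamma * dist[i][j]
                for i in range(3)) / 3
            for j in range(3)]
print([round(b, 3) for b in beta_hat])  # [2.0, 0.7, 1.4]
```

With noiseless model-generated flows the destination effects are recovered exactly; with real cell-phone-derived flows they are estimated, and it is these estimates that the paper validates against separate workplace and residential income data.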