TwitterSensing: An Event-Based Approach for Wireless Sensor Networks Optimization Exploiting Social Media in Smart City Applications


Paper by Daniel G. Costa et al. in Sensors: “Modern cities are subject to periodic or unexpected critical events, which may bring economic losses or even put people in danger. When some monitoring systems based on wireless sensor networks are deployed, sensing and transmission configurations of sensor nodes may be adjusted exploiting the relevance of the considered events, but efficient detection and classification of events of interest may be hard to achieve.

In Smart City environments, several people spontaneously post information in social media about some event that is being observed and such information may be mined and processed for detection and classification of critical events. This article proposes an integrated approach to detect and classify events of interest posted in social media, notably in Twitter, and the assignment of sensing priorities to source nodes. By doing so, wireless sensor networks deployed in Smart City scenarios can be optimized for higher efficiency when monitoring areas under the influence of the detected events….(More)”.
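The excerpt describes the general pipeline (detect an event of interest in social media posts, classify it, then raise the sensing priority of source nodes in the affected area) without giving code. Below is a minimal, hypothetical Python sketch of that idea, assuming a toy keyword-based classifier and invented event categories and priority levels; it is not the authors' actual detection or classification method.

```python
# Illustrative sketch only: a toy keyword-based classifier that maps tweets
# about city events to sensing priorities for source nodes. Keywords,
# categories, and priority levels are invented for demonstration.

EVENT_KEYWORDS = {
    "fire":     ["fire", "smoke", "burning"],
    "flood":    ["flood", "flooding", "water rising"],
    "accident": ["crash", "accident", "collision"],
}

# Higher priority -> nodes monitoring the affected area sense and transmit
# more aggressively.
EVENT_PRIORITY = {"fire": 3, "flood": 3, "accident": 2}


def classify_tweet(text):
    """Return the detected event type, or None if no keyword matches."""
    lowered = text.lower()
    for event_type, keywords in EVENT_KEYWORDS.items():
        if any(keyword in lowered for keyword in keywords):
            return event_type
    return None


def assign_priorities(geotagged_tweets, default_priority=1):
    """Map each monitored area to a sensing priority based on its tweets."""
    priorities = {}
    for area, text in geotagged_tweets:  # (area_id, tweet_text) pairs
        event = classify_tweet(text)
        if event is not None:
            level = EVENT_PRIORITY.get(event, default_priority)
            priorities[area] = max(priorities.get(area, default_priority), level)
    return priorities


if __name__ == "__main__":
    sample = [
        ("downtown", "Huge fire near the market, lots of smoke"),
        ("riverside", "Street flooding after the storm"),
    ]
    print(assign_priorities(sample))  # {'downtown': 3, 'riverside': 3}
```

In the setting the abstract describes, such priorities would then drive how sensing and transmission configurations of the nodes covering the affected areas are adjusted.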

Practical approaches to big data privacy over time


Micah Altman, Alexandra Wood, David R. O’Brien and Urs Gasser in International Data Privacy Law: “

  • Governments and businesses are increasingly collecting, analysing, and sharing detailed information about individuals over long periods of time.
  • Vast quantities of data from new sources and novel methods for large-scale data analysis promise to yield deeper understanding of human characteristics, behaviour, and relationships and advance the state of science, public policy, and innovation.
  • At the same time, the collection and use of fine-grained personal data over time is associated with significant risks to individuals, groups, and society at large.
  • This article examines a range of long-term research studies in order to identify the characteristics that drive their unique sets of risks and benefits and the practices established to protect research data subjects from long-term privacy risks.
  • We find that many big data activities in government and industry settings have characteristics and risks similar to those of long-term research studies, but are subject to less oversight and control.
  • We argue that the risks posed by big data over time can best be understood as a function of temporal factors comprising age, period, and frequency and non-temporal factors such as population diversity, sample size, dimensionality, and intended analytic use.
  • Increasing complexity in any of these factors, individually or in combination, creates heightened risks that are not readily addressable through traditional de-identification and process controls.
  • We provide practical recommendations for big data privacy controls based on the risk factors present in a specific case and informed by recent insights from the state of the art and practice….(More)”.

Citizen Sensing: A Toolkit


Book from Making Sense: “Collaboration using open-source technologies makes it possible to create new and powerful forms of community action, social learning and citizenship. There are now widely accessible platforms through which we can come together to make sense of urgent challenges, and discover ways to address these. Together we can shape our streets, neighbourhoods, cities and countries – and in turn, shape our future. You can join with others to become the solution to challenges in our environment, in our communities and in the way we live together.

In this book, there are ideas and ways of working that can help you build collective understanding and inspire others to take action. By coming together with others on issues you identify and define yourselves, and by designing and using the right tools collaboratively, both your awareness and ability to act will be improved. In the process, everyone involved will have better insights, better arguments and better discussions; sometimes to astonishing effect!

We hope this book will help you engage people to learn more about an issue that concerns you, support you to take action, and change the world for the better. This resource will teach you how to scope your questions, identify and nurture relevant communities, and plan an effective campaign. It will then help you gather data and evidence, interpret your findings, build awareness and achieve tangible outcomes. Finally, it will show you how to reflect on these outcomes, and offers suggestions on how you can leave a lasting legacy.

This book is intended to help community activists who are curious or concerned about one or more issues, whether local or global, and are motivated to take action. This resource can also be of value to professionals in organisations which support community actions and activists. Finally, this book will be of interest to researchers in the fields of citizen science, community activism and participatory sensing, government officials and other public policy actors who wish to include citizens’ voices in the decision-making process…(More)”.

The Potential and Practice of Data Collaboratives for Migration


Essay by Stefaan Verhulst and Andrew Young in the Stanford Social Innovation Review: “According to recent United Nations estimates, there are globally about 258 million international migrants, meaning people who live in a country other than the one in which they were born; this represents an increase of 49 percent since 2000. Of those, 26 million people have been forcibly displaced across borders, having migrated either as refugees or asylum seekers. An additional 40 million or so people are internally displaced due to conflict and violence, and millions more are displaced each year because of natural disasters. It is sobering, then, to consider that, according to many observers, global warming is likely to make the situation worse.

Migration flows of all kinds—for work, family reunification, or political or environmental reasons—create a range of both opportunities and challenges for nation states and international actors. But the issues associated with refugees and asylum seekers are particularly complex. Despite the high stakes and increased attention to the issue, our understanding of the full dimensions and root causes of refugee movements remains limited. Refugee flows arise in response to not only push factors like wars and economic insecurity, but also powerful pull factors in recipient countries, including economic opportunities, and perceived goods like greater tolerance and rule of law. In addition, more objectively measurable variables like border barriers, topography, and even the weather, play an important role in determining the number and pattern of refugee flows. These push and pull factors interact in complex and often unpredictable ways. Further complicating matters, some experts argue that push-pull research on migration is dogged by a number of conceptual and methodological limitations.

To mitigate negative impacts and anticipate opportunities arising from high levels of global migration, we need a better understanding of the various factors contributing to the international movement of people and how they work together.

Data—specifically, the widely dispersed data sets that exist across governments, the private sector, and civil society—can help alleviate today’s information shortcoming. Several recent initiatives show the potential of using data to address some of the underlying informational gaps. In particular, there is an important role for a new form of data-driven problem-solving and policymaking—what we call “data collaboratives.” Data collaboratives offer the potential for inter-sectoral collaboration, and for the merging and augmentation of otherwise siloed data sets. While public and private actors are increasingly experimenting with various types of data in a variety of sectors and geographies—including sharing disease data to accelerate disease treatments and leveraging private bus data to improve urban planning—we are only beginning to understand the potential of data collaboration in the context of migration and refugee issues….(More)”.

Democracy is in danger when the census undercounts vulnerable populations


Emily Klancher Merchant at The Conversation: “The 2020 U.S. Census is still two years away, but experts and civil rights groups are already disputing the results. At issue is whether the census will fulfill the Census Bureau’s mandate to “count everyone once, only once, and in the right place.”

The task is hardly as simple as it seems and has serious political consequences. Recent changes to the 2020 census, such as asking about citizenship status, will make populations already vulnerable to undercounting even more likely to be missed. These vulnerable populations include the young, poor, nonwhite, non-English-speaking, foreign-born and transient.

An accurate count is critical to the functioning of the U.S. government. Census data determine how the power and resources of the federal government are distributed across the 50 states. This includes seats in the House, votes in the Electoral College and funds for federal programs. Census data also guide the drawing of congressional and other voting districts and the enforcement of civil and voting rights laws.

Places where large numbers of people go uncounted get less than their fair share of political representation and federal resources. When specific racial and ethnic groups are undercounted, it is harder to identify and rectify violations of their civil rights. My research on the international history of demography demonstrates that the question of how to equitably count the population is not new, nor is it unique to the United States. The experience of the United States and other countries may hold important lessons as the Census Bureau finalizes its plans for the 2020 count.

Let’s take a look at that history….

In 1790, the United States became the first country to take a regular census. Following World War II, the U.S. government began to promote census-taking in other countries. U.S. leaders believed data about the size and location of populations throughout the Western Hemisphere could help the government plan defense. What’s more, U.S. businesses could also use the data to identify potential markets and labor forces in nearby countries.

The U.S. government began investing in a program called the Census of the Americas. Through this program, the State Department provided financial support and the Census Bureau provided technical assistance to Western Hemisphere countries taking censuses in 1950.

United Nations demographers also viewed the Census of the Americas as an opportunity. Data that were standardized across countries could serve as the basis for projections of world population growth and the calculation of social and economic indicators. They also hoped that censuses would provide useful information to newly established governments. The U.N. turned the Census of the Americas into a global affair, recommending that “all Member States planning population censuses about 1950 use comparable schedules so far as possible.” Since 1960, the U.N. has sponsored a World Census Program every 10 years. The 2020 World Census Program will be the seventh round….

Not all countries went along with the program. For example, Lebanon’s Christian rulers feared that a census would show Christians to be a minority, undermining the legitimacy of their government. However, for the 65 sovereign countries taking censuses between 1945 and 1954, leaders faced the same question the U.S. faces today: How can we make sure that everyone has an equal chance of being counted?…(More)”.

What Do State Chief Data Officers Do?


Kil Huh and Sallyann Bergh at The Pew Charitable Trusts: “In 2017, Hurricane Harvey heaped destruction on the state of Texas. With maximum wind speeds clocked at nearly 135 miles per hour, and a record rainfall of more than 60 inches that resulted in 3 to 4 feet of water flooding Houston’s metro area, the state is still recovering from the storm’s devastation. Harvey is among the most expensive U.S. hurricanes on record.

As the storm made landfall, Texas government agencies mapped affected areas in real time to help first responders identify the most vulnerable citizens and places. The state’s Geographic Information Systems (GIS) group shared numerous map updates that informed law enforcement and other government agencies of the hardest hit areas, which enabled the efficient delivery of food, water, and other critical supplies. The group also helped identify safe, dry, “lily pad” areas where helicopters could land, ascertained the best evacuation routes, mapped areas where people were most critically in need of rescue, and analyzed the status of flooded schools to estimate reopenings. Additionally, mapping service data prompted the Sabine River Authority of Texas to dam its pump station before the flooding occurred—which averted $2 million in property losses.

Data from multiple state agencies, used to launch the Google Imagery Project in 2015, made this storm response possible. Furthermore, a crucial element of the state’s preparation was the hiring of a state data coordinator, a job known as chief data officer (CDO) in other states. These positions play a key role in advancing the quality of data used as a strategic asset to support more effective program investments. CDOs create data-driven solutions for intermittent issues like hurricanes and traffic events, as well as for chronic problems like poverty.

In February 2018, The Pew Charitable Trusts’ project on data as a strategic asset published a 50-state report, “How States Use Data to Inform Decisions,” which explores the five key actions that promote data-driven decision-making in states: planning ahead, building capacity, sharing data, analyzing data to create meaningful information, and sustaining data efforts to enhance their capabilities. CDOs have helped states implement these steps to support more data-informed decision-making, and states are increasingly acknowledging the important role this position plays in governance efforts….(More)”.

On Digital Passages and Borders: Refugees and the New Infrastructure for Movement and Control


Paper by Mark Latonero and Paula Kift: “Since 2014, millions of refugees and migrants have arrived at the borders of Europe. This article argues that, in making their way to safe spaces, refugees rely not only on a physical but increasingly also digital infrastructure of movement. Social media, mobile devices, and similar digitally networked technologies comprise this infrastructure of “digital passages”—sociotechnical spaces of flows in which refugees, smugglers, governments, and corporations interact with each other and with new technologies. At the same time, a digital infrastructure for movement can just as easily be leveraged for surveillance and control. European border policies, in particular, instantiate digital controls over refugee movement and identity. We review the actors, technologies, and policies of movement and control in the EU context and argue that scholars, policymakers, and the tech community alike should pay heed to the ethics of the use of new technologies in refugee and migration flows….(More)”.

Replicating the Justice Data Lab in the USA: Key Considerations


Blog by Tracey Gyateng and Tris Lumley: “Since 2011, NPC has researched, supported and advocated for the development of impact-focussed Data Labs in the UK. The goal has been to unlock government administrative data so that organisations (primarily nonprofits) who provide a social service can understand the impact of their services on the people who use them.

So far, one of these Data Labs has been developed to measure re-offending outcomes (the Justice Data Lab), and others are currently being piloted for employment and education. Given our seven years of work in this area, we at NPC have decided to reflect on the key factors needed to create a Data Lab in our report: How to Create an Impact Data Lab. This blog outlines these factors, examines whether they are present in the USA, and asks what the next steps should be, drawing on the research undertaken with the Governance Lab…. Below we examine the key factors and to what extent they appear to be present within the USA.

Environment: A broad culture that supports impact measurement. Similar to the UK, nonprofits in the USA are increasingly measuring the impact they have had on the participants of their service and sharing the difficulties of undertaking robust, high quality evaluations.

Data: Individual person-level administrative data. A key difference between the two countries is that, in the USA, personal data on social services tends to be held at a local, rather than central, level. In the UK, social services data such as reoffending, education and employment are collated into a central database. In the USA, the federal government has limited centrally collated personal data; instead, this data can be found at the state/city level….

A leading advocate: A Data Lab project team, and strong networks. Data Labs do not manifest by themselves. They require a lead agency to campaign with, and on behalf of, nonprofits to set out a persuasive case for their development. In the USA, we have developed a partnership with the Governance Lab to seek out opportunities where Data Labs can be established, but given the size of the country, there is scope for further collaborations and/or advocates to be identified and supported.

Customers: Identifiable organisations that would use the Data Lab. Initial discussions with several US nonprofits and academics indicate support for a Data Lab in their context. Broad consultation based on an agreed region and outcome(s) will be needed to fully assess the potential customer base.

Data owners: Engaged civil servants. Generating buy-in and persuading various stakeholders, including data owners, analysts and politicians, is a critical part of setting up a Data Lab. While the exact profiles of the right people to approach can only be assessed once a region and outcome(s) of interest have been chosen, there are encouraging signs, such as the passing of the Foundations for Evidence-Based Policy Making Act of 2017 in the House of Representatives, which, among other things, mandates the appointment of “Chief Evaluation Officers” in government departments, suggesting that there is bipartisan support for increased data-driven policy evaluation.

Legal and ethical governance: A legal framework for sharing data. In the UK, all personal data is subject to data protection legislation, which provides standardised governance for how personal data can be processed across the country and within the European Union. A universal data protection framework does not exist within the USA; therefore, data sharing agreements between customers and government data owners will need to be designed for the purposes of Data Labs, unless there are existing agreements that enable data sharing for research purposes. This will need to be investigated at the state/city level of a desired Data Lab.

Funding: Resources and support for driving the set-up of the Data Lab. Most of our policy lab case studies were funded by a mixture of philanthropy and government grants. It is expected that a similar mixed funding model will need to be created to establish Data Labs. One alternative is the model adopted by the Washington State Institute for Public Policy (WSIPP), which was created by the Washington State Legislature and is funded on a project basis, primarily by the state. Additionally, funding will be needed to enable advocates of a Data Lab to campaign for the service….(More)”.

Algorithmic Sovereignty


Thesis by Denis Roio: “This thesis describes a practice-based research journey across various projects dealing with the design of algorithms, to highlight the governance implications of the design choices made in them. The research provides answers and documents methodologies to address the urgent need for more awareness of decisions made by algorithms about the social and economic context in which we live. Algorithms constitute a foundational basis across different fields of study: policy making, governance, art and technology. The ability to understand what is inscribed in such algorithms, what the consequences of their execution are, and what agency is left for the living world is crucial. Yet there is a lack of interdisciplinary and practice-based literature, while specialised treatises are too narrow to relate to the broader context in which algorithms are enacted.

This thesis advances the awareness of algorithms and related aspects of sovereignty through a series of projects documented as participatory action research. One of the projects described, Devuan, leads to the realisation of a new, world-renowned operating system. Another project, “sup”, consists of a minimalist approach to mission-critical software and literate programming to enhance the security and reliability of applications. Another project, D-CENT, consisted of a three-year path of cutting-edge research, funded by the EU Commission, on the emerging dynamics of participatory democracy connected to the technologies adopted by citizen organizations.

My original contribution to knowledge lies in the function that the research underpinning these projects has in enabling a better understanding of the sociopolitical aspects connected to the design and management of algorithms. It suggests that we can improve the design and regulation of future public, private and common spaces, which are increasingly governed by algorithms, by understanding not only the economic and legal implications, but also the connections between design choices and the sociopolitical context for their development and execution….(More)”.

How Democracy Can Survive Big Data


Colin Koopman in The New York Times: “…The challenge of designing ethics into data technologies is formidable. This is in part because it requires overcoming a century-long ethos of data science: Develop first, question later. Datafication first, regulation afterward. A glimpse at the history of data science shows as much.

The techniques that Cambridge Analytica uses to produce its psychometric profiles are the cutting edge of data-driven methodologies first devised 100 years ago. The science of personality research was born in 1917. That year, in the midst of America’s fevered entry into war, Robert Sessions Woodworth of Columbia University created the Personal Data Sheet, a questionnaire that promised to assess the personalities of Army recruits. The war ended before Woodworth’s psychological instrument was ready for deployment, but the Army had envisioned its use according to the precedent set by the intelligence tests it had been administering to new recruits under the direction of Robert Yerkes, a professor of psychology at Harvard at the time. The data these tests could produce would help decide who should go to the fronts, who was fit to lead and who should stay well behind the lines.

The stakes of those wartime decisions were particularly stark, but the aftermath of those psychometric instruments is even more unsettling. As the century progressed, such tests — I.Q. tests, college placement exams, predictive behavioral assessments — would affect the lives of millions of Americans. Schoolchildren who may have once or twice acted out in such a way as to prompt a psychometric evaluation could find themselves labeled, setting them on an inescapable track through the education system.

Researchers like Woodworth and Yerkes (or their Stanford colleague Lewis Terman, who formalized the first SAT) did not anticipate the deep consequences of their work; they were too busy pursuing the great intellectual challenges of their day, much like Mr. Zuckerberg in his pursuit of the next great social media platform. Or like Cambridge Analytica’s Christopher Wylie, the twentysomething data scientist who helped build psychometric profiles of two-thirds of all Americans by leveraging personal information gained through uninformed consent. All of these researchers were, quite understandably, obsessed with the great data science challenges of their generation. Their failure to consider the consequences of their pursuits, however, is not so much their fault as it is our collective failing.

For the past 100 years we have been chasing visions of data with a singular passion. Many of the best minds of each new generation have devoted themselves to delivering on the inspired data science promises of their day: intelligence testing, building the computer, cracking the genetic code, creating the internet, and now this. We have in the course of a single century built an entire society, economy and culture that runs on information. Yet we have hardly begun to engineer data ethics appropriate for our extraordinary information carnival. If we do not do so soon, data will drive democracy, and we may well lose our chance to do anything about it….(More)”.