Social Media Use in Crisis and Risk Communication: Emergencies, Concerns and Awareness


Open Access Book edited by Harald Hornmoen and Klas Backholm: “This book is about how different communicators – whether professionals, such as crisis managers, first responders and journalists, or private citizens and disaster victims – have used social media to communicate about risks and crises. It is also about how these very different actors can play a crucial role in mitigating or preventing crises. How can they use social media to strengthen their own and the public’s awareness and understanding of crises when they unfold? How can they use social media to promote resilience during crises and the ability to deal with the after-effects? Moreover, what can they do to avoid using social media in a manner that weakens the situation awareness of crisis workers and citizens, or obstructs effective emergency management?

The RESCUE (Researching Social Media and Collaborative Software Use in Emergency Situations) project, on which this book is based, has sought to enable a more efficient and appropriate use of social media among key communicators, such as journalists and government actors involved in crisis management. Through empirical studies, and by drawing on relevant theory, the collection aims to improve our understanding of how social media have been used in different types of risks and crises. Building on our empirical work, we provide research-based input into how social media can be used efficiently by different communicators in a way appropriate to the specific crisis and to the concerns of the public.

We address our questions by presenting new research-based knowledge on social media use during different crises: the terrorist attacks in Norway on 22 July 2011; the central European floods in Austria in 2013; and the West African Ebola outbreak in 2014. The social media platforms analysed include the most popular ones in the affected areas at the time of the crises: Twitter and Facebook. By addressing such different cases, the book will move the field of crisis communication in social media beyond individual studies towards providing knowledge which is valid across situations….(More)”.

Revisiting the governance of privacy: Contemporary policy instruments in global perspective


Colin J. Bennett and Charles D. Raab at Regulation & Governance: “The repertoire of policy instruments within a particular policy sector varies by jurisdiction; some “tools of government” are associated with particular administrative and regulatory traditions and political cultures. It is less clear how the instruments associated with a particular policy sector may change over time, as economic, social, and technological conditions evolve.

In the early 2000s, we surveyed and analyzed the global repertoire of policy instruments deployed to protect personal data. In this article, we explore how those instruments have changed as a result of 15 years of social, economic and technological transformations, during which the issue has assumed a far higher global profile, as one of the central policy questions associated with modern networked communications.

We review the contemporary range of transnational, regulatory, self‐regulatory, and technical instruments according to the same framework, and conclude that the types of policy instrument have remained relatively stable, even though they are now deployed on a global scale.

While the labels remain the same, however, the conceptual foundations for their legitimation and justification are shifting as greater emphases on accountability, risk, ethics, and the social/political value of privacy have gained purchase. Our analysis demonstrates both continuity and change within the governance of privacy, and displays how we would have tackled the same research project today.

As a broader case study of regulation, it highlights the importance of going beyond technical and instrumental labels. Change or stability of policy instruments does not take place in isolation from the wider conceptualizations that shape their meaning, purpose, and effect…(More)”.

The Role of Urban Living Labs in Entrepreneurship, Energy, and Governance of Smart Cities


Chapter by Ana Pego and Maria do Rosário Matos Bernardo in Handbook of Research on Entrepreneurship and Marketing for Global Reach in the Digital Economy: “Urban living labs (ULL) are a new concept that involves users in innovation and development, and they are regarded as a way of meeting the innovation challenges faced by information and communication technology (ICT) service providers.

The chapter focuses on the role of urban living labs in entrepreneurship, energy and governance of smart cities, examining the relationship between innovation, governance, and renewable energy. The methodology proposed will focus on content analysis and on the exploration of some European examples of implemented ULL, namely Amsterdam, Helsinki, Stockholm and Copenhagen. The contributions of the present research should be the consolidation of knowledge about the impact of ULL on innovation and development of smart cities regarding the concepts of renewable energy, smart governance and entrepreneurship….(More)”

Future Politics: Living Together in a World Transformed by Tech


Book by Jamie Susskind: “Future Politics confronts one of the most important questions of our time: how will digital technology transform politics and society? The great political debate of the last century was about how much of our collective life should be determined by the state and what should be left to the market and civil society. In the future, the question will be how far our lives should be directed and controlled by powerful digital systems — and on what terms?

Jamie Susskind argues that rapid and relentless innovation in a range of technologies — from artificial intelligence to virtual reality — will transform the way we live together. Calling for a fundamental change in the way we think about politics, he describes a world in which certain technologies and platforms, and those who control them, come to hold great power over us. Some will gather data about our lives, causing us to avoid conduct perceived as shameful, sinful, or wrong. Others will filter our perception of the world, choosing what we know, shaping what we think, affecting how we feel, and guiding how we act. Still others will force us to behave in certain ways, like self-driving cars that refuse to drive over the speed limit.

Those who control these technologies — usually big tech firms and the state — will increasingly control us. They will set the limits of our liberty, decreeing what we may do and what is forbidden. Their algorithms will resolve vital questions of social justice, allocating social goods and sorting us into hierarchies of status and esteem. They will decide the future of democracy, causing it to flourish or decay.

A groundbreaking work of political analysis, Future Politics challenges readers to rethink what it means to be free or equal, what it means to have power or property, what it means for a political system to be just or democratic, and proposes ways in which we can — and must — regain control….(More)”.

Emerging Labour Market Data Sources towards Digital Technical and Vocational Education and Training (TVET)


Paper by Nikos Askitas, Rafik Mahjoubi, Pedro S. Martins, Koffi Zougbede for Paris21/OECD: “Experience from both technology and policy making shows that solutions for labour market improvements are simply choices of new, more tolerable problems. All data solutions supporting digital Technical and Vocational Education and Training (TVET) will have to incorporate a roadmap of changes rather than an unrealistic super-solution. The ideal situation is a world in which labour market participants engage in intelligent strategic behavior in an informed, fair and sophisticated manner.

Labour market data captures transactions within labour market processes. In order to successfully capture such data, we need to understand the specifics of these market processes. Designing an ecosystem of labour market matching facilitators and rules of engagement for contributing to a lean and streamlined Labour Market Information System (LMIS) is the best way to create Big Data with context relevance. This is in contrast with pre-existing Big Data captured by global job boards or social media, whose relevance is limited by the technology access gap and its variations across the developing world.

Network effects occur in technology and job facilitation, as seen in the developed world. Managing and instigating the right network effects might be crucial to avoid fragmented stagnation and inefficiency. This is key to avoid throwing money behind wrong choices that do not gain traction.

A mixed mode approach is possibly the ideal approach for developing countries. Mixing offline and online elements correctly will be crucial in bridging the technology access gap and reaping the benefits of digitisation at the same time.

Properly incentivising the various entities is critical for progress. This applies in particular to the private sector, which is significantly more agile and inventive, has “skin in the game” and a long-term commitment to conditions in the field, has intimate knowledge of how to close the technology gap, and brings a better understanding of the particular ambient context in which it operates. To summarise: Big Data starts small.

Managing expectations and creating incentives for the various stakeholders will be crucial in establishing digitally supported TVET. Developing the right business models will be crucial in the short term and beyond, and it will be the result of creating the right mix of technological and policy expertise with good knowledge of the situation on the ground….(More)”.

Crowdsourced social media data for disaster management: Lessons from the PetaJakarta.org project


R.I.Ogie, R.J.Clarke, H.Forehead and P.Perez in Computers, Environment and Urban Systems: “The application of crowdsourced social media data in flood mapping and other disaster management initiatives is a burgeoning field of research, but not one that is without challenges. In identifying these challenges and in making appropriate recommendations for future direction, it is vital that we learn from the past by taking a constructively critical appraisal of highly-praised projects in this field, which through real-world implementations have pioneered the use of crowdsourced geospatial data in modern disaster management. These real-world applications represent natural experiments, each with myriads of lessons that cannot be easily gained from computer-confined simulations.

This paper reports on lessons learnt from a 3-year implementation of a highly-praised project: the PetaJakarta.org project. The lessons presented derive from the key success factors and the challenges associated with the PetaJakarta.org project. To contribute to addressing some of the identified challenges, desirable characteristics of future social media-based disaster mapping systems are discussed. It is envisaged that the lessons and insights shared in this study will prove invaluable within the broader context of designing socio-technical systems for crowdsourcing and harnessing disaster-related information….(More)”.

To turn the open data revolution from idea to reality, we need more evidence


Stefaan Verhulst at apolitical: “The idea that we are living in a data age — one characterised by unprecedented amounts of information with unprecedented potential — has become mainstream. We regularly read “data is the new oil,” or “data is the most valuable commodity in the global economy.”

Doubtlessly, there is truth in these statements. But a major, often unacknowledged problem is how much data remains inaccessible, hidden in siloes and behind walls.

For close to a decade, the technology and public interest community has pushed the idea of open data. At its core, open data represents a new paradigm of information and information access.

Rooted in notions of an information commons — developed by scholars like Nobel Prize winner Elinor Ostrom — and borrowing from the language of open source, open data begins from the premise that data collected from the public, often using public funds or publicly funded infrastructure, should also belong to the public — or at least, be made broadly accessible to those pursuing public-interest goals.

The open data movement has reached significant milestones in its short history. An ever-increasing number of governments across both developed and developing economies have released large datasets for the public’s benefit….

Similarly, a growing number of private companies have established “Data Collaboratives”, leveraging their data — with various degrees of limitation — to serve the public interest.

Despite such initiatives, many open data projects (and data collaboratives) remain fledgling. The field has trouble scaling projects beyond initial pilots. In addition, many potential stakeholders — private sector and government “owners” of data, as well as public beneficiaries — remain sceptical of open data’s value. Such limitations need to be overcome if open data and its benefits are to spread. We need hard evidence of its impact.

Ironically, the field is held back by an absence of good data on open data — that is, a lack of reliable empirical evidence that could guide new initiatives.

At the GovLab, a do-tank at New York University, we study the impact of open data. One of our overarching conclusions is that we need a far more solid evidence base to move open data from being a good idea to reality.

What do we know? Several initiatives undertaken at the GovLab offer insight. Our ODImpact website now includes more than 35 detailed case studies of open government data projects. These examples provide powerful evidence not only that open data can work but also about how it works….

We have also launched an Open Data Periodic Table to better understand what conditions predispose an open data project toward success or failure. For example, having a clear problem definition, as well as the capacity and culture to carry out open data projects, is vital. Successful projects also build cross-sector partnerships around open data and its potential uses, establish practices to assess and mitigate risks, and have transparent and responsive governance structures….(More)”.

Google is using AI to predict floods in India and warn users


James Vincent at The Verge: “For years Google has warned users about natural disasters by incorporating alerts from government agencies like FEMA into apps like Maps and Search. Now, the company is making predictions of its own. As part of a partnership with the Central Water Commission of India, Google will now alert users in the country about impending floods. The service is currently available only in the Patna region, with the first alert going out earlier this month.

As Google’s engineering VP Yossi Matias outlines in a blog post, these predictions are being made using a combination of machine learning, rainfall records, and flood simulations.

“A variety of elements — from historical events, to river level readings, to the terrain and elevation of a specific area — feed into our models,” writes Matias. “With this information, we’ve created river flood forecasting models that can more accurately predict not only when and where a flood might occur, but the severity of the event as well.”
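
The post does not publish Google’s model, but the ingredients Matias lists (rainfall records, river-gauge readings, terrain and elevation) map naturally onto a supervised-learning setup. As a hedged illustration, a minimal sketch might look like the following; the feature set, the synthetic data and the choice of a gradient-boosted regressor are assumptions for demonstration, not Google’s actual system:

```python
# Illustrative sketch only: features, data and model choice are assumptions,
# not Google's flood-forecasting system.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000

# Synthetic stand-ins for the kinds of inputs described in the post.
X = np.column_stack([
    rng.gamma(2.0, 20.0, n),   # recent rainfall (mm)
    rng.normal(3.0, 1.0, n),   # river gauge level (m)
    rng.uniform(50, 500, n),   # terrain elevation (m)
    rng.uniform(0, 1, n),      # fraction of the basin already saturated
])
# Toy "severity" target: low-lying, saturated areas with heavy rain flood worse.
y = (0.02 * X[:, 0] + 1.5 * X[:, 1] - 0.01 * X[:, 2] + 3.0 * X[:, 3]
     + rng.normal(0, 0.5, n))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))
```

A production system would of course train on historical gauge readings and hydraulic flood simulations rather than synthetic data, and would feed its severity estimates into the alerting pipeline described above.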

The US tech giant announced its partnership with the Central Water Commission back in June. The two organizations agreed to share technical expertise and data to work on the predictions, with the Commission calling the collaboration a “milestone in flood management and in mitigating the flood losses.” Such warnings are particularly important in India, where 20 percent of the world’s flood-related fatalities are estimated to occur….(More)”.

The New York City Business Atlas: Leveling the Playing Field for Small Businesses with Open Data


Chapter by Stefaan Verhulst and Andrew Young in Smarter New York City:How City Agencies Innovate. Edited by André Corrêa d’Almeida: “While retail entrepreneurs, particularly those operating in the small-business space, are experts in their respective trades, they often lack access to high-quality information about social, environmental, and economic conditions in the neighborhoods where they operate or are considering operating.

The New York City Business Atlas, conceived by the Mayor’s Office of Data Analytics (MODA) and the Department of Small Business Services, is designed to alleviate that information gap by providing a public web-based tool that gives small businesses access to high-quality data to help them decide where to establish a new business or expand an existing one. The tool brings together a diversity of data, including business-filing data from the Department of Consumer Affairs, sales-tax data from the Department of Finance, demographic data from the census, and traffic data from Placemeter, a New York City startup focusing on real-time traffic information.
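
To make the data-integration idea concrete, the sketch below joins toy versions of such sources on a shared geographic key. The column names, the ZIP-code join key and the scoring heuristic are illustrative assumptions, not the Business Atlas implementation:

```python
# Toy sketch: joining illustrative datasets on a shared geography key.
# Columns and the scoring heuristic are assumptions for demonstration.
import pandas as pd

filings = pd.DataFrame({"zip": ["10001", "10002", "10003"],
                        "new_business_filings": [120, 45, 80]})
sales_tax = pd.DataFrame({"zip": ["10001", "10002", "10003"],
                          "retail_sales_usd_m": [310.5, 90.2, 150.7]})
census = pd.DataFrame({"zip": ["10001", "10002", "10003"],
                       "population": [21000, 78000, 54000],
                       "median_income_usd": [85000, 42000, 61000]})
foot_traffic = pd.DataFrame({"zip": ["10001", "10002", "10003"],
                             "avg_daily_pedestrians": [15400, 8200, 11900]})

atlas = (filings.merge(sales_tax, on="zip")
                .merge(census, on="zip")
                .merge(foot_traffic, on="zip"))

# Naive "opportunity" heuristic: local spending and foot traffic per filing.
atlas["opportunity_score"] = (atlas["retail_sales_usd_m"]
                              * atlas["avg_daily_pedestrians"]
                              / atlas["new_business_filings"])
print(atlas.sort_values("opportunity_score", ascending=False))
```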

The initial iteration of the Business Atlas made useful and previously inaccessible data available to small-business owners and entrepreneurs in an innovative manner. After a few years, however, it became clear that the tool was not experiencing the level of use or creating the level of demonstrable impact anticipated. Rather than continuing down the same path or abandoning the effort entirely, MODA pivoted to a new approach, moving from the Business Atlas as a single information-providing tool to the Business Atlas as a suite of capabilities aimed at bolstering New York’s small-business community.

Through problem- and user-centered efforts, the Business Atlas is now making important insights available to stakeholders who can put them to meaningful use—from how long it takes to open a restaurant in the city to which areas are most in need of education and outreach to improve their code compliance. This chapter considers the open data environment from which the Business Atlas was launched, details the initial version of the Business Atlas and the lessons it generated, and describes the pivot to this new approach….(More)”.

Walmart wants to track lettuce on the blockchain


Matthew Beedham at TNW: “Walmart is asking all of its leafy greens suppliers to get on blockchain by this time next year.

With instances of E. coli on the rise, particularly in romaine lettuce, Walmart is insisting that its suppliers use blockchain to track and trace products from source to the customer.

Walmart notes that, while health officials at the Centers for Disease Control have already warned Americans to avoid eating lettuce grown in Yuma, Arizona, it’s near impossible for consumers to know where their greens are coming from.

On one hand, this could be a great system for reducing waste. Earlier this year, greengrocers had to throw away produce thought to be infected with E. coli.

It would seem that most producers and suppliers still rely on paper-based ledgers. As a result, tracking down vital information about where a product came from can be very time-consuming.

By that time, it might be too late, and many customers may already have purchased and consumed infected produce.
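
The advantage over a paper ledger is that every handoff becomes an append-only, tamper-evident record that can be queried in seconds. The sketch below illustrates the idea with a simple hash-chained log; it is not Walmart’s or any production blockchain system, and the batch IDs, actors and event fields are invented for demonstration:

```python
# Minimal hash-chained provenance log; a conceptual sketch, not Walmart's system.
# Batch IDs, actors and event fields are illustrative assumptions.
import hashlib
import json
import time

class ProvenanceLedger:
    def __init__(self):
        self.blocks = []  # append-only list of supply-chain events

    def add_event(self, batch_id, actor, location, step):
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        record = {"batch_id": batch_id, "actor": actor, "location": location,
                  "step": step, "timestamp": time.time(), "prev_hash": prev_hash}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.blocks.append(record)

    def verify(self):
        """Check that no historical record has been altered."""
        for i, block in enumerate(self.blocks):
            expected_prev = self.blocks[i - 1]["hash"] if i else "0" * 64
            body = {k: v for k, v in block.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if block["prev_hash"] != expected_prev or block["hash"] != recomputed:
                return False
        return True

    def trace(self, batch_id):
        """Return the farm-to-store history of one batch."""
        return [b for b in self.blocks if b["batch_id"] == batch_id]

ledger = ProvenanceLedger()
ledger.add_event("LETTUCE-0421", "Example Farm", "Yuma, AZ", "harvested")
ledger.add_event("LETTUCE-0421", "Example Packer", "Phoenix, AZ", "washed and packed")
ledger.add_event("LETTUCE-0421", "Example Carrier", "Dallas, TX", "shipped")
ledger.add_event("LETTUCE-0421", "Store #5112", "Little Rock, AR", "received")

assert ledger.verify()
for event in ledger.trace("LETTUCE-0421"):
    print(event["step"], "by", event["actor"], "at", event["location"])
```

Shared among suppliers, a record like this would let a recall target only the affected batches rather than an entire region’s produce.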

If Walmart’s plans come to fruition, customers would be able to view the entire supply chain of a product at the point of purchase… (More)”