Using social media in hotel crisis management: the case of bed bugs


Social media has helped to bridge the communication gap between customers and hotels. Bed bug infestations are a growing health crisis and have attracted increasing attention on social media sites. If the crisis is not managed effectively, a bed bug infestation can cause economic loss and reputational damage to hotel properties, ranging from negative comments and complaints to lawsuits. Thus, it is essential for hoteliers to understand the importance of social media in crisis communication and to incorporate social media into hotels’ crisis management plans.

This study serves as one of the first attempts in the hospitality field to offer discussion and recommendations on how hotels can manage the bed bug crisis and other crises of this kind by incorporating social media into their crisis management practices…(More)”

Interactive app lets constituents help balance their city’s budget


Springwise: “In this era of information, political spending and municipal budgets are still often shrouded in confusion and mystery. But a new web app called Balancing Act hopes to change that, by enabling US citizens to see the breakdown of their city’s budget via adjustable, comprehensive pie charts.

Created by Colorado-based consultants Engaged Public, Balancing Act not only shows citizens the current budget breakdown, it also enables them to experiment with hypothetical future budgets, adjusting spending and taxes to suit their own priorities. The project aims to engage and inform citizens about the money that their mayors and governments assign on their behalf, and to allow them to have more of a say in the future of their city. The resource has already been used by Pedro Segarra, Mayor of Hartford, Connecticut, who asked his citizens for their input on how best to balance the city’s USD 49 million budget.
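
In outline, the interaction such a tool offers can be modeled as a set of adjustable spending line items weighed against revenue. The following Python sketch is purely illustrative, with invented categories and figures; it is not Balancing Act’s actual code:

```python
# Toy model of an adjustable city budget, in the spirit of Balancing Act.
# All categories and dollar figures are invented for illustration.
budget = {
    "Education": 180.0,        # millions
    "Public safety": 120.0,
    "Infrastructure": 90.0,
    "Parks & recreation": 35.0,
}
revenue = 400.0  # millions

def deficit(b: dict, rev: float) -> float:
    """Positive value means spending exceeds revenue."""
    return sum(b.values()) - rev

def adjust(b: dict, category: str, delta: float) -> None:
    """Move one 'slider': raise or cut a single line item."""
    b[category] = max(0.0, b[category] + delta)

print(f"Starting deficit: ${deficit(budget, revenue):.1f}M")
adjust(budget, "Infrastructure", -15.0)  # a resident proposes a cut
adjust(budget, "Education", -10.0)
print(f"Deficit after adjustments: ${deficit(budget, revenue):.1f}M")
```

Aggregating many residents’ submitted adjustments would then give officials a picture of collective priorities.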

The system can be used to help governments understand the wants and needs of their constituents, as well as enable citizens to see the bigger picture when it comes to tough or unappealing policies. Eventually it can even be used to create the world’s first crowdsourced budget, giving the public the power to make their preferences heard in a clear, comprehensible way…(More)”

Why Protecting Data Privacy Matters, and When


Anne Russell at Data Science Central: “It’s official. Public concerns over the privacy of data used in digital approaches have reached an apex. Worried about the safety of digital networks, consumers want to regain control over how their data is used. It’s not hard to see why. Look at the extent of coverage of the U.S. Government data breach last month and the sheer growth in the number of attacks against government and others overall. Then there is the increasing coverage of the inherent security flaws built into the internet, through which most of our data flows. The costs of data breaches to individuals, industries, and government are adding up. And users are taking note…
If you’re not sure whether the data fueling your approach will raise privacy and security flags, consider the following. When it comes to data privacy and security, not all data is going to be of equal concern. Much depends on the level of detail in data content, data type, data structure, volume, and velocity, and indeed how the data itself will be used and released.

First, there is the data for which security and privacy have always mattered and for which a well-established body of law is already in place. Foremost among these is classified or national security data, whose usage is highly regulated and enforced. Other data for which there exists a considerable body of international and national law regulating usage includes:

  • Proprietary Data – specifically the data that makes up the intellectual capital of individual businesses and gives them their competitive economic advantage over others, including data protected under copyright, patent, or trade secret laws, and the sensitive, protected data that companies collect on behalf of their customers;
  • Infrastructure Data – data from the physical facilities and systems – such as roads, electrical systems, communications services, etc. – that enable local, regional, national, and international economic activity; and
  • Controlled Technical Data – technical, biological, chemical, and military-related data and research that could be considered of national interest and be under foreign export restrictions….

The second group of data that raises privacy and security concerns is personal data. Commonly referred to as Personally Identifiable Information (PII), it is any data that distinguishes individuals from each other. It is also the data that an increasing number of digital approaches rely on, and the data whose use tends to raise the most public ire. …

A third category of data needing privacy consideration is the data related to good people working in difficult or dangerous places. Activists, journalists, politicians, whistle-blowers, business owners, and others working in contentious areas and conflict zones need secure means to communicate and share data without fear of retribution and personal harm. That there are parts of the world where individuals can be in mortal danger for speaking out is one of the reasons that TOR (The Onion Router) has received substantial funding from multiple government and philanthropic groups, even at the high risk of enabling anonymized criminal behavior. Indeed, in the absence of alternate secure networks on which to pass data, many would be in grave danger, including those such as the organizers of the Arab Spring in 2010 as well as dissidents in Syria and elsewhere….(More)”

 

Modernizing Informed Consent: Expanding the Boundaries of Materiality


Paper by Nadia N. Sawicki: “Informed consent law’s emphasis on the disclosure of purely medical information – such as diagnosis, prognosis, and the risks and benefits of various treatment alternatives – does not accurately reflect modern understandings of how patients make medical decisions. Existing common law disclosure duties fail to capture a variety of non-medical factors relevant to patients, including information about the physician’s personal characteristics; the cost of treatment; the social implications of various health care interventions; and the legal consequences associated with diagnosis and treatment. Although there is a wealth of literature analyzing the merits of such disclosures in a few narrow contexts, there is little broader discussion and no consensus about whether the doctrine of informed consent should be expanded to include information that may be relevant to patients but falls outside the traditional scope of medical materiality. This article seeks to fill that gap.
I offer a normative argument for expanding the scope of informed consent disclosure to include non-medical information that is within the physician’s knowledge and expertise, where the information would be material to the reasonable patient and its disclosure does not violate public policy. This proposal would result in a set of disclosure requirements quite different from the ones set by modern common law and legislation. In many ways, the range of required disclosures may become broader, particularly with respect to physician-specific information about qualifications, health status, and financial conflicts of interest. However, some disclosures that are currently required by statute (or have been proposed by commentators) would fall outside the scope of informed consent – most notably, information about support resources available in the abortion context; about the social, ethical, and legal implications of treatment; and about health care costs….(More)”

Please, Corporations, Experiment on Us


Michelle N. Meyer and Christopher Chabris in the New York Times: “Can it ever be ethical for companies or governments to experiment on their employees, customers or citizens without their consent?

The conventional answer — of course not! — animated public outrage last year after Facebook published a study in which it manipulated how much emotional content more than half a million of its users saw. Similar indignation followed the revelation by the dating site OkCupid that, as an experiment, it briefly told some pairs of users that they were good matches when its algorithm had predicted otherwise.

But this outrage is misguided. Indeed, we believe that it is based on a kind of moral illusion.

Companies — and other powerful actors, including lawmakers, educators and doctors — “experiment” on us without our consent every time they implement a new policy, practice or product without knowing its consequences. When Facebook started, it created a radical new way for people to share emotionally laden information, with unknown effects on their moods. And when OkCupid started, it advised users to go on dates based on an algorithm without knowing whether it worked.

Why does one “experiment” (i.e., introducing a new product) fail to raise ethical concerns, whereas a true scientific experiment (i.e., introducing a variation of the product to determine the comparative safety or efficacy of the original) sets off ethical alarms?

In a forthcoming article in the Colorado Technology Law Journal, one of us (Professor Meyer) calls this the “A/B illusion” — the human tendency to focus on the risk, uncertainty and power asymmetries of running a test that compares A to B, while ignoring those factors when A is simply imposed by itself.

Consider a hypothetical example. A chief executive is concerned that her employees are taking insufficient advantage of the company’s policy of matching contributions to retirement savings accounts. She suspects that telling her workers how many others their age are making the maximum contribution would nudge them to save more, so she includes this information in personalized letters to them.

If contributions go up, maybe the new policy worked. But perhaps contributions would have gone up anyhow (say, because of an improving economy). If contributions go down, it might be because the policy failed. Or perhaps a declining economy is to blame, and contributions would have gone down even more without the letter.

You can’t answer these questions without doing a true scientific experiment — in technology jargon, an “A/B test.” The company could randomly assign its employees to receive either the old enrollment packet or the new one that includes the peer contribution information, and then statistically compare the two groups of employees to see which saved more.
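
To make the mechanics concrete, here is a minimal Python sketch of such an A/B analysis. The data are simulated and the effect size is invented; nothing here comes from the article itself:

```python
import random
import statistics

random.seed(42)

# Hypothetical simulation: 1,000 employees are randomly assigned to
# receive the old enrollment letter (A) or the new letter that adds
# peer-contribution information (B).
def simulate_contribution(gets_peer_info: bool) -> float:
    # Baseline annual contribution in dollars plus noise; we assume,
    # purely for illustration, that peer information adds ~$300 on average.
    base = 2000 + random.gauss(0, 800)
    lift = 300 if gets_peer_info else 0
    return max(0.0, base + lift)

group_a = [simulate_contribution(False) for _ in range(500)]
group_b = [simulate_contribution(True) for _ in range(500)]

mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)

# Two-sample z-statistic; with 500 per arm the normal approximation is fine.
se = (statistics.variance(group_a) / len(group_a)
      + statistics.variance(group_b) / len(group_b)) ** 0.5
z = (mean_b - mean_a) / se

print(f"Old letter mean contribution: ${mean_a:,.0f}")
print(f"New letter mean contribution: ${mean_b:,.0f}")
print(f"z = {z:.2f} (|z| > 1.96 suggests a real difference at the 5% level)")
```

Randomization is what lets the comparison isolate the letter’s effect from background factors like the economy.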

Let’s be clear: This is experimenting on people without their consent, and the absence of consent is essential to the validity of the entire endeavor. If the C.E.O. were to tell the workers that they had been randomly assigned to receive one of two different letters, and why, that information would be likely to distort their choices.

Our chief executive isn’t so hypothetical. Economists do help corporations run such experiments, but many managers chafe at debriefing their employees afterward, fearing that they will be outraged that they were experimented on without their consent. A company’s unwillingness to debrief, in turn, can be a deal-breaker for the ethics boards that authorize research. So those C.E.O.s do what powerful people usually do: Pick the policy that their intuition tells them will work best, and apply it to everyone….(More)”

When Guarding Student Data Endangers Valuable Research


Susan M. Dynarski in the New York Times: “There is widespread concern over threats to privacy posed by the extensive personal data collected by private companies and public agencies.

Some of the potential danger comes from the government: The National Security Agency has swept up the telephone records of millions of people, in what it describes as a search for terrorists. Other threats are posed by hackers, who have exploited security gaps to steal data from retail giants like Target and from the federal Office of Personnel Management.

Resistance to data collection was inevitable — and it has been particularly intense in education.

Privacy laws have already been strengthened in some states, and multiple bills now pending in state legislatures and in Congress would tighten the security and privacy of student data. Some of this proposed legislation is so broadly written, however, that it could unintentionally choke off the use of student data for its original purpose: assessing and improving education. This data has already exposed inequities, allowing researchers and advocates to pinpoint where poor, nonwhite and non-English-speaking children have been educated inadequately by their schools.

Data gathering in education is indeed extensive: Across the United States, large, comprehensive administrative data sets now track the academic progress of tens of millions of students. Educators parse this data to understand what is working in their schools. Advocates plumb the data to expose unfair disparities in test scores and graduation rates, building cases to target more resources for the poor. Researchers rely on this data when measuring the effectiveness of education interventions.

To my knowledge there has been no large-scale, Target-like theft of private student records — probably because students’ test scores don’t have the market value of consumers’ credit card numbers. Parents’ concerns have mainly centered not on theft, but on the sharing of student data with third parties, including education technology companies. Last year, parents resisted efforts by the tech start-up InBloom to draw data on millions of students into the cloud and return it to schools as teacher-friendly “data dashboards.” Parents were deeply uncomfortable with a third party receiving and analyzing data about their children.

In response to such concerns, some pending legislation would scale back the authority of schools, districts and states to share student data with third parties, including researchers. Perhaps the most stringent of these proposals, sponsored by Senator David Vitter, a Louisiana Republican, would effectively end the analysis of student data by outside social scientists. This legislation would have banned recent prominent research documenting the benefits of smaller classes, the value of excellent teachers and the varied performance of charter schools.

Under current law, education agencies can share data with outside researchers only to benefit students and improve education. Collaborations with researchers allow districts and states to tap specialized expertise that they otherwise couldn’t afford. The Boston public school district, for example, has teamed up with early-childhood experts at Harvard to plan and evaluate its universal prekindergarten program.

In one of the longest-standing research partnerships, the University of Chicago works with the Chicago Public Schools to improve education. Partnerships like Chicago’s exist across the nation, funded by foundations and the United States Department of Education. In one initiative, a Chicago research consortium compiled reports showing high school principals that many of the seniors they had sent off to college swiftly dropped out without earning a degree. This information spurred efforts to improve high school counseling and college placement.

Specific, tailored information in the hands of teachers, principals or superintendents empowers them to do better by their students. No national survey could have told Chicago’s principals how their students were doing in college. Administrative data can provide this information, cheaply and accurately…(More)”

Why open data should be central to Fifa reform


Gavin Starks in The Guardian: “Over the past two weeks, Fifa has faced mounting pressure to radically improve its transparency and governance in the wake of corruption allegations. David Cameron has called for reforms including expanding the use of open data.

Open data is information made available by governments, businesses and other groups for anyone to read, use and share. Data.gov.uk was launched as the home of UK open government data in January 2010 and now has almost 21,000 published datasets, including on government spending.

Allowing citizens to freely access data related to the institutions that govern them is essential to a well-functioning democratic society. It is the first step towards holding leaders to account for failures and wrongdoing.

Fifa has a responsibility for the shared interests of millions of fans around the world. Football’s popularity means that Fifa’s governance has wide-ranging implications for society, too. This is particularly true of decisions about hosting the World Cup, which is often tied to large-scale government investment in infrastructure and even extends to law-making. Brazil spent up to £10bn hosting the 2014 World Cup and had to legalise the sale of beer at matches.

Following Sepp Blatter’s resignation, Fifa will gather its executive committee in July to plan for a presidential election, expected to take place in mid-December. Open data should form the cornerstone of any prospective candidate’s manifesto. It can help Fifa make better spending decisions, ensure partners deliver value for money, and restore the trust of the international football community.

Fifa’s lengthy annual financial report gives summaries of financial expenditure, budgeted at £184m for operations and governance alone in 2016, but individual transactions are not published. Publishing spending data incentivises better spending decisions. If all Fifa’s outgoings – which totalled around £3.5bn between 2011 and 2014 – were made open, it would encourage much more efficiency….(more)”

Flawed Humans, Flawed Justice


Adam Benforado in the New York Times on using “lessons from behavioral science to make police and courts more fair”: “What would it take to achieve true criminal justice in America?

Imagine that we got rid of all of the cops who cracked racist jokes and prosecutors blinded by a thirst for power. Imagine that we cleansed our courtrooms of lying witnesses and foolish jurors. Imagine that we removed every judge who thought the law should bend to her own personal agenda and every sadistic prison guard.

We would certainly feel just then. But we would be wrong.

We would still have unarmed kids shot in the back and innocent men and women sentenced to death. We would still have unequal treatment, disregarded rights and profound mistreatment.

The reason is simple and almost entirely overlooked: Our legal system is based on an inaccurate model of human behavior. Until recently, we had no way of understanding what was driving people’s thoughts, perceptions and actions in the criminal arena. So, we built our institutions on what we had: untested assumptions about what deceit looks like, how memories work and when punishment is merited.

But we now have tools — from experimental methods and data collection approaches to brain-imaging technologies — that provide an incredible opportunity to establish a new and robust foundation.

Our justice system must be reconstructed upon scientific fact. We can start by acknowledging what the data says about the fundamental flaws in our current legal processes and structures.

Consider the evidence that we treat as nearly unassailable proof of guilt at trial — an unwavering eyewitness, a suspect’s signed confession or a forensic match to the crime scene.

While we charge tens of thousands of people with crimes each year after they are identified in police lineups, research shows that eyewitnesses choose an innocent person roughly one-third of the time. Our memories can fail us because we’re frightened. They can be altered by the word choice of a detective. They can be corrupted by previously seeing someone’s image on a social media site.

Picking out lying suspects from their body language is ineffective. And trying then to gain a confession by exaggerating the strength of the evidence and playing down the seriousness of the offense can encourage people to admit to terrible things they didn’t do.

Even seemingly objective forensic analysis is far from incorruptible. Recent data shows that fingerprint — and even DNA — matches are significantly more likely when the forensic expert is aware that the sample comes from someone the police believe is guilty.

With the aid of psychology, we see there’s a whole host of seemingly extraneous forces influencing behavior and producing systematic distortions. But they remain hidden because they don’t fit into our familiar legal narratives.

We assume that the specific text of the law is critical to whether someone is convicted of rape, but research shows that the details of the criminal code — whether it includes a “force” requirement or excuses a “reasonably mistaken” belief in consent — can be irrelevant. What matters are the backgrounds and identities of the jurors.

When a black teenager is shot by a police officer, we expect to find a bigot at the trigger.

But studies suggest that implicit bias, rather than explicit racism, is behind many recent tragedies. Indeed, simulator experiments show that the biggest danger posed to young African-American men may not be hate-filled cops, but well-intentioned police officers exposed to pervasive, damaging stereotypes that link the concepts of blackness and violence.

Likewise, Americans have been sold a myth that there are two kinds of judges — umpires and activists — and that being unbiased is a choice that a person makes. But the truth is that all judges are swayed by countless forces beyond their conscious awareness or control. It should have no impact on your case, for instance, whether your parole hearing is scheduled first thing in the morning or right before lunch, but when scientists looked at real parole boards, they found that judges were far more likely to grant petitions at the beginning of the day than they were midmorning.

The choice of where to place the camera in an interrogation room may seem immaterial, yet experiments show that it can affect whether a confession is determined to be coerced. When people watch a recording with the camera behind the detective, they are far more likely to find that the confession was voluntary than when watching the interactions from the perspective of the suspect.

With such challenges to our criminal justice system, what can possibly be done? The good news is that an evidence-based approach also illuminates the path forward.

Once we have clear data that something causes a bias, we can then figure out how to remove that influence. …(More)

Safecity: Combatting Sexual Violence Through Technology


Safecity … is a not-for-profit organization that provides a platform for people to share their personal stories of sexual harassment and abuse in public spaces. This data, which may be anonymous, gets aggregated as hot spots on a map, indicating trends at a local level. The idea is to make this data useful for individuals, local communities and local administration for social and systemic change for safer cities. We launched on 26 Dec 2012 and have since collected over 4,000 stories from more than 50 cities in India and Nepal.

How can Safecity help?
Safecity is a crowd map that converts these individual stories into data that is then plotted on a map. It is then easier to see trends at the location level (e.g. a street). The focus is taken away from the individual victim and instead we can focus on solving the problem at the local neighborhood level.
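
As a sketch of the underlying idea, and not Safecity’s actual code, turning anonymized reports into street-level hot spots can be as simple as bucketing geotagged incidents and counting them; all coordinates and categories below are invented:

```python
from collections import Counter

# Hypothetical anonymized reports: (latitude, longitude, category).
reports = [
    (19.0760, 72.8777, "verbal harassment"),
    (19.0761, 72.8779, "stalking"),
    (19.0759, 72.8775, "verbal harassment"),
    (28.6139, 77.2090, "groping"),
]

def grid_cell(lat: float, lon: float, precision: int = 3) -> tuple:
    """Bucket coordinates into roughly 100m cells by rounding to 3 decimals."""
    return (round(lat, precision), round(lon, precision))

hotspots = Counter(grid_cell(lat, lon) for lat, lon, _ in reports)

# Cells with multiple reports are candidate hot spots for the map.
for cell, count in hotspots.most_common():
    flag = "HOTSPOT" if count >= 2 else ""
    print(cell, count, flag)
```

Grouping by place rather than by person is what shifts attention from individual victims to fixable locations.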

The Objectives:
• Create awareness of street harassment and abuse, and encourage people, especially women and victims of hate and LGBTQ crimes, to break their silence and report their personal experiences.
• Collate this information to showcase location based trends.
• Make this information available and useful for individuals, local communities and local administration to solve the problem at the local level through urban planning aimed at addressing infrastructural deficits.
• Establish successful models of community engagement using crowd sourced data to solve civic and local issues.
• Reach out to women who do not have equal access to technology through our Missed dial facility for them to report any cases of abuse and harassment.

We wish to take this data forward to lobby for systemic change: urban planning and infrastructure improvements, legal reforms premised on gender equity, and social change that loosens the shackles which otherwise keep us from living the way we want to, with the freedom we want and with the rights that are fundamental to all of us. Having as many passionate, concerned and diverse genders on board will only build our momentum further.

We are trying to build a movement by collecting these reports through campaigns, workshops and awareness programs with schools, colleges, local communities and partners with a shared vision. Crime against women is rampant and, even today, remains largely unreported. That silence needs to find a voice, and the time is now. We are determined to highlight this serious social issue; we believe we are taking a step towards changing the way our society thinks and reacts, and we hope you are too. In time, we hope it will lead to a safe and non-violent environment for all.

Safecity uses technology to document sexual harassment and abuse in public spaces in the following way. People can report incidents of sexual abuse and street harassment that they have experienced or witnessed. They can share solutions that can help avoid such situations and decide for themselves what works best for them, their geographic location or circumstances.

By allowing people to pin such incidents on a crowd-sourced map, we aim to let them highlight the “hotspots” of such activities. This accentuates the emerging trend in a particular area, enabling the citizens to acknowledge the problem, take personal precautions and devise a solution at the neighbourhood level.

Safecity believes in uniting millions of voices that can become a catalyst for change.

You can read the FAQs section for more information on how the data is used for public good. (More)”

Social Dimensions of Privacy


New book edited by Dorota Mokrosinska and Beate Roessler: “Written by a select international group of leading privacy scholars, Social Dimensions of Privacy endorses and develops an innovative approach to privacy. By debating topical privacy cases in their specific research areas, the contributors explore new privacy-sensitive areas: legal scholars and political theorists discuss the European and American approaches to privacy regulation; sociologists explore new forms of surveillance and privacy on social network sites; and philosophers revisit feminist critiques of privacy, discuss markets in personal data, and examine issues of privacy in health care and democratic politics. The broad interdisciplinary character of the volume will be of interest to readers from a variety of scientific disciplines who are concerned with privacy and data protection issues.

  • Takes an innovative approach to privacy which focuses on the social dimensions and value of privacy in contrast to the value of privacy for individuals
  • Addresses readers from a variety of disciplines, including law, philosophy, media studies, gender studies and political science
  • Addresses new privacy-sensitive areas triggered by recent technological developments (More)”