‘Smart Cities’ Will Know Everything About You


Mike Weston in the Wall Street Journal: “From Boston to Beijing, municipalities and governments across the world are pledging billions to create “smart cities”—urban areas covered with Internet-connected devices that control citywide systems, such as transit, and collect data. Although the details can vary, the basic goal is to create super-efficient infrastructure, aid urban planning and improve the well-being of the populace.

A byproduct of a tech utopia will be a prodigious amount of data collected on the inhabitants. For instance, at the company I head, we recently undertook an experiment in which some staff volunteered to wear devices around the clock for 10 days. We monitored more than 170 metrics reflecting their daily habits and preferences—including how they slept, where they traveled and how they felt (a fast heart rate and no movement can indicate excitement or stress).

If the Internet age has taught us anything, it’s that where there is information, there is money to be made. With so much personal information available and countless ways to use it, businesses and authorities will be faced with a number of ethical questions.

In a fully “smart” city, every movement an individual makes can be tracked. The data will reveal where she works, how she commutes, her shopping habits, places she visits and her proximity to other people. You could argue that this sort of tracking already exists via various apps and on social-media platforms, or is held by public-transport companies and e-commerce sites. The difference is that with a smart city this data will be centralized and easy to access. Given the value of this data, it’s conceivable that municipalities or private businesses that pay to create a smart city will seek to recoup their expenses by selling it….

Recent history—issues of privacy and security on social networks and chatting apps, and questions about how intellectual-property regulations apply online—has shown that the law has been slow to catch up with digital innovations. So businesses that can purchase smart-city data will be presented with many strategic and ethical concerns.

What degree of targeting is too specific and violates privacy? Should businesses limit the types of goods or services they offer to certain individuals? Is it ethical for data—on an employee’s eating habits, for instance—to be sold to employers or to insurance companies to help them assess claims? Do individuals own their own personal data once it enters the smart-city system?

With or without stringent controlling legislation, businesses in a smart city will need to craft their own policies and procedures regarding the use of data. A large-scale misuse of personal data could provoke a consumer backlash that could cripple a company’s reputation and lead to monster lawsuits. An additional problem is that businesses won’t know which individuals might welcome the convenience of targeted advertising and which will find it creepy—although data science could solve this equation eventually by predicting where each individual’s privacy line is.

A smart city doesn’t have to be as Orwellian as it sounds. If businesses act responsibly, there is no reason why what sounds intrusive in the abstract can’t revolutionize the way people live for the better by offering services that anticipate their needs; by designing ultraefficient infrastructure that makes commuting a (relative) dream; or by taking a revolutionary approach to how energy is generated and used by businesses and the populace at large….(More)”

White House to make public records more public


Lisa Rein at the Washington Post: “The law that’s supposed to keep citizens in the know about what their government is doing is about to get more robust.

This week, seven agencies — including the Environmental Protection Agency and the Office of the Director of National Intelligence — launched a new effort to put online the records they distribute to requesters under the Freedom of Information Act (FOIA).

So if a journalist, nonprofit group or corporation asks for the records, what they see, the public also will see. Documents will still be redacted where necessary to protect what the government decides is sensitive information, an area that’s often disputed but won’t change with this policy.

The Obama administration’s new Open Government initiative began quietly on the agencies’ Web sites days after FOIA’s 49th anniversary. It’s a response to years of pressure from open-government groups and lawmakers to boost public access to records of government decisions, deliberations and policies.

The “release to one is release to all” policy will start as a six-month pilot at the EPA, the Office of the Director of National Intelligence, the Millennium Challenge Corporation and within some offices at the Department of Homeland Security, the Defense Department, the Justice Department and the National Archives and Records Administration….(More)”

The case for data ethics


Steven Tiell at Accenture: “Personal data is the coin of the digital realm, which for business leaders creates a critical dilemma. Companies are being asked to gather more types of data faster than ever to maintain a competitive edge in the digital marketplace; at the same time, however, they are being asked to provide pervasive and granular control mechanisms over the use of that data throughout the data supply chain.

The stakes couldn’t be higher. If organizations, or the platforms they use to deliver services, fail to secure personal data, they expose themselves to tremendous risk—from eroding brand value and the hard-won trust of established vendors and customers to ceding market share, from violating laws to costing top executives their jobs.

To distinguish their businesses in this marketplace, leaders should be asking themselves two questions. What are the appropriate standards and practices our company needs to have in place to govern the handling of data? And how can our company make strong data controls a value proposition for our employees, customers and partners?

Defining effective compliance activities to support legal and regulatory obligations can be a starting point. However, mere compliance with existing regulations—which are, for the most part, focused on privacy—is insufficient. Respect for privacy is a byproduct of high ethical standards, but it is only part of the picture. Companies need to embrace data ethics, an expansive set of practices and behaviors grounded in a moral framework for the betterment of a community (however defined).

RAISING THE BAR

Why ethics? When communities of people—in this case, the business community at large—encounter new influences, the way they respond to and engage with those influences becomes the community’s shared ethics. Individuals who behave in accordance with these community norms are said to be moral, and those who are exemplary are able to gain the trust of their community.

Over time, as ethical standards within a community shift, the bar for trustworthiness is raised on the assumption that participants in civil society must, at a minimum, adhere to the rule of law. And thus, to maintain moral authority and a high degree of trust, actors in a community must constantly evolve to adopt the highest ethical standards.

Actors in the big data community, where security and privacy are at the core of relationships with stakeholders, must adhere to a high ethical standard to gain this trust. This requires them to go beyond privacy law and existing data control measures. It will also reward those who practice strong ethical behaviors and a high degree of transparency at every stage of the data supply chain. The most successful actors will become the platform-based trust authorities, and others will depend on these platforms for disclosure, sharing and analytics of big data assets.

Data ethics becomes a value proposition only once controls and capabilities are in place to granularly manage data assets at scale throughout the data supply chain. It is also beneficial when a community shares the same behavioral norms and taxonomy to describe the data itself, the ethical decision points along the data supply chain, and how those decisions lead to beneficial or harmful impacts….(More)”

Using social media in hotel crisis management: the case of bed bugs


Social media has helped to bridge the communication gap between customers and hotels. Bed bug infestations are a growing health crisis and have obtained increasing attention on social media sites. Without managing this crisis effectively, bed bug infestation can cause economic loss and reputational damage to hotel properties, ranging from negative comments and complaints to possible lawsuits. Thus, it is essential for hoteliers to understand the importance of social media in crisis communication, and to incorporate social media in hotels’ crisis management plans.

This study serves as one of the first attempts in the hospitality field to offer discussions and recommendations on how hotels can manage the bed bug crisis and other crises of this kind by incorporating social media into their crisis management practices….(More)”

Interactive app lets constituents help balance their city’s budget


Springwise: “In this era of information, political spending and municipal budgets are still often shrouded in confusion and mystery. But a new web app called Balancing Act hopes to change that, by enabling US citizens to see the breakdown of their city’s budget via adjustable, comprehensive pie charts.

Created by Colorado-based consultants Engaged Public, Balancing Act not only shows citizens the current budget breakdown, it also enables them to experiment with hypothetical future budgets, adjusting spending and taxes to suit their own priorities. The project aims to engage and inform citizens about the money that their mayors and governments assign on their behalf and allow them to have more of a say in the future of their city. The resource has already been utilized by Pedro Segarra, Mayor of Hartford, Connecticut, who asked his citizens for their input on how best to balance the USD 49 million budget.

The system can be used to help governments understand the wants and needs of their constituents, as well as enable citizens to see the bigger picture when it comes to tough or unappealing policies. Eventually it can even be used to create the world’s first crowdsourced budget, giving the public the power to make their preferences heard in a clear, comprehensible way…(More)”

Why Protecting Data Privacy Matters, and When


Anne Russell at Data Science Central: “It’s official. Public concerns over the privacy of data used in digital approaches have reached an apex. Worried about the safety of digital networks, consumers want to gain control over what they increasingly sense as a loss of power over how their data is used. It’s not hard to see why. Look at the extent of coverage on the U.S. Government data breach last month and the sheer growth in the number of attacks against government and other organizations overall. Then there is the increasing coverage on the inherent security flaws built into the internet, through which most of our data flows. The costs of data breaches to individuals, industries, and government are adding up. And users are taking note….
If you’re not sure whether the data fueling your approach will raise privacy and security flags, consider the following. When it comes to data privacy and security, not all data is going to be of equal concern. Much depends on the level of detail in data content, data type, data structure, volume, and velocity, and indeed how the data itself will be used and released.

First there is the data where security and privacy have always mattered and for which there is already an existing and well-galvanized body of law in place. Foremost among these is classified or national security data, where data usage is highly regulated and enforced. Other data for which there exists a considerable body of international and national law regulating usage includes:

  • Proprietary Data – specifically the data that makes up the intellectual capital of individual businesses and gives them their competitive economic advantage over others, including data protected under copyright, patent, or trade secret laws and the sensitive, protected data that companies collect on behalf of their customers;
  • Infrastructure Data – data from the physical facilities and systems – such as roads, electrical systems, communications services, etc. – that enable local, regional, national, and international economic activity; and
  • Controlled Technical Data – technical, biological, chemical, and military-related data and research that could be considered of national interest and be under foreign export restrictions….

The second group of data that raises privacy and security concerns is personal data. Commonly referred to as Personally Identifiable Information (PII), it is any data that distinguishes individuals from each other. It is also the data that an increasing number of digital approaches rely on, and the data whose use tends to raise the most public ire. …

A third category of data needing privacy consideration is the data related to good people working in difficult or dangerous places. Activists, journalists, politicians, whistle-blowers, business owners, and others working in contentious areas and conflict zones need secure means to communicate and share data without fear of retribution and personal harm. That there are parts of the world where individuals can be in mortal danger for speaking out is one of the reasons that TOR (The Onion Router) has received substantial funding from multiple government and philanthropic groups, even at the high risk of enabling anonymized criminal behavior. Indeed, in the absence of alternate secure networks on which to pass data, many would be in grave danger, including the organizers of the Arab Spring in 2010 as well as dissidents in Syria and elsewhere….(More)”


Modernizing Informed Consent: Expanding the Boundaries of Materiality


Paper by Nadia N. Sawicki: “Informed consent law’s emphasis on the disclosure of purely medical information – such as diagnosis, prognosis, and the risks and benefits of various treatment alternatives – does not accurately reflect modern understandings of how patients make medical decisions. Existing common law disclosure duties fail to capture a variety of non-medical factors relevant to patients, including information about the physician’s personal characteristics; the cost of treatment; the social implications of various health care interventions; and the legal consequences associated with diagnosis and treatment. Although there is a wealth of literature analyzing the merits of such disclosures in a few narrow contexts, there is little broader discussion and no consensus about whether the doctrine of informed consent should be expanded to include information that may be relevant to patients but falls outside the traditional scope of medical materiality. This article seeks to fill that gap.
I offer a normative argument for expanding the scope of informed consent disclosure to include non-medical information that is within the physician’s knowledge and expertise, where the information would be material to the reasonable patient and its disclosure does not violate public policy. This proposal would result in a set of disclosure requirements quite different from the ones set by modern common law and legislation. In many ways, the range of required disclosures may become broader, particularly with respect to physician-specific information about qualifications, health status, and financial conflicts of interest. However, some disclosures that are currently required by statute (or have been proposed by commentators) would fall outside the scope of informed consent – most notably, information about support resources available in the abortion context; about the social, ethical, and legal implications of treatment; and about health care costs….(More)”

Please, Corporations, Experiment on Us


Michelle N. Meyer and Christopher Chabris in the New York Times: “Can it ever be ethical for companies or governments to experiment on their employees, customers or citizens without their consent?

The conventional answer — of course not! — animated public outrage last year after Facebook published a study in which it manipulated how much emotional content more than half a million of its users saw. Similar indignation followed the revelation by the dating site OkCupid that, as an experiment, it briefly told some pairs of users that they were good matches when its algorithm had predicted otherwise.

But this outrage is misguided. Indeed, we believe that it is based on a kind of moral illusion.

Companies — and other powerful actors, including lawmakers, educators and doctors — “experiment” on us without our consent every time they implement a new policy, practice or product without knowing its consequences. When Facebook started, it created a radical new way for people to share emotionally laden information, with unknown effects on their moods. And when OkCupid started, it advised users to go on dates based on an algorithm without knowing whether it worked.

Why does one “experiment” (i.e., introducing a new product) fail to raise ethical concerns, whereas a true scientific experiment (i.e., introducing a variation of the product to determine the comparative safety or efficacy of the original) sets off ethical alarms?

In a forthcoming article in the Colorado Technology Law Journal, one of us (Professor Meyer) calls this the “A/B illusion” — the human tendency to focus on the risk, uncertainty and power asymmetries of running a test that compares A to B, while ignoring those factors when A is simply imposed by itself.

Consider a hypothetical example. A chief executive is concerned that her employees are taking insufficient advantage of the company’s policy of matching contributions to retirement savings accounts. She suspects that telling her workers how many others their age are making the maximum contribution would nudge them to save more, so she includes this information in personalized letters to them.

If contributions go up, maybe the new policy worked. But perhaps contributions would have gone up anyhow (say, because of an improving economy). If contributions go down, it might be because the policy failed. Or perhaps a declining economy is to blame, and contributions would have gone down even more without the letter.

You can’t answer these questions without doing a true scientific experiment — in technology jargon, an “A/B test.” The company could randomly assign its employees to receive either the old enrollment packet or the new one that includes the peer contribution information, and then statistically compare the two groups of employees to see which saved more.
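The randomized comparison described above can be sketched in a few lines of Python. This is a minimal illustrative simulation, not real data: the number of employees, the baseline savings rate, and the effect of the peer-information letter are all invented for the sake of the example.

```python
import random
import statistics

random.seed(42)

# Hypothetical workforce: 1,000 employees, randomly split into two groups.
employees = list(range(1000))
random.shuffle(employees)
control = employees[:500]     # receive the old enrollment packet
treatment = employees[500:]   # receive the new packet with peer information

def simulated_contribution(in_treatment: bool) -> float:
    # Invented data-generating process: a ~5% baseline contribution rate
    # with individual noise, plus a small assumed lift from the new letter.
    base = random.gauss(0.05, 0.02)
    return base + (0.01 if in_treatment else 0.0)

control_rates = [simulated_contribution(False) for _ in control]
treatment_rates = [simulated_contribution(True) for _ in treatment]

# The difference in group means estimates the causal effect of the letter --
# exactly what rolling the new packet out to everyone could never reveal.
diff = statistics.mean(treatment_rates) - statistics.mean(control_rates)
print(f"Estimated effect of the new letter: {diff:+.3f}")
```

Because assignment is random, background factors like an improving or declining economy affect both groups equally and cancel out of the comparison, which is precisely the point of the A/B test.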

Let’s be clear: This is experimenting on people without their consent, and the absence of consent is essential to the validity of the entire endeavor. If the C.E.O. were to tell the workers that they had been randomly assigned to receive one of two different letters, and why, that information would be likely to distort their choices.

Our chief executive isn’t so hypothetical. Economists do help corporations run such experiments, but many managers chafe at debriefing their employees afterward, fearing that they will be outraged that they were experimented on without their consent. A company’s unwillingness to debrief, in turn, can be a deal-breaker for the ethics boards that authorize research. So those C.E.O.s do what powerful people usually do: Pick the policy that their intuition tells them will work best, and apply it to everyone….(More)”

When Guarding Student Data Endangers Valuable Research


Susan M. Dynarski in the New York Times: “There is widespread concern over threats to privacy posed by the extensive personal data collected by private companies and public agencies.

Some of the potential danger comes from the government: The National Security Agency has swept up the telephone records of millions of people, in what it describes as a search for terrorists. Other threats are posed by hackers, who have exploited security gaps to steal data from retail giants like Target and from the federal Office of Personnel Management.

Resistance to data collection was inevitable — and it has been particularly intense in education.

Privacy laws have already been strengthened in some states, and multiple bills now pending in state legislatures and in Congress would tighten the security and privacy of student data. Some of this proposed legislation is so broadly written, however, that it could unintentionally choke off the use of student data for its original purpose: assessing and improving education. This data has already exposed inequities, allowing researchers and advocates to pinpoint where poor, nonwhite and non-English-speaking children have been educated inadequately by their schools.

Data gathering in education is indeed extensive: Across the United States, large, comprehensive administrative data sets now track the academic progress of tens of millions of students. Educators parse this data to understand what is working in their schools. Advocates plumb the data to expose unfair disparities in test scores and graduation rates, building cases to target more resources for the poor. Researchers rely on this data when measuring the effectiveness of education interventions.

To my knowledge there has been no large-scale, Target-like theft of private student records — probably because students’ test scores don’t have the market value of consumers’ credit card numbers. Parents’ concerns have mainly centered not on theft, but on the sharing of student data with third parties, including education technology companies. Last year, parents resisted efforts by the tech start-up InBloom to draw data on millions of students into the cloud and return it to schools as teacher-friendly “data dashboards.” Parents were deeply uncomfortable with a third party receiving and analyzing data about their children.

In response to such concerns, some pending legislation would scale back the authority of schools, districts and states to share student data with third parties, including researchers. Perhaps the most stringent of these proposals, sponsored by Senator David Vitter, a Louisiana Republican, would effectively end the analysis of student data by outside social scientists. This legislation would have banned recent prominent research documenting the benefits of smaller classes, the value of excellent teachers and the varied performance of charter schools.

Under current law, education agencies can share data with outside researchers only to benefit students and improve education. Collaborations with researchers allow districts and states to tap specialized expertise that they otherwise couldn’t afford. The Boston public school district, for example, has teamed up with early-childhood experts at Harvard to plan and evaluate its universal prekindergarten program.

In one of the longest-standing research partnerships, the University of Chicago works with the Chicago Public Schools to improve education. Partnerships like Chicago’s exist across the nation, funded by foundations and the United States Department of Education. In one initiative, a Chicago research consortium compiled reports showing high school principals that many of the seniors they had sent off to college swiftly dropped out without earning a degree. This information spurred efforts to improve high school counseling and college placement.

Specific, tailored information in the hands of teachers, principals or superintendents empowers them to do better by their students. No national survey could have told Chicago’s principals how their students were doing in college. Administrative data can provide this information, cheaply and accurately…(More)”

Why open data should be central to Fifa reform


Gavin Starks in The Guardian: “Over the past two weeks, Fifa has faced mounting pressure to radically improve its transparency and governance in the wake of corruption allegations. David Cameron has called for reforms including expanding the use of open data.

Open data is information made available by governments, businesses and other groups for anyone to read, use and share. Data.gov.uk was launched as the home of UK open government data in January 2010 and now has almost 21,000 published datasets, including on government spending.

Allowing citizens to freely access data related to the institutions that govern them is essential to a well-functioning democratic society. It is the first step towards holding leaders to account for failures and wrongdoing.

Fifa has a responsibility for the shared interests of millions of fans around the world. Football’s popularity means that Fifa’s governance has wide-ranging implications for society, too. This is particularly true of decisions about hosting the World Cup, which is often tied to large-scale government investment in infrastructure and even extends to law-making. Brazil spent up to £10bn hosting the 2014 World Cup and had to legalise the sale of beer at matches.

Following Sepp Blatter’s resignation, Fifa will gather its executive committee in July to plan for a presidential election, expected to take place in mid-December. Open data should form the cornerstone of any prospective candidate’s manifesto. It can help Fifa make better spending decisions, ensure partners deliver value for money, and restore the trust of the international football community.

Fifa’s lengthy annual financial report gives summaries of financial expenditure, budgeted at £184m for operations and governance alone in 2016, but individual transactions are not published. Publishing spending data incentivises better spending decisions. If all Fifa’s outgoings – which totalled around £3.5bn between 2011 and 2014 – were made open, it would encourage much more efficiency….(More)”