The Spectrum of Control: A Social Theory of the Smart City


Paper by Jathan Sadowski and Frank Pasquale in First Monday: “There is a certain allure to the idea that cities allow a person to both feel at home and like a stranger in the same place. That one can know the streets and shops, avenues and alleys, while also going days without being recognized. But as elites fill cities with “smart” technologies — turning them into platforms for the “Internet of Things” (IoT): sensors and computation embedded within physical objects that then connect, communicate, and/or transmit information with or between each other through the Internet — there is little escape from a seamless web of surveillance and power. This paper will outline a social theory of the “smart city” by developing our Deleuzian concept of the “spectrum of control.” We present two illustrative examples: biometric surveillance as a form of monitoring, and automated policing as a particularly brutal and exacting form of manipulation. We conclude by offering normative guidelines for governance of the pervasive surveillance and control mechanisms that constitute an emerging critical infrastructure of the “smart city.”…(More)”

 

A systematic review of open government data initiatives


Paper by J. Attard, F. Orlandi, S. Scerri, and S. Auer in Government Information Quarterly: “We conduct a systematic survey with the aim of assessing open government data initiatives, that is, any attempt, by a government or otherwise, to open data that is produced by a governmental entity. We describe the open government data life-cycle and we focus our discussion on publishing and consuming processes required within open government data initiatives. We cover current approaches undertaken for such initiatives, and classify them. A number of evaluations found within related literature are discussed, and from them we extract challenges and issues that hinder open government initiatives from reaching their full potential. In a bid to overcome these challenges, we also extract guidelines for publishing data and provide an integrated overview. This will enable stakeholders to start with a firm foot in a new open government data initiative. We also identify the impacts on the stakeholders involved in such initiatives….(More)”

Dissecting the Spirit of Gezi: Influence vs. Selection in the Occupy Gezi Movement


New study by Ceren Budak and Duncan J. Watts in Sociological Science: “Do social movements actively shape the opinions and attitudes of participants by bringing together diverse groups that subsequently influence one another? Ethnographic studies of the 2013 Gezi uprising seem to answer “yes,” pointing to solidarity among groups that were traditionally indifferent, or even hostile, to one another. We argue that two mechanisms with differing implications may generate this observed outcome: “influence” (change in attitude caused by interacting with other participants); and “selection” (individuals who participated in the movement were generally more supportive of other groups beforehand).

We tease out the relative importance of these mechanisms by constructing a panel of over 30,000 Twitter users and analyzing their support for the main Turkish opposition parties before, during, and after the movement. We find that although individuals changed in significant ways, becoming in general more supportive of the other opposition parties, those who participated in the movement were also significantly more supportive of the other parties all along. These findings suggest that both mechanisms were important, but that selection dominated. In addition to our substantive findings, our paper also makes a methodological contribution that we believe could be useful to studies of social movements and mass opinion change more generally. In contrast with traditional panel studies, which must be designed and implemented prior to the event of interest, our method relies on ex post panel construction, and hence can be used to study unanticipated or otherwise inaccessible events. We conclude that despite the well known limitations of social media, their “always on” nature and their widespread availability offer an important source of public opinion data….(More)”

On the morals of network research and beyond


Conspicuous Chatter: “…Discussion of ethics has become very popular in computer science lately — and to some extent I am glad about this. However, I think we should dispel three key fallacies.

The first one is that things we do not like (some may brand them “immoral”) happen because others do not think of the moral implications of their actions. In fact it is entirely possible that they do, and decide to act in a manner we do not like nonetheless. This could be out of conviction: those who build the surveillance equipment, who argue against strong encryption, and those who do the torture and the killing (harm), may have entirely self-righteous ways of justifying their actions to themselves and others. Others may simply be making a good buck — and there are plenty of examples of this in the links above.

The second fallacy is that ethics, and research ethics more specifically, comes down to a “common sense” variant of “do no harm” — and that is that. In fact Ethics, as a philosophical discipline, is extremely deep, and there are plenty of entirely legitimate ways to argue that doing harm is perfectly fine. If the authors of the paper were a bit more sophisticated in their philosophy they could, for example, have made reference to the “doctrine of double effect” or the nature of free will of those who will bring actual harm to users, and therefore their moral responsibility. It seems that a key immoral aspect of this work was that the authors forgot to write that confusing section.

Finally, we should dispel in conversations about research ethics, the myth that morality equals legality. The public review mentions “informed consent”, but in fact this is an extremely difficult notion — and legalistically it has been used to justify terrible things. The data protection variant of informed consent allows large internet companies, and telcos, to basically scoop most users’ data because of some small print in lengthy terms and conditions. In fact it should probably be our responsibility to highlight the immorality of this state of affairs, before writing public reviews about the immorality of a hypothetical censorship detection system.

Thus, I would argue, if one is to make an ethical point relating to the values and risks of technology, it has to be made in the larger context of how technology is fielded and used, the politics around it, who has power, who makes the money, who does the torturing and the killing, and why. Technology lives within a big moral picture that a research community has a responsibility to comment on. Focusing moral attention on the microcosm of a specific hypothetical use case — just because it is the closest to our research community — misses the point, silently perpetuating a terrible state of moral affairs….(More)”

Customer-Driven Government


Jane Wiseman at DataSmart City Solutions: “Public trust in government is low — of the 43 industries tracked in the American Customer Satisfaction Index, only one ranks lower than the federal government in satisfaction levels.  Local government ranks a bit higher than the federal government, but for most of the public, that makes little difference. It’s time for government to change that perception by listening to its customers and improving service delivery.

What can the cup holder in your car teach government about customer engagement? A cup holder would be hard to live without — it keeps a latte from spilling and has room for keys and a phone. But the cup holder was not always such a multi-tasker. The first ones were shallow indentations in the plastic on the inside of the glove box. Accelerate and the drinks went flying. Did a brilliant automotive engineer decide that was a design flaw and fix it? No. It was only when Chrysler received more complaints about the cup holder than about anything else in their cars that they were forced to innovate. Don Clark, a DaimlerChrysler engineer known as the “Cup Holder King,” designed the first of the modern cup holders, debuting in the company’s 1984 minivans. The engineers thought they knew what their customers wanted (more powerful engines, better fuel economy, safety features) but it wasn’t until they listened to customers’ comments that they put in the cup holder. And sales took off.

Today, we’re awash in customer feedback, seemingly everywhere but government.  Over the past decade, customer feedback ratings for products and services have shown up everywhere — whether in a review on Yelp, a “like” on Facebook, or a Tweet about the virtues or shortcomings of a product or service.  Ratings help draw attention to poor quality and allow companies to address these gaps.  Many companies routinely follow up a customer interaction with a satisfaction survey.  This data drives improvement efforts aimed at keeping customers happy.  Some companies aggressively manage their online reviews, seeking to increase their NPS, or net promoter score.  Many people really like to provide feedback — there are 77 million reviews on Yelp to date, according to the company.  Imagine the power of that many reviews of government service.

If customer input can influence the automotive industry, and can help consumers make better decisions, what if we turned this energy toward government?  After all, the government is run “by the people” and “for the people” — what if citizens gave government real-time guidance on improving services?  And could leaders in government ask customers what they want, instead of presuming to know?  This paper explores these questions and suggests a way forward.

….

If I were a mayor, how would I begin harnessing customer feedback to improve service delivery?  I would build a foundation for improving core city operations (trash pickup, pothole fixing, etc.) by using the same three questions Kansas City uses for follow-up surveys to all who contact 311.  Upon that foundation I would layer additional outreach on a tactical, ad hoc basis.  I would experiment with the growing body of tools for engaging the public in shaping tactical decisions, such as how to allocate capital projects and where to locate bike share hubs.

To get an even deeper insight into the customer experience, I might copy what Somerville, MA has done with its Secret Resident program.  Trained volunteers assess the efficiency, courtesy, and ease of use of selected city departments.  The volunteers transact typical city services by phone or in person, and then document their customer experience.  They rate the agencies, and the 311 call center, and provide assessments that can help improve customer service.

By listening to and leveraging data on constituent calls for service, government can move from a culture of reaction to a proactive culture of listening and learning from the data provided by the public.  Engaging the public, and following through on the suggestions they give, can increase not only the quality of government service, but the faith of the public that government can listen and respond.

Every leader in government should commit to getting feedback from customers — it’s the only way to know how to increase their satisfaction with services. There is no more urgent time to improve the customer experience…(More)

Anonymization and Risk


Paper by Ira Rubinstein and Woodrow Hartzog: “Perfect anonymization of data sets has failed. But the process of protecting data subjects in shared information remains integral to privacy practice and policy. While the deidentification debate has been vigorous and productive, there is no clear direction for policy. As a result, the law has been slow to adopt a holistic approach to protecting data subjects when data sets are released to others. Currently, the law is focused on whether an individual can be identified within a given set. We argue that the better locus of data release policy is on the process of minimizing the risk of reidentification and sensitive attribute disclosure. Process-based data release policy, which resembles the law of data security, will help us move past the limitations of focusing on whether data sets have been “anonymized.” It draws upon different tactics to protect the privacy of data subjects, including accurate deidentification rhetoric, contracts prohibiting reidentification and sensitive attribute disclosure, data enclaves, and query-based strategies to match required protections with the level of risk. By focusing on process, data release policy can better balance privacy and utility where nearly all data exchanges carry some risk….(More)”

Meaningful Consent: The Economics of Privity in Networked Environments


Paper by Jonathan Cave: “Recent work on privacy (e.g. WEIS 2013/4, Meaningful Consent in the Digital Economy project) recognises the unanticipated consequences of data-centred legal protections in a world of shifting relations between data and human actors. But the rules have not caught up with these changes, and the irreversible consequences of ‘make do and mend’ are not often taken into account when changing policy.

Many of the most-protected ‘personal’ data are not personal at all, but are created to facilitate the operation of larger (e.g. administrative, economic, transport) systems or inadvertently generated by using such systems. The protection given to such data typically rests on notions of informed consent even in circumstances where such consent may be difficult to define, harder to give and nearly impossible to certify in meaningful ways. Such protections typically involve a mix of data collection, access and processing rules that are either imposed on behalf of individuals or are to be exercised by them. This approach adequately protects some personal interests, but not all – and is definitely not future-proof. Boundaries between allowing individuals to discover and pursue their interests on one side and behavioural manipulation on the other are often blurred. The costs (psychological and behavioural as well as economic and practical) of exercising control over one’s data are rarely taken into account as some instances of the Right to be Forgotten illustrate. The purposes for which privacy rights were constructed are often forgotten, or have not been reinterpreted in a world of ubiquitous monitoring data, multi-person ‘private exchanges,’ and multiple pathways through which data can be used to create and to capture value. Moreover, the parties who should be involved in making decisions – those connected by a network of informational relationships – are often not in contractual, practical or legal contact. These developments, associated with e.g. the Internet of Things, Cloud computing and big data analytics, should be recognised as challenging privacy rules and, more fundamentally, the adequacy of informed consent (e.g. to access specified data for specified purposes) as a means of managing innovative, flexible, and complex informational architectures.

This paper presents a framework for organising these challenges using them to evaluate proposed policies, specifically in relation to complex, automated, automatic or autonomous data collection, processing and use. It argues for a movement away from a system of property rights based on individual consent to a values-based ‘privity’ regime – a collection of differentiated (relational as well as property) rights and consents that may be better able to accommodate innovations. Privity regimes (see deFillipis 2006) bundle together rights regarding e.g. confidential disclosure with ‘standing’ or voice options in relation to informational linkages.

The impacts are examined through a game-theoretic comparison between the proposed privity regime and existing privacy rights in personal data markets that include: conventional ‘behavioural profiling’ and search; situations where third parties may have complementary roles or conflicting interests in such data and where data have value in relation both to specific individuals and to larger groups (e.g. ‘real-world’ health data); n-sided markets on data platforms (including social and crowd-sourcing platforms with long and short memories); and the use of ‘privity-like’ rights inherited by data objects and by autonomous systems whose ownership may be shared among many people….(More)”

Outcome-driven open innovation at NASA


New paper by Jennifer L. Gustetic et al in Space Policy: “In an increasingly connected and networked world, the National Aeronautics and Space Administration (NASA) recognizes the value of the public as a strategic partner in addressing some of our most pressing challenges. The agency is working to more effectively harness the expertise, ingenuity, and creativity of individual members of the public by enabling, accelerating, and scaling the use of open innovation approaches including prizes, challenges, and crowdsourcing. As NASA’s use of open innovation tools to solve a variety of types of problems and advance a number of outcomes continues to grow, challenge design is also becoming more sophisticated as our expertise and capacity (personnel, platforms, and partners) grows and develops. NASA has recently pivoted from talking about the benefits of challenge-driven approaches to the outcomes these types of activities yield. Challenge design should be informed by desired outcomes that align with NASA’s mission. This paper provides several case studies of NASA open innovation activities and maps the outcomes of those activities to a successful set of outcomes that challenges can help drive alongside traditional tools such as contracts, grants and partnerships….(More)”

Journal of Technology Science


Technology Science is an open access forum for any original material dealing primarily with a social, political, personal, or organizational benefit or adverse consequence of technology. Studies that characterize a technology-society clash or present an approach to better harmonize technology and society are especially welcomed. Papers can come from anywhere in the world.

Technology Science is interested in reviews of research, experiments, surveys, tutorials, and analyses. Writings may propose solutions or describe unsolved problems. Technology Science may also publish letters, short communications, and relevant news items. All submissions are peer-reviewed.

The scientific study of technology-society clashes is a cross-disciplinary pursuit, so papers in Technology Science may come from any of many possible disciplinary traditions, including but not limited to social science, computer science, political science, law, economics, policy, or statistics.

The Data Privacy Lab at Harvard University publishes Technology Science and its affiliated subset of papers called the Journal of Technology Science and maintains them online at techscience.org and at jots.pub. Technology Science is available free of charge over the Internet. While it is possible that bound paper copies of Technology Science content may be produced for a fee, all content will continue to be offered online at no charge….(More)”

 

Science Isn’t Broken


Christie Aschwanden at FiveThirtyEight: “Yet even in the face of overwhelming evidence, it’s hard to let go of a cherished idea, especially one a scientist has built a career on developing. And so, as anyone who’s ever tried to correct a falsehood on the Internet knows, the truth doesn’t always win, at least not initially, because we process new evidence through the lens of what we already believe. Confirmation bias can blind us to the facts; we are quick to make up our minds and slow to change them in the face of new evidence.

A few years ago, Ioannidis and some colleagues searched the scientific literature for references to two well-known epidemiological studies suggesting that vitamin E supplements might protect against cardiovascular disease. These studies were followed by several large randomized clinical trials that showed no benefit from vitamin E and one meta-analysis finding that at high doses, vitamin E actually increased the risk of death.

Human fallibilities send the scientific process hurtling in fits, starts and misdirections instead of in a straight line from question to truth.

Despite the contradictory evidence from more rigorous trials, the first studies continued to be cited and defended in the literature. Shaky claims about beta carotene’s ability to reduce cancer risk and estrogen’s role in staving off dementia also persisted, even after they’d been overturned by more definitive studies. Once an idea becomes fixed, it’s difficult to remove from the conventional wisdom.

Sometimes scientific ideas persist beyond the evidence because the stories we tell about them feel true and confirm what we already believe. It’s natural to think about possible explanations for scientific results — this is how we put them in context and ascertain how plausible they are. The problem comes when we fall so in love with these explanations that we reject the evidence refuting them.

The media is often accused of hyping studies, but scientists are prone to overstating their results too.

Take, for instance, the breakfast study. Published in 2013, it examined whether breakfast eaters weigh less than those who skip the morning meal and if breakfast could protect against obesity. Obesity researcher Andrew Brown and his colleagues found that despite more than 90 mentions of this hypothesis in published media and journals, the evidence for breakfast’s effect on body weight was tenuous and circumstantial. Yet researchers in the field seemed blind to these shortcomings, overstating the evidence and using causative language to describe associations between breakfast and obesity. The human brain is primed to find causality even where it doesn’t exist, and scientists are not immune.

As a society, our stories about how science works are also prone to error. The standard way of thinking about the scientific method is: ask a question, do a study, get an answer. But this notion is vastly oversimplified. A more common path to truth looks like this: ask a question, do a study, get a partial or ambiguous answer, then do another study, and then do another to keep testing potential hypotheses and homing in on a more complete answer. Human fallibilities send the scientific process hurtling in fits, starts and misdirections instead of in a straight line from question to truth.

Media accounts of science tend to gloss over the nuance, and it’s easy to understand why. For one thing, reporters and editors who cover science don’t always have training on how to interpret studies. And headlines that read “weak, unreplicated study finds tenuous link between certain vegetables and cancer risk” don’t fly off the newsstands or bring in the clicks as fast as ones that scream “foods that fight cancer!”

People often joke about the herky-jerky nature of science and health headlines in the media — coffee is good for you one day, bad the next — but that back and forth embodies exactly what the scientific process is all about. It’s hard to measure the impact of diet on health, Nosek told me. “That variation [in results] occurs because science is hard.” Isolating how coffee affects health requires lots of studies and lots of evidence, and only over time and in the course of many, many studies does the evidence start to narrow to a conclusion that’s defensible. “The variation in findings should not be seen as a threat,” Nosek said. “It means that scientists are working on a hard problem.”

The scientific method is the most rigorous path to knowledge, but it’s also messy and tough. Science deserves respect exactly because it is difficult — not because it gets everything correct on the first try. The uncertainty inherent in science doesn’t mean that we can’t use it to make important policies or decisions. It just means that we should remain cautious and adopt a mindset that’s open to changing course if new data arises. We should make the best decisions we can with the current evidence and take care not to lose sight of its strength and degree of certainty. It’s no accident that every good paper includes the phrase “more study is needed” — there is always more to learn….(More)”