Customer-Driven Government


Jane Wiseman at Data-Smart City Solutions: “Public trust in government is low — of the 43 industries tracked in the American Customer Satisfaction Index, only one ranks lower than the federal government in satisfaction levels. Local government ranks a bit higher than the federal government, but for most of the public, that makes little difference. It’s time for government to change that perception by listening to its customers and improving service delivery.

What can the cup holder in your car teach government about customer engagement? A cup holder would be hard to live without — it keeps a latte from spilling and has room for keys and a phone. But the cup holder was not always such a multi-tasker. The first ones were shallow indentations in the plastic on the inside of the glove box. Accelerate, and the drinks went flying. Did a brilliant automotive engineer decide that was a design flaw and fix it? No. It was only when Chrysler received more complaints about the cup holder than about anything else in their cars that they were forced to innovate. Don Clark, a DaimlerChrysler engineer known as the “Cup Holder King,” designed the first of the modern cup holders, debuting in the company’s 1984 minivans. The engineers thought they knew what their customers wanted (more powerful engines, better fuel economy, safety features) but it wasn’t until they listened to customers’ comments that they put in the cup holder. And sales took off.

Today, we’re awash in customer feedback, seemingly everywhere but government. Over the past decade, customer feedback ratings for products and services have shown up everywhere — whether in a review on Yelp, a “like” on Facebook, or a tweet about the virtues or shortcomings of a product or service. Ratings help draw attention to poor quality and allow companies to address these gaps. Many companies routinely follow up a customer interaction with a satisfaction survey. This data drives improvement efforts aimed at keeping customers happy. Some companies aggressively manage their online reviews, seeking to increase their NPS, or net promoter score. Many people really like to provide feedback — there are 77 million reviews on Yelp to date, according to the company. Imagine the power of that many reviews of government service.
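
For readers unfamiliar with the metric, NPS is simple arithmetic: the share of promoters (ratings of 9 or 10 on a 0–10 "How likely are you to recommend us?" question) minus the share of detractors (ratings of 0 through 6). A minimal sketch in Python, using made-up ratings:

```python
# Minimal Net Promoter Score calculation from hypothetical 0-10 ratings.
def net_promoter_score(ratings):
    """Return NPS: percent promoters (9-10) minus percent detractors (0-6)."""
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

print(net_promoter_score([10, 9, 8, 7, 6, 3, 10]))  # ~14.3
```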

If customer input can influence the automotive industry, and can help consumers make better decisions, what if we turned this energy toward government?  After all, the government is run “by the people” and “for the people” — what if citizens gave government real-time guidance on improving services?  And could leaders in government ask customers what they want, instead of presuming to know?  This paper explores these questions and suggests a way forward.

….

If I were a mayor, how would I begin harnessing customer feedback to improve service delivery? I would build a foundation for improving core city operations (trash pickup, pothole fixing, etc.) by using the same three questions Kansas City uses for follow-up surveys to all who contact 311. Upon that foundation I would layer additional outreach on a tactical, ad hoc basis. I would experiment with the growing body of tools for engaging the public in shaping tactical decisions, such as how to prioritize capital projects and where to locate bike-share hubs.
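
Kansas City's three survey questions aren't reproduced here, so the sketch below stands in three hypothetical 1–5 ratings (satisfaction, timeliness, courtesy) and shows how a city might roll 311 follow-up responses up by service type:

```python
# A sketch of aggregating 311 follow-up surveys by service type.
# The three questions (satisfaction, timeliness, courtesy) are hypothetical
# stand-ins; Kansas City's actual wording may differ.
from collections import defaultdict

QUESTIONS = ("satisfaction", "timeliness", "courtesy")

surveys = [  # invented example responses, rated 1-5
    {"service": "pothole", "satisfaction": 4, "timeliness": 3, "courtesy": 5},
    {"service": "pothole", "satisfaction": 2, "timeliness": 2, "courtesy": 4},
    {"service": "trash", "satisfaction": 5, "timeliness": 5, "courtesy": 5},
]

scores = defaultdict(lambda: {q: [] for q in QUESTIONS})
for response in surveys:
    for q in QUESTIONS:
        scores[response["service"]][q].append(response[q])

for service, by_question in scores.items():
    means = {q: sum(v) / len(v) for q, v in by_question.items()}
    print(service, means)
```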

To get even deeper insight into the customer experience, I might copy what Somerville, MA, has done with its Secret Resident program. Trained volunteers assess the efficiency, courtesy, and ease of use of selected city departments. The volunteers transact typical city services by phone or in person, and then document their customer experience. They rate the agencies and the 311 call center, providing assessments that can help improve customer service.

By listening to and leveraging data on constituent calls for service, government can move from a culture of reaction to a proactive culture of listening and learning from the data provided by the public. Engaging the public, and following through on the suggestions it offers, can increase not only the quality of government service but also the public's faith that government can listen and respond.

Every leader in government should commit to getting feedback from customers — it’s the only way to know how to increase their satisfaction with the services they receive. There is no more urgent time to improve the customer experience…(More)”

Anonymization and Risk


Paper by Ira Rubinstein and Woodrow Hartzog: “Perfect anonymization of data sets has failed. But the process of protecting data subjects in shared information remains integral to privacy practice and policy. While the deidentification debate has been vigorous and productive, there is no clear direction for policy. As a result, the law has been slow to adopt a holistic approach to protecting data subjects when data sets are released to others. Currently, the law is focused on whether an individual can be identified within a given data set. We argue that the better locus of data release policy is on the process of minimizing the risk of reidentification and sensitive attribute disclosure. Process-based data release policy, which resembles the law of data security, will help us move past the limitations of focusing on whether data sets have been “anonymized.” It draws upon different tactics to protect the privacy of data subjects, including accurate deidentification rhetoric, contracts prohibiting reidentification and sensitive attribute disclosure, data enclaves, and query-based strategies to match required protections with the level of risk. By focusing on process, data release policy can better balance privacy and utility where nearly all data exchanges carry some risk….(More)”
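
One way to picture the "query-based strategies" the excerpt lists: the data holder never releases the raw data set, only noisy answers to aggregate queries, in the style of differential privacy. A minimal sketch, with invented records and an assumed privacy parameter:

```python
# Illustrative query-based release: answer counting queries with Laplace
# noise calibrated to a privacy budget, instead of publishing the data set.
# The records and epsilon value are invented for illustration.
import random

def noisy_count(records, predicate, epsilon=0.5):
    """Return a count perturbed with Laplace(1/epsilon) noise.

    A counting query has sensitivity 1 (adding or removing one person
    changes it by at most 1), so noise scale b = 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    b = 1.0 / epsilon
    # A Laplace(0, b) draw is the difference of two Exponential(mean b) draws.
    noise = random.expovariate(1.0 / b) - random.expovariate(1.0 / b)
    return true_count + noise

patients = [{"age": 34, "smoker": True}, {"age": 51, "smoker": False}]
print(noisy_count(patients, lambda r: r["smoker"]))
```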

Meaningful Consent: The Economics of Privity in Networked Environments


Paper by Jonathan Cave: “Recent work on privacy (e.g. WEIS 2013/4, Meaningful Consent in the Digital Economy project) recognises the unanticipated consequences of data-centred legal protections in a world of shifting relations between data and human actors. But the rules have not caught up with these changes, and the irreversible consequences of ‘make do and mend’ are not often taken into account when changing policy.

Many of the most-protected ‘personal’ data are not personal at all, but are created to facilitate the operation of larger (e.g. administrative, economic, transport) systems or inadvertently generated by using such systems. The protection given to such data typically rests on notions of informed consent even in circumstances where such consent may be difficult to define, harder to give and nearly impossible to certify in meaningful ways. Such protections typically involve a mix of data collection, access and processing rules that are either imposed on behalf of individuals or are to be exercised by them. This approach adequately protects some personal interests, but not all – and is definitely not future-proof. Boundaries between allowing individuals to discover and pursue their interests on one side and behavioural manipulation on the other are often blurred. The costs (psychological and behavioural as well as economic and practical) of exercising control over one’s data are rarely taken into account as some instances of the Right to be Forgotten illustrate. The purposes for which privacy rights were constructed are often forgotten, or have not been reinterpreted in a world of ubiquitous monitoring data, multi-person ‘private exchanges,’ and multiple pathways through which data can be used to create and to capture value. Moreover, the parties who should be involved in making decisions – those connected by a network of informational relationships – are often not in contractual, practical or legal contact. These developments, associated with e.g. the Internet of Things, Cloud computing and big data analytics, should be recognised as challenging privacy rules and, more fundamentally, the adequacy of informed consent (e.g. to access specified data for specified purposes) as a means of managing innovative, flexible, and complex informational architectures.

This paper presents a framework for organising these challenges and using them to evaluate proposed policies, specifically in relation to complex, automated, automatic or autonomous data collection, processing and use. It argues for a movement away from a system of property rights based on individual consent to a values-based ‘privity’ regime – a collection of differentiated (relational as well as property) rights and consents that may be better able to accommodate innovations. Privity regimes (see deFillipis 2006) bundle together rights regarding e.g. confidential disclosure with ‘standing’ or voice options in relation to informational linkages.

The impacts are examined through a game-theoretic comparison between the proposed privity regime and existing privacy rights in personal data markets that include: conventional ‘behavioural profiling’ and search; situations where third parties may have complementary roles or conflicting interests in such data, and where data have value in relation both to specific individuals and to larger groups (e.g. ‘real-world’ health data); n-sided markets on data platforms (including social and crowd-sourcing platforms with long and short memories); and the use of ‘privity-like’ rights inherited by data objects and by autonomous systems whose ownership may be shared among many people….(More)”

Outcome-driven open innovation at NASA


New paper by Jennifer L. Gustetic et al in Space Policy: “In an increasingly connected and networked world, the National Aeronautics and Space Administration (NASA) recognizes the value of the public as a strategic partner in addressing some of our most pressing challenges. The agency is working to more effectively harness the expertise, ingenuity, and creativity of individual members of the public by enabling, accelerating, and scaling the use of open innovation approaches including prizes, challenges, and crowdsourcing. As NASA’s use of open innovation tools to solve a variety of types of problems and advance a number of outcomes continues to grow, challenge design is also becoming more sophisticated as our expertise and capacity (personnel, platforms, and partners) grows and develops. NASA has recently pivoted from talking about the benefits of challenge-driven approaches to the outcomes these types of activities yield. Challenge design should be informed by desired outcomes that align with NASA’s mission. This paper provides several case studies of NASA open innovation activities and maps the outcomes of those activities to the set of outcomes that challenges can help drive alongside traditional tools such as contracts, grants and partnerships….(More)”

Journal of Technology Science


Technology Science is an open access forum for any original material dealing primarily with a social, political, personal, or organizational benefit or adverse consequence of technology. Studies that characterize a technology-society clash or present an approach to better harmonize technology and society are especially welcomed. Papers can come from anywhere in the world.

Technology Science is interested in reviews of research, experiments, surveys, tutorials, and analyses. Writings may propose solutions or describe unsolved problems. Technology Science may also publish letters, short communications, and relevant news items. All submissions are peer-reviewed.

The scientific study of technology-society clashes is a cross-disciplinary pursuit, so papers in Technology Science may come from any of many possible disciplinary traditions, including but not limited to social science, computer science, political science, law, economics, policy, or statistics.

The Data Privacy Lab at Harvard University publishes Technology Science and its affiliated subset of papers called the Journal of Technology Science and maintains them online at techscience.org and at jots.pub. Technology Science is available free of charge over the Internet. While it is possible that bound paper copies of Technology Science content may be produced for a fee, all content will continue to be offered online at no charge….(More)”

Science Isn’t Broken


Christie Aschwanden at FiveThirtyEight: “Yet even in the face of overwhelming evidence, it’s hard to let go of a cherished idea, especially one a scientist has built a career on developing. And so, as anyone who’s ever tried to correct a falsehood on the Internet knows, the truth doesn’t always win, at least not initially, because we process new evidence through the lens of what we already believe. Confirmation bias can blind us to the facts; we are quick to make up our minds and slow to change them in the face of new evidence.

A few years ago, Ioannidis and some colleagues searched the scientific literature for references to two well-known epidemiological studies suggesting that vitamin E supplements might protect against cardiovascular disease. These studies were followed by several large randomized clinical trials that showed no benefit from vitamin E and one meta-analysis finding that at high doses, vitamin E actually increased the risk of death.

Despite the contradictory evidence from more rigorous trials, the first studies continued to be cited and defended in the literature. Shaky claims about beta carotene’s ability to reduce cancer risk and estrogen’s role in staving off dementia also persisted, even after they’d been overturned by more definitive studies. Once an idea becomes fixed, it’s difficult to remove from the conventional wisdom.

Sometimes scientific ideas persist beyond the evidence because the stories we tell about them feel true and confirm what we already believe. It’s natural to think about possible explanations for scientific results — this is how we put them in context and ascertain how plausible they are. The problem comes when we fall so in love with these explanations that we reject the evidence refuting them.

The media is often accused of hyping studies, but scientists are prone to overstating their results too.

Take, for instance, the breakfast study. Published in 2013, it examined whether breakfast eaters weigh less than those who skip the morning meal and if breakfast could protect against obesity. Obesity researcher Andrew Brown and his colleagues found that despite more than 90 mentions of this hypothesis in published media and journals, the evidence for breakfast’s effect on body weight was tenuous and circumstantial. Yet researchers in the field seemed blind to these shortcomings, overstating the evidence and using causative language to describe associations between breakfast and obesity. The human brain is primed to find causality even where it doesn’t exist, and scientists are not immune.

As a society, our stories about how science works are also prone to error. The standard way of thinking about the scientific method is: ask a question, do a study, get an answer. But this notion is vastly oversimplified. A more common path to truth looks like this: ask a question, do a study, get a partial or ambiguous answer, then do another study, and then do another to keep testing potential hypotheses and homing in on a more complete answer. Human fallibilities send the scientific process hurtling in fits, starts and misdirections instead of in a straight line from question to truth.

Media accounts of science tend to gloss over the nuance, and it’s easy to understand why. For one thing, reporters and editors who cover science don’t always have training on how to interpret studies. And headlines that read “weak, unreplicated study finds tenuous link between certain vegetables and cancer risk” don’t fly off the newsstands or bring in the clicks as fast as ones that scream “foods that fight cancer!”

People often joke about the herky-jerky nature of science and health headlines in the media — coffee is good for you one day, bad the next — but that back and forth embodies exactly what the scientific process is all about. It’s hard to measure the impact of diet on health, Nosek told me. “That variation [in results] occurs because science is hard.” Isolating how coffee affects health requires lots of studies and lots of evidence, and only over time and in the course of many, many studies does the evidence start to narrow to a conclusion that’s defensible. “The variation in findings should not be seen as a threat,” Nosek said. “It means that scientists are working on a hard problem.”

The scientific method is the most rigorous path to knowledge, but it’s also messy and tough. Science deserves respect exactly because it is difficult — not because it gets everything correct on the first try. The uncertainty inherent in science doesn’t mean that we can’t use it to make important policies or decisions. It just means that we should remain cautious and adopt a mindset that’s open to changing course if new data arises. We should make the best decisions we can with the current evidence and take care not to lose sight of its strength and degree of certainty. It’s no accident that every good paper includes the phrase “more study is needed” — there is always more to learn….(More)”

Open Data: A 21st Century Asset for Small and Medium Sized Enterprises


“The economic and social potential of open data is widely acknowledged. In particular, the business opportunities have received much attention. But for all the excitement, we still know very little about how and under what conditions open data really works.

To broaden our understanding of the use and impact of open data, the GovLab has a variety of initiatives and studies underway. Today, we share publicly our findings on how Small and Medium Sized Enterprises (SMEs) are leveraging open data for a variety of purposes. Our paper “Open Data: A 21st Century Asset for Small and Medium Sized Enterprises” seeks to build a portrait of the lifecycle of open data—how it is collected, stored and used. It outlines some of the most important parameters of an open data business model for SMEs….

The paper analyzes ten aspects of open data and establishes ten principles for its effective use by SMEs. Taken together, these offer a roadmap for any SME considering greater use or adoption of open data in its business.

Among the key findings included in the paper:

  • SMEs, which often lack access to data or sophisticated analytical tools to process large datasets, are likely to be one of the chief beneficiaries of open data.
  • Government data is the main category of open data being used by SMEs. A number of SMEs are also using open scientific and shared corporate data.
  • Open data is used primarily to serve the Business-to-Business (B2B) markets, followed by the Business-to-Consumer (B2C) markets. A number of the companies studied serve two or three market segments simultaneously.
  • Open data is usually a free resource, but SMEs are monetizing their open-data-driven services to build viable businesses. The most common revenue models include subscription-based services, advertising, fees for products and services, freemium models, licensing fees, lead generation and philanthropic grants.
  • The most significant challenges SMEs face in using open data include those concerning data quality and consistency, insufficient financial and human resources, and issues surrounding privacy.

This is just a sampling of findings and observations. The paper includes a number of additional observations concerning business and revenue models, product development, customer acquisition, and other subjects of relevance to any company considering an open data strategy.”

Content Volatility of Scientific Topics in Wikipedia: A Cautionary Tale


Paper by Wilson AM and Likens GE at PLOS: “Wikipedia has quickly become one of the most frequently accessed encyclopedic references, despite the ease with which content can be changed and the potential for ‘edit wars’ surrounding controversial topics. Little is known about how this potential for controversy affects the accuracy and stability of information on scientific topics, especially those with associated political controversy. Here we present an analysis of the Wikipedia edit histories for seven scientific articles and show that topics we consider politically but not scientifically “controversial” (such as evolution and global warming) experience more frequent edits with more words changed per day than pages we consider “noncontroversial” (such as the standard model in physics or heliocentrism). For example, over the period we analyzed, the global warming page was edited on average (geometric mean ±SD) 1.9±2.7 times resulting in 110.9±10.3 words changed per day, while the standard model in physics was only edited 0.2±1.4 times resulting in 9.4±5.0 words changed per day. The high rate of change observed in these pages makes it difficult for experts to monitor accuracy and contribute time-consuming corrections, to the possible detriment of scientific accuracy. As our society turns to Wikipedia as a primary source of scientific information, it is vital we read it critically and with the understanding that the content is dynamic and vulnerable to vandalism and other shenanigans….(More)”
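
The "geometric mean ±SD" summary the authors report can be computed directly from an edit history: take logs of the daily values, average them, and exponentiate. A minimal sketch on invented daily counts:

```python
# Geometric mean and geometric SD of words changed per day, the summary
# statistic quoted in the abstract. The daily counts below are invented.
import math

def geometric_stats(daily_values):
    """Return (geometric mean, geometric SD) of the positive values."""
    logs = [math.log(v) for v in daily_values if v > 0]  # zeros skipped here
    mean_log = sum(logs) / len(logs)
    var_log = sum((x - mean_log) ** 2 for x in logs) / len(logs)
    return math.exp(mean_log), math.exp(math.sqrt(var_log))

words_changed_per_day = [120, 80, 300, 95, 150]  # hypothetical
gm, gsd = geometric_stats(words_changed_per_day)
print(f"geometric mean = {gm:.1f}, geometric SD = {gsd:.1f}")
```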

e-Consultation Platforms: Generating or Just Recycling Ideas?


Chapter by Efthimios Tambouris, Anastasia Migotzidou, and Konstantinos Tarabanis in Electronic Participation: “A number of governments worldwide employ web-based e-consultation platforms to enable stakeholders to comment on draft legislation. Stakeholders’ input includes arguing in favour of or against the proposed legislation as well as proposing alternative ideas. In this paper, we empirically investigate the relationship between the volume of contributions in these platforms and the amount of new ideas that are generated. This enables us to determine whether participants in such platforms keep generating new ideas or just recycle a finite number of ideas. We capitalised on argumentation models to code and analyse a large number of draft law consultations published in opengov.gr, the official e-consultation platform for draft legislation in Greece. Our results suggest that as the number of posts grows, the number of new ideas continues to increase. The results of this study improve our understanding of the dynamics of these consultations and enable us to design better platforms….(More)”
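
The chapter's central question, whether new ideas keep arriving as posts accumulate, reduces to tracking the cumulative number of distinct idea codes across an ordered list of posts. A rough sketch, with an invented coding (in the study, the mapping from posts to ideas comes from manual argumentation coding):

```python
# For each successive post, count how many distinct ideas have appeared so
# far. If the curve keeps rising, contributors are still generating new
# ideas rather than recycling old ones. The coded posts below are invented.
def cumulative_new_ideas(post_idea_ids):
    """Return the running count of distinct idea codes, post by post."""
    seen, curve = set(), []
    for idea in post_idea_ids:
        seen.add(idea)
        curve.append(len(seen))
    return curve

posts = ["A", "B", "A", "C", "B", "D", "A", "E"]  # hypothetical idea codes
print(cumulative_new_ideas(posts))  # [1, 2, 2, 3, 3, 4, 4, 5]
```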

Policy makers’ perceptions on the transformational effect of Web 2.0 technologies on public services delivery


Paper by Manuel Pedro Rodríguez Bolívar at Electronic Commerce Research: “The growing participation in social networking sites is altering the nature of social relations and changing the nature of political and public dialogue. This paper contributes to the current debate on Web 2.0 technologies and their implications for local governance, identifying the perceptions of policy makers on the use of Web 2.0 in providing public services and on the changing roles that could arise from the resulting interaction between local governments and their stakeholders. The results obtained suggest that policy makers are willing to implement Web 2.0 technologies in providing public services, but preferably under the Bureaucratic model framework, thus retaining a leading role in this implementation. The learning curve of local governments in the use of Web 2.0 technologies is a factor that could influence policy makers’ perceptions. In this respect, many research gaps are identified and further study of the question is recommended….(More)”