The Guardian: In our livechat on 28 February the experts discussed how to connect up government and citizens online. Digital public services are not just for ‘techno wizzy people’, so government should make them easier for everyone… Read the livechat in full
Michael Sanders, head of research for the Behavioural Insights Team – @mike_t_sanders
It’s important that government is a part of people’s lives: when people interact with government it shouldn’t be a weird and alienating experience, but one that feels part of their everyday lives.
Online services are still too often difficult to use: most people who use the HMRC website will do so infrequently, and will forget its many nuances between visits. This is getting better but there’s a long way to go.
Digital by default keeps things simple: one of our main findings from our research on improving public services is that we should do all we can to “make it easy”.
There is always a risk of exclusion: we should avoid “digital by default” becoming “digital only”.
Ben Matthews, head of communications at Futuregov – @benrmatthews
We prefer digital by design to digital by default: sometimes people can use technology badly, under the guise of ‘digital by default’. We should take a more thoughtful approach to technology, using it as a means to an end – to help us be open, accountable and human.
Leadership is important: you can get enthusiasm from the frontline or younger workers who are comfortable with digital tools, but until they’re empowered by the top of the organisation to use them actively and effectively, we’ll see little progress.
Jargon scares people off: ‘big data’ or ‘open data’, for example…”
Predicting Individual Behavior with Social Networks
Article by Sharad Goel and Daniel Goldstein (Microsoft Research): “With the availability of social network data, it has become possible to relate the behavior of individuals to that of their acquaintances on a large scale. Although the similarity of connected individuals is well established, it is unclear whether behavioral predictions based on social data are more accurate than those arising from current marketing practices. We employ a communications network of over 100 million people to forecast highly diverse behaviors, from patronizing an off-line department store to responding to advertising to joining a recreational league. Across all domains, we find that social data are informative in identifying individuals who are most likely to undertake various actions, and moreover, such data improve on both demographic and behavioral models. There are, however, limits to the utility of social data. In particular, when rich transactional data were available, social data did little to improve prediction.”
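The core idea in the abstract above — ranking individuals by the behavior of their network contacts — can be sketched in a few lines. This is a minimal illustration, not the authors' method: the network, the names, and the adopted-set below are invented for demonstration.

```python
# Hypothetical toy network: who has already adopted a behavior, and who
# is connected to whom. All names and edges are made up for illustration.
adopted = {"ana", "bo"}
contacts = {
    "carl": {"ana", "bo", "dee"},   # 2 of 3 contacts adopted
    "dee":  {"carl"},               # 0 of 1
    "eve":  {"ana", "frank"},       # 1 of 2
}

def social_score(person):
    """Fraction of a person's contacts who have already adopted."""
    friends = contacts[person]
    return sum(f in adopted for f in friends) / len(friends)

# Rank people by social score: those with the most adopting contacts first.
ranking = sorted(contacts, key=social_score, reverse=True)
assert ranking[0] == "carl"
```

In practice this score would be compared against (or combined with) demographic and transactional features, which is exactly the comparison the paper runs at scale.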
The benefits—and limits—of decision models
Article by Phil Rosenzweig in McKinsey Quarterly: “The growing power of decision models has captured plenty of C-suite attention in recent years. Combining vast amounts of data and increasingly sophisticated algorithms, modeling has opened up new pathways for improving corporate performance. Models can be immensely useful, often making very accurate predictions or guiding knotty optimization choices and, in the process, can help companies to avoid some of the common biases that at times undermine leaders’ judgments.
Yet when organizations embrace decision models, they sometimes overlook the need to use them well. In this article, I’ll address an important distinction between outcomes leaders can influence and those they cannot. For things that executives cannot directly influence, accurate judgments are paramount and the new modeling tools can be valuable. However, when a senior manager can have a direct influence over the outcome of a decision, the challenge is quite different. In this case, the task isn’t to predict what will happen but to make it happen. Here, positive thinking—indeed, a healthy dose of management confidence—can make the difference between success and failure.
Where models work well
Examples of successful decision models are numerous and growing. Retailers gather real-time information about customer behavior by monitoring preferences and spending patterns. They can also run experiments to test the impact of changes in pricing or packaging and then rapidly observe the quantities sold. Banks approve loans and insurance companies extend coverage, basing their decisions on models that are continually updated, factoring in the most information to make the best decisions.
Some recent applications are truly dazzling. Certain companies analyze masses of financial transactions in real time to detect fraudulent credit-card use. A number of companies are gathering years of data about temperature and rainfall across the United States to run weather simulations and help farmers decide what to plant and when. Better risk management and improved crop yields are the result.
Other examples of decision models border on the humorous. Garth Sundem and John Tierney devised a model to shed light on what they described, tongues firmly in cheek, as one of the world’s great unsolved mysteries: how long will a celebrity marriage last? They came up with the Sundem/Tierney Unified Celebrity Theory, which predicted the length of a marriage based on the couple’s combined age (older was better), whether either had tied the knot before (failed marriages were not a good sign), and how long they had dated (the longer the better). The model also took into account fame (measured by hits on a Google search) and sex appeal (the share of those Google hits that came up with images of the wife scantily clad). With only a handful of variables, the model did a very good job of predicting the fate of celebrity marriages over the next few years.
Models have also shown remarkable power in fields that are usually considered the domain of experts. With data from France’s premier wine-producing regions, Bordeaux and Burgundy, Princeton economist Orley Ashenfelter devised a model that used just three variables to predict the quality of a vintage: winter rainfall, harvest rainfall, and average growing-season temperature. To the surprise of many, the model outperformed wine connoisseurs.
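A model like Ashenfelter's is just a linear combination of its three variables. The sketch below shows the shape of such a model; the coefficients, intercept, and inputs are hypothetical placeholders chosen only so the signs match the article's description (wet winters and warm growing seasons help, rain at harvest hurts), not the published estimates.

```python
def vintage_quality(winter_rain_mm, harvest_rain_mm, avg_temp_c,
                    coef=(0.01, -0.02, 0.5), intercept=-5.0):
    """Linear vintage-quality score from three weather variables.

    Coefficients are illustrative: positive weight on winter rainfall and
    growing-season temperature, negative weight on harvest rainfall.
    """
    b_winter, b_harvest, b_temp = coef
    return (intercept
            + b_winter * winter_rain_mm
            + b_harvest * harvest_rain_mm
            + b_temp * avg_temp_c)

# Compare two hypothetical vintages: warm with a dry harvest vs. cool and wet.
good = vintage_quality(600, 50, 19.5)
poor = vintage_quality(600, 250, 16.0)
assert good > poor
```

In the real model the coefficients come from regressing historical vintage prices on the weather record; the point of the example is only that three inputs and four fitted numbers suffice to outpredict the connoisseurs.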
Why do decision models perform so well? In part because they can gather vast quantities of data, but also because they avoid common biases that undermine human judgment. People tend to be overly precise, believing that their estimates will be more accurate than they really are. They suffer from the recency bias, placing too much weight on the most immediate information. They are also unreliable: ask someone the same question on two different occasions and you may get two different answers. Decision models have none of these drawbacks; they weigh all data objectively and evenly. No wonder they do better than humans.
Can we control outcomes?
With so many impressive examples, we might conclude that decision models can improve just about anything. That would be a mistake. Executives need not only to appreciate the power of models but also to be cognizant of their limits.
Look back over the previous examples. In every case, the goal was to make a prediction about something that could not be influenced directly. Models can estimate whether a loan will be repaid but won’t actually change the likelihood that payments will arrive on time, give borrowers a greater capacity to pay, or make sure they don’t squander their money before payment is due. Models can predict the rainfall and days of sunshine on a given farm in central Iowa but can’t change the weather. They can estimate how long a celebrity marriage might last but won’t help it last longer or cause another to end sooner. They can predict the quality of a wine vintage but won’t make the wine any better, reduce its acidity, improve the balance, or change the undertones. For these sorts of estimates, finding ways to avoid bias and maintain accuracy is essential.
Executives, however, are not concerned only with predicting things they cannot influence. Their primary duty—as the word execution implies—is to get things done. The task of leadership is to mobilize people to achieve a desired end. For that, leaders need to inspire their followers to reach demanding goals, perhaps even to do more than they have done before or believe is possible. Here, positive thinking matters. Holding a somewhat exaggerated level of self-confidence isn’t a dangerous bias; it often helps to stimulate higher performance.
This distinction seems simple but it’s often overlooked. In our embrace of decision models, we sometimes forget that so much of life is about getting things done, not predicting things we cannot control.
…
Improving models over time
Part of the appeal of decision models lies in their ability to make predictions, to compare those predictions with what actually happens, and then to evolve so as to make more accurate predictions. In retailing, for example, companies can run experiments with different combinations of price and packaging, then rapidly obtain feedback and alter their marketing strategy. Netflix captures rapid feedback to learn what programs have the greatest appeal and then uses those insights to adjust its offerings. Models are not only useful at any particular moment but can also be updated over time to become more and more accurate.
Using feedback to improve models is a powerful technique but is more applicable in some settings than in others. Dynamic improvement depends on two features: first, the observation of results should not make any future occurrence either more or less likely and, second, the feedback cycle of observation and adjustment should happen rapidly. Both conditions hold in retailing, where customer behavior can be measured without directly altering it and results can be applied rapidly, with prices or other features changed almost in real time. They also hold in weather forecasting, since daily measurements can refine models and help to improve subsequent predictions. The steady improvement of models that predict weather—from an average error (in the maximum temperature) of 6 degrees Fahrenheit in the early 1970s to 5 degrees in the 1990s and just 4 by 2010—is testimony to the power of updated models.
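The feedback cycle described above — predict, observe, adjust, predict again — can be sketched with a toy forecaster that corrects a fixed baseline using an exponential moving average of its own errors. Everything here (the class, the baseline, the learning rate, the observations) is a made-up illustration of the mechanism, not any production forecasting system.

```python
class SelfCorrectingForecaster:
    """A static model output plus a bias term learned from observed outcomes."""

    def __init__(self, baseline, learning_rate=0.2):
        self.baseline = baseline  # what the static model predicts
        self.bias = 0.0           # correction learned from feedback
        self.lr = learning_rate

    def predict(self):
        return self.baseline + self.bias

    def update(self, observed):
        # Move the correction a fraction of the way toward the latest error.
        error = observed - self.predict()
        self.bias += self.lr * error

# Outcomes consistently run above the baseline of 70, so the correction
# drifts upward and later predictions improve.
f = SelfCorrectingForecaster(baseline=70.0)
for observed in [74, 75, 73, 76, 74]:
    f.update(observed)
assert f.predict() > 70.0
```

Note that both of Rosenzweig's conditions are built into the toy: observing an outcome does not change future outcomes, and the adjustment happens immediately after each observation. Remove either condition, as the next paragraph does, and this loop stops working.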
For other events, however, these two conditions may not be present. As noted, executives not only estimate things they cannot affect but are also charged with bringing about outcomes. Some of the most consequential decisions of all—including the launch of a new product, entry into a new market, or the acquisition of a rival—are about mobilizing resources to get things done. Furthermore, the results are not immediately visible and may take months or years to unfold. The ability to gather and insert objective feedback into a model, to update it, and to make a better decision the next time just isn’t present.
None of these caveats call into question the considerable power of decision analysis and predictive models in so many domains. They help underscore the main point: an appreciation of decision analytics is important, but an understanding of when these techniques are useful and of their limitations is essential, too…”
Trust, Computing, and Society
New book edited by Richard H. R. Harper: “The Internet has altered how people engage with each other in myriad ways, including offering opportunities for people to act distrustfully. This fascinating set of essays explores the question of trust in computing from technical, socio-philosophical, and design perspectives. Why has the identity of the human user been taken for granted in the design of the Internet? What difficulties ensue when it is understood that security systems can never be perfect? What role does trust have in society in general? How is trust to be understood when trying to describe activities as part of a user requirement program? What questions of trust arise in a time when data analytics are meant to offer new insights into user behavior and when users are confronted with different sorts of digital entities? These questions and their answers are of paramount interest to computer scientists, sociologists, philosophers, and designers confronting the problem of trust.
- Brings together authors from a variety of disciplines
- Can be adopted in multiple course areas: computer science, philosophy, sociology, anthropology
- Integrated, multidisciplinary approach to understanding trust as it relates to modern computing”
Table of Contents
1. Introduction and overview Richard Harper
Part I. The Topography of Trust and Computing:
2. The role of trust in cyberspace David Clark
3. The new face of the internet Thomas Karagiannis
4. Trust as a methodological tool in security engineering George Danezis
Part II. Conceptual Points of View:
5. Computing and the search for trust Tom Simpson
6. The worry about trust Olli Lagerspetz
7. The inescapability of trust Bob Anderson and Wes Sharrock
8. Trust in interpersonal interaction and cloud computing Rod Watson
9. Trust, social identity, and computation Charles Ess
Part III. Trust in Design:
10. Design for trusted and trustworthy services M. Angela Sasse and Iacovos Kirlappos
11. Dialogues: trust in design Richard Banks
12. Trusting oneself Richard Harper and William Odom
13. Reflections on trust, computing and society Richard Harper
Bibliography.
Service lets web users sell their data for cash
Springwise: “Most people are uneasy about companies making money from the personal data they make available online, but are happy to turn a blind eye if it means they can continue using services such as Facebook for free. Aiming to give web users more control over what they share, Datacoup is a marketplace that lets anyone sell their personal information direct to advertisers.
The data we create on platforms such as Facebook, Twitter, Amazon and Google are worth billions of dollars to advertisers, data brokers and businesses. Through Datacoup, users pick and choose basic information, real time social feeds and even credit and debit card purchases if they’re happy to share them with advertisers, as well as deciding which brands can buy their information. Datacoup stores the data — which is all anonymous — under bank-level encryption and acts as a broker to sell it to businesses who want it. It then hands a portion of the sale — typically around USD 8 — back to users on a monthly basis…”
Are Cities Losing Control Over 'Smart' Initiatives?
Opinion by Alex Marshall in GovTech: “From the thermostats on our walls to the sensors under the asphalt of our streets, digital technology – the so-called Internet of things – is pervading and infecting every aspect of our lives.
As this technology comes to cities, whether lazy suburban ones or frenetic urban centers, it is increasingly wearing the banner of “Smart Cities.” Like those other S-words and phrases, such as smart growth and sustainability, a smart city can be just about anything to anybody, and therein lies both its utility and danger. I use the term to mean the marrying of our places with the telecommunications revolution that has taken hold over the last half century, including the silicon chip, the Internet, the fiber optic line and broadband networks.
Because this transformation is so broad and deep, it’s impossible to list or even dream of all the different ways we will reshape our communities, any more than we could 100 years ago name all the ways the then-new technologies of electricity or phone service would be employed. But we can list some of the ways digital technologies are being used right now. It’s sensors in sewers, face-recognizing cameras in plazas, and individual streetlights being controlled through a dial in an office at City Hall. It’s entire new cities arising out of the ground, like Songdo in South Korea or others in the Middle East….
But as wondrous as these new technologies are, we should remember an old truth: Whether it’s the silicon chip or the entire Internet, they are just tools that deliver power and possibilities to whoever wields them. So, it’s important to know and to think about who will and should control these tools. A policeman can use street cameras with facial recognition software to look for a thief, or a dictator can use them to hunt for dissidents. So far, different cities even within the same country are answering that question differently.”
Overcoming 'Tragedies of the Commons' with a Self-Regulating, Participatory Market Society
Paper by Dirk Helbing: “Our society is fundamentally changing. These days, almost nothing works without a computer chip. Processing power doubles every 18 months and will exceed the capabilities of human brains in about ten years from now. Some time ago, IBM’s Deep Blue computer already beat the best chess player. Meanwhile, computers perform about 70 percent of all financial transactions, and IBM’s Watson advises customers better than human telephone hotlines. Will computers and robots soon replace skilled labor? In many European countries, unemployment is reaching historic highs. The forthcoming economic and social impact of future information and communication technologies (ICT) will be huge – probably more significant than that caused by the steam engine, or by nano- or biotechnology.
The storage capacity for data is growing even faster than computational capacity. Soon we will generate more data in a single year than in the entire previous history of humankind. The “Internet of Things” will network trillions of sensors. Unimaginable amounts of data will be collected. Big Data is already being praised as the “oil of the 21st century”. What opportunities and risks does this create for our society, economy, and environment?”
DIY Toolkit
Development Impact and You Toolkit: “This is a toolkit on how to invent, adopt or adapt ideas that can deliver better results. It’s quick to use, simple to apply, and designed to help busy people working in development.
The toolkit does not come out of thin air: it draws on a study of many hundreds of tools currently being used – here we have included only the ones which practitioners found most useful. Many of them are well documented and have been widely used in other sectors. In that sense this toolkit is standing on the shoulders of giants, and we are happy to acknowledge that. All the tool descriptions include a key reference, so it is easy to trace back their origins and dive deeper into other publications about their application.”
The disruptive power of collaboration: An interview with Clay Shirky
McKinsey: “From the invention of the printing press to the telephone, the radio, and the Internet, the ways people collaborate change frequently, and the effects of those changes often reverberate through generations. In this video interview, Clay Shirky, author, New York University professor, and leading thinker on the impact of social media, explains the disruptive impact of technology on how people live and work—and on the economics of what we make and consume. This interview was conducted by McKinsey Global Institute partner Michael Chui, and an edited transcript of Shirky’s remarks follows….
Shirky:…The thing I’ve always looked at, because it is long-term disruptive, is changes in the way people collaborate. Because in the history of particularly the Western world, when communications tools come along and they change how people can contact each other, how they can share information, how they can find each other—we’re talking about the printing press, or the telephone, or the radio, or what have you—the changes that are left in the wake of those new technologies often span generations.
The printing press was a sustaining technology for the scientific revolution, the spread of newspapers, the spread of democracy, just on down the list. So the thing I always watch out for, when any source of disruption comes along, when anything that’s going to upset the old order comes along, is I look for what the collaborative penumbra is.”
Smart Governance: A Roadmap for Research and Practice
New report by Hans J. Scholl and Margit C. Scholl: “It has been the object of this article to make the case and present a roadmap for the study of the phenomena of smart governance as well as smart and open governance as an enactment of smart governance in practice. As a concept paper, this contribution aimed at sparking interest and at inspiring scholarly and practitioner discourse in this area of study inside the community of electronic government research and practice, and beyond. The roadmap presented here comprises and details seven elements of smart governance along with eight areas of focus in practice.
Smart governance along with its administrative enactment of smart and open government, it was argued, can help effectively address the three grand challenges to 21st century societal and individual well-being, which are (a) the Third Industrial Revolution with the information revolution at its core, (b) the rapidity of change and the lack of timely and effective government intervention, and (c) expansive government spending and exorbitant public debt financing. Although not seen as a panacea, it was also argued that smart governance principles could guide the relatively complex administrative enactment of smart and open government more intelligently than traditional static and inflexible governance approaches could do.
Since much of the road ahead, metaphorically speaking, leads through uncharted territory, dedicated research is needed that accompanies projects in this area and evaluates them. Research could further be embedded into practical projects providing for fast and systematic learning. We believe that such embedding of research into smart governance projects should become an integral part of smart projects’ agendas.”