Stefaan Verhulst

Article by Phil Rosenzweig in McKinsey Quarterly: “The growing power of decision models has captured plenty of C-suite attention in recent years. Combining vast amounts of data and increasingly sophisticated algorithms, modeling has opened up new pathways for improving corporate performance. Models can be immensely useful, often making very accurate predictions or guiding knotty optimization choices and, in the process, can help companies to avoid some of the common biases that at times undermine leaders’ judgments.
Yet when organizations embrace decision models, they sometimes overlook the need to use them well. In this article, I’ll address an important distinction between outcomes leaders can influence and those they cannot. For things that executives cannot directly influence, accurate judgments are paramount and the new modeling tools can be valuable. However, when a senior manager can have a direct influence over the outcome of a decision, the challenge is quite different. In this case, the task isn’t to predict what will happen but to make it happen. Here, positive thinking—indeed, a healthy dose of management confidence—can make the difference between success and failure.

Where models work well

Examples of successful decision models are numerous and growing. Retailers gather real-time information about customer behavior by monitoring preferences and spending patterns. They can also run experiments to test the impact of changes in pricing or packaging and then rapidly observe the quantities sold. Banks approve loans and insurance companies extend coverage, basing their decisions on models that are continually updated, factoring in the most information to make the best decisions.
Some recent applications are truly dazzling. Certain companies analyze masses of financial transactions in real time to detect fraudulent credit-card use. A number of companies are gathering years of data about temperature and rainfall across the United States to run weather simulations and help farmers decide what to plant and when. Better risk management and improved crop yields are the result.
Other examples of decision models border on the humorous. Garth Sundem and John Tierney devised a model to shed light on what they described, tongues firmly in cheek, as one of the world’s great unsolved mysteries: how long will a celebrity marriage last? They came up with the Sundem/Tierney Unified Celebrity Theory, which predicted the length of a marriage based on the couple’s combined age (older was better), whether either had tied the knot before (failed marriages were not a good sign), and how long they had dated (the longer the better). The model also took into account fame (measured by hits on a Google search) and sex appeal (the share of those Google hits that came up with images of the wife scantily clad). With only a handful of variables, the model did a very good job of predicting the fate of celebrity marriages over the next few years.
Models have also shown remarkable power in fields that are usually considered the domain of experts. With data from France’s premier wine-producing regions, Bordeaux and Burgundy, Princeton economist Orley Ashenfelter devised a model that used just three variables to predict the quality of a vintage: winter rainfall, harvest rainfall, and average growing-season temperature. To the surprise of many, the model outperformed wine connoisseurs.
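For readers curious what such a model looks like in practice, here is a minimal sketch of a three-variable regression in the spirit of Ashenfelter’s vintage equation. The data and variable names are illustrative assumptions, not his actual records or published coefficients.

```python
# Minimal sketch: fit a three-variable linear model (quality ~ winter rainfall,
# harvest rainfall, growing-season temperature) by ordinary least squares.
# All numbers below are synthetic placeholders, not real vintage data.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predictors for 30 vintages.
X = np.column_stack([
    rng.normal(600, 80, 30),    # winter rainfall (mm)
    rng.normal(80, 25, 30),     # harvest rainfall (mm)
    rng.normal(17.0, 1.2, 30),  # average growing-season temperature (°C)
])
y = rng.normal(90, 4, 30)       # hypothetical quality scores to fit against

# Add an intercept column and solve for the coefficients.
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict the quality of a new (hypothetical) vintage.
new_vintage = np.array([1.0, 650.0, 60.0, 17.5])
print("predicted quality:", new_vintage @ coef)
```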
Why do decision models perform so well? In part because they can gather vast quantities of data, but also because they avoid common biases that undermine human judgment. People tend to be overly precise, believing that their estimates will be more accurate than they really are. They suffer from the recency bias, placing too much weight on the most immediate information. They are also unreliable: ask someone the same question on two different occasions and you may get two different answers. Decision models have none of these drawbacks; they weigh all data objectively and evenly. No wonder they do better than humans.

Can we control outcomes?

With so many impressive examples, we might conclude that decision models can improve just about anything. That would be a mistake. Executives need not only to appreciate the power of models but also to be cognizant of their limits.
Look back over the previous examples. In every case, the goal was to make a prediction about something that could not be influenced directly. Models can estimate whether a loan will be repaid but won’t actually change the likelihood that payments will arrive on time, give borrowers a greater capacity to pay, or make sure they don’t squander their money before payment is due. Models can predict the rainfall and days of sunshine on a given farm in central Iowa but can’t change the weather. They can estimate how long a celebrity marriage might last but won’t help it last longer or cause another to end sooner. They can predict the quality of a wine vintage but won’t make the wine any better, reduce its acidity, improve the balance, or change the undertones. For these sorts of estimates, finding ways to avoid bias and maintain accuracy is essential.
Executives, however, are not concerned only with predicting things they cannot influence. Their primary duty—as the word execution implies—is to get things done. The task of leadership is to mobilize people to achieve a desired end. For that, leaders need to inspire their followers to reach demanding goals, perhaps even to do more than they have done before or believe is possible. Here, positive thinking matters. Holding a somewhat exaggerated level of self-confidence isn’t a dangerous bias; it often helps to stimulate higher performance.
This distinction seems simple but it’s often overlooked. In our embrace of decision models, we sometimes forget that so much of life is about getting things done, not predicting things we cannot control.

Improving models over time

Part of the appeal of decision models lies in their ability to make predictions, to compare those predictions with what actually happens, and then to evolve so as to make more accurate predictions. In retailing, for example, companies can run experiments with different combinations of price and packaging, then rapidly obtain feedback and alter their marketing strategy. Netflix captures rapid feedback to learn what programs have the greatest appeal and then uses those insights to adjust its offerings. Models are not only useful at any particular moment but can also be updated over time to become more and more accurate.
Using feedback to improve models is a powerful technique but is more applicable in some settings than in others. Dynamic improvement depends on two features: first, the observation of results should not make any future occurrence either more or less likely and, second, the feedback cycle of observation and adjustment should happen rapidly. Both conditions hold in retailing, where customer behavior can be measured without directly altering it and results can be applied rapidly, with prices or other features changed almost in real time. They also hold in weather forecasting, since daily measurements can refine models and help to improve subsequent predictions. The steady improvement of models that predict weather—from an average error (in the maximum temperature) of 6 degrees Fahrenheit in the early 1970s to 5 degrees in the 1990s and just 4 by 2010—is testimony to the power of updated models.
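As an aside, the observe-and-adjust cycle described above can be sketched in a few lines of code. This is an illustrative toy only: the update rule, learning rate, and numbers are assumptions, not taken from any actual forecasting system.

```python
# Toy feedback loop: apply a bias correction to each forecast, compare with the
# observed outcome, and nudge the correction toward the latest error.
def update_bias(bias: float, predicted: float, observed: float,
                learning_rate: float = 0.1) -> float:
    return bias + learning_rate * (observed - predicted)

bias = 0.0
# (raw forecast, observed outcome) pairs -- made-up numbers for illustration.
history = [(75.0, 78.0), (80.0, 83.0), (70.0, 72.0)]
for raw_forecast, actual in history:
    corrected = raw_forecast + bias              # apply the current correction
    bias = update_bias(bias, corrected, actual)  # learn from the outcome
    print(f"forecast {corrected:.1f}, actual {actual:.1f}, new bias {bias:.2f}")
```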
For other events, however, these two conditions may not be present. As noted, executives not only estimate things they cannot affect but are also charged with bringing about outcomes. Some of the most consequential decisions of all—including the launch of a new product, entry into a new market, or the acquisition of a rival—are about mobilizing resources to get things done. Furthermore, the results are not immediately visible and may take months or years to unfold. The ability to gather and insert objective feedback into a model, to update it, and to make a better decision the next time just isn’t present.
None of these caveats call into question the considerable power of decision analysis and predictive models in so many domains. They help underscore the main point: an appreciation of decision analytics is important, but an understanding of when these techniques are useful and of their limitations is essential, too…”

The benefits—and limits—of decision models

New book edited by Richard H. R. Harper: “The Internet has altered how people engage with each other in myriad ways, including offering opportunities for people to act distrustfully. This fascinating set of essays explores the question of trust in computing from technical, socio-philosophical, and design perspectives. Why has the identity of the human user been taken for granted in the design of the Internet? What difficulties ensue when it is understood that security systems can never be perfect? What role does trust have in society in general? How is trust to be understood when trying to describe activities as part of a user requirement program? What questions of trust arise in a time when data analytics are meant to offer new insights into user behavior and when users are confronted with different sorts of digital entities? These questions and their answers are of paramount interest to computer scientists, sociologists, philosophers, and designers confronting the problem of trust.

  • Brings together authors from a variety of disciplines
  • Can be adopted in multiple course areas: computer science, philosophy, sociology, anthropology
  • Integrated, multidisciplinary approach to understanding trust as it relates to modern computing”

Table of Contents

1. Introduction and overview Richard Harper
Part I. The Topography of Trust and Computing:
2. The role of trust in cyberspace David Clark
3. The new face of the internet Thomas Karagiannis
4. Trust as a methodological tool in security engineering George Danezis
Part II. Conceptual Points of View:
5. Computing and the search for trust Tom Simpson
6. The worry about trust Olli Lagerspetz
7. The inescapability of trust Bob Anderson and Wes Sharrock
8. Trust in interpersonal interaction and cloud computing Rod Watson
9. Trust, social identity, and computation Charles Ess
Part III. Trust in Design:
10. Design for trusted and trustworthy services M. Angela Sasse and Iacovos Kirlappos
11. Dialogues: trust in design Richard Banks
12. Trusting oneself Richard Harper and William Odom
13. Reflections on trust, computing and society Richard Harper
Bibliography.

Trust, Computing, and Society

Springwise: “Most people are uneasy about companies making money from the personal data they make available online, but are happy to turn a blind eye if it means they can continue using services such as Facebook for free. Aiming to give web users more control over what they share, Datacoup is a marketplace that lets anyone sell their personal information direct to advertisers.
The data we create on platforms such as Facebook, Twitter, Amazon and Google are worth billions of dollars to advertisers, data brokers and businesses. Through Datacoup, users pick and choose basic information, real-time social feeds and even credit and debit card purchases if they’re happy to share them with advertisers, as well as deciding which brands can buy their information. Datacoup stores the data — which is all anonymous — under bank-level encryption and acts as a broker to sell it to businesses who want it. It then hands a portion of the sale — typically around USD 8 — back to users on a monthly basis…”

Service lets web users sell their data for cash

Opinion by Alex Marshall in GovTech: “From the thermostats on our walls to the sensors under the asphalt of our streets, digital technology – the so-called Internet of things – is pervading and infecting every aspect of our lives.
As this technology comes to cities, whether lazy suburban ones or frenetic urban centers, it is increasingly wearing the banner of “Smart Cities.” Like those other S-words and phrases, such as smart growth and sustainability, a smart city can be just about anything to anybody, and therein lies both its utility and danger. I use the term to mean the marrying of our places with the telecommunications revolution that has taken hold over the last half century, including the silicon chip, the Internet, the fiber optic line and broadband networks.
Because this transformation is so broad and deep, it’s impossible to list or even dream of all the different ways we will reshape our communities, any more than we could 100 years ago name all the ways the then-new technologies of electricity or phone service would be employed. But we can list some of the ways digital technologies are being used right now. It’s sensors in sewers, face-recognizing cameras in plazas, and individual streetlights being controlled through a dial in an office at City Hall. It’s entire new cities arising out of the ground, like Songdo in South Korea or others in the Middle East….
But as wondrous as these new technologies are, we should remember an old truth: Whether it’s the silicon chip or the entire Internet, they are just tools that deliver power and possibilities to whoever wields them. So, it’s important to know and to think about who will and should control these tools. A policeman can use street cameras with facial recognition software to look for a thief, or a dictator can use them to hunt for dissidents. So far, different cities even within the same country are answering that question differently.”

Are Cities Losing Control Over 'Smart' Initiatives?

Paper by Dirk Helbing: “Our society is fundamentally changing. These days, almost nothing works without a computer chip. Processing power doubles every 18 months and will exceed the capabilities of human brains in about ten years from now. Some time ago, IBM’s Deep Blue computer beat the best chess player. Meanwhile, computers perform about 70 percent of all financial transactions, and IBM’s Watson advises customers better than human telephone hotlines. Will computers and robots soon replace skilled labor? In many European countries, unemployment is reaching historic highs. The forthcoming economic and social impact of future information and communication technologies (ICT) will be huge – probably more significant than that caused by the steam engine, or by nano- or biotechnology.
The storage capacity for data is growing even faster than computational capacity. We will soon generate more data in a single year than in the entire history of humankind. The “Internet of Things” will network trillions of sensors. Unimaginable amounts of data will be collected. Big Data is already being praised as the “oil of the 21st century”. What opportunities and risks does this create for our society, economy, and environment?”

Overcoming 'Tragedies of the Commons' with a Self-Regulating, Participatory Market Society

Development Impact and You Toolkit: “This is a toolkit on how to invent, adopt or adapt ideas that can deliver better results. It’s quick to use, simple to apply, and designed to help busy people working in development.


The tools do not come out of thin air. The toolkit draws on a study of many hundreds of tools currently in use – here we have included only the ones that practitioners found most useful. Many of them are well documented and have been widely used in other sectors. In that sense this toolkit is standing on the shoulders of giants, and we are happy to acknowledge that. All the tool descriptions include a key reference, so it is easy to trace their origins and dive deeper into other publications about their application.”

DIY Toolkit

McKinsey: “From the invention of the printing press to the telephone, the radio, and the Internet, the ways people collaborate change frequently, and the effects of those changes often reverberate through generations. In this video interview, Clay Shirky, author, New York University professor, and leading thinker on the impact of social media, explains the disruptive impact of technology on how people live and work—and on the economics of what we make and consume. This interview was conducted by McKinsey Global Institute partner Michael Chui, and an edited transcript of Shirky’s remarks follows….
Shirky:…The thing I’ve always looked at, because it is long-term disruptive, is changes in the way people collaborate. Because in the history of particularly the Western world, when communications tools come along and they change how people can contact each other, how they can share information, how they can find each other—we’re talking about the printing press, or the telephone, or the radio, or what have you—the changes that are left in the wake of those new technologies often span generations.
The printing press was a sustaining technology for the scientific revolution, the spread of newspapers, the spread of democracy, just on down the list. So the thing I always watch out for, when any source of disruption comes along, when anything that’s going to upset the old order comes along, is I look for what the collaborative penumbra is.”

The disruptive power of collaboration: An interview with Clay Shirky

New report by Hans J. Scholl and Margit C. Scholl: “It has been the object of this article to make the case and present a roadmap for the study of the phenomena of smart governance as well as smart and open governance as an enactment of smart governance in practice. As a concept paper, this contribution aimed at sparking interest and at inspiring scholarly and practitioner discourse in this area of study inside the community of electronic government research and practice, and beyond. The roadmap presented here comprises and details seven elements of smart governance along with eight areas of focus in practice.
Smart governance along with its administrative enactment of smart and open government, it was argued, can help effectively address the three grand challenges to 21st century societal and individual well-being, which are (a) the Third Industrial Revolution with the information revolution at its core, (b) the rapidity of change and the lack of timely and effective government intervention, and (c) expansive government spending and exorbitant public debt financing. Although not seen as a panacea, it was also argued that smart governance principles could guide the relatively complex administrative enactment of smart and open government more intelligently than traditional static and inflexible governance approaches could do.
Since much of the road ahead metaphorically speaking leads through uncharted territory, dedicated research is needed that accompanies projects in this area and evaluates them. Research could further be embedded into practical projects providing for fast and systematic learning. We believe that such embedding of research into smart governance projects should become an integral part of smart projects’ agendas.”

Smart Governance: A Roadmap for Research and Practice

New volume of Public Administration and Information Technology series: “Given this global context, and taking into account the needs of both academicians and practitioners, it is the intention of this book to shed light on the open government concept and, in particular:
• To provide comprehensive knowledge of recent major developments of open government around the world.
• To analyze the importance of open government efforts for public governance.
• To provide insightful analysis about those factors that are critical when designing, implementing and evaluating open government initiatives.
  • To discuss how contextual factors affect open government initiatives’ success or failure.
• To explore the existence of theoretical models of open government.
• To propose strategies to move forward and to address future challenges in an international context.”
Open Government – Opportunities and Challenges for Public Governance

Paper by Lee Rainie and Susannah Fox from Pew: “The overall verdict: The internet has been a plus for society and an especially good thing for individual users… This report is the first part of a sustained effort through 2014 by the Pew Research Center to mark the 25th anniversary of the creation of the World Wide Web by Sir Tim Berners-Lee. Berners-Lee wrote a paper on March 12, 1989 proposing an “information management” system that became the conceptual and architectural structure for the Web. He eventually released the code for his system—for free—to the world on Christmas Day in 1990. It became a milestone in easing the way for ordinary people to access documents and interact over a network of computers called the internet—a system that linked computers and that had been around for years. The Web became especially appealing after Web browsers were perfected in the early 1990s to facilitate graphical displays of pages on those linked computers.”

The Web at 25 in the U.S.
