
Stefaan Verhulst

Ken Carbone in the Huffington Post: “Allow me to begin with the truth. I’ve never studied political science, run for public office nor held a position in government. For the last forty years I’ve led a design agency working with enduring brands across the globe. As with any experienced person in my profession, I have used research, deductive reasoning, logic and “design thinking” to solve complex problems and create opportunities. Great brands that are showing their age turn to our agency to get back on course. In this light, I believe American democracy is a prime target for some retooling….

The present campaign cycle has left many voters wondering how such divisiveness and national embarrassment could be happening in the land of the free and home of the brave. This could be viewed as symptomatic of deeper structural problems in our tradition-bound 240-year-old democracy. Great brands operate on an “innovate or die” model to ensure success. The continual improvement of how a business operates and adapts to market conditions is a sound and critical practice.

Although the current election frenzy will soon be over, I want to examine three challenges to our election process and propose possible solutions for consideration. I’ll use the same diagnostic thinking I use with major corporations:

Term Limits…

Voting and Voter registration…

Political Campaigns…

In June of this year I attended the annual leadership conference of AIGA, the professional association for design, in Raleigh, NC. A provocative question posed to a select group of designers was “What would you do if you were Secretary of Design?” The responses addressed issues concerning positive social change, education and Veterans Affairs. The audience was full of several hundred trained professionals whose everyday problem solving methods encourage divergent thinking to explore many solutions (possible or impossible) and then use convergent thinking to select and realize the best resolution. This is the very definition of “design thinking.” That leads to progress….(More)”.

Make Democracy Great Again: Let’s Try Some ‘Design Thinking’

Phil Howard at Culture Digitally: “This is the big year for computational propaganda—using immense data sets to manipulate public opinion over social media.  Both the Brexit referendum and US election have revealed the limits of modern democracy, and social media platforms are currently setting those limits. 

Platforms like Twitter and Facebook now provide a structure for our political lives.  We’ve always relied on many kinds of sources for our political news and information.  Family, friends, news organizations, charismatic politicians certainly predate the internet.  But whereas those are sources of information, social media now provides the structure for political conversation.  And the problem is that these technologies permit too much fake news, encourage our herding instincts, and aren’t expected to provide public goods.

First, social algorithms allow fake news stories from untrustworthy sources to spread like wildfire over networks of family and friends.  …

Second, social media algorithms provide very real structure to what political scientists often call “elective affinity” or “selective exposure”…

The third problem is that technology companies, including Facebook and Twitter, have been given a “moral pass” on the obligations we hold journalists and civil society groups to….

Facebook has run several experiments now, published in scholarly journals, demonstrating that they have the ability to accurately anticipate and measure social trends.  Whereas journalists and social scientists feel an obligation to openly analyze and discuss public preferences, we do not expect this of Facebook.  The network effects that clearly were unmeasured by pollsters were almost certainly observable to Facebook.  When it comes to news and information about politics, or public preferences on important social questions, Facebook has a moral obligation to share data and prevent computational propaganda.  The Brexit referendum and US election have taught us that Twitter and Facebook are now media companies.  Their engineering decisions are effectively editorial decisions, and we need to expect more openness about how their algorithms work.  And we should expect them to deliberate about their editorial decisions.

There are some ways to fix these problems.  Opaque software algorithms shape what people find in their news feeds.  We’ve all noticed fake news stories, often called clickbait, and while these can be an entertaining part of using the internet, it is bad when they are used to manipulate public opinion.  These algorithms work as “bots” on social media platforms like Twitter, where they were used in both the Brexit and US Presidential campaign to aggressively advance the case for leaving Europe and the case for electing Trump.  Similar algorithms work behind the scenes on Facebook, where they govern what content from your social networks actually gets your attention. 

So the first way to strengthen democratic practices is for academics, journalists, policy makers and the interested public to audit social media algorithms….(More)”.

Is Social Media Killing Democracy?

Essay by Stephane Lavertu in Public Administration Review: “Rapid advances in our ability to collect, analyze, and disseminate information are transforming public administration. This “big data” revolution presents opportunities for improving the management of public programs, but it also entails some risks. In addition to potentially magnifying well-known problems with public sector performance management—particularly the problem of goal displacement—the widespread dissemination of administrative data and performance information increasingly enables external political actors to peer into and evaluate the administration of public programs. The latter trend is consequential because external actors may have little sense of the validity of performance metrics and little understanding of the policy priorities they capture. The author illustrates these potential problems using recent research on U.S. primary and secondary education and suggests that public administration scholars could help improve governance in the data-rich future by informing the development and dissemination of organizational report cards that better capture the value that public agencies deliver….(More)”.

We All Need Help: “Big Data” and the Mismeasure of Public Administration

Paper by Charles Kenny and Ben Crisman: “Governments buy about $9 trillion worth of goods and services a year, and their procurement policies are increasingly subject to international standards and institutional regulation including the WTO Plurilateral Agreement on Government Procurement, Open Government Partnership commitments and International Financial Institution procurement rules. These standards focus on transparency and open competition as key tools to improve outcomes. While there is some evidence on the impact of competition on prices in government procurement, there is less on the impact of specific procurement rules including transparency on competition or procurement outcomes. Using a database of World Bank financed contracts, we explore the impact of a relatively minor procurement rule governing advertising on competition using regression discontinuity design and matching methods….(More)”
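The core intuition of a regression discontinuity design is that contracts just above and just below the value threshold that triggers a procurement rule should be otherwise comparable, so any jump in competition at the cutoff can be attributed to the rule. A minimal sketch of that comparison (the threshold, bandwidth, and data here are hypothetical illustrations, not figures from the paper):

```python
# Illustrative RDD sketch: contracts above a value threshold must be advertised;
# comparing the number of bids just above vs. just below the cutoff estimates
# the advertising rule's effect on competition. All numbers are made up.

THRESHOLD = 100_000   # hypothetical contract value triggering the advertising rule
BANDWIDTH = 20_000    # only compare contracts close to the cutoff

contracts = [  # (contract_value_usd, number_of_bids) -- synthetic data
    (85_000, 3), (92_000, 2), (95_000, 3), (99_000, 2),
    (101_000, 5), (104_000, 6), (110_000, 5), (118_000, 6),
    (200_000, 9), (40_000, 1),  # far from the cutoff: excluded by the bandwidth
]

def rdd_estimate(data, cutoff, bw):
    """Difference in mean bids just above vs. just below the cutoff."""
    below = [bids for value, bids in data if cutoff - bw <= value < cutoff]
    above = [bids for value, bids in data if cutoff <= value < cutoff + bw]
    return sum(above) / len(above) - sum(below) / len(below)

print(rdd_estimate(contracts, THRESHOLD, BANDWIDTH))  # 3.0
```

In practice the paper pairs this design with matching methods and covariate controls; the sketch shows only the discontinuity comparison itself.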

Results Through Transparency: Does Publicity Lead to Better Procurement?

Book edited by Svenja Falk, Andrea Römmele, and Michael Silverman: “This book focuses on the implementation of digital strategies in the public sectors in the US, Mexico, Brazil, India and Germany. The case studies presented examine different digital projects by looking at their impact as well as their alignment with their national governments’ digital strategies. The contributors assess the current state of digital government, analyze the contribution of digital technologies in achieving outcomes for citizens, discuss ways to measure digitalization and address the question of how governments oversee the legal and regulatory obligations of information technology. The book argues that most countries formulate good strategies for digital government, but do not effectively prescribe and implement corresponding policies and programs. Showing specific programs that deliver results can help policy makers, knowledge specialists and public-sector researchers to develop best practices for future national strategies….(More)”

Digital Government: Leveraging Innovation to Improve Public Sector Performance and Outcomes for Citizens

Springwise: “Following orders by the national government to improve the air quality of the New Delhi region by reducing air pollution, the Environment Pollution (Prevention and Control) Authority created the Hawa Badlo app. Designed for citizens to report cases of air pollution, each complaint is sent to the appropriate official for resolution.

Free to use, the app is available for both iOS and Android. Complaints are geo-tagged, and there are two different versions available – one for citizens and one for government officials. Officials must provide photographic evidence to close a case. The app itself produces weekly reports listing the number and status of complaints, along with any actions taken to resolve the problem. Currently focusing on pollution from construction, unpaved roads and the burning of garbage, the team behind the app plans to expand its use to cover other types of pollution as well.

From providing free wi-fi when the air is clean enough to mapping air-quality in real-time, air pollution solutions are increasingly involving citizens….(More)”

Crowd-sourcing pollution control in India
Dina Bass at BloombergTech: “Microsoft Corp. researchers want to give patients and doctors a new tool in the quest to find cancers earlier: web searches.

Lung cancer can be detected a year prior to current methods of diagnosis in more than one-third of cases by analyzing a patient’s internet searches for symptoms and demographic data that put them at higher risk, according to research from Microsoft published Thursday in the journal JAMA Oncology. The study shows it’s possible to use search data to give patients or doctors enough reason to seek cancer screenings earlier, improving the prospects for treatment for lung cancer, which is the leading cause of cancer deaths worldwide.

To train their algorithms, researchers Ryen White and Eric Horvitz scanned anonymous queries in Bing, the company’s search engine. They took searchers who had asked Bing something that indicated a recent lung cancer diagnosis, such as questions about specific treatments or the phrase “I was just diagnosed with lung cancer.”

Then they went back over the user’s previous searches to see if there were other queries that might have indicated the possibility of cancer prior to diagnosis. They looked for searches such as those related to symptoms, including bronchitis, chest pain and blood in sputum. The researchers reviewed other risk factors such as gender, age, race and whether searchers lived in areas with high levels of asbestos and radon, both of which increase the risk of lung cancer. And they looked for indications the user was a smoker, such as people searching for smoking cessation products like Nicorette gum.

How effective this method can be depends on how many false positives — people who don’t end up having cancer but are told they may — you are willing to tolerate, the researchers said. More false positives also mean catching more cases early. With one false positive in 1,000, 39 percent of cases can be caught a year earlier, according to the study. Dropping to one false positive per 100,000 still could allow researchers to catch 3 percent of cases a year earlier, Horvitz said.  The company published similar research on pancreatic cancer in June….(More)”
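The pipeline described above can be sketched roughly as: label users whose queries signal a recent diagnosis, then count risk signals in the queries they issued beforehand. A minimal illustration, assuming hypothetical query patterns and helper names (none taken from the actual study):

```python
import re

# Hypothetical patterns: queries signalling a recent diagnosis vs. earlier
# risk signals (symptoms, smoking-cessation products). Illustrative only.
DIAGNOSIS_PATTERNS = [r"just diagnosed with lung cancer", r"lung cancer treatment"]
RISK_PATTERNS = [r"blood in sputum", r"chest pain", r"bronchitis", r"nicorette"]

def label_user(history):
    """Return (is_case, prior_risk_signals) for a time-ordered query log.

    A user is a 'case' if any query matches a diagnosis pattern; risk signals
    are counted only among queries issued BEFORE the first diagnosis query.
    """
    diag_idx = next((i for i, q in enumerate(history)
                     if any(re.search(p, q.lower()) for p in DIAGNOSIS_PATTERNS)),
                    None)
    if diag_idx is None:
        return False, 0
    signals = sum(1 for q in history[:diag_idx]
                  if any(re.search(p, q.lower()) for p in RISK_PATTERNS))
    return True, signals

history = ["weather tomorrow", "chest pain when breathing",
           "blood in sputum causes", "I was just diagnosed with lung cancer"]
print(label_user(history))  # (True, 2)
```

The study's actual model also weighs demographics and location-based risk factors; this sketch shows only the retrospective labeling step.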

Microsoft Shows Searches Can Boost Early Detection of Lung Cancer

Mark Kinver at BBC News: “Rothamsted Research, a leading agricultural research institution, is attempting to make data from long-term experiments available to all.

In partnership with a data consultancy, it is developing a method to make complex results accessible and useable.

The institution is a member of the Godan Initiative that aims to make data available to the scientific community.

In September, Godan called on the public to sign its global petition to open agricultural research data.

“The continuing challenge we face is that the raw data alone is not sufficient on its own for people to make sense of it,” said Chris Rawlings, head of computational and systems biology at Rothamsted Research.

“This is because the long-term experiments are very complex, and they are looking at agriculture and agricultural ecosystems, so you need to know a lot about what the intentions of the studies are, how they are being used, and the changes that have taken place over time.”

However, he added: “Even with this level of complexity, we do see a significant number of users contacting us or developing links with us.”

One size fits all

The ability to provide open data to all is one of the research organisation’s national capabilities, and forms a defining principle of its web portal to the experiments carried out at its North Wyke Farm Platform in North Devon.

Rothamsted worked in partnership with Tessella, a data consultancy, on the data collected from the experiments, which focused on livestock pastures.

The information being collected, as often as every 15 minutes, includes water run-off levels, soil moisture, meteorological data, and soil nutrients, and this is expected to run for decades.

“The data is quite varied and quite diverse, and [Rothamsted] wants to make this data available to the wider research community,” explained Tessella’s Andrew Bowen.

“What Rothamsted needed was a way to store it and a way to present it in a portal in which people could see what they had to offer.”

He told BBC News that there were a number of challenges that needed to be tackled.

One was the management of the data, and the team from Tessella adopted an “agile scrum” approach.

“Basically, what you do is draw up a list of the requirements, of what you need, and we break the project down into short iterations, starting with the highest priority,” he said.

“This means that you are able to take a more exploratory approach to the process of developing software. This is very well suited to the research environment.”…(More)”

Open data aims to boost food security prospects

Bjarne Corydon, Vidhya Ganesan, and Martin Lundqvist at McKinsey: “By digitizing processes and making organizational changes, governments can enhance services, save money, and improve citizens’ quality of life.

As companies have transformed themselves with digital technologies, people are calling on governments to follow suit. By digitizing, governments can provide services that meet the evolving expectations of citizens and businesses, even in a period of tight budgets and increasingly complex challenges. Our estimates suggest that government digitization, using current technology, could generate over $1 trillion annually worldwide.

Digitizing a government requires attention to two major considerations: the core capabilities for engaging citizens and businesses, and the organizational enablers that support those capabilities (exhibit). These make up a framework for setting digital priorities. In this article, we look at the capabilities and enablers in this framework, along with guidelines and real-world examples to help governments seize the opportunities that digitization offers.

A digital government has core capabilities supported by organizational enablers.

Governments typically center their digitization efforts on four capabilities: services, processes, decisions, and data sharing. For each, we believe there is a natural progression from quick wins to transformative efforts….(More)”

See also: Digital by default: A guide to transforming government (PDF–474KB) and  “Never underestimate the importance of good government,”  a New at McKinsey blog post with coauthor Bjarne Corydon, director of the McKinsey Center for Government.

Transforming government through digitization

 at The Conversation: “…We need to overcome the boundaries that define the four different types of artificial intelligence, the barriers that separate machines from us – and us from them.

Type I AI: Reactive machines

The most basic types of AI systems are purely reactive, and have the ability neither to form memories nor to use past experiences to inform current decisions. Deep Blue, IBM’s chess-playing supercomputer, which beat international grandmaster Garry Kasparov in the late 1990s, is the perfect example of this type of machine.

Deep Blue can identify the pieces on a chess board and know how each moves. It can make predictions about what moves might be next for it and its opponent. And it can choose the best moves from among the possibilities.

But it doesn’t have any concept of the past, nor any memory of what has happened before. Apart from a rarely used chess-specific rule against repeating the same move three times, Deep Blue ignores everything before the present moment. All it does is look at the pieces on the chess board as it stands right now, and choose from possible next moves.
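The stateless perceive-then-act loop described above can be sketched as a pure function of the current state. This is an illustrative toy, not Deep Blue's actual evaluation: the game, the move set, and the scoring function are all hypothetical stand-ins.

```python
# A purely reactive (Type I) agent: its choice depends only on the current
# state -- no memory, no record of past moves. Game and scoring are made up.

def legal_moves(state):
    """Hypothetical game: from an integer state, move by -1, +1, or +2."""
    return [(move, state + move) for move in (-1, 1, 2)]

def evaluate(state):
    """Hypothetical static evaluation of a state (higher is better)."""
    return -abs(10 - state)  # prefer states near 10

def reactive_policy(state):
    # Look only at the present state and pick the move whose immediate
    # successor evaluates best. Nothing is remembered between calls.
    best_move, _ = max(legal_moves(state), key=lambda ms: evaluate(ms[1]))
    return best_move

print(reactive_policy(8))  # 2: moving to 10 gives the best-evaluated successor
```

Calling `reactive_policy` twice with the same state always yields the same answer, which is exactly the point: there is no internal history to make the second call any different from the first.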

This type of intelligence involves the computer perceiving the world directly and acting on what it sees. It doesn’t rely on an internal concept of the world. In a seminal paper, AI researcher Rodney Brooks argued that we should only build machines like this. His main reason was that people are not very good at programming accurate simulated worlds for computers to use, what is called in AI scholarship a “representation” of the world….

Type II AI: Limited memory

This Type II class contains machines that can look into the past. Self-driving cars do some of this already. For example, they observe other cars’ speed and direction. That can’t be done in just one moment, but rather requires identifying specific objects and monitoring them over time.

These observations are added to the self-driving cars’ preprogrammed representations of the world, which also include lane markings, traffic lights and other important elements, like curves in the road. They’re included when the car decides when to change lanes, to avoid cutting off another driver or being hit by a nearby car.

But these simple pieces of information about the past are only transient. They aren’t saved as part of the car’s library of experience it can learn from, the way human drivers compile experience over years behind the wheel…;
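That transient, sliding-window kind of memory can be sketched as a short rolling buffer: recent observations are kept just long enough to estimate another car's speed, then discarded rather than added to any long-term store of experience. The class, the window size, and the numbers below are illustrative, not from any real driving stack.

```python
from collections import deque

class TrackedObject:
    """Limited-memory (Type II) sketch: a short buffer of recent observations."""

    def __init__(self, window=3):
        self.recent = deque(maxlen=window)  # transient memory only: old
                                            # entries fall off automatically

    def observe(self, t, position):
        self.recent.append((t, position))

    def speed(self):
        """Estimated speed from the oldest and newest buffered observations."""
        if len(self.recent) < 2:
            return None
        (t0, p0), (t1, p1) = self.recent[0], self.recent[-1]
        return (p1 - p0) / (t1 - t0)

car = TrackedObject()
for t, pos in [(0, 0.0), (1, 14.0), (2, 29.0), (3, 45.0)]:
    car.observe(t, pos)
print(car.speed())  # (45.0 - 14.0) / (3 - 1) = 15.5
```

Note that the first observation has already been evicted by the time the speed is computed: the agent can use the recent past, but nothing accumulates into a library of experience.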

Type III AI: Theory of mind

We might stop here, and call this point the important divide between the machines we have and the machines we will build in the future. However, it is better to be more specific and discuss the types of representations machines need to form, and what those representations need to be about.

Machines in the next, more advanced, class not only form representations about the world, but also about other agents or entities in the world. In psychology, this is called “theory of mind” – the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behavior.

This capacity was crucial to how we humans formed societies, because it allowed us to have social interactions. Without understanding each other’s motives and intentions, and without taking into account what somebody else knows either about me or the environment, working together is at best difficult, at worst impossible.

If AI systems are indeed ever to walk among us, they’ll have to be able to understand that each of us has thoughts and feelings and expectations for how we’ll be treated. And they’ll have to adjust their behavior accordingly.

Type IV AI: Self-awareness

The final step of AI development is to build systems that can form representations about themselves. Ultimately, we AI researchers will have to not only understand consciousness, but build machines that have it….

While we are probably far from creating machines that are self-aware, we should focus our efforts toward understanding memory, learning and the ability to base decisions on past experiences….(More)”

Understanding the four types of AI, from reactive robots to self-aware beings
