Government innovations and the hype cycle


Danny Buerkli at the Centre for Public Impact: “The Gartner hype cycle tracks how technologies develop from initial conception to productive use. There is much excitement around different methodologies and technologies in the “government innovation” space, but which of these are hyped and which are truly productive?

Last year we made some educated guesses and placed ten government innovations along the hype cycle. This year, however, we went for something bigger and better. We created an entirely non-scientific poll and asked respondents to tell us where they thought these same ten government innovations sat on the hype cycle.

The innovations we included were artificial intelligence, blockchain, design thinking, policy labs, behavioural insights, open data, e-government, agile, lean and New Public Management.

Here is what we learned.

  1. For the most part, we’re still in the early days

On average, our respondents don’t think that any of the methods have made it into truly productive use. In fact, for seven out of the ten innovations, the majority of respondents believed that these were indeed still in the “technology trigger” phase.

Assuming that these innovations will steadily make their way along the hype cycle, we should expect a lot more hype (as they enter the “peak of inflated expectations”) and a lot more disappointment (as they descend into the “trough of disillusionment”) going forward. Government innovation advocates should take heed.
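The poll aggregation described above (for each innovation, which phase the majority of respondents chose) can be sketched in a few lines of Python. The phase names follow Gartner's labels, but the response counts below are invented for illustration and are not the survey's actual data:

```python
from collections import Counter

# The five Gartner hype-cycle phases, in order.
PHASES = [
    "technology trigger",
    "peak of inflated expectations",
    "trough of disillusionment",
    "slope of enlightenment",
    "plateau of productivity",
]

# Hypothetical responses: each respondent places an innovation in one phase.
responses = {
    "blockchain": ["technology trigger"] * 8
    + ["peak of inflated expectations"] * 2,
    "policy labs": ["technology trigger"] * 4
    + ["peak of inflated expectations"] * 3
    + ["slope of enlightenment"] * 3,
}

def modal_phase(votes):
    """Return the phase most respondents chose and its share of the vote."""
    phase, count = Counter(votes).most_common(1)[0]
    return phase, count / len(votes)

for innovation, votes in responses.items():
    phase, share = modal_phase(votes)
    print(f"{innovation}: {phase} ({share:.0%} of respondents)")
```

This also shows why the write-up distinguishes the *modal* placement from the *average* one: policy labs can have "technology trigger" as the most common answer while the spread of answers pulls the average toward the peak.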

  2. Policy Labs are believed to be at the “peak of inflated expectations”

This innovation attracted the highest level of disagreement from respondents. While almost two out of five people believe that policy labs are in the “technology trigger” phase, one out of five see them as having already reached the “slope of enlightenment”. On average, however, respondents believe policy labs to be in the “peak of inflated expectations”….

  3. Blockchain is seen as the most nascent government innovation

Our survey respondents rather unanimously believe that blockchain is at the very early stage of the “technology trigger” phase. Given that blockchain is often characterized as a solution in search of a problem, this view may not be surprising. The survey results also indicate that blockchain has a long way to go before it is used productively in government, but there are several ways this can be done.

  4. Artificial intelligence inspires a lot of confidence (in some)
  5. New Public Management is – still – overhyped?… (More)”.

Digital transformation’s people problem


Jen Kelchner at open source: …Arguably, the greatest chasm we see in our organizational work today is the actual transformation before, during, or after the implementation of a digital technology—because technology invariably crosses through and impacts people, processes, and culture. What are we transforming from? What are we transforming into? These are “people issues” as much as they are “technology issues,” but we too rarely acknowledge this.

Operating our organizations on open principles promises to spark new ways of thinking that can help us address this gap. Over the course of this three-part series, we’ll take a look at how the principle foundations of open play a major role in addressing the “people part” of digital transformation—and closing that gap before and during implementations.

The impact of digital transformation

The meaning of the term “digital transformation” has changed considerably in the last decade. For example, if you look at where organizations were in 2007, you’d see them grappling with the first iPhone. The focus then was on search engines, data mining, and methods of virtual collaboration.

A decade later in 2017, however, we’re investing in artificial intelligence, machine learning, and the Internet of Things. Our technologies have matured—but our organizational and cultural structures have not kept pace with them.

Value Co-creation In The Organizations of the Future, a recent research report from Aalto University, states that digital transformation has created opportunities to revolutionize and change existing business models, socioeconomic structures, legal and policy measures, organizational patterns, and cultural barriers. But we can only realize this potential if we address both the technological and the organizational aspects of digital transformation.

Four critical areas of digital transformation

Let’s examine four crucial elements involved in any digital transformation effort:

  • change management
  • the needs of the ecosystem
  • processes
  • silos

Any organization must address these four elements in advance of (ideally) or in conjunction with the implementation of a new technology if that organization is going to realize success and sustainability….(More)”.

We have unrealistic expectations of a tech-driven future utopia


Bob O’Donnell in RECODE: “No one likes to think about limits, especially in the tech industry, where the idea of putting constraints on almost anything is perceived as anathema.

In fact, the entire tech industry is arguably built on the concept of bursting through limitations and enabling things that weren’t possible before. New technology developments have clearly created incredible new capabilities and opportunities, and have generally helped improve the world around us.

But there does come a point — and I think we’ve arrived there — where it’s worth stepping back to both think about and talk about the potential value of, yes, technology limits … on several different levels.

On a technical level, we’ve reached a point where advances in computing applications like AI, or medical applications like gene splicing, are raising even more ethical questions than practical ones on issues such as how they work and for what applications they might be used. Not surprisingly, there aren’t any clear or easy answers to these questions, and it’s going to take a lot more time and thought to create frameworks or guidelines for both the appropriate and inappropriate uses of these potentially life-changing technologies.

Does this mean these kinds of technological advances should be stopped? Of course not. But having more discourse on the types of technologies that get created and released certainly needs to happen.

Even on a practical level, the need for limiting people’s expectations about what a technology can or cannot do is becoming increasingly important. With science-fiction-like advances becoming daily occurrences, it’s easy to fall into the trap of believing there are no limits to what a given technology can do. As a result, people are increasingly willing to believe and accept almost any kind of statements or predictions about the future of many increasingly well-known technologies, from autonomous driving to VR to AI and machine learning. I hate to say it, but it’s the fake news of tech.

Just as we’ve seen the fallout from fake news on all sides of the political perspective, so, too, are we starting to see that unbridled and unlimited expectations for certain new technologies are starting to have negative implications of their own. Essentially, we’re starting to build unrealistic expectations for a tech-driven nirvana that doesn’t clearly jibe with the realities of the modern world, particularly in the time frames that are often discussed….(More)”.

How AI Is Crunching Big Data To Improve Healthcare Outcomes


PSFK: “The state of your health shouldn’t be a mystery, nor should patients or doctors have to wait long to find answers to pressing medical concerns. In PSFK’s Future of Health Report, we dig deep into the latest in AI, big data algorithms and IoT tools that are enabling a new, more comprehensive overview of patient data collection and analysis. Machine support, patient information from medical records and conversations with doctors are combined with the latest medical literature to help form a diagnosis without detracting from doctor-patient relations.

The impact of improved AI helps patients form a baseline for well-being and is making changes all across the healthcare industry. AI not only streamlines intake processes and reduces processing volume at clinics, it also controls input and diagnostic errors within a patient record, allowing doctors to focus on patient care and communication, rather than data entry. AI also improves pattern recognition and early diagnosis by learning from multiple patient data sets.

By utilizing deep learning algorithms and software, healthcare providers can connect various libraries of medical information and scan databases of medical records, spotting patterns that lead to more accurate detection and greater efficiency in medical diagnosis and research. IBM Watson, which has previously been used to help identify genetic markers and develop drugs, is applying its neural learning networks to help doctors correctly diagnose heart abnormalities from medical imaging tests. By scanning thousands of images and learning from correct diagnoses, Watson is able to increase diagnostic accuracy, supporting doctors’ cardiac assessments.

Outside of the doctor’s office, AI is also being used to monitor patient vitals to help create a baseline for well-being. By monitoring health on a day-to-day basis, AI systems can alert patients and medical teams to abnormalities or changes from the baseline in real time, increasing positive outcomes. Take xbird, a mobile platform that uses artificial intelligence to help diabetics understand when hypoglycemic attacks will occur. The AI combines personal and environmental data points from over 20 sensors within mobile and wearable devices to create an automated personal diary and cross references it against blood sugar levels. Patients then share this data with their doctors in order to uncover their unique hypoglycemic triggers and better manage their condition.
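The monitoring pattern described above (establish a per-patient baseline, then alert on deviations from it in day-to-day readings) can be sketched simply. This is an illustrative sketch, not xbird's actual algorithm; the window, threshold, and glucose values are all invented:

```python
import statistics

def flag_anomalies(readings, window=7, threshold=2.0):
    """Flag readings more than `threshold` standard deviations away from
    the mean of the preceding `window` readings (the rolling baseline)."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        if stdev > 0 and abs(readings[i] - mean) > threshold * stdev:
            alerts.append((i, readings[i]))
    return alerts

# A stable glucose trace with one sharp drop (values in mg/dL, made up):
glucose = [95, 98, 96, 97, 99, 94, 96, 97, 95, 60, 96]
print(flag_anomalies(glucose))
```

A real system would fuse many sensor streams and use a learned model rather than a fixed z-score rule, but the shape is the same: baseline first, alerts on deviation.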

In China, meanwhile, web provider Baidu has debuted Melody, a chat-based medical assistant that helps individuals communicate their symptoms, learn of possible diagnoses and connect to medical experts….(More)”.

China seeks glimpse of citizens’ future with crime-predicting AI


Yingzhi Yang and Sherry Fei Ju in the Financial Times: “China, a surveillance state where authorities have unchecked access to citizens’ histories, is seeking to look into their future with technology designed to predict and prevent crime. Companies are helping police develop artificial intelligence they say will help them identify and apprehend suspects before criminal acts are committed. “If we use our smart systems and smart facilities well, we can know beforehand . . . who might be a terrorist, who might do something bad,” Li Meng, vice-minister of science and technology, said on Friday.

Facial recognition company Cloud Walk has been trialling a system that uses data on individuals’ movements and behaviour — for instance visits to shops where weapons are sold — to assess their chances of committing a crime. Its software warns police when a citizen’s crime risk becomes dangerously high, allowing the police to intervene. “The police are using a big-data rating system to rate highly suspicious groups of people based on where they go and what they do,” a company spokesperson told the Financial Times. Risks rise if the individual “frequently visits transport hubs and goes to suspicious places like a knife store”, the spokesperson added. China’s authoritarian government has always amassed personal data to monitor and control its citizens — whether they are criminals or suspected of politically sensitive activity. But new technology, from phones and computers to fast-developing AI software, is amplifying its capabilities. These are being used to crack down on even the most minor of infractions — facial recognition cameras, for instance, are also being used to identify and shame jaywalkers, according to state media. Mr Li said crime prediction would become an important use for AI technology in the government sphere.
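The rating mechanism the spokesperson describes, in which risk rises as a person visits certain categories of place, amounts to an additive score checked against an alert threshold. The following is a deliberately crude caricature of that description, not Cloud Walk's actual model; the categories, weights, and threshold are all invented:

```python
# Invented weights per location category (illustrative only).
LOCATION_WEIGHTS = {
    "knife store": 3.0,
    "transport hub": 1.0,
    "grocery store": 0.0,
}
ALERT_THRESHOLD = 5.0

def risk_score(visits):
    """Sum the weights over a person's logged location visits."""
    return sum(LOCATION_WEIGHTS.get(place, 0.0) for place in visits)

visits = ["grocery store", "transport hub", "knife store", "transport hub"]
score = risk_score(visits)
print(score, score > ALERT_THRESHOLD)
```

Even this toy version makes the civil-liberties concern concrete: the score is driven entirely by where someone goes, with no evidence of wrongdoing required.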

China’s crime-prediction technology relies on several AI techniques, including facial recognition and gait analysis, to identify people from surveillance footage. In addition, “crowd analysis” can be used to detect “suspicious” patterns of behaviour in crowds, for example to single out thieves from ordinary passengers at a train station. As well as tracking people with a criminal history, Cloud Walk’s technology is being used to monitor “high-risk” places such as hardware stores…(More)”

The DeepMind debacle demands dialogue on data


Hetan Shah in Nature: “Without public approval, advances in how we use data will stall. That is why a regulator’s ruling against the operator of three London hospitals is about more than mishandling records from 1.6 million patients. It is a missed opportunity to have a conversation with the public about appropriate uses for their data….

What can be done to address this deficit? Beyond meeting legal standards, all relevant institutions must take care to show themselves trustworthy in the eyes of the public. The lapses of the Royal Free hospitals and DeepMind provide, by omission, valuable lessons.

The first is to be open about what data are transferred. The extent of data transfer between the Royal Free and DeepMind came to light through investigative journalism. In my opinion, had the project proceeded under open contracting, it would have been subject to public scrutiny, and to questions about whether a company owned by Google — often accused of data monopoly — was best suited to create a relatively simple app.

The second lesson is that data transfer should be proportionate to the task. Information-sharing agreements should specify clear limits. It is unclear why an app for kidney injury requires the identifiable records of every patient seen by three hospitals over a five-year period.
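The proportionality principle above has a direct technical expression: before any transfer, strip each record down to the fields the sharing agreement explicitly permits, and replace direct identifiers with a project-specific pseudonym. The sketch below is illustrative; the field names and salt are hypothetical, not from the Royal Free agreement:

```python
import hashlib

# Fields the (hypothetical) information-sharing agreement permits.
PERMITTED_FIELDS = {"creatinine_level", "admission_date"}

def minimise(record, permitted=PERMITTED_FIELDS, salt="per-project-salt"):
    """Keep only permitted fields and pseudonymise the patient identifier."""
    shared = {k: v for k, v in record.items() if k in permitted}
    digest = hashlib.sha256((salt + record["patient_id"]).encode())
    shared["pseudonym"] = digest.hexdigest()[:12]
    return shared

record = {
    "patient_id": "NHS-0001",
    "name": "Jane Doe",             # never leaves the hospital
    "creatinine_level": 1.4,
    "admission_date": "2016-03-02",
    "postcode": "NW3 2QG",          # not needed for the task
}
print(minimise(record))
```

An agreement written this way also makes audits tractable: the permitted-field list is the contract, and anything outside it simply cannot flow.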

Finally, governance mechanisms must be strengthened. It is shocking to me that the Royal Free did not assess the privacy impact of its actions before handing over access to records. DeepMind does deserve credit for (belatedly) setting up an independent review panel for health-care projects, especially because the panel has a designated budget and has not required members to sign non-disclosure agreements. (The two groups also agreed a new contract late last year, after criticism.)

More is needed. The Information Commissioner asked the Royal Free to improve its processes but did not fine it or require it to rescind data. This rap on the knuckles is unlikely to deter future, potentially worse, misuses of data. People are aware of the potential for over-reach, from the US government’s demands for state voter records to the Chinese government’s alleged plans to create a ‘social credit’ system that would monitor private behaviour.

Innovations such as artificial intelligence, machine learning and the Internet of Things offer great opportunities, but will falter without a public consensus around the role of data. To develop this, all data collectors and crunchers must be open and transparent. Consider how public confidence in genetic modification was lost in Europe, and how that has set back progress.

Public dialogue can build trust through collaborative efforts. A 14-member Citizen’s Reference Panel on health technologies was convened in Ontario, Canada in 2009. The Engage2020 programme incorporates societal input in the Horizon2020 stream of European Union science funding….(More)”

From binoculars to big data: Citizen scientists use emerging technology in the wild


Interview by Rebecca Kondos: “For years, citizen scientists have trekked through local fields, rivers, and forests to observe, measure, and report on species and habitats with notebooks, binoculars, butterfly nets, and cameras in hand. It’s a slow process, and the gathered data isn’t easily shared. It’s a system that has worked to some degree, but one that’s in need of a technology and methodology overhaul.

Thanks to the team behind Wildme.org and their Wildbook software, both citizen and professional scientists are becoming active participants in using AI, computer vision, and big data. Wildbook is working to transform the data collection process, and citizen scientists who use the software have more transparency into conservation research and the impact it’s making. As a result, engagement levels have increased; scientists can more easily share their work; and, most important, endangered species like the whale shark benefit.

In this interview, Colin Kingen, a software engineer for Wildbook (with assistance from his colleagues Jason Holmberg and Jon Van Oast), discusses Wildbook’s work, explains classic problems in field observation science, and shares how Wildbook is working to solve some of the big problems that have plagued wildlife research. He also addresses something I’ve wondered about: why isn’t there an “uberdatabase” to share the work of scientists across all global efforts? The work Kingen and his team are doing exemplifies what can be accomplished when computer scientists with big hearts apply their talents to saving wildlife….(More)”.

AI, people, and society


Eric Horvitz at Science: “In an essay about his science fiction, Isaac Asimov reflected that “it became very common…to picture robots as dangerous devices that invariably destroyed their creators.” He rejected this view and formulated the “laws of robotics,” aimed at ensuring the safety and benevolence of robotic systems. Asimov’s stories about the relationship between people and robots were only a few years old when the phrase “artificial intelligence” (AI) was used for the first time in a 1955 proposal for a study on using computers to “…solve kinds of problems now reserved for humans.” Over the half-century since that study, AI has matured into subdisciplines that have yielded a constellation of methods that enable perception, learning, reasoning, and natural language understanding.

Growing exuberance about AI has come in the wake of surprising jumps in the accuracy of machine pattern recognition using methods referred to as “deep learning.” The advances have put new capabilities in the hands of consumers, including speech-to-speech translation and semi-autonomous driving. Yet, many hard challenges persist—and AI scientists remain mystified by numerous capabilities of human intellect.

Excitement about AI has been tempered by concerns about potential downsides. Some fear the rise of superintelligences and the loss of control of AI systems, echoing themes from age-old stories. Others have focused on nearer-term issues, highlighting potential adverse outcomes. For example, data-fueled classifiers used to guide high-stakes decisions in health care and criminal justice may be influenced by biases buried deep in data sets, leading to unfair and inaccurate inferences. Other imminent concerns include legal and ethical issues regarding decisions made by autonomous systems, difficulties with explaining inferences, threats to civil liberties through new forms of surveillance, precision manipulation aimed at persuasion, criminal uses of AI, destabilizing influences in military applications, and the potential to displace workers from jobs and to amplify inequities in wealth.

As we push AI science forward, it will be critical to address the influences of AI on people and society, on short- and long-term scales. Valuable assessments and guidance can be developed through focused studies, monitoring, and analysis. The broad reach of AI’s influences requires engagement with interdisciplinary groups, including computer scientists, social scientists, psychologists, economists, and lawyers. On longer-term issues, conversations are needed to bridge differences of opinion about the possibilities of superintelligence and malevolent AI. Promising directions include working to specify trajectories and outcomes, and engaging computer scientists and engineers with expertise in software verification, security, and principles of failsafe design….Asimov concludes in his essay, “I could not bring myself to believe that if knowledge presented danger, the solution was ignorance. To me, it always seemed that the solution had to be wisdom. You did not refuse to look at danger, rather you learned how to handle it safely.” Indeed, the path forward for AI should be guided by intellectual curiosity, care, and collaboration….(More)”

Bangalore Taps Tech Crowdsourcing to Fix ‘Unruly’ Gridlock


Saritha Rai at Bloomberg Technology: “In Bangalore, tech giants and startups typically spend their days fiercely battling each other for customers. Now they are turning their attention to a common enemy: the Indian city’s infernal traffic congestion.

Cross-town commutes that can take hours have inspired Gridlock Hackathon, a contest initiated by Flipkart Online Services Pvt. for technology workers to find solutions to the snarled roads that cost the economy billions of dollars. While the prize totals a mere $5,500, it’s attracting teams from global giants Microsoft Corp., Google and Amazon.com Inc. to local startups including Ola.

The online contest is crowdsourcing solutions for Bangalore, a city of more than 10 million, as it grapples with inadequate roads, unprecedented growth and overpopulation. The technology industry began booming decades ago and with its base of talent, it continues to attract companies. Just last month, Intel Corp. said it would invest $178 million and add more workers to expand its R&D operations.

The ideas put forward at the hackathon range from using artificial intelligence and big data on traffic flows to true moonshots, such as flying cars.

The gridlock remains a problem for a city dependent on its technology industry and seeking to attract new investment…(More)”.

Lessons from Airbnb and Uber to Open Government as a Platform


Interview by Marquis Cabrera with Sangeet Paul Choudary: “…Platform companies have a very strong core built around data, machine learning, and a central infrastructure. But they rapidly innovate around it to try and test new things in the market and that helps them open themselves for further innovation in the ecosystem. Governments can learn to become more modular and more agile, the way platform companies are. Modularity in architecture is a very fundamental part of being a platform company; both in terms of your organizational architecture, as well as your business model architecture.

The second thing that governments can learn from a platform company is that successful platform companies are created with intent. They are not created by just opening out what you have available. If you look at the current approach of applying platform thinking in government, a common approach is just to take data and open it out to the world. However, successful platform companies first create a shaping strategy that sets out a vision and direction for the ecosystem, defining what participants can achieve by being on the platform. They then provision the right tools and services that serve this vision and enable the ecosystem to succeed. And only then do they open up their infrastructure. It’s really important that you craft the right shaping strategy and use it to define the right tools and services before you start pursuing a platform implementation.

In my work with governments, I regularly find myself stressing the importance of thinking as a market maker rather than as a service provider. Governments have always been market makers but when it comes to technology, they often take the service provider approach.

In your book, you used San Francisco City Government and Data.gov as examples of infusing platform thinking in government. But what are some global examples of governments, countries infusing platform thinking around the world?

One of the best examples is from my home country Singapore, which has been at the forefront of converting the nation into a platform. It has now been pursuing platform strategy both overall as a nation by building a smart nation platform, and also within verticals. If you look particularly at mobility and transportation, it has worked to create a central core platform and then build greater autonomy around how mobility and transportation works in the country. Other good examples of governments applying this are Dubai, South Korea, Barcelona; they are all countries and cities that have applied the concept of platforms very well to create a smart nation platform. India is another example that is applying platform thinking with the creation of the India stack, though the implementation could benefit from better platform governance structures and a more open regulation around participation….(More)”.