Smart city initiatives in Africa


Eyerusalem Siba and Mariama Sow at Brookings: “…African countries are presently in the early stages of their urbanization process. Though Africa was the least urbanized region in the world in 2015—only 40 percent of sub-Saharan Africa’s population lived in cities—it is now the second-fastest urbanizing region in the world (behind Asia). Population experts predict that by 2020, Africa will be on top. Given this rapid growth, now is the time for African policymakers to incorporate smart cities into their urbanization strategies….

Rwanda is one of the pioneers of smart city engineering in Africa. Modernizing Kigali is part of a wider effort by the Rwandan government to increase and simplify access to public services. The Irembo platform, launched by the government, seeks to create e-government services that allow citizens to complete public processes online, such as registering for driving exams and requesting birth certificates.

In addition, the country is actively involving the private sector in its effort to create smart cities. In mid-May, the Rwandan government launched a partnership with Nokia and SRG to deploy smart city technology to “improve the lifestyle and social sustainability of [Rwandan] citizens.” The project involves investment in network connectivity and sensor deployment to improve public safety, waste management, utility management, and health care, among other functions.

Rwanda’s smart city rollout has not been perfect, though, showing that smart city development can hit snags. In 2016, for example, the city started rolling out buses with free Wi-Fi and cashless payment service, but the buses have had connectivity issues related to the Korean-built technology’s inability to adapt to local conditions.

In addition, there has been criticism around the lack of inclusivity of certain smart cities projects. Kigali’s Smart Neighborhood project, Vision City, creates a tech-enabled neighborhood with solar powered street lamps and free Wi-Fi in the town square. Critics, though, state that the project ignored the socioeconomic realities of a city where 80 percent of its population lives in slums with monthly earnings below $240 (Vision City Homes cost $160,000). (Rwandan planners have responded stating that affordable housing will be built in the later phases of the project.)

POLICY RECOMMENDATIONS

As seen in the case of Rwanda, smart cities—while creating opportunities for innovation and better livelihoods—face challenges during and after their development. City planners and policymakers must keep the big picture in mind when promoting smart cities, emphasizing well-implemented infrastructure and citizen needs. Technology for technology’s sake will not solve some of the biggest challenges facing Africa’s cities, including high-cost, low-quality, and inaccessible services. Indeed, in a 2015 issue paper, UN-Habitat urges city planners to avoid viewing smart cities as the final product. In particular, UN-Habitat calls for smart cities to minimize transport needs, reduce service delivery costs, and maximize land use. These moves, among others, will ensure that the city reduces congestion, creates spaces dedicated to recreational uses, enhances service delivery, and, thus, improves its citizens’ quality of life…(More)”.

Public Brainpower: Civil Society and Natural Resource Management


Book edited by Indra Øverland: “…examines how civil society, public debate and freedom of speech affect natural resource governance. Drawing on the theories of Robert Dahl, Jürgen Habermas and Robert Putnam, the book introduces the concept of ‘public brainpower’, proposing that good institutions require: fertile public debate involving many and varied contributors to provide a broad base for conceiving new institutions; checks and balances on existing institutions; and the continuous dynamic evolution of institutions as the needs of society change.

The book explores the strength of these ideas through case studies of 18 oil and gas-producing countries: Algeria, Angola, Azerbaijan, Canada, Colombia, Egypt, Iraq, Kazakhstan, Libya, Netherlands, Nigeria, Norway, Qatar, Russia, Saudi Arabia, UAE, UK and Venezuela. The concluding chapter includes 10 tenets on how states can maximize their public brainpower, and a ranking of 33 resource-rich countries by the degree to which they succeed in doing so.

The Introduction and the chapters ‘Norway: Public Debate and the Management of Petroleum Resources and Revenues’, ‘Kazakhstan: Civil Society and Natural-Resource Policy in Kazakhstan’, and ‘Russia: Public Debate and the Petroleum Sector’ of this book are available open access under a CC BY 4.0 license at link.springer.com….(More)”.

Spotting the Patterns: 2017 Trends in Design Thinking


Andy Hagerman at Stanford Social Innovation Review: “Design thinking: It started as an academic theory in the 1960s, a notion of starting to look at broader types of challenges with the intention and creativity that designers use to tackle their work. It gained widespread traction as a product design process, has been integrated into culture change initiatives of some of the world’s most important organizations and governments, and has been taught in schools from kindergarten to grad school. It’s been celebrated, criticized, merged with other methodologies, and modified for nearly every conceivable niche.

Regardless of what side of those perspectives you fall on, it’s undeniable that design thinking is continuing to grow and evolve. Looking across the social innovation landscape today, we see a few patterns that, taken together, suggest that social innovators continue to see great promise in design thinking. They are working to find ways to make it yield real performance gains for their organizations and clients.

From design thinking to design doing

Creative leaders have moved beyond increasing people’s awareness of design thinking to actively seeking concrete opportunities for using it. One of the principal drivers of this shift has been the need to demonstrate value and return on investment from design-thinking initiatives—something people have talked about for years. (Ever heard the question, “Is design thinking just the next fad?”) Social sector organizations, in particular, stand to benefit from the shift from design thinking to design doing. Timelines for getting things built in the social sector are often slow, due to legitimate constraints of responsibly doing impact work, as well as to legacy practices and politics. As long as organizations use design thinking responsibly and acknowledge the broader systems in which new ideas live, some of the emerging models can help them move projects along more quickly and gain greater stakeholder participation….

Building cultures around design thinking

As design thinking has proliferated, many organizational leaders have moved from replicating the design thinking programs of academic institutions like the Stanford d.school or foundational agencies like IDEO to adapting the methodology to their own goals, external environments, and organizational cultures.

One organization that has particularly inspired us is Beespace, a New York City-based social-impact foundation. Beespace has designed a two-year program that helps new organizations not only get off the ground, but also create the conditions for breakthrough innovation. To create this program, which combines deep thinking, impact assessment, and rapid prototyping, Beespace’s leadership asked itself what tools it would need, and came up with a mix that included not just design thinking, but also disciplines of behavioral science and systems thinking, and tools stemming from emotional intelligence and theory of change….

Empowering the few to shift the many

We have seen a lot of interest this year in “train the trainer” programs, particularly from organizations realizing the value of developing their internal capabilities to reduce reliance on outside consultants. Such development often entails focusing on the few people in the organization who are highly capable of instigating major change, as opposed to spreading awareness among the many. It takes time and resources, but the payoff is well worth it from both cultural and operational perspectives….(More)”.

Data-driven reporting: An on-going (r)evolution?


Paper: “Data-driven journalism can be considered as journalism’s response to the datafication of society. To better understand the key components and development of this still young and fast evolving genre, we investigate what the field itself defines as its ‘gold-standard’: projects that were nominated for the Data Journalism Awards from 2013 to 2016 (n = 225). Using a content analysis, we examine, among other aspects, the data sources and types, visualisations, interactive features, topics and producers. Our results demonstrate, for instance, only a few consistent developments over the years and a predominance of political pieces, of projects by newspapers and by investigative journalism organisations, of public data from official institutions as well as a glut of simple visualisations, which in sum echoes a range of general tendencies in data journalism. On the basis of our findings, we evaluate data-driven journalism’s potential for improvement with regard to journalism’s societal functions….(More)”.

Growing the artificial intelligence industry in the UK


Summary from an independent review, carried out by Professor Dame Wendy Hall and Jérôme Pesenti: “Increased use of Artificial Intelligence (AI) can bring major social and economic benefits to the UK. With AI, computers can analyse and learn from information at higher accuracy and speed than humans can. AI offers massive gains in efficiency and performance to most or all industry sectors, from drug discovery to logistics. AI is software that can be integrated into existing processes, improving them, scaling them, and reducing their costs, by making or suggesting more accurate decisions through better use of information.

It has been estimated that AI could add an additional $814 billion (£630 billion) to the UK economy by 2035, increasing the annual growth rate of GVA from 2.5 percent to 3.9 percent.

Our vision is for the UK to become the best place in the world for businesses developing and deploying AI to start, grow and thrive, to realise all the benefits the technology offers….

Key factors have combined to increase the capability of AI in recent years, in particular:

  • New and larger volumes of data
  • Supply of experts with specific high-level skills
  • Availability of increasingly powerful computing capacity

The barriers to achieving performance have fallen significantly, and continue to fall.

To continue developing and applying AI, the UK will need to increase ease of access to data in a wider range of sectors. This Review recommends:

  • Development of data trusts, to improve trust and ease around sharing data
  • Making more research data machine readable
  • Supporting text and data mining as a standard and essential tool for research.

Skilled experts are needed to develop AI, and they are in short supply. To develop more AI, the UK will need a larger workforce with deep AI expertise, and more development of lower-level skills to work with AI. …

Increasing uptake of AI means increasing demand as well as supply through a better understanding of what AI can do and where it could be applied. This review recommends:

  • An AI Council to promote growth and coordination in the sector
  • Guidance on how to explain decisions and processes enabled by AI
  • Support for export and inward investment
  • Guidance on successfully applying AI to drive improvements in industry
  • A programme to support public sector use of AI
  • Funded challenges around data held by public organisations.

Our work has indicated that action in these areas could deliver a step-change improvement in growth of UK AI. This report makes the 18 recommendations listed in full below, which describe how Government, industry and academia should work together to keep the UK among the world leaders in AI…(More)”

Growing government innovation labs: an insider’s guide


Report by UNDP and Futurgov: “Effective and inspirational labs exist in many highly developed countries. In Western Europe, MindLab (Denmark) and The Behavioural Insights Team (UK) push their governments to re-imagine public services. In Asia, the Innovation Bureau in Seoul, South Korea, co-designs better services with citizens.

However, this guide is aimed at those working in the development context. The authors believe their collective experience of running labs in Eurasia, Asia and the Middle East is directly transferable to other regions that face similar challenges, for example, moving from poverty to inequality, or from a recent history of democratisation towards more open government.

This report does not offer a “how-to” of innovation techniques — there are plenty of guides out there. Instead, we give the real story of how government innovation labs develop in regions like ours: organic and people-driven, often operating under the radar until safe to emerge. We share a truthful examination of the twists and turns of seeding, starting up and scaling labs, covering the challenges we faced and our failures, as much as our successes. …(More)”.

Linux Foundation Debuts Community Data License Agreement


Press Release: “The Linux Foundation, the nonprofit advancing professional open source management for mass collaboration, today announced the Community Data License Agreement (CDLA) family of open data agreements. In an era of expansive and often underused data, the CDLA licenses are an effort to define a licensing framework to support collaborative communities built around curating and sharing “open” data.

Inspired by the collaborative software development models of open source software, the CDLA licenses are designed to enable individuals and organizations of all types to share data as easily as they currently share open source software code. Soundly drafted licensing models can help people form communities to assemble, curate and maintain vast amounts of data, measured in petabytes and exabytes, to bring new value to communities of all types, to build new business opportunities and to power new applications that promise to enhance safety and services.

The growth of big data analytics, machine learning and artificial intelligence (AI) technologies has allowed people to extract unprecedented levels of insight from data. Now the challenge is to assemble the critical mass of data for those tools to analyze. The CDLA licenses are designed to help governments, academic institutions, businesses and other organizations open up and share data, with the goal of creating communities that curate and share data openly.

For instance, if automakers, suppliers and civil infrastructure services can share data, they may be able to improve safety, decrease energy consumption and improve predictive maintenance. Self-driving cars are heavily dependent on AI systems for navigation, and need massive volumes of data to function properly. Once on the road, they can generate nearly a gigabyte of data every second. For the average car, that means two petabytes of sensor, audio, video and other data each year.
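A quick sanity check shows how the press release's two figures fit together, under the assumption (not stated in the source) that the roughly 1 GB/s rate applies only while the car is actually driving:

```python
# Rough sanity check of the stated data volumes: ~1 GB/s while driving
# versus ~2 PB per car per year. All constants are assumptions for
# illustration; decimal units are used (1 PB = 1,000,000 GB).

GB_PER_SECOND = 1            # sensor, audio, and video data rate while driving
PETABYTE_IN_GB = 1_000_000   # 1 PB in GB, decimal convention

yearly_total_pb = 2
seconds_driving = yearly_total_pb * PETABYTE_IN_GB / GB_PER_SECOND
hours_driving = seconds_driving / 3600
hours_per_day = hours_driving / 365

print(f"{hours_driving:.0f} driving hours/year ≈ {hours_per_day:.1f} h/day")
# → 556 driving hours/year ≈ 1.5 h/day
```

The two figures are mutually consistent if the average car drives about an hour and a half per day, which is a plausible implicit assumption.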

Similarly, climate modeling can integrate measurements captured by government agencies with simulation data from other organizations and then use machine learning systems to look for patterns in the information. It’s estimated that a single model can yield a petabyte of data, a volume that challenges standard computer algorithms, but is useful for machine learning systems. This knowledge may help improve agriculture or aid in studying extreme weather patterns.

And if government agencies share aggregated data on building permits, school enrollment figures, sewer and water usage, their citizens benefit from the ability of commercial entities to anticipate their future needs and respond with infrastructure and facilities that arrive in anticipation of citizens’ demands.

“An open data license is essential for the frictionless sharing of the data that powers both critical technologies and societal benefits,” said Jim Zemlin, Executive Director of The Linux Foundation. “The success of open source software provides a powerful example of what can be accomplished when people come together around a resource and advance it for the common good. The CDLA licenses are a key step in that direction and will encourage the continued growth of applications and infrastructure.”…(More)”.

A Brief History of Living Labs: From Scattered Initiatives to Global Movement


Paper by Seppo Leminen, Veli-Pekka Niitamo, and Mika Westerlund presented at the Open Living Labs Day Conference: “This paper analyses the emergence of living labs based on a literature review and interviews with early living labs experts. Our study makes a contribution to the growing literature of living labs by analysing the emergence of living labs from the perspectives of (i) early living lab pioneers, (ii) early living lab activities in Europe and especially Nokia Corporation, (iii) framework programs of the European Union supporting the development of living labs, (iv) emergence of national living lab networks, and (v) emergence of the European Network of Living Labs (ENoLL). Moreover, the paper highlights major events in the emergence of living lab movement and labels three consecutive phases of the global living lab movement as (i) toward a new paradigm, (ii) practical experiences, and (iii) professional living labs….(More)”.

Open Space: The Global Effort for Open Access to Environmental Satellite Data


Book by Mariel Borowitz: “Key to understanding and addressing climate change is continuous and precise monitoring of environmental conditions. Satellites play an important role in collecting climate data, offering comprehensive global coverage that can’t be matched by in situ observation. And yet, as Mariel Borowitz shows in this book, much satellite data is not freely available but restricted; this remains true despite the data-sharing advocacy of international organizations and a global open data movement. Borowitz examines policies governing the sharing of environmental satellite data, offering a model of data-sharing policy development and applying it in case studies from the United States, Europe, and Japan—countries responsible for nearly half of the unclassified government Earth observation satellites.

Borowitz develops a model that centers on the government agency as the primary actor while taking into account the roles of such outside actors as other government officials and non-governmental actors, as well as the economic, security, and normative attributes of the data itself. The case studies include the U.S. National Aeronautics and Space Administration (NASA), the U.S. National Oceanic and Atmospheric Administration (NOAA), and the United States Geological Survey (USGS); the European Space Agency (ESA) and the European Organization for the Exploitation of Meteorological Satellites (EUMETSAT); and the Japanese Aerospace Exploration Agency (JAXA) and the Japanese Meteorological Agency (JMA). Finally, she considers the policy implications of her findings for the future and provides recommendations on how to increase global sharing of satellite data….(More)”.

The Unexamined Algorithm Is Not Worth Using


Ruben Mancha & Haslina Ali at Stanford Social Innovation Review: “In 1983, at the height of the Cold War, just one man stood between an algorithm and the outbreak of nuclear war. Stanislav Petrov, a colonel of the Soviet Air Defence Forces, was on duty in a secret command center when early-warning alarms went off indicating the launch of intercontinental ballistic missiles from an American base. The systems reported that the alarm was of the highest possible reliability. Petrov’s role was to advise his superiors on the veracity of the alarm that, in turn, would affect their decision to launch a retaliatory nuclear attack. Instead of trusting the algorithm, Petrov went with his gut and reported that the alarm was a malfunction. He turned out to be right.

This historical nugget represents an extreme example of the effect that algorithms have on our lives. The detection algorithm, it turns out, mistook the sun’s reflection for a missile launch. It is a sobering thought that a poorly designed or malfunctioning algorithm could have changed the course of history and resulted in millions of deaths….

We offer five recommendations to guide the ethical development and evaluation of algorithms used in your organization:

  1. Consider ethical outcomes first, speed and efficiency second. Organizations seeking speed and efficiency through algorithmic automation should remember that customer value comes through higher strategic speed, not higher operational speed. When implementing algorithms, organizations should never forget their ultimate goal is creating customer value, and fast yet potentially unethical algorithms defeat that objective.
  2. Make ethical guiding principles salient to your organization. Your organization should reflect on the ethical principles guiding it and convey them clearly to employees, business partners, and customers. A corporate social responsibility framework is a good starting point for any organization ready to articulate its ethical principles.
  3. Employ programmers well versed in ethics. The computer engineers responsible for designing and programming algorithms should understand the ethical implications of the products of their work. While some ethical decisions may seem intuitive (such as do not use an algorithm to steal data from a user’s computer), most are not. The study of ethics and the practice of ethical inquiry should be part of every coding project.
  4. Interrogate your algorithms against your organization’s ethical standards. Through careful evaluation of your algorithms’ behavior and outcomes, your organization can identify those circumstances, real or simulated, in which they do not meet its ethical standards.
  5. Engage your stakeholders. Transparently share with your customers, employees, and business partners details about the processes and outcomes of your algorithms. Stakeholders can help you identify and address ethical gaps….(More)”.
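As a purely illustrative sketch of what recommendation 4 might look like in practice, the check below audits an algorithm's decision log for disparate outcomes across groups. The group labels, the simulated log, and the 80 percent threshold (echoing the common "four-fifths" rule of thumb for adverse impact) are all hypothetical choices, not anything prescribed by the authors:

```python
# Hypothetical audit of an algorithm's outcomes against an ethical standard:
# compare approval rates across groups and flag large disparities.
from collections import Counter

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = Counter(), Counter()
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def passes_four_fifths(rates):
    """Flag the algorithm if any group's rate falls below 80% of the best rate."""
    best = max(rates.values())
    return all(rate >= 0.8 * best for rate in rates.values())

# Simulated decision log of (group, approved?) outcomes — made-up data
log = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
       ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

rates = approval_rates(log)        # {"A": 0.75, "B": 0.25}
print(rates, passes_four_fifths(rates))  # group B falls well below the threshold
```

A real interrogation would go further than a single aggregate metric, but even a minimal check like this makes an ethical standard concrete enough to test against.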