Leveraging ‘big data’ analytics in the public sector


Pandula Gamage in Public Money & Management: “This article examines the opportunities presented by effectively harnessing big data in the public sector context. The article is exploratory and reviews both academic- and practitioner-oriented literature related to big data developments. The findings suggest that big data will have an impact on the future role of public sector organizations in functional areas. However, the author also reveals that there are challenges to be addressed by governments in adopting big data applications. To realize the benefits of big data, policy-makers need to: invest in research; create incentives for private and public sector entities to share data; and set up programmes to develop appropriate skills….(More)”

Is artificial intelligence key to dengue prevention?


BreakDengue: “Dengue fever outbreaks are increasing in both frequency and magnitude. Not only that, the number of countries that could potentially be affected by the disease is growing all the time.

This growth has led to renewed efforts to address the disease, and a pioneering Malaysian researcher was recently recognized for his efforts to harness the power of big data and artificial intelligence to accurately predict dengue outbreaks.

Dr. Dhesi Baha Raja received the Pistoia Alliance Life Science Award at King’s College London in April of this year for developing a disease prediction platform that employs technology and data to give people advance warning of disease outbreaks. The medical doctor and epidemiologist has spent years working to develop AIME (Artificial Intelligence in Medical Epidemiology)…

It relies on a complex algorithm that analyses a wide range of data collected by local government and satellite image-recognition systems. Over 20 variables, such as weather, wind speed, wind direction, thunderstorms, solar radiation and rainfall, are included and analyzed, along with population models and geographical terrain. The ultimate result of this intersection of epidemiology, public health and technology is a map that clearly illustrates the probability and location of the next dengue outbreak.
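As a rough illustration of this kind of pipeline, the sketch below trains a classifier on per-cell environmental features and scores a new grid cell; the predicted probability is what would be coloured onto a risk map. This is not AIME’s actual algorithm, and every feature name, value and coefficient is hypothetical.

```python
# A minimal sketch (not the actual AIME algorithm) of predicting outbreak
# risk per grid cell from environmental features. All data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: one row per 400m grid cell and month.
# Columns: rainfall (mm), mean temperature (C), wind speed (km/h),
# population density (people per sq km).
X = rng.random((500, 4)) * [300, 15, 40, 10000] + [0, 20, 0, 0]

# Hypothetical labels: 1 if an outbreak followed in the next month.
risk = 0.004 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(0, 0.3, 500)
y = (risk > 1.9).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new cell; the probability is what gets drawn on the risk map.
cell = [[250.0, 29.0, 12.0, 6500.0]]
print(f"Predicted outbreak probability: {model.predict_proba(cell)[0, 1]:.2f}")
```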

The ground-breaking platform can predict dengue fever outbreaks two to three months in advance, with accuracy approaching 88.7 per cent and to within a 400m radius. Dr. Dhesi has just returned from Rio de Janeiro, where the platform was employed in a bid to fight dengue ahead of this summer’s Olympics. In Brazil, its perceived accuracy was around 84 per cent, whereas in Malaysia it was over 88 per cent, giving it an average accuracy of 86.37 per cent.

The web-based application has been tested in two Malaysian states, Kuala Lumpur and Selangor, and the first-ever mobile app is due to be deployed across Malaysia soon. Once its capability has been adequately tested there, it will be rolled out globally. Dr. Dhesi’s team are working closely with mobile digital service provider Webe on this.

Making the app free to download will ensure the service is accessible to all, Dr. Dhesi explains.
“With the web-based application, this could only be used by public health officials and agencies. We recognized the need for us to democratize this health service to the community, and the only way to do this is to provide the community with the mobile app.”
This will also enable the gathering of even greater knowledge on the possibility of dengue outbreaks in high-risk areas, as well as monitoring the changing risks as people move to different areas, he adds….(More)”

Estonia Is Demonstrating How Government Should Work in a Digital World


Motherboard: “In May, Manu Sporny became the 10,000th “e-Resident” of Estonia. Sporny, the founder and CEO of a digital payments and identity company located in the United States, has never set foot in Estonia. However, he heard about the country’s e-Residency program and decided it would be an obvious choice for his company’s European headquarters.

People like Sporny are why Estonia launched a digital residency program in December 2014. The program allows anyone in the world to apply for a digital identity, which will let them: establish and run a location-independent business online, get easier access to EU markets, open a bank account and conduct e-banking, use international payment service providers, declare taxes, and sign all relevant documents and contracts remotely…

One of the most essential components of a functioning digital society is a secure digital identity. The state and the private sector need to know who is accessing these online services. Likewise, users need to feel secure that their identity is protected.

Estonia found the solution to this problem. In 2002, we started issuing residents a mandatory ID-card with a chip that empowers them to categorically identify themselves and verify legal transactions and documents through a digital signature. A digital signature has been legally equivalent to a handwritten one throughout the European Union—not just in Estonia—since 1999.
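The principle behind such chip-based signatures can be shown in a few lines of Python. The sketch below uses the open-source cryptography package with a software-generated key pair; on a real ID card the private key never leaves the card’s chip, and the exact scheme is set by national standards, so this is an illustration of the concept rather than Estonia’s implementation.

```python
# Illustration of digital signing and verification (not the Estonian
# ID-card scheme itself): the signer holds a private key, and anyone with
# the public key can verify a document is authentic and unaltered.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

private_key = ec.generate_private_key(ec.SECP256R1())  # held by the signer
public_key = private_key.public_key()                  # shared for verification

document = b"I agree to the terms of this contract."
signature = private_key.sign(document, ec.ECDSA(hashes.SHA256()))

try:
    public_key.verify(signature, document, ec.ECDSA(hashes.SHA256()))
    print("Signature valid: document is authentic and unaltered.")
except InvalidSignature:
    print("Signature invalid: document or signature was altered.")
```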

With this new digital identity system, the state could serve not only areas with a low population, but also the entire Estonian diaspora. Estonians anywhere in the world could maintain a connection to their homeland via e-services, contribute to the legislative process, and even participate in elections. Once the government realized that it could scale this service worldwide, it seemed logical to offer its e-services to those without physical residency in Estonia. This meant Estonia suddenly had value as a service, in addition to being a place to live.

What does “Country as a Service” mean?

With the rise of a global internet, we’ve seen more skilled workers and businesspeople offering their services across nations, regardless of their physical location. A survey by Intuit estimates that such independent workers will make up 40 percent of the workforce in the US alone by 2020.

These entrepreneurs and skilled artisans are ultimately looking for the simplest way to create and maintain a legal, global identity as an outlet for their global offerings.

They look to other countries not because they are seeking a tax haven, but because barriers imposed by their own governments have prevented them from incorporating and maintaining a business.

The most important thing for these entrepreneurs is that the creation and upkeep of the company is easy and hassle-free. It is also important that, despite being incorporated in a different nation, they remain honest taxpayers within their country of physical residence.

This is exactly what Estonia offers—a location-independent, hassle-free and fully-digital economic and financial environment where entrepreneurs can run their own company globally….

When an e-Resident establishes a company, it means that the company will likely start using the services offered by other Estonian companies (like creating a bank account, partnering with a payment service provider, seeking assistance from accountants, auditors and lawyers). As more clients are created for Estonian companies, their growth potential increases, along with the growth potential of the Estonian economy.

Eventually, there will be more residents outside borders than inside them

If states fail to redesign and simplify the machinery of bureaucracy and make it location-independent, there will be an opportunity for countries that can offer such services across borders.

Estonia has learned that it’s incredibly important for a small state to serve primarily small and micro businesses. To sustain a nation on this, we must automate and digitize processes to scale. Estonia’s model, for instance, is location-independent, making it simple to scale successfully. We hope to acquire at least 10 million digital residents (e-Residents) in a way that is mutually beneficial to Estonia and to the nation-states where these people are tax residents….(More)”

Open access: All human knowledge is there—so why can’t everybody access it?


Glyn Moody at Ars Technica: “In 1836, Anthony Panizzi, who later became principal librarian of the British Museum, gave evidence before a parliamentary select committee. At that time, he was only first assistant librarian, but even then he had an ambitious vision for what would one day become the British Library. He told the committee:

I want a poor student to have the same means of indulging his learned curiosity, of following his rational pursuits, of consulting the same authorities, of fathoming the most intricate inquiry as the richest man in the kingdom, as far as books go, and I contend that the government is bound to give him the most liberal and unlimited assistance in this respect.

He went some way to achieving that goal of providing general access to human knowledge. In 1856, after 20 years of labour as Keeper of Printed Books, he had helped boost the British Museum’s collection to over half a million books, making it the largest library in the world at the time. But there was a serious problem: to enjoy the benefits of those volumes, visitors needed to go to the British Museum in London.

Imagine, for a moment, if it were possible to provide access not just to those books, but to all knowledge for everyone, everywhere—the ultimate realisation of Panizzi’s dream. In fact, we don’t have to imagine: it is possible today, thanks to the combined technologies of digital texts and the Internet. The former means that we can make as many copies of a work as we want, for vanishingly small cost; the latter provides a way to provide those copies to anyone with an Internet connection. The global rise of low-cost smartphones means that group will soon include even the poorest members of society in every country.

That is to say, we have the technical means to share all knowledge, yet we are nowhere near providing everyone with the ability to indulge their learned curiosity as Panizzi envisioned.

What’s stopping us? That’s the central question that the “open access” movement has been asking, and trying to answer, for the last two decades. Although tremendous progress has been made, with more knowledge freely available now than ever before, there are signs that open access is at a critical point in its development, which could determine whether it will ever succeed in realising Panizzi’s plan.

Code and the City


Book edited by Rob Kitchin and Sung-Yueh Perng: “Software has become essential to the functioning of cities. It is deeply embedded in the systems and infrastructure of the built environment and entrenched in the management and governance of urban societies. Software-enabled technologies and services enhance the ways in which we understand and plan cities, and even affect how we manage urban services and utilities.

Code and the City explores the extent and depth of the ways in which software mediates how people work, consume, communicate, travel and play. The reach of these systems is set to become even more pervasive through efforts to create smart cities: cities that employ ICTs to underpin and drive their economy and governance. Yet, despite the roll-out of software-enabled systems across all aspects of city life, the relationship between code and the city has barely been explored from a critical social science perspective. This collection of essays seeks to fill that gap, offering an interdisciplinary examination of the relationship between software and contemporary urbanism.

This book will be of interest to those researching or studying smart cities and urban infrastructure….(More)”.

Democracy in Decline: Rebuilding its Future


Book by Philip Kotler: “An examination by the ‘father of modern marketing’ of how well a long-cherished product (democracy) is satisfying the needs of its consumers (citizens), opening a conversation and offering solutions on how we can all do our bit to bring about positive change.

At a time when voting systems are flawed, fewer people vote, major corporations fund campaigns and political parties battle it out, democracies are being seriously challenged—and with them the prospects of a better world for all.

Philip Kotler identifies 14 shortcomings of today’s democracy and proposes potential remedies whilst encouraging readers to join the conversation, exercise their free speech and get on top of the issues that affect their lives regardless of nationality or political persuasion.

An accompanying website (www.democracyindecline.com) invites those interested to help find and publish thoughtful articles that aid our understanding of what is happening and what can be done to improve democracies around the world….(More)”

Customers, Users or Citizens? Inclusion, Spatial Data and Governance in the Smart City


Paper by Linnet Taylor, Christine Richter, Shazade Jameson and Carmen Perez de Pulgar: “This report discusses the use and governance of spatial data in Amsterdam’s smart city projects. How much does spatial data tell the city about its people, and how is that likely to change in the next decade? The project focuses especially on those who may be marginalised or challenged by increasing visibility due to the use of big data in the future smart city: various groups were interviewed, including immigrants, children, sex workers, opt-outs of smart technologies, and technology developers. They were asked how they felt about their personal ‘data-sphere’, the level of data-awareness and the kind of consultation they would like to see as citizens of a smart city, and how they felt about increasing interaction between the city and private-sector partners around digital data. The report presents a social roadmap for the datafied city’s future, and addresses the question of how the city can build an inclusive and responsive spatial data governance infrastructure….(More)”

Digital Government: overcoming the systemic failure of transformation


Paul Waller and Vishanth Weerakkody: “This Working Paper contains propositions regarding the use of digital technology to “transform” government that significantly conflict with received wisdom in academia and governments across the world. It counters assertions made in countless political, official and commercial statements and reports produced over past decades….

The “transformation of government” has often been proposed as an objective of e-government, frequently presented as a phase in stage models following the provision of information and transactions online. Yet in the literature and in official documents there is no established definition of transformation as applied to government. Implicitly or explicitly, it mostly refers to a change in the organisational form of government, signalled by the terms “joining-up” or “integration”.

In some work, transformation is limited to changing processes or “services”—though “services” is a term unhelpfully applied to a multitude of entities. There is little evidence in academic or other literature of any type of “transformation” achieved beyond a change in an administrative process, nor a robust framework of the benefits one might deliver. This raises the questions of what transformation actually means in reality and why it might be a desired goal.

In essence, what we aim to do in this paper is to develop a structured frame of reference for making sense of how information and communications technologies (ICT), in all their forms, really fit within the world of government and public administration — exactly the challenge set by Professor Christopher Hood in his 2007 paper:

“But we need to have a way of assessing current developments in administrative technologies with those of other eras, such as development of telephones, cars, radios, and fingerprinting in police work in the early part of the twentieth century, or of exact methods of measurement on excise tax collection in the eighteenth century. And if the analysis of the changes such developments bring is to amount to anything more than a breathless tour d’horizon of the latest technological gizmos in public policy (much though governments themselves have a liking for that sort of approach), it needs to be related to some foundational analysis that is, in some way, technology-free and rooted in the nature of government as a social and legal phenomenon.”

After a brief historical review, the paper starts by considering what governments and public administrations actually do: specifically, policy design and implementation through policy instruments. It redefines transformation in terms of changing the policy instrument set chosen to implement policy and sets out broad rationales for how and why ICT can enable this. It proposes a frame of reference of terminology, concepts and objects that enable the examination of not only such transformation, but e-government in general as it has developed over two decades. …(More)”

Connect the corporate dots to see true transparency


Gillian Tett at the Financial Times: “…In all this, a crucial point is often forgotten: simply amassing data will not solve the problem of transparency. What is also needed is a way for analysts to track the connections that exist between companies scattered across different national jurisdictions.

There are more than 45,000 companies listed on global stock exchanges and, according to Chris Taggart of OpenCorporates, an independent data company, there are between 250m and 400m unlisted groups. Many of these are listed on national registries but, since registries are extremely fragmented, it is very difficult for shareholders or regulators to form a complete picture of company activity.

This fragmentation also creates financial stability risks. One reason why it is currently hard to track the scale of Chinese corporate debt, say, is that it is being issued by an opaque web of legal entities. Similarly, regulators struggled to cope with the fallout from the Lehman Brothers collapse in 2008 because the bank was operating almost 3,000 different legal entities around the world.

Is there a solution to this? A good place to start would be for governments to put their corporate registries online. Another crucial step would be for governments and companies to agree on a common standard for labelling legal entities, so that these can be tracked across borders.

Happily, work on that has begun: in 2014, the Global Legal Entity Identifier Foundation was created. It supports the implementation and use of “legal entity identifiers” (LEIs), a data standard that identifies participants in financial transactions. Groups such as the Data Coalition in Washington DC are lobbying for laws that would force companies to use LEIs…. However, this inter-governmental project is moving so slowly that the private sector may be a better bet. In recent years, companies such as Dun & Bradstreet have begun to amass proprietary information about complex corporate webs, and computer nerds are also starting to use the power of big data to join up the corporate dots in a public format.
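To make the standard concrete: an LEI is a 20-character alphanumeric code whose final two digits are check digits computed with the ISO 7064 MOD 97-10 scheme (the same family used for IBANs), so transcription errors are almost always caught. The sketch below validates and constructs such codes; the 18-character base in the example is made up, not a real registered entity.

```python
# Sketch of the LEI integrity check (ISO 7064 MOD 97-10). Letters map to
# numbers (A=10 ... Z=35); the resulting integer must equal 1 modulo 97.

def is_valid_lei(lei: str) -> bool:
    """Return True if a 20-character LEI passes the MOD 97-10 check."""
    lei = lei.strip().upper()
    if len(lei) != 20 or not lei.isalnum():
        return False
    digits = "".join(str(int(c, 36)) for c in lei)
    return int(digits) % 97 == 1

def lei_check_digits(base18: str) -> str:
    """Compute the two check digits for an 18-character LEI base."""
    digits = "".join(str(int(c, 36)) for c in base18.upper() + "00")
    return f"{98 - int(digits) % 97:02d}"

# Hypothetical 18-character base (not a real registered entity):
base = "529900ABCDEF12345X"
lei = base + lei_check_digits(base)
print(lei, is_valid_lei(lei))  # the constructed code validates by design
```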

OpenCorporates is a good example. Over the past five years, a dozen staff there have been painstakingly scraping national corporate registries to create a database designed to show how companies are connected around the world. This is far from complete but data from 100m entities have already been logged. And in the wake of the Panama Papers, more governments are coming on board — data from the Cayman Islands are currently being added and France is likely to collaborate soon.

Sadly, these moves will not deliver real transparency straight away. If you type “MIO” into the search box on the OpenCorporates website, you will not see a map of all of McKinsey’s activities — at least not yet.

The good news, however, is that with every data scrape, or use of an LEI, the picture of global corporate activity is becoming slightly less opaque thanks to the work of a hidden army of geeks. They deserve acclaim and support — even (or especially) from management consultants….(More)”

Three Things Great Data Storytellers Do Differently


Jake Porway at Stanford Social Innovation Review: “…At DataKind, we use data science and algorithms in the service of humanity, and we believe that communicating about our work using data for social impact is just as important as the work itself. There’s nothing worse than findings gathering dust in an unread report.

We also believe our projects should always start with a question. It’s clear from the questions above and others that the art of data storytelling needs some demystifying. But rather than answering each question individually, I’d like to pose a broader question that can help us get at some of the essentials: What do great data storytellers do differently and what can we learn from them?

1. They answer the most important question: So what?

Knowing how to compel your audience with data is more of an art than a science. Most people still have negative associations with numbers and statistics—unpleasant memories of boring math classes, intimidating technical concepts, or dry accounting. That’s a shame, because the message behind the numbers can be so enriching and enlightening.

The solution? Help your audience understand the “so what,” not the numbers. Ask: Why should someone care about your findings? How does this information impact them? My strong opinion is that most people actually don’t want to look at data. They need to trust that your methods are sound and that you’re reasoning from data, but ultimately they just want to know what it all means for them and what they should do next.

A great example of going straight to the “so what” is this beautiful, interactive visualization by Periscopic about gun deaths. It uses data sparingly but still evokes a very clear anti-gun message….

2. They inspire us to ask more questions.

The best data visualization helps people investigate a topic further, instead of drawing a conclusion for them or persuading them to believe something new.

For example, the nonprofit DC Action for Children was interested in leveraging publicly available data from government agencies and the US Census, as well as DC Action for Children’s own databases, to help policymakers, parents, and community members understand the conditions influencing child well-being in Washington, DC. We helped create a tool that could bring together data in a multitude of forms, and present it in a way that allowed people to delve into the topic themselves and uncover surprising truths, such as the fact that one out of every three kids in DC lives in a neighborhood without a grocery store….

3. They use rigorous analysis instead of just putting numbers on a page.

Data visualization isn’t an end goal; it’s a process. It’s often the final step in a long manufacturing chain, along which we poke, prod, and mold data to create that pretty graph.

Years ago, the New York City Department of Parks & Recreation (NYC Parks) approached us—armed with data about every single tree in the city, including when it was planted and how it was pruned—and wanted to know: Does pruning trees in one year reduce the number of hazardous tree conditions in the following year? This is one of the first things our volunteer data scientists came up with:

Visualization of NYC Parks department data showing tree density in New York City.

This is a visualization of tree density in New York—and it was met with oohs and aahs. It was interactive! You could see where different types of trees lived! It was engaging! But another finding that came out of this work arguably had a greater impact. Brian D’Alessandro, one of our volunteer data scientists, used statistical modeling to help NYC Parks calculate a number: 22 percent. It turns out that if you prune trees in New York, there are 22 percent fewer emergencies on those blocks than on the blocks where you didn’t prune. This number is helping the city become more effective by understanding how to best allocate its resources, and now other urban forestry programs are asking New York how they can do the same thing. There was no sexy visualization, no interactivity—just a rigorous statistical model of the world that’s shaping how cities protect their citizens….(More)”
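For readers curious what such an estimate might look like in code, here is a minimal sketch, not DataKind’s actual analysis: a Poisson regression on simulated block-level data, where the exponentiated pruning coefficient is a rate ratio (a value of about 0.78 corresponds to roughly 22 percent fewer emergencies).

```python
# A minimal sketch (not DataKind's actual model) of estimating the effect
# of pruning on next-year tree emergencies per block. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_blocks = 2000
pruned = rng.integers(0, 2, n_blocks)    # 1 if the block was pruned this year
trees = rng.poisson(40, n_blocks)        # exposure: number of trees per block

# Simulate ~22% fewer emergencies on pruned blocks (true rate ratio 0.78).
rate = 0.05 * np.where(pruned == 1, 0.78, 1.0)
emergencies = rng.poisson(rate * trees)

df = pd.DataFrame({"emergencies": emergencies, "pruned": pruned, "trees": trees})

# Poisson regression with tree count as exposure; exp(coef) is the rate ratio.
model = smf.glm("emergencies ~ pruned", data=df,
                family=sm.families.Poisson(), exposure=df["trees"]).fit()
print(f"Estimated rate ratio for pruning: {np.exp(model.params['pruned']):.2f}")
```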