Harnessing Science, Technology and Innovation to achieve the Sustainable Development Goals


Featured innovations for the second STI Forum: “…The theme of the 2017 High-level Political Forum on Sustainable Development (HLPF) is ‘Eradicating poverty and promoting prosperity in a changing world’, and the Member States have decided that the HLPF 2017 shall focus on six SDGs (1, 2, 3, 5, 9 and 14) in addition to SDG 17, which will be considered at each HLPF. In this context, the following topic may be considered for the STI Forum 2017: ‘Science, Technology and Innovation for a Changing World – Focus on SDGs 1, 2, 3, 5, 9, and 14’….

The second Call for Innovations was launched to share innovations that provide solutions targeted at these six SDGs. Innovators from around the world were invited to submit their scientific and technological solutions to the challenges posed by the six SDGs. The Call for Innovations is now closed. More than 110 inspiring innovations from around the globe were submitted through the Global Innovations Exchange platform. The following outstanding innovators were selected to attend the STI Forum 2017 at UNHQ and showcase their solutions:

Blockchain 2.0: How it could overhaul the fabric of democracy and identity


Colm Gorey at SiliconRepublic: “…not all blockchain technologies need to be about making money. A recent report issued by the European Commission discussed the possible ways it could change people’s lives….
While many democratic nations still prefer a traditional paper ballot system to an electronic voting system over fears that digital votes could be tampered with, new technologies are starting to change that opinion.
One suggestion is blockchain-enabled e-voting (BEV), which would take control from a central authority and put it back in the hands of the voter.
Because each person’s vote would be timestamped and cryptographically linked to the record of their last vote, an illegitimate vote would be easier to spot, whether by the digital system itself or by members of digitally savvy communities.
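To make the idea concrete, here is a minimal sketch (assuming a simple hash-chained ledger; the field names and structure are illustrative and do not describe any particular BEV system) of how timestamped, chained vote records make tampering and duplicate votes detectable:

```python
import hashlib
import json
import time

def hash_record(record: dict) -> str:
    """Deterministic SHA-256 hash of a vote record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class VoteLedger:
    """Toy append-only ledger: each vote is timestamped and chained to the
    hash of the previous record, so edits or duplicate voter IDs stand out."""

    def __init__(self):
        self.chain = []

    def cast_vote(self, voter_id: str, choice: str) -> dict:
        prev_hash = self.chain[-1]["hash"] if self.chain else "0" * 64
        record = {
            "voter_id": voter_id,        # in practice this would be pseudonymous
            "choice": choice,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        record["hash"] = hash_record(record)
        self.chain.append(record)
        return record

    def verify(self) -> bool:
        """Recompute hashes, check the chaining, and flag duplicate voters."""
        seen = set()
        prev_hash = "0" * 64
        for rec in self.chain:
            body = {k: v for k, v in rec.items() if k != "hash"}
            if rec["prev_hash"] != prev_hash or rec["hash"] != hash_record(body):
                return False   # a record was altered or re-ordered
            if rec["voter_id"] in seen:
                return False   # duplicate (illegitimate) vote
            seen.add(rec["voter_id"])
            prev_hash = rec["hash"]
        return True

ledger = VoteLedger()
ledger.cast_vote("voter-001", "candidate A")
ledger.cast_vote("voter-002", "candidate B")
print(ledger.verify())  # True while the chain is intact and voters are unique
```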
Despite still being a fledgling technology, BEV has already been put to work at the local level of politics in Europe, for example in the internal elections of political parties in Denmark.
But perhaps at this early stage, its actual use in governmental elections at a national level will remain limited, depending on “the extent to which it can reflect the values and structure of society, politics and democracy”, according to the EU…. Blockchain has also been offered as an answer to sustaining public services, particularly by making it transparent where people’s taxes are going.
One governmental concept could allow blockchain to form the basis for a secure method of distributing social welfare or other state payments, without the need for divisions running expensive and time-consuming fraud investigations.
Irish start-up Aid:Tech is one notable example: it is working with Serbia to do just that, alongside its efforts to use blockchain to create a transparent system for distributing aid evenly in countries such as Syria.
Bank of Ireland’s innovation manager, Stephen Moran, is certainly of the opinion that blockchain in the area of identity offers greater revolutionary change than BEV.
“By identity, that can cover everything from educational records, but can also cover the idea of a national identity card,” he said in conversation with Siliconrepublic.com….
But perhaps the wildest idea within blockchain – and one that is somewhat connected to governance – is that, through an amalgamation of smart contracts, it could effectively run itself as an artificially intelligent being.
Known as decentralised autonomous organisations (DAOs), these are, in effect, entities that can run a business or any operation autonomously, allocating tasks or distributing micropayments instantly to users….
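Real DAOs run as smart contracts on platforms such as Ethereum, but the rule-based behaviour described above can be sketched conceptually; the treasury, voting rule, and payout logic below are invented purely for illustration:

```python
class ToyDAO:
    """Conceptual model of a DAO: a treasury governed only by coded rules."""

    def __init__(self, treasury: float, approval_threshold: float = 0.5):
        self.treasury = treasury
        self.approval_threshold = approval_threshold
        self.proposals = []  # each: {"payee", "amount", "votes_for", "votes_total"}

    def propose(self, payee: str, amount: float) -> int:
        self.proposals.append({"payee": payee, "amount": amount,
                               "votes_for": 0, "votes_total": 0})
        return len(self.proposals) - 1

    def vote(self, proposal_id: int, approve: bool) -> None:
        p = self.proposals[proposal_id]
        p["votes_total"] += 1
        p["votes_for"] += int(approve)

    def execute(self, proposal_id: int) -> bool:
        """Pay out automatically if the coded approval rule is satisfied."""
        p = self.proposals[proposal_id]
        approved = (p["votes_total"] > 0 and
                    p["votes_for"] / p["votes_total"] > self.approval_threshold)
        if approved and self.treasury >= p["amount"]:
            self.treasury -= p["amount"]   # instant micropayment to the payee
            return True
        return False

dao = ToyDAO(treasury=1000.0)
pid = dao.propose("contributor-42", 25.0)
dao.vote(pid, True)
dao.vote(pid, True)
dao.vote(pid, False)
print(dao.execute(pid), dao.treasury)  # True 975.0
```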
An example similar to the DAO already exists: a crowdsourced blockchain online organisation run entirely on the open-source platform Ethereum.
Last year, through the sheer will of its users, it was able to crowdfund the largest sum ever – $100m – through smart contracts alone.
If it appears confusing and unyielding, then you are not alone.
However, as was simply summed up by writer Leda Glyptis, blockchain is a force to be reckoned with, but it will be so subtle that you won’t even notice….(More)”.

Scientists crowdsource autism data to learn where resource gaps exist


SCOPE: “How common is autism? Since 2000, the U.S. Centers for Disease Control and Prevention has revised its estimate several times, with the numbers ticking steadily upward. But the most recent figure of 1 in 68 kids affected is based on data from only 11 states. It gives no indication of where people with autism live around the country, or whether their communities have the resources to treat them.
That’s a knowledge gap Stanford biomedical data scientist Dennis Wall, PhD, wants to fill — not just in the United States but also around the world. A new paper, published online in JMIR Public Health & Surveillance, explains how Wall and his team created GapMap, an interactive website designed to crowdsource the missing autism data. They’re now inviting people and families affected by autism to contribute to the database….
The pilot phase of the research, which is described in the new paper, estimated that the average distance from an individual in the U.S. to the nearest autism diagnostic center is 50 miles, while those with an autism diagnosis live an average of 20 miles from the nearest diagnostic center. The researchers think this may reflect lower rates of diagnosis among people in rural areas….Data submitted to GapMap will be stored in a secure, HIPAA-compliant database. In addition to showing where more autism treatment resources are needed, the researchers hope the project will help build communities of families affected by autism and will inform them of treatment options nearby. Families will also have the option of participating in future autism research, and the scientists plan to add more features, including the locations of environmental factors such as local pollution, to understand if they contribute to autism…(More)”
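The distance estimate at the heart of that finding can be illustrated with a short sketch: for each individual, compute the distance to the nearest diagnostic center and average the results. The haversine helper and coordinates below are assumptions for illustration, not GapMap’s actual code or data:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3958.8 * 2 * asin(sqrt(a))   # Earth radius ~3958.8 miles

def mean_distance_to_nearest_center(people, centers):
    """Average distance from each person to their closest diagnostic center."""
    nearest = [
        min(haversine_miles(plat, plon, clat, clon) for clat, clon in centers)
        for plat, plon in people
    ]
    return sum(nearest) / len(nearest)

# Hypothetical coordinates (latitude, longitude)
people = [(37.77, -122.42), (36.16, -115.15), (40.76, -111.89)]
centers = [(37.43, -122.17), (34.05, -118.24)]
print(round(mean_distance_to_nearest_center(people, centers), 1))
```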

SeeClickFix Empowers Citizens by Connecting Them to Their Local Governments


Paper by Ben Berkowitz and Jean-Paul Gagnon in Democratic Theory: “SeeClickFix began in 2009 when founder and present CEO Ben Berkowitz spotted a piece of graffiti in his New Haven, Connecticut, neighborhood. After calling numerous departments at city hall in a bid to have the graffiti removed, Berkowitz felt no closer to fixing the problem. Confused and frustrated, his emotions resonated with what many citizens in real-existing democracies feel today (Manning 2015): we see problems in public and want to fix them but can’t. This all-too-habitual inability for “common people” to fix problems they have to live with on a day-to-day basis is a prelude to the irascible citizen (White 2012), which, according to certain scholars (e.g., Dean 1960; Lee 2009), is itself a prelude to political apathy and a citizen’s alienation from specific political institutions….(More)”

Artificial intelligence prevails at predicting Supreme Court decisions


Matthew Hutson at Science: “See you in the Supreme Court!” President Donald Trump tweeted last week, responding to lower court holds on his national security policies. But is taking cases all the way to the highest court in the land a good idea? Artificial intelligence may soon have the answer. A new study shows that computers can do a better job than legal scholars at predicting Supreme Court decisions, even with less information.

Several other studies have guessed at justices’ behavior with algorithms. A 2011 project, for example, used the votes of any eight justices from 1953 to 2004 to predict the vote of the ninth in those same cases, with 83% accuracy. A 2004 paper tried seeing into the future, by using decisions from the nine justices who’d been on the court since 1994 to predict the outcomes of cases in the 2002 term. That method had an accuracy of 75%.

The new study draws on a much richer set of data to predict the behavior of any set of justices at any time. Researchers used the Supreme Court Database, which contains information on cases dating back to 1791, to build a general algorithm for predicting any justice’s vote at any time. They drew on 16 features of each vote, including the justice, the term, the issue, and the court of origin. Researchers also added other factors, such as whether oral arguments were heard….
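A rough sketch of how such a feature-based classifier might be trained appears below; the toy data, column names, and scikit-learn random forest are stand-ins for illustration, not the authors’ actual pipeline:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Hypothetical subset of per-vote features (the real study drew on 16 features
# from the Supreme Court Database, e.g. justice, term, issue, court of origin).
votes = pd.DataFrame({
    "justice":      ["J1", "J2", "J1", "J3", "J2", "J3"],
    "term":         [2001, 2001, 2002, 2002, 2003, 2003],
    "issue_area":   ["civil_rights", "economic", "criminal",
                     "civil_rights", "economic", "criminal"],
    "lower_court":  ["9th", "5th", "2nd", "9th", "5th", "2nd"],
    "vote_reverse": [1, 0, 1, 1, 0, 1],   # 1 = vote to reverse the lower court
})

X = pd.get_dummies(votes.drop(columns="vote_reverse"))  # one-hot encode categoricals
y = votes["vote_reverse"]

# Train on earlier terms, evaluate on the latest term: an out-of-time split,
# mirroring the "predict the future from the past" setup described above.
train = votes["term"] < 2003
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X[train], y[train])
print(accuracy_score(y[~train], model.predict(X[~train])))
```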

From 1816 until 2015, the algorithm correctly predicted 70.2% of the court’s 28,000 decisions and 71.9% of the justices’ 240,000 votes, the authors report in PLOS ONE. That bests the popular betting strategy of “always guess reverse,” which has been the outcome in 63% of Supreme Court cases over the last 35 terms. It’s also better than another strategy that uses rulings from the previous 10 years to automatically go with a “reverse” or an “affirm” prediction. Even knowledgeable legal experts are only about 66% accurate at predicting cases, the 2004 study found. “Every time we’ve kept score, it hasn’t been a terribly pretty picture for humans,” says the study’s lead author, Daniel Katz, a law professor at Illinois Institute of Technology in Chicago….Outside the lab, bankers and lawyers might put the new algorithm to practical use. Investors could bet on companies that might benefit from a likely ruling. And appellants could decide whether to take a case to the Supreme Court based on their chances of winning. “The lawyers who typically argue these cases are not exactly bargain basement priced,” Katz says….(More)”.

The Nudge Wars: A Glimpse into the Modern Socialist Calculation Debate


Paper by Abigail Devereaux: “Nudge theory, the preferences-neutral subset of modern behavioral economic policy, is premised on irrational decision-making at the level of the individual agent. We demonstrate how Hayek’s epistemological argument, developed primarily during the socialist calculation debate in response to claims made by fellow economists in favor of central planning, can be extended to show how nudge theory requires social architects to have access to fundamentally unascertainable implicit and local knowledge. We draw parallels between the socialist calculation debate and nudge theoretical arguments throughout, particularly the “libertarian socialism” of H. D. Dickinson and the “libertarian paternalism” of Cass Sunstein and Richard Thaler. We discuss the theory of creative and computable economics in order to demonstrate how nudges are provably not preferences-neutral, as even in a state of theoretically perfect information about current preferences, policy-makers cannot access information about how preferences may change in the future. We conclude by noting that making it cheaper to engage in some methods of decision-making is analogous to subsidizing some goods. Therefore, the practical consequences of implementing nudge theory could erode the ability of individuals to make good decisions by destroying the kinds of knowledge-encoding institutions that endogenously emerge to assist agent decision-making….(More)”

Solving a Global Digital Identity Crisis


Seth Berkley at MIT Technology Review: “In developing countries, one in three children under age five has no record of their existence. Technology can help….Digital identities have become an integral part of modern life, but things like e-passports, digital health records, or Apple Pay really only provide faster, easier, or sometimes smarter ways of accessing services that are already available.

In developing countries it’s a different story. There, digital ID technology can have a profound impact on people’s lives by enabling them to access vital and often life-saving services for the very first time….The challenge is that in poor countries, an increasing number of people live under the radar, invisible to the often archaic, paper-based methods used to certify births, deaths, and marriages. One in three children under age five does not officially exist because their birth wasn’t registered. Even when it is, many don’t have proof in the form of birth certificates. This can have a lasting impact on children’s lives, leaving them vulnerable to neglect and abuse.

In light of this, it is difficult to see how we will meet the SDG16 deadline without a radical solution. What we need are new and affordable digital ID technologies capable of working in poorly resourced settings—for example, where there is no reliable electricity—and yet able to leapfrog current approaches to reach everyone, whether they’re living in remote villages or urban slums.

Such technologies are already emerging as part of efforts to increase global childhood vaccination coverage, with small-scale trials across Africa and Asia. With 86 percent of infants now having access to routine immunization—where they receive all three doses of a diphtheria-pertussis-tetanus vaccine—there are obvious advantages to building on an existing system with such a broad reach.

These systems were designed to help the World Health Organization, UNICEF, and my organization, Gavi, the Vaccine Alliance, close the gap on the one in seven infants still missing out. But they can also be used to help us achieve SDG16.

One, called MyChild, helps countries transition from paper to digital. At first glance it looks like a typical paper booklet on which workers can record health-record details about the child, such as vaccinations, deworming, or nutritional supplements. But each booklet contains a unique identification number and tear-out slips that are collected and scanned later. This means that even if a child’s birth hasn’t been registered, a unique digital record will follow them through childhood. Developed by Swedish startup Shifo, this system has been used to register more than 95,000 infants in Uganda, Afghanistan, and the Gambia, enabling health workers to follow up either in person or using text reminders to parents.

Another system, called Khushi Baby, is entirely paperless and involves giving each child a digital necklace that contains a unique ID number on a near-field communication chip. This can be scanned by community health workers using a cell phone, enabling them to update a child’s digital health records even in remote areas with no cell coverage. Trials in the Indian state of Rajasthan have been carried out across 100 villages to track more than 15,000 vaccination events. An organization called ID2020 is exploring the use of blockchain technology to create access to a unique identity for those who currently lack one….(More)”
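Both systems rest on the same general pattern: a unique child-level ID that keys an append-only record, updated offline and synced to a central registry later. The sketch below illustrates that pattern only; the ID format, fields, and sync logic are invented and do not describe Shifo’s or Khushi Baby’s implementations:

```python
import uuid
from datetime import date

class ChildHealthRecord:
    """Toy offline-first record keyed by a unique child ID (e.g. printed in a
    booklet or stored on an NFC chip), with events queued until connectivity."""

    def __init__(self):
        self.child_id = str(uuid.uuid4())   # unique identifier for the child
        self.events = []                    # recorded locally, offline
        self.pending_sync = []

    def record_event(self, event_type: str, detail: str, when: date) -> None:
        event = {"type": event_type, "detail": detail, "date": when.isoformat()}
        self.events.append(event)
        self.pending_sync.append(event)     # will upload once online

    def sync(self, upload) -> int:
        """Push queued events to a central registry when a connection exists."""
        sent = 0
        while self.pending_sync:
            upload(self.child_id, self.pending_sync.pop(0))
            sent += 1
        return sent

record = ChildHealthRecord()
record.record_event("vaccination", "DTP dose 1", date(2017, 3, 1))
record.record_event("vaccination", "DTP dose 2", date(2017, 4, 5))
# 'upload' stands in for a network call; here we just print each queued event.
record.sync(lambda cid, ev: print(cid[:8], ev))
```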

The Global Open Data Index 2016/2017 – Advancing the State of Open Data Through Dialogue


Open Knowledge International: “The Global Open Data Index (GODI) is the annual global benchmark for publication of open government data, run by the Open Knowledge Network. Our crowdsourced survey measures the openness of government data according to the Open Definition.

By having a tool that is run by civil society, GODI creates valuable insights for governments’ data publishers, helping them understand where they have data gaps. It also shows how to make data more usable and eventually more impactful. GODI therefore provides important feedback that governments usually lack.

For the last 5 years we have been revising the GODI methodology to fit the changing needs of the open data movement. This year, we changed our entire survey design by adding experimental questions to assess data findability and usability. We also improved our dataset definitions by looking at essential data points that can solve real-world problems. Using more precise data definitions also increased the reliability of our cross-country comparison. See all about the GODI methodology here.

In addition, this year GODI shall be more than a mere measurement tool. We see it as a tool for conversation. To spark debate, we release GODI in two phases:

  1. The dialogue phase – We are releasing the data to the public after a rigorous review. Yet, like any assessment, our work is not always perfect. We give all users a chance to contest the index results for 30 days, starting May 2nd. In this period, users of the index can comment on our assessments through our Global Open Data Index forum. On June 2nd, we will review those comments and will change some index submissions if needed.
  2. The final results – on June 15 we will present the final results of the index. For the first time ever, we will also publish the GODI white paper. This paper will include our main findings and recommendations to advance open data publication….

… findings from this year’s GODI

  • GODI highlights data gaps. Open data is the final stage of an information production chain, where governments measure and collect data, process and share data internally, and publish this data openly. While designed to measure open data, the Index also highlights gaps in this production chain. Does a government collect data at all? Why is data not collected? Some governments lack the infrastructure and resources to modernise their information systems; other countries do not have information systems in place at all.
  • Data findability is a major challenge. We have data portals and registries, but government agencies under one national government still publish data in different ways and in different locations. Moreover, they have different protocols for licenses and formats. This has a hazardous impact – we may not find open data even if it is out there, and therefore can’t use it. Data findability is a prerequisite for open data to fulfill its potential, and currently most data is very hard to find.
  • A lot of ‘data’ IS online, but the ways in which it is presented are limiting its openness. Governments publish data in many forms, not only as tabular datasets but also as visualisations, maps, graphs and texts. While this is a good effort to make data relatable, it sometimes makes the data very hard or even impossible to reuse. It is crucial for governments to revise how they produce and provide data so that it is of good enough quality for reuse in its raw form. For that, we need to be aware of what raw data is required, which varies from data category to category.
  • Open licensing is a problem, and we cannot assess public domain status. Each year we find ourselves more confused about open data licences. On the one hand, more governments implement their own versions of open data licenses. Some of them are compliant with the Open Definition, but most are not officially acknowledged. On the other hand, some governments do not provide open licenses but rather terms of use, which may leave users in the dark about the actual possibilities to reuse data. There is a need to draw more attention to data licenses and make sure data producers understand how to license data better….(More)”.
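As a toy illustration of the kind of checks such an assessment involves (not GODI’s actual methodology or criteria), a dataset entry can be scored against a few basic openness questions, such as whether it is online, machine-readable, and openly licensed:

```python
OPEN_LICENSES = {"CC0-1.0", "CC-BY-4.0", "ODbL-1.0"}      # assumed examples
MACHINE_READABLE = {"csv", "json", "xml", "geojson"}

def assess_dataset(entry: dict) -> dict:
    """Return a simple openness checklist for one dataset entry."""
    return {
        "found_online":     bool(entry.get("url")),
        "machine_readable": entry.get("format", "").lower() in MACHINE_READABLE,
        "openly_licensed":  entry.get("license") in OPEN_LICENSES,
    }

entry = {
    "title": "National budget 2016",
    "url": "https://example.gov/budget.csv",     # hypothetical dataset entry
    "format": "CSV",
    "license": "terms-of-use",   # not an explicitly open licence -> flagged
}
checks = assess_dataset(entry)
print(checks)
print("open" if all(checks.values()) else "not fully open")
```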

The opportunity in government productivity


The McKinsey Center for Government: “Governments face a pressing question: How to do more with less? Raising productivity could save $3.5 trillion a year—or boost outcomes at no extra cost.

Higher costs and rising demand have driven rapid increases in spending on core public services such as education, healthcare, and transport—while countries must grapple with complex challenges such as population aging, economic inequality, and protracted security concerns. Government expenditure amounts to more than a third of global GDP, budgets are strained, and the world public-sector deficit is close to $4 trillion a year.

At the same time, governments are struggling to meet citizens’ rising expectations. Satisfaction with key state services, such as public transportation, schools, and healthcare facilities, is less than half that of nonstate providers, such as banks or utilities.

Governments need a way to deliver better outcomes—and a better experience for citizens—at a sustainable cost. A new paper by the McKinsey Center for Government (MCG), Government productivity: Unlocking the $3.5 trillion opportunity, suggests that goal is within reach. It shows that several countries have achieved dramatic productivity improvements in recent years—for example, by improving health, public safety, and education outcomes while maintaining or even reducing spending per capita or per student in those sectors.

If other countries were to match the improvements already demonstrated in these pockets of excellence, the world’s governments could potentially save as much as $3.5 trillion a year by 2021—equivalent to the entire global fiscal gap. Alternatively, countries could choose to keep spending constant while boosting the quality of key services. For example, if all the countries studied had improved the productivity of their healthcare systems at the rate of comparable best performers over the past 5 years, they would have added 1.4 years to the healthy life expectancy of their combined populations. That translates into 12 billion healthy life years gained, without additional per capita spending…(More)”

Too Much of a Good Thing? Frequent Flyers and the Implications for the Coproduction of Public Service Delivery


Paper by Benjamin Y. Clark and Jeffrey L. Brudney: “The attention on coproduction, and specifically technology-enabled coproduction, has grown substantially. This attention has provided findings that highlight the benefits for citizens and governments. Previous research on technologically-enabled coproduction (Internet, smartphones, and centralized non-emergency municipal call centers) shows that these technologies have brought coproduction within reach of citizens (Meijer 2011; Kim and Lee 2012; Norris and Reddick 2013; Clark, Brudney, and Jang 2013; Linders 2012; Clark et al. 2016; Clark and Shurik 2016) and have the potential to improve perceptions of government performance (Clark and Shurik 2016). The advent of technologically-enabled coproduction has also made it possible for some residents to participate at levels not previously possible. These high-volume coproducers, now known as “frequent flyers,” have the potential to become pseudo-bureaucrats. This chapter seeks to understand whether we need to be concerned about this development. Additionally, we seek to understand what individual and neighborhood characteristics affect the intensity of coproduction of public services and whether there are diffusion effects of frequent flyers.

To address these questions, we use surveys of San Francisco, California, residents conducted in 2011, 2013, and 2015. Our results suggest that the frequent flyers are largely representative of their communities. Our study finds some evidence that racial and ethnic minorities might be more likely to be a part of this group than the white majority. And perhaps most interestingly, we find that neighbors appear to be learning from one another — the more frequent flyers that live in a neighborhood, the more likely it is that you will be a frequent flyer….(More)”
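A minimal sketch of the kind of model such survey data could support appears below; the simulated variables and logistic specification are assumptions for illustration, not the authors’ actual analysis. The idea is to regress whether a respondent is a frequent flyer on individual traits plus the count of frequent flyers in their neighborhood (the diffusion term):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Simulated survey of residents (placeholder variables, not the SF survey data)
df = pd.DataFrame({
    "minority":              rng.integers(0, 2, n),   # 1 = racial/ethnic minority
    "years_in_neighborhood": rng.integers(0, 30, n),
    "neighbors_frequent":    rng.poisson(2, n),       # frequent flyers nearby
})
# Assume a diffusion effect: more frequent-flyer neighbors raises the odds
logit = -2.0 + 0.3 * df["neighbors_frequent"] + 0.2 * df["minority"]
df["frequent_flyer"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["minority", "years_in_neighborhood", "neighbors_frequent"]])
model = sm.Logit(df["frequent_flyer"], X).fit(disp=False)
print(model.summary())   # a positive 'neighbors_frequent' coefficient would be
                         # consistent with a neighborhood diffusion effect
```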