Where’s the ‘Civic’ in CivicTech?


Blog by Pius Enywaru: “The ideology of community participation and development is a crucial topic for any nation or community seeking to attain sustainable development. Here in Uganda, oftentimes when the opportunity for public participation arises, whether in local planning or in holding local politicians to account, the ‘don’t care’ attitude reigns….

What works?

Some of these tools include Ask Your Government Uganda, a platform built to help members of the public get the information they want from 106 public agencies in Uganda. U-Report, developed by UNICEF, provides an SMS-based social monitoring tool designed to address issues affecting the youth of Uganda. Mentioned in a previous blog post, Parliament Watch brings the proceedings of the Parliament of Uganda to the citizens. The organization leverages technology to share live updates on social media and provides in-depth analysis to create a better understanding of the business of Parliament. Other tools used include citizen scorecards, public media campaigns and public petitions. Just recently, we have had a few calls to action to get people to sign petitions, with somewhat lackluster results.

What doesn’t work?

Although the use of these tools has grown dramatically, there is still a lack of awareness and, consequently, of community participation. In order to understand the interventions which the Government of Uganda believes are necessary for sustainable urban development, it is important to examine the realities pertaining to urban areas and their planning processes. There are many challenges in deploying ICT-based community participation tools: limited funding and support for such initiatives, low literacy levels, low technical literacy, a large digital divide, little input sought from communities in developing these tools, inadequate government involvement, and resistance to or distrust of change by both government and citizens. Furthermore, in many of these initiatives, a large marketing or sensitization push is needed to let citizens know that these services exist for their benefit.

There are great minds who have brilliant ideas to try and bring literally everyone on board through civic engagement. When you have a look at their ideas, you will agree that they might indeed make a reputable service and bring about remarkable change in different communities. However, the biggest question has always been: “How do these ideas get executed and adopted by the communities they target?” These ideas suffer a major setback: a lack of inclusivity to enhance community participation. This remains a puzzle for most folks who have these ideas….(More)”.

We need a safe space for policy failure


Catherine Althaus & David Threlfall in The Mandarin: “Who remembers Google Schemer, the Apple Pippin, or Microsoft Zune? No one — and yet such no-go ideas didn’t hold back these prominent companies. In IT, such high profile failures are simply steps on the path to future success. When a start-up or major corporate puts a product onto the market they identify the kinks in their invention immediately, design a fix, and release a new version. If the whole idea falls flat — and who ever listened to music on a Zune instead of an iPod? — the next big thing is just around the corner. Learning from failure is celebrated as a key feature of innovation.

But in the world of public policy, this approach is only now creeping into our collective consciousness. We tread ever so lightly.

Drug policy, childcare reform, and information technology initiatives are all areas where innovation could deliver policy improvements, but who is going to be a first-mover innovator in these areas without fearing potential retribution should anything go wrong?…

Public servants don’t have the luxury of ‘making a new version’ without fear of blame or retribution. Critically, their process often lacks the ability to test assumptions before delivery….

The most persuasive or entertaining narrative often trumps the painstaking work — and potential missteps — required to build an evidence base to support political and policy decisions. American academics Elizabeth Shanahan, Mark McBeth and Paul Hathaway make a remarkable claim regarding the power of narrative in the policy world: “Research in the field of psychology shows that narratives have a stronger ability to persuade individuals and influence their beliefs than scientific evidence does.” If narrative and stories overtake what we normally accept as evidence, then surely we ought to be taking more notice of what the narratives are, which we choose and how we use them…

Failing the right way

Essential policy spheres such as health, education and social services should benefit from innovative thinking and theory testing. What is necessary in these areas is even more robust attention to carefully calibrated and well-thought-through experimentation. Rewards need to outweigh risks, and risks need to be properly managed. This has always been the case in clinical trials in medicine. Incredible breakthroughs in medical practice made throughout the 20th century speak to the success of this model. Why should policymaking suffer from a timid inertia given the potential for similar success?

An innovative approach, focused on learning while failing right, will certainly require a shift in thinking. Every new initiative will need to be designed in a holistic way, to not just solve an issue but learn from every stage of the design and delivery process. Evaluation doesn’t follow implementation but instead becomes part of the entire cycle. A small-scale, iterative approach can then lead to bigger successes down the track….(More)”.

Ireland Opens E-Health Open Data Portal


Adi Gaskell at HuffPost: “… an open data portal has been launched by eHealth Ireland.  The portal aims to bring together some 300 different open data sources into one place, making it easier to find data from across the Irish Health Sector.

The portal includes data from a range of sources, including statistics on hospital day and inpatient cases, waiting list statistics and information around key new digital initiatives.

Open data

The resource features datasets from both the Department of Health and HealthLink, so the team believes that the data is of the highest quality and compliant with the Open Health Data Policy. This ensures that the approach taken with the release of data is consistent and in accordance with national and international guidelines.

“I am delighted to welcome the launch of the eHealth Ireland Open Data Portal today. The aim of Open Data is twofold; on the one hand facilitating transparency of the Public Sector and on the other providing a valuable resource that can drive innovation. The availability of Open Data can empower citizens and support clinicians, care providers, and researchers make better decisions, spur new innovations and identify efficiencies while ensuring that personal data remains confidential,” Richard Corbridge, CIO at the Health Service Executive says.

Data from both HealthLink and the National Treatment Purchase Fund (NTPF) will be uploaded to the portal each month, with new datasets due to be added on a regular basis….

The project follows a number of clearly defined Open Health Data Principles that are designed to support the health service in the provision of better patient care and in the support of new innovations in the sector, all whilst ensuring that patient data is secured and governed appropriately…(More)”.

Artificial Intelligence for Citizen Services and Government


Paper by Hila Mehr: “From online services like Netflix and Facebook, to chatbots on our phones and in our homes like Siri and Alexa, we are beginning to interact with artificial intelligence (AI) on a near daily basis. AI is the programming or training of a computer to do tasks typically reserved for human intelligence, whether it is recommending which movie to watch next or answering technical questions. Soon, AI will permeate the ways we interact with our government, too. From small cities in the US to countries like Japan, government agencies are looking to AI to improve citizen services.

While the potential future use cases of AI in government remain bounded by government resources and the limits of both human creativity and trust in government, the most obvious and immediately beneficial opportunities are those where AI can reduce administrative burdens, help resolve resource allocation problems, and take on significantly complex tasks. Many AI case studies in citizen services today fall into five categories: answering questions, filling out and searching documents, routing requests, translation, and drafting documents. These applications could make government work more efficient while freeing up time for employees to build better relationships with citizens. With citizen satisfaction with digital government offerings leaving much to be desired, AI may be one way to bridge the gap while improving citizen engagement and service delivery.

Despite the clear opportunities, AI will not solve systemic problems in government, and could potentially exacerbate issues around service delivery, privacy, and ethics if not implemented thoughtfully and strategically. Agencies interested in implementing AI can learn from previous government transformation efforts, as well as private-sector implementation of AI. Government offices should consider these six strategies for applying AI to their work: make AI a part of a goals-based, citizen-centric program; get citizen input; build upon existing resources; be data-prepared and tread carefully with privacy; mitigate ethical risks and avoid AI decision making; and, augment employees, do not replace them.

This paper explores the various types of AI applications, and current and future uses of AI in government delivery of citizen services, with a focus on citizen inquiries and information. It also offers strategies for governments as they consider implementing AI….(More)”

Algorithmic regulation: A critical interrogation


Karen Yeung in Regulation and Governance: “Innovations in networked digital communications technologies, including the rise of “Big Data,” ubiquitous computing, and cloud storage systems, may be giving rise to a new system of social ordering known as algorithmic regulation. Algorithmic regulation refers to decisionmaking systems that regulate a domain of activity in order to manage risk or alter behavior through continual computational generation of knowledge by systematically collecting data (in real time on a continuous basis) emitted directly from numerous dynamic components pertaining to the regulated environment in order to identify and, if necessary, automatically refine (or prompt refinement of) the system’s operations to attain a pre-specified goal. This study provides a descriptive analysis of algorithmic regulation, classifying these decisionmaking systems as either reactive or pre-emptive, and offers a taxonomy that identifies eight different forms of algorithmic regulation based on their configuration at each of the three stages of the cybernetic process: notably, at the level of standard setting (adaptive vs. fixed behavioral standards), information-gathering and monitoring (historic data vs. predictions based on inferred data), and at the level of sanction and behavioral change (automatic execution vs. recommender systems). It maps the contours of several emerging debates surrounding algorithmic regulation, drawing upon insights from regulatory governance studies, legal critiques, surveillance studies, and critical data studies to highlight various concerns about the legitimacy of algorithmic regulation….(More)”.
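Yeung's eight forms follow mechanically from the three binary design choices at each cybernetic stage. A quick sketch enumerates the 2 × 2 × 2 combinations (the short option labels below paraphrase the abstract's dimensions and are not the article's exact terms):

```python
from itertools import product

# Three cybernetic stages, two configurations each (labels paraphrased)
stages = {
    "standard setting": ("fixed standards", "adaptive standards"),
    "information gathering": ("historic data", "predictions from inferred data"),
    "sanction/behavioral change": ("recommender system", "automatic execution"),
}

# Every form of algorithmic regulation is one choice per stage
forms = list(product(*stages.values()))
for i, form in enumerate(forms, 1):
    print(f"Form {i}: " + " / ".join(form))

print(len(forms))  # 2 * 2 * 2 = 8 distinct forms
```

The enumeration makes plain why the taxonomy has exactly eight entries, and why adding a third option at any stage would grow it multiplicatively.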

Journal tries crowdsourcing peer reviews, sees excellent results


Chris Lee at ArsTechnica: “Peer review is supposed to act as a sanity check on science. A few learned scientists take a look at your work, and if it withstands their objective and entirely neutral scrutiny, a journal will happily publish your work. As those links indicate, however, there are some issues with peer review as it is currently practiced. Recently, Benjamin List, a researcher and journal editor in Germany, and his graduate assistant, Denis Höfler, have come up with a genius idea for improving matters: something called selected crowd-sourced peer review….

My central point: peer review is burdensome and sometimes barely functional. So how do we improve it? The main way is to experiment with different approaches to the reviewing process, which many journals have tried, albeit with limited success. Post-publication peer review, when scientists look over papers after they’ve been published, is also an option but depends on community engagement.

But if your paper is uninteresting, no one will comment on it after it is published. Pre-publication peer review is the only moment where we can be certain that someone will read the paper.

So, List (an editor for Synlett) and Höfler recruited 100 referees. For their trial, a forum-style commenting system was set up that allowed referees to comment anonymously on submitted papers and on each other’s comments as well. To provide a comparison, the papers that went through this process also went through the traditional peer review process. The authors and editors compared comments and (subjectively) evaluated the pros and cons. The 100-person crowd of researchers was deemed the more effective of the two.

The editors found that it took a bit more time to read and collate all the comments into a reviewers’ report. But it was still faster, which the authors loved. Typically, it took the crowd just a few days to complete their review, which compares very nicely to the usual four to six weeks of the traditional route (I’ve had papers languish for six months in peer review). And, perhaps most important, the responses were more substantive and useful compared to the typical two-to-four-person review.

So far, List has not published the trial results formally. Despite that, Synlett is moving to the new system for all its papers.

Why does crowdsourcing work?

Here we get back to something more editorial. I’d suggest that there is a physical analog to traditional peer review, called noise. Noise is not just a constant background that must be overcome. Noise is also generated by the very process that creates a signal. The difference is how the amplitude of noise grows compared to the amplitude of signal. For very low-amplitude signals, all you measure is noise, while for very high-intensity signals, the noise is vanishingly small compared to the signal, even though it’s huge compared to the noise of the low-amplitude signal.

Our esteemed peers, I would argue, are somewhat random in their response, but weighted toward objectivity. Using this inappropriate physics model, a review conducted by four reviewers can be expected (on average) to contain two responses that are, basically, noise. By contrast, a review by 100 reviewers may only have 10 responses that are noise. Overall, a substantial improvement. So, adding the responses of a large number of peers together should produce a better picture of a scientific paper’s strengths and weaknesses.
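The 2-of-4 versus 10-of-100 arithmetic is consistent with noise that grows as √N while signal grows as N. A toy simulation makes the same point (assuming, as my own simplification rather than anything in the article, that each reviewer reports a numeric score equal to the paper's true quality plus independent Gaussian noise):

```python
import random
import statistics

def crowd_error(n_reviewers, true_quality=7.0, noise_sd=2.0, trials=2000):
    """How far the averaged review score typically strays from the true
    quality when each reviewer reports quality + independent noise."""
    rng = random.Random(42)
    errors = []
    for _ in range(trials):
        scores = [true_quality + rng.gauss(0, noise_sd)
                  for _ in range(n_reviewers)]
        errors.append(statistics.mean(scores) - true_quality)
    return statistics.pstdev(errors)  # spread of the crowd's average

err_4 = crowd_error(4)
err_100 = crowd_error(100)
# The error in the mean shrinks roughly as 1/sqrt(N):
# about 2/sqrt(4) = 1.0 for four reviewers, 2/sqrt(100) = 0.2 for a hundred
print(f"4 reviewers: {err_4:.2f}, 100 reviewers: {err_100:.2f}")
```

Under these assumptions the hundred-person crowd's average sits about five times closer to the true quality than the four-person panel's, which is the averaging intuition behind the proposal.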

Didn’t I just say that reviewers are overloaded? Doesn’t it seem that this will make the problem worse?

Well, no, as it turns out. When this approach was tested (with consent) on papers submitted to Synlett, it was discovered that review times went way down—from weeks to days. And authors reported getting more useful comments from their reviewers….(More)”.

Free Speech and Transparency in a Digital Era


Russell L. Weaver at IMODEV: ” Governmental openness and transparency is inextricably intertwined with freedom of expression. In order to scrutinize government, the people must have access to information regarding the functioning of government. As the U.S. Supreme Court has noted, “It is inherent in the nature of the political process that voters must be free to obtain information from diverse sources in order to determine how to cast their votes”. As one commentator noted, “Citizens need to understand what their government is doing in their name.”

Despite the need for transparency, the U.S. government has frequently functioned opaquely. For example, even though the U.S. Supreme Court is a fundamental component of the U.S. constitutional system, confirmation hearings for U.S. Supreme Court justices were held in secret for decades. That changed about a hundred years ago when the U.S. Senate broke with tradition and began holding confirmation hearings in public. The results of this openness have been both interesting and enlightening: the U.S. citizenry has become much more interested and involved in the confirmation process, galvanizing and campaigning both for and against proposed nominees. In the 1930s, Congress decided to open up the administrative process as well. For more than a century, administrative agencies were not required to notify the public of proposed actions, or to allow the public to have input on the policy choices reflected in proposed rules and regulations. That changed in the 1930s when Congress adopted the federal Administrative Procedure Act (APA). For the creation of so-called “informal rules,” the APA required agencies to publish a NOPR (notice of proposed rulemaking) in the Federal Register, thereby providing the public with notice of the proposed rule. Congress required that the NOPR provide the public with various types of information, including “(1) a statement of the time, place, and nature of public rule making proceedings; (2) reference to the legal authority under which the rule is proposed; and (3) either the terms or substance of the proposed rule or a description of the subjects and issues involved.” In addition to allowing interested parties the opportunity to comment on NOPRs, and requiring agencies to “consider” those comments, the APA also required agencies to issue a “concise general statement” of the “basis and purpose” of any final rule that they issue. As with the U.S. Supreme Court’s confirmation processes, the APA’s rulemaking procedures led to greater citizen involvement in the rulemaking process. The APA also promoted openness by requiring administrative agencies to voluntarily disclose various types of internal information to the public, including “interpretative rules and statements of policy.”

Congress supplemented the APA in the 1960s when it enacted the federal Freedom of Information Act (FOIA). FOIA gave individuals and corporations a right of access to government-held information. As a “disclosure” statute, FOIA specifically provides that each agency, “upon any request for records which reasonably describes such records and is made in accordance with published rules stating the time, place, fees (if any), and procedures to be followed, shall make the records promptly available to any person.” Agencies are required to decide within twenty days whether to comply with a request. However, the time limit can be tolled under certain circumstances. Although FOIA is a disclosure statute, it does not require disclosure of all governmental documents. In addition to FOIA, Congress also enacted the Federal Advisory Committee Act (FACA), the Government in the Sunshine Act, and amendments to FOIA, all of which were designed to enhance governmental openness and transparency. In addition, many state legislatures have adopted their own open records provisions that are similar to FOIA.

Despite these movements towards openness, advancements in speech technology have forced governments to become much more open and transparent than they have ever been.  Some of this openness has been intentional as governmental entities have used new speech technologies to communicate with the citizenry and enhance its understanding of governmental operations.  However, some of this openness has taken place despite governmental resistance.  The net effect is that free speech, and changes in communications technologies, have produced a society that is much more open and transparent.  This article examines the relationship between free speech, the new technologies, and governmental openness and transparency….(More).

Courts Disrupted


A new Resource Bulletin by the Joint Technology Committee (JTC): “The concept of disruptive innovation made its debut more than 20 years ago in a Harvard Business Review article. Researchers Clayton M. Christensen and Joseph L. Bower observed that established organizations may invest in retaining current customers but often fail to make the technological investments that future customers will expect. That opens the way for low-cost competitive alternatives to enter the marketplace, addressing the needs of unserved and under-served populations. Lower-cost alternatives over time can be enhanced, gain acceptance in well-served populations, and sometimes ultimately displace traditional products or services. This should be a cautionary tale for court managers. What would happen if the people took their business elsewhere? Is that even possible? What would be the implications to both the public and the courts? Should court leaders concern themselves with this possibility?

While disruptive innovation theory is both revered and reviled, it provides a perspective that can help court managers anticipate and respond to significant change. Like large businesses with proprietary offerings, courts have a unique customer base. Until recently, those customers had no other option than to accept whatever level of service the courts would provide and at whatever cost, or simply choose not to address their legal needs. Innovations such as non-JD legal service providers, online dispute resolution (ODR), and unbundled legal services are circumventing some traditional court processes, providing more timely and cost-effective outcomes. While there is no consensus in the court community on the potential impact to courts (whether they are in danger of “going out of business”), there are compelling reasons for court managers to be aware of and leverage the concept of disruptive innovation.

As technology dramatically changes the way routine transactions are handled in other industries, courts can also embrace innovation as one way to enhance the public’s experience. Doing so may help courts “disrupt” themselves, making justice available to a wider audience at a lower cost while preserving fairness, neutrality, and transparency in the judicial process….(More).”

Community Digital Storytelling for Collective Intelligence: towards a Storytelling Cycle of Trust


Sarah Copeland and Aldo de Moor in AI & SOCIETY: “Digital storytelling has become a popular method for curating community, organisational, and individual narratives. Since its beginnings over 20 years ago, projects have sprung up across the globe, where authentic voice is found in the narration of lived experiences. Contributing to a Collective Intelligence for the Common Good, the authors of this paper ask how shared stories can bring impetus to community groups to help identify what they seek to change, and how digital storytelling can be effectively implemented in community partnership projects to enable authentic voices to be carried to other stakeholders in society. The Community Digital Storytelling (CDST) method is introduced as a means for addressing community-of-place issues. There are five stages to this method: preparation, story telling, story digitisation, digital story sense-making, and digital story sharing. Additionally, a Storytelling Cycle of Trust framework is proposed. We identify four trust dimensions as being imperative foundations in implementing community digital media interventions for the common good: legitimacy, authenticity, synergy, and commons. This framework is concerned with increasing the impact that everyday stories can have on society; it is an engine driving prolonged storytelling. From this perspective, we consider the ability to scale up the scope and benefit of stories in civic contexts. To illustrate this framework, we use experiences from the CDST workshop in northern Britain and compare this with a social innovation project in the southern Netherlands….(More)”.

The Tech Revolution That’s Changing How We Measure Poverty


Alvin Etang Ndip at the World Bank: “The world has an ambitious goal to end extreme poverty by 2030. But, without good poverty data, it is impossible to know whether we are making progress, or whether programs and policies are reaching those who are the most in need.

Countries, often in partnership with the World Bank Group and other agencies, measure poverty and wellbeing using household surveys that help give policymakers a sense of who the poor are, where they live, and what is holding back their progress. Once a paper-and-pencil exercise, household data collection is being revolutionized by technology, and the World Bank is tapping into this potential to produce more and better poverty data….

“Technology can be harnessed in three different ways,” says Utz Pape, an economist with the World Bank. “It can help improve data quality of existing surveys, it can help to increase the frequency of data collection to complement traditional household surveys, and can also open up new avenues of data collection methods to improve our understanding of people’s behaviors.”

As technology is changing the field of data collection, researchers are continuing to find new ways to build on the power of mobile phones and tablets.

The World Bank’s Pulse of South Sudan initiative, for example, takes tablet-based data collection a step further. In addition to conducting the household survey, the enumerators also record a short, personalized testimonial with the people they are interviewing, revealing a first-person account of the situation on the ground. Such testimonials allow users to put a human face on data and statistics, giving a fuller picture of the country’s experience.

Real-time data through mobile phones

At the same time, more and more countries are generating real-time data through high-frequency surveys, capitalizing on the proliferation of mobile phones around the world. The World Bank’s Listening to Africa (L2A) initiative has piloted the use of mobile phones to regularly collect information on living conditions. The approach combines face-to-face surveys with follow-up mobile phone interviews to collect data that makes it possible to monitor well-being.

The initiative hands out mobile phones and solar chargers to all respondents. To minimize the risk of people dropping out, the respondents are given credit top-ups to stay in the program. From monitoring health care facilities in Tanzania to collecting data on frequency of power outages in Togo, the initiative has been rolled out in six countries and has been used to collect data on a wide range of areas. …

Technology-driven data collection efforts haven’t been restricted to the Africa region alone. In fact, the approach was piloted early in Peru and Honduras with the Listening 2 LAC program. In Europe and Central Asia, the World Bank has rolled out the Listening to Tajikistan program, which was designed to monitor the impact of the Russian economic slowdown in 2014 and 2015. Initially a six-month pilot, the initiative has now been in operation for 29 months, and a partnership with UNICEF and JICA has ensured that data collection can continue for the next 12 months. Given the volume of data, the team is currently working to create a multidimensional fragility index, where one can monitor a set of well-being indicators – ranging from food security to quality jobs and public services – on a monthly basis…

There are other initiatives, such as in Mexico where the World Bank and its partners are using satellite imagery and survey data to estimate how many people live below the poverty line down to the municipal level, or guiding data collectors using satellite images to pick a representative sample for the Somali High Frequency Survey. However, despite the innovation, these initiatives are not intended to replace traditional household surveys, which still form the backbone of measuring poverty. When better integrated, they can prove to be a formidable set of tools for data collection to provide the best evidence possible to policymakers….(More)”