Intragovernmental Collaborations: Pipedreams or the Future of the Public Sector?


Sarah Worthing at the Stanford Social Innovation Review: “Despite the need for concerted, joint efforts among public sector leaders, those working with or in government know all too well that such collaborations are rare. The motivation and ability to collaborate in government are usually lacking. So how did these leaders—some with competing agendas—manage to do it?

A new tool for collaboration

Policy labs are units embedded within the public sector—“owned” by one or several ministries—that anchor systematic public sector innovation efforts by facilitating creative approaches to policymaking. Since the inception of the first labs over a decade ago, many innovation experts and academics have touted labs as the leading edge of public policy innovation. They can generate novel, citizen-centric, effective policies and service provisions, because they include a wide range of governmental and, in many cases, non-governmental actors in tackling complex public policy issues like social inequality, mass migration, and terrorism. MindLab in Denmark, for example, brought together government decision makers from across five ministries in December 2007 to co-create policy strategies on tackling climate change while also propelling new business growth. The collaboration resulted in a range of business strategies for climate change that were adopted during the 2009 UN COP15 Summit in Copenhagen. Under normal circumstances, these government leaders often push conflicting agendas, compete over resources, and are highly risk-averse in undertaking intragovernmental partnerships—all “poison pills” for the kind of collaboration successful public sector innovation needs. However, policy labs like MindLab, Policy Lab UK, and almost 100 similar cross-governmental units are finding ways to overcome these barriers and drive public sector innovation.

Five ways policy labs facilitate successful intragovernmental collaboration

To examine how labs do this, we conducted a multiple-case analysis of policy labs in the European Union and United States.

1. Reducing potential future conflict through experiential on-boarding processes. Policy labs conduct extensive screening and induction activities to provide policymakers with both knowledge of and faith in the policy lab’s approach to policymaking. …

2. Utilization of spatial cues to flatten hierarchical and departmental differences. Policy labs strategically use non-traditional spatial elements such as moveable whiteboards, tactile and colorful prototyping materials, and sitting cubes, along with the absence of expected elements such as conference tables and chairs, to indicate that unconventional norms—non-hierarchical and relational norms—govern lab spaces….

3. Reframing policy issues to focus on affected citizens. Policy labs highlight individual citizens’ stories to help reconstruct policymakers’ perceptions toward a more common and human-centered understanding of a policy issue…

4. Politically neutral, process-focused facilitation. Lab practitioners employ design methods that can help bring together divided policymakers and break scripted behavior patterns. Many policy labs use variations of design thinking and foresight methods, with a focus on iterative prototyping and testing, stressing the need for skilled but politically neutral facilitation to work through points of conflict and reach consensus on solutions. …

5. Mitigating risk through policy lab branding….(More)”.

Free Speech and Transparency in a Digital Era


Russell L. Weaver at IMODEV: “Governmental openness and transparency are inextricably intertwined with freedom of expression. In order to scrutinize government, the people must have access to information regarding the functioning of government. As the U.S. Supreme Court has noted, “It is inherent in the nature of the political process that voters must be free to obtain information from divers sources in order to determine how to cast their votes”. As one commentator noted, “Citizens need to understand what their government is doing in their name.”

Despite the need for transparency, the U.S. government has frequently functioned opaquely. For example, even though the U.S. Supreme Court is a fundamental component of the U.S. constitutional system, confirmation hearings for U.S. Supreme Court justices were held in secret for decades. That changed about a hundred years ago when the U.S. Senate broke with tradition and began holding confirmation hearings in public. The results of this openness have been both interesting and enlightening: the U.S. citizenry has become much more interested and involved in the confirmation process, galvanizing and campaigning both for and against proposed nominees. In the 1930s, Congress decided to open up the administrative process as well. For more than a century, administrative agencies were not required to notify the public of proposed actions, or to allow the public to have input on the policy choices reflected in proposed rules and regulations. That changed in the 1930s when Congress adopted the federal Administrative Procedure Act (APA). For the creation of so-called “informal rules,” the APA required agencies to publish a NOPR (notice of proposed rulemaking) in the Federal Register, thereby providing the public with notice of the proposed rule. Congress required that the NOPR provide the public with various types of information, including “(1) a statement of the time, place, and nature of public rule making proceedings; (2) reference to the legal authority under which the rule is proposed; and (3) either the terms or substance of the proposed rule or a description of the subjects and issues involved.” In addition to allowing interested parties the opportunity to comment on NOPRs, and requiring agencies to “consider” those comments, the APA also required agencies to issue a “concise general statement” of the “basis and purpose” of any final rule that they issue. As with the U.S. Supreme Court’s confirmation processes, the APA’s rulemaking procedures led to greater citizen involvement in the rulemaking process. The APA also promoted openness by requiring administrative agencies to voluntarily disclose various types of internal information to the public, including “interpretative rules and statements of policy.”

Congress supplemented the APA in the 1960s when it enacted the federal Freedom of Information Act (FOIA). FOIA gave individuals and corporations a right of access to government-held information. As a “disclosure” statute, FOIA specifically provides that “[each agency,] upon any request for records which reasonably describes such records and is made in accordance with published rules stating the time, place, fees (if any), and procedures to be followed, shall make the records promptly available to any person.” Agencies are required to decide within twenty days whether to comply with a request. However, the time limit can be tolled under certain circumstances. Although FOIA is a disclosure statute, it does not require disclosure of all governmental documents. In addition to FOIA, Congress also enacted the Federal Advisory Committee Act (FACA), the Government in the Sunshine Act, and amendments to FOIA, all of which were designed to enhance governmental openness and transparency. In addition, many state legislatures have adopted their own open records provisions that are similar to FOIA.

Despite these movements towards openness, advancements in speech technology have forced governments to become much more open and transparent than they have ever been.  Some of this openness has been intentional as governmental entities have used new speech technologies to communicate with the citizenry and enhance its understanding of governmental operations.  However, some of this openness has taken place despite governmental resistance.  The net effect is that free speech, and changes in communications technologies, have produced a society that is much more open and transparent.  This article examines the relationship between free speech, the new technologies, and governmental openness and transparency….(More).

Courts Disrupted


A new Resource Bulletin by the Joint Technology Committee (JTC): “The concept of disruptive innovation made its debut more than 20 years ago in a Harvard Business Review article. Researchers Clayton M. Christensen and Joseph L. Bower observed that established organizations may invest in retaining current customers but often fail to make the technological investments that future customers will expect. That opens the way for low-cost competitive alternatives to enter the marketplace, addressing the needs of unserved and under-served populations. Lower-cost alternatives over time can be enhanced, gain acceptance in well-served populations, and sometimes ultimately displace traditional products or services. This should be a cautionary tale for court managers. What would happen if the people took their business elsewhere? Is that even possible? What would be the implications to both the public and the courts? Should court leaders concern themselves with this possibility?

While disruptive innovation theory is both revered and reviled, it provides a perspective that can help court managers anticipate and respond to significant change. Like large businesses with proprietary offerings, courts have a unique customer base. Until recently, those customers had no other option than to accept whatever level of service the courts would provide and at whatever cost, or simply choose not to address their legal needs. Innovations such as non-JD legal service providers, online dispute resolution (ODR), and unbundled legal services are circumventing some traditional court processes, providing more timely and cost-effective outcomes. While there is no consensus in the court community on the potential impact to courts (whether they are in danger of “going out of business”), there are compelling reasons for court managers to be aware of and leverage the concept of disruptive innovation.

As technology dramatically changes the way routine transactions are handled in other industries, courts can also embrace innovation as one way to enhance the public’s experience. Doing so may help courts “disrupt” themselves, making justice available to a wider audience at a lower cost while preserving fairness, neutrality, and transparency in the judicial process….(More).”

Mastercard’s Big Data For Good Initiative: Data Philanthropy On The Front Lines


Interview by Randy Bean of Shamina Singh: Much has been written about big data initiatives and the efforts of market leaders to derive critical business insights faster. Less has been written about initiatives by some of these same firms to apply big data and analytics to a different set of issues, which are not solely focused on revenue growth or bottom line profitability. While the focus of most writing has been on the use of data for competitive advantage, a small set of companies has been undertaking, with much less fanfare, a range of initiatives designed to ensure that data can be applied not just for corporate good, but also for social good.

One such firm is Mastercard, which describes itself as a technology company in the payments industry that connects buyers and sellers in 210 countries and territories across the globe. In 2013 Mastercard launched the Mastercard Center for Inclusive Growth, which operates as an independent subsidiary of Mastercard and is focused on the application of data to a range of issues for social benefit….

In testimony before the Senate Committee on Foreign Affairs on May 4, 2017, Mastercard Vice Chairman Walt Macnee, who serves as the Chairman of the Center for Inclusive Growth, addressed issues of private sector engagement. Macnee noted, “The private sector and public sector can each serve as a force for good independently; however when the public and private sectors work together, they unlock the potential to achieve even more.” Macnee further commented, “We will continue to leverage our technology, data, and know-how in an effort to solve many of the world’s most pressing problems. It is the right thing to do, and it is also good for business.”…

Central to the mission of the Mastercard Center is the notion of “data philanthropy”. This term encompasses notions of data collaboration and data sharing and is at the heart of the initiatives that the Center is undertaking. The three cornerstones of the Center’s mandate are:

  • Sharing Data Insights – This is achieved through the concept of “data grants”, which entails granting access to proprietary insights in support of social initiatives in a way that fully protects consumer privacy.
  • Data Knowledge – The Mastercard Center undertakes collaborations with not-for-profit and governmental organizations on a range of initiatives. One such effort was in collaboration with the Obama White House’s Data-Driven Justice Initiative, by which data was used to help advance criminal justice reform. This initiative was then able, through the use of insights provided by Mastercard, to demonstrate the impact crime has on merchant locations and local job opportunities in Baltimore.
  • Leveraging Expertise – Similarly, the Mastercard Center has collaborated with private organizations such as DataKind, which undertakes data science initiatives for social good.

Just this past month, the Mastercard Center released initial findings from its Data Exploration: Neighborhood Crime and Local Business initiative. This effort was focused on ways in which Mastercard’s proprietary insights could be combined with public data on commercial robberies to help understand the potential relationships between criminal activity and business closings. A preliminary analysis showed a spike in commercial robberies followed by an increase in bar and nightclub closings. These analyses help community and business leaders understand factors that can impact business success.

Late last year, Ms. Singh issued A Call to Action on Data Philanthropy, in which she challenges her industry peers to look at ways in which they can make a difference — “I urge colleagues at other companies to review their data assets to see how they may be leveraged for the benefit of society.” She concludes, “the sheer abundance of data available today offers an unprecedented opportunity to transform the world for good.”….(More)
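The Center has not published the underlying analysis, but the kind of comparison described above — checking whether a spike in commercial robberies precedes a rise in business closings — can be sketched as a simple lagged correlation. The monthly figures below are invented for illustration and are not Mastercard or Baltimore data.

```python
# Illustrative lagged comparison between monthly robbery counts and
# business closings; all numbers are invented, not Mastercard data.
robberies = [12, 15, 14, 28, 31, 18, 16, 15]   # monthly commercial robberies
closings  = [3,  4,  3,  4,  9, 11,  6,  4]    # monthly bar/nightclub closings

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Compare closings against robberies one month earlier (lag of 1).
lag = 1
print("same-month correlation:", round(pearson(robberies, closings), 2))
print("lagged correlation    :", round(pearson(robberies[:-lag], closings[lag:]), 2))
```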

Chicago police see less violent crime after using predictive code


Jon Fingas at Engadget: “Law enforcement has been trying predictive policing software for a while now, but how well does it work when it’s put to a tough test? Potentially very well, according to Chicago police. The city’s 7th District police report that their use of predictive algorithms helped reduce the number of shootings by 39 percent year-over-year in the first 7 months of 2017, with murders dropping by 33 percent. Three other districts didn’t witness as dramatic a change, but they still saw 15 to 29 percent reductions in shootings and a corresponding 9 to 18 percent drop in murders.

It mainly comes down to knowing where and when to deploy officers. One of the tools used in the 7th District, HunchLab, blends crime statistics with socioeconomic data, weather info and business locations to determine where crimes are likely to happen. Other tools (such as the Strategic Subject’s List and ShotSpotter) look at gang affiliation, drug arrest history and gunfire detection sensors.
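Neither the article nor the vendors spell out their models, so the following is only a minimal sketch of the general approach such tools take — scoring geographic grid cells by blending several data layers into a single relative risk estimate. The feature names and weights are hypothetical, not HunchLab’s actual inputs.

```python
# Hypothetical sketch: rank grid cells by blending several data layers,
# in the spirit of tools that combine crime, socioeconomic, weather and
# business-location data. Fields and weights are illustrative only.
from dataclasses import dataclass

@dataclass
class CellFeatures:
    recent_shootings: int     # incidents in the cell over the last 30 days
    unemployment_rate: float  # socioeconomic indicator, 0-1
    late_night_venues: int    # nearby bars/clubs open late
    heat_index: float         # weather factor, normalized 0-1

def risk_score(cell: CellFeatures) -> float:
    """Combine the layers into a single relative risk score for one cell."""
    return (
        3.0 * cell.recent_shootings
        + 2.0 * cell.unemployment_rate
        + 0.5 * cell.late_night_venues
        + 1.0 * cell.heat_index
    )

cells = {
    "cell_17": CellFeatures(4, 0.21, 6, 0.8),
    "cell_42": CellFeatures(1, 0.09, 2, 0.8),
}
# Deploy officers to the highest-scoring cells first.
for name, feats in sorted(cells.items(), key=lambda kv: risk_score(kv[1]), reverse=True):
    print(name, round(risk_score(feats), 2))
```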

If the performance holds, it will suggest that predictive policing can save lives when crime rates are particularly high, as they have been on Chicago’s South Side. However, both the Chicago Police Department and academics are quick to stress that algorithms are just one part of a larger solution. Officers still have to be present, and this doesn’t tackle the underlying issues that cause crime, such as limited access to education and a lack of economic opportunity. Still, any successful reduction in violence is bound to be appreciated….(More)”.

Rise of the Government Chatbot


Zack Quaintance at Government Technology: “A robot uprising has begun, except instead of overthrowing mankind so as to usher in a bleak yet efficient age of cold judgment and colder steel, this uprising is one of friendly robots (so far).

Which is all an alarming way to say that many state, county and municipal governments across the country have begun to deploy relatively simple chatbots, aimed at helping users get more out of online public services such as a city’s website, pothole reporting and open data. These chatbots have been installed in recent months in a diverse range of places including Kansas City, Mo.; North Charleston, S.C.; and Los Angeles — and by many indications, there is an accompanying wave of civic tech companies that are offering this tech to the public sector.

They range from simple to complex in scope, and most of the jurisdictions currently using them say they are doing so on somewhat of a trial or experimental basis. That’s certainly the case in Kansas City, where the city now has a Facebook chatbot to help users get more out of its open data portal.

“The idea was never to create a final chatbot that was super intelligent and amazing,” said Eric Roche, Kansas City’s chief data officer. “The idea was let’s put together a good effort, and put it out there and see if people find it interesting. If they use it, get some lessons learned and then figure out — either in our city, or with developers, or with people like me in other cities, other chief data officers and such — and talk about the future of this platform.”

Roche developed Kansas City’s chatbot earlier this year by working after hours with Code for Kansas City, the local Code for America brigade — and he did so because, in the four-plus years the city’s open data program has been active, there have been regular concerns that the info available through it was hard to navigate, search and use for average citizens who aren’t data scientists and don’t work for the city (a common issue currently being addressed by many jurisdictions). The idea behind the Facebook chatbot is that Roche can program it with a host of answers to the most prevalent questions, enabling it to both help interested users and save him time for other work….
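As a rough illustration of the approach Roche describes — pre-programming answers to the most prevalent questions — a minimal keyword-matching bot might look like the sketch below. The questions, answers and categories are placeholders, not Kansas City’s actual content.

```python
# Minimal sketch of a FAQ-style chatbot: match an incoming message against
# keywords and return a canned answer. All content here is illustrative only.
CANNED_ANSWERS = {
    ("crime", "police"): "Crime data is updated daily on the open data portal.",
    ("budget", "spending"): "Budget datasets are listed under the 'Finance' category.",
    ("pothole", "street repair"): "Potholes can be reported through the city's 311 service.",
}
FALLBACK = "Sorry, I don't know that one yet. Try browsing the open data portal."

def reply(message: str) -> str:
    """Return the first canned answer whose keywords appear in the message."""
    text = message.lower()
    for keywords, answer in CANNED_ANSWERS.items():
        if any(keyword in text for keyword in keywords):
            return answer
    return FALLBACK

print(reply("Where can I find crime data?"))
print(reply("How do I report a pothole?"))
print(reply("Tell me about the zoo"))
```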

In North Charleston, S.C., the city has adopted a text-based chatbot, which goes beyond common 311-style interfaces by allowing users to report potholes or any other lapses in city services they may notice. It also allows them to ask questions, which it subsequently answers by crawling city websites and replying with relevant links, said Ryan Johnson, the city’s public relations coordinator.

North Charleston has done this by partnering with a local tech startup that has deep roots in the area’s local government. The company is called Citibot …

With Citibot, residents can report a pothole at 2 a.m., or they can get info about street signs or trash pickup sent right to their phones.

There are also more complex chatbot technologies taking hold at both the civic and state levels, in Los Angeles and Mississippi, to be exact.

Mississippi’s chatbot is called Missi, and its capabilities are vast and nuanced. Residents can even use it for help submitting online payments. It’s accessible by clicking a small chat icon on the side of the website.

Back in May, Los Angeles rolled out Chip, or City Hall Internet Personality, on the Los Angeles Business Assistance Virtual Network. The chatbot operates as a 24/7 digital assistant for visitors to the site, helping them navigate it and better understand its services by answering their inquiries. It is capable of presenting info from anywhere on the site, and it can even go so far as helping users fill out forms or set up email alerts….(More)”

Scientists Use Google Earth and Crowdsourcing to Map Uncharted Forests


Katie Fletcher, Tesfay Woldemariam and Fred Stolle at EcoWatch: “No single person could ever hope to count the world’s trees. But a crowd of them just counted the world’s drylands forests—and, in the process, charted forests never before mapped, cumulatively adding up to an area equivalent in size to the Amazon rainforest.

Current technology enables computers to automatically detect forest area through satellite data in order to adequately map most of the world’s forests. But drylands, where trees are fewer and farther apart, stymie these modern methods. To measure the extent of forests in drylands, which make up more than 40 percent of land surface on Earth, researchers from the UN Food and Agriculture Organization, the World Resources Institute and several universities and organizations had to come up with unconventional techniques. Foremost among these was turning to residents, who contributed their expertise through local map-a-thons….

Google Earth collects satellite data from several satellites with a variety of resolutions and technical capacities. The dryland satellite imagery collection compiled by Google from various providers, including DigitalGlobe, is of particularly high quality, as desert areas have little cloud cover to obstruct the views. So while algorithms struggle to detect non-dominant land cover, the human eye has no problem distinguishing trees in these landscapes. Using this advantage, the scientists decided to visually count trees in hundreds of thousands of high-resolution images to determine overall dryland tree cover….

Armed with the quality images from Google that allowed researchers to see objects as small as half a meter (about 20 inches) across, the team divided the global dryland images into 12 regions, each with a regional partner to lead the counting assessment. The regional partners in turn recruited local residents with practical knowledge of the landscape to identify content in the sample imagery. These volunteers would come together in participatory mapping workshops, known colloquially as “map-a-thons.”…
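The estimation step behind the map-a-thons is, at its core, straightforward: tree cover assessed visually in a sample of image plots is extrapolated to the wider region. A minimal sketch of that calculation, using made-up plot values and an assumed region size:

```python
# Illustrative only: estimate regional tree cover from visually assessed
# sample plots, as done (in far more rigorous form) in the map-a-thons.
sample_plots = [
    # fraction of each sampled plot covered by trees, as judged by a volunteer
    0.12, 0.05, 0.00, 0.30, 0.18, 0.07, 0.22,
]
region_area_km2 = 1_500_000  # hypothetical dryland region size

mean_cover = sum(sample_plots) / len(sample_plots)
estimated_forest_km2 = mean_cover * region_area_km2

print(f"Mean sampled tree cover: {mean_cover:.1%}")
print(f"Estimated tree-covered area: {estimated_forest_km2:,.0f} km²")
```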

Utilizing local landscape knowledge not only improved the map quality but also created a sense of ownership within each region. The map-a-thon participants have access to the open source tools and can now use these data and results to better engage around land use changes in their communities. Local experts, including forestry offices, can also use this easily accessible application to continue monitoring in the future.

Global Forest Watch uses medium-resolution satellites (30 meters or about 98 feet) and sophisticated algorithms to detect near-real-time deforestation in densely forested areas. The dryland tree cover maps complement Global Forest Watch by providing the capability to monitor non-dominant tree cover and small-scale, slower-moving events like degradation and restoration. Mapping forest change at this level of detail is critical both for guiding land decisions and enabling government and business actors to demonstrate their pledges are being fulfilled, even over short periods of time.

The data documented by local participants will enable scientists to do many more analyses on both natural and man-made land changes including settlements, erosion features and roads. Mapping the tree cover in drylands is just the beginning….(More)”.

How data can heal our oceans


Nishan Degnarain and Steve Adler at WEF: “We have collected more data on our oceans in the past two years than in the history of the planet.

There has been a proliferation of remote and near sensors above, on, and beneath the oceans. New low-cost micro satellites ring the earth and can record what happens below daily. Thousands of tidal buoys follow currents transmitting ocean temperature, salinity, acidity and current speed every minute. Undersea autonomous drones photograph and map the continental shelf and seabed, explore deep sea volcanic vents, and can help discover mineral and rare earth deposits.
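To make the kind of record these sensors stream a little more concrete, a single buoy reading might be modelled along the lines below. The schema is a hypothetical illustration mirroring the properties mentioned above, not any real buoy network’s format.

```python
# Hypothetical schema for one tidal-buoy reading; fields mirror the
# properties mentioned above (temperature, salinity, acidity, current speed).
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class BuoyReading:
    buoy_id: str
    timestamp: datetime
    temperature_c: float      # sea surface temperature, °C
    salinity_psu: float       # practical salinity units
    ph: float                 # acidity
    current_speed_ms: float   # metres per second

reading = BuoyReading(
    buoy_id="buoy-0417",
    timestamp=datetime.now(timezone.utc),
    temperature_c=18.4,
    salinity_psu=35.1,
    ph=8.05,
    current_speed_ms=0.6,
)
print(reading)
```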

The volume, diversity and frequency of data are increasing as the cost of sensors falls, new low-cost satellites are launched, and an emerging drone sector begins to offer new insights into our oceans. In addition, new processing capabilities are enhancing the value we receive from such data on the biological, physical and chemical properties of our oceans.

Yet it is not enough.

We need much more data at higher frequency, quality, and variety to understand our oceans to the degree we already understand the land. Less than 5% of the oceans are comprehensively monitored. We need more data collection capacity to unlock the sustainable development potential of the oceans and protect critical ecosystems.

More data from satellites will help identify illegal fishing activity, track plastic pollution, and detect whales and prevent vessel collisions. More data will help speed the placement of offshore wind and tide farms, improve vessel telematics, develop smart aquaculture, protect urban coastal zones, and enhance coastal tourism.

Unlocking the ocean data market

But we’re not there yet.

This new wave of data innovation is constrained by inadequate data supply, demand, and governance. The supply of existing ocean data is locked by paper records, old formats, proprietary archives, inadequate infrastructure, and scarce ocean data skills and capacity.

The market for ocean observation is driven by science and science isn’t adequately funded.

To unlock future commercial potential, new financing mechanisms are needed to create market demand that will stimulate greater investments in new ocean data collection, innovation and capacity.

Efforts such as the Financial Stability Board’s Task Force on Climate-related Financial Disclosures have gone some way to raise awareness and create demand for such ocean-related climate risk data.

Much data that is produced is collected by nations, universities and research organizations, NGOs, and the private sector, but only a small percentage is Open Data and widely available.

Data creates more value when it is widely utilized and well governed. Helping organize to improve data infrastructure, quality, integrity, and availability is a requirement for achieving new ocean data-driven business models and markets. New Ocean Data Governance models, standards, platforms, and skills are urgently needed to stimulate new market demand for innovation and sustainable development….(More)”.

Our digital journey: moving to electronic questionnaires


Jason Bradbury at the Office for National Statistics (UK): “Earlier this year we shared news about the Retail Sales Inquiry (RSI) – the monthly national survey of shops and shopping – moving to digital data collection. ONS is transforming the way it collects data, improving the speed and quality of the information while reducing the burden on respondents. The past six months have seen a significant expansion of our digital survey availability. In January, 5,000 retailers were invited to sign up for an account giving them the option to send us their data for one of our business surveys digitally.

Electronic questionnaires

The take-up of the electronic questionnaire (eQ) was incredible, with over 80% of respondents choosing to supply their information for the RSI online. Over the last six months, we have continued to see the appetite for online completion grow. Each month, an average of 300 new businesses opt to return their Retail Sales data digitally, with many eager to move to digital methods for the other surveys they are required to complete….

Moving data collection from the phone and paper to online has been a huge success, delivering improved quality and an ‘easy to access’ online experience. And when thinking about the impact this change could have had on our core function as a statistical body, I am delighted to share that we have not witnessed any statistical issues and all of our outputs have been compiled and produced as normal.

Put simply, the easier it is for someone to complete our surveys, the more likely they are to take the time to provide more detailed, accurate data. It is worth noting that once a business has an account with ONS, they often send data back to us more quickly. The earlier and more detailed responses allow us more time to quality assure (QA) the information and reduce the need to re-contact businesses.

Our digital journey

The digital world is a fast-paced and ever-changing environment. We have found it challenging to match this pace in both our team’s skill base and our digital service. We are in the process of up-skilling our teams and updating our data collection service and infrastructure. This will enable us to improve our data collection service and move even more surveys online….(More)”

The hidden costs of open data


Sara Friedman at GCN: “As more local governments open their data for public use, the emphasis is often on “free” — using open source tools to freely share already-created government datasets, often with pro bono help from outside groups. But according to a new report, there are unforeseen costs when it comes to pushing government datasets out on public-facing platforms — especially when geospatial data is involved.

The research, led by University of Waterloo professor Peter A. Johnson and McGill University professor Renee Sieber, was conducted as part of the Geothink.ca partnership research grant and explores the direct and indirect costs of open data.

Costs related to data collection, publishing, data sharing, maintenance and updates are increasingly driving governments to third-party providers to help with hosting, standardization and analytical tools for data inspection, the researchers found. GIS implementation also has associated costs to train staff, develop standards, create valuations for geospatial data, connect data to various user communities and get feedback on challenges.

Due to these direct costs, some governments are more likely to avoid opening datasets that need complex assessment or anonymization techniques for GIS concerns. Johnson and Sieber identified four areas where the benefits of open geospatial data can generate unexpected costs.

First, open data can create a “smoke and mirrors” situation where insufficient resources are put toward deploying open data for government use. Users then experience “transaction costs” when working with specialist data formats that require additional skills, training and software to use.
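To see what that “transaction cost” looks like in practice, even a comparatively open geospatial format such as GeoJSON assumes some familiarity with its nested structure before a user can extract something as simple as feature names and coordinates. The dataset below is a made-up example.

```python
# Illustrative GeoJSON snippet and the minimal parsing needed to use it;
# the data is invented, but the structure follows the GeoJSON convention.
import json

raw = """{
  "type": "FeatureCollection",
  "features": [
    {"type": "Feature",
     "properties": {"name": "Public library"},
     "geometry": {"type": "Point", "coordinates": [-79.38, 43.65]}},
    {"type": "Feature",
     "properties": {"name": "Community centre"},
     "geometry": {"type": "Point", "coordinates": [-79.41, 43.67]}}
  ]
}"""

collection = json.loads(raw)
for feature in collection["features"]:
    name = feature["properties"]["name"]
    lon, lat = feature["geometry"]["coordinates"]  # GeoJSON stores [lon, lat]
    print(f"{name}: ({lat}, {lon})")
```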

Second, the level of investment and quality of open data can lead to “material benefits and social privilege” for communities that devote resources to providing more comprehensive platforms.

While there are some open source data platforms, the majority of solutions are proprietary and charged on a pro-rata basis, which can present a challenge for cities with larger, poor populations compared to smaller, wealthier cities. Issues also arise when governments try to combine their data sets, leading to increased costs to reconcile problems.

The third problem revolves around the private sector pushing for the release of data sets that can benefit their business objectives. Companies could push for the release of high-value sets, such as real-time transit data, to help with their product development goals. This can divert attention from low-value sets, such as those detailing municipal services or installations, that could have a bigger impact on residents “from a civil society perspective.”

If communities decide to release the low-value sets first, Johnson and Sieber think the focus can then be shifted to high-value sets that can help recoup the costs of developing the platforms.

Lastly, the report finds inadvertent consequences could result from tying open data resources to private-sector companies. Public-private open data partnerships could lead to infrastructure problems that prevent data from being widely shared, and help private companies in developing their bids for public services….

Johnson and Sieber encourage communities to ask the following questions before investing in open data:

  1. Who are the intended constituents for this open data?
  2. What is the purpose behind the structure for providing this data set?
  3. Does this data enable the intended users to meet their goals?
  4. How are privacy concerns addressed?
  5. Who sets the priorities for release and updates?…(More)”
