The Values of Public Library in Promoting an Open Government Environment


Djoko Sigit Sayogo et al. in the Proceedings of the 17th International Digital Government Research Conference on Digital Government Research: “Public participation has been less than ideal in many government-implemented ICT initiatives. Extant studies highlight the importance of public libraries as an intermediary between citizens and government. This study evaluates the role of public libraries in mediating the relationship between citizens and government in support of an open government environment. Using data from a national survey of “Library and Technology Use” conducted by Pew Internet in 2015, we test whether a citizen’s perception of the public values provided by public libraries influences the likelihood of the citizen’s engagement within open-government environment contexts. The results signify a significant relationship between certain public values provided by public libraries and the propensity of citizens to engage government in an online environment. Our findings further indicate that varying public values generate different results in regard to the way citizens are stimulated to use public libraries to engage with government online. These findings imply that programs designed and developed to take into account a variety of values are more likely to effectively induce citizen engagement in an open government environment through the mediation of public libraries….(More)”

Big Crisis Data: Social Media in Disasters and Time-Critical Situations


Book by Carlos Castillo: “Social media is an invaluable source of time-critical information during a crisis. However, emergency response and humanitarian relief organizations that would like to use this information struggle with an avalanche of social media messages that exceeds human capacity to process. Emergency managers, decision makers, and affected communities can make sense of social media through a combination of machine computation and human compassion – expressed by thousands of digital volunteers who publish, process, and summarize potentially life-saving information. This book brings together computational methods from many disciplines: natural language processing, semantic technologies, data mining, machine learning, network analysis, human-computer interaction, and information visualization, focusing on methods that are commonly used for processing social media messages under time-critical constraints, and offering more than 500 references to in-depth information…(More)”

Enhancing Public Innovation by Transforming Public Governance


Book edited by Jacob Torfing and Peter Triantafillou: “Rising and changing citizen expectations, dire fiscal constraints, unfulfilled political aspirations, high professional ambitions, and a growing number of stubborn societal problems have generated an increasing demand for innovation of public policies and services. Drawing on the latest research, this book examines how current systems of public governance can be transformed in order to enhance public innovation. It scrutinizes the need for new roles and public sector reforms, and analyzes how the gradual transition towards New Public Governance can stimulate the exploration and exploitation of new and bold ideas in the public sector. It argues that the key to public innovation lies in combining and balancing elements from Classic Public Administration, New Public Management and New Public Governance, and theorizes how it can be enhanced by multi-actor collaboration for the benefit of public officials, private stakeholders, citizens, and society at large.

  • Examines the relationship between different styles of public governance and public innovation
  • Provides case studies and evidence-based mappings of the innovation outputs of concrete public projects, initiatives and steering mechanisms
  • Analyses the specific role of key actor groups in and around the public sector for spurring public innovation
  • Explores the diversity of public innovation in different countries around the world…(More)”

What Algorithmic Injustice Looks Like in Real Life


Julia Angwin, Jeff Larson, Surya Mattu & Lauren Kirchner at Pacific Standard: “Courtrooms across the nation are using computer programs to predict who will be a future criminal. The programs help inform decisions on everything from bail to sentencing. They are meant to make the criminal justice system fairer — and to weed out human biases.

ProPublica tested one such program and found that it’s often wrong — and biased against blacks.

We looked at the risk scores the program spit out for more than 7,000 people arrested in Broward County, Florida, in 2013 and 2014. We checked to see how many defendants were charged with new crimes over the next two years — the same benchmark used by the creators of the algorithm. Our analysis showed:

  • The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.
  • White defendants were mislabeled as low risk more often than black defendants.
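The comparison described above amounts to computing error rates separately for each racial group: the false positive rate (flagged high risk but not charged again within two years) and the false negative rate (labeled low risk but charged again). As a rough illustration only, here is a minimal Python sketch of that kind of check; the file and column names (score_text, race, two_year_recid) are assumptions loosely modeled on the publicly released COMPAS data, not ProPublica’s actual analysis code.

```python
# A minimal sketch, not ProPublica's analysis code; file and column
# names are assumptions modeled on the public COMPAS dataset.
import pandas as pd

df = pd.read_csv("compas-scores-two-years.csv")  # hypothetical local copy

# Treat "High"/"Medium" risk scores as a positive ("future criminal") prediction.
df["predicted_high_risk"] = df["score_text"].isin(["High", "Medium"])

for race, group in df.groupby("race"):
    no_recid = group[group["two_year_recid"] == 0]
    recid = group[group["two_year_recid"] == 1]
    fpr = no_recid["predicted_high_risk"].mean()   # flagged risky, did not reoffend
    fnr = 1 - recid["predicted_high_risk"].mean()  # labeled low risk, did reoffend
    print(f"{race}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```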

What does that look like in real life? Here are five comparisons of defendants — one black and one white — who were charged with similar offenses but got very different scores.

Two Shoplifting Arrests

James Rivelli, 53: In August 2014, Rivelli allegedly shoplifted seven boxes of Crest Whitestrips from a CVS. An employee called the police. When the cops found Rivelli and pulled him over, they found the Whitestrips as well as heroin and drug paraphernalia in his car. He was charged with two felony counts and four misdemeanors for grand theft, drug possession, and driving with a suspended license and expired tags.

Past offenses: He had been charged with felony aggravated assault for domestic violence in 1996, felony grand theft also in 1996, and a misdemeanor theft in 1998. He also says that he was incarcerated in Massachusetts for felony drug trafficking.

COMPAS score: 3 — low

Subsequent offense: In April 2015, he was charged with two felony counts of grand theft in the 3rd degree for shoplifting about $1,000 worth of tools from a Home Depot.

He says: Rivelli says his crimes were fueled by drug use and he is now sober. “I’m surprised [my risk score] is so low,” Rivelli said in an interview in his mother’s apartment in April. “I spent five years in state prison in Massachusetts.”…(More)

While governments talk about smart cities, it’s citizens who create them


Carlo Ratti at the Conversation: “The Australian government recently released an ambitious Smart Cities Plan, which suggests that cities should be first and foremost for people:

If our cities are to continue to meet their residents’ needs, it is essential for people to engage and participate in planning and policy decisions that have an impact on their lives.

Such statements are a good starting point – and should probably become central to Australia’s implementation efforts. A lot of knowledge has been collected over the past decade from successful and failed smart cities experiments all over the world; reflecting on them could provide useful information for the Australian government as it launches its national plan.

What is a smart city?

But, before embarking on such a review, it would help to start from a definition of “smart city”.

The term has been used and abused in recent years, so much so that today it has lost meaning. It is often used to encompass disparate applications: we hear people talk and write about “smart city” when they refer to anything from citizen engagement to Zipcar, from open data to Airbnb, from smart biking to broadband.

Where to start with a definition? It is a truism to say the internet has transformed our lives over the past 20 years. Everything in the way we work, meet, mate and so on is very different today than it was just a few decades ago, thanks to a network of connectivity that now encompasses most people on the planet.

In a similar way, we are today at the beginning of a new technological revolution: the internet is entering physical space – the very space of our cities – and is becoming the Internet of Things; it is opening the door to a new world of applications that, as with the first wave of the internet, can incorporate many domains….

What should governments do?

In the above technological context, what should governments do? Over the past few years, the first wave of smart city applications followed technological excitement.

For instance, some of Korea’s early experiments such as Songdo City were engineered by the likes of Cisco, with technology deployment assisted by top-down policy directives.

In a similar way, in 2010, Rio de Janeiro launched the Integrated Centre of Command and Control, engineered by IBM. It’s a large control room for the city, which collects real-time information from cameras and myriad sensors suffused in the urban fabric.

Such approaches revealed many shortcomings, most notably the lack of civic engagement. It is as if they thought of the city simply as a “computer in open air”. These approaches led to several backlashes in the research and academic community.

A more interesting lesson can come from the US, where the focus is more on developing a rich Internet of Things innovation ecosystem. There are many initiatives fostering spaces – digital and physical – for people to come together and collaborate on urban and civic innovations….

That isn’t to say that governments should take a completely hands-off approach to urban development. Governments certainly have an important role to play. This includes supporting academic research and promoting applications in fields that might be less appealing to venture capital – unglamorous but nonetheless crucial domains such as municipal waste or water services.

The public sector can also promote the use of open platforms and standards in such projects, which would speed up adoption in cities worldwide.

Still, the overarching goal should always be to focus on citizens. They are in the best position to determine how to transform their cities and to make decisions that will have – as the Australian Smart Cities Plan puts it – “an impact on their lives”….(more)”

Combatting Police Discrimination in the Age of Big Data


Paper by Sharad Goel, Maya Perelman, Ravi Shroff and David Alan Sklansky: “The exponential growth of available information about routine police activities offers new opportunities to improve the fairness and effectiveness of police practices. We illustrate the point by showing how a particular kind of calculation made possible by modern, large-scale datasets — determining the likelihood that stopping and frisking a particular pedestrian will result in the discovery of contraband or other evidence of criminal activity — could be used to reduce the racially disparate impact of pedestrian searches and to increase their effectiveness. For tools of this kind to achieve their full potential in improving policing, though, the legal system will need to adapt. One important change would be to understand police tactics such as investigatory stops of pedestrians or motorists as programs, not as isolated occurrences. Beyond that, the judiciary will need to grow more comfortable with statistical proof of discriminatory policing, and the police will need to be more receptive to the assistance that algorithms can provide in reducing bias….(More)”
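The calculation at the heart of the paper, estimating ahead of time how likely a stop is to recover contraband, is essentially a supervised prediction problem over historical stop records. Below is a minimal sketch of one way such a “hit rate” model could look; the data file, feature names, and threshold are illustrative assumptions, not the authors’ actual method.

```python
# A minimal sketch of a stop-level "hit rate" model; the data file,
# feature names, and 5% floor are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression

stops = pd.read_csv("stop_records.csv")  # hypothetical historical stop data
# Features assumed numeric or 0/1 indicators; label is whether contraband was found.
features = ["suspect_age", "hour_of_day", "suspicious_object", "furtive_movement"]

model = LogisticRegression(max_iter=1000)
model.fit(stops[features], stops["contraband_found"])

# Predicted probability that each stop recovers contraband. A department
# could review or decline stops whose ex ante hit rate falls below a floor.
stops["hit_rate"] = model.predict_proba(stops[features])[:, 1]
low_value_stops = stops[stops["hit_rate"] < 0.05]
print(f"{len(low_value_stops)} stops fall below the 5% hit-rate floor")
```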

AI lawyer speeds up legal research


Springwise: “Lawyers have to maintain and recall vast amounts of information in the form of legislation, case law and secondary cases, and they spend up to a fifth of their time on legal research. But an AI app called Ross Intelligence could soon help with that. The program, which is built on IBM’s supercomputer Watson, uses natural language processing to answer legal questions in a fraction of the time that it would take a legal assistant.

To begin, legal professionals can ask Ross a question as they would ask a colleague. Then the program reads through the entire body of law and returns a cited answer as well as topical readings. Ross also monitors the law constantly to keep the user updated about changes that might affect their case, so they don’t need to sift through the mass of legal news….(More)”
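Ross’s internals are proprietary (and built on Watson), but the general shape of retrieval-based question answering over a legal corpus can be sketched simply: index the documents, score them against the question, and return the best match with a citation. The toy Python example below does this with TF-IDF similarity; the two “cases” are invented placeholders, and a real system would add citation extraction, ranking over millions of documents, and change alerts.

```python
# A toy retrieval-QA sketch, in no way Ross's actual (proprietary) pipeline;
# the case texts below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

cases = {
    "Smith v. Jones (2012)": "A contract requires offer, acceptance and consideration...",
    "Doe v. Acme Corp. (2015)": "An employer may be vicariously liable for the acts of its employees...",
}

vectorizer = TfidfVectorizer(stop_words="english")
case_matrix = vectorizer.fit_transform(cases.values())

def answer(question: str):
    """Return the best-matching case (the 'citation') and its similarity score."""
    q = vectorizer.transform([question])
    scores = cosine_similarity(q, case_matrix).ravel()
    best = scores.argmax()
    return list(cases)[best], float(scores[best])

print(answer("When is an employer liable for an employee's actions?"))
```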

Private Data and the Public Good


Gideon Mann‘s remarks on the occasion of the Robert Khan distinguished lecture at The City College of New York on 5/22/16 concern the broader need for computer science to engage with the real world, and the risks and opportunities of a specific aspect of that relationship. Right now, a key aspect of this relationship is being built around the risks and opportunities of the emerging role of data.

Ultimately, I believe that these relationships, between computer science and the real world, between data science and real problems, hold the promise to vastly increase our public welfare. And today, we, the people in this room, have a unique opportunity to debate and define a more moral data economy….

The hybrid research model proposes something different. The hybrid research model embeds, as it were, researchers as practitioners. The thought was always that you would be going about your regular run of business, would face a need to innovate to solve a crucial problem, and would do something novel. At that point, you might choose to work some extra time and publish a paper explaining your innovation. In practice, this model rarely works as expected. Tight deadlines mean the innovation that people do in their normal progress of business is incremental.

This model separated research from scientific publication, and shortens the time window of research to what can be realized within a few years. For me, this always felt like a tremendous loss, with respect to the older so-called “ivory tower” research model. It didn’t seem at all clear how this kind of model would produce the sea change of thought engendered by Shannon’s work, nor did it seem that Claude Shannon would ever want to work there. This kind of environment would never support the freestanding wonder, like the robot mouse that Shannon worked on. Moreover, I always believed that crucial to research is publication and participation in the scientific community. Without this engagement, it feels like something different — innovation perhaps.

It is clear that the monopolistic environment that enabled AT&T to support this ivory tower research doesn’t exist anymore.

Now, the hybrid research model was one model of research at Google, but there is another model as well: the moonshot model as exemplified by Google X. Google X brought together focused research teams to drive research and development around a particular project — Google Glass and the self-driving car being two notable examples. Here the focus isn’t research, but building a new product, with research as potentially a crucial blocking issue. Since the goal of Google X is directly to develop a new product, by definition they don’t publish papers along the way, but they’re not as tied to short-term deliverables as the rest of Google is. However, they are again decidedly un-Bell-Labs-like — a secretive, tightly focused, non-publishing group. DeepMind is a similarly constituted initiative — working, for example, on a best-in-the-world Go-playing algorithm, with publications happening sparingly.

Unfortunately, both of these approaches, the hybrid research model and the moonshot model, stack the deck towards a particular kind of research — research that leads to relatively short-term products that generate corporate revenue. While this kind of research is good for society, it isn’t the only kind of research that we need. We urgently need research that is long-term, and that is undertaken even without a clear local financial impact. In some sense this is a “tragedy of the commons”, where a shared public good (the commons) is not supported because everyone can benefit from it without giving back. Academic research is thus a non-rival, non-excludable good, and thus reasonably will be underfunded. In certain cases, this takes on an ethical dimension — particularly in health care, where the choice of what diseases to study and address has a tremendous potential to affect human life. Should we research heart disease or malaria? This decision makes a huge impact on global human health, but is vastly informed by the potential profit from each of these various medicines….

Private Data means research is out of reach

The larger point that I want to make, is that in the absence of places where long-term research can be done in industry, academia has a tremendous potential opportunity. Unfortunately, it is actually quite difficult to do the work that needs to be done in academia, since many of the resources needed to push the state of the art are only found in industry: in particular data.

Of course, academia also lacks machine resources, but this is a simpler problem to fix — it’s a matter of money. Resources from the government could go to enabling research groups to build their own data centers or to acquire computational resources from the market, e.g. Amazon. This is aided by the compute philanthropy that Google and Microsoft practice, granting compute cycles to academic organizations.

But the data problem is much harder to address. The data being collected and generated at private companies could enable amazing discoveries and research, but is impossible for academics to access. The lack of access to private data from companies actually has much more significant effects than inhibiting research. In particular, the consumer-level data collected by social networks and internet companies could do much more than ad targeting.

Just for public health — suicide prevention, addiction counseling, mental health monitoring — there is enormous potential in the use of our online behavior to aid the most needy, and academia and non-profits are set-up to enable this work, while companies are not.

To give one example, anorexia and eating disorders are vicious killers. Twenty million women and 10 million men suffer from a clinically significant eating disorder at some time in their life, and sufferers of eating disorders have the highest mortality rate of any mental health disorder — with a jaw-dropping estimated mortality rate of 10%, both directly from injuries sustained by the disorder and by suicide resulting from the disorder.

Eating disorders are particular in that sufferers often seek out confirmatory information: blogs, images and pictures that glorify and validate what sufferers see as “lifestyle” choices. Browsing behavior that seeks out images and guidance on how to starve yourself is a key indicator that someone is suffering. Tumblr, Pinterest and Instagram are places where people host and seek out this information. Tumblr has tried to help address this severe mental health issue by banning blogs that advocate for self-harm and by adding PSA announcements to searches for queries related to anorexia. But clearly this is not the be-all and end-all of work that could be done to detect and assist people at risk of dying from eating disorders. Moreover, this data could also help us understand the nature of those disorders themselves…..

There is probably a role for a data ombudsman within private organizations — someone to protect the interests of the public’s data inside an organization, somewhat like a “public editor” at a newspaper. They would be there to protect and articulate the interests of the public, which probably cuts both ways: making sure a company’s data is used for public good where appropriate, and making sure the public’s “right” to privacy is appropriately safeguarded (and probably making sure the public is informed when their data is compromised).

Next, we need a platform to enable collaboration around social good between companies, and between companies and academics. This platform would give trusted users access to a wide variety of data and speed the process of research.

Finally, I wonder if there is a way that government could support research sabbaticals inside of companies. Clearly, the opportunities for this research far outstrip what is currently being done…(more)”

Foundation Transparency: Game Over?


Brad Smith at Glass Pockets (Foundation Center): “The tranquil world of America’s foundations is about to be shaken, but if you read the Center for Effective Philanthropy’s (CEP) new study — Sharing What Matters, Foundation Transparency — you would never know it.

Don’t get me wrong. That study, like everything CEP produces, is carefully researched, insightful and thoroughly professional. But it misses the single biggest change in foundation transparency in decades: the imminent release by the Internal Revenue Service of foundation 990-PF (and 990) tax returns as machine-readable open data.

Clara Miller, President of the Heron Foundation, writes eloquently in her manifesto, Building a Foundation for the 21st Century: “…the private foundation model was designed to be protective and separate, much like a terrarium.”

Terrariums, of course, are highly “curated” environments over which their creators have complete control. The CEP study proves that point, to the extent that much of the study consists of interviews with foundation leaders and reviews of their websites, as if transparency were a kind of optional endeavor in which foundations may choose whether, and to what degree, to participate.

To be fair, CEP also interviewed the grantees of various foundations (sometimes referred to as “partners”), which helps convey the reality that foundations have stakeholders beyond their four walls. However, the terrarium metaphor is about to become far more relevant as the release of 990 tax returns as open data will literally make it possible for anyone to look right through those glass walls to the curated foundation world within.

What Is Open Data?

It is safe to say that most foundation leaders and a fair majority of their staff do not understand what open data really is. Open data is free, yes, but more importantly it is digital and machine-readable. This means it can be consumed in enormous volumes at lightning speed, directly by computers.

Once consumed, open data can be tagged, sorted, indexed and searched using statistical methods to make obvious comparisons while discovering previously undetected correlations. Anyone with a computer, some coding skills and a hard drive or cloud storage can access open data. In today’s world, a lot of people meet those requirements, and they are free to do whatever they please with your information once it is, as open data enthusiasts like to say, “in the wild.”

What is the Internal Revenue Service Releasing?

Thanks to the Aspen Institute’s leadership of a joint effort – funded by foundations and including Foundation Center, GuideStar, the National Center for Charitable Statistics, the Johns Hopkins Center for Civil Society Studies, and others – the IRS has started to make some 1,000,000 Form 990s and 40,000 Form 990-PFs available as machine-readable open data.

Previously, all Form 990s had been released as image (TIFF) files, essentially pictures, making it both time-consuming and expensive to extract useful data from them. Credit where credit is due: a kick in the butt in the form of a lawsuit from open data crusader Carl Malamud helped speed the process along.

The current test phase includes only those tax returns that were digitally filed by nonprofits and community foundations (990s) and private foundations (990PFs). Over time, the IRS will phase in a mandatory digital filing requirement for all Form 990s, and the intent is to release them all as open data. In other words, that which is born digital will be opened up to the public in digital form. Because of variations in the 990 forms, getting the information from them into a database will still require some technical expertise, but will be far more feasible and faster than ever before.
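To make concrete what “machine-readable” buys researchers, here is a minimal sketch of pulling two fields out of a single e-filed Form 990 XML return. The filename is hypothetical, and the element names are meant to be illustrative of the IRS e-file schema rather than a complete or authoritative parser.

```python
# A minimal sketch of reading one e-filed 990 return as open data.
# The filename is hypothetical; tag names are illustrative of the
# IRS e-file XML schema, not a complete parser.
import xml.etree.ElementTree as ET

NS = {"irs": "http://www.irs.gov/efile"}  # namespace used by e-filed returns

tree = ET.parse("example_990_filing.xml")  # a single downloaded filing
root = tree.getroot()

name = root.find(".//irs:BusinessName/irs:BusinessNameLine1Txt", NS)
revenue = root.find(".//irs:TotalRevenueAmt", NS)

print("Filer:", name.text if name is not None else "n/a")
print("Total revenue:", revenue.text if revenue is not None else "n/a")
```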

The Good

Organizations like Foundation Center have built expensive infrastructure to turn years of 990 tax returns into information that can be used by nonprofits looking for funding, by researchers trying to understand the role of foundations, and by foundations themselves seeking to benchmark against their peers; their work will be transformed.

Work will shift away from the mechanics of capturing and processing the data to higher-level analysis and visualization that stimulate the generation and sharing of new insights and knowledge. This will fuel greater collaboration between peer organizations, innovation, the merging of previously disparate bodies of data, better philanthropy, and a stronger social sector… (more)


How Open Data Is Creating New Opportunities in the Public Sector


Martin Yan at GovTech: “Increased availability of open data in turn increases the ease with which citizens and their governments can collaborate, as well as equipping citizens to be active in identifying and addressing issues themselves. Technology developers are able to explore innovative uses of open data in combination with digital tools, new apps or other products that can tackle recognized inefficiencies. Currently, both the public and private sectors are teeming with such apps and projects….

Open data has proven to be a catalyst for the creation of new tools across industries and public-sector uses. Examples of a few successful projects include:

  • Citymapper — The popular real-time public transport app uses open data from Apple, Google, Cyclestreets, OpenStreetMaps and more sources to help citizens navigate cities. Features include A-to-B trip planning with ETA, real-time departures, bike routing, transit maps, public transport line status, real-time disruption alerts and integration with Uber.
  • Dataverse Project — This project from Harvard’s Institute for Quantitative Social Science makes it easy to share, explore and analyze research data. By simplifying access to this data, the project allows researchers to replicate others’ work to the benefit of all.
  • Liveplasma — An interactive search engine, Liveplasma lets users listen to music and view a web-like visualization of similar songs and artists, seeing how they are related and enabling discovery. Content from YouTube is streamed into the data visualizations.
  • Provenance — The England-based online platform lets users trace the origin and history of a product, also providing its manufacturing information. The mission is to encourage transparency in the practices of the corporations that produce the products we all use.

These examples demonstrate open data’s reach, value and impact well beyond the public sector. As open data continues to be put to wider use, the results will not be limited to increased efficiency and reduced wasteful spending in government, but will also create economic growth and jobs due to the products and services using the information as a foundation.

However, in the end, it won’t be the data alone that solves issues. Rather, it will be dependent on individual citizens, developers and organizations to see the possibilities, take up the call to arms and use this available data to introduce changes that make our world better….(More)”