The Three Worlds of Governance: Arguments for a Parsimonious Theory of Quality of Government.


New Working Paper by Bo Rothstein for the Quality of Government Institute: “It is necessary to conceptualize and provide better measures of good governance because, in contrast to democratization, empirical studies show that it has strong positive effects on measures of human well-being, social trust, life satisfaction, peace and political legitimacy. A central problem is that the term “governance” is conceptualized differently in three main approaches to governance, which has led to much confusion. To avoid this, the term quality of government (QoG) is preferred.
This paper argues for a parsimonious conceptualization of QoG built on the “Rawls-Machiavelli programme”. This is a combination of the Rawlsian understanding of what should be seen as a just political order and the empirical strategy used by Machiavelli for establishing what it is possible to implement. It is argued that complex definitions are impossible to operationalize and that such a strategy would leave political science without a proper conceptualization, as well as without measures, of the part of the state that is most important for humans’ well-being and political legitimacy. The theory proposed is that impartiality in the exercise of public power should be the basic norm for how QoG should be defined. The advantage of this strategy is that it does not include in the definition of QoG what we want to explain (efficiency, prosperity, administrative capacity and other “good outcomes”) and that recent empirical research shows that this theory can be operationalized and used to measure QoG in ways that have the predicted outcomes.”

City Data: Big, Open and Linked


Working Paper by Mark S. Fox (University of Toronto): “Cities are moving towards policymaking based on data. They are publishing data using Open Data standards, linking data from disparate sources, allowing the crowd to update their data with smartphone apps that use Open APIs, and applying “Big Data” techniques to discover relationships that lead to greater efficiencies.
One Big City Data example is from New York City (Mayer-Schönberger & Cukier, 2013). Building owners were illegally converting their buildings into rooming houses that held 10 times the number of people they were designed for. These buildings posed a number of problems, including fire hazards, drugs, crime, disease and pest infestations. There are over 900,000 properties in New York City, but only 200 inspectors, who received over 25,000 illegal-conversion complaints per year. The challenge was to distinguish nuisance complaints from those worth investigating; under the existing approach, only 13% of inspections resulted in vacate orders.
New York’s Analytics team created a dataset that combined data from 19 agencies including buildings, preservation, police, fire, tax, and building permits. By combining data analysis with expertise gleaned from inspectors (e.g., buildings that recently received a building permit were less likely to be a problem as they were being well maintained), the team was able to develop a rating system for complaints. Based on their analysis of this data, they were able to rate complaints such that in 70% of their visits, inspectors issued vacate orders, a fivefold increase in efficiency…
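The paper does not reproduce the city's actual model, but the general pattern it describes (train a classifier on past complaints labelled by whether the inspection ended in a vacate order, then rank incoming complaints by predicted risk) can be sketched in a few lines of Python with scikit-learn. This is a minimal illustration, not the NYC team's system; all features and labels below are synthetic placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-ins for the kinds of signals the NYC team combined
# (tax arrears, recent building permits, prior fire/police incidents,
# building age). Real features would come from the merged agency datasets.
rng = np.random.default_rng(0)
X = rng.random((1000, 4))                   # one row per historical complaint
y = (X[:, 0] + X[:, 2] > 1.1).astype(int)   # 1 = inspection ended in a vacate order

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Score incoming complaints and send inspectors to the highest-risk ones first.
new_complaints = rng.random((25, 4))
risk = model.predict_proba(new_complaints)[:, 1]
priority = np.argsort(risk)[::-1]
print("Inspect first:", priority[:5])
```

Ranking complaints by predicted probability, rather than issuing a hard yes/no prediction, is what lets a city with only 200 inspectors direct them to the complaints most likely to warrant a vacate order.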
This paper provides an introduction to the concepts that underlie Big City Data. It explains the concepts of Open, Unified, Linked and Grounded data that lie at the heart of the Semantic Web. It then builds on this by discussing Data Analytics, which includes Statistics, Pattern Recognition and Machine Learning. Finally, we discuss Big Data as the extension of Data Analytics to the Cloud, where massive amounts of computing power and storage are available for processing large data sets. We use city data to illustrate each.”
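To make the Linked Data idea concrete, here is a minimal sketch, not taken from Fox's paper, of how records from two different agency datasets can be joined by sharing URIs, written in Python with the rdflib library. The namespace, class names and identifiers are invented for illustration:

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical URI scheme and vocabulary; a real deployment would reuse the
# city's own identifiers and published ontologies.
CITY = Namespace("http://example.org/city/")
g = Graph()

building = CITY["building/123-main-st"]     # from the buildings dataset
complaint = CITY["complaint/2014-0001"]     # from the complaints dataset

g.add((building, RDF.type, CITY.Building))
g.add((building, CITY.address, Literal("123 Main St")))
g.add((complaint, RDF.type, CITY.IllegalConversionComplaint))
g.add((complaint, CITY.about, building))    # the link across datasets

print(g.serialize(format="turtle"))
```

Because both records point at the same building URI, a query over the merged graph can traverse from a complaint to everything any agency has recorded about that address, which is the kind of cross-agency join the New York example above relies on.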