e-Consultation Platforms: Generating or Just Recycling Ideas?


Chapter by Efthimios Tambouris, Anastasia Migotzidou, and Konstantinos Tarabanis in Electronic Participation: “A number of governments worldwide employ web-based e-consultation platforms to enable stakeholders to comment on draft legislation. Stakeholders’ input includes arguing in favour of or against the proposed legislation as well as proposing alternative ideas. In this paper, we empirically investigate the relationship between the volume of contributions in these platforms and the number of new ideas that are generated. This enables us to determine whether participants in such platforms keep generating new ideas or just recycle a finite number of ideas. We capitalised on argumentation models to code and analyse a large number of draft law consultations published in opengov.gr, the official e-consultation platform for draft legislation in Greece. Our results suggest that as the number of posts grows, the number of new ideas continues to increase. The results of this study improve our understanding of the dynamics of these consultations and enable us to design better platforms….(More)”


Policy makers’ perceptions on the transformational effect of Web 2.0 technologies on public services delivery


Paper by Manuel Pedro Rodríguez Bolívar at Electronic Commerce Research: “The growing participation in social networking sites is altering the nature of social relations and changing the nature of political and public dialogue. This paper contributes to the current debate on Web 2.0 technologies and their implications for local governance, identifying the perceptions of policy makers on the use of Web 2.0 in providing public services and on the changing roles that could arise from the resulting interaction between local governments and their stakeholders. The results obtained suggest that policy makers are willing to implement Web 2.0 technologies in providing public services, but preferably under the Bureaucratic model framework, thus retaining a leading role in this implementation. The learning curve of local governments in the use of Web 2.0 technologies is a factor that could influence policy makers’ perceptions. In this respect, many research gaps are identified and further study of the question is recommended….(More)”

One way traffic: The open data initiative project and the need for an effective demand side initiative in Ghana


Paper by Frank L. K. Ohemeng and Kwaku Ofosu-Adarkwa in the Government Information Quarterly: “In recent years the necessity for governments to develop new public values of openness and transparency, and thereby increase their citizenries’ sense of inclusiveness, and their trust in and confidence about their governments, has risen to the point of urgency. The decline of trust in governments, especially in developing countries, has been unprecedented and continuous. A new paradigm that signifies a shift to citizen-driven initiatives over and above state- and market-centric ones calls for innovative thinking that requires openness in government. The need for this new synergy notwithstanding, Open Government cannot be considered truly open unless it also enhances citizen participation and engagement. The Ghana Open Data Initiative (GODI) project strives to create an open data community that will enable government (supply side) and civil society in general (demand side) to exchange data and information. We argue that the GODI is too narrowly focused on the supply side of the project, and suggest that it should generate an even platform to improve interaction between government and citizens to ensure a balance in knowledge sharing with and among all constituencies….(More)”

What factors influence transparency in US local government?


Grichawat Lowatcharin and Charles Menifield at LSE Impact Blog: “The Internet has opened a new arena for interaction between governments and citizens, as it not only provides more efficient and cooperative ways of interacting, but also more efficient service delivery, and more efficient transaction activities. …But to what extent does increased Internet access lead to higher levels of government transparency? …While we found Internet access to be a significant predictor of Internet-enabled transparency in our simplest model, this finding did not hold true in our most extensive model. This does not negate the fact that the variable is an important factor in assessing transparency levels and Internet access. … Our data shows that total land area, population density, percentage of minority residents, educational attainment, and the council-manager form of government are statistically significant predictors of Internet-enabled transparency. These findings both confirm and contradict the findings of previous researchers. For example, while the effect of education on transparency appears to be the most consistent finding in previous research, we also noted that the rural/urban (population density) dichotomy and the education variable are important factors in assessing transparency levels. Hence, as governments create strategic plans that include growth models, they should not only consider the budgetary ramifications of growth, but also the fact that educated residents want more web-based interaction with government. This finding was reinforced by a recent Census Bureau report indicating that some of the cities and counties in Florida and California had population increases greater than ten thousand persons per month during the period 2013-2014.

This article is based on the paper ‘Determinants of Internet-enabled Transparency at the Local Level: A Study of Midwestern County Web Sites’, in State and Local Government Review. (More)”
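
As a rough illustration of the kind of model behind findings like these, here is a minimal sketch in Python. It uses synthetic data and hypothetical variable names, not the authors’ dataset or specification: county characteristics are regressed on a transparency measure and the coefficients and p-values are inspected.

```python
# A toy regression (synthetic data, hypothetical variable names) of the kind used
# to test which county characteristics predict Internet-enabled transparency.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
counties = pd.DataFrame({
    "land_area": rng.uniform(100, 5000, n),        # square miles
    "pop_density": rng.lognormal(4, 1, n),         # persons per square mile
    "pct_minority": rng.uniform(0, 60, n),         # percent of residents
    "pct_bachelors": rng.uniform(10, 50, n),       # educational attainment
    "council_manager": rng.integers(0, 2, n),      # 1 = council-manager government
})
# Synthetic outcome: a transparency index loosely tied to some of the predictors.
counties["transparency"] = (
    0.5 * counties["pct_bachelors"]
    + 0.002 * counties["pop_density"]
    + 5 * counties["council_manager"]
    + rng.normal(0, 5, n)
)

model = smf.ols(
    "transparency ~ land_area + pop_density + pct_minority"
    " + pct_bachelors + council_manager",
    data=counties,
).fit()
print(model.summary())  # inspect which coefficients are statistically significant
```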

New Privacy Research Has Implications for Design and Policy


At PrivacyTech: “Try visualizing the Internet’s basic architecture. Could you draw it? What would be your mental model for it?

Let’s be more specific: Say you just purchased shoes off a website using your mobile phone at work. How would you visualize that digital process? Would a deeper knowledge of this architecture make more apparent the myriad potential privacy risks in this transaction? Or to put it another way, what would your knowledge, or lack thereof, for these architectural underpinnings reveal about your understanding of privacy and security risks?

Whether you’re a Luddite or a tech wiz, creating these mental models of the Internet is not the easiest endeavor. Just try doing so yourself.

It is an exercise, however, that several individuals underwent for new research that has instructive implications for privacy and security pros.

“So everything I do on the Internet or that other people do on the Internet is basically asking the Internet for information, and the Internet is sending us to various places where the information is and then bringing us back.” – CO1

You’d think those who have a better understanding of how the Internet works would probably have a better understanding of the privacy and security risks, right? Most likely. Paradoxically, though, a better technological understanding may have very little influence on an individual’s response to potential privacy risks.

This is what a dedicated team of researchers from Carnegie Mellon University worked to discover recently in their award-winning paper, “My Data Just Goes Everywhere”: User Mental Models of the Internet and Implications for Privacy and Security—a culmination of research from Ruogu Kang, Laura Dabbish, Nathaniel Fruchter and Sara Kiesler—all from CMU’s Human-Computer Interaction Institute and the Heinz College in Pittsburgh, PA.

“I try to browse through the terms and conditions but there’s so much there I really don’t retain it.” – T11

Presented at the CyLab Usable Privacy and Security Laboratory’s (CUPS) 11th Symposium on Usable Privacy and Security (SOUPS), their research demonstrated that even though savvy and non-savvy users of the Internet have very different perceptions of its architecture, such knowledge was not predictive of whether a user would take the necessary steps to protect their privacy online. Experience, rather, appears to play a more determinative role.

Kang, who led the team, said she was surprised by the results….(More)”

Mining Administrative Data to Spur Urban Revitalization


New paper by Ben Green presented at the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining: “After decades of urban investment dominated by sprawl and outward growth, municipal governments in the United States are responsible for the upkeep of urban neighborhoods that have not received sufficient resources or maintenance in many years. One of city governments’ biggest challenges is to revitalize decaying neighborhoods given only limited resources. In this paper, we apply data science techniques to administrative data to help the City of Memphis, Tennessee improve distressed neighborhoods. We develop new methods to efficiently identify homes in need of rehabilitation and to predict the impacts of potential investments on neighborhoods. Our analyses allow Memphis to design neighborhood-improvement strategies that generate greater impacts on communities. Since our work uses data that most US cities already collect, our models and methods are highly portable and inexpensive to implement. We also discuss the challenges we encountered while analyzing government data and deploying our tools, and highlight important steps to improve future data-driven efforts in urban policy….(More)”
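
To make the general approach concrete, here is a hedged sketch with synthetic data and hypothetical feature names; it is not the authors’ model or Memphis’s actual administrative data. A classifier is trained on past inspection outcomes and then used to rank parcels by predicted risk of distress, so limited inspection resources go to the likeliest candidates first.

```python
# A generic sketch (synthetic data, hypothetical features) of ranking parcels
# for rehabilitation outreach from administrative records; not the paper's model.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
parcels = pd.DataFrame({
    "years_tax_delinquent": rng.poisson(1.0, n),
    "code_violations_3yr": rng.poisson(0.8, n),
    "utility_shutoffs_1yr": rng.integers(0, 3, n),
    "building_age": rng.integers(5, 120, n),
})
# Synthetic label: whether a past inspection found the home distressed.
logit = (0.6 * parcels["years_tax_delinquent"]
         + 0.8 * parcels["code_violations_3yr"]
         + 0.5 * parcels["utility_shutoffs_1yr"] - 2.5)
parcels["distressed"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X = parcels.drop(columns="distressed")
y = parcels["distressed"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = GradientBoostingClassifier().fit(X_train, y_train)
# Rank unseen parcels by predicted probability of distress for targeted inspection.
scores = clf.predict_proba(X_test)[:, 1]
print(X_test.assign(risk=scores).sort_values("risk", ascending=False).head())
```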

Push, Pull, and Spill: A Transdisciplinary Case Study in Municipal Open Government


New paper by Jan Whittington et al: “Cities hold considerable information, including details about the daily lives of residents and employees, maps of critical infrastructure, and records of the officials’ internal deliberations. Cities are beginning to realize that this data has economic and other value: If done wisely, the responsible release of city information can also release greater efficiency and innovation in the public and private sector. New services are cropping up that leverage open city data to great effect.

Meanwhile, activist groups and individual residents are placing increasing pressure on state and local government to be more transparent and accountable, even as others sound an alarm over the privacy issues that inevitably attend greater data promiscuity. This takes the form of political pressure to release more information, as well as increased requests for information under the many public records acts across the country.

The result of these forces is that cities are beginning to open their data as never before. It turns out there is surprisingly little research to date into the important and growing area of municipal open data. This article is among the first sustained, cross-disciplinary assessments of an open municipal government system. We are a team of researchers in law, computer science, information science, and urban studies. We have worked hand-in-hand with the City of Seattle, Washington for the better part of a year to understand its current procedures from each disciplinary perspective. Based on this empirical work, we generate a set of recommendations to help the city manage risk latent in opening its data….(More)”

Algorithms and Bias


Q. and A. With Cynthia Dwork in the New York Times: “Algorithms have become one of the most powerful arbiters in our lives. They make decisions about the news we read, the jobs we get, the people we meet, the schools we attend and the ads we see.

Yet there is growing evidence that algorithms and other types of software can discriminate. The people who write them incorporate their biases, and algorithms often learn from human behavior, so they reflect the biases we hold. For instance, research has shown that ad-targeting algorithms have shown ads for high-paying jobs to men but not women, and ads for high-interest loans to people in low-income neighborhoods.

Cynthia Dwork, a computer scientist at Microsoft Research in Silicon Valley, is one of the leading thinkers on these issues. In an Upshot interview, which has been edited, she discussed how algorithms learn to discriminate, who’s responsible when they do, and the trade-offs between fairness and privacy.

Q: Some people have argued that algorithms eliminate discrimination because they make decisions based on data, free of human bias. Others say algorithms reflect and perpetuate human biases. What do you think?

A: Algorithms do not automatically eliminate bias. Suppose a university, with admission and rejection records dating back for decades and faced with growing numbers of applicants, decides to use a machine learning algorithm that, using the historical records, identifies candidates who are more likely to be admitted. Historical biases in the training data will be learned by the algorithm, and past discrimination will lead to future discrimination.
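
The mechanism can be seen in a toy example (entirely synthetic, not drawn from any real admissions data): historical decisions held one group to a higher bar, and a model trained on those decisions reproduces the disparity even though the group label itself is withheld, because a correlated proxy feature stands in for it.

```python
# Toy illustration (synthetic data): a model trained on biased historical
# admission decisions reproduces the bias, even with the group label withheld.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
group = rng.integers(0, 2, n)                 # 0 = group A, 1 = group B
score = rng.normal(0, 1, n)                   # qualification, same distribution for both
# Historical decisions: group B was held to a higher bar.
admitted = (score > np.where(group == 1, 0.8, 0.0)).astype(int)

# The training features omit the group label but include a correlated proxy
# (think neighborhood or feeder school) alongside the qualification score.
proxy = group + rng.normal(0, 0.5, n)
X = np.column_stack([score, proxy])
model = LogisticRegression().fit(X, admitted)

rates = pd.DataFrame({"group": group, "predicted": model.predict(X)})
print(rates.groupby("group")["predicted"].mean())  # the historical disparity persists
```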

Q: Are there examples of that happening?

A: A famous example of a system that has wrestled with bias is the resident matching program that matches graduating medical students with residency programs at hospitals. The matching could be slanted to maximize the happiness of the residency programs, or to maximize the happiness of the medical students. Prior to 1997, the match was mostly about the happiness of the programs.

This changed in 1997 in response to “a crisis of confidence concerning whether the matching algorithm was unreasonably favorable to employers at the expense of applicants, and whether applicants could ‘game the system,’ ” according to a paper by Alvin Roth and Elliott Peranson published in The American Economic Review.

Q: You have studied both privacy and algorithm design, and co-wrote a paper, “Fairness Through Awareness,” that came to some surprising conclusions about discriminatory algorithms and people’s privacy. Could you summarize those?

A: “Fairness Through Awareness” makes the observation that sometimes, in order to be fair, it is important to make use of sensitive information while carrying out the classification task. This may be a little counterintuitive: The instinct might be to hide information that could be the basis of discrimination….
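
The paper’s formal framework is not reproduced here, but the practical point, that assessing fairness requires access to the sensitive attribute, can be illustrated with a simple demographic-parity audit (a hypothetical example, not the method of the paper):

```python
# A minimal sketch of the practical point: checking whether a classifier's
# positive rates differ across groups requires the sensitive attribute itself.
# (A simple demographic-parity audit, not the "Fairness Through Awareness" framework.)
import pandas as pd

def selection_rates(predictions: pd.Series, sensitive: pd.Series) -> pd.Series:
    """Positive-prediction rate per group of the sensitive attribute."""
    return predictions.groupby(sensitive).mean()

# Hypothetical example data.
preds = pd.Series([1, 0, 1, 1, 0, 0, 1, 0])
group = pd.Series(["A", "A", "A", "A", "B", "B", "B", "B"])
rates = selection_rates(preds, group)
print(rates)                            # A: 0.75, B: 0.25
print("gap:", rates.max() - rates.min())
```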

Q: The law protects certain groups from discrimination. Is it possible to teach an algorithm to do the same?

A: This is a relatively new problem area in computer science, and there are grounds for optimism — for example, resources from the Fairness, Accountability and Transparency in Machine Learning workshop, which considers the role that machines play in consequential decisions in areas like employment, health care and policing. This is an exciting and valuable area for research. …(More)”

Designing Successful Governance Groups


The Berkman Center for Internet & Society, together with the Global Network of Internet and Society Research Centers (NoC), is pleased to announce the release of a new publication, “Designing Successful Governance Groups: Lessons for Leaders from Real-World Examples,” authored by Ryan Budish, Sarah Myers West, and Urs Gasser.

Solutions to many of the world’s most pressing governance challenges, ranging from natural resource management to the governance of the Internet, require leaders to engage in multistakeholder processes. Yet relatively little is known about how to successfully lead such processes. This paper outlines a set of useful, actionable steps for policymakers and other stakeholders charged with creating, convening, and leading governance groups. The tools for success described in this document are distilled from research published earlier this year by Berkman and the NoC, a comprehensive report entitled “Multistakeholder as Governance Groups: Observations From Case Studies,” which closely examines 12 examples of real-world governance structures from around the globe and draws new conclusions about how to successfully form and operate governance groups.

This new publication, “Designing Successful Governance Groups,” focuses on the operational recommendations drawn from the earlier case studies and their accompanying synthesis paper. It provides an actionable starting place for those interested in understanding some of the critical ingredients for successful multistakeholder governance.

At the core of this paper are three steps that have helped conveners of successful governance groups:

  1. Establish clear success criteria

  2. Set the initial framework conditions for the group

  3. Continually adjust steps 1 and 2 based on evolving contextual factors

The paper explores these three steps in greater detail and explains how they help implement one central idea: Governance groups work best when they are flexible and adaptive to new circumstances and needs and have conveners who understand how their decisions will affect the inclusiveness, transparency, accountability, and effectiveness of the group….(More)”

What We’ve Learned About Sharing Our Data Analysis


Jeremy Singer-Vine at Source: “Last Friday morning, Jessica Garrison, Ken Bensinger, and I published a BuzzFeed News investigation highlighting the ease with which American employers have exploited and abused a particular type of foreign worker—those on seasonal H–2 visas. The article drew on seven months’ worth of reporting, scores of interviews, hundreds of documents—and two large datasets maintained by the Department of Labor.

That same morning, we published the corresponding data, methodologies, and analytic code on GitHub. This isn’t the first time we’ve open-sourced our data and analysis; far from it. But the H–2 project represents our most ambitious effort yet. In this post, I’ll describe our current thinking on “reproducible data analyses,” and how the H–2 project reflects those thoughts.

What Is “Reproducible Data Analysis”?

It’s helpful to break down a couple of slightly oversimplified definitions. Let’s call “open-sourcing” the act of publishing the raw code behind a software project. And let’s call “reproducible data analysis” the act of open-sourcing the code and data required to reproduce a set of calculations.

Journalism has seen a mini-boom of reproducible data analysis in the past year or two. (It’s far from a novel concept, of course.) FiveThirtyEight publishes data and re-runnable computer code for many of their stories. You can download the brains and brawn behind Leo, the New York Times’ statistical model for forecasting the outcome of the 2014 midterm Senate elections. And if you want to re-run Barron’s magazine’s analysis of SEC Rule 605 reports, you can do that, too. The list goes on.
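
For a concrete picture of what a re-runnable analysis can look like, here is a minimal sketch with invented file and column names; it does not reflect the structure of any of the repositories mentioned above. The idea is that every published number is derived from the raw data by code rather than by hand, so re-running the script regenerates the findings.

```python
# A minimal re-runnable analysis script (hypothetical file and column names):
# raw data in, published numbers out, with every transformation captured in code.
import pandas as pd

RAW = "data/h2_certifications_raw.csv"   # placeholder path to the raw input
OUT = "output/summary_by_state.csv"      # derived output, regenerated on each run

def clean(df: pd.DataFrame) -> pd.DataFrame:
    """Normalize the columns the analysis depends on."""
    df = df.rename(columns=str.lower)
    df["workers_certified"] = pd.to_numeric(df["workers_certified"], errors="coerce")
    return df.dropna(subset=["state", "workers_certified"])

def summarize(df: pd.DataFrame) -> pd.DataFrame:
    """Total certified workers per state, the kind of figure a story might cite."""
    return (df.groupby("state", as_index=False)["workers_certified"]
              .sum()
              .sort_values("workers_certified", ascending=False))

if __name__ == "__main__":
    summary = summarize(clean(pd.read_csv(RAW)))
    summary.to_csv(OUT, index=False)
    print(summary.head(10))
```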

….

Why Reproducible Data Analysis?

At BuzzFeed News, our main motivation is simple: transparency. If an article includes our own calculations (and they go beyond a grade-schooler’s pen-and-paper calculations), then you should be able to see—and potentially criticize—how we did it….

There are reasons, of course, not to publish a fully-reproducible analysis. The most obvious and defensible reason: Your data includes Social Security numbers, state secrets, or other sensitive information. Sometimes, you’ll be able to scrub these bits from your data. Other times, you won’t. (A detailed methodology is a good alternative.)

How To Publish Reproducible Data Analysis?

At BuzzFeed News, we’re still figuring out the best way to skin this cat. Other news organizations might arrive at entirely opposite conclusions. That said, here are some tips, based on our experience:

Describe the main data sources, and how you got them. Art appraisers and data-driven reporters agree: Provenance matters. Who collected the data? What universe of things does it quantify? How did you get it? … (More)”
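
One lightweight way to act on that tip is to ship a machine-readable provenance record alongside the published data. The example below is purely illustrative, with hypothetical fields and file names rather than BuzzFeed News’ actual format:

```python
# A hypothetical provenance record published next to the data
# (illustrative fields only, not BuzzFeed News' actual format).
import json

PROVENANCE = {
    "dataset": "h2_visa_certifications.csv",      # placeholder file name
    "collected_by": "U.S. Department of Labor",
    "covers": "Employer applications to hire seasonal H-2 workers",
    "obtained_via": "Public disclosure files downloaded from the agency's website",
    "retrieved_on": "YYYY-MM-DD",                 # record the actual retrieval date
    "transformations": [
        "concatenated yearly disclosure files",
        "normalized state abbreviations",
    ],
}

with open("PROVENANCE.json", "w") as f:
    json.dump(PROVENANCE, f, indent=2)
```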