The Open Science Prize


The Open Science Prize is a new initiative from the Wellcome Trust, US National Institutes of Health and Howard Hughes Medical Institute to encourage and support the prototyping and development of services, tools and/or platforms that enable open content – including publications, datasets, code and other research outputs – to be discovered, accessed and re-used in ways that will advance research, spark innovation and generate new societal benefits….
The volume of digital research objects available to researchers and the wider public is greater now than ever before, and so, consequently, are the opportunities to mine and extract value from existing open content and to generate new discoveries and other societal benefits. A key obstacle to realizing these benefits is that open content is often difficult to discover, access and utilize.
The goal of this Prize is to stimulate the development of novel and ground-breaking tools and platforms to enable the reuse and repurposing of open digital research objects relevant to biomedical or health applications.  A Prize model is necessary to help accelerate the field of open biomedical research beyond what current funding mechanisms can achieve.  We also hope to demonstrate the huge potential value of Open Science approaches, and to generate excitement, momentum and further investment in the field….(More)”.

How Does Civil Society Use Budget Information? Mapping Fiscal Transparency Gaps and Needs in Developing Countries


Paolo de Renzio and Massimo Mastruzzi at International Budget Partnership (IBP): “Governments sometimes complain that the budget information they make publicly available is seldom accessed and utilized. On the other hand, civil society organizations (CSOs) often claim that the information governments make available is very difficult to understand and not detailed enough to allow for meaningful analysis and advocacy. Is there a mismatch between the budget information supplied by governments and demand among civil society?

This paper examines the “demand side” of fiscal transparency using findings from a global survey of 176 individuals working in civil society who use budget information for analysis and advocacy activities. Based on the responses, the authors identify a “fiscal transparency effectiveness gap” between the fiscal information that governments often provide and the information that CSOs need.

These findings are used to develop a set of recommendations to help governments ensure their transparency practices deliver increased citizen engagement, improved oversight, and enhanced accountability….(More)”

Global Standards in National Contexts: The Role of Transnational Multi-Stakeholder Initiatives in Public Sector Governance Reform


Paper by Brandon Brockmyer: “Multi-stakeholder initiatives (i.e., partnerships between governments, civil society, and the private sector) are an increasingly prevalent strategy promoted by multilateral, bilateral, and nongovernmental development organizations for addressing weaknesses in public sector governance. Global public sector governance MSIs seek to make national governments more transparent and accountable by setting shared standards for information disclosure and multi-stakeholder collaboration. However, research on similar interventions implemented at the national or subnational level suggests that the effectiveness of these initiatives is likely to be mediated by a variety of socio-political factors.

This dissertation examines the transnational evidence base for three global public sector governance MSIs (the Extractive Industries Transparency Initiative, the Construction Sector Transparency Initiative, and the Open Government Partnership) and investigates their implementation within and across three shared national contexts: Guatemala, the Philippines, and Tanzania. It asks whether and how these initiatives lead to improvements in proactive transparency (i.e., discretionary release of government data), demand-driven transparency (i.e., reforms that increase access to government information upon request), and accountability (i.e., the extent to which government officials are compelled to publicly explain their actions and/or face penalties or sanctions for them). It also asks to what extent these initiatives provide participating governments with an opportunity to project a public image of transparency and accountability while maintaining questionable practices in these areas (i.e., openwashing).

The evidence suggests that global public sector governance MSIs often facilitate gains in proactive transparency by national governments, but that improvements in demand-driven transparency and accountability remain relatively rare. Qualitative comparative analysis reveals that a combination of multi-stakeholder power sharing and civil society capacity is sufficient to drive improvements in proactive transparency, while the absence of visible, high-level political support is sufficient to impede such reforms. The lack of demand-driven transparency or accountability gains suggests that national-level coalitions forged by global MSIs are often too narrow to successfully advocate for broader improvements to public sector governance. Moreover, evidence for openwashing was found in one-third of cases, suggesting that national governments sometimes use global MSIs to deliberately mislead international observers and domestic stakeholders about their commitment to reform….(More)”

The Algorithm as a Human Artifact: Implications for Legal {Re}Search


Paper by Susan Nevelow Mart: “When legal researchers search in online databases for the information they need to solve a legal problem, they need to remember that the algorithms that are returning results to them were designed by humans. The world of legal research is a human-constructed world, and the biases and assumptions of the teams of humans who construct the online world are imported into the systems we use for research. This article looks at what happens when six different teams of humans set out to solve the same problem: how to return results relevant to a searcher’s query in a case database. When the top ten results for the same search entered into the same jurisdictional case database are compared across Casetext, Fastcase, Google Scholar, Lexis Advance, Ravel, and Westlaw, the results are a remarkable testament to the variability of human problem solving. There is hardly any overlap in the cases that appear in the top ten results returned by each database. An average of 40% of the cases were unique to one database, and only about 7% of the cases were returned in the search results of all six databases. It is fair to say that each set of engineers brought very different biases and assumptions to the creation of each search algorithm. One of the most surprising results was the clustering among the databases in terms of the percentage of relevant results. The oldest database providers, Westlaw and Lexis, had the highest percentages of relevant results, at 67% and 57%, respectively. The newer legal database providers, Fastcase, Google Scholar, Casetext, and Ravel, clustered together at a lower relevance rate, returning approximately 40% relevant results.

Legal research has always been an endeavor that required redundancy in searching; one resource does not usually provide a full answer, just as one search will not provide every necessary result. The study clearly demonstrates that the need for redundancy in searches and resources has not faded with the rise of the algorithm. From the law professor seeking to set up a corpus of cases to study, to the trial lawyer seeking that one elusive case, to the legal research professor showing students the limitations of algorithms, researchers who want full results will need to mine multiple resources with multiple searches. And more accountability about the nature of the algorithms being deployed would allow all researchers to craft searches that would be optimally successful….(More)”.
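The overlap figures quoted above are simple set arithmetic over the six databases' result lists. A rough sketch of that calculation in Python is below; the databases shown and the case identifiers are invented placeholders, not data from the study.

```python
from collections import Counter

# Invented top-ten result sets for one query, keyed by database.
# The case identifiers are placeholders, not real citations.
results = {
    "Casetext": {"c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8", "c9", "c10"},
    "Fastcase": {"c1", "c3", "c11", "c12", "c13", "c14", "c15", "c16", "c17", "c18"},
    "Westlaw":  {"c2", "c3", "c19", "c20", "c21", "c22", "c23", "c24", "c25", "c26"},
    # ...Google Scholar, Lexis Advance, and Ravel would be added the same way.
}

# How many databases returned each case?
counts = Counter(case for cases in results.values() for case in cases)

total = len(counts)
unique_to_one = sum(1 for n in counts.values() if n == 1)
in_all = sum(1 for n in counts.values() if n == len(results))

print(f"{unique_to_one / total:.0%} of cases appear in only one database")
print(f"{in_all / total:.0%} of cases appear in all {len(results)} databases")
```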

Discrimination by algorithm: scientists devise test to detect AI bias


From the Guardian: “There was the voice recognition software that struggled to understand women, the crime prediction algorithm that targeted black neighbourhoods and the online ad platform that was more likely to show men highly paid executive jobs.

Concerns have been growing about AI’s so-called “white guy problem” and now scientists have devised a way to test whether an algorithm is introducing gender or racial biases into decision-making.

Moritz Hardt, a senior research scientist at Google and a co-author of the paper, said: “Decisions based on machine learning can be both incredibly useful and have a profound impact on our lives … Despite the need, a vetted methodology in machine learning for preventing this kind of discrimination based on sensitive attributes has been lacking.”

The paper was one of several on detecting discrimination by algorithms to be presented at the Neural Information Processing Systems (NIPS) conference in Barcelona this month, indicating a growing recognition of the problem.

Nathan Srebro, a computer scientist at the Toyota Technological Institute at Chicago and co-author, said: “We are trying to enforce that you will not have inappropriate bias in the statistical prediction.”

The test is aimed at machine learning programs, which learn to make predictions about the future by crunching through vast quantities of existing data. Since the decision-making criteria are essentially learnt by the computer, rather than being pre-programmed by humans, the exact logic behind decisions is often opaque, even to the scientists who wrote the software….“Our criteria does not look at the innards of the learning algorithm,” said Srebro. “It just looks at the predictions it makes.”

Their approach, called Equality of Opportunity in Supervised Learning, works on the basic principle that when an algorithm makes a decision about an individual – be it to show them an online ad or award them parole – the decision should not reveal anything about the individual’s race or gender beyond what might be gleaned from the data itself.

For instance, if men were on average twice as likely to default on bank loans as women, and if you knew that a particular individual in a dataset had defaulted on a loan, you could reasonably conclude they were more likely (but not certain) to be male.

However, if an algorithm calculated that the most profitable strategy for a lender was to reject all loan applications from men and accept all female applications, the decision would precisely confirm a person’s gender.

“This can be interpreted as inappropriate discrimination,” said Srebro….(More)”.
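The criterion Hardt, Srebro and colleagues describe can be evaluated from a model's predictions and the true outcomes alone, without opening the algorithm itself. The sketch below illustrates the core check, comparing true positive rates across two groups on invented loan data; it is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

def true_positive_rate(y_true, y_pred, group, value):
    """Among people in the given group who actually repaid (y_true == 1),
    what fraction did the model approve (y_pred == 1)?"""
    mask = (group == value) & (y_true == 1)
    return y_pred[mask].mean()

# Invented loan data: 1 = repaid (or approved), 0 = defaulted (or rejected).
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 0, 0])
group  = np.array(["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"])

tpr_m = true_positive_rate(y_true, y_pred, group, "m")
tpr_f = true_positive_rate(y_true, y_pred, group, "f")

# Equality of opportunity asks these two rates to be (approximately) equal;
# a large gap suggests the predictions treat equally qualified people
# differently depending on their group.
print(f"TPR men: {tpr_m:.2f}  TPR women: {tpr_f:.2f}  gap: {abs(tpr_m - tpr_f):.2f}")
```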

Science Can Restore America’s Faith in Democracy


Ariel Procaccia in Wired: “…Like most other countries, individual states in the US employ the antiquated plurality voting system, in which each voter casts a vote for a single candidate, and the person who amasses the largest number of votes is declared the winner. If there is one thing that voting experts unanimously agree on, it is that plurality voting is a bad idea, or at least a badly outdated one…. Maine recently became the first US state to adopt instant-runoff voting; the approach will be used for choosing the governor and members of Congress and the state legislature….
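For readers unfamiliar with how instant-runoff voting differs from plurality, here is a minimal sketch of the counting procedure; the ballots are invented.

```python
from collections import Counter

def instant_runoff(ballots):
    """Each ballot ranks candidates from most to least preferred. Repeatedly
    eliminate the candidate with the fewest first-choice votes until one
    candidate holds a majority of the remaining ballots."""
    ballots = [list(b) for b in ballots]
    while True:
        tallies = Counter(b[0] for b in ballots if b)
        total = sum(tallies.values())
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > total or len(tallies) == 1:
            return leader
        loser = min(tallies, key=tallies.get)
        ballots = [[c for c in b if c != loser] for b in ballots]

# Invented ballots: under plurality, A wins with 4 first-choice votes, but once
# C is eliminated, C's supporters transfer to B, who then holds a majority.
ballots = [
    ["A", "B", "C"], ["A", "C", "B"], ["A", "B", "C"], ["A", "C", "B"],
    ["B", "A", "C"], ["B", "C", "A"], ["B", "A", "C"],
    ["C", "B", "A"], ["C", "B", "A"],
]
print(instant_runoff(ballots))  # prints "B"
```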

So why aren’t we already using cutting-edge voting systems in national elections? Perhaps because changing election systems usually itself requires an election, where short-term political considerations may trump long-term, scientifically grounded reasoning…. Despite these difficulties, in the last few years state-of-the-art voting systems have made the transition from theory to practice, through not-for-profit online platforms that focus on facilitating elections in cities and organizations, or even just on helping a group of friends decide where to go to dinner. For example, the Stanford Crowdsourced Democracy Team has created an online tool whereby residents of a city can vote on how to allocate the city’s budget for public projects such as parks and roads. This tool has been used by New York City, Boston, Chicago, and Seattle to allocate millions of dollars. Building on this success, the Stanford team is experimenting with groundbreaking methods, inspired by computational thinking, to elicit and aggregate the preferences of residents.

The Princeton-based project All Our Ideas asks voters to compare pairs of ideas, and then aggregates these comparisons via statistical methods, ultimately providing a ranking of all the ideas. To date, roughly 14 million votes have been cast using this system, and it has been employed by major cities and organizations. Among its more whimsical use cases is the Washington Post’s 2010 holiday gift guide, where the question was “what gift would you like to receive this holiday season”; the disappointingly uncreative top idea, based on tens of thousands of votes, was “money”.
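All Our Ideas aggregates pairwise comparisons with statistical models that are more sophisticated than this, but the underlying idea can be sketched by ranking ideas on how often they win the matchups they appear in; the votes below are invented.

```python
from collections import defaultdict

# Invented pairwise votes: (winner, loser) for each "which do you prefer?" answer.
votes = [
    ("money", "gift card"), ("money", "book"), ("book", "gift card"),
    ("money", "book"), ("gift card", "book"), ("money", "gift card"),
]

wins = defaultdict(int)
appearances = defaultdict(int)
for winner, loser in votes:
    wins[winner] += 1
    appearances[winner] += 1
    appearances[loser] += 1

# Rank ideas by the share of matchups they won.
ranking = sorted(appearances, key=lambda idea: wins[idea] / appearances[idea], reverse=True)
print(ranking)  # "money" comes out on top
```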

Finally, the recently launched website RoboVote (which I created with collaborators at Carnegie Mellon and Harvard) offers AI-driven voting methods to help groups of people make smart collective decisions. Applications range from selecting a spot for a family vacation or a class president, to potentially high-stakes choices such as which product prototype to develop or which movie script to produce.

These examples show that centuries of research on voting can, at long last, make a societal impact in the internet age. They demonstrate what science can do for democracy, albeit on a relatively small scale, for now….(More)”.

Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability


Mike Ananny and Kate Crawford in New Media and Society: “Models for understanding and holding systems accountable have long rested upon ideals and logics of transparency. Being able to see a system is sometimes equated with being able to know how it works and govern it—a pattern that recurs in recent work about transparency and computational systems. But can “black boxes” ever be opened, and if so, would that ever be sufficient? In this article, we critically interrogate the ideal of transparency, trace some of its roots in scientific and sociotechnical epistemological cultures, and present 10 limitations to its application. We specifically focus on the inadequacy of transparency for understanding and governing algorithmic systems and sketch an alternative typology of algorithmic accountability grounded in constructive engagements with the limitations of transparency ideals….(More)”

New Tech Helps Tenants Make Their Case in Court


Corinne Ramey at the Wall Street Journal: “Tenants and their advocates are using new technology to document a lack of heat in apartment buildings, a condition they say has been difficult to prove in housing-court cases.

Small sensors provided by the New York City-based nonprofit Heat Seek, now installed in some city apartments, measure temperatures and transmit the data to a server. Tenant advocates say the data buttress their contention that some landlords withhold heat as a way to oust rent-regulated tenants.

“It’s really exciting to be able to track this information and hopefully get the courts to accept it, so we can show what everybody knows to be true,” said Sunny Noh, a supervising attorney at the Legal Aid Society’s Tenant Rights Coalition, which filed a civil suit using Heat Seek’s data.

…The smaller device, called a cell, transmits hourly temperature readings to the larger device, called a hub, through a radio signal. The hub is equipped with a small modem, which it uses to send data to a web server. The nonprofit typically places a hub in each building and the cells inside individual apartments.
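The article does not document Heat Seek's server interface, but the cell-to-hub-to-server flow it describes might look roughly like the following on the hub side. The endpoint, field names, and identifiers are hypothetical.

```python
from datetime import datetime, timezone

import requests  # third-party HTTP client, assumed to be installed on the hub

# Hypothetical payload for one hourly reading relayed from a cell to the hub.
reading = {
    "hub_id": "hub-042",
    "cell_id": "cell-7",
    "temperature_f": 58.3,
    "recorded_at": datetime.now(timezone.utc).isoformat(),
}

# Hypothetical endpoint; the real Heat Seek server and schema are not described
# in the article.
response = requests.post("https://example.org/api/readings", json=reading, timeout=10)
response.raise_for_status()  # fail loudly if the server rejects the reading
```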

Last winter, Heat Seek’s data were used in eight court cases representing a total of about 20 buildings; all of the cases were settled, Ms. Francois said. Currently, the sensors are in about six buildings, and she anticipates about 50 buildings will have them by the end of the winter.

Data from Heat Seek haven’t been admitted at a trial yet, said Ms. Noh, the Legal Aid lawyer.

The nonprofit, with help from housing lawyers, has chosen to focus on gentrifying neighborhoods because they expect landlords there to have more incentives to force out rent-regulated tenants….(More)”

Privacy of Public Data


Paper by Kirsten E. Martin and Helen Nissenbaum: “The construct of an information dichotomy has played a defining role in regulating privacy: information deemed private or sensitive typically earns high levels of protection, while lower levels of protection are accorded to information deemed public or non-sensitive. Challenging this dichotomy, the theory of contextual integrity associates privacy with complex typologies of information, each connected with respective social contexts. Moreover, it contends that information type is merely one among several variables that shape people’s privacy expectations and underpin privacy’s normative foundations. Other contextual variables include key actors – information subjects, senders, and recipients – as well as the principles under which information is transmitted, such as whether with subjects’ consent, as bought and sold, as required by law, and so forth. Prior work revealed the systematic impact of these other variables on privacy assessments, thereby debunking the defining effects of so-called private information.

In this paper, we shine a light on the opposite effect, challenging conventional assumptions about public information. The paper reports on a series of studies, which probe attitudes and expectations regarding information that has been deemed public. Public records established through the historical practice of federal, state, and local agencies, as a case in point, are afforded little privacy protection, or possibly none at all. Motivated by progressive digitization and the creation of online portals through which these records have been made publicly accessible, our work underscores the need for more concentrated and nuanced privacy assessments, even more urgent in the face of vigorous open data initiatives, which call on federal, state, and local agencies to provide access to government records in both human and machine readable forms. Within a stream of research suggesting possible guard rails for open data initiatives, our work, guided by the theory of contextual integrity, provides insight into the factors systematically shaping individuals’ expectations and normative judgments concerning appropriate uses of and terms of access to information.

Using a factorial vignette survey, we asked respondents to rate the appropriateness of a series of scenarios in which contextual elements were systematically varied; these elements included the data recipient (e.g. bank, employer, friend), the data subject, and the source, or sender, of the information (e.g. individual, government, data broker). Because the object of this study was to highlight the complexity of people’s privacy expectations regarding so-called public information, information types were drawn from data fields frequently held in public government records (e.g. voter registration, marital status, criminal standing, and real property ownership).
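A factorial vignette design simply crosses every level of every factor to produce the scenarios respondents rate. The sketch below illustrates that crossing with factor levels loosely drawn from the examples above; the actual survey instrument used its own wording and factors.

```python
from itertools import product

# Illustrative factor levels; the actual instrument used its own wording and factors.
recipients = ["a bank", "an employer", "a friend"]
senders = ["the individual", "a government agency", "a data broker"]
info_types = ["voter registration", "marital status", "criminal history", "property ownership"]

vignettes = [
    f"{sender.capitalize()} shares information about your {info} with {recipient}. "
    "How appropriate is this?"
    for recipient, sender, info in product(recipients, senders, info_types)
]

print(len(vignettes))  # 3 x 3 x 4 = 36 scenarios for respondents to rate
print(vignettes[0])    # "The individual shares information about your voter registration with a bank. ..."
```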

Our findings are noteworthy on both theoretical and practical grounds. In the first place, they reinforce key assertions of contextual integrity about the simultaneous relevance to privacy of other factors beyond information types. In the second place, they reveal discordance between truisms that have frequently shaped public policy relevant to privacy. …(More)”

 

The Econocracy: The perils of leaving economics to the experts



Book by Joe Earle, Cahal Moran, and Zach Ward-Perkins (series edited by Mick Moran): “One hundred years ago the idea of ‘the economy’ didn’t exist. Now, improving the economy has come to be seen as perhaps the most important task facing modern societies. Politics and policymaking are conducted in the language of economics, and economic logic shapes how political issues are thought about and addressed. The result is that the majority of citizens, who cannot speak this language, are locked out of politics, while political decisions are increasingly devolved to experts. The Econocracy explains how economics came to be seen this way – and the damaging consequences. It opens up the discipline and demonstrates its inner workings to the wider public so that the task of reclaiming democracy can begin….(More)”