Iranian youth get app to dodge morality police


BBC Trending: “An anonymous team of Iranian app developers have come up with a solution to help young, fashion-conscious Iranians avoid the country’s notorious morality police, known in Persian as “Ershad”, or guidance.

Ershad’s mobile checkpoints, which usually consist of a van, a few bearded men and one or two women in black chadors, are deployed in towns across Iran and appear with no notice.

Ershad personnel have an extensive list of powers, ranging from issuing warnings and forcing those they accuse of violating Iran’s Islamic code of conduct to make a written statement pledging never to do so again, to fines or even prosecution.

The new phone app, which is called “Gershad” (probably meaning to get around Ershad instead of facing them), will alert users to checkpoints and help them avoid them by choosing a different route.

The data for the app is crowdsourced. It relies on users to mark the locations of Ershad vans on a map; when enough users report the same location, an alert appears on the map for other users. As the number of reports drops, the alert gradually fades from the map.
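Gershad’s implementation has not been published, but the report-threshold-fade mechanism described above is simple to sketch. The following Python toy is an illustration under stated assumptions, not the app’s actual logic: the report threshold, the 30-minute expiry and the coarse grid-cell bucketing are all invented for the example.

```python
import time
from collections import defaultdict

# All constants below are illustrative assumptions, not Gershad's values.
REPORT_THRESHOLD = 3        # reports needed before an alert appears
REPORT_TTL = 30 * 60        # seconds before a report stops counting

_reports = defaultdict(list)  # grid cell -> timestamps of user reports

def _cell(lat, lon):
    """Bucket coordinates into a coarse grid cell (~100 m)."""
    return (round(lat, 3), round(lon, 3))

def report_checkpoint(lat, lon, now=None):
    """Record one user's sighting of an Ershad van."""
    _reports[_cell(lat, lon)].append(now or time.time())

def active_alerts(now=None):
    """Return cells with enough recent reports; stale reports age out,
    so an alert fades once sightings stop coming in."""
    now = now or time.time()
    alerts = {}
    for cell, stamps in _reports.items():
        recent = [t for t in stamps if now - t < REPORT_TTL]
        _reports[cell] = recent
        if len(recent) >= REPORT_THRESHOLD:
            alerts[cell] = len(recent)  # strength could drive map opacity
    return alerts

# Example: three users report the same van in central Tehran
for _ in range(3):
    report_checkpoint(35.6892, 51.3890)
print(active_alerts())  # {(35.689, 51.389): 3}
```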

Screengrab of Tehran on Gershad

In a statement on their web page the app’s developers explain their motives in this way: “Why do we have to be humiliated for our most obvious right which is the right to wear what we want? Social media networks and websites are full of footage and photos of innocent women who have been beaten up and dragged on the ground by the Ershad patrol agents.”…

According to the designers of Gershad, in 2014 alone, around three million people were issued with official warnings, 18,000 were prosecuted and more than 200,000 were made to write formal pledges of repentance….

If the app lives up to the claims made for it, Gershad will be a lifesaver for the growing numbers of young Iranians who are pushing the boundaries of what is allowed and finding themselves on the wrong side of what an Ershad agent sees as acceptable….(More)”

Three and a half degrees of separation


Sergey Edunov, Carlos Diuk, Ismail Onur Filiz, Smriti Bhagat and Moira Burke at Facebook Research: “…How connected is the world? Playwrights, poets, and scientists have proposed that everyone on the planet is connected to everyone else by six other people. In honor of Friends Day, we’ve crunched the Facebook friend graph and determined that the number is 3.57. Each person in the world (at least among the 1.59 billion people active on Facebook) is connected to every other person by an average of three and a half other people. The average distance we observe is 4.57, corresponding to 3.57 intermediaries or “degrees of separation.” Within the US, people are connected to each other by an average of 3.46 degrees.

Our collective “degrees of separation” have shrunk over the past five years. In 2011, researchers at Cornell, the Università degli Studi di Milano, and Facebook computed the average across the 721 million people using the site then, and found that it was 3.74 [4,5]. Now, with twice as many people using the site, we’ve grown more interconnected, thus shortening the distance between any two people in the world.

Calculating this number across billions of people and hundreds of billions of friendship connections is challenging; we use statistical techniques described below to precisely estimate distance based on de-identified, aggregate data.

….Calculating degrees of separation in a network with hundreds of billions of edges is a monumental task, because the number of people reached grows very quickly with the degree of separation.

Imagine a person with 100 friends. If each of his friends also has 100 friends, then the number of friends-of-friends will be 10,000. If each of those friends-of-friends also has 100 friends then the number of friends-of-friends-of-friends will be 1,000,000. Some of those friends may overlap, so we need to filter down to the unique connections. We’re only two hops away and the number is already big. In reality this number grows even faster since most people on Facebook have more than 100 friends. We also need to do this computation 1.6 billion times; that is, for every person on Facebook.

Rather than calculate it exactly, we relied on statistical algorithms developed by Kang and others [6-8] to estimate distances with great accuracy, basically finding the approximate number of people within 1, 2, 3 (and so on) hops away from a source….(More)
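To make the approach concrete, here is a toy Python sketch of the iterative “people within h hops” computation. It uses exact per-person sets, which would be hopeless at Facebook’s scale; the algorithms by Kang and others cited above replace those sets with fixed-size probabilistic cardinality sketches (Flajolet-Martin-style counters), trading a small amount of accuracy for an enormous saving in memory. The graph and function names below are invented for illustration.

```python
from collections import defaultdict

def neighborhood_sizes(edges, max_hops=4):
    """For h = 1..max_hops, count (person, person) pairs within h hops
    of each other, summed over all sources."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # reached[u] = everyone known to be within h hops of u; real systems
    # replace these exact sets with probabilistic sketches
    reached = {u: {u} for u in adj}
    totals = []
    for _ in range(max_hops):
        reached = {u: set.union(reached[u], *(reached[v] for v in adj[u]))
                   for u in adj}
        totals.append(sum(len(s) for s in reached.values()))
    return totals

# Toy graph: a path a-b-c-d plus a shortcut a-c
print(neighborhood_sizes([("a", "b"), ("b", "c"), ("c", "d"), ("a", "c")]))
# -> [12, 16, 16, 16]; the per-hop increments give the distance
#    distribution, from which an average distance follows
```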


Open government data and why it matters


Australian Government: “Open government data was a key focus of the Prime Minister’s $1.1 billion innovation package announced this month.

The Bureau of Communications Research (BCR) today released analysis of the impact of open government data, revealing its potential to generate up to $25 billion per year, or 1.5 per cent of Australia’s GDP.

‘In Australia, users can already access and re-use more than 7000 government data sets published on data.gov.au,’ said Dr Paul Paterson, Chief Economist and Head of the BCR.

‘Some of the high-value data sets include geospatial/mapping data, health data, transport data, mining data, environmental data, demographics data, and real-time emergency data.

‘Many Australians are unaware of the flow-on benefits from open government data as a result of the increased innovation and informed choice it creates. For example, open data has the power to generate new careers, more efficient government revenues and improved business practices, and to drive better public engagement.’…(More)”

Open data dusts off the art world


Suzette Lohmeyer at GCN: “Open data is not just for spreadsheets. Museums are finding ways to convert even the provenance of artwork into open data, offering an out-of-the-box lesson in accessibility to public sector agencies. The specific use case could be of interest to government as well — many cities and states have sizeable art collections, and the General Services Administration owns more than 26,000 pieces.

Open data solving art history mysteries?

Making provenance data open and accessible gives more people information about a piece’s sometimes sordid history, including clues that might uncover evidence of Nazi confiscation.

Most art pieces have a few skeletons in their closet, or at least a backstory worthy of The History Channel. That provenance, or ownership information, has traditionally been stored in manila folders, only occasionally dusted off by art historians for academic papers or auction houses to verify authenticity. Many museums have some provenance data in collection management systems, but the narratives that tell the history of the work are often stored as semi-structured data, formatted according to the needs of individual institutions, making the information both hard to search and share across systems.

Enter Art Tracks from Pittsburgh’s Carnegie Museum of Art (CMOA) — a new open source, open data initiative that aims to turn provenance into structured data by building a suite of open source software tools so an artwork’s past can be available to museum goers, curators, researchers and software developers.
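To see what turning provenance into structured data can look like, here is a hedged sketch in Python. The field names are hypothetical, not the actual Art Tracks or Elysa schema; the point is how a narrative line becomes discrete, queryable ownership events.

```python
# A provenance narrative of the traditional, semi-structured kind:
narrative = ("Sold by the artist to Galerie X, Paris, 1923; "
             "purchased by J. Smith, New York, 1956.")

# The same history as structured events (field names are hypothetical,
# not the actual Art Tracks schema):
events = [
    {"method": "sale",     "from": "the artist", "to": "Galerie X",
     "location": "Paris",    "year": 1923},
    {"method": "purchase", "from": "Galerie X",  "to": "J. Smith",
     "location": "New York", "year": 1956},
]

def nazi_era_gap(events):
    """Flag a gap in documented ownership spanning 1933-1945, the kind
    of red flag provenance researchers look for."""
    years = sorted(e["year"] for e in events)
    return any(a < 1933 and b > 1945 for a, b in zip(years, years[1:]))

print(nazi_era_gap(events))  # True: nothing documented between 1923 and 1956
```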

 

….The Art Tracks software is all open source. The code libraries and the user-facing provenance entry tool called Elysa (E-lie-za) are all “available on GitHub for use, modification and tinkering,” Berg-Fulton explained. “That’s a newer way of working for our museum, but that openness gives others a chance to lean on our technical expertise and improve their own records and hopefully contribute back to the software to improve that as well.”

Using an open data format, Berg-Fulton said, also creates opportunities for ongoing partnerships with other experts across the museum community so that provenance becomes a constant conversation.

This is a move Berg-Fulton said CMOA has been “dying to make,” because the more people that have access to data, the more ways it can be interpreted. “When you give people data, they do cool things with it, like help you make your own records better, or interpret it in a way you’ve never thought of,” she said. “It feels like the right thing to do in light of our duty to public trust.”….(More)”

Give Up Your Data to Cure Disease


David B. Agus in The New York Times: “How far would you go to protect your health records? Your privacy matters, of course, but consider this: Mass data can inform medicine like nothing else and save countless lives, including, perhaps, your own.

Over the past several years, using some $30 billion in federal stimulus money, doctors and hospitals have been installing electronic health record systems. ….Yet neither doctors nor patients are happy. Doctors complain about the time it takes to update digital records, while patients worry about confidentiality…

We need to get over it. These digital databases offer an incredible opportunity to examine trends that will fundamentally change how doctors treat patients. They will help develop cures, discover new uses for drugs and better track the spread of scary new illnesses like the Zika virus….

Case in point: Last year, a team led by researchers at the MD Anderson Cancer Center and Washington University found that a common class of heart drugs called beta blockers, which block the effects of adrenaline, may prolong ovarian cancer patients’ survival. This discovery came after the researchers reviewed more than 1,400 patient records, and identified an obvious pattern among those with ovarian cancer who were using beta blockers, most often to control their blood pressure. Women taking earlier versions of this class of drug typically lived for almost eight years after their cancer diagnosis, compared with just three and a half years for the women not taking any beta blocker….

We need to move past that. For one thing, more debate over data sharing is already leading to more data security. Last month a bill was signed into law calling for the Department of Health and Human Services to create a health care industry cybersecurity task force, whose members would hammer out new voluntary standards.

New technologies — and opportunities — come with unprecedented risks and the need for new policies and strategies. We must continue to improve our encryption capabilities and other methods of data security and, most important, mandate that they are used. The hack of the Anthem database last year, for instance, which allowed 80 million personal records to be accessed, was shocking not only for the break-in, but for the lack of encryption….

Medical research is making progress every day, but the next step depends less on scientists and doctors than it does on the public. Each of us has the potential to be part of tomorrow’s cures. (More)”

New Tools for Collaboration: The Experience of the U.S. Intelligence Community


IBM Center for The Business of Government: “This report is intended for an audience beyond the U.S. Intelligence Community—senior managers in government, their advisors and students of government performance who are interested in the progress of collaboration in a difficult environment. …

The purpose of this report is to learn lessons by looking at the use of internal collaborative tools across the Intelligence Community. The initial rubric was tools, but the real focus is collaboration, for while tools can enable collaboration, what ultimately matters are policies and practices interacting with organizational culture. It looks for good practices to emulate. The ultimate question is how, and how much, collaborative tools could and should foster integration across the Community. The focus is analysis and the analytic process, but collaborative tools can and do serve many other functions in the Intelligence Community—from improving logistics or human resources, to better connecting collection and analysis, to assisting administration and development, to facilitating, as one interlocutor put it, operational “go” decisions. Yet it is in the analytic realm that collaboration is most visible and rubs most against traditional work processes, which are not widely collaborative.

The report defines terms and discusses concepts, first exploring collaboration and coordination, then defining collaborative tools and social media, then surveying the experience of the private sector. The second section of the report uses those distinctions to sort out the blizzard of collaborative tools that have been created in the various intelligence agencies and across them. The third section outlines the state of collaboration, again both within agencies and across them. The report concludes with findings and recommendations for the Community. The recommendations amount to a continuum of possible actions in making more strategic what is and will continue to be more a bottom-up process of creating and adopting collaborative tools and practices….(More)”

The Promise and Perils of Open Medical Data


Sharona Hoffman at the Hastings Center: “Not long ago I visited the Personal Genome Project’s website. The PGP describes its mission as “creating public genome, health, and trait data.” In the “Participant Profiles” section, I found several entries that disclosed the names of individuals along with their date of birth, sex, weight, height, blood type, race, health conditions, medications, allergies, medical procedures, and more. Other profiles did not feature names but provided all of the other details. I had no special access to this information. It is available to absolutely anyone with Internet access.

The PGP is part of a trend known as “open data.” Many government and private entities have launched initiatives to compile very large data resources (also known as “big data”) and to make them available to the public. President Obama himself has endorsed open data by issuing a May 2013 executive order directing that, to the extent permitted by law, the federal government must release its data to the public in forms that make it easy to locate, access, and use.

Read more: http://www.thehastingscenter.org/Publications/HCR/Detail.aspx?id=7731

The rise of the citizen expert


Beth Noveck (The GovLab) at Policy Network: “Does the EU need to be more democratic? It is not surprising that Jürgen Habermas, Europe’s most famous democratic theorist, laments the dearth of mechanisms for “fulfilling the citizens’ political will” in European institutions. The controversial handling of the Greek debt crisis, according to Habermas, was clear evidence of the need for more popular input into otherwise technocratic decision-making. Incremental progress toward participation does not excuse a growing crisis of democratic legitimacy that, he says, is undermining the European project….

For participatory democrats like Habermas, opportunities for deliberative democratic input by citizens are essential to legitimacy. And, to be sure, the absence of such opportunities is no guarantee of more effective outcomes. A Greek referendum in July 2015 scuttled European austerity plans.

But pitting technocracy against citizenship is a false dichotomy resulting from the long-held belief, even among reformers, that only professional public servants or credentialed elites possess the requisite abilities to govern in a complex society. Citizens are spectators who can express opinions, but cognitive incapacity, laziness or simply the complexity of modern society limit participation to asking people what they feel by means of elections, opinion polls, or social media.

Although seeing technocracy as the antinomy of citizenship made sense when expertise was difficult to pinpoint, tools like LinkedIn, which make know-how more searchable, are now making it possible for public institutions to get more help from more diverse sources – including from within the civil service – systematically, and could enable more members of the public to participate actively in governing based on what they know and care about. It is high time for institutions to begin to leverage such platforms to match the supply of expertise to the demand for it and, in the process, increase engagement, becoming more effective and more legitimate.

Such software does more than catalogue credentials. The internet is radically decreasing the costs of identifying diverse forms of expertise so that the person who has taken courses on an online learning platform can showcase those credentials with a searchable digital badge. The person who has answered thousands of questions on a question-and-answer website can demonstrate their practical ability and willingness to help. Ratings by other users further attest to the usefulness of their contributions. In short, it is becoming possible to discover what people know and can do in ever more finely tuned ways and match people to opportunities to participate that speak to their talents….

In an era in which it is commonplace for companies to use technology to segment customers in an effort to promote their products more effectively, the idea of matching might sound obvious. To be sure, it is common practice in business – but in the public sphere, the notion that participation should be tailored to the individual’s abilities and tethered to day-to-day practices of governing, not politicking, is new. More accurately, it is a revival of Athenian life, where citizen competence and expertise were central to economic and military success.

What makes this kind of targeted engagement truly democratic – and citizenship in this vision more active, robust, and meaningful – is that such targeting allows us to multiply the number and frequency of ways to engage productively in a manner consistent with each person’s talents. When we move away from focusing on citizen opinion to discovering citizen expertise, we catalyse participation that is also independent of geographical boundaries….(More)”

Moving from Open Data to Open Knowledge: Announcing the Commerce Data Usability Project


Jeffrey Chen, Tyrone Grandison, and Kristen Honey at the US Department of Commerce: “…in 2016, the DOC is committed to building on this momentum with new and expanded efforts to transform open data into knowledge into action.

DOC Open Data Graphic
Graphic Credit: Radhika Bhatt, Commerce Data Service

DOC has been in the business of open data for a long time. DOC’s National Oceanic and Atmospheric Administration (NOAA) alone collects and disseminates huge amounts of data that fuel the global weather economy—and this information represents just a fraction of the tens of thousands of datasets that DOC collects and manages, on topics ranging from satellite imagery to material standards to demographic surveys.

Unfortunately, far too many DOC datasets are hard to find, difficult to use, or not yet publicly available on Data.gov, the home of the U.S. government’s open data. This challenge is not exclusive to DOC; indeed, under Project Open Data, federal agencies are working hard on various efforts to make taxpayer-funded data more easily discoverable.

CDUP screenshot

One of these efforts is DOC’s Commerce Data Usability Project (CDUP). To unlock the power of data, just making data open isn’t enough. It’s critical to make data easier to find and use—to provide information and tools that make data accessible and actionable for all users. That’s why DOC formed a public-private partnership to create CDUP, a collection of online data tutorials that give students, developers, and entrepreneurs the context and code they need to start quickly extracting value from various datasets. Tutorials exist on topics such as:

  • NOAA’s Severe Weather Data Inventory (SWDI), demonstrating how to use hail data to save lives and property. The tutorial helps users see that hail events often occur in the summer (late night to early morning) and in midwestern and southern states (a sketch of this kind of analysis follows this list).
  • Security vulnerability data from the National Institute of Standards and Technology (NIST). The tutorial helps users see that spikes and dips in security incidents consistently occur in the same set of weeks each year.
  • Visible Infrared Imaging Radiometer Suite (VIIRS) data from NOAA. The tutorial helps users understand how to use satellite imagery to estimate populations.
  • American Community Survey (ACS) data from the U.S. Census Bureau. The tutorial helps users understand how nonprofits can identify communities that they want to serve based on demographic traits.
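As a flavor of what such a tutorial walks through, below is a hedged sketch of the hail analysis mentioned in the first bullet. It assumes a CSV export of SWDI hail events with an event_time column; the file name and column name are invented for the example, and this is not the actual CDUP tutorial code.

```python
import pandas as pd

# Assumed CSV export of SWDI hail events; "event_time" is a
# hypothetical column name, not necessarily SWDI's actual field.
hail = pd.read_csv("swdi_hail_events.csv", parse_dates=["event_time"])

by_month = hail["event_time"].dt.month.value_counts().sort_index()
by_hour = hail["event_time"].dt.hour.value_counts().sort_index()

print(by_month)  # the tutorial's finding: counts peak in summer months
print(by_hour)   # ...and late night to early morning
```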

In the coming months, CDUP will continue to expand with a rich, diverse set of additional tutorials….(More)

Crowdsourcing City Government: Using Tournaments to Improve Inspection Accuracy


Edward Glaeser, Andrew Hillis, Scott Kominers and Michael Luca in American Economic Review: Papers and Proceedings: “The proliferation of big data makes it possible to better target city services like hygiene inspections, but city governments rarely have the in-house talent needed for developing prediction algorithms. Cities could hire consultants, but a cheaper alternative is to crowdsource competence by making data public and offering a reward for the best algorithm. A simple model suggests that open tournaments dominate consulting contracts when cities can tolerate risk and when there is enough labor with low opportunity costs. We also report on an inexpensive Boston-based restaurant tournament, which yielded algorithms that proved reasonably accurate when tested “out-of-sample” on hygiene inspections….(More)”