Can Crowdsourcing Help Make Life Easier For People With Disabilities?


Sean Captain at FastCompany: “These days GPS technology can get you as close as about 10 feet from your destination, close enough to see it—assuming you can see.

But those last few feet are a chasm for the blind (and GPS is sometimes accurate only to within about 30 feet).

“Actually finding the bus stop, not the right street, but standing in the right place when the bus comes, is pretty hard,” says Dave Power, president and CEO of the Perkins School for the Blind near Boston. Helen Keller’s alma mater is developing a mobile app that will provide audio directions—contributed by volunteers—so that blind people can get close enough to the stop for the bus driver to notice them.

Perkins’s app is one of 29 projects that recently received a total of $20 million in funding from Google.org’s Google Impact Challenge: Disabilities awards. Several of the winning initiatives rely on crowdsourced information to help the disabled—be they blind, in a wheelchair, or cognitively impaired. It’s a commonsense approach to tackling big logistical projects in a world full of people who have snippets of downtime during which they might perform bite-size acts of kindness online. But moving these projects from being just clever concepts to extensive services, based on the goodwill of volunteers, is going to be quite a hurdle.

People with limited mobility may have trouble traversing the last few feet between them and a wheelchair ramp, automatic doors, or other accommodations that aren’t easy to find (or may not even exist in some places). Wheelmap, based in Berlin, is trying to help by building online maps of accessible locations. Its website incorporates crowdsourced data. The site lets users type in a city and search for accessible amenities such as restaurants, hotels, and public transit.

Paris-based J’accede (which received 500,000 euros, about $565,000, from Google) provides similar capabilities in both a website and an app, with a slicker design somewhat resembling TripAdvisor.

Both services have a long way to go. J’accede lists 374 accessible bars/restaurants in its hometown and a modest selection in other French cities like Marseille. “We still have a lot of work to do to cover France,” says J’accede’s president Damien Birambeau in an email. The goal, though, is to go global, and the site is available in English, German, and Spanish, in addition to French. Likewise, Wheelmap (which got 825,000 euros, or $933,000) performs best in the German capital of Berlin and cities like Hamburg, but is less useful in other places.

These sites face the same challenge as many other volunteer-based, crowdsourced projects: getting a big enough crowd to contribute information to the service. J’accede hopes to make the process easier. In June, it will connect its service to Google Places, so contributors will only need to supply details about accommodations at a site; information like the location’s address and phone number will be pulled in automatically. But both J’accede and Wheelmap recognize that crowdsourcing has its limits. They are now going beyond voluntary contributions, setting up automated systems to scrape information from other databases of accessible locations, such as those maintained by governments.
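The Google Places integration suggests a simple pattern: fetch the stable facts automatically and leave only the accessibility details to the volunteer. Below is a minimal sketch of how such a pre-fill might look, assuming the Python requests library and Google’s Place Details web service; the function name, the place ID handling, and the accessibility field are illustrative, not J’accede’s actual code.

```python
import requests

PLACE_DETAILS_URL = "https://maps.googleapis.com/maps/api/place/details/json"

def prefill_venue(place_id: str, api_key: str) -> dict:
    """Pull name, address, and phone from Google Places so a contributor
    only has to describe the venue's accessibility."""
    params = {
        "place_id": place_id,
        "fields": "name,formatted_address,formatted_phone_number",
        "key": api_key,
    }
    resp = requests.get(PLACE_DETAILS_URL, params=params, timeout=10)
    resp.raise_for_status()
    result = resp.json().get("result", {})
    return {
        "name": result.get("name"),
        "address": result.get("formatted_address"),
        "phone": result.get("formatted_phone_number"),
        # Left for the volunteer -- the part crowdsourcing is actually for:
        "accessibility_notes": None,
    }
```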

Wheelmap and J’accede are dwarfed by general-interest crowdsourced sites like TripAdvisor and Yelp, which offer some information about accessibility, too. For instance, among the many filters they offer users searching for restaurants—such as price range and cuisine type—TripAdvisor and Yelp both offer a Wheelchair Accessible checkbox. Applying that filter to Parisian establishments brings up about 1,000 restaurants on TripAdvisor and 2,800 on Yelp.

So what can Wheelmap and J’accede provide that the big players can’t? Details. “A person in a wheelchair, for example, will face different obstacles than a partially blind person or a person with cognitive disabilities,” says Birambeau. “These different needs and profiles means that we need highly detailed information about the accessibility of public places.”…(More)”

Critics allege big data can be discriminatory, but is it really bias?


Pradip Sigdyal at CNBC: “…The often cited case of big data discrimination points to research conducted a few years ago by Latanya Sweeney, who heads the Data Privacy Lab at Harvard University.

The case involves Google ad results when searching for certain kinds of names on the internet. In her research, Sweeney found that distinctive-sounding names often associated with black people turned up a disproportionately higher number of arrest-record ads than white-sounding names, by roughly 18 percent. Google has since fixed the issue, although it never publicly stated what it did to correct the problem.
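For readers curious what “disproportionately higher” means statistically, a difference in ad-delivery rates between two groups of names can be tested with a chi-squared test on a contingency table. The sketch below uses invented counts, not Sweeney’s data, and assumes scipy is available.

```python
from scipy.stats import chi2_contingency

# Invented counts: [ad shown, ad not shown] per 100 searches for each group.
black_identifying = [60, 40]
white_identifying = [48, 52]

chi2, p_value, dof, expected = chi2_contingency([black_identifying, white_identifying])
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
# A small p-value indicates the gap in delivery rates is unlikely to be
# chance alone -- the statistical basis for calling it disproportionate.
```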

The proliferation of big data in the last few years has seen other allegations of improper use and bias. These allegations run the gamut, from online price discrimination and consequences of geographic targeting to the controversial use of crime-predicting technology by law enforcement, and a lack of sufficiently representative [data] samples used in some public works decisions.

The benefits of big data need to be balanced with the risks associated with applying modern technologies to address societal issues. Yet data advocates believe that the democratization of data has in essence given power to the people to effect change by transferring ‘tribal knowledge’ from experts to data-savvy practitioners.

Big data is here to stay

According to some advocates, the problem is not so much that ‘big data discriminates’, but that data professionals risk misinterpreting the findings at the heart of data mining and statistical learning. They add that the benefits far outweigh the concerns.

“In my academic research and industry consulting, I have seen tremendous benefits accruing to firms, organizations and consumers alike from the use of data-driven decision-making, data science, and business analytics,” Anindya Ghose, the director of the Center for Business Analytics at New York University’s Stern School of Business, said.

“To be perfectly honest, I do not at all understand these big-data cynics who engage in fear mongering about the implications of data analytics,” Ghose said.

“Here is my message to the cynics and those who keep cautioning us: ‘Deal with it, big data analytics is here to stay forever’.”…(More)”

OSoMe: The IUNI observatory on social media


Clayton A. Davis et al. at PeerJ Preprints: “The study of social phenomena is becoming increasingly reliant on big data from online social networks. Broad access to social media data, however, requires software development skills that not all researchers possess. Here we present the IUNI Observatory on Social Media, an open analytics platform designed to facilitate computational social science. The system leverages a historical, ongoing collection of over 70 billion public messages from Twitter. We illustrate a number of interactive open-source tools to retrieve, visualize, and analyze derived data from this collection. The Observatory, now available at osome.iuni.iu.edu, is the result of a large, six-year collaborative effort coordinated by the Indiana University Network Science Institute.”…(More)”

What’s Wrong with Open-Data Sites–and How We Can Fix Them


César A. Hidalgo at Scientific American: “Imagine shopping in a supermarket where every item is stored in boxes that look exactly the same. Some are filled with cereal, others with apples, and others with shampoo. Shopping would be an absolute nightmare! The design of most open data sites—the (usually government) sites that distribute census, economic and other data to be used and redistributed freely—is not exactly equivalent to this nightmarish supermarket. But it’s pretty close.

During the last decade, such sites—data.gov, data.gov.uk, data.gob.cl, data.gouv.fr, and many others—have been created throughout the world. Most of them, however, still deliver data as sets of links to tables, or links to other sites that are also hard to comprehend. In the best cases, data is delivered through APIs, or application programming interfaces, which are simple data query languages that require a user to have a basic knowledge of programming. So understanding what is inside each dataset requires downloading, opening, and exploring the set in ways that are extremely taxing for users. The analogy of the nightmarish supermarket is not that far off.
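To make the point concrete, here is roughly what the “basic knowledge of programming” the API route demands looks like. Data.gov’s catalog runs CKAN, whose action API exposes a package_search endpoint; the query term below is arbitrary, and the snippet assumes the Python requests library.

```python
import requests

resp = requests.get(
    "https://catalog.data.gov/api/3/action/package_search",
    params={"q": "median income", "rows": 5},
    timeout=10,
)
resp.raise_for_status()
result = resp.json()["result"]

print(f"{result['count']} datasets match")
for package in result["results"]:
    print("-", package["title"])
# Even this "simple" path assumes familiarity with HTTP, JSON, and a
# scripting language -- the barrier the paragraph above describes.
```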

[Screenshot: the U.S. government’s open data site]

The consensus among those who have participated in the creation of open data sites is that current efforts have failed and we need new options. Pointing your browser to these sites should show you why. Most open data sites are badly designed, and here I am not talking about their aesthetics—which are also subpar—but about the conceptual model used to organize and deliver data to users. The design of most open data sites follows a throwing-spaghetti-against-the-wall strategy, where opening more data, instead of opening data better, has been the driving force.

Some of the design flaws of current open data sites are pretty obvious. The datasets that are more important, or could potentially be more useful, are not brought to the surface of these sites or properly organized. In our supermarket analogy, not only do all the boxes look the same, they are also sorted in the order they arrived. This cannot be the best we can do.

There are other design problems that are important, even though they are less obvious. The first is that most sites deliver data in the way in which it is collected, rather than the way in which it is used. People are often looking for data about a particular place, occupation, or industry, or about an indicator (such as income or population). If the data they need comes from the national survey of X, or the bureau of Y, that is secondary and often—although not always—irrelevant to the user. Yet, even though this is not the way we should be giving data back to users, this is often what open data sites do.

The second non-obvious design problem, which is probably the most important, is that most open data sites bury data in what is known as the deep web. The deep web is the fraction of the Internet that is not accessible to search engines, or that cannot be indexed properly. The surface of the web is made of text, pictures, and video, which search engines know how to index. But search engines are not good at knowing that the number that you are searching for is hidden in row 17,354 of a comma-separated file that is inside a zip file linked on a poorly described page of an open data site. In some cases, pressing a radio button and selecting options from a number of dropdown menus can get you the desired number, but this does not help search engines either, because crawlers cannot explore dropdown menus. To make open data really open, we need to make it searchable, and for that we need to bring data to the surface of the web.
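As a concrete illustration of the burden, here is roughly what retrieving that one number asks of a user; the URL and file name below are hypothetical, and every step is invisible to a search-engine crawler.

```python
import csv
import io
import urllib.request
import zipfile

ZIP_URL = "https://data.example.gov/survey_2015.zip"  # hypothetical

# Download the archive and open it in memory.
with urllib.request.urlopen(ZIP_URL) as resp:
    archive = zipfile.ZipFile(io.BytesIO(resp.read()))

# Open the CSV inside, then scan down to the one row we care about.
with archive.open("survey_2015.csv") as raw:  # hypothetical file name
    reader = csv.reader(io.TextIOWrapper(raw, encoding="utf-8"))
    for line_number, row in enumerate(reader, start=1):
        if line_number == 17354:
            print(row)  # the single number the user was looking for
            break
```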

So how do we do that? The solution may not be simple, but it starts by taking design seriously. This is something that I’ve been doing for more than half a decade when creating data visualization engines at MIT. The latest iteration of our design principles is now embodied in DataUSA, a site we created in a collaboration between Deloitte, Datawheel, and my group at MIT.

So what is design, and how do we use it to improve open data sites? My definition of design is simple. Design is discovering the forms that best fulfill a function….(More)”

Design principles for engaging and retaining virtual citizen scientists


Dara M. Wald, Justin Longo, and A. R. Dobell at Conservation Biology: “Citizen science initiatives encourage volunteer participants to collect and interpret data and contribute to formal scientific projects. The growth of virtual citizen science (VCS), facilitated through websites and mobile applications since the mid-2000s, has been driven by a combination of software innovations and mobile technologies, growing scientific data flows without commensurate increases in resources to handle them, and the desire of internet-connected participants to contribute to collective outputs. However, the increasing availability of internet-based activities requires individual VCS projects to compete for the attention of volunteers and promote their long-term retention. We examined program and platform design principles that might allow VCS initiatives to compete more effectively for volunteers, increase productivity of project participants, and retain contributors over time. We surveyed key personnel engaged in managing a sample of VCS projects to identify the principles and practices they pursued for these purposes and led a team in a heuristic evaluation of volunteer engagement, website or application usability, and participant retention. We received 40 completed survey responses (33% response rate) and completed a heuristic evaluation of 20 VCS program sites. The majority of the VCS programs focused on scientific outcomes, whereas the educational and social benefits of program participation, variables that are consistently ranked as important for volunteer engagement and retention, were incidental. Evaluators indicated usability, across most of the VCS program sites, was higher and less variable than the ratings for participant engagement and retention. In the context of growing competition for the attention of internet volunteers, increased attention to the motivations of virtual citizen scientists may help VCS programs sustain the necessary engagement and retention of their volunteers….(More)”

A Framework for Understanding Data Risk


Sarah Telford and Stefaan G. Verhulst at Understanding Risk Forum: “….In creating the policy, OCHA partnered with the NYU Governance Lab (GovLab) and Leiden University to understand the policy and privacy landscape, best practices of partner organizations, and how to assess the data it manages in terms of potential harm to people.

We seek to share our findings with the UR community to get feedback and start a conversation about the risks of using certain types of data in humanitarian and development efforts, and about how to understand those risks.

What is High-Risk Data?

High-risk data is generally understood as data that includes attributes about individuals. This is commonly referred to as PII or personally identifiable information. Data can also create risk when it identifies communities or demographics within a group and ties them to a place (i.e., women of a certain age group in a specific location). The risk comes when this type of data is collected and shared without proper authorization from the individual or the organization acting as the data steward; or when the data is being used for purposes other than what was initially stated during collection.
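One widely used screening heuristic for this kind of group-level risk is small-cell suppression: before sharing, flag any demographic-by-location count that falls below a minimum threshold, since small cells can re-identify the individuals behind apparently aggregate data. The sketch below uses pandas and invented numbers; it illustrates the general technique, not OCHA’s policy.

```python
import pandas as pd

K = 5  # minimum safe cell size; the exact threshold is a policy decision

# Hypothetical aggregate extract: counts per demographic group and place.
df = pd.DataFrame({
    "location":  ["District A", "District A", "District B", "District B"],
    "age_group": ["18-25", "26-40", "18-25", "26-40"],
    "count":     [120, 3, 47, 2],
})

# Cells this small could expose the individuals behind them.
risky = df[df["count"] < K]
print(risky)  # candidates for suppression or merging before release
```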

The potential harms of inappropriately collecting, storing or sharing personal data can affect individuals and communities that may feel exploited or vulnerable as the result of how data is used. This became apparent during the Ebola outbreak of 2014, when a number of data projects were implemented without appropriate risk management measures. One notable example was the collection and use of aggregated call data records (CDRs) to monitor the spread of Ebola, which not only had limited success in controlling the virus, but also compromised the personal information of those in Ebola-affected countries. (See Ebola: A Big Data Disaster).

A Data-Risk Framework

Regardless of an organization’s data requirements, it is useful to think through the potential risks and harms of its collection, storage and use. Together with the Harvard Humanitarian Initiative, we have set up a four-step data-risk process that includes doing an assessment and inventory, understanding risks and harms, and taking measures to counter them (a code sketch of the framework follows the list below).

  1. Assessment – The first step is to understand the context within which the data is being generated and shared. The key questions to ask include: What is the anticipated benefit of using the data? Who has access to the data? What constitutes actionable information for a potential perpetrator? What could trigger the data being used inappropriately?
  2. Data Inventory – The second step is to take inventory of the data and how it is being stored. Key questions include: Where is the data – is it stored locally or hosted by a third party? Where could the data be housed later? Who might gain access to the data in the future? How will we know – is data access being monitored?
  3. Risks and Harms – The next step is to identify potential ways in which risk might materialize. Thinking through various risk-producing scenarios will help prepare staff for incidents. Examples of risks include: your organization’s data being correlated with other data sources to expose individuals; your organization’s raw data being publicly released; and/or your organization’s data system being maliciously breached.
  4. Counter-Measures – The final step is to determine what measures would prevent risk from materializing. Methods and tools include developing data handling policies, implementing access controls to the data, and training staff on how to use data responsibly….(More)
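As promised above, here is a sketch that encodes the four steps and their key questions as a simple checklist a team could walk through for each dataset; it is our illustration of the framework, not an official OCHA or Harvard Humanitarian Initiative tool.

```python
from dataclasses import dataclass, field

@dataclass
class RiskStep:
    name: str
    questions: list[str]
    answers: dict = field(default_factory=dict)  # filled in per dataset

FRAMEWORK = [
    RiskStep("Assessment", [
        "What is the anticipated benefit of using the data?",
        "Who has access to the data?",
        "What constitutes actionable information for a potential perpetrator?",
    ]),
    RiskStep("Data Inventory", [
        "Is the data stored locally or hosted by a third party?",
        "Who might gain access to the data in the future?",
        "Is data access being monitored?",
    ]),
    RiskStep("Risks and Harms", [
        "Could the data be correlated with other sources to expose individuals?",
        "What happens if raw data is released or the system is breached?",
    ]),
    RiskStep("Counter-Measures", [
        "Are data-handling policies and access controls in place?",
        "Are staff trained to use data responsibly?",
    ]),
]

for step in FRAMEWORK:
    print(step.name)
    for question in step.questions:
        print("  -", question)
```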

Big Risks, Big Opportunities: the Intersection of Big Data and Civil Rights


Latest White House report on Big Data charts pathways for fairness and opportunity but also cautions against re-encoding bias and discrimination into algorithmic systems: “Advertisements tailored to reflect previous purchasing decisions; targeted job postings based on your degree and social networks; reams of data informing predictions around college admissions and financial aid. Need a loan? There’s an app for that.

As technology advances and our economic, social, and civic lives become increasingly digital, we are faced with ethical questions of great consequence. Big data and associated technologies create enormous new opportunities to revisit assumptions and instead make data-driven decisions. Properly harnessed, big data can be a tool for overcoming longstanding bias and rooting out discrimination.

The era of big data is also full of risk. The algorithmic systems that turn data into information are not infallible—they rely on the imperfect inputs, logic, probability, and people who design them. Predictors of success can become barriers to entry; careful marketing can be rooted in stereotype. Without deliberate care, these innovations can easily hardwire discrimination, reinforce bias, and mask opportunity.

Because technological innovation presents both great opportunity and great risk, the White House has released several reports on “big data” intended to prompt conversation and advance these important issues. The topics of previous reports on data analytics included privacy, prices in the marketplace, and consumer protection laws. Today, we are announcing the latest report on big data, one centered on algorithmic systems, opportunity, and civil rights.

The first big data report warned of “the potential of encoding discrimination in automated decisions”—that is, discrimination may “be the inadvertent outcome of the way big data technologies are structured and used.” A commitment to understanding these risks and harnessing technology for good prompted us to specifically examine the intersection between big data and civil rights.

Using case studies on credit lending, employment, higher education, and criminal justice, the report we are releasing today illustrates how big data techniques can be used to detect bias and prevent discrimination. It also demonstrates the risks involved, particularly how technologies can deliberately or inadvertently perpetuate, exacerbate, or mask discrimination.
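One concrete example of the kind of detection technique the report points to is the “four-fifths rule” long used in U.S. employment-discrimination screening: compare selection rates across groups and flag any ratio below 0.8. The numbers in the sketch below are hypothetical.

```python
def selection_rate(selected: int, total: int) -> float:
    """Fraction of a group that received the favorable outcome."""
    return selected / total

# Hypothetical outcomes of an automated screening system.
rate_a = selection_rate(selected=90, total=300)  # group A: 30%
rate_b = selection_rate(selected=45, total=250)  # group B: 18%

impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"impact ratio = {impact_ratio:.2f}")
if impact_ratio < 0.8:
    print("Below the four-fifths threshold: audit the inputs and the model.")
```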

The purpose of the report is not to offer remedies to the issues it raises, but rather to identify these issues and prompt conversation, research—and action—among technologists, academics, policy makers, and citizens, alike.

The report includes a number of recommendations for advancing work in this nascent field of data and ethics. These include investing in research; broadening and diversifying technical leadership; cross-training and expanding literacy on data discrimination; bolstering accountability; and creating standards for use within both the government and the private sector. It also calls on computer and data science programs and professionals to promote fairness and opportunity as part of an overall commitment to the responsible and ethical use of data.

Big data is here to stay; the question is how it will be used: to advance civil rights and opportunity, or to undermine them….(More)”

Hail the maintainers


Andrew Russell & Lee Vinsel at AEON: “The trajectory of ‘innovation’ from core, valued practice to slogan of dystopian societies is not entirely surprising, at a certain level. There is a formulaic feel: a term gains popularity because it resonates with the zeitgeist, reaches buzzword status, then suffers from overexposure and cooptation. Right now, the formula has brought society to a question: after ‘innovation’ has been exposed as hucksterism, is there a better way to characterise relationships between society and technology?

There are three basic ways to answer that question. First, it is crucial to understand that technology is not innovation. Innovation is only a small piece of what happens with technology. This preoccupation with novelty is unfortunate because it fails to account for technologies in widespread use, and it obscures how many of the things around us are quite old. In his book, Shock of the Old (2007), the historian David Edgerton examines technology-in-use. He finds that common objects, like the electric fan and many parts of the automobile, have been virtually unchanged for a century or more. When we take this broader perspective, we can tell different stories with drastically different geographical, chronological, and sociological emphases. The stalest innovation stories focus on well-to-do white guys sitting in garages in a small region of California, but human beings in the Global South live with technologies too. Which ones? Where do they come from? How are they produced, used, repaired? Yes, novel objects preoccupy the privileged, and can generate huge profits. But the most remarkable tales of cunning, effort, and care that people direct toward technologies exist far beyond the same old anecdotes about invention and innovation.

Second, by dropping innovation, we can recognise the essential role of basic infrastructures. ‘Infrastructure’ is a most unglamorous term, the type of word that would have vanished from our lexicon long ago if it didn’t point to something of immense social importance. Remarkably, in 2015 ‘infrastructure’ came to the fore of conversations in many walks of American life. In the wake of a fatal Amtrak crash near Philadelphia, President Obama wrestled with Congress to pass an infrastructure bill that Republicans had been blocking, but finally approved in December 2015. ‘Infrastructure’ also became the focus of scholarly communities in history and anthropology, even appearing 78 times on the programme of the annual meeting of the American Anthropological Association. Artists, journalists, and even comedians joined the fray, most memorably with John Oliver’s hilarious sketch starring Edward Norton and Steve Buscemi in a trailer for an imaginary blockbuster on the dullest of subjects. By early 2016, the New York Review of Books brought the ‘earnest and passive word’ to the attention of its readers, with a depressing essay titled ‘A Country Breaking Down’.

Despite recurring fantasies about the end of work, the central fact of our industrial civilisation is labour, most of which falls far outside the realm of innovation

The best of these conversations about infrastructure move away from narrow technical matters to engage deeper moral implications. Infrastructure failures – train crashes, bridge failures, urban flooding, and so on – are manifestations of and allegories for America’s dysfunctional political system, its frayed social safety net, and its enduring fascination with flashy, shiny, trivial things. But, especially in some corners of the academic world, a focus on the material structures of everyday life can take a bizarre turn, as exemplified in work that grants ‘agency’ to material things or wraps commodity fetishism in the language of high cultural theory, slick marketing, and design. For example, Bloomsbury’s ‘Object Lessons’ series features biographies of and philosophical reflections on human-built things, like the golf ball. What a shame it would be if American society matured to the point where the shallowness of the innovation concept became clear, but the most prominent response was an equally superficial fascination with golf balls, refrigerators, and remote controls.

Third, focusing on infrastructure or on old, existing things rather than novel ones reminds us of the absolute centrality of the work that goes into keeping the entire world going…..


We organised a conference to bring the work of the maintainers into clearer focus. More than 40 scholars answered a call for papers asking, ‘What is at stake if we move scholarship away from innovation and toward maintenance?’ Historians, social scientists, economists, business scholars, artists, and activists responded. They all want to talk about technology outside of innovation’s shadow.

One important topic of conversation is the danger of moving too triumphantly from innovation to maintenance. There is no point in keeping the practice of hero-worship that merely changes the cast of heroes without confronting some of the deeper problems underlying the innovation obsession. One of the most significant problems is the male-dominated culture of technology, manifest in recent embarrassments such as the flagrant misogyny in the ‘#GamerGate’ row a couple of years ago, as well as the persistent pay gap between men and women doing the same work.

There is an urgent need to reckon more squarely and honestly with our machines and ourselves. Ultimately, emphasising maintenance involves moving from buzzwords to values, and from means to ends. In formal economic terms, ‘innovation’ involves the diffusion of new things and practices. The term is completely agnostic about whether these things and practices are good. Crack cocaine, for example, was a highly innovative product in the 1980s, which involved a great deal of entrepreneurship (called ‘dealing’) and generated lots of revenue. Innovation! Entrepreneurship! Perhaps this point is cynical, but it draws our attention to a perverse reality: contemporary discourse treats innovation as a positive value in itself, when it is not.

Entire societies have come to talk about innovation as if it were an inherently desirable value, like love, fraternity, courage, beauty, dignity, or responsibility. Innovation-speak worships at the altar of change, but it rarely asks who benefits, to what end? A focus on maintenance provides opportunities to ask questions about what we really want out of technologies. What do we really care about? What kind of society do we want to live in? Will this help get us there? We must shift from means, including the technologies that underpin our everyday actions, to ends, including the many kinds of social beneficence and improvement that technology can offer. Our increasingly unequal and fearful world would be grateful….(More)”

Crowdsourced Deliberation: The Case of the Law on Off-Road Traffic in Finland


Tanja Aitamurto and Hélène Landemore in Policy & Internet: “This article examines the emergence of democratic deliberation in a crowdsourced law reform process. The empirical context of the study is a crowdsourced legislative reform in Finland, initiated by the Finnish government. The findings suggest that online exchanges in the crowdsourced process qualify as democratic deliberation according to the classical definition. We introduce the term “crowdsourced deliberation” to mean an open, asynchronous, depersonalized, and distributed kind of online deliberation occurring among self-selected participants in the context of an attempt by government or another organization to open up the policymaking or lawmaking process. The article helps to characterize the nature of crowdsourced policymaking and to understand its possibilities as a practice for implementing open government principles. We aim to make a contribution to the literature on crowdsourcing in policymaking, participatory and deliberative democracy and, specifically, the newly emerging subfield in deliberative democracy that focuses on “deliberative systems.”…(More)”

Impact of open government: Mapping the research landscape


Stephen Davenport at OGP Blog: “Government reformers and development practitioners in the open government space are experiencing the heady times associated with a newly-defined agenda. The opportunity for innovation and positive change can at times feel boundless. Yet, working in a nascent field also means a relative lack of “proven” tools and solutions (to the extent such things ever exist in development).

More research on the potential for open government initiatives to improve lives is well underway. However, keeping up with the rapidly evolving landscape of ongoing research, emerging hypotheses, and high-priority knowledge gaps has been a challenge, even as investment in open government activities has accelerated. This becomes increasingly important as we gather to talk progress at the OGP Africa Regional Meeting 2016 and GIFT consultations in Cape Town next week (May 4-6).

Who’s doing what?
To advance the state of play, a new report commissioned by the World Bank, “Open Government Impact and Outcomes: Mapping the Landscape of Ongoing Research”, categorizes and takes stock of existing research. The report represents the first output of a newly-formed consortium that aims to generate practical, evidence-based guidance for open government stakeholders, building on and complementing the work of organizations across the academic-practitioner spectrum.

The mapping exercise led to the creation of an interactive platform with detailed information on how to find out more about each of the research projects covered, organized by a new typology for open government interventions. The inventory is limited in scope given practical and other considerations: it includes only projects that are currently underway and that are relatively large and international in nature, and it is meant to be a forward-looking overview rather than a literature review.

Charting a course: How can the World Bank add value?
The scope for increasing the open government knowledge base remains vast. The report suggests that, given its role as a lender, convener, and policy advisor, the World Bank is well positioned to complement and support existing research in a number of ways, such as:

  • Taking a demand-driven approach, focusing on specific areas where it can identify lessons for stakeholders seeking to turn open government enthusiasm into tangible results.
  • Linking researchers with governments and practitioners to study specific areas of interest (in particular, access to information and social accountability interventions).
  • Evaluating the impact of open government reforms against baseline data that may not be public yet, but that are accessible to the World Bank.
  • Contributing to a better understanding of the role and impact of ICTs through work like the recently published study that examines the relationship between digital citizen engagement and government responsiveness.
  • Ensuring that World Bank loans and projects are conceived as opportunities for knowledge generation, while incorporating the most relevant and up-to-date evidence on what works in different contexts.
  • Leveraging its involvement in the Open Government Partnership to help stakeholders make evidence-based reform commitments….(More)