Five ways tech is crowdsourcing women’s empowerment


Zara Rahman in The Guardian: “Around the world, women’s rights advocates are crowdsourcing their own data rather than relying on institutional datasets.

Citizen-generated data is especially important for women’s rights issues. In many countries, the lack of women in positions of institutional power, combined with slow, bureaucratic systems and a lack of prioritisation of women’s rights issues, means data isn’t gathered on relevant topics, let alone appropriately responded to by the state.

Even when data is gathered by institutions, societal pressures may mean it remains inadequate. In the case of gender-based violence, for instance, women often suffer in silence, worrying nobody will believe them or that they will be blamed. Providing a way for women to contribute data anonymously or, if they so choose, with their own details, can be key to documenting violence and understanding the scale of a problem, and thus deciding upon appropriate responses.

Crowdsourcing data on street harassment in Egypt

Using the open source platform Ushahidi, HarassMap provides women with a way to document incidents of street harassment. The project, which began in 2010, is raising awareness of how common street harassment is, giving women’s rights advocates a concrete way to highlight the scale of the problem….

Documenting experiences of reporting sexual harassment and violence to the police in India

Last year, The Ladies Finger, a women’s zine based in India, partnered with Amnesty International to support its Ready to Report campaign, which aimed to make it easier for survivors of sexual violence to file a police complaint. Through social media and word of mouth, it asked the community whether they had experiences to share about reporting sexual assault and harassment to the police. Using these crowdsourced leads, The Ladies Finger’s reporters spoke to people willing to share their experiences and put together a series of detailed, contextualised stories. These included a piece that provoked a national outcry and spurred the Uttar Pradesh government to make an arrest for stalking after six months of inaction….

Reporting sexual violence in Syria

Women Under Siege is a global project by the Women’s Media Center that investigates how rape and sexual violence are used in conflicts. Its Syria project crowdsources data on sexual violence in the war-torn country. Like HarassMap, it uses the Ushahidi platform to geolocate where acts of sexual violence take place. Where possible, initial reports are contextualised with deeper media reporting on the case in question….

Finding respectful gynaecologists in India

After recognising that many women in her personal networks were having bad experiences with gynaecologists in India, Delhi-based Amba Azaad began – with the help of her friends – putting together Gynaecologists We Trust, a list of gynaecologists who have treated patients respectfully. As the site says, “Finding doctors who are on our side is hard enough, and when it comes to something as intimate as our internal plumbing, it’s even more difficult.”…

Ending tech-related violence against women

In 2011, Take Back the Tech, an initiative from the Association for Progressive Communications, started a map gathering incidents of tech-related violence against women. Campaign coordinator Sara Baker says crowdsourcing data on this topic is particularly useful as “victims/survivors are often forced to tell their stories repeatedly in an attempt to access justice with little to no action taken on the part of authorities or intermediaries”. Rather than telling that story multiple times and seeing it go nowhere, their initiative gives people “the opportunity to make their experience visible (even if anonymously) and makes them feel like someone is listening and taking action”….(More)

Private Provision of Public Goods via Crowdfunding


Paper by Robert Chovanculiak and Marek Hudík: “Private provision of public goods is typically associated with three main problems: (1) high organization costs, (2) the assurance problem, and (3) the free-rider problem. We argue that technologies which enable crowdfunding (the method of funding projects by raising small amounts of money from a large number of people via the internet) have made the overall conditions for private provision of public goods more favorable: these technologies lowered the organization costs and made it possible to employ more efficient mechanisms that reduce the assurance and free-rider problems. It follows that if the reason for government provision of public goods is higher efficiency, as the standard theory suggests, then with the emergence of crowdfunding we should observe a decline in the government’s role in this area….(More)”

Crowdsourcing City Government: Using Tournaments to Improve Inspection Accuracy


Edward Glaeser, Andrew Hillis, Scott Kominers and Michael Luca in American Economic Review: Papers and Proceedings: “The proliferation of big data makes it possible to better target city services like hygiene inspections, but city governments rarely have the in-house talent needed to develop prediction algorithms. Cities could hire consultants, but a cheaper alternative is to crowdsource competence by making data public and offering a reward for the best algorithm. A simple model suggests that open tournaments dominate consulting contracts when cities can tolerate risk and when there is enough labor with low opportunity costs. We also report on an inexpensive Boston-based restaurant tournament, which yielded algorithms that proved reasonably accurate when tested “out-of-sample” on hygiene inspections….(More)”

 

Platform for Mumbai’s slum entrepreneurs


Springwise: “We recently saw an initiative that empowered startup talent in a Finnish refugee camp, and now Design Museum Dharavi is a mobile museum that will provide a platform for makers in the Mumbai neighborhood.

The initiative is the brainchild of artist Jorge Rubio and the Creative Industries Fund NL. Taking the model of a pop-up, it will stop at various locations throughout the neighborhood. Despite being an ‘informal settlement’, Dharavi is famed for producing very little waste due to a culture of recycling and repurposing. The mobile museum will showcase local makers, enable them to connect with potential clients and run workshops, ultimately elevating global social perceptions of life in the so-called ‘slums’. Home to over a million people, Dharavi has the additional tourism pull of appearing in the film Slumdog Millionaire…..(More)”

A Gargantuan Challenge for The Megalopolis: Mexico City calls citizens to help map its complex public bus system


“Mexico City is the largest and oldest urban agglomeration in the Americas. The city is home to an incredible diversity of people and cultures, but its size and diversity also pose certain challenges. At such a scale (the metropolitan area measures 4,887 mi²), transportation is one of the city’s main problems. Finding ways to improve how people move within it requires imagination and cooperation from decision makers and society alike.

The scale and dynamism of Mexico City’s public transport system make it challenging to generate quality information. Traditional processes for generating mobility data are time-consuming and expensive. Given this, the best alternative for the city is to involve transport users in generating the information themselves.

The megalopolis lacks an updated, open database of its more than 1,500 bus routes. To tackle this problem, Laboratorio para la Ciudad (Mexico City’s experimental office and creative think-tank, reporting to the Mayor) partnered with 12 organizations, including NGOs and other government offices, to develop Mapatón CDMX: a crowdsourcing and gamification experiment to map the city’s bus routes through civic collaboration and technology.

After one year of designing and testing a strategy, the team behind Mapatón CDMX is calling on citizens to map the public transport system by taking part in a citywide game from January 29 to February 14, 2016. The game’s goal is to map routes of licenced public transport (buses, minibuses and vans) from start to finish in order to score points, which is done through an app for Android devices that gathers GPS data from the user inside the bus.

The mappers will participate individually or in groups with friends and family for two weeks. As an incentive, once the mapping marathon is finished, the participants with the highest scores will earn cash prizes and electronic devices. (A smart algorithm creates incentives to map the longest or most ignored routes, giving mappers extra points.) Most valuable of all, the resulting data will be openly available at the end of February 2016, much faster and more cheaply than through traditional processes.
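Mapatón’s actual scoring algorithm has not been disclosed, but the incentive design described above can be illustrated with a minimal sketch. The function, parameters, and point values below are entirely hypothetical: points grow with route length, and a rarity bonus shrinks as more participants cover the same route, steering mappers toward the longest or most ignored routes.

```python
def route_points(length_km, times_already_mapped,
                 base_per_km=10, rarity_bonus=60):
    """Hypothetical Mapaton-style score for one mapped bus route.

    Longer routes earn more points, and routes that few other
    participants have covered earn a shrinking rarity bonus.
    """
    rarity = rarity_bonus / (1 + times_already_mapped)
    return base_per_km * length_km + rarity

# A long, never-mapped route outscores the same route mapped often:
print(route_points(15, 0))  # 10*15 + 60/1 -> 210.0
print(route_points(15, 5))  # 10*15 + 60/6 -> 160.0
```

Under this (assumed) scheme, a rational mapper maximises points by seeking out long routes no one else has recorded, which is exactly the coverage the organisers want.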

Mapatón CDMX is an innovative and effective way to generate updated, open information about transport routes, as the game harnesses the collective intelligence of the gargantuan city. Organisers expect the open database to be used by anyone to create, for example, data-driven policy, academic analyses, maps for users, applications, and visualizations, among many other digital products….(More)”

Crowdsourcing Diagnosis for Patients With Undiagnosed Illnesses: An Evaluation of CrowdMed


Paper by Ashley N. D. Meyer et al. in the Journal of Medical Internet Research: “Background: Despite visits to multiple physicians, many patients remain undiagnosed. A new online program, CrowdMed, aims to leverage the “wisdom of the crowd” by giving patients an opportunity to submit their cases and interact with case solvers to obtain diagnostic possibilities.

Objective: To describe CrowdMed and provide an independent assessment of its impact.

Methods: Patients submit their cases online to CrowdMed and case solvers sign up to help diagnose patients. Case solvers attempt to solve patients’ diagnostic dilemmas and often have an interactive online discussion with patients, including an exchange of additional diagnostic details. At the end, patients receive detailed reports containing diagnostic suggestions to discuss with their physicians and fill out surveys about their outcomes. We independently analyzed data collected from cases between May 2013 and April 2015 to determine patient and case solver characteristics and case outcomes.

Results: During the study period, 397 cases were completed. These patients had previously visited a median of 5 physicians, incurred a median of US $10,000 in medical expenses, spent a median of 50 hours researching their illnesses online, and had symptoms for a median of 2.6 years. During this period, 357 active case solvers participated, of whom 37.9% (132/348) were male and 58.3% (208/357) worked or studied in the medical industry. About half (50.9%, 202/397) of patients were likely to recommend CrowdMed to a friend, 59.6% (233/391) reported that the process gave insights that led them closer to the correct diagnoses, 57% (52/92) reported estimated decreases in medical expenses, and 38% (29/77) reported estimated improvements in school or work productivity.

Conclusions: Some patients with undiagnosed illnesses reported receiving helpful guidance from crowdsourcing their diagnoses during their difficult diagnostic journeys. However, further development and use of crowdsourcing methods to facilitate diagnosis requires long-term evaluation as well as validation to account for patients’ ultimate correct diagnoses….(More)”

Human-machine superintelligence pegged as key to solving global problems


Ravi Mandalia at Dispatch Tribunal: “Complex global problems such as climate change and geopolitical conflict need a new approach if we want to solve them, and researchers have suggested that human-machine superintelligence could be the key.

These so-called ‘wicked’ problems are among the most dire and demand our immediate attention, and researchers from the Human Computation Institute (HCI) and Cornell University have presented, in an article published in the journal Science, a new vision of human computation that could help solve them.

The scientists behind the article cite how the power of human computation has pushed traditional limits to new heights. Humans still outperform machines at a great many things – cognitive ability is one of the key areas – but if their powers are combined with those of machines, the result is multidimensional collaborative networks that achieve what traditional problem-solving cannot.

Researchers have already shown that micro-tasking can help with some complex problems, including building the world’s most complete map of human retinal neurons; however, this approach isn’t always viable for the much more complex problems of today, and an entirely new approach is required to solve “wicked problems” – those that involve many interacting systems that are constantly changing, and whose solutions have unforeseen consequences (e.g., corruption resulting from financial aid given in response to a natural disaster).

Recently developed human computation technologies that provide real-time access to crowd-based inputs could enable the creation of more flexible collaborative environments, and such setups are better suited to addressing the most challenging issues.

This idea is already taking shape in several human computation projects, including YardMap.org, which Cornell launched in 2012 to map global conservation efforts one parcel at a time.

“By sharing and observing practices in a map-based social network, people can begin to relate their individual efforts to the global conservation potential of living and working landscapes,” says Janis Dickinson, Professor and Director of Citizen Science at the Cornell Lab of Ornithology.

YardMap allows participants to interact and build on each other’s work – something that crowdsourcing alone cannot achieve. The project serves as an important model for how such bottom-up, socially networked systems can bring about scalable change in how we manage residential landscapes.

HCI has recently set out to use crowd-power to accelerate Cornell-based Alzheimer’s disease research. WeCureAlz.com combines two successful microtasking systems into an interactive analytic pipeline that builds blood flow models of mouse brains. The stardust@home system, which was used to search for comet dust in one million images of aerogel, is being adapted to identify stalled blood vessels, which will then be pinpointed in the brain by a modified version of the EyeWire system….(More)”

Can crowdsourcing decipher the roots of armed conflict?


Stephanie Kanowitz at GCN: “Researchers at Pennsylvania State University and the University of Texas at Dallas are proving that there’s accuracy, not just safety, in numbers. The Correlates of War project, a long-standing effort that studies the history of warfare, is now experimenting with crowdsourcing as a way to more quickly and inexpensively create a global conflict database that could help explain when and why countries go to war.

The goal is to facilitate the collection, dissemination and use of reliable data in international relations, but a byproduct has emerged: the development of technology that uses machine learning and natural language processing to efficiently, cost-effectively and accurately create databases from news articles that detail militarized interstate disputes.

The project is in its fifth iteration, having released the fourth set of Militarized Interstate Dispute (MID) data in 2014. To create those earlier versions, researchers paid subject-matter experts such as political scientists to read and hand-code newswire articles about disputes, identifying features of possible militarized incidents. Now, however, they’re soliciting help from anyone and everyone — and finding the results are much the same as what the experts produced, except that they come in faster and at significantly less expense.

As news articles come across the wire, the researchers pull them and formulate questions that help evaluate the military events described. Next, the articles and questions are loaded onto Amazon Mechanical Turk, a crowdsourcing marketplace. The project assigns articles to readers, who typically spend about 10 minutes reading an article and responding to the questions. The readers submit their answers to the project researchers, who review them. The project assigns the same article to multiple workers and uses computer algorithms to combine the data into one annotation.

A systematic comparison of the crowdsourced responses with those of trained subject-matter experts showed that the crowdsourced work was accurate for 68 percent of the news reports coded. More important, the aggregation of answers for each article showed that common answers from multiple readers strongly correlated with correct coding. This allowed researchers to easily flag the articles that required deeper expert involvement and process the majority of the news items in near-real time and at limited cost….(More)”
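The aggregation step described above can be sketched with a simple majority vote. The function below is an illustrative sketch under assumed names and thresholds, not the project’s actual algorithm: it combines several readers’ answers to one question into a single annotation and flags low-agreement items for expert review, mirroring the finding that agreement among multiple readers correlates with correct coding.

```python
from collections import Counter

def aggregate_answers(responses, agreement_threshold=0.6):
    """Combine crowd answers to one question into a single annotation.

    responses: answers from different readers of the same article.
    Items where readers disagree are flagged for a subject-matter
    expert, so only a minority needs costly expert coding.
    """
    counts = Counter(responses)
    answer, votes = counts.most_common(1)[0]  # modal answer
    agreement = votes / len(responses)
    return {"answer": answer,
            "agreement": agreement,
            "needs_expert": agreement < agreement_threshold}

result = aggregate_answers(["use of force", "use of force", "no incident"])
print(result["answer"])        # use of force
print(result["needs_expert"])  # False (2/3 agreement clears the 0.6 bar)
```

The 0.6 threshold is invented for illustration; in practice it would be tuned against a sample of expert-coded articles.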

How Much Development Data Is Enough?


Keith D. Shepherd at Project Syndicate: “Rapid advances in technology have dramatically lowered the cost of gathering data. Sensors in space, the sky, the lab, and the field, along with newfound opportunities for crowdsourcing and widespread adoption of the Internet and mobile telephones, are making large amounts of information available to those for whom it was previously out of reach. A small-scale farmer in rural Africa, for example, can now access weather forecasts and market prices at the tap of a screen.

This data revolution offers enormous potential for improving decision-making at every level – from the local farmer to world-spanning development organizations. But gathering data is not enough. The information must also be managed and evaluated – and doing this properly can be far more complicated and expensive than the effort to collect it. If the decisions to be improved are not first properly identified and analyzed, there is a high risk that much of the collection effort could be wasted or misdirected.

This conclusion is itself based on empirical analysis. The evidence is weak, for example, that monitoring initiatives in agriculture or environmental management have had a positive impact. Quantitative analysis of decisions across many domains, including environmental policy, business investments, and cyber security, has shown that people tend to overestimate the amount of data needed to make a good decision or misunderstand what type of data are needed.

Furthermore, grave errors can occur when large data sets are mined using machine algorithms without first having properly examined the decision that needs to be made. There are many examples of cases in which data mining has led to the wrong conclusion – including in medical diagnoses and legal cases – because experts in the field were not consulted and critical information was left out of the analysis.

Decision science, which combines understanding of behavior with universal principles of coherent decision-making, limits these risks by pairing empirical data with expert knowledge. If the data revolution is to be harnessed in the service of sustainable development, the best practices of this field must be incorporated into the effort.

The first step is to identify and frame frequently recurring decisions. In the field of development, these include large-scale decisions such as spending priorities – and thus budget allocations – by governments and international organizations. But it also includes choices made on a much smaller scale: farmers pondering which crops to plant, how much fertilizer to apply, and when and where to sell their produce.

The second step is to build a quantitative model of the uncertainties in such decisions, including the various triggers, consequences, controls, and mitigants, as well as the different costs, benefits, and risks involved. Incorporating – rather than ignoring – difficult-to-measure, highly uncertain factors leads to the best decisions…..

The third step is to compute the value of obtaining additional information – something that is possible only if the uncertainties in all of the variables have been quantified. The value of information is the amount a rational decision-maker would be willing to pay for it. So we need to know where additional data will have value for improving a decision and how much we should spend to get it. In some cases, no further information may be needed to make a sound decision; in others, acquiring further data could be worth millions of dollars….(More)”
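The value-of-information calculation described above has a standard form in decision analysis: the expected value of perfect information (EVPI) is the expected payoff when acting with full knowledge of the uncertain state, minus the expected payoff of the best decision made without it. The numbers below are invented for illustration, loosely echoing the small-scale farmer deciding what to plant.

```python
# Uncertain states of the world with subjective probabilities.
p = {"good_rains": 0.6, "drought": 0.4}

# Hypothetical payoffs (e.g. net income) per action per state.
payoff = {
    "plant_maize":   {"good_rains": 100, "drought": -20},
    "plant_sorghum": {"good_rains":  60, "drought":  40},
}

def expected_payoff(action):
    return sum(p[s] * payoff[action][s] for s in p)

# Best achievable without further information:
best_without_info = max(expected_payoff(a) for a in payoff)

# With perfect information, pick the best action in each state first:
with_perfect_info = sum(p[s] * max(payoff[a][s] for a in payoff)
                        for s in p)

# A rational decision-maker would pay up to this much for the data:
evpi = with_perfect_info - best_without_info
print(round(evpi, 2))
```

In this toy case both crops have the same expected payoff without information, yet the EVPI is positive: a rainfall forecast has real monetary value because it changes which crop the farmer should plant.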

Collective Intelligence in Law Reforms: When the Logic of the Crowds and the Logic of Policymaking Collide


Paper by Tanja Aitamurto: “…shows how the two virtues of collective intelligence – cognitive diversity and large crowds – turn into perils in crowdsourced policymaking. That is because of a conflict between the logic of the crowds and the logic of policymaking. The crowd’s logic differs from that of traditional policymaking in several respects. To mention some: in traditional policymaking, a small group of experts makes proposals for the policy, whereas in crowdsourced policymaking it is a large, anonymous crowd with mixed levels of expertise. The crowd proposes atomic ideas, whereas traditional policymaking is used to dealing with holistic, synthesized proposals. Drawing on data from a crowdsourced law-making process in Finland, the paper shows how the logics of the crowds and of policymaking collide in practice. The conflict prevents policymaking from fully benefiting from the crowd’s input, and it also hinders governments from adopting crowdsourcing more widely as a way of deploying open policymaking practices….(More)”