MIT Crowdsources the Next Great (free) IQ Test


ThePsychReport: “Raven’s Matrices have long been a gold standard for psychologists needing to measure general intelligence. But the good ones, the ones scientists like to use, are too expensive for most research projects.

Christopher Chabris, associate professor of psychology at Union College, and David Engel, postdoctoral associate at MIT Sloan School of Management, think the public can help. They recently launched a campaign to crowdsource “the next great IQ test.” The Matrix Reasoning Challenge, created through MIT’s Center for Collective Intelligence with Anita Woolley and Tom Malone, calls on the public to design and submit matrix puzzles – 3×3 grids that ask subjects to complete a pattern by filling in a missing square.
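The format is easy to picture with a toy numeric analogue (a sketch only: real submissions are visual pattern grids, and the fixed arithmetic rule used here is a hypothetical stand-in for a visual pattern):

```python
# Toy numeric analogue of a 3x3 matrix puzzle: every cell follows one
# rule (a fixed row and column step), and the bottom-right cell is
# blanked out for the subject to infer from the pattern.

def make_puzzle(start=1, row_step=3, col_step=1):
    """Build a 3x3 grid where cell (r, c) = start + r*row_step + c*col_step."""
    grid = [[start + r * row_step + c * col_step for c in range(3)]
            for r in range(3)]
    answer = grid[2][2]
    grid[2][2] = None  # the missing square
    return grid, answer

grid, answer = make_puzzle()
for row in grid:
    print(row)
print("Missing square:", answer)  # 9
```

With the defaults this prints rows [1, 2, 3], [4, 5, 6], [7, 8, None]; solving the puzzle means recovering the step rule and filling in 9.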

Chabris says they aren’t trying to compete with commercially available tests used for diagnostic or clinical purposes, but rather want to provide a trustworthy and free alternative for scientists. Because these types of puzzles are nonverbal, culturally neutral, and objective, they have wide-ranging applications and are particularly useful when conducting research across various demographics. If this project is successful, a lot more scientists could do a lot more research.

A simple example of a matrix puzzle. Source: Matrix Reasoning Challenge

“Researchers typically don’t have that much money,” Chabris said. “They can’t afford pay-per-use tests. Sometimes they have no research budgets, or if they do, they’re not large enough for that kind of thing. Our real goal is to create something useful for researchers.”

Through the Matrix Reasoning Challenge, Chabris and Engel also hope to better understand how crowdsourcing can be used to problem-solve in social and cognitive sciences.

Social scientists already widely use crowdsourcing sites like Amazon’s Mechanical Turk to recruit participants for their studies, but the matrix project is different in that it seeks to tap into the public’s expertise to help solve scientific problems. Scientists in computer science and bioinformatics have been able to harness this expertise to yield some incredible results. Using TopCoder.com, NASA was able to find a more efficient way to deploy solar panels on the International Space Station. Harvard Medical School was able to develop better software for analyzing immune-system genes. With The Matrix Reasoning Challenge, Chabris and Engel are beginning to explore crowdsourcing’s potential in the social sciences.”

Needed: A New Generation of Game Changers to Solve Public Problems


Beth Noveck: “In order to change the way we govern, it is important to train and nurture a new generation of problem solvers who possess the multidisciplinary skills to become effective agents of change. That’s why we at the GovLab have launched The GovLab Academy with the support of the Knight Foundation.
In an effort to help people in their own communities become more effective at developing and implementing creative solutions to compelling challenges, The GovLab Academy is offering two new training programs:
1) An online platform with an unbundled and evolving set of topics, modules and instructors on innovations in governance, including themes such as big and open data and crowdsourcing and forthcoming topics on behavioral economics, prizes and challenges, open contracting and performance management for governance;
2) Gov 3.0: A curated and sequenced, 14-week mentoring and training program.
While the online platform is always freely available, Gov 3.0 begins on January 29, 2014 and we invite you to participate. Please forward this email to your networks and help us spread the word about the opportunity to participate.
Please consider applying (individuals or teams may apply), if you are:

  • an expert in communications, public policy, law, computer science, engineering, business or design who wants to expand your ability to bring about social change;

  • a public servant who wants to bring innovation to your job;

  • someone with an important idea for positive change but who lacks key skills or resources to realize the vision;

  • interested in joining a network of like-minded, purpose-driven individuals across the country; or

  • someone who is passionate about using technology to solve public problems.

The program includes live instruction and conversation every Wednesday from 5:00–6:30 PM EST for 14 weeks starting Jan 29, 2014. You will be able to participate remotely via Google Hangout.

Gov 3.0 will allow you to apply evolving technology to the design and implementation of effective solutions to public interest challenges. It will give you an overview of the most current approaches to smarter governance and help you improve your skills in collaboration, communication, and developing and presenting innovative ideas.

Over 14 weeks, you will develop a project and a plan for its implementation, including a long and short description, a presentation deck, a persuasive video and a project blog. Last term’s projects covered such diverse issues as post-Fukushima food safety, science literacy for high schoolers and prison reform for the elderly. In every case, the goal was to identify realistic strategies for making a difference quickly.  You can read the entire Gov 3.0 syllabus here.

The program will include national experts and instructors in technology and governance both as guests and as mentors to help you design your project. Last term’s mentors included current and former officials from the White House and various state, local and international governments, academics from a variety of fields, and prominent philanthropists.

People who complete the program will have the opportunity to apply for a special fellowship to pursue their projects further.

Previously taught only on campus, Gov 3.0 is now being offered in beta as an online program. This is not a MOOC. It is a mentoring-intensive coaching experience. To maximize the quality of the experience, enrollment is limited.

Please submit your application by January 22, 2014. Accepted applicants (individuals and teams) will be notified on January 24, 2014. We hope to expand the program in the future so please use the same form to let us know if you would like to be kept informed about future opportunities.”

Innovation by Competition: How Challenges and Competition Get the Most Out of the Crowd


Innocentive: “Crowdsourcing has become the 21st century’s alternative to problem solving in place of traditional employee-based strategies. It has become the modern solution to provide for needed services, content, and ideas. Crowdsourced ideas are paving the way for today’s organizations to tackle innovation challenges that confront them in today’s competitive global marketplace. To put it all in perspective, crowds used to be thought of as angry mobs. Today, crowds are more like friendly and helpful contributors. What an interesting juxtaposition, eh?
Case studies proving the effectiveness of crowdsourcing to conquer innovation challenges, particularly in the fields of science and engineering, abound. Despite the fact that success stories involving crowdsourcing are plentiful, very few firms are really putting its full potential to use. ALS and AIDS research have both made huge advances thanks to crowdsourcing, just to name a couple of examples.
Biologists at the University of Washington were able to map the structure of an AIDS-related virus thanks to the collaboration involved with crowdsourcing. How did they do this? With the help of gamers playing a game designed to help get the information the University of Washington needed. It was a solution that remained unattainable for over a decade, until enough top-notch scientific minds from around the world were engaged through effective crowdsourcing techniques.
Dr. Seward Rutkove discovered an ALS biomarker to accurately measure the progression of the disease in patients through the crowdsourcing tactics utilized in a prize contest by an organization named Prize4Life, who utilized our Challenge Driven Innovation approach to engage the crowd.
The truth is, the concept of crowdsourcing to innovate has been around for centuries. But with the growing connectedness of the world due to sheer Internet access, the power and ability to effectively crowdsource has increased exponentially. It’s time for corporations to realize this, and stop relying on stale sources of innovation…”

EPA Launches New Citizen Science Website


Press Release: “The U.S. Environmental Protection Agency has revamped its Citizen Science website to provide new resources and success stories to assist the public in conducting scientific research and collecting data to better understand their local environment and address issues of concern. The website can be found at www.epa.gov/region2/citizenscience.
“Citizen Science is an increasingly important part of EPA’s commitment to using sound science and technology to protect people’s health and safeguard the environment,” said Judith A. Enck, EPA Regional Administrator. “The EPA encourages the public to use the new website as a tool in furthering their scientific investigations and developing solutions to pollution problems.”
The updated website now offers detailed information about air, water and soil monitoring, including recommended types of equipment and resources for conducting investigations. It also includes case studies and videos that showcase successful citizen science projects in New York and New Jersey, along with funding opportunities, quality assurance information, and workshops and webinars.”

Bad Data


Bad Data is a site providing real-world examples of how not to prepare or provide data. It showcases the poorly structured, the mis-formatted, or the just plain ugly. Its primary purpose is to educate – though there may also be some aspect of entertainment.
As a side-product it also provides a source of good practice material for budding data wranglers (the repo in fact began as a place to keep practice data for Data Explorer).
New examples wanted and welcome – submit them here »
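As a hypothetical illustration of the kind of material the site collects (this sample file is invented, not taken from the repo), a typical wrangling exercise strips preamble rows, empty columns, and locale-formatted numbers:

```python
import csv
import io

# Invented example of a badly prepared CSV: a junk preamble row,
# an all-empty column, and a thousands separator inside a quoted field.
raw = """Report generated 01/01/2014
,,
Region,,Amount
North,,"1,200"
South,,950
"""

cleaned = []
for row in csv.reader(io.StringIO(raw)):
    row = [cell for cell in row if cell]           # drop the empty column
    if len(row) != 2 or row == ["Region", "Amount"]:
        continue                                    # skip preamble and header
    region, amount = row
    cleaned.append((region, int(amount.replace(",", ""))))

print(cleaned)  # [('North', 1200), ('South', 950)]
```

Even a five-line file like this needs three separate repairs before it is machine-readable, which is exactly the lesson the showcase aims to teach.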


Prospects for Online Crowdsourcing of Social Science Research Tasks: A Case Study Using Amazon Mechanical Turk


New paper by Catherine E. Schmitt-Sands and Richard J. Smith: “While the internet has created new opportunities for research, managing the increased complexity of relationships and knowledge also creates challenges. Amazon.com has a Mechanical Turk service that allows people to crowdsource simple tasks for a nominal fee. The online workers may be anywhere in North America or India and range in ability. Social science researchers are only beginning to use this service. While researchers have used crowdsourcing to find research subjects or classify texts, we used Mechanical Turk to conduct a policy scan of local government websites. This article describes the process used to train and ensure quality of the policy scan. It also examines choices in the context of research ethics.”

From funding agencies to scientific agency


New paper on “Collective allocation of science funding as an alternative to peer review”: “Publicly funded research involves the distribution of a considerable amount of money. Funding agencies such as the US National Science Foundation (NSF), the US National Institutes of Health (NIH) and the European Research Council (ERC) give billions of dollars or euros of taxpayers’ money to individual researchers, research teams, universities, and research institutes each year. Taxpayers accordingly expect that governments and funding agencies will spend their money prudently and efficiently.

Investing money to the greatest effect is not a challenge unique to research funding agencies and there are many strategies and schemes to choose from. Nevertheless, most funders rely on a tried and tested method in line with the tradition of the scientific community: the peer review of individual proposals to identify the most promising projects for funding. This method has been considered the gold standard for assessing the scientific value of research projects essentially since the end of the Second World War.

However, there is mounting critique of the use of peer review to direct research funding. High on the list of complaints is the cost, both in terms of time and money. In 2012, for example, NSF convened more than 17,000 scientists to review 53,556 proposals [1]. Reviewers generally spend a considerable time and effort to assess and rate proposals of which only a minority can eventually get funded. Of course, such a high rejection rate is also frustrating for the applicants. Scientists spend an increasing amount of time writing and submitting grant proposals. Overall, the scientific community invests an extraordinary amount of time, energy, and effort into the writing and reviewing of research proposals, most of which end up not getting funded at all. This time would be better invested in conducting the research in the first place.

Peer review may also be subject to biases, inconsistencies, and oversights. The need for review panels to reach consensus may lead to sub‐optimal decisions owing to the inherently stochastic nature of the peer review process. Moreover, in a period where the money available to fund research is shrinking, reviewers may tend to “play it safe” and select proposals that have a high chance of producing results, rather than more challenging and ambitious projects. Additionally, the structuring of funding around calls‐for‐proposals to address specific topics might inhibit serendipitous discovery, as scientists work on problems for which funding happens to be available rather than trying to solve more challenging problems.

The scientific community holds peer review in high regard, but it may not actually be the best possible system for identifying and supporting promising science. Many proposals have been made to reform funding systems, ranging from incremental changes to peer review—including careful selection of reviewers [2] and post‐hoc normalization of reviews [3]—to more radical proposals such as opening up review to the entire online population [4] or removing human reviewers altogether by allocating funds through an objective performance measure [5].

We would like to add another alternative inspired by the mathematical models used to search the internet for relevant information: a highly decentralized funding model in which the wisdom of the entire scientific community is leveraged to determine a fair distribution of funding. It would still require human insight and decision‐making, but it would drastically reduce the overhead costs and may alleviate many of the issues and inefficiencies of the proposal submission and peer review system, such as bias, “playing it safe”, or reluctance to support curiosity‐driven research.

Our proposed system would require funding agencies to give all scientists within their remit an unconditional, equal amount of money each year. However, each scientist would then be required to pass on a fixed percentage of their previous year’s funding to other scientists whom they think would make best use of the money (Fig 1). Every year, then, scientists would receive a fixed basic grant from their funding agency combined with an elective amount of funding donated by their peers. As a result of each scientist having to distribute a given percentage of their previous year’s budget to other scientists, money would flow through the scientific community. Scientists who are generally anticipated to make the best use of funding will accumulate more.”
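The allocation mechanics described above can be sketched as a toy simulation (an illustration only: donation targets here are chosen at random, whereas the proposal relies on deliberate peer judgment, and all parameter values are invented):

```python
import random

def simulate(n_scientists=100, base_grant=100_000, donate_frac=0.5,
             years=10, seed=1):
    """Toy run of the collective allocation model: each year every
    scientist receives an equal base grant plus peer donations, and must
    pass on a fixed fraction of the previous year's funding to others."""
    rng = random.Random(seed)
    funding = [float(base_grant)] * n_scientists
    for _ in range(years):
        donations = [0.0] * n_scientists
        for i, f in enumerate(funding):
            pot = donate_frac * f  # the share that must be passed on
            # stand-in for real peer judgment: three random colleagues
            recipients = rng.sample(
                [j for j in range(n_scientists) if j != i], 3)
            for j in recipients:
                donations[j] += pot / 3
        funding = [base_grant + d for d in donations]
    return funding

final = simulate()
print(f"total circulating: {sum(final):,.0f}")
print(f"min {min(final):,.0f}, max {max(final):,.0f}")
```

Because everyone redistributes half of last year's funding, the total circulating pool converges toward base_grant × n / (1 − donate_frac), and scientists who attract repeated donations accumulate more, as the authors describe.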

Crowdsourcing forecasts on science and technology events and innovations


Kurzweil News: “George Mason University launched today, Jan. 10, the largest and most advanced science and technology prediction market in the world: SciCast.
The federally funded research project aims to improve the accuracy of science and technology forecasts. George Mason research assistant professor Charles Twardy is the principal investigator of the project.
SciCast crowdsources forecasts on science and technology events and innovations from aerospace to zoology.
For example, will Amazon use drones for commercial package delivery by the end of 2017? Today, SciCast estimates the chance at slightly more than 50 percent. If you think that is too low, you can estimate a higher chance. SciCast will use your estimate to adjust the combined forecast.
Forecasters can update their forecasts at any time; in the above example, perhaps after the Federal Aviation Administration (FAA) releases its new guidelines for drones. The continually updated and reshaped information helps both the public and private sectors better monitor developments in a variety of industries. SciCast is a real-time indicator of what participants think is going to happen in the future.
“Combinatorial” prediction market better than simple average


How SciCast works (Credit: George Mason University)
The idea is that collective wisdom from diverse, informed opinions can provide more accurate predictions than individual forecasters, a notion borne out by other crowdsourcing projects. Simply taking an average is almost always better than going with the “best” expert. But in a two-year test on geopolitical questions, the SciCast method did 40 percent better than the simple average.
SciCast uses the first general “combinatorial” prediction market. In a prediction market, forecasters spend points to adjust the group forecast. Significant changes “cost” more — but “pay” more if they turn out to be right. So better forecasters gain more points and therefore more influence, improving the accuracy of the system.
In a combinatorial market like SciCast, forecasts can influence each other. For example, forecasters might have linked cherry production to honeybee populations. Then, if forecasters increase the estimated percentage of honeybee colonies lost this winter, SciCast automatically reduces the estimated 2014 cherry production. This connectivity among questions makes SciCast more sophisticated than other prediction markets.
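The article does not name SciCast's exact scoring rule, but the behavior it describes (big moves cost more and pay more when right) matches a logarithmic market scoring rule (LMSR), so a minimal single-question sketch might look like this:

```python
import math

# Minimal LMSR market maker for one yes/no question. This is an
# assumption for illustration: the article doesn't specify SciCast's
# actual rule, but LMSR exhibits exactly the "significant changes cost
# more, pay more if right" behavior it describes.
B = 100.0  # liquidity parameter: higher B makes prices harder to move

def cost(q_yes, q_no):
    """Market maker's cost function over outstanding share counts."""
    return B * math.log(math.exp(q_yes / B) + math.exp(q_no / B))

def price_yes(q_yes, q_no):
    """Current implied probability of 'yes'."""
    e_yes = math.exp(q_yes / B)
    return e_yes / (e_yes + math.exp(q_no / B))

q_yes = q_no = 0.0
print(round(price_yes(q_yes, q_no), 2))  # 0.5 (market opens at 50%)

# A forecaster buys 50 "yes" shares, pushing the probability up.
spend = cost(q_yes + 50, q_no) - cost(q_yes, q_no)
q_yes += 50
print(round(price_yes(q_yes, q_no), 2))  # 0.62
print(round(spend, 1))  # 28.1 points spent; pays back 50 if "yes" occurs
```

SciCast's combinatorial twist is that linked questions share a joint distribution, so a trade on one question also shifts the conditional prices of its neighbors (as in the honeybee/cherry example above).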
SciCast topics include agriculture, biology and medicine, chemistry, computational sciences, energy, engineered technologies, global change, information systems, mathematics, physics, science and technology business, social sciences, space sciences and transportation….

Seeking futurists to improve forecasts, pose questions


(Credit: George Mason University)
“With so many science and technology questions, there are many niches,” says Twardy, a researcher in the Center of Excellence in Command, Control, Communications, Computing and Intelligence (C4I), based in Mason’s Volgenau School of Engineering.
“We seek scientists, statisticians, engineers, entrepreneurs, policymakers, technical traders, and futurists of all stripes to improve our forecasts, link questions together and pose new questions.”
Forecasters discuss the questions, and that discussion can lead to new, related questions. For example, someone asked, “Will Amazon deliver its first package using an unmanned aerial vehicle by Dec. 31, 2017?”
An early forecaster suggested that this technology is likely to first be used in a mid-sized town with fewer obstructions or local regulatory issues. Another replied that Amazon is more likely to use robots to deliver packages within a short radius of a conventional delivery vehicle. A third offered information about an FAA report related to the subject.
Any forecaster could then write a question about upcoming FAA rulings, and link that question to the Amazon drones question. Forecasters could then adjust the strength of the link.
“George Mason University has succeeded in launching the world’s largest forecasting tournament for science and technology,” says Jason Matheny, program manager of Forecasting Science and Technology at the Intelligence Advanced Research Projects Activity, based in Washington, D.C. “SciCast can help the public and private sectors to better understand a range of scientific and technological trends.”
Collaborative but Competitive
More than 1,000 experts and enthusiasts from science and tech-related associations, universities and interest groups preregistered to participate in SciCast. The group is collaborative in spirit but also competitive. Participants are rewarded for accurate predictions by moving up on the site leaderboard, receiving more points to spend influencing subsequent prognostications. Participants can (and should) continually update their predictions as new information is presented.
SciCast has partnered with the American Association for the Advancement of Science, the Institute of Electrical and Electronics Engineers, and multiple other science and technology professional societies.
Mason members of the SciCast project team include Twardy; Kathryn Laskey, associate director for the C4I and a professor in the Department of Systems Engineering and Operations Research; associate professor of economics Robin Hanson; C4I research professor Tod Levitt; and C4I research assistant professors Anamaria Berea, Kenneth Olson and Wei Sun.
To register for SciCast, visit www.SciCast.org, or for more information, e-mail support@scicast.org. SciCast is open to anyone age 18 or older.”

New Book: Open Data Now


New book by Joel Gurin (The GovLab): “Open Data is the world’s greatest free resource–unprecedented access to thousands of databases–and it is one of the most revolutionary developments since the Information Age began. Combining two major trends–the exponential growth of digital data and the emerging culture of disclosure and transparency–Open Data gives you and your business full access to information that has never been available to the average person until now. Unlike most Big Data, Open Data is transparent, accessible, and reusable in ways that give it the power to transform business, government, and society.
Open Data Now is an essential guide to understanding all kinds of open databases–business, government, science, technology, retail, social media, and more–and using those resources to your best advantage. You’ll learn how to tap crowds for fast innovation, conduct research through open collaboration, and manage and market your business in a transparent marketplace.
Open Data is open for business–and the opportunities are as big and boundless as the Internet itself. This powerful, practical book shows you how to harness the power of Open Data in a variety of applications:

  • HOT STARTUPS: turn government data into profitable ventures
  • SAVVY MARKETING: understand how reputational data drives your brand
  • DATA-DRIVEN INVESTING: apply new tools for business analysis
  • CONSUMER INFORMATION: connect with your customers using smart disclosure
  • GREEN BUSINESS: use data to bet on sustainable companies
  • FAST R&D: turn the online world into your research lab
  • NEW OPPORTUNITIES: explore open fields for new businesses

Whether you’re a marketing professional who wants to stay on top of what’s trending, a budding entrepreneur with a billion-dollar idea and limited resources, or a struggling business owner trying to stay competitive in a changing global market–or if you just want to understand the cutting edge of information technology–Open Data Now offers a wealth of big ideas, strategies, and techniques that wouldn’t have been possible before Open Data leveled the playing field.
The revolution is here and it’s now. It’s Open Data Now.”

From Faith-Based to Evidence-Based: The Open Data 500 and Understanding How Open Data Helps the American Economy


Beth Noveck in Forbes: “Public funds have, after all, paid for their collection, and the law says that federal government data are not protected by copyright. By the end of 2009, the US and the UK had the only two open data one-stop websites where agencies could post and citizens could find open data. Now there are over 300 such portals for government data around the world with over 1 million available datasets. This kind of Open Data — including weather, safety and public health information as well as information about government spending — can serve the country by increasing government efficiency, shedding light on regulated industries, and driving innovation and job creation.

It’s becoming clear that open data has the potential to improve people’s lives. With huge advances in data science, we can take this data and turn it into tools that help people choose a safer hospital, pick a better place to live, improve the performance of their farm or business by having better climate models, and know more about the companies with whom they are doing business. Done right, people can even contribute data back, giving everyone a better understanding, for example of nuclear contamination in post-Fukushima Japan or incidences of price gouging in America’s inner cities.

The promise of open data is limitless. (See the GovLab index for stats on open data.) But it’s important to back up our faith with real evidence of what works. Last September the GovLab began the Open Data 500 project, funded by the John S. and James L. Knight Foundation, to study the economic value of government Open Data extensively and rigorously. A recent McKinsey study pegged the global value of Open Data (including free data from sources other than government) at $3 trillion a year or more. We’re digging in and talking to those companies that use Open Data as a key part of their business model. We want to understand whether and how open data is contributing to the creation of new jobs, the development of scientific and other innovations, and adding to the economy. We also want to know what government can do better to help industries that want high quality, reliable, up-to-date information that government can supply. Of those 1 million datasets, for example, 96% are not updated on a regular basis.

The GovLab just published an initial working list of 500 American companies that we believe to be using open government data extensively.  We’ve also posted in-depth profiles of 50 of them — a sample of the kind of information that will be available when the first annual Open Data 500 study is published in early 2014. We are also starting a similar study for the UK and Europe.

Even at this early stage, we are learning that Open Data is a valuable resource. As my colleague Joel Gurin, author of Open Data Now: the Secret to Hot Start-Ups, Smart Investing, Savvy Marketing and Fast Innovation, who directs the project, put it, “Open Data is a versatile and powerful economic driver in the U.S. for new and existing businesses around the country, in a variety of ways, and across many sectors. The diversity of these companies in the kinds of data they use, the way they use it, their locations, and their business models is one of the most striking things about our findings so far.” Companies are paradoxically building value-added businesses on top of public data that anyone can access for free….”

FULL article can be found here.