With Wikistrat, crowdsourcing gets geopolitical


Aaron Stanley in the Financial Times: “In January, while the world was focused on the build-up to the Winter Olympics in Sochi, a team of analysts scattered around the globe huddled in front of their computer screens and forecast that ethnic strife in Ukraine would lead to the eventual incorporation of Crimea into neighbouring Russia.

Much to the surprise of western intelligence, in a matter of weeks Vladimir Putin’s troops would occupy the disputed peninsula and a referendum would be passed authorising secession from Ukraine.

That a dispersed team of thinkers – assembled by a consultancy known as Wikistrat – could out-forecast the world’s leading intelligence agencies seems almost farcical. But it is an eye-opening example of yet another way that crowdsourcing is upending conventional wisdom.
Crowdsourcing has long been heralded as a means to shake up stale thinking in corporate spheres by providing cheaper, faster means of processing information and problem solving. But now even traditionally enigmatic defence and intelligence organisations and other geopolitical soothsayers are getting in on the act by using the “wisdom of the crowd” to predict how the chips of world events might fall.
Meanwhile, companies with crucial geopolitical interests, such as energy and financial services firms, have begun commissioning crowdsourced simulations of their own from Wikistrat to better gauge investment risk.

While some intelligence agencies have experimented with crowdsourcing to gain insights from the general public, Wikistrat uses a “closed crowd” of subject experts and bills itself as the world’s first crowdsourced analytical services consultancy.

A typical simulation, run on its interactive web platform, has roughly 70 participants. The crowd’s expertise and diversity is combined with Wikistrat’s patented model of “collaborative competition” that rewards participants for the breadth and quality of their contributions. The process is designed to provide a fresh view and shatter the traditional confines of groupthink….”

Using Crowds for Evaluation Tasks: Validity by Numbers vs. Validity by Expertise


Paper by Christoph Hienerth and Frederik Riar: “Developing and commercializing novel ideas is central to innovation processes. As the outcome of such ideas cannot fully be foreseen, evaluating them is crucial. With the rise of the internet and ICT, more and new kinds of evaluations are done by crowds. This raises the question of whether individuals in crowds possess the capabilities necessary to evaluate, and whether their outcomes are valid. As empirical insights are not yet available, this paper examines evaluation processes and general evaluation components, and discusses the underlying characteristics and mechanisms of these components that affect evaluation outcomes (i.e. evaluation validity). We further investigate differences between firm- and crowd-based evaluation using different cases of application, and develop a theoretical framework for evaluation validity, i.e. validity by numbers vs. validity by expertise. The factors identified as influencing the validity of evaluations are: (1) the number of evaluation tasks, (2) complexity, (3) expertise, (4) costs, and (5) time to outcome. For each of these factors, hypotheses are developed based on theoretical arguments. We conclude with implications, proposing a model of evaluation validity.”

City 72 Toolkit


“An effective preparedness platform customizable to your city. City72 is an open-source emergency preparedness platform that promotes community resilience and connection. This Toolkit is designed specifically for emergency preparedness organizations and provides the information and resources to create a customized City72 site for any city or region. It includes: how to create localized content, access to the code to build and install your City72 website, and tips for how to manage and promote your site.”

In democracy and disaster, emerging world embraces 'open data'


Jeremy Wagstaff at Reuters: “‘Open data’ – the trove of data-sets made publicly available by governments, organizations and businesses – isn’t normally linked to high-wire politics, but just may have saved last month’s Indonesian presidential elections from chaos.
Data is considered open when it’s released for anyone to use and in a format that’s easy for computers to read. The uses are largely commercial, such as the GPS data from U.S.-owned satellites, but data can range from budget numbers and climate and health statistics to bus and rail timetables.
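“Easy for computers to read” in practice usually means structured formats such as CSV, JSON or XML. The sketch below parses a small bus-timetable CSV with Python’s standard library; the routes, stops and times are invented for illustration, not drawn from any real data-set.

```python
import csv
import io

# A tiny, made-up open data file in the machine-readable CSV format
# that portals such as Data.gov commonly publish.
open_data_csv = """route,stop,departure
42,Main St,08:05
42,Elm Ave,08:12
7,Main St,08:20
"""

# DictReader turns each row into a dictionary keyed by the header line.
rows = list(csv.DictReader(io.StringIO(open_data_csv)))

# Because the structure is explicit, queries are trivial for a program:
main_st = [r["departure"] for r in rows if r["stop"] == "Main St"]
print(main_st)  # -> ['08:05', '08:20']
```

The same timetable published as a scanned PDF would be “available” but not open in this sense, since no simple program could answer the query above.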
It’s a revolution that’s swept the developed world in recent years as governments and agencies like the World Bank have freed up hundreds of thousands of data-sets for use by anyone who sees a use for them. Data.gov, a U.S. site, lists more than 100,000 data-sets, from food calories to magnetic fields in space.
Consultants McKinsey reckon open data could add up to $3 trillion worth of economic activity a year – from performance ratings that help parents find the best schools to governments saving money by releasing budget data and asking citizens to come up with cost-cutting ideas. All the apps, services and equipment that tap the GPS satellites, for example, generate $96 billion of economic activity each year in the United States alone, according to a 2011 study.
But so far open data has had a limited impact in the developing world, where officials are wary of giving away too much information, and where there’s the issue of just how useful it might be: for most people in emerging countries, property prices and bus schedules aren’t top priorities.
But last month’s election in Indonesia – a contentious face-off between a disgraced general and a furniture-exporter turned reformist – highlighted how powerful open data can be in tandem with a handful of tech-smart programmers, social media savvy and crowdsourcing.
“Open data may well have saved this election,” said Paul Rowland, a Jakarta-based consultant on democracy and governance…”
 

Google's fact-checking bots build vast knowledge bank


Hal Hodson in the New Scientist: “The search giant is automatically building Knowledge Vault, a massive database that could give us unprecedented access to the world’s facts

GOOGLE is building the largest store of knowledge in human history – and it’s doing so without any human help. Instead, Knowledge Vault autonomously gathers and merges information from across the web into a single base of facts about the world, and the people and objects in it.

The breadth and accuracy of this gathered knowledge is already becoming the foundation of systems that allow robots and smartphones to understand what people ask them. It promises to let Google answer questions like an oracle rather than a search engine, and even to turn a new lens on human history.

Knowledge Vault is a type of “knowledge base” – a system that stores information so that machines as well as people can read it. Where a database deals with numbers, a knowledge base deals with facts. When you type “Where was Madonna born” into Google, for example, the place given is pulled from Google’s existing knowledge base.

This existing base, called Knowledge Graph, relies on crowdsourcing to expand its information. But the firm noticed that growth was stalling; humans could only take it so far. So Google decided it needed to automate the process. It started building the Vault by using an algorithm to automatically pull in information from all over the web, using machine learning to turn the raw data into usable pieces of knowledge.

Knowledge Vault has pulled in 1.6 billion facts to date. Of these, 271 million are rated as “confident facts”, to which Google’s model ascribes a more than 90 per cent chance of being true. It does this by cross-referencing new facts with what it already knows.
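A knowledge base of this kind can be pictured as a set of subject–predicate–object triples, each carrying an estimated probability of being true. The sketch below is an illustration of that idea only, not Google’s actual system; the facts and confidence values are examples, and the 0.9 cut-off mirrors the “confident facts” threshold described above.

```python
# Toy triple store: (subject, predicate, object, confidence).
facts = [
    ("Madonna", "born_in", "Bay City, Michigan", 0.97),
    ("Madonna", "occupation", "singer", 0.99),
    ("Madonna", "born_in", "Detroit", 0.40),  # conflicting, low-confidence extraction
]

def confident_facts(triples, threshold=0.9):
    """Keep only facts whose estimated probability of being true exceeds the threshold."""
    return [(s, p, o) for s, p, o, conf in triples if conf > threshold]

def query(triples, subject, predicate):
    """Answer a question by returning the highest-confidence matching object."""
    matches = [(o, c) for s, p, o, c in triples if s == subject and p == predicate]
    return max(matches, key=lambda oc: oc[1])[0] if matches else None

print(confident_facts(facts))
print(query(facts, "Madonna", "born_in"))  # -> 'Bay City, Michigan'
```

Answering “Where was Madonna born” then becomes a lookup that prefers the well-supported fact over the conflicting low-confidence one, which is the basic role cross-referencing plays at a vastly larger scale.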

“It’s a hugely impressive thing that they are pulling off,” says Fabian Suchanek, a data scientist at Télécom ParisTech in France.

Google’s Knowledge Graph is currently bigger than the Knowledge Vault, but it only includes manually integrated sources such as the CIA Factbook.

Knowledge Vault offers Google fast, automatic expansion of its knowledge – and it’s only going to get bigger. As well as the ability to analyse text on a webpage for facts to feed its knowledge base, Google can also peer under the surface of the web, hunting for hidden sources of data such as the figures that feed Amazon product pages.

Tom Austin, a technology analyst at Gartner in Boston, says that the world’s biggest technology companies are racing to build similar vaults. “Google, Microsoft, Facebook, Amazon and IBM are all building them, and they’re tackling these enormous problems that we would never even have thought of trying 10 years ago,” he says.

The potential of a machine system that has the whole of human knowledge at its fingertips is huge. One of the first applications will be virtual personal assistants that go way beyond what Siri and Google Now are capable of, says Austin…”

Technology’s Crucial Role in the Fight Against Hunger


Crowdsourcing, predictive analytics and other new tools could go far toward finding innovative solutions for America’s food insecurity.

National Geographic recently sent three photographers to explore hunger in the United States. It was an effort to give a face to a very troubling statistic: Even today, one-sixth of Americans do not have enough food to eat. Fifty million people in this country are “food insecure” — having to make daily trade-offs among paying for food, housing or medical care — and 17 million of them skip at least one meal a day to get by. When choosing what to eat, many of these individuals must make choices between lesser quantities of higher-quality food and larger quantities of less-nutritious processed foods, the consumption of which often leads to expensive health problems down the road.
This is an extremely serious, but not easily visible, social problem. Nor does the challenge it poses become any easier when poorly designed public-assistance programs continue to count the sauce on a pizza as a vegetable. The deficiencies caused by hunger increase the likelihood that a child will drop out of school, lowering her lifetime earning potential. In 2010 alone, food insecurity cost America $167.5 billion, a figure that includes lost economic productivity, avoidable health-care expenses and social-services programs.
As much as we need specific policy innovations if we are to eliminate hunger in America, food insecurity is just one of many extraordinarily complex and interdependent “systemic” problems facing us that would benefit from the application of technology, not just to identify innovative solutions but to implement them as well. In addition to laudable policy initiatives by such states as Illinois and Nevada, which have made hunger a priority, or Arkansas, which suffers the greatest level of food insecurity but is making great strides at providing breakfast to schoolchildren, we can — we must — bring technology to bear to create a sustained conversation between government and citizens to engage more Americans in the fight against hunger.

Identifying who is genuinely in need cannot be done as well by a centralized government bureaucracy — even one with regional offices — as it can through a distributed network of individuals and organizations able to pinpoint with on-the-ground accuracy where the demand is greatest. Just as Ushahidi uses crowdsourcing to help locate and identify disaster victims, it should be possible to leverage the crowd to spot victims of hunger. As it stands, attempts to eradicate so-called food deserts are often built around developing solutions for residents rather than with residents. Strategies to date tend to focus on the introduction of new grocery stores or farmers’ markets but with little input from or involvement of the citizens actually affected.

Applying predictive analytics to newly available sources of public as well as private data, such as that regularly gathered by supermarkets and other vendors, could also make it easier to offer coupons and discounts to those most in need. In addition, analyzing nonprofits’ tax returns, which are legally open and available to all, could help map where the organizations serving those in need leave gaps that need to be closed by other efforts. The Governance Lab recently brought together U.S. Department of Agriculture officials with companies that use USDA data in an effort to focus on strategies supporting a White House initiative to use climate-change and other open data to improve food production.

Such innovative uses of technology, which put citizens at the center of the service-delivery process and streamline the delivery of government support, could also speed the delivery of benefits, thus reducing both costs and, every bit as important, the indignity of applying for assistance.

Being open to new and creative ideas from outside government through brainstorming and crowdsourcing exercises using social media can go beyond simply improving the quality of the services delivered. Some of these ideas, such as those arising from exciting new social-science experiments involving the use of incentives for “nudging” people to change their behaviors, might even lead them to purchase more healthful food.

Further, new kinds of public-private collaborative partnerships could create the means for people to produce their own food. Both new kinds of financing arrangements and new apps for managing the shared use of common real estate could make more community gardens possible. Similarly, with the kind of attention, convening and funding that government can bring to an issue, new neighbor-helping-neighbor programs — where, for example, people take turns shopping and cooking for one another to alleviate time away from work — could be scaled up.

Then, too, advances in citizen engagement and oversight could make it more difficult for lawmakers to cave to the pressures of lobbying groups that push for subsidies for those crops, such as white potatoes and corn, that result in our current large-scale reliance on less-nutritious foods. At the same time, citizen scientists reporting data through an app would be able to do a much better job than government inspectors in reporting what is and is not working in local communities.

As a society, we may not yet be able to banish hunger entirely. But if we commit to using new technologies and mechanisms of citizen engagement widely and wisely, we could vastly reduce its power to do harm.

An Air-Quality Monitor You Take with You


MIT Technology Review: “A startup is building a wearable air-quality monitor using a sensing technology that can cheaply detect the presence of chemicals around you in real time. By reporting the information its sensors gather to an app on your smartphone, the technology could help people with respiratory conditions and those who live in highly polluted areas keep tabs on exposure.
Berkeley, California-based Chemisense also plans to crowdsource data from users to show places around town where certain compounds are identified.
Initially, the company plans to sell a $150 wristband geared toward kids with asthma — of which there are nearly 7 million in the U.S., according to data from the Centers for Disease Control and Prevention — to help them identify places and pollutants that tend to provoke attacks, and track their exposure to air pollution over time. The company hopes people with other respiratory conditions, and those who are just concerned about air pollution, will be interested, too.
In the U.S., air quality is monitored at thousands of stations across the country; maps and forecasts can be viewed online. But these monitors offer accurate readings only in their location.
Chemisense has not yet made its initial product, but it expects the device will be a wristband using polymers treated with charged carbon nanoparticles, such that the polymers swell in the presence of certain chemical vapors, changing the resistance of a circuit.”
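The sensing principle described — a polymer swelling in a vapor and shifting a circuit’s resistance — lends itself to a simple threshold check. The sketch below is a hypothetical illustration of that kind of chemiresistive detection, not Chemisense’s actual algorithm; the baseline values and the 5% threshold are invented for the example.

```python
def relative_change(baseline_ohms, reading_ohms):
    """Fractional resistance change relative to the clean-air baseline."""
    return (reading_ohms - baseline_ohms) / baseline_ohms

def detect_vapor(baseline_ohms, reading_ohms, threshold=0.05):
    """Flag a detection when resistance has swelled past the threshold.

    The 5% default is a made-up illustration, not a calibrated value;
    a real device would calibrate per compound and correct for drift,
    temperature and humidity.
    """
    return relative_change(baseline_ohms, reading_ohms) > threshold

# A polymer swelling in a solvent vapor raises the circuit's resistance:
print(detect_vapor(1000.0, 1080.0))  # 8% rise -> True
print(detect_vapor(1000.0, 1010.0))  # 1% rise -> False
```

Readings like these, streamed to a phone app and stamped with GPS coordinates, are also what would make the crowdsourced pollution map described above possible.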

The city as living laboratory: A playground for the innovative development of smart city applications


Paper by Carina Veeckman and Shenja van der Graaf: “Nowadays the smart-city concept is shifting from a top-down, merely technological approach towards bottom-up processes that are based on the participation of creative citizens, research organisations and companies. Here, the city acts as an urban innovation ecosystem in which smart applications, open government data and new modes of participation are fostering innovation in the city. However, detailed analyses of how to manage smart city initiatives, as well as descriptions of the underlying challenges and barriers, still seem scarce. Therefore, this paper investigates four collaborative smart-city initiatives in Europe to learn how cities can optimize citizens’ involvement in the context of open innovation. The analytical framework focuses on the innovation ecosystem and the civic capacities to engage in the public domain. Findings show that public service delivery can be co-designed between the city and citizens if different toolkits, aligned with the specific capacities and skills of the users, are provided. By providing the right tools, even ordinary citizens can take a much more active role in the evolution of their cities and generate solutions from which both the city and everyday urban life can possibly benefit.”

Using technology, data and crowdsourcing to hack infrastructure problems


Courtney M. Fowler at CAFWD.ORG: “Technology has become a way of life for most Americans, not just for communication but also for many daily activities. However, there’s more that can be done than just booking a trip or crushing candy. With a majority of Americans now owning smartphones, it’s only becoming more obvious that there’s room for governments to engage the public and provide more bang for their buck via technology.
CA Fwd has been putting on an “Open Data roadshow” around the state to highlight ways the marriage of tech and info can make government more efficient and transparent.
Jurisdictions have also been discovering that using technology and smartphone apps can be beneficial in the pursuit of improving infrastructure. Saving any amount of money on such projects is especially important for California, where it’s been estimated the state will only have half of the $765 billion needed for infrastructure investments over the next decade.
One of the best examples of applying technology to infrastructure problems comes from South Carolina, where an innovative bridge-monitoring system is producing real savings, despite being in use on only eight bridges.
Girder sensors are placed on each bridge to measure its carrying capacity, which can be monitored 24/7. Although the monitors don’t eliminate the need for inspections, the technology does make them significantly less frequent. Data from the monitors also led the South Carolina Department of Transportation to correct one bridge’s problems with a $100,000 retrofit, rather than spending $800,000 to replace it…”
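A continuous monitoring system of this kind boils down to comparing live sensor readings against a safety margin on the rated capacity. The sketch below is a hypothetical illustration of that pattern, not South Carolina’s actual system; the girder names, readings and the 80% alert fraction are all invented for the example.

```python
def flag_for_inspection(strain_readings, rated_limit):
    """Return the girder IDs whose peak measured strain exceeds a fraction of the rated limit."""
    ALERT_FRACTION = 0.8  # trigger inspection well before rated capacity is reached
    return sorted(
        girder
        for girder, readings in strain_readings.items()
        if max(readings) > ALERT_FRACTION * rated_limit
    )

# Invented microstrain samples from one day of monitoring:
readings = {
    "girder-1": [120.0, 135.0, 128.0],
    "girder-2": [140.0, 172.0, 150.0],
}
print(flag_for_inspection(readings, rated_limit=200.0))  # -> ['girder-2']
```

The economic logic is the same trade the article describes: a few cheap sensors and a targeted $100,000 retrofit in place of blanket inspections or an $800,000 replacement.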
In total, having the monitors on just eight bridges, at a cost of about $50,000 per bridge, saved taxpayers $5 million.
That kind of innovation and savings is exactly what California needs to ensure that infrastructure projects happen in a more timely and efficient fashion in the future. It’s also what is driving civic innovators to bring together technology and crowdsourcing and make sure infrastructure projects are also results-oriented.