Pitch at ClimateCoLab: “The proposed #CrowdCriMa would be a disaster management platform based on an innovative, interactive and accountable Digital Governance Framework, in which common people, crisis and disaster response workers, health workers and decision makers would participate actively. This application would be available for mobile phones and other smart devices.
Crowdsourcing Unheard Voices
The main function would be to collect messages from disaster victims, in the form of phone calls, recorded voice, SMS, e-mail and fax, seeking urgent help from the authorities, and to spread those voices via online media, social media and SMS to inform the world about the situation. Because fax is still a more widely used communication channel than SMS or e-mail in many developing countries, fax is included as one of the reporting tools.
People will be able to record their observations, report potential crises, seek help and appeal for funds for disaster response work and other environment-related activities (e.g. projects for a pollution-free environment). To provide all of these functions in the #CrowdCriMa platform, an IVR system and FrontlineSMS-type software will be developed or integrated into the proposed platform.
A cloud-based information management system would be used to sustain the flow of information, so that no information is lost if communications infrastructure is not functioning properly during and after a disaster.
Crowdfunding:
Another function of the #CrowdCriMa platform would be crowdfunding. When individual donors log in, they find a list of issues and crises where funds are needed. An innovative and sustainable approach would be taken to meet financial needs during a crisis or disaster and to support post-crisis financial empowerment work for victims.
Some of these services already exist separately, but the innovative part of this proposal is that several disaster-related services would be combined in one platform, so people do not need to use different platforms for disaster management work…”
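As a rough illustration of the multi-channel reporting architecture the pitch describes, here is a minimal sketch, assuming a hypothetical schema and a stand-in for the cloud store (the proposal publishes no code, so every name below is invented): reports arriving by call, voice recording, SMS, e-mail or fax are normalized into one record and appended to a store that survives local infrastructure failure.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Reporting channels named in the pitch.
CHANNELS = {"call", "voice", "sms", "email", "fax"}

@dataclass
class CrisisReport:
    """One normalized report from any channel (hypothetical schema)."""
    channel: str
    sender: str
    body: str
    received_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class CloudStoreStub:
    """Stand-in for the cloud-based information store; a real deployment
    would use a durable, replicated service instead of an in-memory list."""
    def __init__(self):
        self._records = []

    def save(self, report: CrisisReport) -> None:
        if report.channel not in CHANNELS:
            raise ValueError(f"unknown channel: {report.channel}")
        self._records.append(report)

    def __len__(self) -> int:
        return len(self._records)

store = CloudStoreStub()
store.save(CrisisReport(channel="sms", sender="reporter-01",
                        body="Flooding near the school, need boats"))
print(len(store), "report(s) stored")
```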
Detroit and Big Data Take on Blight
Susan Crawford in Bloomberg View: “The urban blight that has been plaguing Detroit was, until very recently, made worse by a dearth of information about the problem. No one could tell how many buildings needed fixing or demolition, or how effectively city services were being delivered to them (or not). Today, thanks to the combined efforts of a scrappy small business, tech-savvy city leadership and substantial philanthropic support, the extent of the problem is clear.
The question now is whether Detroit has the heart to use the information to make hard choices about its future.
In the past, when the city foreclosed on properties for failure to pay back taxes, it had no sense of where those properties were clustered. The city would auction off the houses for the bargain-basement price of $500 each, but the auction was entirely undocumented, so neighbors were unaware of investment opportunities, big buyers were gaming the system, and, as often as not, arsonists would then burn the properties down. The result of this blind spot was lost population, lost revenue and even more blight.
Then along came Jerry Paffendorf, a San Francisco transplant, who saw what was needed. His company, Loveland Technologies, started mapping all the tax-foreclosed and auctioned properties. Impressed with Paffendorf’s zeal, the city’s Blight Task Force, established by President Barack Obama and funded by foundations and the state Housing Development Authority, hired his team to visit every property in the city. That led to MotorCityMapping.org, the first user-friendly collection of information about all the attributes of every property in Detroit — including photographs.
Paffendorf calls this map a “scan of the genome of the city.” It shows more than 84,000 blighted structures and vacant lots; in eight neighborhoods, crime, fires and other torments have led to the abandonment of more than a third of houses and businesses. To demolish all those houses, as recommended by the Blight Task Force, will cost almost $2 billion. Still more money will then be needed to repurpose the sites….”
Social Media and the ‘Spiral of Silence’
Report by Keith Hampton, Lee Rainie, Weixu Lu, Maria Dwyer, Inyoung Shin and Kristen Purcell: “A major insight into human behavior from pre-internet era studies of communication is the tendency of people not to speak up about policy issues in public—or among their family, friends, and work colleagues—when they believe their own point of view is not widely shared. This tendency is called the “spiral of silence.”
Some social media creators and supporters have hoped that social media platforms like Facebook and Twitter might produce different enough discussion venues that those with minority views might feel freer to express their opinions, thus broadening public discourse and adding new perspectives to everyday discussion of political issues.
We set out to study this by conducting a survey of 1,801 adults. It focused on one important public issue: Edward Snowden’s 2013 revelations of widespread government surveillance of Americans’ phone and email records. We selected this issue because other surveys by the Pew Research Center at the time we were fielding this poll showed that Americans were divided over whether the NSA contractor’s leaks about surveillance were justified and whether the surveillance policy itself was a good or bad idea. For instance, Pew Research found in one survey that 44% said the release of classified information harms the public interest while 49% said it serves the public interest.
The survey sought people’s opinions about the Snowden leaks, their willingness to talk about the revelations in various in-person and online settings, and their perceptions of the views of those around them in a variety of online and offline contexts.
This survey’s findings produced several major insights:
- People were less willing to discuss the Snowden-NSA story in social media than they were in person. 86% of Americans were willing to have an in-person conversation about the surveillance program, but just 42% of Facebook and Twitter users were willing to post about it on those platforms…”
Assessing Social Value in Open Data Initiatives: A Framework
Paper by Gianluigi Viscusi, Marco Castelli and Carlo Batini in the journal Future Internet: “Open data initiatives are characterized, in several countries, by a great extension of the number of data sets made available for access by public administrations, constituencies, businesses and other actors, such as journalists, international institutions and academics, to mention a few. However, most of the open data sets rely on selection criteria, based on a technology-driven perspective, rather than a focus on the potential public and social value of data to be published. Several experiences and reports confirm this issue, such as those of the Open Data Census. However, there are also relevant best practices. The goal of this paper is to investigate the different dimensions of a framework suitable to support public administrations, as well as constituencies, in assessing and benchmarking the social value of open data initiatives. The framework is tested on three initiatives, referring to three different countries, Italy, the United Kingdom and Tunisia. The countries have been selected to provide a focus on European and Mediterranean countries, considering also the difference in legal frameworks (civil law vs. common law countries).”
Google's fact-checking bots build vast knowledge bank
Hal Hodson in the New Scientist: “The search giant is automatically building Knowledge Vault, a massive database that could give us unprecedented access to the world’s facts
GOOGLE is building the largest store of knowledge in human history – and it’s doing so without any human help. Instead, Knowledge Vault autonomously gathers and merges information from across the web into a single base of facts about the world, and the people and objects in it.
The breadth and accuracy of this gathered knowledge is already becoming the foundation of systems that allow robots and smartphones to understand what people ask them. It promises to let Google answer questions like an oracle rather than a search engine, and even to turn a new lens on human history.
Knowledge Vault is a type of “knowledge base” – a system that stores information so that machines as well as people can read it. Where a database deals with numbers, a knowledge base deals with facts. When you type “Where was Madonna born” into Google, for example, the place given is pulled from Google’s existing knowledge base.
This existing base, called Knowledge Graph, relies on crowdsourcing to expand its information. But the firm noticed that growth was stalling; humans could only take it so far. So Google decided it needed to automate the process. It started building the Vault by using an algorithm to automatically pull in information from all over the web, using machine learning to turn the raw data into usable pieces of knowledge.
Knowledge Vault has pulled in 1.6 billion facts to date. Of these, 271 million are rated as “confident facts”, to which Google’s model ascribes a more than 90 per cent chance of being true. It does this by cross-referencing new facts with what it already knows.
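The article describes Knowledge Vault as a store of facts, each with a model-assigned probability of being true. The toy sketch below is not Google’s code, and the facts and scores in it are invented; it only shows the basic shape of such a knowledge base: subject-predicate-object triples plus a confidence value, filtered at the 90 per cent threshold the article mentions.

```python
from collections import namedtuple

# A fact as a (subject, predicate, object) triple with a model-assigned confidence.
Fact = namedtuple("Fact", ["subject", "predicate", "obj", "confidence"])

facts = [
    Fact("Madonna", "born_in", "Bay City, Michigan", 0.97),   # scores invented for illustration
    Fact("Madonna", "born_in", "Detroit, Michigan", 0.35),
    Fact("Eiffel Tower", "located_in", "Paris", 0.99),
]

def confident_facts(store, threshold=0.90):
    """Return only the facts the model rates above the confidence threshold."""
    return [f for f in store if f.confidence > threshold]

for fact in confident_facts(facts):
    print(f"{fact.subject} --{fact.predicate}--> {fact.obj} ({fact.confidence:.0%})")
```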
“It’s a hugely impressive thing that they are pulling off,” says Fabian Suchanek, a data scientist at Télécom ParisTech in France.
Google’s Knowledge Graph is currently bigger than the Knowledge Vault, but it only includes manually integrated sources such as the CIA Factbook.
Knowledge Vault offers Google fast, automatic expansion of its knowledge – and it’s only going to get bigger. As well as the ability to analyse text on a webpage for facts to feed its knowledge base, Google can also peer under the surface of the web, hunting for hidden sources of data such as the figures that feed Amazon product pages, for example.
Tom Austin, a technology analyst at Gartner in Boston, says that the world’s biggest technology companies are racing to build similar vaults. “Google, Microsoft, Facebook, Amazon and IBM are all building them, and they’re tackling these enormous problems that we would never even have thought of trying 10 years ago,” he says.
The potential of a machine system that has the whole of human knowledge at its fingertips is huge. One of the first applications will be virtual personal assistants that go way beyond what Siri and Google Now are capable of, says Austin…”
Data Never Sleeps 2.0 (Infographic)
Domo: “A few years ago, Domo created a wildly popular infographic that catalogued how much data is created by common web services every minute. Since the internet landscape changes so quickly, we thought it would be interesting to revisit the topic and see what’s changed, through the same ‘one minute’ lens…”
Big Data: Google Searches Predict Unemployment in Finland
Paper by Tuhkuri, Joonas: “There are over 3 billion searches globally on Google every day. This report examines whether Google search queries can be used to predict the present and the near future unemployment rate in Finland. Predicting the present and the near future is of interest, as the official records of the state of the economy are published with a delay. To assess the information contained in Google search queries, the report compares a simple predictive model of unemployment to a model that contains a variable, Google Index, formed from Google data. In addition, cross-correlation analysis and Granger-causality tests are performed. Compared to a simple benchmark, Google search queries improve the prediction of the present by 10%, measured by mean absolute error. Moreover, predictions using search terms perform 39% better than the benchmark for near-future unemployment 3 months ahead. Google search queries also tend to improve prediction accuracy around turning points. The results suggest that Google searches contain useful information about the present and near-future unemployment rate in Finland.”
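A hedged sketch of the kind of comparison the paper describes: a simple autoregressive benchmark versus the same model augmented with a Google Index regressor, scored by mean absolute error. The data below is synthetic and the models are deliberately simplified; this illustrates the setup only, not Tuhkuri’s actual specification.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Synthetic monthly unemployment rate and a noisy "Google Index" that tracks it in real time.
n = 120
unemployment = 8 + np.cumsum(rng.normal(0, 0.1, n))
google_index = unemployment + rng.normal(0, 0.3, n)

y = unemployment[1:]                       # value to nowcast
lag_only = unemployment[:-1].reshape(-1, 1)                      # benchmark: last month's rate
lag_plus_google = np.column_stack([unemployment[:-1], google_index[1:]])  # + current searches

split = 90  # train on the first 90 months, evaluate on the rest

def mae(X):
    model = LinearRegression().fit(X[:split], y[:split])
    return mean_absolute_error(y[split:], model.predict(X[split:]))

print("benchmark MAE:   ", round(mae(lag_only), 3))
print("with Google MAE: ", round(mae(lag_plus_google), 3))
```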
Crowd-Sourced, Gamified Solutions to Geopolitical Issues
Gamification Corp: “Daniel Green, co-founder and CTO of Wikistrat, spoke at GSummit 2014 on an intriguing topic: How Gamification Motivates All Age Groups: Or How to Get Retired Generals to Play Games Alongside Students and Interns.
Wikistrat, a crowdsourced consulting company, leverages a worldwide network of experts from various industries to solve some of the world’s geopolitical problems through the power of gamification. Wikistrat also leverages fun, training, mentorship, and networking as core concepts in their company.
Dan (@wsdan) spoke with TechnologyAdvice host Clark Buckner about Wikistrat’s work, origins, what clients can expect from working with Wikistrat, and how gamification correlates with big data and business intelligence. Listen to the podcast and read the summary below:
Wikistrat aims to solve a common problem faced by most governments and organizations when generating strategies: “groupthink.” Such entities can devise a diverse set of strategies, but they always seem to find their resolution in the most popular answer.
In order to break group thinking, Wikistrat carries out geopolitical simulations that work around “collaborative competition.” The process involves:
- Securing analysts: Wikistrat recruits a diverse group of analysts who are experts in certain fields and located in different strategic places.
- Competing with ideas: These analysts are placed in an online environment where, instead of competing with each other, one analyst contributes an idea, then other analysts create 2-3 more ideas based on the initial idea.
- Breaking group thinking: Now the competition becomes only about ideas. People champion the ideas they care about rather than arguing with other analysts. That’s when Wikistrat breaks group thinking and helps their clients discover ideas they may have never considered before.
Gamification occurs when analysts create different scenarios for a specific angle or question the client raises. Plus, Wikistrat’s global analyst coverage is so good that they tout having at least one expert in every country. They accomplished this by allowing anyone—not just four-star generals—to register as an analyst. However, applicants must submit a resume and a writing sample, as well as pass a face-to-face interview….”
Out in the Open: This Man Wants to Turn Data Into Free Food (And So Much More)
Klint Finley in Wired: “Let’s say your city releases a list of all trees planted on its public property. It would be a godsend—at least in theory. You could filter the data into a list of all the fruit and nut trees in the city, transfer it into an online database, and create a smartphone app that helps anyone find free food.
In far too many cases, the data just sits there on a computer server, unseen and unused. Sometimes, no one knows about the data, or no one knows what to do with it. Other times, the data is just too hard to work with. If you’re building that free food app, how do you update your database when the government releases a new version of the spreadsheet? And if you let people report corrections to the data, how do you contribute that data back to the city?
These are the sorts of problems that obsess 25-year-old software developer Max Ogden, and they’re the reason he built Dat, a new piece of open source software that seeks to restart the open data revolution. Basically, Dat is a way of synchronizing data between two or more sources, tracking any changes to that data, and handling transformations from one data format to another. The aim is a simple one: Ogden wants to make it easier for governments to share their data with a world of software developers.
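A conceptual sketch of the “sync and track changes” problem Dat targets, using the article’s city-trees example. This is not Dat itself (Dat is written in JavaScript on top of LevelDB), and the record fields and keys are hypothetical; it just shows what detecting additions, removals and edits between two releases of a dataset looks like.

```python
# Two releases of a hypothetical keyed dataset of city trees (key = tree id).
old_release = {
    "t1": {"species": "apple", "location": "5th & Main"},
    "t2": {"species": "oak", "location": "Elm Park"},
}
new_release = {
    "t1": {"species": "apple", "location": "5th & Main St"},   # edited
    "t3": {"species": "walnut", "location": "River Walk"},     # added; t2 removed
}

def diff(old, new):
    """Return the changes needed to bring `old` up to date with `new`."""
    added   = {k: new[k] for k in new.keys() - old.keys()}
    removed = sorted(old.keys() - new.keys())
    changed = {k: new[k] for k in old.keys() & new.keys() if old[k] != new[k]}
    return {"added": added, "removed": removed, "changed": changed}

changes = diff(old_release, new_release)
print(changes)

# Applying the changes keeps a local copy (e.g. the free-food app's database) in sync.
local = dict(old_release)
local.update(changes["added"])
local.update(changes["changed"])
for key in changes["removed"]:
    local.pop(key, None)
assert local == new_release
```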
That’s just the sort of thing that government agencies are looking for, says Waldo Jaquith, the director of the US Open Data Institute, the non-profit that is now hosting Dat…
Git is a piece of software originally written by Linux creator Linus Torvalds. It keeps track of code changes and makes it easier to integrate code submissions from outside developers. Ogden realized what developers needed wasn’t a GitHub for data, but a Git for data. And that’s what Dat is.
Instead of CouchDB, Dat relies on a lightweight, open-source data storage system from Google called LevelDB. The rest of the software was written in JavaScript by Ogden and his growing number of collaborators, which enables them to keep things minimal and easily run the software on multiple operating systems, including Windows, Linux and Macintosh OS X….”
Twitter Analytics Project HealthMap Outperforming WHO in Ebola Tracking
HIS Talk: “HealthMap, a collaborative data analytics project launched in 2006 between Harvard Medical School and Boston Children’s Hospital, has been quietly tracking the recent Ebola outbreak in Western Africa with notable accuracy, beating the World Health Organization’s own tracking efforts by two weeks in some instances.
HealthMap aggregates information from a variety of online sources to plot real-time disease outbreaks. Currently, the platform analyzes data from the World Health Organization, Google News, and GeoSentinel, a global surveillance platform that tracks geographic shifts in diseases carried by travelers, foreign visitors, and immigrants. The analytics project also got a new source of feeder data this February when Twitter announced that the HealthMap project had been selected as a Twitter Data Grant recipient, which gives the 45 epidemiologists working on the project access to the “fire hose” of unfiltered data generated from Twitter’s 500 million daily tweets….”
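HealthMap’s core idea, aggregating outbreak mentions from heterogeneous feeds into a location-and-time signal, can be sketched very roughly as below. The source names and records are invented for illustration; the real system’s parsing, geocoding and classification are far more involved.

```python
from collections import Counter

# Invented items standing in for parsed feed entries (WHO bulletins, news articles, tweets).
feed_items = [
    {"source": "who",     "disease": "ebola", "country": "Guinea",       "date": "2014-03-25"},
    {"source": "news",    "disease": "ebola", "country": "Liberia",      "date": "2014-03-30"},
    {"source": "twitter", "disease": "ebola", "country": "Guinea",       "date": "2014-03-22"},
    {"source": "twitter", "disease": "ebola", "country": "Sierra Leone", "date": "2014-05-26"},
]

# Count mentions per country to get a crude signal of where reports are clustering,
# and note the earliest report seen across all feeds.
mentions = Counter(item["country"] for item in feed_items if item["disease"] == "ebola")
earliest = min(item["date"] for item in feed_items)

print("mentions by country:", dict(mentions))
print("earliest report seen:", earliest)
```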