Why This Company Is Crowdsourcing, Gamifying The World's Most Difficult Problems


FastCompany: “The biggest consultancy firms–the McKinseys and Janeses of the world–make many millions of dollars predicting the future and writing what-if reports for clients. This model is built on the idea that those companies know best–and that information and ideas should be handed down from on high.
But one consulting house, Wikistrat, is upending the model: Instead of using a stable of in-house analysts, the company crowdsources content and pays the crowd for its time. Wikistrat’s hundreds of analysts–primarily consultants, academics, journalists, and retired military personnel–are compensated for participating in what they call “crowdsourced simulations.” In other words, make money for brainstorming.

According to Joel Zamel, Wikistrat’s founder, approximately 850 experts in various fields rotate in and out of different simulations and project exercises for the company. While participating in a crowdsourced simulation, consultants are paid a flat fee plus performance bonuses based on a gamification engine in which experts compete to win extra cash. The company declined to reveal its fee scale, but as of 2011 bonus money appears to be in the $10,000 range.
Zamel characterizes the company’s clients as a mix of government agencies worldwide and multinational corporations. The simulations are semi-anonymous for players: consultants don’t know who their paper is being written for or who the end consumer is, but clients know which of Wikistrat’s contestants are participating in the brainstorm exercise. Once an exercise is over, full-time employees at Wikistrat convert its discussions into proper reports for clients.
“We’ve developed a quite significant crowd network and a lot of functionality into the platform,” Zamel tells Fast Company. “It uses a gamification engine we created that incentivizes analysts by ranking them at different levels for the work they do on the platform. They are immediately rewarded through the engine, and we also track granular changes made in real time. This allows us to track analyst activity and encourages them to put time and energy into Wiki analysis.” Zamel says projects typically run between three and four weeks, with between 50 and 100 analysts working on a project, generally for between five and 12 hours per week. Most of the analysts, he says, view this as a side income on top of their regular work at day jobs, but some do much more: Zamel cited one PhD candidate in Australia working 70 hours a week on one project instead of 10 to 15 hours.
Much of Wikistrat’s output is related to current events. Although Zamel says the bulk of their reports are written for clients and not available for public consumption, Wikistrat does run frequent public simulations as a way of attracting publicity and recruiting talent for the organization. Their most recent crowdsourced project is called Myanmar Moving Forward and runs from November 25 to December 9. According to Wikistrat, they are asking their “Strategic community to map out Myanmar’s current political risk factor and possible futures (positive, negative, or mixed) for the new democracy in 2015. The simulation is designed to explore the current social, political, economic, and geopolitical threats to stability–i.e. its political risk–and to determine where the country is heading in terms of its social, political, economic, and geopolitical future.”…

Selected Readings on Crowdsourcing Data


The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of crowdsourcing data was originally published in 2013.

As institutions seek to improve decision-making through data and put public data to use to improve the lives of citizens, new tools and projects are allowing citizens to play a role in both the collection and utilization of data. Participatory sensing and other citizen data collection initiatives, notably in the realm of disaster response, are allowing citizens to crowdsource important data, often using smartphones, that would be either impossible or burdensomely time-consuming for institutions to collect themselves. Civic hacking, often performed in hackathon events, on the other hand, is a growing trend in which governments encourage citizens to transform data from government and other sources into useful tools to benefit the public good.

Annotated Selected Reading List (in alphabetical order)

Baraniuk, Chris. “Power Politechs.” New Scientist 218, no. 2923 (June 29, 2013): 36–39. http://bit.ly/167ul3J.

  • In this article, Baraniuk discusses civic hackers, “an army of volunteer coders who are challenging preconceptions about hacking and changing the way your government operates. In a time of plummeting budgets and efficiency drives, those in power have realised they needn’t always rely on slow-moving, expensive outsourcing and development to improve public services. Instead, they can consider running a hackathon, at which tech-savvy members of the public come together to create apps and other digital tools that promise to enhance the provision of healthcare, schools or policing.”
  • While recognizing that “civic hacking has established a pedigree that demonstrates its potential for positive impact,” Baraniuk argues that a “more rigorous debate over how this activity should evolve, or how authorities ought to engage in it” is needed.

Barnett, Brandon, Muki Hansteen Izora, and Jose Sia. “Civic Hackathon Challenges Design Principles: Making Data Relevant and Useful for Individuals and Communities.” Hack for Change, https://bit.ly/2Ge6z09.

  • In this paper, researchers from Intel Labs offer “guiding principles to support the efforts of local civic hackathon organizers and participants as they seek to design actionable challenges and build useful solutions that will positively benefit their communities.”
  • The authors’ proposed design principles are:
    • Focus on the specific needs and concerns of people or institutions in the local community. Solve their problems and challenges by combining different kinds of data.
    • Seek out data far and wide (local, municipal, state, institutional, non-profits, companies) that is relevant to the concern or problem you are trying to solve.
    • Keep it simple! This can’t be overstated. Focus [on] making data easily understood and useful to those who will use your application or service.
    • Enable users to collaborate and form new communities and alliances around data.

Buhrmester, Michael, Tracy Kwang, and Samuel D. Gosling. “Amazon’s Mechanical Turk: A New Source of Inexpensive, Yet High-Quality, Data?” Perspectives on Psychological Science 6, no. 1 (January 1, 2011): 3–5. http://bit.ly/H56lER.

  • This article examines the capability of Amazon’s Mechanical Turk to act as a source of data for researchers, in addition to its traditional role as a microtasking platform.
  • The authors examine the demographics of MTurkers and find that “(a) MTurk participants are slightly more demographically diverse than are standard Internet samples and are significantly more diverse than typical American college samples; (b) participation is affected by compensation rate and task length, but participants can still be recruited rapidly and inexpensively; (c) realistic compensation rates do not affect data quality; and (d) the data obtained are at least as reliable as those obtained via traditional methods.”
  • The paper concludes that, just as MTurk can be a strong tool for crowdsourcing tasks, data derived from MTurk can be high quality while also being inexpensive and obtained rapidly.

Goodchild, Michael F., and J. Alan Glennon. “Crowdsourcing Geographic Information for Disaster Response: A Research Frontier.” International Journal of Digital Earth 3, no. 3 (2010): 231–241. http://bit.ly/17MBFPs.

  • This article examines issues of data quality in the face of the new phenomenon of geographic information being generated by citizens, in order to examine whether this data can play a role in emergency management.
  • The authors argue that “[d]ata quality is a major concern, since volunteered information is asserted and carries none of the assurances that lead to trust in officially created data.”
  • Because time is crucial during emergencies, the authors argue that “the risks associated with volunteered information are often outweighed by the benefits of its use.”
  • The paper examines four wildfires in Santa Barbara in 2007-2009 to discuss current challenges with volunteered geographical data, and concludes that further research is required to answer how volunteer citizens can be used to provide effective assistance to emergency managers and responders.

Hudson-Smith, Andrew, Michael Batty, Andrew Crooks, and Richard Milton. “Mapping for the Masses: Accessing Web 2.0 Through Crowdsourcing.” Social Science Computer Review 27, no. 4 (November 1, 2009): 524–538. http://bit.ly/1c1eFQb.

  • This article describes the way in which “we are harnessing the power of web 2.0 technologies to create new approaches to collecting, mapping, and sharing geocoded data.”
  • The authors examine GMapCreator and MapTube, which allow users to do a range of map-related functions such as create new maps, archive existing maps, and share or produce bottom-up maps through crowdsourcing.
  • They conclude that “these tools are helping to define a neogeography that is essentially ‘mapping for the masses,’” while noting that there are many issues of quality, accuracy, copyright, and trust that will influence the impact of these tools on map-based communication.

Kanhere, Salil S. “Participatory Sensing: Crowdsourcing Data from Mobile Smartphones in Urban Spaces.” In Distributed Computing and Internet Technology, edited by Chittaranjan Hota and Pradip K. Srimani, 19–26. Lecture Notes in Computer Science 7753. Springer Berlin Heidelberg, 2013. https://bit.ly/2zX8Szj.

  • This paper provides a comprehensive overview of participatory sensing — a “new paradigm for monitoring the urban landscape” in which “ordinary citizens can collect multi-modal data streams from the surrounding environment using their mobile devices and share the same using existing communications infrastructure.”
  • In addition to examining a number of innovative applications of participatory sensing, Kanhere outlines the following key research challenges:
    • Dealing with incomplete samples
    • Inferring user context
    • Protecting user privacy
    • Evaluating data trustworthiness
    • Conserving energy

Making Europe's cities smarter


Press Release: “At a conference today hosted by the European Commission, city leaders, CEOs and civil society leaders discussed the actions outlined in the “Smart Cities Strategic Implementation Plan” and how to put them into practice. The Commission announced that it will launch an ‘Invitation for Smart City and Community Commitments’ in spring 2014 to mobilise work on the action plan’s priorities. The plan is part of Europe’s fifth “Innovation Partnership”.
Commission Vice-President Siim Kallas, in charge of transport, said: “I am very pleased to see transport operators, telecoms companies, vehicle manufacturers, city planners, energy companies and researchers all gathered in one room to discuss the future of our cities. The Smart Cities initiative is a great opportunity to make changes happen for less congestion and better business opportunities in our cities. We need to keep up the momentum and move from plan to action now.”
Commission Vice-President Neelie Kroes, responsible for the Digital Agenda, said: “The future of infrastructure and city planning will be based on integrating ICT systems and using big data to make our cities better places to live and work. We need to base those new systems on open standards for hardware, software, data and services which this European Innovation Partnership will develop.”
Günther H. Oettinger, EU Commissioner for energy, said: “The European Innovation Partnership for Smart Cities and Communities is about making investments in sustainable development in as many cities as possible. Creating equal partnerships between cities and companies based on synergies between ICT, energy and mobility will lead to projects that make a real difference in our everyday lives.”
The Commission intends to make available approximately EUR 200 million for Smart Cities and communities in the 2014-2015 budgets of the Horizon 2020 research and innovation programme, to accelerate progress and enlarge the scale of roll-out of smart cities solutions. There will also be possibilities to access the European Structural and Investment Funds.
For more information: http://ec.europa.eu/eip/smartcities/”

Big Data needs Big Theory


Geoffrey West, former President of the Santa Fe Institute: “As the world becomes increasingly complex and interconnected, some of our biggest challenges have begun to seem intractable. What should we do about uncertainty in the financial markets? How can we predict energy supply and demand? How will climate change play out? How do we cope with rapid urbanization? Our traditional approaches to these problems are often qualitative and disjointed and lead to unintended consequences. To bring scientific rigor to the challenges of our time, we need to develop a deeper understanding of complexity itself….
The digital revolution is driving much of the increasing complexity and pace of life we are now seeing, but this technology also presents an opportunity. The ubiquity of cell phones and electronic transactions, the increasing use of personal medical probes, and the concept of the electronically wired “smart city” are already providing us with enormous amounts of data. With new computational tools and techniques to digest vast, interrelated databases, researchers and practitioners in science, technology, business and government have begun to bring large-scale simulations and models to bear on questions formerly out of reach of quantitative analysis, such as how cooperation emerges in society, what conditions promote innovation, and how conflicts spread and grow.
The trouble is, we don’t have a unified, conceptual framework for addressing questions of complexity. We don’t know what kind of data we need, nor how much, or what critical questions we should be asking. “Big data” without a “big theory” to go with it loses much of its potency and usefulness, potentially generating new unintended consequences.
When the industrial age focused society’s attention on energy in its many manifestations—steam, chemical, mechanical, and so on—the universal laws of thermodynamics came as a response. We now need to ask if our age can produce universal laws of complexity that integrate energy with information. What are the underlying principles that transcend the extraordinary diversity and historical contingency and interconnectivity of financial markets, populations, ecosystems, war and conflict, pandemics and cancer? An overarching predictive, mathematical framework for complex systems would, in principle, incorporate the dynamics and organization of any complex system in a quantitative, computable framework.
We will probably never make detailed predictions of complex systems, but coarse-grained descriptions that lead to quantitative predictions for essential features are within our grasp. We won’t predict when the next financial crash will occur, but we ought to be able to assign a probability of one occurring in the next few years. The field is in the midst of a broad synthesis of scientific disciplines, helping reverse the trend toward fragmentation and specialization, and is groping toward a more unified, holistic framework for tackling society’s big questions. The future of the human enterprise may well depend on it.”

Mexico City Open Database Improves Transit Efficiency, Helps Commuters


The World Bank: “Mexico City residents make 32 million vehicle trips a day, of which over 20 million are via public transport. These use 12 subway lines, four rapid transit lines, eight trolleybus and light rail lines, a suburban rail line, a hundred formal bus routes and over 1,400 “colectivo” minibus routes, along with 260 public bike stations. Since the 1970s, five separate agencies have supervised this network, grouped under SETRAVI, Mexico City’s public transit authority. And although each agency has made attempts to collect and store data on passenger counts, route licenses, travel times, and stop locations, these data have never been assembled in one place….

In November 2012, the Bank’s Latin America and Caribbean Transport Unit—with support from the Energy Sector Management Assistance Program (ESMAP)—began providing SETRAVI with technical assistance to develop a new digital platform to collect and manage urban transport data.  This new system is built to the General Transit Feed Specification (GTFS), the de facto standard for cities in recording transit data.
GTFS, created in 2005 by Google and the US city of Portland, Oregon, is an open standard that can be shared and used by anyone. It enables the collection, storage, publication and updating of information on transit routes, times, stops and other important public transport data.
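To make the format concrete, here is a minimal sketch (not drawn from the World Bank article) of how a developer might read two of the plain CSV files that make up a GTFS feed. The file and column names follow the public GTFS reference; the local directory path is assumed purely for illustration.

```python
import csv
from collections import defaultdict

# GTFS is a set of plain CSV files (stops.txt, routes.txt, trips.txt,
# stop_times.txt, ...) distributed as a zip archive; here we assume they
# have been unzipped into a local "gtfs/" directory (hypothetical path).

def load_stops(path="gtfs/stops.txt"):
    """Return a mapping of stop_id -> (stop_name, lat, lon)."""
    stops = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            stops[row["stop_id"]] = (
                row["stop_name"],
                float(row["stop_lat"]),
                float(row["stop_lon"]),
            )
    return stops

def departures_per_stop(path="gtfs/stop_times.txt"):
    """Count scheduled departures at each stop across all trips."""
    counts = defaultdict(int)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row["stop_id"]] += 1
    return counts

if __name__ == "__main__":
    stops = load_stops()
    counts = departures_per_stop()
    # Print the ten busiest stops by scheduled departures.
    for stop_id, n in sorted(counts.items(), key=lambda kv: -kv[1])[:10]:
        name, lat, lon = stops.get(stop_id, ("unknown", None, None))
        print(f"{name} ({stop_id}): {n} scheduled departures")
```

Because every file is ordinary CSV, the same pattern extends to routes.txt and trips.txt, which is part of what makes GTFS data straightforward to publish, update and reuse.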
Representatives from each transit agency were enrolled by SETRAVI to crisscross the capital, using TransitWand, an open-source app on their mobile phones, to collect real-time data such as routes, speed, location of bus stops and frequency of train departures.  The data collected were then fed into a data management portal and converted into GTFS.
Despite its simplicity and ease of use, there was one major hurdle to adapting GTFS for Mexico City. The standard was too rigid to incorporate data related to non-scheduled services such as the thousands of colectivo minibuses traversing the city.  As such, another objective of the World Bank scheme was to pilot a “GTFS-Lite” specification that could measure forms of transport that operated with flexible routes and stopping points.
With “GTFS-Lite”, Mexico City’s urban planners have access to comparable data on minibuses. This helps them visualize route configurations to determine where best to add or eliminate services, how to plan for integration with more structured transit services, regulate and improve service, and plan for the longer-term future.
Mexico City’s GTFS data have been made public, so that third party software developers can use them to innovate and create applications—such as trip planners and timetable publishers—that can be used on smartphones and other devices.
The GTFS feed for Mexico City will also help the city’s transit agencies develop practical open tools. For example, a real-time tracking tool that informs users of disruptions in the system and provides route change options has already been developed with World Bank assistance…”

New U.S. Open Government National Action Plan


The White House Fact Sheet: “In September 2011, President Obama joined the leaders of seven other nations in announcing the launch of the Open Government Partnership (OGP) – a global effort to encourage transparent, effective, and accountable governance.
Two years later, OGP has grown to 60 countries that have made more than 1000 commitments to improve the governance of more than two billion people around the globe.  OGP is now a global community of government reformers, civil society leaders, and business innovators working together to develop and implement ambitious open government reforms and advance good governance…
Today at the OGP summit in London, the United States announced a new U.S. Open Government National Action Plan that includes six ambitious new commitments that will advance these efforts even further.  Those commitments include expanding open data, modernizing the Freedom of Information Act (FOIA), increasing fiscal transparency, increasing corporate transparency, advancing citizen engagement and empowerment, and more effectively managing public resources.
Expand Open Data:  Open Data fuels innovation that grows the economy and advances government transparency and accountability.  Government data has been used by journalists to uncover variations in hospital billings, by citizens to learn more about the social services provided by charities in their communities, and by entrepreneurs building new software tools to help farmers plan and manage their crops.  Building upon the successful implementation of open data commitments in the first U.S. National Action Plan, the new Plan will include commitments to make government data more accessible and useful for the public, such as reforming how Federal agencies manage government data as a strategic asset, launching a new version of Data.gov, and expanding agriculture and nutrition data to help farmers and communities.
Modernize the Freedom of Information Act (FOIA):  The FOIA encourages accountability through transparency and represents a profound national commitment to open government principles.  Improving FOIA administration is one of the most effective ways to make the U.S. Government more open and accountable.  Today, the United States announced a series of commitments to further modernize FOIA processes, including launching a consolidated online FOIA service to improve customers’ experience and making training resources available to FOIA professionals and other Federal employees.
Increase Fiscal Transparency:   The Administration will further increase the transparency of where Federal tax dollars are spent by making federal spending data more easily available on USASpending.gov; facilitating the publication of currently unavailable procurement contract information; and enabling Americans to more easily identify who is receiving tax dollars, where those entities or individuals are located, and how much they receive.
Increase Corporate Transparency:  Preventing criminal organizations from concealing the true ownership and control of businesses they operate is a critical element in safeguarding U.S. and international financial markets, addressing tax avoidance, and combatting corruption in the United States and abroad.  Today we committed to take further steps to enhance transparency of legal entities formed in the United States.
Advance Citizen Engagement and Empowerment:  OGP was founded on the principle that an active and robust civil society is critical to open and accountable governance.  In the next year, the Administration will intensify its efforts to roll back and prevent new restrictions on civil society around the world in partnership with other governments, multilateral institutions, the philanthropy community, the private sector, and civil society.  This effort will focus on improving the legal and regulatory framework for civil society, promoting best practices for government-civil society collaboration, and conceiving of new and innovative ways to support civil society globally.
More Effectively Manage Public Resources:   Two years ago, the Administration committed to ensuring that American taxpayers receive every dollar due for the extraction of the nation’s natural resources by committing to join the Extractive Industries Transparency Initiative (EITI).  We continue to work toward achieving full EITI compliance in 2016.  Additionally, the U.S. Government will disclose revenues on geothermal and renewable energy and discuss future disclosure of timber revenues.
For more information on OGP, please visit www.opengovpartnership.org or follow @opengovpart on Twitter.”
See also White House Plans a Single FOIA Portal Across Government

Making government simpler is complicated


Mike Konczal in The Washington Post: “Here’s something a politician would never say: “I’m in favor of complex regulations.” But what would the opposite mean? What would it mean to have “simple” regulations?

There are two definitions of “simple” that have come to dominate liberal conversations about government. One is the idea that we should make use of “nudges” in regulation. The other is the idea that we should avoid “kludges.” As it turns out, however, these two definitions conflict with each other — and the battle between them will dominate conversations about the state in the years ahead.

The case for “nudges”

The first definition of a “simple” regulation is one emphasized in Cass Sunstein’s recent book titled Simpler: The Future of Government (also see here). A simple policy is one that simply “nudges” people into one choice or another using a variety of default rules, disclosure requirements, and other market structures. Think, for instance, of rules that require fast-food restaurants to post calories on their menus, or a mortgage that has certain terms clearly marked in disclosures.

These sorts of regulations are deemed “choice preserving.” Consumers are still allowed to buy unhealthy fast-food meals or sign up for mortgages they can’t reasonably afford. The regulations are just there to inform people about their choices. These rules are designed to keep the market “free,” where all possibilities are ultimately possible, although there are rules to encourage certain outcomes.
In his book, however, Sunstein adds that there’s another very different way to understand the term “simple.” What most people mean when they think of simple regulations is a rule that is “simple to follow.” Usually a rule is simple to follow because it outright excludes certain possibilities and thus ensures others. Which means, by definition, it limits certain choices.

The case against “kludges”
This second definition of simple plays a key role in political scientist Steve Teles’ excellent recent essay, “Kludgeocracy in America.” For Teles, a “kludge” is a “clumsy but temporarily effective” fix for a policy problem. (The term comes from computer science.) These kludges tend to pile up over time, making government cumbersome and inefficient overall.
Teles focuses on several ways that kludges are introduced into policy, with a particularly sharp focus on overlapping jurisdictions and the related mess of federal and state overlap in programs. But, without specifically invoking it, he also suggests that a reliance on “nudge” regulations can lead to more kludges.
After all, a non-kludge policy proposal is one that will be simple to follow and will clearly cause a certain outcome, with an obvious causality chain. This is in contrast to a web of “nudges” and incentives designed to try to guide certain outcomes.

Why “nudges” aren’t always simpler
The distinction between the two is clear if we take a specific example core to both definitions: retirement security.
For Teles, “one of the often overlooked benefits of the Social Security program… is that recipients automatically have taxes taken out of their paychecks, and, then without much effort on their part, checks begin to appear upon retirement. It’s simple and direct. By contrast, 401(k) retirement accounts… require enormous investments of time, effort, and stress to manage responsibly.”

Yet 401(k)s are the ultimate fantasy laboratory for nudge enthusiasts. A whole cottage industry has grown up around figuring out ways to default people into certain contributions, on designing the architecture of choices of investments, and trying to effortlessly and painlessly guide people into certain savings.
Each approach emphasizes different things. If you want to focus your energy on making people better consumers and market participants, expanding our government’s resources and energy into 401(k)s is a good choice. If you want to focus on providing retirement security directly, expanding Social Security is a better choice.
The first is “simple” in that it doesn’t exclude any possibility but encourages market choices. The second is “simple” in that it is easy to follow, and the result is simple as well: a certain amount of security in old age is provided directly. This second approach understands the government as playing a role in stopping certain outcomes, and providing for the opposite of those outcomes, directly….

Why it’s hard to create “simple” regulations
Like all supposed binaries, this is really a continuum. Taxes, for instance, sit somewhere in the middle of the two definitions of “simple.” They tend to preserve the market as it is but raise (or lower) the price of certain goods, influencing choices.
And reforms and regulations are often most effective when there’s a combination of these two types of “simple” rules.
Consider an important new paper, “Regulating Consumer Financial Products: Evidence from Credit Cards,” by Sumit Agarwal, Souphala Chomsisengphet, Neale Mahoney and Johannes Stroebel. The authors analyze the CARD Act of 2009, which regulated credit cards. They found that the nudge-type disclosure rules “increased the number of account holders making the 36-month payment value by 0.5 percentage points.” However, more direct regulations on fees had an even bigger effect, saving U.S. consumers $20.8 billion per year with no notable reduction in credit access…..
The balance between these two approaches of making regulations simple will be front and center as liberals debate the future of government, whether they’re trying to pull back on the “submerged state” or consider the implications for privacy. The debate over the best way for government to be simple is still far from over.”

If big data is an atomic bomb, disarmament begins in Silicon Valley


From GigaOM: “Big data is like atomic energy, according to scientist Albert-László Barabási in a Monday column on Politico. It’s very beneficial when used ethically, and downright destructive when turned into a weapon. He argues scientists can help resolve the damage done by government spying by embracing the principles of nuclear nonproliferation that helped bring an end to Cold War fears and distrust.
Barabási’s analogy is rather poetic:

“Powered by the right type of Big Data, data mining is a weapon. It can be just as harmful, with long-term toxicity, as an atomic bomb. It poisons trust, straining everything from human relations to political alliances and free trade. It may target combatants, but it cannot succeed without sifting through billions of data points scraped from innocent civilians. And when it is a weapon, it should be treated like a weapon.”

I think he’s right, but I think the fight to disarm the big data bomb begins in places like Silicon Valley and Madison Avenue. And it’s not just scientists; all citizens should have a role…
I write about big data and data mining for a living, and I think the underlying technologies and techniques are incredibly valuable, even if the applications aren’t always ideal. On the one hand, advances in machine learning from companies such as Google and Microsoft are fantastic. On the other hand, Facebook’s newly expanded Graph Search makes Europe’s proposed right-to-be-forgotten laws seem a lot more sensible.
But it’s all within the bounds of our user agreements, and beauty is in the eye of the beholder.
Perhaps the reason we don’t vote with our feet by moving to web platforms that embrace privacy, even though we suspect it’s being violated, is that we really don’t know what privacy means. Instead of regulating what companies can and can’t do, perhaps lawmakers can mandate a degree of transparency that actually lets users understand how data is being used, not just what data is being collected. Great, some company knows my age, race, ZIP code and web history: What I really need to know is how it’s using that information to target, discriminate against or otherwise serve me.
An intelligent national discussion about the role of the NSA is probably in order. For all anyone knows, it could even turn out we’re willing to put up with more snooping than the government might expect. But until we get a handle on privacy from the companies we choose to do business with, I don’t think most Americans have the stomach for such a difficult fight.”

5 Ways Cities Are Using Big Data


Eric Larson in Mashable: “New York City released more than 200 high-value data sets to the public on Monday — a way, in part, to provide more content for open-sourced mapping projects like OpenStreetMap.
It’s one of many releases since Local Law 11 of 2012 passed in February, which calls for more transparency of the city government’s collected data.
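As an illustrative aside (not part of the Mashable piece): New York’s portal runs on Socrata, which exposes each published dataset over a simple JSON endpoint. The sketch below shows the general pattern; the dataset ID is a placeholder standing in for any of the released data sets.

```python
import json
import urllib.request

# Socrata exposes each dataset on data.cityofnewyork.us at
# /resource/<dataset-id>.json. The ID below is a placeholder (hypothetical);
# every dataset's page on the portal lists its real API endpoint.
DATASET_ID = "xxxx-xxxx"
URL = f"https://data.cityofnewyork.us/resource/{DATASET_ID}.json?$limit=5"

with urllib.request.urlopen(URL) as resp:
    rows = json.load(resp)

# Each row comes back as a JSON object keyed by the dataset's column names.
for row in rows:
    print(row)
```

Swapping in a real dataset ID from the portal is all that is needed to pull the first few rows of any published table.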
But it’s not just New York: Cities across the world, large and small, are utilizing big data sets — like traffic statistics, energy consumption rates and GPS mapping — to launch projects to help their respective communities.
We rounded up a few of our favorites below….

1. Seattle’s Power Consumption

The city of Seattle recently partnered with Microsoft and Accenture on a pilot project to reduce the area’s energy usage. Using Microsoft’s Azure cloud, the project will collect and analyze hundreds of data sets collected from four downtown buildings’ management systems.
With predictive analytics, then, the system will work to find out what’s working and what’s not — i.e. where energy can be used less, or not at all. The goal is to reduce power usage by 25%.
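For a sense of what that kind of analysis involves, here is a deliberately simplified sketch — not the Microsoft/Accenture system itself: given hourly meter readings, it builds an hour-of-day baseline and flags hours where consumption runs well above the expected level, the sort of signal an operator could use to find where energy can be used less.

```python
from statistics import mean
from collections import defaultdict

# Illustrative sketch only: a naive hour-of-day baseline for building energy
# use, with a simple rule that flags hours running well above expectation.

def hourly_baseline(readings):
    """readings: list of (hour_of_day, kwh). Returns mean kWh per hour of day."""
    by_hour = defaultdict(list)
    for hour, kwh in readings:
        by_hour[hour].append(kwh)
    return {hour: mean(vals) for hour, vals in by_hour.items()}

def flag_excess(readings, baseline, threshold=1.25):
    """Yield (hour, kwh, expected) where usage exceeds the baseline by 25%+."""
    for hour, kwh in readings:
        expected = baseline.get(hour)
        if expected and kwh > threshold * expected:
            yield hour, kwh, expected

if __name__ == "__main__":
    history = [(h, 100 + 5 * h) for h in range(24)] * 30   # toy training data
    today = [(9, 210.0), (10, 140.0), (18, 400.0)]         # toy new readings
    base = hourly_baseline(history)
    for hour, kwh, expected in flag_excess(today, base):
        print(f"hour {hour:02d}: {kwh:.0f} kWh vs. baseline {expected:.0f} kWh")
```

A production system would of course use richer models (weather, occupancy, per-zone sensors), but the underlying question — where does actual use diverge from what the model predicts — is the same.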

2. SpotHero

Finding parking spots — especially in big cities — is undoubtedly a headache.

SpotHero is an app, for both iOS and Android devices, that tracks down parking spots in a select number of cities. How it works: Users type in an address or neighborhood (say, Adams Morgan in Washington, D.C.) and are taken to a listing of available garages and lots nearby — complete with prices and time durations.
The app tracks availability in real-time, too, so a spot is updated in the system as soon as it’s snagged.
Seven cities are currently synced with the app: Washington, D.C., New York, Chicago, Baltimore, Boston, Milwaukee and Newark, N.J.

3. Adopt-a-Hydrant

Anyone who’s spent a winter in Boston will agree: it snows.

In January, the city’s Office of New Urban Mechanics released an app called Adopt-a-Hydrant. The program is mapped with every fire hydrant in the city proper — more than 13,000, according to a Harvard blog post — and lets residents pledge to shovel out one, or as many as they choose, in the almost inevitable event of a blizzard.
Once a pledge is made, volunteers receive a notification if their hydrant — or hydrants — become buried in snow.

4. Adopt-a-Sidewalk

Similar to Adopt-a-Hydrant, Chicago’s Adopt-a-Sidewalk app lets residents of the Windy City pledge to shovel sidewalks after snowfall. In a city just as notorious for snowstorms as Boston, it’s an effective way to ensure public spaces remain free of snow and ice — especially spaces belonging to the elderly or disabled.

If you’re unsure which part of town you’d like to “adopt,” just register on the website and browse the map — you’ll receive a pop-up notification for each street you swipe that’s still available.

5. Less Congestion for Lyon

Last year, researchers at IBM teamed up with the city of Lyon, France (about four hours south of Paris), to build a system that helps traffic operators reduce congestion on the road.

The system, called the “Decision Support System Optimizer (DSSO),” uses real-time traffic reports to detect and predict congestion. If an operator sees that a traffic jam is likely to occur, he or she can adjust traffic signals accordingly to keep the flow of cars moving smoothly.
It’s an especially helpful tool for emergencies — say, when an ambulance is en route to the hospital. Over time, the algorithms in the system will “learn” from its most successful recommendations, then apply that knowledge when making future predictions.”

Open-Government Laws Fuel Hedge-Fund Profits


Wall Street Journal: “Hedge Funds Are Using FOIA Requests to Obtain Nonpublic Information From Federal Agencies…When SAC Capital Advisors LP was weighing an investment in Vertex Pharmaceuticals Inc., the hedge-fund firm contacted a source it knew would provide nonpublic information without blinking: the federal government.
An investment manager for an SAC affiliate asked the Food and Drug Administration last December for any “adverse event reports” for Vertex’s recently approved cystic-fibrosis drug. Under the Freedom of Information Act, the agency had to hand over the material, which revealed no major problems. The bill: $72.50, cheaper than the price of two Vertex shares.
SAC and its affiliate, Sigma Capital Management LLC, snapped up 13,500 Vertex shares in the first quarter and options to buy 25,000 more, securities filings indicate. The stock rose that quarter, then surged 62% on a single day in April when Vertex announced positive results from safety tests on a separate cystic-fibrosis drug designed to be used in combination with the first.
Finance professionals have been pulling every lever they can these days to extract information from the government. Many have discovered that the biggest lever of all is the one available to everyone—the Freedom of Information Act—conceived by advocates of open government to shine light on how officials make decisions. FOIA is part of an array of techniques sophisticated investors are using to try to obtain potentially market-moving information about products, legislation, regulation and government economic statistics.
“It’s an information arms race,” says Les Funtleyder, a longtime portfolio manager and now partner at private-equity firm Poliwogg Holdings Inc. “It’s important to try every avenue. If anyone else is doing it, you need to, too.”
A review by The Wall Street Journal of more than 100,000 of the roughly three million FOIA requests filed over the past five years, including all of those sent to the FDA, shows that investors use the process to troll for all kinds of information. They ask the Environmental Protection Agency about pollution regulations, the Department of Energy about grants for energy-efficient vehicles, and the Securities and Exchange Commission about whether publicly held companies are under investigation. Such requests are perfectly legal.”
See also “Making FOIA More Free and Open” (Joel Gurin)