Open Data Could Unlock $230 Billion In Energy-Efficiency Savings


Jeff McMahon at Forbes: “Energy-efficiency startups just need access to existing data—on electricity usage, housing characteristics, renovations and financing—to unlock hundreds of billions of dollars in savings, two founders of startups said in Chicago Tuesday.
“One of the big barriers to scaling energy efficiency is the lack of data in the market,” said Andy Frank of Sealed, a startup that encourages efficiency improvements by guaranteeing homeowners a lower bill than they’re paying now.
In a forum hosted by the Energy Policy Institute at Chicago, Frank and Matt Gee, founder of Effortless Energy, advocated an open-energy-data warehouse that would collect anonymized data from utilities, cities, contractors, and financiers, to make the data available for research, government, and industry.
“There needs to be some sort of entity that organizes all this information and has it in some sort of standard format,” said Gee, whose startup pays for home improvements up front and then splits the savings with investors and the homeowner.
According to Gee, the current $9.5 billion energy-efficiency market operates without data on the actual savings it produces for homeowners. He outlined the current market like this:

  1. A regulatory body, usually a public utility commission, mandates that a utility spend money on efficiency.
  2. The utility passes on the cost to customers through an efficiency surcharge (this is how the $9.5 billion is raised).
  3. The utility hires a program implementer.
  4. The program implementer sends auditors to customer homes.
  5. Potential savings from improvements like new insulation or new appliances are estimated based on models.
  6. Those modeled estimates determine what the contractor can do in the home.
  7. The modeled estimates determine what financing is available.

In some cases, utilities will hire consultants to estimate the savings generated from these improvements. California utilities spend $40 million a year estimating savings, Gee said, but actual savings are neither verified nor integrated in the process.
“Nowhere in this process do actual savings enter,” Gee said. “They don’t drive anyone’s incentives, which is just absolutely astounding, right? The opportunity here is that energy efficiency actually pays for itself. It should be something that’s self-financed.”
For that to happen, the market needs reliable information on how much energy is currently being wasted and how much is saved by improvements….”
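The gap Gee describes — modeled estimates in, no measured savings out — is simple to close in principle once usage data is available. As a rough illustration only (the numbers and the bare annual-sum comparison below are invented, and real evaluations would also weather-normalize both periods), here is how realized savings could be checked against an audit model’s promise:

```python
# A minimal sketch of measured (vs. modeled) savings, assuming twelve months
# of utility-bill data on each side of a retrofit. All figures are invented.

def measured_savings(pre_kwh, post_kwh):
    """Realized annual savings: pre-retrofit usage minus post-retrofit usage."""
    return sum(pre_kwh) - sum(post_kwh)

pre  = [950, 900, 780, 600, 450, 400, 420, 430, 500, 640, 800, 920]  # kWh/month
post = [790, 760, 650, 520, 400, 360, 380, 390, 440, 540, 660, 770]

modeled_estimate = 1500  # kWh/year promised by the audit model
actual = measured_savings(pre, post)
print(f"modeled: {modeled_estimate} kWh; measured: {actual} kWh; "
      f"realization rate: {actual / modeled_estimate:.0%}")  # 75%
```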

Sharing in a Changing Climate


Helen Goulden in the Huffington Post: “Every month, a social research agency conducts a public opinion survey on 30,000 UK households. As part of this, households are asked what issues they think are the most important; things such as crime, unemployment, inequality, public health etc. Climate change has ranked so consistently low on these surveys that they don’t bother asking any more.
On first glance, it would appear that most people don’t care about a changing climate.
Yet, that’s simply not true. Many people care deeply, but fleetingly – in the same way they may consider their own mortality before getting back to thinking about what to have for tea. And others care, but fail to change their behaviour in a way that’s proportionate to their concerns. Certainly that’s my unhappy stomping ground.
Besides, what choices do we really have? Even the most progressive, large organisations have been glacial in moving towards any real form of sustainability. For many years we have struggled with the Frankenstein-like task of stitching ‘sustainability’ onto existing business and economic models and the results, I think, speak for themselves.
That the Collaborative Economy presents us with an opportunity – in Napster-like ways – to disrupt and evolve toward something more sustainable is a compelling idea. It invites us to look out to a future filled with opportunities to reconfigure how we produce, consume and dispose of the things we want and need to live, work and play.
Whether the journey toward sustainability is short or long, it will be punctuated with a good degree of turbulence, disruption and some largely unpredictable events. How we deal with those events and what role communities, collaboration and technology play may set the framework and tone for how that future evolves. Crises and disruption to our entrenched living patterns present ripe opportunities for innovation and space for adopting new behaviours and practices.
No-one is immune from the impact of erratic and extreme weather events. And if we accept that these events are going to increase in frequency, we must draw the conclusion that emergency state and government resources may be drawn more thinly over time.
Across the world, there is a fairly well organised state and international infrastructure for dealing with emergencies, involving everyone from the Disaster Emergency Committee, the UN, central and local government and municipalities, not-for-profit organisations and of course, the military. There is a clear reason why we need this kind of state emergency response; I’m not suggesting that we don’t.
But through the rise of open data and mass participation in platforms that share location, identity and inventory, we are creating a new kind of mesh; a social and technological infrastructure that could considerably strengthen our ability to respond to unpredictable events.
In the last few years we have seen a sharp rise in the number of tools, crowdsourcing platforms and open-source sensor networks focused on observing, predicting or responding to extreme events:
• Apps like ShakeAlert, which gives a minute’s warning that an earthquake is coming
• Rio’s sensor network, which measures rainfall outside the city and can predict flooding
• The open-source Arduino platform, which is being used to crowdsource weather and pollution data
• Propeller Health, which is using sensors on asthma inhalers to crowdsource pollution hotspots
• Safecast, which was developed for crowdsourcing radiation levels in Japan
Increasingly we have the ability to deploy open source, distributed and networked sensors and devices for capturing and aggregating data that can help us manage our responses to extreme weather (and indeed, other kinds of) events.
Look at platforms like LocalMind and Foursquare. Today, I might be using them to find out whether there’s a free table at a bar or what restaurant my friends are in. But these kinds of social locative platforms present an infrastructure that could be life-saving in any situation where you need to know where to go quickly to get out of trouble. We know that in the wake of disruptive events and disasters, like bombings and riots, people now intuitively and instinctively take to technology to find out what’s happening, where to go and how to co-ordinate response efforts.
During the 2013 BART strike in San Francisco, ventures like LiquidSpace and SideCar enabled people to quickly find alternative places to work, or alternatives to public transport, to mitigate the inconvenience of the strike. The strike was a minor inconvenience compared to the impact of a hurricane and flood, but in both kinds of instances ventures decided to waive their fees; as did Airbnb in 2012, when 1,400 New York Airbnb hosts opened their doors to people who had been left homeless by Hurricane Sandy.
The impulse to help is not new. The matching of people’s offers of help and resources to on-the-ground need, in real time, is.”

Sammies finalists are harnessing technology to help the public


Lisa Rein in the Washington Post: “One team of federal agents led Medicare investigations that resulted in more than 600 convictions in South Florida, recovering hundreds of millions of dollars. Another official boosted access to burial sites for veterans across the country. And one guided an initiative to provide safe drinking water to 5 million people in Uganda and Kenya. These are some of the 33 individuals and teams of federal employees nominated for the 13th annual Samuel J. Heyman Service to America Medals, among the highest honors in government. The 2014 finalists reflect the achievements of public servants in fields from housing to climate change, their work conducted in Washington and locations as far-flung as Antarctica and Alabama…
Many of them have excelled in harnessing new technology in ways that are pushing the limits of what government thought was possible even a few years ago. Michael Byrne of the Federal Communications Commission, for example, put detailed data about broadband availability in the hands of citizens and policymakers using interactive online maps and other visualizations. At the Environmental Protection Agency, Douglas James Norton made water quality data that had never been public available on the Web for citizens, scientists and state agencies.”

The Surprising Accuracy Of Crowdsourced Predictions About The Future


Adele Peters in FastCo-Exist: “If you have a question about what’s going to happen next in Syria or North Korea, you might get more accurate predictions by asking a group of ordinary people than from foreign policy experts or even, possibly, CIA agents with classified information. Over the last few years, the Good Judgment Project has proven that crowdsourcing predictions is a surprisingly accurate way to forecast the future.

The project, sponsored by the Office of the U.S. Director of National Intelligence, is currently working with 3,000 people to test their ability to predict outcomes in everything from world politics to the economy. They aren’t experts, just people who are interested in the news.

“We just needed lots of people; we had very few restrictions,” says Don Moore, an associate professor at the University of California, Berkeley, who co-led the project. “We wanted people who were interested, and curious, who were moderately well-educated and at least aware enough of the world around them that they listened to the news.”
The group has tackled 250 questions in the experiment so far. None of them have been simple; current questions include whether Turkey will get a new constitution and whether the U.S. and the E.U. will reach a trade deal. But the group consistently got answers right more often than individual experts, just through some simple online research and, in some cases, discussions with each other.
The crowdsourced predictions are reportedly even more accurate than those from intelligence agents. One report says that when “superpredictors,” the people who are right most often, are grouped together in teams, they can outperform agents with classified information by as much as 30 percent. (The researchers can’t confirm this, since the accuracy of the spies is, unsurprisingly, classified.)
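The project scores forecasters with the Brier score, the standard accuracy measure for probabilistic predictions. The sketch below uses made-up numbers to show the metric and one reason pooling works: averaging independent forecasts tends to beat the typical individual forecaster. It illustrates the general technique, not the project’s actual methodology:

```python
# Brier score: squared error of a probability forecast against a 0/1 outcome.
# Lower is better; always guessing 50% scores 0.25. All numbers are invented.

def brier(prob, outcome):
    return (prob - outcome) ** 2

forecasts = [0.8, 0.7, 0.9, 0.6, 0.75]   # five forecasters' P(event occurs)
outcome = 1                               # the event happened

pooled = sum(forecasts) / len(forecasts)  # simple unweighted average
scores = [brier(p, outcome) for p in forecasts]

print(f"mean individual Brier: {sum(scores) / len(scores):.4f}")  # 0.0725
print(f"pooled-forecast Brier: {brier(pooled, outcome):.4f}")     # 0.0625
```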
…Crowdsourcing could be useful for any type of prediction, Moore says, not only what’s happening in world politics. “Every major decision depends on a forecast of the future,” he explains. “A company deciding to launch a new product has to figure out what sales might be like. A candidate trying to decide whether to run for office has to forecast how they’ll do in the election. In trying to decide whom to marry, you have to decide what your future looks like together.”
“The way corporations do forecasting now is an embarrassment,” he adds. “Many of the tools we’re developing would be enormously helpful.”
The project is currently recruiting new citizen predictors here.”

 

The Universe Is Programmable. We Need an API for Everything


Keith Axline in Wired: “Think about it like this: In the Book of Genesis, God is the ultimate programmer, creating all of existence in a monster six-day hackathon.
Or, if you don’t like Biblical metaphors, you can think about it in simpler terms. Robert Moses was a programmer, shaping and re-shaping the layout of New York City for more than 50 years. Drug developers are programmers, twiddling enzymes to cure what ails us. Even pickup artists and conmen are programmers, running social scripts on people to elicit certain emotional results.


The point is that, much like the computer on your desk or the iPhone in your hand, the entire Universe is programmable. Just as you can build apps for your smartphones and new services for the internet, so can you shape and re-shape almost anything in this world, from landscapes and buildings to medicines and surgeries to, well, ideas — as long as you know the code.
That may sound like little more than an exercise in semantics. But it’s actually a meaningful shift in thinking. If we look at the Universe as programmable, we can start treating it like software. In short, we can improve almost everything we do with the same simple techniques that have remade the creation of software in recent years, things like APIs, open source code, and the massively popular code-sharing service GitHub.
The great thing about the modern software world is that you don’t have to build everything from scratch. Apple provides APIs, or application programming interfaces, that can help you build apps on their devices. And though Tim Cook and company only give you part of what you need, you can find all sorts of other helpful tools elsewhere, thanks to the open source software community.
The same is true if you’re building, say, an online social network. There are countless open source software tools you can use as the basic building blocks — many of them open sourced by Facebook. If you’re creating almost any piece of software, you can find tools and documentation that will help you fashion at least a small part of it. Chances are, someone has been there before, and they’ve left some instructions for you.
Now we need to discover and document the APIs for the Universe. We need a standard way of organizing our knowledge and sharing it with the world at large, a problem for which programmers already have good solutions. We need to give everyone a way of handling tasks the way we build software. Such a system, if it can ever exist, is still years away — decades at the very least — and the average Joe is hardly ready for it. But this is changing. Nowadays, programming skills and the DIY ethos are slowly spreading throughout the population. Everyone is becoming a programmer. The next step is to realize that everything is a program.

What Is an API?

The API may sound like just another arcane computer acronym. But it’s really one of the most profound metaphors of our time, an idea hiding beneath the surface of each piece of tech we use every day, from iPhone apps to Facebook. To understand what APIs are and why they’re useful, let’s look at how programmers operate.
If I’m building a smartphone app, I’m gonna need — among so many other things — a way of validating a signup form on a webpage to make sure a user doesn’t, say, mistype their email address. That validation has nothing to do with the guts of my app, and it’s surprisingly complicated, so I don’t really want to build it from scratch. Apple doesn’t help me with that, so I start looking on the web for software frameworks, plugins, software development kits (SDKs) — anything that will help me build my signup tool.
Hopefully, I’ll find one. And if I do, chances are it will include some sort of documentation or “Readme file” explaining how this piece of code is supposed to be used so that I can tailor it to my app. This Readme file should contain installation instructions as well as the API for the code. Basically, an API lays out the code’s inputs and outputs. It shows me what I have to send the code and what it will spit back out. It shows how I bolt it onto my signup form. So the name is actually quite explanatory: Application Programming Interface. An API is essentially an instruction manual for a piece of software.
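To ground this in code: below is a toy version of that signup validator with its “API” spelled out in the docstring — what you send it, what it spits back. The function name and validation rules are invented for illustration, not taken from any real library:

```python
import re

def validate_email(address):
    """Input: an email address (str).
    Output: (True, "") if it looks valid, otherwise (False, reason)."""
    if "@" not in address:
        return False, "missing @"
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[A-Za-z]{2,}", address):
        return False, "malformed address"
    return True, ""

print(validate_email("user@example.com"))  # (True, '')
print(validate_email("user@@example"))     # (False, 'malformed address')
```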
Now, let’s combine this with the idea that everything is an application: molecules, galaxies, dogs, people, emotional states, abstract concepts like chaos. If you do something to any of these things, they’ll respond in some way. Like software, they have inputs and outputs. What we need to do is discover and document their APIs.
We aren’t dealing with software code here. Inputs and outputs can themselves be anything. But we can closely document these inputs and their outputs — take what we know about how we interface with something and record it in a standard way so that it can be used over and over again. We can create a Readme file for everything.
We can start by doing this in small, relatively easy ways. How about APIs for our cities? New Zealand just open sourced aerial images of about 95 percent of its land. We could write APIs for what we know about building in those areas, from properties of the soil to seasonal weather patterns to zoning laws. All this knowledge exists but it hasn’t been organized and packaged for use by anyone who is interested. And we could go still further — much further.
For example, between the science community, the medical industry and billions of human experiences, we could probably have a pretty extensive API mapped out for the human stomach — one that I’d love to access when I’m up at 3am with abdominal pains. Maybe my microbiome is out of whack and there’s something I have on hand that I could ingest to make it better. Or what if we cracked the API for the signals between our eyes and our brain? We wouldn’t need to worry about looking like Glassholes to get access to always-on augmented reality. We could just get an implant. Yes, these APIs will be slightly different for everyone, but that brings me to the next thing we need.

A GitHub for Everything

We don’t just need a Readme for the Universe. We need a way of sharing this Readme and changing it as need be. In short, we need a system like GitHub, the popular online service that lets people share and collaborate on software code.
Let’s go back to the form validator I found earlier. Say I made some modifications to it that I think other programmers would find useful. If the validator is on GitHub, I can create a separate but related version — a fork — that people can find and contribute to, in the same way I first did with the original software.


GitHub not only enables this collaboration; it also logs every change as a separate version. If someone were so inclined, they could go back and replay the building of the validator, from the very first save all the way up to my changes and whoever changes it after me. This creates a tree of knowledge, with giant groups of people creating and merging branches, working on their small section and then giving it back to the whole.
We should be able to funnel all existing knowledge of how things work — not just software code — into a similar system. That way, if my brain-eye interface needs to be different, I (or my personal eye technician) can “fork” the API. In a way, this sort of thing is already starting to happen. People are using GitHub to share government laws, policy documents, Gregorian chants, and the list goes on. The ultimate goal should be to share everything.
Yes, this idea is similar to what you see on sites like Wikipedia, but the stuff that’s shared on Wikipedia doesn’t let you build much more than another piece of text. We don’t just need to know what things are. We need to know how they work in ways that let us operate on them.

The Open Source Epiphany

If you’ve never programmed, all this can sound a bit, well, abstract. But once you enter the coding world, getting a loose grasp on the fundamentals of programming, you instantly see the utility of open source software. “Oooohhh, I don’t have to build this all myself,” you say. “Thank God for the open source community.” Because so many smart people contribute to open source, it helps get the less knowledgeable up to speed quickly. Those acolytes then pay it forward with their own contributions once they’ve learned enough.
Today, more and more people are jumping on this train. More and more people are becoming programmers of some shape or form. It wasn’t so long ago that basic knowledge of HTML was considered specialized geek speak. But now, it’s a common requirement for almost any desk job. Gone are the days when kids made fun of their parents for not being able to set the clock on the VCR. Now they get mocked for mis-cropping their Facebook profile photos.
These changes are all part of the tech takeover of our lives that is trickling down to the masses. It’s like how the widespread use of cars brought a general mechanical understanding of engines to dads everywhere. And this general increase in aptitude is accelerating along with the technology itself.
Steps are being taken to make programming a skill that most kids get early in school along with general reading, writing, and math. In the not too distant future, people will need to program in some form for their daily lives. Imagine the world before the average person knew how to write a letter, or divide two numbers, compared to now. A similar leap is around the corner…”

Open Government Data Gains Global Momentum


Wyatt Kash in Information Week: “Governments across the globe are deepening their strategic commitments and working more closely to make government data openly available for public use, according to public and private sector leaders who met this week at the inaugural Open Government Data Forum in Abu Dhabi, hosted by the United Nations and the United Arab Emirates, April 28-29.

Data experts from Europe, the Middle East, the US, Canada, Korea, and the World Bank highlighted how one country after another has set into motion initiatives to expand the release of government data and broaden its use. Those efforts are gaining traction due to multinational organizations, such as the Open Government Partnership, the Open Data Institute, The World Bank, and the UN’s e-government division, that are trying to share practices and standardize open data tools.
In the latest example, the French government announced April 24 that it is joining the Open Government Partnership, a group of 64 countries working jointly to make their governments more open, accountable, and responsive to citizens. The announcement caps a string of policy shifts, which began with the formal release of France’s Open Data Strategy in May 2011 and which parallel similar moves by the US.
The strategy committed France to providing “free access and reuse of public data… using machine-readable formats and open standards,” said Romain Lacombe, head of innovation for the French prime minister’s open government task force, Etalab. The French government is taking steps to end the practice of selling datasets, such as civil and case-law data, and is making them freely reusable. France launched a public data portal, Data.gouv.fr, in December 2011 and joined a G8 initiative to engage with open data innovators worldwide.
For South Korea, open data is not just about achieving greater transparency and efficiency, but is seen as digital fuel for a nation that by 2020 expects to achieve “ambient intelligence… when all humans and things are connected together,” said Dr. YoungSun Lee, who heads South Korea’s National Information Society Agency.
He foresees open data leading to a shift in the ways government will function: from an era of e-government, where information is delivered to citizens, to one where predictive analysis will foster a “creative government,” in which “government provides customized services for each individual.”
The open data movement is also propelling innovative programs in the United Arab Emirates. “The role of open data in directing economic and social decisions pertaining to investments… is of paramount importance” to the UAE, said Dr. Ali M. Al Khouri, director general of the Emirates Identity Authority. It also plays a key role in building public trust and fighting corruption, he said….”

Findings of the Big Data and Privacy Working Group Review


John Podesta at the White House Blog: “Over the past several days, severe storms have battered Arkansas, Oklahoma, Mississippi and other states. Dozens of people have been killed and entire neighborhoods turned to rubble and debris as tornadoes have touched down across the region. Natural disasters like these present a host of challenges for first responders. How many people are affected, injured, or dead? Where can they find food, shelter, and medical attention? What critical infrastructure might have been damaged?
Drawing on open government data sources, including Census demographics and NOAA weather data, along with their own demographic databases, Esri, a geospatial technology company, has created a real-time map showing where the twisters have been spotted and how the storm systems are moving. They have also used these data to show how many people live in the affected area, and summarize potential impacts from the storms. It’s a powerful tool for emergency services and communities. And it’s driven by big data technology.
In January, President Obama asked me to lead a wide-ranging review of “big data” and privacy—to explore how these technologies are changing our economy, our government, and our society, and to consider their implications for our personal privacy. Together with Secretary of Commerce Penny Pritzker, Secretary of Energy Ernest Moniz, the President’s Science Advisor John Holdren, the President’s Economic Advisor Jeff Zients, and other senior officials, our review sought to understand what is genuinely new and different about big data and to consider how best to encourage the potential of these technologies while minimizing risks to privacy and core American values.
Over the course of 90 days, we met with academic researchers and privacy advocates, with regulators and the technology industry, with advertisers and civil rights groups. The President’s Council of Advisors for Science and Technology conducted a parallel study of the technological trends underpinning big data. The White House Office of Science and Technology Policy jointly organized three university conferences at MIT, NYU, and U.C. Berkeley. We issued a formal Request for Information seeking public comment, and hosted a survey to generate even more public input.
Today, we presented our findings to the President. We knew better than to try to answer every question about big data in three months. But we are able to draw important conclusions and make concrete recommendations for Administration attention and policy development in a few key areas.
There are a few technological trends that bear drawing out. The declining cost of collection, storage, and processing of data, combined with new sources of data like sensors, cameras, and geospatial technologies, mean that we live in a world of near-ubiquitous data collection. All this data is being crunched at a speed that is increasingly approaching real-time, meaning that big data algorithms could soon have immediate effects on decisions being made about our lives.
The big data revolution presents incredible opportunities in virtually every sector of the economy and every corner of society.
Big data is saving lives. Infections are dangerous—even deadly—for many babies born prematurely. By collecting and analyzing millions of data points from a neonatal intensive care unit (NICU), one study was able to identify factors, like slight increases in body temperature and heart rate, that serve as early warning signs that an infection may be taking root—subtle changes that even the most experienced doctors wouldn’t have noticed on their own.
Big data is making the economy work better. Jet engines and delivery trucks now come outfitted with sensors that continuously monitor hundreds of data points and send automatic alerts when maintenance is needed. Utility companies are starting to use big data to predict periods of peak electric demand, adjusting the grid to be more efficient and potentially averting brown-outs.
Big data is making government work better and saving taxpayer dollars. The Centers for Medicare and Medicaid Services have begun using predictive analytics—a big data technique—to flag likely instances of reimbursement fraud before claims are paid. The Fraud Prevention System helps identify the highest-risk health care providers for waste, fraud, and abuse in real time and has already stopped, prevented, or identified $115 million in fraudulent payments.
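The post doesn’t describe how the Fraud Prevention System works internally, but the general technique it names — train a classifier on labeled historical claims, then score new claims before payment — looks something like the sketch below. The features, numbers, and 0.5 review threshold are all invented for illustration:

```python
from sklearn.linear_model import LogisticRegression

# Toy features per claim: [billed-amount z-score, claims/day, share of rare codes]
X_train = [[0.1, 2, 0.0], [0.3, 3, 0.1], [2.5, 40, 0.7],
           [3.1, 55, 0.9], [0.2, 4, 0.0], [2.8, 38, 0.8]]
y_train = [0, 0, 1, 1, 0, 1]  # 1 = later confirmed fraudulent

model = LogisticRegression().fit(X_train, y_train)

new_claims = [[0.2, 3, 0.1], [2.9, 47, 0.85]]
for claim, risk in zip(new_claims, model.predict_proba(new_claims)[:, 1]):
    if risk > 0.5:  # hold for human review before paying
        print(f"flag for review: {claim} (risk {risk:.2f})")
```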
But big data raises serious questions, too, about how we protect our privacy and other values in a world where data collection is increasingly ubiquitous and where analysis is conducted at speeds approaching real time. In particular, our review raised the question of whether the “notice and consent” framework, in which a user grants permission for a service to collect and use information about them, still allows us to meaningfully control our privacy as data about us is increasingly used and reused in ways that could not have been anticipated when it was collected.
Big data raises other concerns, as well. One significant finding of our review was the potential for big data analytics to lead to discriminatory outcomes and to circumvent longstanding civil rights protections in housing, employment, credit, and the consumer marketplace.
No matter how quickly technology advances, it remains within our power to ensure that we both encourage innovation and protect our values through law, policy, and the practices we encourage in the public and private sector. To that end, we make six actionable policy recommendations in our report to the President:
Advance the Consumer Privacy Bill of Rights. Consumers deserve clear, understandable, reasonable standards for how their personal information is used in the big data era. We recommend the Department of Commerce take appropriate consultative steps to seek stakeholder and public comment on what changes, if any, are needed to the Consumer Privacy Bill of Rights, first proposed by the President in 2012, and to prepare draft legislative text for consideration by stakeholders and submission by the President to Congress.
Pass National Data Breach Legislation. Big data technologies make it possible to store significantly more data, and further derive intimate insights into a person’s character, habits, preferences, and activities. That makes the potential impacts of data breaches at businesses or other organizations even more serious. A patchwork of state laws currently governs requirements for reporting data breaches. Congress should pass legislation that provides for a single national data breach standard, along the lines of the Administration’s 2011 Cybersecurity legislative proposal.
Extend Privacy Protections to non-U.S. Persons. Privacy is a worldwide value that should be reflected in how the federal government handles personally identifiable information about non-U.S. citizens. The Office of Management and Budget should work with departments and agencies to apply the Privacy Act of 1974 to non-U.S. persons where practicable, or to establish alternative privacy policies that apply appropriate and meaningful protections to personal information regardless of a person’s nationality.
Ensure Data Collected on Students in School is used for Educational Purposes. Big data and other technological innovations, including new online course platforms that provide students real time feedback, promise to transform education by personalizing learning. At the same time, the federal government must ensure educational data linked to individual students gathered in school is used for educational purposes, and protect students against their data being shared or used inappropriately.
Expand Technical Expertise to Stop Discrimination. The detailed personal profiles held about many consumers, combined with automated, algorithm-driven decision-making, could lead—intentionally or inadvertently—to discriminatory outcomes, or what some are already calling “digital redlining.” The federal government’s lead civil rights and consumer protection agencies should expand their technical expertise to be able to identify practices and outcomes facilitated by big data analytics that have a discriminatory impact on protected classes, and develop a plan for investigating and resolving violations of law.
Amend the Electronic Communications Privacy Act. The laws that govern protections afforded to our communications were written before email, the internet, and cloud computing came into wide use. Congress should amend ECPA to ensure the standard of protection for online, digital content is consistent with that afforded in the physical world—including by removing archaic distinctions between email left unread or over a certain age.
We also identify several broader areas ripe for further study, debate, and public engagement that, collectively, we hope will spark a national conversation about how to harness big data for the public good. We conclude that we must find a way to preserve our privacy values in both the domestic and international marketplace. We urgently need to build capacity in the federal government to identify and prevent new modes of discrimination that could be enabled by big data. We must ensure that law enforcement agencies using big data technologies do so responsibly, and that our fundamental privacy rights remain protected. Finally, we recognize that data is a valuable public resource, and call for continuing the Administration’s efforts to open more government data sources and make investments in research and technology.
While big data presents new challenges, it also presents immense opportunities to improve lives, and the United States is perhaps better suited to lead this conversation than any other nation on earth. Our innovative spirit, technological know-how, and deep commitment to values of privacy, fairness, non-discrimination, and self-determination will help us harness the benefits of the big data revolution and encourage the free flow of information while working with our international partners to protect personal privacy. This review is but one piece of that effort, and we hope it spurs a conversation about big data across the country and around the world.
Read the Big Data Report.
See the fact sheet from today’s announcement.

A New Map Gives New Yorkers the Power to Report Traffic Hazards


Sarah Goodyear in the Atlantic/Cities: “Ask any New Yorker about unsafe conditions on the city’s streets. Go ahead, ask.
You might want to sit down. This is going to take a while.
New York City’s streets are some of the most heavily used public spaces in the nation. A lot of the time, the swirling mass of users share space remarkably well. Every second in New York, it sometimes seems, a thousand people just barely miss colliding, thanks to a finely honed sense of self-preservation and spatial awareness.
The dark side is that sometimes, they do collide. These famously chaotic and contested streets are often life-threatening. Drivers routinely drive well over the 30 mph speed limit, run red lights, and fail to yield to pedestrians in crosswalks. Pedestrians step out into traffic, sometimes without looking at what’s coming their way. Bicyclists ride the wrong way up one-way streets.
In recent years, the city has begun to address the problem, mainly through design solutions like better bike infrastructure, pedestrian refuges, and crosswalk countdown clocks. Still, last year, 286 New Yorkers died in traffic crashes.
Mayor Bill de Blasio vowed almost as soon as he was sworn into office in January to pursue an initiative called Vision Zero, which aims to eliminate traffic fatalities through a combination of design, enforcement, and education.
A new tool in the Vision Zero effort was unveiled earlier this week: a map of the city on which people can log their observations and complaints about chronically unsafe conditions. The map offers a menu of icons including red-light running, double-parking, failure to yield, and speeding, and allows users to plot them on a map of the city’s streets. Sites where pedestrian fatalities have occurred since 2009 are marked, and the most dangerous streets in each borough for people on foot are colored red.
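The article doesn’t say how the map stores its reports, but a natural representation for a pin like this is a GeoJSON feature. The schema below is a guess for illustration, not the map’s actual data format:

```python
import json

# A hypothetical crowd-reported hazard as a GeoJSON feature; property names
# are invented for illustration.
report = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [-73.9857, 40.7484]},  # lon, lat
    "properties": {
        "category": "failure_to_yield",   # one of the map's menu icons
        "borough": "Manhattan",
        "reported_at": "2014-05-01T08:30:00Z",
    },
}
print(json.dumps(report, indent=2))
```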

The map, a joint project of DOT, the NYPD, and the Taxi and Limousine Commission, has only been live for a couple of days. Already, it is speckled with dozens of multicolored dots indicating problem areas. (Full disclosure: The map was designed by OpenPlans, a nonprofit affiliated with Streetsblog, where I worked several years ago.)…”

Can technology end homelessness?


Geekwire: “At the heart of Seattle’s Pioneer Square neighborhood exists a unique juxtaposition.
Inside a two-story brick building is the Impact Hub co-working space and business incubator, a place where entrepreneurs are busily working on ideas to improve the world we live in.
But walk outside the Impact Hub’s doors, and you’ll enter an entirely different world.
Homelessness. Drugs. Violence.
Now, those two contrasting scenes are coming together.
This weekend, more than 100 developers, designers, entrepreneurs and do-gooders will team up at the Impact Hub for the first-ever Hack to End Homelessness, a four-day event that encourages participants to envision and create ideas to alleviate the homelessness problem in Seattle.
The Washington Low Income Housing Alliance, Real Change and several other local homeless services and advocacy groups have already submitted project proposals, which range from an e-commerce site showcasing artwork of homeless youth to a social network focusing on low-end mobile phones for people who are homeless.
Seattle has certainly made an effort to fix its homelessness problem. Back in 2005, the Committee to End Homelessness established a 10-year plan to dramatically reduce the number of people without homes in the region. By the end of 2014, the goal was to “virtually end” homelessness in King County.
But fast-forward to today and that hasn’t exactly come to fruition. There are more than 2,300 people in Seattle sleeping in the streets — up 16 percent from 2013 — and city data shows nearly 10,000 households checking into shelters or transitional housing last year. Thousands of others may not be on the streets or in shelters, yet still live without a permanent place to sleep at night.
While some efforts of the committee have helped curb homelessness, it’s clear that there is still a problem — one that has likely been affected by rising rent prices in the area.
Candace Faber, one of the event organizers, said that her team has been shocked by the growth of homelessness in the Seattle metropolitan area. They’re worried not only about how many people do not have a permanent home, but what kind of impact the problem is having on the city as a whole.
“With Seattle experiencing the highest rent hikes in the nation, we’re concerned that, without action, our city will not be able to remain the dynamic, affordable place it is now,” Faber said. “We don’t want to lose our entrepreneurial spirit or wind up with a situation like San Francisco, where you can’t afford to innovate without serious VC backing and there’s serious tension between the housing community and tech workers.”
That raises the question: How, exactly, can technology fix the homelessness problem? The stories of these Seattle entrepreneurs help to provide the answer.

FROM SHELTERS TO STARTUPS

Kyle Kesterson knows a thing or two about being homeless.
That’s because the Freak’n Genius co-founder and CEO spent his childhood living in 14 different homes and apartments, in addition to a bevy of shelters and transitional houses. The moving around and lack of permanent housing made going to school difficult, and finding acceptance anywhere was nearly impossible.
“I was always the new kid, the poor kid, and the smallest kid,” Kesterson says now. “You just become the target of getting picked on.”
By the time he was 15, Kesterson realized that school wasn’t a place that fit his learning style. So, he dropped out to help run his parents’ house-cleaning business in Seattle.
That’s when Kesterson, now a talented artist and designer, further developed his creative skills. The Yuba City, Calif., native would spend hours locked in a room perusing deviantART.com, a new Internet community where other artists from around the world were sharing their own work and receiving feedback.

So now Kesterson, who plans on attending the final presentations at the Hack to End Homelessness event on Sunday, is using his own experiences to teach youth about finding solutions to problems with an entrepreneurial lens. When it comes to helping at-risk youth, or those who are homeless, Kesterson says it’s about finding a thriving and supportive environment — the same one he surrounded himself with while surfing through deviantART 14 years ago.
“Externally, our environment plays a significant role in either setting people up for growth and success, or weighing people down, sucking the life out of them, and eventually leaving them at or near rock bottom,” he said.
For Kesterson, it’s entrepreneurs who can help create these environments for people, and show them that they have the ability and power to solve problems and truly make a difference.
“Entrepreneurs need to first focus on the external and internal environments of those that are homeless,” he said. “Support, help, and inspire. Become a part of their network, mentoring and making connections around the challenges they face, the way we lean on our own mentor networks.”

FIXING THE ROOT

Lindsay Caron Epstein has always, in some shape or form, been an entrepreneur at heart.
She figured out a way to survive after moving to Arizona from New Jersey with only $100. She figured out how to land a few minimum wage jobs and eventually start a non-profit community center for at-risk youth at just 22 years old.
And now, Caron is using her entrepreneurial spirit to help figure out ways to fix social challenges like homelessness.
The 36-year-old is CEO and founder of ActivateHub, a startup working alongside other socially-conscious companies in Seattle’s Fledge Accelerator. ActivateHub is a “community building social action network,” or a place where people can find local events put on by NGOs and other organizations working on a wide variety of issues.
Caron found the inspiration to start the company after organizing programs for troubled youth in Arizona and studying the homelessness problem while in school. She became fascinated with how communities were built in a way that could help people and pull them out of tough situations, but there didn’t appear to be an easy way for people to get involved.
“If you do a Google search for poverty, homelessness, climate change — any issue you care about — you’ll just find news articles and blogs,” Caron explained. “You don’t find who in your community is working on those problems and you don’t find out how you can get involved.”
Caron says her company can help those who may not have a home or anything to do. ActivateHub, she said, might give them a reason to become engaged in something and create a sense of value in the community.
“It gives people a reason to clean up and enables them to make connections,” said Caron, who will also be attending this weekend’s event. “Some people need that inspiration and purpose to change their situation, and a lot of times that motivation isn’t there.”
Of course, ActivateHub alone isn’t going to solve the homelessness problem by itself. Caron knows this and thinks that entrepreneurs can help by focusing on more preventative measures. Sure, technology can be used to help connect homeless people to certain resources, but there’s a deeper issue at hand for Caron…”

Mapping the Intersection Between Social Media and Open Spaces in California


Stamen Design: “Last month, Stamen launched parks.stamen.com, a project we created in partnership with the Electric Roadrunner Lab, with the goal of revealing the diversity of social media activity that happens inside parks and other open spaces in California. If you haven’t already looked at the site, please go visit it now! Find your favorite park, or the parks that are nearest to you, or just stroll between random parks using the wander button. For more background about the goals of the project, read Eric’s blog post: A Conversation About California Parks.
In this post I’d like to describe some of the algorithms we use to collect the social media data that feeds the park pages. Currently we collect data from four social media platforms: Twitter, Foursquare, Flickr, and Instagram. We chose these because they all have public APIs (Application Programming Interfaces) that are easy to work with, and we expect they will provide a view into the different facets of each park, and the diverse communities who enjoy these parks. Each social media service creates its own unique geographies, and its own way of representing these parks. For example, the kinds of photos you upload to Instagram might be different from the photos you upload to Flickr. The way you describe experiences using Twitter might be different from the moments you document by checking into Foursquare. In the future we may add more feeds, but for now there’s a lot we can learn from these four.
Through the course of collecting data from these social network services, I also found that each service’s public API imposes certain constraints on our queries, producing its own intricate patterns. Thus, the quirks of how each API was written result in distinct and fascinating geometries. Also, since we are only interested in parks for this project, the process of culling non-park-related content further produces unusual and interesting patterns. Rural areas have large parks that cover huge areas, while cities have lots of (relatively) tiny parks, which creates its own challenges for how we query the APIs.
Broadly, we followed a similar approach for all the social media services. First, we grab the geocoded data from the APIs. This ignores any media that don’t have a latitude and longitude associated with them. In Foursquare, almost all checkins have a latitude and longitude, and for Flickr and Instagram most photos have a location associated with them. However, for Twitter, only around 1% of all tweets have geographic coordinates. But as we will see, even 1% still results in a whole lot of tweets!
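In the tweet objects Twitter’s API returned at the time, the coordinates field was null unless the user geotagged the tweet, so this first step can be a simple filter. A minimal sketch, assuming that era’s field layout:

```python
def geotagged(tweets):
    """Yield (lon, lat, tweet) for the small fraction of tweets that carry
    coordinates; everything else is skipped, as the post describes."""
    for tweet in tweets:
        coords = tweet.get("coordinates")  # None for ~99% of tweets
        if coords:
            lon, lat = coords["coordinates"]
            yield lon, lat, tweet

tweets = [
    {"text": "picnic in the park!",
     "coordinates": {"type": "Point", "coordinates": [-122.49, 37.77]}},
    {"text": "no location here", "coordinates": None},
]
print(list(geotagged(tweets)))  # only the first tweet survives
```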
After grabbing the social media data, we intersect it with the outlines of parks and open spaces in California, using polygons from the California Protected Areas Database maintained by GreenInfo Network. Everything that doesn’t intersect one of these parks, we throw away. The following maps represent the data as it looks before the filtering process.
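The post doesn’t name the tools used for the intersection step, but it maps directly onto a standard spatial join, for example with geopandas. A sketch under that assumption; the CPAD filename is a placeholder:

```python
import geopandas as gpd
from shapely.geometry import Point

# Park and open-space polygons from the California Protected Areas Database
# (placeholder filename).
parks = gpd.read_file("cpad_units.shp")

# Two example geotagged points (lon/lat): one inside Golden Gate Park,
# one in downtown San Francisco.
points = gpd.GeoDataFrame(
    {"id": [1, 2]},
    geometry=[Point(-122.4862, 37.7694), Point(-122.4194, 37.7749)],
    crs="EPSG:4326",
).to_crs(parks.crs)

# Keep only points that fall within a park polygon; everything else is
# thrown away, as the post describes.
in_parks = gpd.sjoin(points, parks, predicate="within")
print(in_parks[["id"]])
```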
But enough talking, let’s look at some maps!”