Privacy Identity Innovation: Innovator Spotlight


pii2014: “Every year, we invite a select group of startup CEOs to present their technologies on stage at Privacy Identity Innovation as part of the Innovator Spotlight program. This year’s conference (pii2014) is taking place November 12-14 in Silicon Valley, and we’re excited to announce that the following eight companies will be participating in the pii2014 Innovator Spotlight:
* BeehiveID – Led by CEO Mary Haskett, BeehiveID is a global identity validation service that enables trust by identifying bad actors online BEFORE they have a chance to commit fraud.
* Five – Led by CEO Nikita Bier, Five is a mobile chat app crafted around the experience of a house party. With Five, you can browse thousands of rooms and have conversations about any topic.
* Glimpse – Led by CEO Elissa Shevinsky, Glimpse is a private (disappearing) photo messaging app just for groups.
* Humin – Led by CEO Ankur Jain, Humin is a phone and contacts app designed to think about people the way you naturally do by remembering the context of your relationships and letting you search them the way you think.
* Kpass – Led by CEO Dan Nelson, Kpass is an identity platform that provides brands, apps and developers with an easy-to-implement technology solution to help manage the notice and consent requirements of the Children’s Online Privacy Protection Act (COPPA).
* Meeco – Led by CEO Katryna Dow, Meeco is a Life Management Platform that offers an all-in-one solution for you to transact online, collect your own personal data, and be more anonymous with greater control over your own privacy.
* TrustLayers – Led by CEO Adam Towvim, TrustLayers is privacy intelligence for big data. TrustLayers enables confident use of personal data, keeping companies secure in the knowledge that their teams are following the rules.
* Virtru – Led by CEO John Ackerly, Virtru is the first company to make email privacy accessible to everyone. With a single plug-in, Virtru empowers individuals and businesses to control who receives, reviews, and retains their digital information — wherever it travels, throughout its lifespan.
Learn more about the startups on the Innovator Spotlight page…”

Quantifying the Livable City


Brian Libby at City Lab: “By the time Constantine Kontokosta got involved with New York City’s Hudson Yards development, it was already on track to be historically big and ambitious.
 
Over the course of the next decade, developers from New York’s Related Companies and Canada-based Oxford Properties Group are building the largest real-estate development in United States history: a 28-acre neighborhood on Manhattan’s far West Side over a Long Island Rail Road yard, with some 17 million square feet of new commercial, residential, and retail space.
Hudson Yards is also being planned as an innovative model of efficiency. Its waste management systems, for example, will utilize a vast vacuum-tube system to collect garbage from each building into a central terminal, meaning no loud garbage trucks traversing the streets by night. Onsite power generation will prevent blackouts like those during Hurricane Sandy, and buildings will be connected through a micro-grid that allows them to share power with each other.
Yet it was Kontokosta, the deputy director of academics at New York University’s Center for Urban Science and Progress (CUSP), who conceived of Hudson Yards as what is now being called the nation’s first “quantified community.” This entails an unprecedentedly wide array of data being collected—not just on energy and water consumption, but real-time greenhouse gas emissions and airborne pollutants, measured with tools like hyper-spectral imagery.

New York has led the way in recent years with its urban data collection. In 2009, Mayor Michael Bloomberg signed Local Law 84, which requires privately owned buildings over 50,000 square feet in size to provide annual benchmark reports on their energy and water use. Unlike a LEED rating or similar, which declares a building green when it opens, the city’s benchmarking is a continuous assessment of a building’s operations…”
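As a rough illustration (not part of the City Lab article), the sketch below shows how an annual benchmark such as energy use intensity might be computed from building reports of the kind Local Law 84 requires. The 50,000-square-foot threshold comes from the article; the field names and figures are invented.

```python
# Hedged sketch: computing a simple energy-use-intensity (EUI) benchmark for
# buildings covered by a Local Law 84-style rule. All data below is invented.

LL84_THRESHOLD_SQFT = 50_000  # privately owned buildings above this size must report

buildings = [
    # name, gross floor area (sq ft), annual energy use (kBtu), annual water use (gal)
    {"name": "Tower A",   "sqft": 620_000, "energy_kbtu": 48_000_000, "water_gal": 9_100_000},
    {"name": "Midrise B", "sqft": 75_000,  "energy_kbtu": 6_900_000,  "water_gal": 1_200_000},
    {"name": "Walk-up C", "sqft": 18_000,  "energy_kbtu": 1_400_000,  "water_gal": 300_000},
]

def covered(b):
    """Only buildings above the size threshold have to benchmark."""
    return b["sqft"] > LL84_THRESHOLD_SQFT

for b in filter(covered, buildings):
    eui = b["energy_kbtu"] / b["sqft"]             # site EUI, kBtu per sq ft per year
    water_intensity = b["water_gal"] / b["sqft"]   # gallons per sq ft per year
    print(f"{b['name']}: EUI={eui:.1f} kBtu/sqft, water={water_intensity:.1f} gal/sqft")
```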

The government wants to study ‘social pollution’ on Twitter


In the Washington Post: “If you take to Twitter to express your views on a hot-button issue, does the government have an interest in deciding whether you are spreading “misinformation”? If you tweet your support for a candidate in the November elections, should taxpayer money be used to monitor your speech and evaluate your “partisanship”?

My guess is that most Americans would answer those questions with a resounding no. But the federal government seems to disagree. The National Science Foundation, a federal agency whose mission is to “promote the progress of science; to advance the national health, prosperity and welfare; and to secure the national defense,” is funding a project to collect and analyze your Twitter data.
The project is being developed by researchers at Indiana University, and its purported aim is to detect what they deem “social pollution” and to study what they call “social epidemics,” including how memes — ideas that spread throughout pop culture — propagate. What types of social pollution are they targeting? “Political smears,” so-called “astroturfing” and other forms of “misinformation.”
Named “Truthy,” after a term coined by TV host Stephen Colbert, the project claims to use a “sophisticated combination of text and data mining, social network analysis, and complex network models” to distinguish between memes that arise in an “organic manner” and those that are manipulated into being.

But there’s much more to the story. Focusing in particular on political speech, Truthy keeps track of which Twitter accounts are using hashtags such as #teaparty and #dems. It estimates users’ “partisanship.” It invites feedback on whether specific Twitter users, such as the Drudge Report, are “truthy” or “spamming.” And it evaluates whether accounts are expressing “positive” or “negative” sentiments toward other users or memes…”
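The Post piece does not describe Truthy’s internals, so the following is only a toy illustration of the kind of signal it reportedly tracks: a crude “partisanship” score derived from an account’s hashtag use. The hashtags #teaparty and #dems come from the article; the grouping, scoring rule, and sample tweets are assumptions, not the project’s actual method.

```python
# Toy sketch only: a crude hashtag-based "partisanship" score, NOT Truthy's actual model.
from collections import Counter

RIGHT_TAGS = {"#teaparty"}  # hashtag named in the article; the left/right grouping is our assumption
LEFT_TAGS = {"#dems"}

def partisanship(tweets):
    """Crude score in [-1, 1]: -1 = only LEFT_TAGS seen, +1 = only RIGHT_TAGS seen."""
    counts = Counter(word.lower() for t in tweets for word in t.split() if word.startswith("#"))
    right = sum(counts[tag] for tag in RIGHT_TAGS)
    left = sum(counts[tag] for tag in LEFT_TAGS)
    total = right + left
    return 0.0 if total == 0 else (right - left) / total

sample = ["Rally tonight #teaparty", "Get out the vote #dems", "Budget hearing recap #teaparty"]
print(partisanship(sample))  # 0.33 in this invented example
```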

Chicago uses big data to save itself from urban ills


Aviva Rutkin in the New Scientist: “This year in Chicago, some kids will get lead poisoning from the paint or pipes in their homes. Some restaurants will cook food in unsanitary conditions and, here and there, a street corner will be suddenly overrun with rats. These kinds of dangers are hard to avoid in a city of more than 2.5 million people. The problem is, no one knows for certain where or when they will pop up.

The Chicago city government is hoping to change that by knitting powerful predictive models into its everyday city inspections. Its latest project, currently in pilot tests, analyses factors such as home inspection records and census data, and uses the results to guess which buildings are likely to cause lead poisoning in children – a problem that affects around 500,000 children in the US each year. The idea is to identify trouble spots before kids are exposed to dangerous lead levels.
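The article does not spell out the model itself, so the sketch below is only a hedged illustration of the general approach it describes: training a classifier on historical inspection and census-style features, then ranking unvisited homes by predicted lead-hazard risk. The feature names and records are invented; scikit-learn is assumed to be available.

```python
# Hedged sketch of the general idea (not Chicago's actual model): rank homes by
# predicted lead-hazard risk from inspection- and census-style features.
# All features and records are invented; requires scikit-learn.
from sklearn.linear_model import LogisticRegression

# Each row: [building age in years, prior inspection violations, share of pre-1978 housing in tract]
X_train = [
    [95, 3, 0.90],
    [80, 1, 0.70],
    [20, 0, 0.20],
    [35, 0, 0.30],
    [110, 4, 0.95],
    [15, 0, 0.10],
]
y_train = [1, 1, 0, 0, 1, 0]  # 1 = an elevated blood-lead case was later found at the address

model = LogisticRegression().fit(X_train, y_train)

# Score unvisited homes and inspect the riskiest first.
candidates = {
    "412 W Example St": [102, 2, 0.85],  # hypothetical addresses
    "77 New Build Ct":  [5, 0, 0.05],
}
ranked = sorted(candidates,
                key=lambda addr: model.predict_proba([candidates[addr]])[0][1],
                reverse=True)
print(ranked)  # highest predicted risk first
```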

“We are able to prevent problems instead of just respond to them,” says Jay Bhatt, chief innovation officer at the Chicago Department of Public Health. “These models are just the beginning of the use of predictive analytics in public health and we are excited to be at the forefront of these efforts.”

Chicago’s projects are based on the thinking that cities already have what they need to raise their municipal IQ: piles and piles of data. In 2012, city officials built WindyGrid, a platform that collected data like historical facts about buildings and up-to-date streams such as bus locations, tweets and 911 calls. The project was designed as a proof of concept and was never released publicly but it led to another, called Plenario, that allowed the public to access the data via an online portal.

The experience of building those tools has led to more practical applications. For example, one tool matches calls to the city’s municipal hotline complaining about rats with conditions that draw rats to a particular area, such as excessive moisture from a leaking pipe, or with an increase in complaints about garbage. This allows officials to proactively deploy sanitation crews to potential hotspots. It seems to be working: last year, resident requests for rodent control dropped by 15 per cent.
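Purely as a hedged sketch of the matching idea described above (not the city’s actual tool), the snippet below combines per-area 311-style signals that the article says draw rats, such as rodent complaints, leaking-pipe moisture reports, and garbage complaints, into a simple hotspot ranking. The areas, weights, and records are invented.

```python
# Hedged sketch: rank areas for proactive sanitation visits by combining complaint
# signals associated with rat activity. All data and weights are invented.
from collections import defaultdict

calls = [
    # (area, complaint type)
    ("Ward 12", "rodent"), ("Ward 12", "garbage"), ("Ward 12", "leaking pipe"),
    ("Ward 3", "rodent"), ("Ward 3", "rodent"),
    ("Ward 7", "garbage"),
]

WEIGHTS = {"rodent": 1.0, "leaking pipe": 0.6, "garbage": 0.4}  # illustrative weights only

scores = defaultdict(float)
for area, kind in calls:
    scores[area] += WEIGHTS.get(kind, 0.0)

# Send sanitation crews to the highest-scoring areas first.
for area, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{area}: {score:.1f}")
```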

Some predictions are trickier to get right. Charlie Catlett, director of the Urban Center for Computation and Data in Chicago, is investigating an old axiom among city cops: that violent crime tends to spike when there’s a sudden jump in temperature. But he’s finding it difficult to test its validity in the absence of a plausible theory for why it might be the case. “For a lot of things about cities, we don’t have that underlying theory that tells us why cities work the way they do,” says Catlett.
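One minimal way to probe the axiom Catlett mentions, offered here only as an assumed sketch with invented numbers, is to correlate day-over-day temperature jumps with day-over-day changes in violent-crime counts. As the article notes, a correlation by itself supplies no underlying theory of why the pattern holds.

```python
# Hedged sketch: does a sudden jump in temperature coincide with a jump in violent
# crime? Data is invented; requires Python 3.10+ for statistics.correlation.
from statistics import correlation

daily_high_f    = [58, 61, 74, 75, 73, 88, 90, 71, 69, 84]
violent_crimes  = [11, 12, 19, 18, 16, 27, 25, 14, 13, 22]

temp_jumps  = [b - a for a, b in zip(daily_high_f, daily_high_f[1:])]      # day-over-day change
crime_jumps = [b - a for a, b in zip(violent_crimes, violent_crimes[1:])]  # day-over-day change

print(round(correlation(temp_jumps, crime_jumps), 2))  # Pearson correlation of the two series
```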

Still, predictive modelling is maturing, as other cities succeed in using it to tackle urban ills…. Such efforts can be a boon for cities, making them more productive, efficient and safe, says Rob Kitchin of Maynooth University in Ireland, who helped launch a real-time data site for Dublin last month called the Dublin Dashboard. But he cautions that there’s a limit to how far these systems can aid us. Knowing that a particular street corner is likely to be overrun with rats tomorrow doesn’t address what caused the infestation in the first place. “You might be able to create a sticking plaster or be able to manage it more efficiently, but you’re not going to be able to solve the deep structural problems….”

Traversing Digital Babel


New book by Alon Peled: “The computer systems of government agencies are notoriously complex. New technologies are piled on older technologies, creating layers that call to mind an archaeological dig. Obsolete programming languages and closed mainframe designs offer barriers to integration with other agency systems. Worldwide, these unwieldy systems waste billions of dollars, keep citizens from receiving services, and even—as seen in interoperability failures on 9/11 and during Hurricane Katrina—cost lives. In this book, Alon Peled offers a groundbreaking approach for enabling information sharing among public sector agencies: using selective incentives to “nudge” agencies to exchange information assets. Peled proposes the establishment of a Public Sector Information Exchange (PSIE), through which agencies would trade information.
After describing public sector information sharing failures and the advantages of incentivized sharing, Peled examines the U.S. Open Data program, and the gap between its rhetoric and results. He offers examples of creative public sector information sharing in the United States, Australia, Brazil, the Netherlands, and Iceland. Peled argues that information is a contested commodity, and draws lessons from the trade histories of other contested commodities—including cadavers for anatomical dissection in nineteenth-century Britain. He explains how agencies can exchange information as a contested commodity through a PSIE program tailored to an individual country’s needs, and he describes the legal, economic, and technical foundations of such a program. Touching on issues from data ownership to freedom of information, Peled offers pragmatic advice to politicians, bureaucrats, technologists, and citizens for revitalizing critical information flows.”

The Role Of Open Data In Choosing a Neighborhood


PlaceILive Blog: “To what extent is it important to get familiar with our environment?
Considering how much the world around us changes over the years, it is hardly surprising that, while walking to work, we might come across new little shops, restaurants, or gas stations we had never noticed before. Likewise, how many times have we wandered about for hours just to find a green space for a run, only to discover that the one we found was even more polluted than other parts of the city?
Citizens are not always properly informed about how the places they live in are evolving. That is why it is crucial for people to have constant access to accurate, up-to-date information about the neighborhood they have chosen or are going to choose.
London is clear evidence of how essential transparency in providing data is to succeeding as a Smart City.
The GLA’s London Datastore, for instance, is a public platform of datasets with up-to-date figures on the city’s main services, as well as on the population’s lifestyle and environmental risks. These data are then made more easily accessible to the community through the London Dashboard.
The value of freely available information is also evident in the integration of maps, which are an efficient means of geolocation. A map on which it is easy to find all the services you need nearby can make a real difference in the search for a location.
(Image source: Smart London Plan)
The Open Data Index, published by the Open Knowledge Foundation in 2013, is another useful tool: it ranks countries around the world with scores based on the openness and availability of datasets such as transport timetables and national statistics.
Here you can check the UK Open Data Census and the US City Open Data Census.
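The Index’s published methodology is not reproduced in the post; purely as a hedged illustration of the general idea, scoring a dataset on whether it satisfies a set of openness attributes, the sketch below totals weighted boolean checks. The attribute names, weights, and sample entries are invented, not the Open Data Index’s actual scoring.

```python
# Hedged sketch of an openness-index-style score: award points for each openness
# attribute a country's dataset satisfies. Attributes, weights, and entries are invented.
WEIGHTS = {"exists": 5, "machine_readable": 15, "openly_licensed": 30,
           "free_of_charge": 15, "up_to_date": 10}

datasets = {
    ("Exampleland", "transport timetables"): {"exists": True, "machine_readable": True,
                                              "openly_licensed": False, "free_of_charge": True,
                                              "up_to_date": True},
    ("Exampleland", "national statistics"):  {"exists": True, "machine_readable": True,
                                              "openly_licensed": True, "free_of_charge": True,
                                              "up_to_date": False},
}

def score(attrs):
    """Sum the weights of the attributes this dataset satisfies."""
    return sum(w for key, w in WEIGHTS.items() if attrs.get(key))

for (country, dataset), attrs in datasets.items():
    print(f"{country} / {dataset}: {score(attrs)} / {sum(WEIGHTS.values())}")
```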
As has been noted, making open data available and easily findable online has not only been a success for US cities but has also benefited app makers and civic hackers. As Lauren Reid, a spokesperson for Code for America, told Government Technology: “The more data we have, the better picture we have of the open data landscape.”
That, on the whole, is what Place I Live puts the biggest effort into: fostering a new awareness of the environment by providing free information, in order to help citizens choose the best place for them to live.
The result is straightforward: the website’s homepage lets visitors type in an address of interest and see an overview of the neighborhood’s parameters together with a Life Quality Index calculated for every point on the map.
Finding the nearest medical institutions, schools or ATMs thus becomes quick and clear, as does getting a general picture of the community. Moreover, the reliability and accessibility of the data are constantly reviewed by a strong team of professionals with expertise in data analysis, mapping, IT architecture and global markets.
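PlaceILive’s formula is not published in the post; purely as a hypothetical sketch, a Life Quality Index for a point on the map could be a weighted combination of proximity to amenities such as hospitals, schools, and ATMs. The weights, the distance-decay rule, and the sample distances below are all invented.

```python
# Purely hypothetical sketch of a "Life Quality Index"-style score for a map point:
# a weighted sum of proximity to nearby amenities. Not PlaceILive's actual formula.
import math

AMENITY_WEIGHTS = {"hospital": 0.40, "school": 0.35, "atm": 0.25}  # invented weights

def proximity(distance_km, scale_km=1.0):
    """Map a distance to a 0-1 score: 1.0 right next door, decaying with distance."""
    return math.exp(-distance_km / scale_km)

def life_quality_index(nearest_km):
    """nearest_km: distance in km to the nearest amenity of each type."""
    raw = sum(w * proximity(nearest_km[kind]) for kind, w in AMENITY_WEIGHTS.items())
    return round(100 * raw, 1)  # scale to 0-100 for display

print(life_quality_index({"hospital": 1.2, "school": 0.4, "atm": 0.15}))
```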
For the moment the company’s work is focused on London, Berlin, Chicago, San Francisco and New York, with the longer-term goal of covering more than 200 cities.
In the US City Open Data Census, San Francisco earned the highest score, proof of the city’s work in putting technological expertise at everyone’s disposal and in meeting users’ needs through a meticulous selection of datasets. The city is building on this with a new investment, in partnership with the University of Chicago, in a data analytics dashboard on sustainability performance statistics named the Sustainable Systems Framework, which is expected to be released in beta by the end of the first quarter of 2015.
 
Another remarkable contribution to the spread of Open Data comes from the Bartlett Centre for Advanced Spatial Analysis (CASA) at University College London (UCL); Oliver O’Brien, a researcher in the UCL Department of Geography and a software developer at CASA, is one of the contributors to this cause.
Among his projects is London’s CityDashboard, a real-time control panel of spatial data reports. The web page also lets users visualise the data on a simplified map and view dashboards for other UK cities.
His Bike Share Map, meanwhile, is a live global view of bicycle-sharing systems in over a hundred cities around the world; bike sharing has recently drawn greater public attention as a distinctive form of transportation, above all in Europe and China….”

CC Science → Sensored City


Citizen Sourced Data: “We routinely submit data to others and then worry about liberating the data from the silos. What if we could invert the model? What if collected data were first put into a completely free and open repository accessible to everyone so anyone could build applications with the data? What if the data itself were free so everyone could have an equal opportunity to create and even monetize their creativity? Funded by a generous grant from the Robert Wood Johnson Foundation, we intend to do just that.
Partnering with Manylabs, a San Francisco-based sensor tools and education nonprofit, and Urban Matter, Inc., a Brooklyn-based design studio, and in collaboration with the City of Louisville, Kentucky, and Propeller Health, maker of a mobile platform for respiratory health management, we will design, develop and install a network of sensor-based hardware that will collect environmental information at high temporal and spatial scales and store it in a software platform designed explicitly for storing and retrieving such data.
Further, we will design, create and install a public data art installation that will be powered by the data we collect thereby communicating back to the public what has been collected about them.”
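The post does not specify how the repository will be built; the sketch below is only an assumed illustration of what environmental readings captured “at high temporal and spatial scales” might look like in a store that anyone can write to and query, as described above. The field names, classes, and sample values are invented.

```python
# Assumed illustration only: a minimal in-memory store for fine-grained sensor
# readings, open to anyone to append and query. Field names are invented.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Reading:
    sensor_id: str
    metric: str          # e.g. "pm2_5", "no2", "temperature_c"
    value: float
    lat: float
    lon: float
    observed_at: datetime

class OpenRepository:
    """Anyone can append readings and anyone can query them."""
    def __init__(self):
        self._readings = []

    def append(self, reading: Reading):
        self._readings.append(reading)

    def query(self, metric: str, since: datetime):
        return [r for r in self._readings if r.metric == metric and r.observed_at >= since]

repo = OpenRepository()
repo.append(Reading("louisville-017", "pm2_5", 12.4, 38.25, -85.76,
                    datetime(2014, 10, 20, 14, 5, tzinfo=timezone.utc)))
print(len(repo.query("pm2_5", datetime(2014, 1, 1, tzinfo=timezone.utc))))
```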

Innovation in Philanthropy is not a Hack-a-thon


Sam McAfee in Medium: “…Antiquated funding models and lack of a rapid data-driven evaluation process aren’t the only issues though. Most of the big ideas in the technology-for-social-impact space are focused either on incremental improvements to existing service models, maybe leveraging online services or mobile applications to improve cost-efficiency marginally. Or they solve only a very narrow niche problem for a small audience, often applying a technology that was already in development, and just happened to find a solution in the field.

Innovation Requires Disruption

When you look at innovation in the commercial world, like the Ubers and AirBnBs of the world, what you see is a clear and substantive break from previous modes of thinking about transportation and accommodation. And it’s not the technology itself that is all that impressive. There is nothing ground-breaking technically under the hood of either of those products that wasn’t already lying around for a decade. What makes them different is that they created business models that stepped completely out of the existing taxi and hotel verticals, and simply used technology to leverage existing frustrations with those antiquated models and harness latent demands, to produce a new, vibrant commercial ecosystem.

Now, let’s imagine the same framework in the social sector, where there are equivalent long-standing traditional modes of providing resources. To find new ways of meeting human needs that disrupt those models requires both safe-to-fail experimentation and rapid feedback and iteration in the field, with clear success criteria. Such rapid development can only be accomplished by a sharp, nimble and multifaceted team of thinkers and doers who are passionate about the problem, yes, but also empowered and enabled to break a few institutional eggs on the way to the creative omelet.

Agile and Lean are Proven Methods

It turns out that there are proven working models for cultivating and fostering this kind of innovative thinking and experimentation. As I mentioned above, agile and lean are probably the single greatest contribution to the world by the tech sector, far more impactful than any particular technology produced by it. Small, cross-functional teams working on tight, iterative timeframes, using an iterative data-informed methodology, can create new and disruptive solutions to big, difficult problems. They are able to do this precisely because they are unhindered by the hulking bureaucratic structures of the old guard. This is precisely why so many Fortune 500 companies are experimenting with innovation and R&D laboratories. Because they know their existing staff, structures, and processes cannot produce innovation within those constraints. Only the small, nimble teams can do it, and they can only do it if they are kept separate from, protected from even, the traditional production systems of the previous product cycle.

Yet big philanthropy has still barely experimented with this model, trying it in only a few isolated instances. Here at Neo, for example, we are working on a project for teachers funded by a forward-thinking foundation. What our client is trying to disrupt is no less than the entire US education system, and with goals and measurements developed by teachers for teachers, not by Silicon Valley hotshots who have no clue how to fix education.

To start with, the project was funded in iterations of six weeks at a time, each with a distinct and measurable goal. We built a small cross-functional team to tackle some of the tougher issues faced by teachers trying to raise the level of excellence in their classrooms. The team was empowered to talk directly to teachers, and incorporate their feedback into new versions of the project, released on almost a daily basis. We have iterated the design more than sixteen times in less than four months, and it’s starting to really take shape.

We have no idea whether this particular project will be successful in the long run. But what we do know is that the client and their funder have had the courage to step out of the traditional project funding models and apply agile and lean thinking to a very tough problem. And we’re proud to be invited along for the ride.

The vast majority of the social sector is still trying to tackle social problems with program and funding models that were pioneered early in the last century. Agile and lean methods hold the key to finally breaking the mold of the old, traditional model of resourcing social change initiatives. The philanthropic community should be interested in the agile and lean methods produced by the technology sector, not the money produced by it, and start reorganizing project teams, resource allocation strategies, and timelines in line with this proven innovation model.

Only then will we be in a position to really innovate for social change.”

Canada's Action Plan on Open Government 2014-2016


Draft action plan: “Canada’s second Action Plan on Open Government consists of twelve commitments that will advance open government principles in Canada over the next two years and beyond. The Directive on Open Government, a new policy direction to federal departments and agencies on open government, will provide foundational support for each of the additional commitments, which fall under three streams: Open Data, Open Information, and Open Dialogue.
Figure 1: Our Commitments (Open Government Directive diagram)

 
