When Tech Culture And Urbanism Collide


John Tolva: “…We can build upon the success of the work being done at the intersection of technology and urban design, right now.

For one, the whole realm of social enterprise — for-profit startups that seek to solve real social problems — has a huge overlap with urban issues. Impact Engine in Chicago, for instance, is an accelerator squarely focused on meaningful change and profitable businesses. One of their companies, Civic Artworks, has set as its goal rebalancing the community planning process.

The Code for America Accelerator and Tumml, both located in San Francisco, morph the concept of social innovation into civic/urban innovation. The companies nurtured by CfA and Tumml are filled with technologists and urbanists working together to create profitable businesses. Like WorkHands, a kind of LinkedIn for blue collar trades. Would something like this work outside a city? Maybe. Are its effects outsized and scale-ready in a city? Absolutely. That’s the opportunity in urban innovation.

Scale is what powers the sharing economy and it thrives because of the density and proximity of cities. In fact, shared resources at critical density is one of the only good definitions for what a city is. It’s natural that entrepreneurs have overlaid technology on this basic fact of urban life to amplify its effects. Would TaskRabbit, Hailo or LiquidSpace exist in suburbia? Probably, but their effects would be minuscule and investors would get restless. The city in this regard is the platform upon which sharing economy companies prosper. More importantly, companies like this change the way the city is used. It’s not urban planning, but it is urban (re)design and it makes a difference.

A twist that many in the tech sector who complain about cities often miss is that change in a city is not the same thing as change in city government. Obviously they are deeply intertwined; change is mighty hard when it is done at cross-purposes with government leadership. But it happens all the time. Non-government actors — foundations, non-profits, architecture and urban planning firms, real estate developers, construction companies — contribute massively to the shape and health of our cities.

Often this contribution is powered through policies of open data publication by municipal governments. Open data is the raw material of a city, the vital signs of what has happened there, what is happening right now, and the deep pool of patterns for what might happen next.

Tech entrepreneurs would do well to look at the organizations and companies capitalizing on this data as the real change agents, not government itself. Even the data in many cases is generated outside government. Citizens often do the most interesting data-gathering, with tools like LocalData. The most exciting thing happening at the intersection of technology and cities today — what really makes them “smart” — is what is happening at the periphery of city government. It’s easy to belly-ache about government and certainly there are administrations that do not make data public (or shut it down), but tech companies who are truly interested in city change should know that there are plenty of examples of how to start up and do it.

And yet, the somewhat staid world of architecture and urban-scale design presents the most opportunity to a tech community interested in real urban change. While technology obviously plays a role in urban planning — 3D visual design tools like Revit and mapping services like ArcGIS are foundational for all modern firms — data analytics as a serious input to design matters has only been used in specialized (mostly energy efficiency) scenarios. Where are the predictive analytics, the holistic models, the software-as-a-service providers for the brave new world of urban informatics and The Internet of Things? Technologists, it’s our move.

Something’s amiss when some city governments — rarely the vanguard in technological innovation — have more sophisticated tools for data-driven decision-making than the private sector firms who design the city. But some understand the opportunity. Vannevar Technology is working on it, as is Synthicity. There’s plenty of room for the most positive aspects of tech culture to remake the profession of urban planning itself. (Look to NYU’s Center for Urban Science and Progress and the University of Chicago’s Urban Center for Computation and Data for leadership in this space.)…”

Brainlike Computers, Learning From Experience


The New York Times: “Computers have entered the age when they are able to learn from their own mistakes, a development that is about to turn the digital world on its head.

The first commercial version of the new kind of computer chip is scheduled to be released in 2014. Not only can it automate tasks that now require painstaking programming — for example, moving a robot’s arm smoothly and efficiently — but it can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.

The new computing approach, already in use by some large technology companies, is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.
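To make the neuron analogy concrete, here is a minimal sketch of one textbook model often used to explain this style of computing, a leaky integrate-and-fire neuron: it accumulates incoming signal, leaks charge between time steps, and emits a spike when a threshold is crossed. The numbers and the model choice are illustrative assumptions, not a description of any particular chip mentioned in the article.

```python
# Toy sketch of one idea behind "brainlike" chips: a leaky integrate-and-fire
# neuron accumulates input, leaks charge over time, and emits a spike when its
# membrane potential crosses a threshold. Textbook model, not a chip design.
leak, threshold, potential = 0.9, 1.0, 0.0

inputs = [0.3, 0.0, 0.5, 0.6, 0.1, 0.0, 0.9, 0.2]  # incoming signal per step

for t, stimulus in enumerate(inputs):
    potential = leak * potential + stimulus   # decay, then integrate stimulus
    if potential >= threshold:
        print(f"t={t}: spike!")
        potential = 0.0                       # reset after firing
    else:
        print(f"t={t}: potential={potential:.2f}")
```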

In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That can hold enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.

Designers say the computing style can clear the way for robots that can safely walk and drive in the physical world, though a thinking or conscious computer, a staple of science fiction, is still far off on the digital horizon.

“We’re moving from engineering computing systems to something that has many of the characteristics of biological computing,” said Larry Smarr, an astrophysicist who directs the California Institute for Telecommunications and Information Technology, one of many research centers devoted to developing these new kinds of computer circuits.

Conventional computers are limited by what they have been programmed to do. Computer vision systems, for example, only “recognize” objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation.

But last year, Google researchers were able to get a machine-learning algorithm, known as a neural network, to perform an identification task without supervision. The network scanned a database of 10 million images, and in doing so trained itself to recognize cats.
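Google’s actual system was a very large distributed network, but the underlying idea, learning useful structure from unlabeled data, can be sketched with a tiny autoencoder: the network is trained only to reconstruct its input, so no labels are ever supplied. Everything below (the random stand-in data, the layer sizes, the learning rate) is an illustrative assumption, not the method the researchers used.

```python
# Minimal sketch (not Google's system): an autoencoder learns features from
# unlabeled data by trying to reconstruct its own input -- no labels needed.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for unlabeled image patches: 1,000 samples of 64 "pixels" each.
X = rng.random((1000, 64))

n_hidden = 16
W1 = rng.normal(0, 0.1, (64, n_hidden))   # encoder weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, 64))   # decoder weights
b2 = np.zeros(64)
lr = 0.01

for epoch in range(200):
    # Forward pass: encode, then try to reconstruct the input.
    H = np.tanh(X @ W1 + b1)          # hidden "features"
    X_hat = H @ W2 + b2               # reconstruction
    err = X_hat - X                   # reconstruction error

    # Backward pass: plain gradient descent on the squared error.
    gW2 = H.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H**2)    # tanh derivative
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(axis=0)

    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("final reconstruction MSE:", float((err**2).mean()))
```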

In June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately.

The new approach, used in both hardware and software, is being driven by the explosion of scientific knowledge about the brain. Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, said that is also its limitation, as scientists are far from fully understanding how brains function.”

Can a Better Taxonomy Help Behavioral Energy Efficiency?


Article at GreenTechEfficiency: “Hundreds of behavioral energy efficiency programs have sprung up across the U.S. in the past five years, but the effectiveness of the programs — both in terms of cost savings and reduced energy use — can be difficult to gauge.
Of nearly 300 programs, a new report from the American Council for an Energy-Efficient Economy was able to accurately calculate the cost of saved energy from only ten programs….
To help utilities and regulators better define and measure behavioral programs, ACEEE offers a new taxonomy of utility-run behavior programs that breaks them into three major categories:
Cognition: Programs that focus on delivering information to consumers.  (This includes general communication efforts, enhanced billing and bill inserts, social media and classroom-based education.)
Calculus: Programs that rely on consumers making economically rational decisions. (This includes real-time and asynchronous feedback, dynamic pricing, games, incentives and rebates and home energy audits.)
Social interaction: Programs whose key drivers are social interaction and belonging. (This includes community-based social marketing, peer champions, online forums and incentive-based gifts.)
….
While the report was mostly preliminary, it also offered four steps forward for utilities that want to make the most of behavioral programs.
Stack. The types of programs might fit into three broad categories, but judiciously blending cues based on emotion, reason and social interaction into programs is key, according to ACEEE. Even though the report recommends stacked programs that have a multi-modal approach, the authors acknowledge, “This hypothesis will remain untested until we see more stacked programs in the marketplace.”
Track. Just like other areas of grid modernization, utilities need to rethink how they collect, analyze and report the data coming out of behavioral programs. This should include metrics that go beyond just energy savings.
Share. As with other utility programs, behavior-based energy efficiency programs can be improved upon if utilities share results and if reporting is standardized across the country instead of varying by state.
Coordinate. Sharing is only the first step. Programs that merge water, gas and electricity efficiency can often gain better results than siloed programs. That approach, however, requires a coordinated effort by regional utilities and a change to how programs are funded and evaluated by regulators.”

6 New Year’s Strategies for Open Data Entrepreneurs


The GovLab’s Senior Advisor Joel Gurin: “Open Data has fueled a wide range of startups, including consumer-focused websites, business-to-business services, data-management tech firms, and more. Many of the companies in the Open Data 500 study are new ones like these. New Year’s is a classic time to start new ventures, and with 2014 looking like a hot year for Open Data, we can expect more startups using this abundant, free resource. For my new book, Open Data Now, I interviewed dozens of entrepreneurs and distilled six of the basic strategies that they’ve used.
1. Learn how to add value to free Open Data. We’re seeing an inversion of the value proposition for data. It used to be that whoever owned the data—particularly Big Data—had greater opportunities than those who didn’t. While this is still true in many areas, it’s also clear that successful businesses can be built on free Open Data that anyone can use. The value isn’t in the data itself but rather in the analytical tools, expertise, and interpretation that’s brought to bear. One oft-cited example: The Climate Corporation, which built a billion-dollar business out of government weather and satellite data that’s freely available for use.
2. Focus on big opportunities: health, finance, energy, education. A business can be built on just about any kind of Open Data. But the greatest number of startup opportunities will likely be in the four big areas where the federal government is focused on Open Data release. Last June’s Health Datapalooza showcased the opportunities in health. Companies like Opower in energy, GreatSchools in education, and Calcbench, SigFig, and Capital Cube in finance are examples in these other major sectors.
3. Explore choice engines and Smart Disclosure apps. Smart Disclosure – releasing data that consumers can use to make marketplace choices – is a powerful tool that can be the basis for a new sector of online startups. No one, it seems, has quite figured out how to make this form of Open Data work best, although sites like CompareTheMarket in the UK may be possible models. Business opportunities await anyone who can find ways to provide these much-needed consumer services. One example: Kayak, which competed in the crowded travel field by providing a great consumer interface, and which was sold to Priceline for $1.8 billion last year.
4. Help consumers tap the value of personal data. In a privacy-conscious society, more people will be interested in controlling their personal data and sharing it selectively for their own benefit. The value of personal data is just being recognized, and opportunities remain to be developed. There are business opportunities in setting up and providing “personal data vaults” and more opportunity in applying the many ways they can be used. Personal and Reputation.com are two leaders in this field.
5. Provide new data solutions to governments at all levels. Government datasets at the federal, state, and local level can be notoriously difficult to use. The good news is that these governments are now realizing that they need help. Data management for government is a growing industry, as Socrata, OpenGov, 3RoundStones, and others are finding, while companies like Enigma.io are turning government data into a more usable resource.
6. Look for unusual Open Data opportunities. Building a successful business by gathering data on restaurant menus and recipes is not an obvious route to success. But it’s working for Food Genius, whose founders showed a kind of genius in tapping an opportunity others had missed. While the big areas for Open Data are becoming clear, there are countless opportunities to build more niche businesses that can still be highly successful. If you have expertise in an area and see a customer need, there’s an increasingly good chance that the Open Data to help meet that need is somewhere to be found.”

The Postmodernity of Big Data


Essay in the New Inquiry: “Big Data fascinates because its presence has always been with us in nature. Each tree, drop of rain, and the path of each grain of sand, both responds to and creates millions of data points, even on a short journey. Nature is the original algorithm, the most efficient and powerful. Mathematicians since the ancients have looked to it for inspiration; techno-capitalists now look to unlock its mysteries for private gain. Playing God has become all the more brisk and profitable thanks to cloud computing.
But beyond economic motivations for Big Data’s rise, are there also epistemological ones? Has Big Data come to try to fill the vacuum of certainty left by postmodernism? Does data science address the insecurities of postmodern thought?
It turns out that trying to explain Big Data is like trying to explain postmodernism. Neither can be summarized effectively in a phrase, despite their champions’ efforts. Broad epistemological developments are compressed into cursory, ex post facto descriptions. Attempts to define Big Data, such as IBM’s marketing copy, which promises “insights gleaned” from “enterprise data warehouses that implement massively parallel processing,” “real-time scalability” and “parsing structured and unstructured sources,” focus on its implementation at the expense of its substance, decontextualizing it entirely. Similarly, definitions of postmodernism, like art critic Thomas McEvilley’s claim that it is “a renunciation that involves recognition of the relativity of the self—of one’s habit systems, their tininess, silliness, and arbitrariness” are accurate but abstract to the point of vagueness….
Big Data might come to be understood as Big Postmodernism: the period in which the influx of unstructured, non-teleological, non-narrative inputs ceased to destabilize the existing order but was instead finally mastered and processed by a sufficiently complex, distributed, and pluralized algorithmic regime. If Big Data has a skepticism built in, how this is different from the skepticism of postmodernism is perhaps impossible to yet comprehend.”

Open data policies, their implementation and impact: A framework for comparison


Paper by A. Zuiderwijk and M. Janssen in Government Information Quarterly: “In developing open data policies, governments aim to stimulate and guide the publication of government data and to gain advantages from its use. Currently there is a multiplicity of open data policies at various levels of government, whereas very little systematic and structured research has been done on the issues that are covered by open data policies, their intent and actual impact. Furthermore, no suitable framework for comparing open data policies is available, as open data is a recent phenomenon and is thus in an early stage of development. In order to help bring about a better understanding of the common and differentiating elements in the policies and to identify the factors affecting the variation in policies, this paper develops a framework for comparing open data policies. The framework includes the factors of environment and context, policy content, performance indicators and public values. Using this framework, seven Dutch governmental policies at different government levels are compared. The comparison shows both similarities and differences among open data policies, providing opportunities to learn from each other’s policies. The findings suggest that current policies are rather inward looking, and that open data policies can be improved by collaborating with other organizations, focusing on the impact of the policy, stimulating the use of open data and looking at the need to create a culture in which publicizing data is incorporated in daily working processes. The findings could contribute to the development of new open data policies and the improvement of existing open data policies.”

A Bottom-Up Smart City?


Alicia Rouault at Data-Smart City Solutions: “America’s shrinking cities face a tide of disinvestment, abandonment, vacancy, and a shift toward deconstruction and demolition followed by strategic reinvestment, rightsizing, and a host of other strategies designed to renew once-great cities. Thriving megacity regions are experiencing rapid growth in population, offering a different challenge for city planners to redefine density, housing, and transportation infrastructure. As cities shrink and grow, policymakers are increasingly called to respond to these changes by making informed, data-driven decisions. What is the role of the citizen in this process of collecting and understanding civic data?
Writing for Forbes in “Open Sourcing the Neighborhood,” Professor of Sociology at Columbia University Saskia Sassen calls for “open source urbanism” as an antidote to the otherwise top-down smart city movement. This form of urbanism involves opening traditional verticals of information within civic and governmental institutions. Citizens can engage with and understand the logic behind decisions by exploring newly opened administrative data. Beyond opening these existing datasets, Sassen points out that citizen experts hold invaluable institutional memory that can serve as an alternate and legitimate resource for policymakers, economists, and urban planners alike.
In 2012, we created a digital platform called LocalData to address the production and use of community-generated data in a municipal context. LocalData is a digital mapping service used globally by universities, non-profits, and municipal governments to gather and understand data at a neighborhood scale. In contrast to traditional Census or administrative data, which is produced by a central agency and collected infrequently, our platform provides a simple method for both community-based organizations and municipal employees to gather real-time data on project-specific indicators: property conditions, building inspections, environmental issues or community assets. Our platform then visualizes data and exports it into formats integrated with existing systems in government to seamlessly provide accurate and detailed information for decision makers.
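As a rough illustration of the export step described above, the sketch below converts point-based survey responses into GeoJSON, a format most mapping and GIS systems can ingest. The field names and records are hypothetical; they are not LocalData’s actual schema or API.

```python
# Hypothetical sketch only -- field names are illustrative, not LocalData's
# actual schema or API. It shows the general idea: turn point-based survey
# responses into GeoJSON that mapping and GIS tools can ingest.
import json

# Example community-collected records: one per surveyed parcel.
responses = [
    {"lon": -83.0458, "lat": 42.3314, "condition": "fair", "occupied": True},
    {"lon": -83.0460, "lat": 42.3316, "condition": "poor", "occupied": False},
]

def to_geojson(records):
    """Wrap survey records in a GeoJSON FeatureCollection."""
    features = []
    for r in records:
        features.append({
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [r["lon"], r["lat"]]},
            "properties": {k: v for k, v in r.items() if k not in ("lon", "lat")},
        })
    return {"type": "FeatureCollection", "features": features}

print(json.dumps(to_geojson(responses), indent=2))
```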
LocalData began as a project in Detroit, Michigan where the city was tackling a very real lack of standard, updated, and consistent condition information on the quality and status of vacant and abandoned properties. Many of these properties were owned by the city and county due to high foreclosure rates. One of Detroit’s strategies for combating crime and stabilizing neighborhoods is to demolish property in a targeted fashion. This strategy serves as a political win as much as providing an effective way to curb the secondary effects of vacancy: crime, drug use, and arson. Using LocalData, the city mapped critical corridors of emergent commercial property as an analysis tool for where to place investment, and documented thousands of vacant properties to understand where to target demolition.
Vacancy is not unique to the Midwest. Following our work with the Detroit Mayor’s office and planning department, LocalData has been used in dozens of other cities in the U.S. and abroad. Currently the Smart Chicago Collaborative is using LocalData to conduct a similar audit of vacant and abandoned property in southwest Chicago. Though an effective tool for capturing building-specific information, LocalData has also been used to capture behavior and movement of goods. The MIT Megacities Logistics Lab has used LocalData to map and understand the intensity of urban supply chains by interviewing shop owners and mapping delivery routes in global megacities in Mexico, Colombia, Brazil and the U.S. The resulting information has been used with analytical models to help both city officials and companies to design better city logistics policies and operations….”

Using Social Media in Rulemaking: Possibilities and Barriers


New paper by Michael Herz (Cardozo Legal Studies Research Paper No. 417): “Web 2.0” is characterized by interaction, collaboration, non-static web sites, use of social media, and creation of user-generated content. In theory, these Web 2.0 tools can be harnessed not only in the private sphere but as tools for an e-topia of citizen engagement and participatory democracy. Notice-and-comment rulemaking is the pre-digital government process that most approached (while still falling far short of) the e-topian vision of public participation in deliberative governance. The notice-and-comment process for federal agency rulemaking has now changed from a paper process to an electronic one. Expectations for this switch were high; many anticipated a revolution that would make rulemaking not just more efficient, but also more broadly participatory, democratic, and dialogic. In the event, the move online has not produced a fundamental shift in the nature of notice-and-comment rulemaking. At the same time, the online world in general has come to be increasingly characterized by participatory and dialogic activities, with a move from static, text-based websites to dynamic, multi-media platforms with large amounts of user-generated content. This shift has not left agencies untouched. To the contrary, agencies at all levels of government have embraced social media – by late 2013 there were over 1000 registered federal agency Twitter feeds and over 1000 registered federal agency Facebook pages, for example – but these have been used much more as tools for broadcasting the agency’s message than for dialogue or obtaining input. All of which invites the question of whether agencies could or should directly rely on social media in the rulemaking process.
This study reviews how federal agencies have been using social media to date and considers the practical and legal barriers to using social media in rulemaking, not just to raise the visibility of rulemakings, which is certainly happening, but to gather relevant input and help formulate the content of rules.
The study was undertaken for the Administrative Conference of the United States and is the basis for a set of recommendations adopted by ACUS in December 2013. Those recommendations overlap with but are not identical to the recommendations set out herein.”

How could technology improve policy-making?


Beccy Allen from the Hansard Society (UK): “How can civil servants be sure they have the most relevant, current and reliable data? How can open data be incorporated into the policy making process now and what is the potential for the future use of this vast array of information? How can parliamentary clerks ensure they are aware of the broadest range of expert opinion to inform committee scrutiny? And how can citizens’ views help policy makers to design better policy at all stages of the process?
These are the kind of questions that Sense4us will be exploring over the next three years. The aim is to build a digital tool for policy-makers that can:

  1. locate a broad range of relevant and current information, specific to a particular policy, incorporating open data sets and citizens’ views particularly from social media; and
  2. simulate the consequences and impact of potential policies, allowing policy-makers to change variables and thereby better understand the likely outcomes of a range of policy options before deciding which to adopt.
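The second aim, simulating how outcomes shift as policy variables change, can be pictured with a toy what-if sweep. The model below is entirely invented (a simple capped linear relationship between a subsidy rate and programme uptake) and says nothing about how the Sense4us tool itself will work; it only illustrates the “change a variable, compare projected outcomes” pattern.

```python
# Generic illustration only: the Sense4us models are not described in the
# article. This shows the "change a variable, compare projected outcomes"
# pattern with a made-up linear relationship.

def project_outcome(subsidy_rate, baseline_uptake=0.20, sensitivity=0.8):
    """Hypothetical model: programme uptake rises linearly with the subsidy
    rate, capped at 100%."""
    return min(1.0, baseline_uptake + sensitivity * subsidy_rate)

# Sweep the policy variable and compare scenarios before choosing one.
for subsidy in (0.0, 0.1, 0.25, 0.5):
    uptake = project_outcome(subsidy)
    print(f"subsidy={subsidy:.2f} -> projected uptake={uptake:.0%}")
```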

It is early days for open data and open policy making. The word ‘digital’ peppers the Civil Service Reform Plan but the focus is often on providing information and transactional services digitally. Less attention is paid to how digital tools could improve the nature of policy-making itself.
The Sense4us tool aims to help bridge the gap. It will be developed in consultation with policy-makers at different levels of government across Europe to ensure its potential use by a wide range of stakeholders. At the local level, our partners GESIS (the Leibniz-Institute for the Social Sciences) will be responsible for engaging with users at the city level in Berlin and in the North Rhine-Westphalia state legislature. At the multi-national level, Government to You (Gov2u) will engage with users in the European Parliament and Commission. Meanwhile the Society will be responsible for national level consultation with civil servants, parliamentarians and parliamentary officials in Whitehall and Westminster exploring how the tool can be used to support the UK policy process. Our academic partners leading on technical development of the tool are the IT Innovation Centre at Southampton University, eGovlab at Stockholm University, the University of Koblenz-Landau and the Knowledge Media Institute at the Open University.”

Big Data Becomes a Mirror


Book Review of ‘Uncharted,’ by Erez Aiden and Jean-Baptiste Michel in the New York Times: “Why do English speakers say “drove” rather than “drived”?

As graduate students at the Harvard Program for Evolutionary Dynamics about eight years ago, Erez Aiden and Jean-Baptiste Michel pondered the matter and decided that something like natural selection might be at work. In English, the “-ed” past-tense ending of Proto-Germanic, like a superior life form, drove out the Proto-Indo-European system of indicating tenses by vowel changes. Only the small class of verbs we know as irregular managed to resist.

To test this evolutionary premise, Mr. Aiden and Mr. Michel wound up inventing something they call culturomics, the use of huge amounts of digital information to track changes in language, culture and history. Their quest is the subject of “Uncharted: Big Data as a Lens on Human Culture,” an entertaining tour of the authors’ big-data adventure, whose implications they wildly oversell….

Invigorated by the great verb chase, Mr. Aiden and Mr. Michel went hunting for bigger game. Given a large enough storehouse of words and a fine filter, would it be possible to see cultural change at the micro level, to follow minute fluctuations in human thought processes and activities? Tiny factoids, multiplied endlessly, might assume imposing dimensions.

By chance, Google Books, the megaproject to digitize every page of every book ever printed — all 130 million of them — was starting to roll just as the authors were looking for their next target of inquiry.

Meetings were held, deals were struck and the authors got to it. In 2010, working with Google, they perfected the Ngram Viewer, which takes its name from the computer-science term for a word or phrase. This “robot historian,” as they call it, can search the 30 million volumes already digitized by Google Books and instantly generate a usage-frequency timeline for any word, phrase, date or name, a sort of stock-market graph illustrating the ups and downs of cultural shares over time.
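The core computation behind such a timeline can be sketched simply: count a word’s occurrences per year and normalise by the total number of tokens published that year. The miniature corpus below is invented for illustration and has nothing to do with the Ngram Viewer’s actual implementation or the Google Books data.

```python
# Minimal sketch of the underlying computation, not the Ngram Viewer itself:
# count how often a 1-gram appears each year, normalised by that year's
# total token count, to get a usage-frequency timeline.
from collections import Counter, defaultdict

# Tiny stand-in corpus: (publication year, text) pairs.
corpus = [
    (1990, "the horse drove the cart"),
    (1990, "we drived home late"),          # a "regularised" past tense
    (2000, "she drove to the coast"),
    (2000, "they drove all night"),
]

counts = defaultdict(Counter)   # year -> Counter of tokens
totals = Counter()              # year -> total tokens that year

for year, text in corpus:
    tokens = text.lower().split()
    counts[year].update(tokens)
    totals[year] += len(tokens)

def timeline(word):
    """Relative frequency of `word` per year."""
    return {y: counts[y][word] / totals[y] for y in sorted(totals)}

print(timeline("drove"))   # -> {1990: 0.111..., 2000: 0.222...}
```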

Mr. Aiden, now director of the Center for Genome Architecture at Rice University, and Mr. Michel, who went on to start the data-science company Quantified Labs, play the Ngram Viewer (books.google.com/ngrams) like a Wurlitzer…

The Ngram Viewer delivers the what and the when but not the why. Take the case of specific years. All years get attention as they approach, peak when they arrive, then taper off as succeeding years occupy the attention of the public. Mentions of the year 1872 had declined by half in 1896, a slow fade that took 23 years. The year 1973 completed the same trajectory in less than half the time.
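The “fade” the authors measure can be expressed as a small calculation: find the year of peak usage, then the first later year at which usage has dropped to half of that peak. The timeline values below are made up for illustration and are not the book’s data.

```python
# Sketch of the "half-life" measurement described above: years from a term's
# peak frequency to the first year at or below half that peak. The timeline
# is invented for illustration.
def half_life(timeline):
    """Years from peak usage to the first year at or below half the peak."""
    peak_year = max(timeline, key=timeline.get)
    half = timeline[peak_year] / 2
    for year in sorted(y for y in timeline if y > peak_year):
        if timeline[year] <= half:
            return year - peak_year
    return None  # never fell to half within the data

example = {1872: 0.8, 1880: 0.9, 1890: 0.5, 1896: 0.42, 1900: 0.3}
print(half_life(example))   # -> 16 (half of the 1880 peak reached by 1896)
```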

“What caused that change?” the authors ask. “We don’t know. For now, all we have are the naked correlations: what we uncover when we look at collective memory through the digital lens of our new scope.” Someone else is going to have to do the heavy lifting.”