What Jelly Means


Steven Johnson: “A few months ago, I found this strange white mold growing in my garden in California. I’m a novice gardener, and to make matters worse, a novice Californian, so I had no idea what these small white cells might portend for my flowers.
This is one of those odd blank spots — I used to call them Googleholes in the early days of the service — where the usual Delphic source of all knowledge comes up relatively useless. The Google algorithm doesn’t know what those white spots are, the way it knows the answers to more computational questions, like “what is the top-ranked page for ‘white mold’?” or “what is the capital of Illinois?” What I want, in this situation, is captured by the distinction we usually draw between information and wisdom. I don’t just want to know what the white spots are; I want to know if I should be worried about them, or if they’re just a normal thing during late summer in Northern California gardens.
Now, I’m sure I know a dozen people who would be able to answer this question, but the problem is I don’t really know which people they are. Someone in my extended social network has likely experienced these white spots on their plants, or better yet, gotten rid of them. (Or, for all I know, eaten them — I’m trying not to be judgmental.) There are tools out there that would help me run the social search required to find that person. I could just bulk-email my entire address book with images of the mold and ask for help. I could go on Quora, or a gardening site.
But the thing is, it’s a type of question that I find myself wanting to ask a lot, and there’s something inefficient about trying to figure out the exact right tool to use to ask it each time, particularly when we have seen the value of consolidating so many of our queries into a single, predictable search field at Google.
This is why I am so excited about the new app, Jelly, which launched today. …
Jelly, if you haven’t heard, is the brainchild of Biz Stone, one of Twitter’s co-founders.  The service launches today with apps on iOS and Android. (Biz himself has a blog post and video, which you should check out.) I’ve known Biz since the early days of Twitter, and I’m excited to be an adviser and small investor in a company that shares so many of the values around networks and collective intelligence that I’ve been writing about since Emergence.
The thing that’s most surprising about Jelly is how fun it is to answer questions. There’s something strangely satisfying in flipping through the cards, reading questions, scanning the pictures, and looking for a place to be helpful. It’s the same broad gesture of reading, say, a Twitter feed, and pleasantly addictive in the same way, but the intent is so different. Scanning a Twitter feed while waiting for the train has the feel of “Here we are now, entertain us.” Scanning Jelly is more like: “I’m here. How can I help?”

Open Government Strategy Continues with US Currency Production API


Eric Carter in ProgrammableWeb: “Last year, the Executive branch of the US government made huge strides in opening up government-controlled data to the developer community. Projects such as the Open Data Policy and the Machine Readable Executive Order have led the US government to develop an API strategy. Today, ProgrammableWeb takes a look at another open government API: the Annual Production Figures of United States Currency API.

The US Treasury’s Bureau of Engraving and Printing (BEP) provides the dataset available through the Production Figures API. The data available consists of the number of $1, $5, $10, $20, $50, and $100 notes printed each year from 1980 to 2012. With this straightforward, seemingly basic set of data available, the question becomes: “Why is this data useful?” To answer this, one should consider the purpose of the Executive Order:

“Openness in government strengthens our democracy, promotes the delivery of efficient and effective services to the public, and contributes to economic growth. As one vital benefit of open government, making information resources easy to find, accessible, and usable can fuel entrepreneurship, innovation, and scientific discovery that improves Americans’ lives and contributes significantly to job creation.”

The API uses HTTP and can return responses in XML, JSON, or CSV formats. As stated, the API retrieves the number of notes of a designated denomination printed in the desired year. For more information and code samples, visit the API docs.”
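To make the shape of such a call concrete, here is a minimal sketch in Python. The endpoint URL and parameter names are placeholders, not the documented BEP interface; the real values are in the API docs referenced above.

```python
# Illustrative sketch only: the endpoint URL and parameter names are invented
# placeholders, not the documented BEP interface. The point is simply the shape
# of a query like "how many $20 notes were printed in 2010?" against an HTTP
# data API that can answer in JSON, XML, or CSV.
import requests

BASE_URL = "https://example.gov/bep/production-figures"  # placeholder endpoint


def notes_printed(denomination: int, year: int, fmt: str = "json"):
    """Fetch the number of notes of one denomination printed in a given year."""
    params = {"denomination": denomination, "year": year, "format": fmt}
    response = requests.get(BASE_URL, params=params, timeout=10)
    response.raise_for_status()
    return response.json() if fmt == "json" else response.text


if __name__ == "__main__":
    print(notes_printed(20, 2010))
```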
 

E-government research in the United States


Paper by J.T. Snead and E. Wright in Government Information Quarterly: “The purpose of this exploratory study is to review scholarly publications and assess e-government research efforts as a field of study specific to the United States e-government environment. Study results reveal that researchers who focus on the U.S. e-government environment assess specific e-government topics at the federal, state, and local levels; however, there are gaps in the research efforts by topic area and across different levels of government, which indicate opportunities for future research. Results also find that a multitude of methodological approaches are used to assess e-government. Issues exist, however: methodologies are often absent or weakly presented in publications, few studies use multi-method approaches for data collection and analysis, and few studies take a theory-based approach to understanding the U.S. e-government environment.”

Protecting personal data in E-government: A cross-country study


Paper by Yuehua Wu in Government Information Quarterly: “This paper presents the findings of a comparative study of laws and policies employed to protect personal data processed in the context of e-government in three countries (the United States, Germany, and China) with rather different approaches. Drawing on governance theory, the paper seeks to document the mechanisms utilized and to understand the factors that shape the governance modes adopted. The cases reveal that national government regulations have not kept pace with technological change and with the current information practices of the public sector. Nonetheless, traditional government regulation remains the major governance mode for the issue under discussion. Self-regulation and code-based regulation serve supplementary roles to traditional government regulation. National context is found to impact the form and level of data protection and the choice of governance modes.”

The GovLab Index: Open Data


Please find below the latest installment in The GovLab Index series, inspired by Harper’s Index. “The GovLab Index: Open Data — December 2013” provides an update on our previous Open Data installment and highlights global trends in Open Data and the release of public sector information. Previous installments include Measuring Impact with Evidence, The Data Universe, Participation and Civic Engagement, and Trust in Institutions.
Value and Impact

  • Potential global value of open data estimated by McKinsey: $3 trillion annually
  • Potential yearly value for the United States: $1.1 trillion 
  • Europe: $900 billion
  • Rest of the world: $1.7 trillion
  • Estimated annual growth in the value of open data in the European Union: 7%
  • Value of releasing the UK’s geospatial data as open data: £13 million per year by 2016
  • Estimated worth of business reuse of public sector data in Denmark in 2010: more than €80 million a year
  • Estimated worth of business reuse of public sector data across the European Union in 2010: €27 billion a year
  • Total direct and indirect economic gains from easier public sector information re-use across the whole European Union economy, as of May 2013: €140 billion annually
  • Economic value of publishing data on adult cardiac surgery in the U.K., as of May 2013: £400 million
  • Economic value of time saved for users of live data from the Transport for London apps, as of May 2013: between £15 million and £58 million
  • Estimated increase in GDP in England and Wales in 2008-2009 due to the adoption of geospatial information by local public services providers: +£320m
  • Average decrease in borrowing costs in sovereign bond markets for emerging market economies when implementing transparent practices (measured by accuracy and frequency according to IMF policies, across 23 countries from 1999-2002): 11%
  • Open weather data supports an estimated $1.5 billion in applications in the secondary insurance market – but much greater value comes from accurate weather predictions, which save the U.S. more than $30 billion annually
  • Estimated value of GPS data: $90 billion

Efforts and Involvement

  • Number of U.S. based companies identified by the GovLab that use government data in innovative ways: 500
  • Number of open data initiatives worldwide in 2009: 2
  • Number of open data initiatives worldwide in 2013: over 300
  • Number of countries with open data portals: more than 40
  • Countries that share more information online than the U.S.: 14
  • Number of cities globally that participated in 2013 International Open Data Hackathon Day: 102
  • Number of U.S. cities with Open Data Sites in 2013: 43
  • U.S. states with open data initiatives: 40
  • Membership growth in the Open Government Partnership in two years: from 8 to 59 countries
  • Number of time series indicators (GDP, foreign direct investment, life expectancy, internet users, etc.) in the World Bank Open Data Catalog: over 8,000
  • How many of 77 countries surveyed by the Open Data Barometer have some form of Open Government Data Initiative: over 55%
  • How many OGD initiatives have dedicated resources with senior level political backing: over 25%
  • How many countries are in the Open Data Index: 70
    • How many of the 700 key datasets in the Index are open: 84
  • Number of countries in the Open Data Census: 77
    • How many of the 727 key datasets in the Census are open: 95
  • How many countries surveyed have formal data policies in 2013: 55%
  • Those who have machine-readable data available: 25%
  • Top 5 countries in Open Data rankings: United Kingdom, United States, Sweden, New Zealand, Norway
  • The different levels of Open Data Certificates a data user or publisher can achieve “along the way to world-class open data”: 4 (Raw, Pilot, Standard, and Expert)
  • The number of data ecosystem categories identified by the OECD: 3 (data producers, infomediaries, and users)

Examining Datasets
FULL VERSION AT http://thegovlab.org/govlab-index-open-data-updated/
 

When Lean Startup Arrives in a Trojan Horse: Innovation in Extreme Bureaucracy


Steven Hodas @ The Lean Startup Conference 2013: …Steven runs a procurement-innovation program in one of the world’s most notorious bureaucracies: the New York City Department of Education. In a fear-driven atmosphere, with lots of incentive not to be embarrassed, he’ll talk about the challenges he’s faced and the progress he’s made testing new ideas.

When Tech Culture And Urbanism Collide


John Tolva: “…We can build upon the success of the work being done at the intersection of technology and urban design, right now.

For one, the whole realm of social enterprise — for-profit startups that seek to solve real social problems — has a huge overlap with urban issues. Impact Engine in Chicago, for instance, is an accelerator squarely focused on meaningful change and profitable businesses. One of their companies, Civic Artworks, has set as its goal rebalancing the community planning process.

The Code for America Accelerator and Tumml, both located in San Francisco, morph the concept of social innovation into civic/urban innovation. The companies nurtured by CfA and Tumml are filled with technologists and urbanists working together to create profitable businesses. Like WorkHands, a kind of LinkedIn for blue-collar trades. Would something like this work outside a city? Maybe. Are its effects outsized and scale-ready in a city? Absolutely. That’s the opportunity in urban innovation.

Scale is what powers the sharing economy and it thrives because of the density and proximity of cities. In fact, shared resources at critical density is one of the only good definitions for what a city is. It’s natural that entrepreneurs have overlaid technology on this basic fact of urban life to amplify its effects. Would TaskRabbit, Hailo or LiquidSpace exist in suburbia? Probably, but their effects would be minuscule and investors would get restless. The city in this regard is the platform upon which sharing economy companies prosper. More importantly, companies like this change the way the city is used. It’s not urban planning, but it is urban (re)design and it makes a difference.

A twist that many in the tech sector who complain about cities often miss is that change in a city is not the same thing as change in city government. Obviously they are deeply intertwined; change is mighty hard when it is done at cross-purposes with government leadership. But it happens all the time. Non-government actors — foundations, non-profits, architecture and urban planning firms, real estate developers, construction companies — contribute massively to the shape and health of our cities.

Often this contribution is powered through policies of open data publication by municipal governments. Open data is the raw material of a city, the vital signs of what has happened there, what is happening right now, and the deep pool of patterns for what might happen next.

Tech entrepreneurs would do well to look at the organizations and companies capitalizing on this data as the real change agents, not government itself. Even the data in many cases is generated outside government. Citizens often do the most interesting data-gathering, with tools like LocalData. The most exciting thing happening at the intersection of technology and cities today — what really makes them “smart” — is what is happening at the periphery of city government. It’s easy to bellyache about government, and certainly there are administrations that do not make data public (or shut it down), but tech companies that are truly interested in city change should know that there are plenty of examples of how to start up and do it.

And yet, the somewhat staid world of architecture and urban-scale design presents the most opportunity to a tech community interested in real urban change. While technology obviously plays a role in urban planning — 3D visual design tools like Revit and mapping services like ArcGIS are foundational for all modern firms — data analytics as a serious input to design matters has only been used in specialized (mostly energy efficiency) scenarios. Where are the predictive analytics, the holistic models, the software-as-a-service providers for the brave new world of urban informatics and The Internet of Things? Technologists, it’s our move.

Something’s amiss when some city governments — rarely the vanguard in technological innovation — have more sophisticated tools for data-driven decision-making than the private sector firms who design the city. But some understand the opportunity. Vannevar Technology is working on it, as is Synthicity. There’s plenty of room for the most positive aspects of tech culture to remake the profession of urban planning itself. (Look to NYU’s Center for Urban Science and Progress and the University of Chicago’s Urban Center for Computation and Data for leadership in this space.)…”

Brainlike Computers, Learning From Experience


The New York Times: “Computers have entered the age when they are able to learn from their own mistakes, a development that is about to turn the digital world on its head.

The first commercial version of the new kind of computer chip is scheduled to be released in 2014. Not only can it automate tasks that now require painstaking programming — for example, moving a robot’s arm smoothly and efficiently — but it can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.

The new computing approach, already in use by some large technology companies, is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.
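As a rough intuition for how a neuron “reacts to stimuli,” here is a minimal leaky integrate-and-fire sketch in Python. It is a standard textbook abstraction offered purely as an aside; it is not the design of the chips the article describes.

```python
# A leaky integrate-and-fire neuron: a textbook abstraction of how a neuron
# accumulates incoming signal, "leaks" charge over time, and fires once a
# threshold is crossed. Purely illustrative; not any vendor's chip design.
import random


def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Return the time steps at which the neuron spikes for a stream of inputs."""
    potential = 0.0
    spike_times = []
    for t, current in enumerate(inputs):
        potential = potential * leak + current   # integrate input, leak charge
        if potential >= threshold:               # fire when the threshold is crossed
            spike_times.append(t)
            potential = 0.0                      # reset after a spike
    return spike_times


if __name__ == "__main__":
    stream = [random.uniform(0.0, 0.4) for _ in range(100)]  # noisy input current
    print("spike times:", simulate_lif(stream))
```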

In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That can hold enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.

Designers say the computing style can clear the way for robots that can safely walk and drive in the physical world, though a thinking or conscious computer, a staple of science fiction, is still far off on the digital horizon.

“We’re moving from engineering computing systems to something that has many of the characteristics of biological computing,” said Larry Smarr, an astrophysicist who directs the California Institute for Telecommunications and Information Technology, one of many research centers devoted to developing these new kinds of computer circuits.

Conventional computers are limited by what they have been programmed to do. Computer vision systems, for example, only “recognize” objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation.

But last year, Google researchers were able to get a machine-learning algorithm, known as a neural network, to perform an identification task without supervision. The network scanned a database of 10 million images, and in doing so trained itself to recognize cats.
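For a concrete, toy-scale picture of what “training itself” without labels means, the sketch below shows a tiny autoencoder learning to compress and reconstruct unlabeled data. It is a classroom illustration of unsupervised learning, not the scale or architecture of the Google experiment described above.

```python
# Toy illustration of unsupervised learning: a one-hidden-layer autoencoder
# learns features from unlabeled data by trying to reconstruct its own input.
# Classroom-scale only; not the architecture of the Google cat experiment.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 16))                   # 200 unlabeled "images", 16 pixels each

n_hidden, lr = 4, 0.05
W_enc = rng.normal(0, 0.1, (16, n_hidden))  # encoder weights
W_dec = rng.normal(0, 0.1, (n_hidden, 16))  # decoder weights

for epoch in range(500):
    H = np.tanh(X @ W_enc)                  # hidden "features" discovered from raw data
    X_hat = H @ W_dec                       # attempted reconstruction of the input
    err = X_hat - X                         # reconstruction error drives all learning
    grad_dec = H.T @ err / len(X)
    grad_hidden = (err @ W_dec.T) * (1 - H ** 2)
    grad_enc = X.T @ grad_hidden / len(X)
    W_dec -= lr * grad_dec                  # gradient descent on the squared error
    W_enc -= lr * grad_enc

print("final reconstruction error:",
      float(np.mean((np.tanh(X @ W_enc) @ W_dec - X) ** 2)))
```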

In June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately.

The new approach, used in both hardware and software, is being driven by the explosion of scientific knowledge about the brain. Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, said that is also its limitation, as scientists are far from fully understanding how brains function.”

Crowdsourcing drug discovery: Antitumour compound identified


David Bradley in Spectroscopy.now: “American researchers have used “crowdsourcing” – the cooperation of a large number of interested non-scientists via the internet – to help them identify a new fungus. The species contains unusual metabolites, which were isolated and characterized with the help of vibrational circular dichroism (VCD). One compound reveals itself to have potential antitumour activity.
So far, a mere 7 percent of the more than 1.5 million species of fungi thought to exist have been identified and an even smaller fraction of these have been the subject of research seeking bioactive natural products. …Robert Cichewicz of the University of Oklahoma, USA, and his colleagues hoped to remedy this situation by working with a collection of several thousand fungal isolates from three regions: Arctic Alaska, tropical Hawaii, and subtropical to semiarid Oklahoma. Collaborator Susan Mooberry of the University of Texas at San Antonio carried out biological assays on many fungal isolates looking for antitumor activity among the metabolites in Cichewicz’s collection. A number of interesting substances were identified…
However, the researchers realized quickly enough that the efforts of a single research team were inadequate if samples representing the immense diversity of the thousands of fungi they hoped to test were to be obtained and tested. They thus turned to the help of citizen scientists in a “crowdsourcing” initiative. In this approach, lay people with an interest in science, and even fellow scientists in other fields, were recruited to collect and submit soil from their gardens.
As the samples began to arrive, the team quickly found among them a previously unknown fungal strain – a Tolypocladium species – growing in a soil sample from Alaska. Colleague Andrew Miller of the University of Illinois identified this new fungus, which was found to produce new compounds in response to changes in its laboratory growth conditions. Moreover, extraction of the active chemicals from the isolate revealed a unique metabolite, which was shown to have significant antitumour activity in laboratory tests. The team suggests that this novel substance may represent a valuable new approach to cancer treatment because it precludes certain biochemical mechanisms that lead to the emergence of drug resistance in cancers treated with conventional drugs…
The researchers point out the essential roles that citizen scientists can play. “Many of the groundbreaking discoveries, theories, and applied research during the last two centuries were made by scientists operating from their own homes,” Cichewicz says. “Although much has changed, the idea that citizen scientists can still participate in research is a powerful means for reinvigorating the public’s interest in science and making important discoveries,” he adds.”

A Bottom-Up Smart City?


Alicia Rouault at Data-Smart City Solutions: “America’s shrinking cities face a tide of disinvestment, abandonment, vacancy, and a shift toward deconstruction and demolition followed by strategic reinvestment, rightsizing, and a host of other strategies designed to renew once-great cities. Thriving megacity regions are experiencing rapid growth in population, offering a different challenge for city planners to redefine density, housing, and transportation infrastructure. As cities shrink and grow, policymakers are increasingly called to respond to these changes by making informed, data-driven decisions. What is the role of the citizen in this process of collecting and understanding civic data?
Writing for Forbes in “Open Sourcing the Neighborhood,” Professor of Sociology at Columbia University Saskia Sassen calls for “open source urbanism” as an antidote to the otherwise top-down smart city movement. This form of urbanism involves opening traditional verticals of information within civic and governmental institutions. Citizens can engage with and understand the logic behind decisions by exploring newly opened administrative data. Beyond opening these existing datasets, Sassen points out that citizen experts hold invaluable institutional memory that can serve as an alternate and legitimate resource for policymakers, economists, and urban planners alike.
In 2012, we created a digital platform called LocalData to address the production and use of community-generated data in a municipal context. LocalData is a digital mapping service used globally by universities, non-profits, and municipal governments to gather and understand data at a neighborhood scale. In contrast to traditional Census or administrative data, which is produced by a central agency and collected infrequently, our platform provides a simple method for both community-based organizations and municipal employees to gather real-time data on project-specific indicators: property conditions, building inspections, environmental issues or community assets. Our platform then visualizes data and exports it into formats integrated with existing systems in government to seamlessly provide accurate and detailed information for decision makers.
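As a minimal sketch of that export pattern, the snippet below converts flat survey records into GeoJSON, a standard format most municipal GIS systems can ingest. The field names and records are invented for illustration and are not LocalData’s actual API or schema.

```python
# Minimal sketch of the export pattern described above: flat neighborhood-survey
# records converted to GeoJSON, a standard format most municipal GIS tools read.
# Field names and records are invented for illustration; this is not LocalData's
# actual API or schema.
import json

surveys = [
    {"lon": -83.0458, "lat": 42.3314, "condition": "vacant", "recommend_demolition": True},
    {"lon": -83.0500, "lat": 42.3300, "condition": "occupied", "recommend_demolition": False},
]


def to_geojson(records):
    """Convert flat survey records into a GeoJSON FeatureCollection."""
    features = [
        {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [r["lon"], r["lat"]]},
            "properties": {k: v for k, v in r.items() if k not in ("lon", "lat")},
        }
        for r in records
    ]
    return {"type": "FeatureCollection", "features": features}


print(json.dumps(to_geojson(surveys), indent=2))
```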
LocalData began as a project in Detroit, Michigan, where the city was tackling a very real lack of standard, updated, and consistent condition information on the quality and status of vacant and abandoned properties. Many of these properties were owned by the city and county due to high foreclosure rates. One of Detroit’s strategies for combating crime and stabilizing neighborhoods is to demolish property in a targeted fashion. This strategy is as much a political win as it is an effective way to curb the secondary effects of vacancy: crime, drug use, and arson. Using LocalData, the city mapped critical corridors of emergent commercial property as an analysis tool for where to place investment, and documented thousands of vacant properties to understand where to target demolition.
Vacancy is not unique to the Midwest. Following our work with the Detroit Mayor’s office and planning department, LocalData has been used in dozens of other cities in the U.S. and abroad. Currently the Smart Chicago Collaborative is using LocalData to conduct a similar audit of vacant and abandoned property in southwest Chicago. Though an effective tool for capturing building-specific information, LocalData has also been used to capture behavior and the movement of goods. The MIT Megacities Logistics Lab has used LocalData to map and understand the intensity of urban supply chains by interviewing shop owners and mapping delivery routes in global megacities in Mexico, Colombia, Brazil, and the U.S. The resulting information has been used with analytical models to help both city officials and companies design better city logistics policies and operations….”