Paper by JT Snead and E Wright in Government Information Quarterly: “The purpose of this exploratory study is to review scholarly publications and assess e-government research efforts as a field of study specific to the United States e-government environment. Study results reveal that researchers who focus on the U.S. e-government environment assess specific e-government topics at the federal, state, and local levels; however, there are gaps in the research efforts by topic area and across different levels of government, which indicate opportunities for future research. Results also find that a multitude of methodological approaches are used to assess e-government. Issues exist, however: presentations of methodologies in publications are weak or absent, few studies include multi-method evaluation approaches for data collection and analysis, and few studies take a theory-based approach to understanding the U.S. e-government environment.”
Protecting personal data in E-government: A cross-country study
The GovLab Index: Open Data
Please find below the latest installment in The GovLab Index series, inspired by Harper’s Index. “The GovLab Index: Open Data — December 2013” provides an update on our previous Open Data installment, and highlights global trends in Open Data and the release of public sector information. Previous installments include Measuring Impact with Evidence, The Data Universe, Participation and Civic Engagement and Trust in Institutions.
Value and Impact
- Potential global value of open data estimated by McKinsey: $3 trillion annually
- Potential yearly value for the United States: $1.1 trillion
- Europe: $900 billion
- Rest of the world: $1.7 trillion
- How much the value of open data is estimated to grow per year in the European Union: 7% annually
- Value of releasing the UK’s geospatial data as open data: £13 million per year by 2016
- Estimated worth of business reuse of public sector data in Denmark in 2010: more than €80 million a year
- Estimated worth of business reuse of public sector data across the European Union in 2010: €27 billion a year
- Total direct and indirect economic gains from easier public sector information re-use across the whole European Union economy, as of May 2013: €140 billion annually
- Economic value of publishing data on adult cardiac surgery in the U.K., as of May 2013: £400 million
- Economic value of time saved for users of live data from the Transport for London apps, as of May 2013: between £15 million and £58 million
- Estimated increase in GDP in England and Wales in 2008-2009 due to the adoption of geospatial information by local public services providers: +£320m
- Average decrease in borrowing costs in sovereign bond markets for emerging market economies when implementing transparent practices (measured by accuracy and frequency according to IMF policies, across 23 countries from 1999-2002): 11%
- Open weather data supports an estimated $1.5 billion in applications in the secondary insurance market – but much greater value comes from accurate weather predictions, which save the U.S. annually more than $30 billion
- Estimated value of GPS data: $90 billion
Efforts and Involvement
- Number of U.S.-based companies identified by the GovLab that use government data in innovative ways: 500
- Number of open data initiatives worldwide in 2009: 2
- Number of open data initiatives worldwide in 2013: over 300
- Number of countries with open data portals: more than 40
- Countries that share more information online than the U.S.: 14
- Number of cities globally that participated in 2013 International Open Data Hackathon Day: 102
- Number of U.S. cities with Open Data Sites in 2013: 43
- U.S. states with open data initiatives: 40
- Membership growth in the Open Government Partnership in two years: from 8 to 59 countries
- Number of time series indicators (GDP, foreign direct investment, life expectancy, internet users, etc.) in the World Bank Open Data Catalog: over 8,000
- How many of 77 countries surveyed by the Open Data Barometer have some form of Open Government Data Initiative: over 55%
- How many OGD initiatives have dedicated resources with senior level political backing: over 25%
- How many countries are in the Open Data Index: 70
- How many of the 700 key datasets in the Index are open: 84
- Number of countries in the Open Data Census: 77
- How many of the 727 key datasets in the Census are open: 95
- How many countries surveyed have formal data policies in 2013: 55%
- Those who have machine-readable data available: 25%
- Top 5 countries in Open Data rankings: United Kingdom, United States, Sweden, New Zealand, Norway
- The different levels of Open Data Certificates a data user or publisher can achieve “along the way to world-class open data”: 4 (Raw, Pilot, Standard, and Expert)
- The number of data ecosystem categories identified by the OECD: 3 (data producers, infomediaries, and users)
Examining Datasets…
FULL VERSION AT http://thegovlab.org/govlab-index-open-data-updated/
When Lean Startup Arrives in a Trojan Horse–Innovation in Extreme Bureaucracy
Steven Hodas @ The Lean Startup Conference 2013: “…Steven runs a procurement-innovation program in one of the world’s most notorious bureaucracies: the New York City Department of Education. In a fear-driven atmosphere, with strong incentives not to be embarrassed, he’ll talk about the challenges he’s faced and the progress he’s made testing new ideas.”
When Tech Culture And Urbanism Collide
John Tolva: “…We can build upon the success of the work being done at the intersection of technology and urban design, right now.
For one, the whole realm of social enterprise — for-profit startups that seek to solve real social problems — has a huge overlap with urban issues. Impact Engine in Chicago, for instance, is an accelerator squarely focused on meaningful change and profitable businesses. One of their companies, Civic Artworks, has set as its goal rebalancing the community planning process.
The Code for America Accelerator and Tumml, both located in San Francisco, morph the concept of social innovation into civic/urban innovation. The companies nurtured by CfA and Tumml are filled with technologists and urbanists working together to create profitable businesses. Like WorkHands, a kind of LinkedIn for blue collar trades. Would something like this work outside a city? Maybe. Are its effects outsized and scale-ready in a city? Absolutely. That’s the opportunity in urban innovation.
Scale is what powers the sharing economy and it thrives because of the density and proximity of cities. In fact, shared resources at critical density is one of the only good definitions for what a city is. It’s natural that entrepreneurs have overlaid technology on this basic fact of urban life to amplify its effects. Would TaskRabbit, Hailo or LiquidSpace exist in suburbia? Probably, but their effects would be minuscule and investors would get restless. The city in this regard is the platform upon which sharing economy companies prosper. More importantly, companies like this change the way the city is used. It’s not urban planning, but it is urban (re)design and it makes a difference.
A twist that many in the tech sector who complain about cities often miss is that change in a city is not the same thing as change in city government. Obviously they are deeply intertwined; change is mighty hard when it is done at cross-purposes with government leadership. But it happens all the time. Non-government actors — foundations, non-profits, architecture and urban planning firms, real estate developers, construction companies — contribute massively to the shape and health of our cities.
Often this contribution is powered through policies of open data publication by municipal governments. Open data is the raw material of a city, the vital signs of what has happened there, what is happening right now, and the deep pool of patterns for what might happen next.
Tech entrepreneurs would do well to look at the organizations and companies capitalizing on this data as the real change agents, not government itself. Even the data in many cases is generated outside government. Citizens often do the most interesting data-gathering, with tools like LocalData. The most exciting thing happening at the intersection of technology and cities today — what really makes them “smart” — is what is happening at the periphery of city government. It’s easy to belly-ache about government, and certainly there are administrations that do not make data public (or shut it down), but tech companies that are truly interested in city change should know that there are plenty of examples of how to start up and do it.
And yet, the somewhat staid world of architecture and urban-scale design presents the most opportunity to a tech community interested in real urban change. While technology obviously plays a role in urban planning — 3D visual design tools like Revit and mapping services like ArcGIS are foundational for all modern firms — data analytics as a serious input to design matters has only been used in specialized (mostly energy efficiency) scenarios. Where are the predictive analytics, the holistic models, the software-as-a-service providers for the brave new world of urban informatics and The Internet of Things? Technologists, it’s our move.
Something’s amiss when some city governments — rarely the vanguard in technological innovation — have more sophisticated tools for data-driven decision-making than the private sector firms who design the city. But some understand the opportunity. Vannevar Technology is working on it, as is Synthicity. There’s plenty of room for the most positive aspects of tech culture to remake the profession of urban planning itself. (Look to NYU’s Center for Urban Science and Progress and the University of Chicago’s Urban Center for Computation and Data for leadership in this space.)…”
Brainlike Computers, Learning From Experience
The New York Times: “Computers have entered the age when they are able to learn from their own mistakes, a development that is about to turn the digital world on its head.
The first commercial version of the new kind of computer chip is scheduled to be released in 2014. Not only can it automate tasks that now require painstaking programming — for example, moving a robot’s arm smoothly and efficiently — but it can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.
The new computing approach, already in use by some large technology companies, is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.
In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That can hold enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.
Designers say the computing style can clear the way for robots that can safely walk and drive in the physical world, though a thinking or conscious computer, a staple of science fiction, is still far off on the digital horizon.
“We’re moving from engineering computing systems to something that has many of the characteristics of biological computing,” said Larry Smarr, an astrophysicist who directs the California Institute for Telecommunications and Information Technology, one of many research centers devoted to developing these new kinds of computer circuits.
Conventional computers are limited by what they have been programmed to do. Computer vision systems, for example, only “recognize” objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation.
But last year, Google researchers were able to get a machine-learning algorithm, known as a neural network, to perform an identification task without supervision. The network scanned a database of 10 million images, and in doing so trained itself to recognize cats.
In June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately.
The new approach, used in both hardware and software, is being driven by the explosion of scientific knowledge about the brain. Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, said that is also its limitation, as scientists are far from fully understanding how brains function.”
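The unsupervised learning described in the article — a system finding structure in data with no labels provided — can be illustrated with a toy sketch. This is a minimal, invented example using simple one-dimensional k-means clustering, not a neural network and not Google’s actual system; all data values are made up for the illustration:

```python
# Toy unsupervised learning: 1-D k-means clustering in pure Python.
# No labels are given; the algorithm discovers the groups on its own.
# Data and cluster count are invented for this sketch.

def kmeans_1d(points, k, iterations=20):
    """Group numbers into k clusters without any supervision."""
    # Initialize centroids with the k smallest distinct points.
    centroids = sorted(set(points))[:k]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid.
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its assigned points.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two obvious groups emerge without anyone labeling them:
data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.3]
print(kmeans_1d(data, 2))  # two centroids, near 1.0 and 10.1
```

The large-scale neural networks in the article work on the same principle — repeatedly adjusting internal parameters so that similar inputs end up grouped together — just with millions of images and far richer representations.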
Crowdsourcing drug discovery: Antitumour compound identified
David Bradley in Spectroscopy.now: “American researchers have used “crowdsourcing” – the cooperation of a large number of interested non-scientists via the internet – to help them identify a new fungus. The species contains unusual metabolites, isolated and characterized, with the help of vibrational circular dichroism (VCD). One compound reveals itself to have potential antitumour activity.
So far, a mere 7 percent of the more than 1.5 million species of fungi thought to exist have been identified and an even smaller fraction of these have been the subject of research seeking bioactive natural products. …Robert Cichewicz of the University of Oklahoma, USA, and his colleagues hoped to remedy this situation by working with a collection of several thousand fungal isolates from three regions: Arctic Alaska, tropical Hawaii, and subtropical to semiarid Oklahoma. Collaborator Susan Mooberry of the University of Texas at San Antonio carried out biological assays on many fungal isolates looking for antitumor activity among the metabolites in Cichewicz’s collection. A number of interesting substances were identified…
However, the researchers realized quickly enough that the efforts of a single research team were inadequate if samples representing the immense diversity of the thousands of fungi they hoped to test were to be obtained and tested. They thus turned to the help of citizen scientists in a “crowdsourcing” initiative. In this approach, lay people with an interest in science, and even fellow scientists in other fields, were recruited to collect and submit soil from their gardens.
As the samples began to arrive, the team quickly found among them a previously unknown fungal strain – a Tolypocladium species – growing in a soil sample from Alaska. Colleague Andrew Miller of the University of Illinois did the identification of this new fungus, which was found to be highly responsive to making new compounds based on changes in its laboratory growth conditions. Moreover, extraction of the active chemicals from the isolate revealed a unique metabolite which was shown to have significant antitumour activity in laboratory tests. The team suggests that this novel substance may represent a valuable new approach to cancer treatment because it precludes certain biochemical mechanisms that lead to the emergence of drug resistance in cancer with conventional drugs…
The researchers point out the essential roles that citizen scientists can play. “Many of the groundbreaking discoveries, theories, and applied research during the last two centuries were made by scientists operating from their own homes,” Cichewicz says. “Although much has changed, the idea that citizen scientists can still participate in research is a powerful means for reinvigorating the public’s interest in science and making important discoveries,” he adds.”
A Bottom-Up Smart City?
Alicia Rouault at Data-Smart City Solutions: “America’s shrinking cities face a tide of disinvestment, abandonment, vacancy, and a shift toward deconstruction and demolition followed by strategic reinvestment, rightsizing, and a host of other strategies designed to renew once-great cities. Thriving megacity regions are experiencing rapid growth in population, offering a different challenge for city planners to redefine density, housing, and transportation infrastructure. As cities shrink and grow, policymakers are increasingly called to respond to these changes by making informed, data-driven decisions. What is the role of the citizen in this process of collecting and understanding civic data?
Writing for Forbes in “Open Sourcing the Neighborhood,” Professor of Sociology at Columbia University Saskia Sassen calls for “open source urbanism” as an antidote to the otherwise top-down smart city movement. This form of urbanism involves opening traditional verticals of information within civic and governmental institutions. Citizens can engage with and understand the logic behind decisions by exploring newly opened administrative data. Beyond opening these existing datasets, Sassen points out that citizen experts hold invaluable institutional memory that can serve as an alternate and legitimate resource for policymakers, economists, and urban planners alike.
In 2012, we created a digital platform called LocalData to address the production and use of community-generated data in a municipal context. LocalData is a digital mapping service used globally by universities, non-profits, and municipal governments to gather and understand data at a neighborhood scale. In contrast to traditional Census or administrative data, which is produced by a central agency and collected infrequently, our platform provides a simple method for both community-based organizations and municipal employees to gather real-time data on project-specific indicators: property conditions, building inspections, environmental issues or community assets. Our platform then visualizes data and exports it into formats integrated with existing systems in government to seamlessly provide accurate and detailed information for decision makers.
LocalData began as a project in Detroit, Michigan where the city was tackling a very real lack of standard, updated, and consistent condition information on the quality and status of vacant and abandoned properties. Many of these properties were owned by the city and county due to high foreclosure rates. One of Detroit’s strategies for combating crime and stabilizing neighborhoods is to demolish property in a targeted fashion. This strategy serves as a political win as much as providing an effective way to curb the secondary effects of vacancy: crime, drug use, and arson. Using LocalData, the city mapped critical corridors of emergent commercial property as an analysis tool for where to place investment, and documented thousands of vacant properties to understand where to target demolition.
Vacancy is not unique to the Midwest. Following our work with the Detroit Mayor’s office and planning department, LocalData has been used in dozens of other cities in the U.S. and abroad. Currently the Smart Chicago Collaborative is using LocalData to conduct a similar audit of vacant and abandoned property in southwest Chicago. Though an effective tool for capturing building-specific information, LocalData has also been used to capture behavior and movement of goods. The MIT Megacities Logistics Lab has used LocalData to map and understand the intensity of urban supply chains by interviewing shop owners and mapping delivery routes in global megacities in Mexico, Colombia, Brazil and the U.S. The resulting information has been used with analytical models to help both city officials and companies to design better city logistics policies and operations….”
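The export step such a platform performs — turning flat, field-collected survey rows into a format municipal GIS systems can ingest — can be sketched as follows. The record fields and the `to_geojson` helper are hypothetical, invented for illustration; this is not LocalData’s actual API or data model:

```python
import json

# Hypothetical neighborhood survey records of the kind a mapping tool
# might collect; parcel IDs, conditions, and coordinates are invented.
records = [
    {"parcel": "A-101", "condition": "vacant", "lon": -83.05, "lat": 42.33},
    {"parcel": "A-102", "condition": "occupied", "lon": -83.04, "lat": 42.34},
]

def to_geojson(rows):
    """Convert flat survey rows into a GeoJSON FeatureCollection,
    a common interchange format for municipal GIS systems."""
    features = [
        {
            "type": "Feature",
            "geometry": {"type": "Point",
                         "coordinates": [r["lon"], r["lat"]]},
            "properties": {"parcel": r["parcel"],
                           "condition": r["condition"]},
        }
        for r in rows
    ]
    return {"type": "FeatureCollection", "features": features}

print(json.dumps(to_geojson(records), indent=2))
```

Because GeoJSON is a plain-text standard, output like this can be loaded directly into common mapping tools, which is what makes community-generated data easy to integrate with existing government systems.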
Using Social Media in Rulemaking: Possibilities and Barriers
New paper by Michael Herz (Cardozo Legal Studies Research Paper No. 417): “Web 2.0” is characterized by interaction, collaboration, non-static web sites, use of social media, and creation of user-generated content. In theory, these Web 2.0 tools can be harnessed not only in the private sphere but as tools for an e-topia of citizen engagement and participatory democracy. Notice-and-comment rulemaking is the pre-digital government process that most approached (while still falling far short of) the e-topian vision of public participation in deliberative governance. The notice-and-comment process for federal agency rulemaking has now changed from a paper process to an electronic one. Expectations for this switch were high; many anticipated a revolution that would make rulemaking not just more efficient, but also more broadly participatory, democratic, and dialogic. In the event, the move online has not produced a fundamental shift in the nature of notice-and-comment rulemaking. At the same time, the online world in general has come to be increasingly characterized by participatory and dialogic activities, with a move from static, text-based websites to dynamic, multi-media platforms with large amounts of user-generated content. This shift has not left agencies untouched. To the contrary, agencies at all levels of government have embraced social media – by late 2013 there were over 1,000 registered federal agency Twitter feeds and over 1,000 registered federal agency Facebook pages, for example – but these have been used much more as tools for broadcasting the agency’s message than for dialogue or obtaining input. All of which invites the question of whether agencies could or should directly rely on social media in the rulemaking process.
This study reviews how federal agencies have been using social media to date and considers the practical and legal barriers to using social media in rulemaking, not just to raise the visibility of rulemakings, which is certainly happening, but to gather relevant input and help formulate the content of rules.
The study was undertaken for the Administrative Conference of the United States and is the basis for a set of recommendations adopted by ACUS in December 2013. Those recommendations overlap with but are not identical to the recommendations set out herein.”
Open Data in Action
- Brightscope, a San Diego-based company that leverages data from the Department of Labor, the Securities and Exchange Commission, and the Census Bureau to rate consumers’ 401(k) plans objectively on performance and fees, so companies can choose better plans and employees can make better decisions about their retirement options.
- AllTuition, a Chicago-based startup that provides services—powered by data from the Department of Education on Federal student financial aid programs and student loans—to help students and parents manage the financial-aid process for college, in part by helping families keep track of deadlines and walking them through the required forms.
- Archimedes, a San Francisco healthcare modeling and analytics company that leverages Federal open data from the National Institutes of Health, the Centers for Disease Control and Prevention, and the Centers for Medicare and Medicaid Services to provide doctors more effective individualized treatment plans and to enable patients to make informed health decisions.
See also:
Open Government Data: Companies Cash In