Brainlike Computers, Learning From Experience


The New York Times: “Computers have entered the age when they are able to learn from their own mistakes, a development that is about to turn the digital world on its head.

The first commercial version of the new kind of computer chip is scheduled to be released in 2014. Not only can it automate tasks that now require painstaking programming — for example, moving a robot’s arm smoothly and efficiently — but it can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.

The new computing approach, already in use by some large technology companies, is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.

In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That can hold enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.

Designers say the computing style can clear the way for robots that can safely walk and drive in the physical world, though a thinking or conscious computer, a staple of science fiction, is still far off on the digital horizon.

“We’re moving from engineering computing systems to something that has many of the characteristics of biological computing,” said Larry Smarr, an astrophysicist who directs the California Institute for Telecommunications and Information Technology, one of many research centers devoted to developing these new kinds of computer circuits.

Conventional computers are limited by what they have been programmed to do. Computer vision systems, for example, only “recognize” objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation.

But last year, Google researchers were able to get a machine-learning algorithm, known as a neural network, to perform an identification task without supervision. The network scanned a database of 10 million images, and in doing so trained itself to recognize cats.
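The idea of a network "training itself" without labels can be sketched with a toy autoencoder: a network that learns to compress and reconstruct its inputs using no labels at all. Everything below (data, layer sizes, training loop) is illustrative only, not Google's actual system, which was vastly larger.

```python
import numpy as np

# Toy autoencoder: learns to compress inputs into a small hidden layer
# and reconstruct them, with no labels -- a highly simplified stand-in
# for the unsupervised feature learning described in the article.
rng = np.random.default_rng(0)
n_inputs, n_hidden = 16, 4            # "images" here are 16-pixel vectors
X = rng.random((200, n_inputs))       # unlabeled training data

W1 = rng.normal(0.0, 0.1, (n_inputs, n_hidden))
W2 = rng.normal(0.0, 0.1, (n_hidden, n_inputs))

def forward(X, W1, W2):
    H = np.tanh(X @ W1)               # compressed internal representation
    return H, H @ W2                  # reconstruction of the input

_, X_hat = forward(X, W1, W2)
loss_before = np.mean((X_hat - X) ** 2)

lr = 0.05
for _ in range(500):                  # plain gradient descent on squared error
    H, X_hat = forward(X, W1, W2)
    err = X_hat - X
    grad_W2 = H.T @ err / len(X)
    grad_H = (err @ W2.T) * (1.0 - H ** 2)   # backprop through tanh
    grad_W1 = X.T @ grad_H / len(X)
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

_, X_hat = forward(X, W1, W2)
loss_after = np.mean((X_hat - X) ** 2)
```

After training, reconstruction error drops even though the network was never told what any input "is"; the hidden layer has discovered regularities on its own, which is the essence of the unsupervised approach the article describes.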

In June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately.

The new approach, used in both hardware and software, is being driven by the explosion of scientific knowledge about the brain. Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, said that is also its limitation, as scientists are far from fully understanding how brains function.”

Rethinking Why People Participate


Tiago Peixoto: “Having a refined understanding of what leads people to participate is one of the main concerns of those working with citizen engagement. But particularly when it comes to participatory democracy, that understanding is only partial and, most often, the cliché “more research is needed” is definitely applicable. This is so for a number of reasons, four of which are worth noting here.

  1. The “participatory” label is applied to greatly varied initiatives, raising obvious methodological challenges for comparative research and cumulative learning. For instance, while both participatory budgeting and online petitions can be roughly categorized as “participatory” processes, they are entirely different in terms of fundamental aspects such as their goals, institutional design and expected impact on decision-making.
  2. The fact that many participatory initiatives are conceived as “pilots” or one-off events gives researchers little time to understand the phenomenon, come up with sound research questions, and test different hypotheses over time.  The “pilotitis” syndrome in the tech4accountability space is a good example of this.
  3. When designing and implementing participatory processes, in the face of budget constraints the first victims are documentation, evaluation and research. Apart from a few exceptions, this leads to a scarcity of data and basic information that undermines even the most heroic “archaeological” efforts of retrospective research and evaluation (a far from ideal approach).
  4. The semantic extravaganza that currently plagues the field of citizen engagement, technology and open government makes cumulative learning all the more difficult.

Precisely for the opposite reasons, our knowledge of electoral participation is in better shape. First, despite the differences between elections, comparative work is relatively easy, as attested by the high number of cross-country studies in the field. Second, the fact that elections (for the most part) are repeated regularly and follow a similar design enables the refinement of hypotheses and research questions over time, as well as specific time-related analysis (see an example here [PDF]). Third, when compared to the funds allocated to research on participatory initiatives, the relative amount of resources channeled into electoral studies and voting behavior is significantly higher. Here I am referring not only to academic work but also to the substantial resources invested by the private sector and parties toward a better understanding of elections and voting behavior. This includes a growing body of knowledge generated by get-out-the-vote (GOTV) research, with fascinating experimental evidence from interventions that seek to increase participation in elections (e.g. door-to-door campaigns, telemarketing, e-mail). Add to that the wealth of electoral data that is available worldwide (in machine-readable formats) and you have some pretty good knowledge to tap into. Finally, both conceptually and terminologically, the field of electoral studies is much more consistent than the field of citizen engagement, which, in the long run, tends to drastically impact how knowledge of a subject evolves.
These reasons should be sufficient to capture the interest of those who work with citizen engagement. While the extent to which the knowledge from the field of electoral participation can be transferred to non-electoral participation remains an open question, it should at least provide citizen engagement researchers with cues and insights that are very much worth considering…”

Can a Better Taxonomy Help Behavioral Energy Efficiency?


Article at GreenTechEfficiency: “Hundreds of behavioral energy efficiency programs have sprung up across the U.S. in the past five years, but the effectiveness of the programs — both in terms of cost savings and reduced energy use — can be difficult to gauge.
Of nearly 300 programs, a new report from the American Council for an Energy-Efficient Economy was able to accurately calculate the cost of saved energy from only ten programs….
To help utilities and regulators better define and measure behavioral programs, ACEEE offers a new taxonomy of utility-run behavior programs that breaks them into three major categories:
Cognition: Programs that focus on delivering information to consumers.  (This includes general communication efforts, enhanced billing and bill inserts, social media and classroom-based education.)
Calculus: Programs that rely on consumers making economically rational decisions. (This includes real-time and asynchronous feedback, dynamic pricing, games, incentives and rebates and home energy audits.)
Social interaction: Programs whose key drivers are social interaction and belonging. (This includes community-based social marketing, peer champions, online forums and incentive-based gifts.)
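The three categories above amount to a simple classification scheme; a minimal sketch of it as a lookup table (category names from the report, example program types taken from the parenthetical lists) might look like:

```python
from typing import Optional

# ACEEE's three-part taxonomy of behavior-based efficiency programs,
# expressed as a lookup table. Program-type strings are paraphrased
# from the article's examples, not an official controlled vocabulary.
TAXONOMY = {
    "cognition": {
        "general communication", "enhanced billing", "bill inserts",
        "social media", "classroom education",
    },
    "calculus": {
        "real-time feedback", "asynchronous feedback", "dynamic pricing",
        "games", "incentives and rebates", "home energy audits",
    },
    "social interaction": {
        "community-based social marketing", "peer champions",
        "online forums", "incentive-based gifts",
    },
}

def categorize(program_type: str) -> Optional[str]:
    """Return the taxonomy category for a program type, if listed."""
    for category, programs in TAXONOMY.items():
        if program_type in programs:
            return category
    return None
```

A "stacked" program, in the report's terms, would be one that deliberately combines entries from more than one of these category sets.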
….
While the report was mostly preliminary, it also offered four steps forward for utilities that want to make the most of behavioral programs.
Stack. The types of programs might fit into three broad categories, but judiciously blending cues based on emotion, reason and social interaction into programs is key, according to ACEEE. Even though the report recommends stacked programs that have a multi-modal approach, the authors acknowledge, “This hypothesis will remain untested until we see more stacked programs in the marketplace.”
Track. Just like other areas of grid modernization, utilities need to rethink how they collect, analyze and report the data coming out of behavioral programs. This should include metrics that go beyond just energy savings.
Share. As with other utility programs, behavior-based energy efficiency programs can be improved upon if utilities share results and if reporting is standardized across the country instead of varying by state.
Coordinate. Sharing is only the first step. Programs that merge water, gas and electricity efficiency can often gain better results than siloed programs. That approach, however, requires a coordinated effort by regional utilities and a change to how programs are funded and evaluated by regulators.”

Crowdsourcing drug discovery: Antitumour compound identified


David Bradley in Spectroscopy.now: “American researchers have used “crowdsourcing” – the cooperation of a large number of interested non-scientists via the internet – to help them identify a new fungus. The species produces unusual metabolites, which were isolated and characterized with the help of vibrational circular dichroism (VCD). One compound shows potential antitumour activity.
So far, a mere 7 percent of the more than 1.5 million species of fungi thought to exist have been identified and an even smaller fraction of these have been the subject of research seeking bioactive natural products. …Robert Cichewicz of the University of Oklahoma, USA, and his colleagues hoped to remedy this situation by working with a collection of several thousand fungal isolates from three regions: Arctic Alaska, tropical Hawaii, and subtropical to semiarid Oklahoma. Collaborator Susan Mooberry of the University of Texas at San Antonio carried out biological assays on many fungal isolates looking for antitumor activity among the metabolites in Cichewicz’s collection. A number of interesting substances were identified…
However, the researchers realized quickly enough that the efforts of a single research team were inadequate if samples representing the immense diversity of the thousands of fungi they hoped to test were to be obtained and tested. They thus turned to the help of citizen scientists in a “crowdsourcing” initiative. In this approach, lay people with an interest in science, and even fellow scientists in other fields, were recruited to collect and submit soil from their gardens.
As the samples began to arrive, the team quickly found among them a previously unknown fungal strain – a Tolypocladium species – growing in a soil sample from Alaska. Colleague Andrew Miller of the University of Illinois identified this new fungus, which proved highly responsive to changes in its laboratory growth conditions, producing new compounds as those conditions varied. Moreover, extraction of the active chemicals from the isolate revealed a unique metabolite that showed significant antitumour activity in laboratory tests. The team suggests that this novel substance may represent a valuable new approach to cancer treatment because it blocks certain biochemical mechanisms by which cancers develop resistance to conventional drugs…
The researchers point out the essential roles that citizen scientists can play. “Many of the groundbreaking discoveries, theories, and applied research during the last two centuries were made by scientists operating from their own homes,” Cichewicz says. “Although much has changed, the idea that citizen scientists can still participate in research is a powerful means for reinvigorating the public’s interest in science and making important discoveries,” he adds.”

6 New Year’s Strategies for Open Data Entrepreneurs


The GovLab’s Senior Advisor Joel Gurin: “Open Data has fueled a wide range of startups, including consumer-focused websites, business-to-business services, data-management tech firms, and more. Many of the companies in the Open Data 500 study are new ones like these. New Year’s is a classic time to start new ventures, and with 2014 looking like a hot year for Open Data, we can expect more startups using this abundant, free resource. For my new book, Open Data Now, I interviewed dozens of entrepreneurs and distilled six of the basic strategies that they’ve used.
1. Learn how to add value to free Open Data. We’re seeing an inversion of the value proposition for data. It used to be that whoever owned the data—particularly Big Data—had greater opportunities than those who didn’t. While this is still true in many areas, it’s also clear that successful businesses can be built on free Open Data that anyone can use. The value isn’t in the data itself but rather in the analytical tools, expertise, and interpretation that’s brought to bear. One oft-cited example: The Climate Corporation, which built a billion-dollar business out of government weather and satellite data that’s freely available for use.
2. Focus on big opportunities: health, finance, energy, education. A business can be built on just about any kind of Open Data. But the greatest number of startup opportunities will likely be in the four big areas where the federal government is focused on Open Data release. Last June’s Health Datapalooza showcased the opportunities in health. Companies like Opower in energy, GreatSchools in education, and Calcbench, SigFig, and Capital Cube in finance are examples in these other major sectors.
3. Explore choice engines and Smart Disclosure apps. Smart Disclosure – releasing data that consumers can use to make marketplace choices – is a powerful tool that can be the basis for a new sector of online startups. No one, it seems, has quite figured out how to make this form of Open Data work best, although sites like CompareTheMarket in the UK may be possible models. Business opportunities await anyone who can find ways to provide these much-needed consumer services. One example: Kayak, which competed in the crowded travel field by providing a great consumer interface, and which was sold to Priceline for $1.8 billion last year.
4. Help consumers tap the value of personal data. In a privacy-conscious society, more people will be interested in controlling their personal data and sharing it selectively for their own benefit. The value of personal data is just being recognized, and opportunities remain to be developed. There are business opportunities in setting up and providing “personal data vaults” and more opportunity in applying the many ways they can be used. Personal and Reputation.com are two leaders in this field.
5. Provide new data solutions to governments at all levels. Government datasets at the federal, state, and local level can be notoriously difficult to use. The good news is that these governments are now realizing that they need help. Data management for government is a growing industry, as Socrata, OpenGov, 3RoundStones, and others are finding, while companies like Enigma.io are turning government data into a more usable resource.
6. Look for unusual Open Data opportunities. Building a successful business by gathering data on restaurant menus and recipes is not an obvious route to success. But it’s working for Food Genius, whose founders showed a kind of genius in tapping an opportunity others had missed. While the big areas for Open Data are becoming clear, there are countless opportunities to build more niche businesses that can still be highly successful. If you have expertise in an area and see a customer need, there’s an increasingly good chance that the Open Data to help meet that need is somewhere to be found.”

The Postmodernity of Big Data


Essay in the New Inquiry: “Big Data fascinates because its presence has always been with us in nature. Each tree, drop of rain, and the path of each grain of sand, both responds to and creates millions of data points, even on a short journey. Nature is the original algorithm, the most efficient and powerful. Mathematicians since the ancients have looked to it for inspiration; techno-capitalists now look to unlock its mysteries for private gain. Playing God has become all the more brisk and profitable thanks to cloud computing.
But beyond economic motivations for Big Data’s rise, are there also epistemological ones? Has Big Data come to try to fill the vacuum of certainty left by postmodernism? Does data science address the insecurities of postmodern thought?
It turns out that trying to explain Big Data is like trying to explain postmodernism. Neither can be summarized effectively in a phrase, despite their champions’ efforts. Broad epistemological developments are compressed into cursory, ex post facto descriptions. Attempts to define Big Data, such as IBM’s marketing copy, which promises “insights gleaned” from “enterprise data warehouses that implement massively parallel processing,” “real-time scalability” and “parsing structured and unstructured sources,” focus on its implementation at the expense of its substance, decontextualizing it entirely. Similarly, definitions of postmodernism, like art critic Thomas McEvilley’s claim that it is “a renunciation that involves recognition of the relativity of the self—of one’s habit systems, their tininess, silliness, and arbitrariness” are accurate but abstract to the point of vagueness….
Big Data might come to be understood as Big Postmodernism: the period in which the influx of unstructured, non-teleological, non-narrative inputs ceased to destabilize the existing order and was instead finally mastered and processed by a sufficiently complex, distributed, and pluralized algorithmic regime. If Big Data has a skepticism built in, how it differs from the skepticism of postmodernism is perhaps impossible yet to comprehend”.

The Effective Use of Crowdsourcing in E-Governance


Paper by Jayakumar Sowmya and Hussain Shafiq Pyarali: “The rise of the Web 2.0 paradigm has empowered Internet users to share information and generate content on social networking and media sharing platforms such as wikis and blogs. The trend of harnessing the wisdom of the public using Web 2.0 distributed networks through open calls is termed ‘Crowdsourcing’. In addition to businesses, this powerful idea of using collective intelligence, or the ‘wisdom of the crowd’, applies to different situations, such as governments and non-profit organizations, which have started utilizing crowdsourcing as an essential problem-solving tool. In addition, widespread and easy access to technologies such as the Internet, mobile phones and other communication devices has resulted in exponential growth in the use of crowdsourcing for government policy advocacy, e-democracy and e-governance during the past decade. However, utilizing the collective intelligence and efforts of the public to find solutions to real-life problems using Web 2.0 tools does come with its share of associated challenges and limitations. This paper aims at identifying and examining the value-adding strategies that contribute to the success of crowdsourcing in e-governance. Qualitative case study analysis and empathic design methodology are employed to evaluate the effectiveness of the identified strategic and functional components, by analyzing the characteristics of some of the notable cases of crowdsourcing in e-governance, and the findings are tabulated and discussed. The paper concludes with the limitations and the implications for future research”.

Open data policies, their implementation and impact: A framework for comparison


Paper by A Zuiderwijk, M Janssen in the Government Information Quarterly: “In developing open data policies, governments aim to stimulate and guide the publication of government data and to gain advantages from its use. Currently there is a multiplicity of open data policies at various levels of government, whereas very little systematic and structured research has been done on the issues that are covered by open data policies, their intent and actual impact. Furthermore, no suitable framework for comparing open data policies is available, as open data is a recent phenomenon and is thus in an early stage of development. In order to help bring about a better understanding of the common and differentiating elements in the policies and to identify the factors affecting the variation in policies, this paper develops a framework for comparing open data policies. The framework includes the factors of environment and context, policy content, performance indicators and public values. Using this framework, seven Dutch governmental policies at different government levels are compared. The comparison shows both similarities and differences among open data policies, providing opportunities to learn from each other’s policies. The findings suggest that current policies are rather inward looking, and that open data policies can be improved by collaborating with other organizations, focusing on the impact of the policy, stimulating the use of open data, and looking at the need to create a culture in which publicizing data is incorporated in daily working processes. The findings could contribute to the development of new open data policies and the improvement of existing open data policies.”

People Powered Social Innovation: The Need for Citizen Engagement


Paper for the Lien Centre for Social Innovation (Singapore): “Citizen engagement is widely regarded as critical to the development and implementation of social innovation. What is citizen engagement? What does it mean in the context of social innovation? Julie Simon and Anna Davies discuss the importance as well as the implications of engaging the ground…”

A Bottom-Up Smart City?


Alicia Rouault at Data-Smart City Solutions: “America’s shrinking cities face a tide of disinvestment, abandonment, vacancy, and a shift toward deconstruction and demolition followed by strategic reinvestment, rightsizing, and a host of other strategies designed to renew once-great cities. Thriving megacity regions are experiencing rapid growth in population, offering a different challenge for city planners to redefine density, housing, and transportation infrastructure. As cities shrink and grow, policymakers are increasingly called to respond to these changes by making informed, data-driven decisions. What is the role of the citizen in this process of collecting and understanding civic data?
Writing for Forbes in “Open Sourcing the Neighborhood,” Saskia Sassen, professor of sociology at Columbia University, calls for “open source urbanism” as an antidote to the otherwise top-down smart city movement. This form of urbanism involves opening traditional verticals of information within civic and governmental institutions. Citizens can engage with and understand the logic behind decisions by exploring newly opened administrative data. Beyond opening these existing datasets, Sassen points out that citizen experts hold invaluable institutional memory that can serve as an alternate and legitimate resource for policymakers, economists, and urban planners alike.
In 2012, we created a digital platform called LocalData to address the production and use of community-generated data in a municipal context. LocalData is a digital mapping service used globally by universities, non-profits, and municipal governments to gather and understand data at a neighborhood scale. In contrast to traditional Census or administrative data, which is produced by a central agency and collected infrequently, our platform provides a simple method for both community-based organizations and municipal employees to gather real-time data on project-specific indicators: property conditions, building inspections, environmental issues or community assets. Our platform then visualizes data and exports it into formats integrated with existing systems in government to seamlessly provide accurate and detailed information for decision makers.
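The kind of neighborhood-scale, project-specific record described above can be sketched as a GeoJSON feature, which also shows how such data can flow into existing municipal GIS systems. The field names here are invented for illustration and are not LocalData's actual schema.

```python
import json

# A hypothetical property-condition survey point, encoded as GeoJSON.
# Coordinates and attribute names are illustrative only.
survey_point = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [-83.0458, 42.3314]},
    "properties": {
        "indicator": "property-condition",   # the project-specific indicator
        "condition": "vacant",
        "fire_damage": True,
        "surveyed_at": "2013-05-14T10:32:00Z",
    },
}

collection = {"type": "FeatureCollection", "features": [survey_point]}
geojson = json.dumps(collection)             # export for downstream GIS tools
```

Because GeoJSON is a widely supported interchange format, an export like this can be loaded directly into most mapping and analysis tools a city already runs.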
LocalData began as a project in Detroit, Michigan, where the city was tackling a very real lack of standard, updated, and consistent information on the quality and status of vacant and abandoned properties. Many of these properties were owned by the city and county due to high foreclosure rates. One of Detroit’s strategies for combating crime and stabilizing neighborhoods is to demolish property in a targeted fashion. This strategy is as much a political win as an effective way to curb the secondary effects of vacancy: crime, drug use, and arson. Using LocalData, the city mapped critical corridors of emergent commercial property as an analysis tool for where to place investment, and documented thousands of vacant properties to understand where to target demolition.
Vacancy is not unique to the Midwest. Following our work with the Detroit Mayor’s office and planning department, LocalData has been used in dozens of other cities in the U.S. and abroad. Currently the Smart Chicago Collaborative is using LocalData to conduct a similar audit of vacant and abandoned property in southwest Chicago. Though an effective tool for capturing building-specific information, LocalData has also been used to capture behavior and the movement of goods. The MIT Megacities Logistics Lab has used LocalData to map and understand the intensity of urban supply chains by interviewing shop owners and mapping delivery routes in global megacities in Mexico, Colombia, Brazil and the U.S. The resulting information has been used with analytical models to help both city officials and companies design better city logistics policies and operations….”