Peter Ryder and Shaun Abrahamson in Innovation Excellence: “When we open up the innovation process to talent outside our organization we are trying to channel the abilities of a lot of people we don’t know, in the hope that a few of them have ideas we need. Crowdsourcing is the term most closely associated with the process. But over the last decade, many organizations have been not only sourcing ideas from crowds but also getting feedback on ideas….
We call the intersection of lower transaction costs and brainstorming at scale enabled by online connections crowdstorming.
Getting ideas, getting feedback, identifying talent to work with, filtering ideas, earning media, enabling stakeholders to select ideas to change the organization/stakeholder relationship — the crowd’s role and the crowdstorming process have become more complex as they have expanded to involve external talent in new ways. …
Seventy-five years ago, the British economist Ronald Coase suggested that high transaction costs (the overhead to find, recruit, negotiate, and contract with talent) required organizations to bring the best talent in house. While Coase’s equation still holds true, the Internet has allowed organizations to revisit the conditions under which they want and need full-time employees. When we have the ability to efficiently tap resources anywhere, anytime, at low cost, new opportunities emerge.”
Complex Algorithm Auto-Writes Books, Could Transform Science
Mashable: “Could a sophisticated algorithm be the future of science? One innovative economist thinks so.
Phil Parker, who holds a doctorate in business economics from the Wharton School, has built an algorithm that auto-writes books. Now he’s taking that model and applying it to loftier goals than simply penning periodicals: namely, medicine and forensics. Working with professors and researchers at NYU, Parker is trying to decode complex genetic structures and find cures for diseases. And he’s doing it with the help of man’s real best friend: technology.
Parker’s recipe is a complex computer program that mimics formulaic writing….
Parker’s been at this for years. His formula, originally used for printing, is able to churn out entire books in minutes. It’s similar to the work being done by Narrative Science and StatSheet, except those companies are known for short form auto-writing for newspapers. Parker’s work is much longer, focusing on obscure non-fiction and even poetry.
It’s not creative writing, though, and Parker isn’t interested in introspection, exploring emotion or storytelling. He’s interested in exploiting reproducible patterns — that’s how his algorithm can find, collect and “write” so quickly. And how he can apply that model to other disciplines, like science.
Parker’s method seems to be a success; indeed, his ICON Group International, Inc., has auto-written so many books that Parker has lost count. But this isn’t the holy grail of literature, he insists. Instead, he says, his work is a play on mechanizing processes to create a simple formula. And he thinks that “finding new knowledge structures within data” stretches far beyond print.”
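Parker’s actual system is proprietary and its details are not public, but the core idea of formulaic writing that he describes can be sketched very simply: slot structured data into reusable sentence templates. A hypothetical toy version (all names, products, and figures below are invented):

```python
import random

# Toy illustration of template-driven, formulaic writing. Parker's real
# system is far more sophisticated; this only shows the basic mechanism:
# a data record plus a library of sentence templates yields prose.

TEMPLATES = [
    "The {year} market for {product} in {country} was estimated at {value}.",
    "Demand for {product} in {country} reached {value} in {year}.",
]

def write_sentence(record):
    """Fill a randomly chosen template with fields from a data record."""
    return random.choice(TEMPLATES).format(**record)

# One invented data record; a real system would iterate over thousands.
record = {"year": 2012, "product": "frozen shrimp",
          "country": "Denmark", "value": "$4.2 million"}
print(write_sentence(record))
```

Scaled up over large datasets and many template libraries, the same mechanism can emit book-length output in minutes, which is why it suits obscure, data-heavy non-fiction rather than creative writing.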
Techs and the City
Alec Appelbaum, who teaches at Pratt Institute, in The New York Times: “This spring New York City is rolling out its much-ballyhooed bike-sharing program, which relies on a sophisticated set of smartphone apps and other digital tools. The city isn’t alone: across the country, municipalities are buying ever more complicated technological “solutions” for urban life.
But higher tech is not always essential tech. Cities could instead be making savvier investments in cheaper technology that may work better to stoke civic involvement than the more complicated, expensive products being peddled by information-technology developers….
To be sure, big tech can zap some city weaknesses. According to I.B.M., its predictive-analysis technology, which examines historical data to estimate the next crime hot spots, has helped Memphis lower its violent crime rate by 30 percent.
But many problems require a decidedly different approach. Take the seven-acre site in Lower Manhattan called the Seward Park Urban Renewal Area, where 1,000 mixed-income apartments are set to rise. A working-class neighborhood that fell to bulldozers in 1969, it stayed bare as co-ops nearby filled with affluent families, including my own.
In 2010, with the city ready to invite developers to bid for the site, long-simmering tensions between nearby public-housing tenants and wealthier dwellers like me turned suddenly — well, civil.
What changed? Was it some multimillion-dollar “open democracy” platform from Cisco, or a Big Data program to suss out the community’s real priorities? Nope. According to Dominic Pisciotta Berg, then the chairman of the local community board, it was plain old e-mail, and the dialogue it facilitated. “We simply set up an e-mail box dedicated to receiving e-mail comments” on the renewal project, and organizers would then “pull them together by comment type and then consolidate them for display during the meetings,” he said. “So those who couldn’t be there had their voices considered and those who were there could see them up on a screen and adopted, modified or rejected.”
Through e-mail conversations, neighbors articulated priorities — permanently affordable homes, a movie theater, protections for small merchants — that even a supercomputer wouldn’t necessarily have identified in the data.
The point is not that software is useless. But like anything else in a city, it’s only as useful as its ability to facilitate the messy clash of real human beings and their myriad interests and opinions. And often, it’s the simpler software, the technology that merely puts people in contact and steps out of the way, that works best.”
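The low-tech workflow Berg describes (tagging e-mailed comments by type, then consolidating them for on-screen display at the meeting) is essentially a grouping operation. A minimal sketch, with invented comment types and texts:

```python
from collections import defaultdict

# Hypothetical sketch of the Seward Park comment workflow: organizers tag
# each e-mailed comment with a type, then consolidate by type for display.
# The categories and comment texts below are invented for illustration.

comments = [
    ("affordable housing", "Keep a share of units permanently affordable."),
    ("retail", "Protect the small merchants already near the site."),
    ("affordable housing", "Mix incomes within buildings, not across towers."),
    ("amenities", "The neighborhood still has no movie theater."),
]

def consolidate(comments):
    """Group comments by type for on-screen display at a meeting."""
    grouped = defaultdict(list)
    for comment_type, text in comments:
        grouped[comment_type].append(text)
    return dict(grouped)

for comment_type, texts in consolidate(comments).items():
    print(f"{comment_type} ({len(texts)}):")
    for text in texts:
        print(f"  - {text}")
```

The point of the sketch is how little machinery is involved: a shared inbox plus simple grouping was enough to let absent residents be heard.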
Principles and Practices for a Federal Statistical Agency
New National Academies Publication: “Publicly available statistics from government agencies that are credible, relevant, accurate, and timely are essential for policy makers, individuals, households, businesses, academic institutions, and other organizations to make informed decisions. Moreover, the effective operation of a democratic system of government depends on the unhindered flow of statistical information to its citizens.
In the United States, federal statistical agencies in cabinet departments and independent agencies are the governmental units whose principal function is to compile, analyze, and disseminate information for such statistical purposes as describing population characteristics and trends, planning and monitoring programs, and conducting research and evaluation. The work of these agencies is coordinated by the U.S. Office of Management and Budget. Statistical agencies may acquire information not only from surveys or censuses of people and organizations, but also from such sources as government administrative records, private-sector datasets, and Internet sources that are judged of suitable quality and relevance for statistical use. They may conduct analyses, but they do not advocate policies or take partisan positions. Statistical purposes for which they provide information relate to descriptions of groups and exclude any interest in or identification of an individual person, institution, or economic unit.
Four principles are fundamental for a federal statistical agency: relevance to policy issues, credibility among data users, trust among data providers, and independence from political and other undue external influence. Principles and Practices for a Federal Statistical Agency: Fifth Edition explains these four principles in detail.”
Mary Meeker’s Internet Trends Report
AllThingsD: For the second year in a row, Mary Meeker is unveiling her now famed Internet Trends report at the D11 Conference.
Meeker, the Kleiner Perkins Caufield & Byers partner, highlights growth of Internet usage and other activities on mobile devices and updates that now infamous gap between mobile internet usage and mobile monetization.
But there are many new additions. Among them are the rise of wearable tech as perhaps the next big tech cycle of the coming decade and a look at how Americans’ online sharing habits compare to the rest of the world.
Here’s Meeker’s full presentation:
KPCB Internet Trends 2013 from Kleiner Perkins Caufield & Byers
The Internet as Politicizing Instrument
New Issue of Transformations (Editorial): “This issue of Transformations presents essays responding to Marcus Breen’s recent book Uprising: The Internet’s Unintended Consequences. Breen asks whether the Internet can become a politicising instrument for the new online proletariat – the individualised users isolated by the monitor screen. He asks “if the proletariat can use the Internet, is it freed from the moral and social constraints of the past that were imposed by conventional media and its regulation of the public space?” (32) This question raises further issues. Does this freedom translate into an emancipatory politics where the proletariat is able to pursue its own ends, or does it simply reproduce the power relation between the user-subject and the Internet and those who control and manage it? The articles in this issue respond in various ways to these questions.
Marcus Breen’s own article “The Internet and Privatism: Reconstructing the Monitor Space” makes a case for privatism – the restriction of subjective life to isolated or privatised experience, especially in relation to the computer monitor – as the new modality of meaning making in the Internet era. Using approaches associated with cultural and media studies, the paper traces the way the Internet has influenced the shift in the culture towards values associated with the confluence of ideas around the private, best described by privatism.
Fidele Vlavo’s article investigates the central discourses that have constructed the internet as a democratic and public environment removed from state and corporate control. The aim is to call attention to the issues that have limited the development of the internet as a tool for socio-political empowerment. The paper first retraces the early discursive constructions that insist on representing the internet as a decentralised and open structure. It also questions the role played by the digerati (or cyber elite) in the formulation of contradictory demands for public interests, self-governance, and entrepreneurial rights. Finally, it examines the emergence of two early virtual communities and their attempts to facilitate free speech and self-regulation. In the context of activists advocating freedom of expression and government institutions re-organizing legislation to control the Internet, the examination of these discourses provides a useful starting point for the (re)assessment of the potential of direct online mobilization.
Emit Snake-Beings’s article examines the limits of the Internet as a politicising instrument by showing how Internet users are subject to the controls of the search engine algorithm, managed by elite groups whose purpose is to reproduce themselves in terms of neo-liberal capitalism. Invoking recent political events in the Middle East and in London in which a wired proletariat sought to resist and overturn political authorities through Internet communication, Snake-Beings argues that such events are compromised by the fact that they owe their possibility to Internet providers and their commercial imperatives. Snake-Beings’s article, as well as most of the other articles in this issue, offers a timely reminder not only of the possibilities, but of the limits of the Internet as a politicising instrument for progressive, emancipatory politics.
Frances Shaw’s paper concerns the way in which the logic of surveillance operates in contested sites in cities where live coverage of demonstrations against capitalism leads to confrontation between demonstrators and police. Through a detailed account of the “Occupy Sydney” demonstration in 2011, Shaw shows how both demonstrators and police engaged in tactics of surveillance and resistance to counter each other’s power and authority. In an age of instant communication and global surveillance, freedom of movement and freedom from surveillance in public spaces are drawn into the logics of power mediated by mobile phones and computer-based communication technology.
Karyl Ketchum’s paper offers detailed analysis of two Internet sites to show how the proletarianisation of the Internet is gendered in terms of male interests. Picking up on Breen’s argument that Internet proletarianisation leads to an open system that “supports both anything and anyone,” she argues that, in the domain of online pornography, this new-found freedom turns out to be “the power of computer analytics to harness and hone the shifting meanings of white Western Enlightenment masculinities in new globalising postcolonial contexts, economies and geopolitical struggles.” Furthermore, Ketchum shows how this default to male interests was also at work in American reporting of the Arab Spring revolutions in Egypt and other Middle Eastern countries. The YouTube video posted by a young Egyptian woman, Asmaa Mahfouz, which sparked the revolution in Egypt that eventually overthrew the Mubarak government, was not given due coverage by the Western media, so that “women like Mahfouz all but disappear from Western accounts of the Arab Spring.”
Lidén and Giritli Nygren’s paper addresses the challenges that a digital society poses to theories of the political sphere. It suggests that these challenges are most evident at the intersection between understandings of technology, performativities, and politics, combining empirical closeness with abstract understandings of socio-political and cultural contexts. The paper exemplifies this by reporting on a study of online citizen dialogue in the making, in this case concerning school planning in a Swedish municipality. Applying these theoretical perspectives to the case yields some key findings. The technological design is seen to restrict the potential dialogue, as outlined in different themes where the participants enact varying positions—taxpayers, citizen consumers, or local residents. The political analysis reveals a dialogue that lacks both polemic and public perspectives and is instead characterized by the expression of special interests. Together, these perspectives can provide a foundation for developing and applying such theories in a digital society.
The Internet and Privatism: Reconstructing the Monitor Space (Marcus Breen)
The Digital Hysterias of Decentralisation, Entrepreneurship and Open Community (Fidele Vlavo)
From Ideology to Algorithm: the Opaque Politics of the Internet (Emit Snake-Beings)
“Walls of Seeing”: Protest Surveillance, Embodied Boundaries, and Counter-Surveillance at Occupy Sydney (Frances Shaw)
Gendered Uprisings: Desire, Revolution, and the Internet’s “Unintended Consequences” (Karyl E. Ketchum)
Analysing the Intersections between Technology, Performativity, and Politics: the Case of Local Citizen Dialogue (Gustav Lidén and Katarina Giritli Nygren)”
Disruptive technologies: Advances that will transform life, business, and the global economy
New Report by McKinsey Global Institute: “Disruptive technologies: Advances that will transform life, business, and the global economy, a report from the McKinsey Global Institute, cuts through the noise and identifies 12 technologies that could drive truly massive economic transformations and disruptions in the coming years. The report also looks at exactly how these technologies could change our world, as well as their benefits and challenges, and offers guidelines to help leaders from businesses and other institutions respond.
We estimate that, together, applications of the 12 technologies discussed in the report could have a potential economic impact between $14 trillion and $33 trillion a year in 2025. This estimate is neither predictive nor comprehensive. It is based on an in-depth analysis of key potential applications and the value they could create in a number of ways, including the consumer surplus that arises from better products, lower prices, a cleaner environment, and better health….
Policy makers can use advanced technology to address their own operational challenges (for example, by deploying the Internet of Things to improve infrastructure management). The nature of work will continue to change, and that will require strong education and retraining programs. To address challenges that the new technologies themselves will bring, policy makers can use some of those very technologies—for example, by creating new educational and training systems with the mobile Internet, which can also help address an ever-increasing productivity imperative to deliver public services more efficiently and effectively. To develop a more nuanced and useful view of technology’s impact, governments may also want to consider new metrics that capture more than GDP effects. This approach can help policy makers balance the need to encourage growth with their responsibility to look out for the public welfare as new technologies reshape economies and lives.”
How We Imagined the Internet Before the Internet Even Existed
Matt Novak in Paleofuture: “In a few years, men will be able to communicate more effectively through a machine than face to face. Sounds obvious today. But in 1968, a full year before ARPANET made its first connection? It was downright clairvoyant…
The paper was written by J.C.R. Licklider and Robert Taylor, illustrated by Rowland B. Wilson, and appeared in the April 1968 issue of Science and Technology. The article includes some of the most amazingly accurate predictions for what networked computing would eventually allow….
The article rather boldly predicts that the computerized networks of the future will be even more important for communication than the “printing press and the picture tube”—another idea not taken for granted in 1968:
Creative, interactive communication requires a plastic or moldable medium that can be modeled, a dynamic medium in which premises will flow into consequences, and above all a common medium that can be contributed to and experimented with by all.
Such a medium is at hand—the programmed digital computer. Its presence can change the nature and value of communication even more profoundly than did the printing press and the picture tube, for, as we shall show, a well-programmed computer can provide direct access both to informational resources and to the processes for making use of the resources.
The paper predicts that the person-to-person interaction that a networked computer system allows for will not only build relationships between individuals, but will build communities.
What will on-line interactive communities be like? In most fields they will consist of geographically separated members, sometimes grouped in small clusters and sometimes working individually. They will be communities not of common location, but of common interest. In each field, the overall community of interest will be large enough to support a comprehensive system of field-oriented programs and data.
…In the end, Licklider and Taylor predict that all of this interconnectedness will make us happier and even make unemployment a thing of the past. Their vision of everyone sitting at a console, working “through the network” is stunningly accurate for an information-driven society that fifty years ago would’ve looked far less tech-obsessed.
When people do their informational work “at the console” and “through the network,” telecommunication will be as natural an extension of individual work as face-to-face communication is now. The impact of that fact, and of the marked facilitation of the communicative process, will be very great—both on the individual and on society.
First, life will be happier for the on-line individual because the people with whom one interacts most strongly will be selected more by commonality of interests and goals than by accidents of proximity. Second, communication will be more effective and productive, and therefore more enjoyable. Third, much communication and interaction will be with programs and programmed models, which will be (a) highly responsive, (b) supplementary to one’s own capabilities, rather than competitive, and (c) capable of representing progressively more complex ideas without necessarily displaying all the levels of their structure at the same time—and which will therefore be both challenging and rewarding. And, fourth, there will be plenty of opportunity for everyone (who can afford a console) to find his calling, for the whole world of information, with all its fields and disciplines, will be open to him—with programs ready to guide him or to help him explore.
(You can read the entire paper online [pdf].)”
Introducing: Project Open Data
White House Blog: “Technology evolves rapidly, and it can be challenging for policy and its implementation to evolve at the same pace. Last week, President Obama launched the Administration’s new Open Data Policy and Executive Order aimed at ensuring that data released by the government will be as accessible and useful as possible. To make sure this tech-focused policy can keep up with the speed of innovation, we created Project Open Data.
Project Open Data is an online, public repository intended to foster collaboration and promote the continual improvement of the Open Data Policy. We wanted to foster a culture change in government where we embrace collaboration and where anyone can help us make open data work better. The project is published on GitHub, an open source platform that allows communities of developers to collaboratively share and enhance code. The resources and plug-and-play tools in Project Open Data can help accelerate the adoption of open data practices. For example, one tool instantly converts spreadsheets and databases into APIs for easier consumption by developers. The idea is that anyone, from Federal agencies to state and local governments to private citizens, can freely use and adapt these open source tools—and that’s exactly what’s happening.
Within the first 24 hours after Project Open Data was published, more than two dozen contributions (or “pull requests” in GitHub speak) were submitted by the public. The submissions included everything from fixing broken links, to providing policy suggestions, to contributing new code and tools. One pull request even included new code that translates geographic data from locked formats into open data that is freely available for use by anyone…”
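The spreadsheet-to-API tool mentioned above is a separate open source utility in the Project Open Data repository; as a rough, hypothetical illustration of the underlying idea (the agency names and counts below are invented), the core transformation is simply CSV rows into JSON records:

```python
import csv
import io
import json

# Toy illustration of the spreadsheet-to-API idea: parse CSV text into
# row dictionaries and serialize them as JSON, the payload an API would
# return. The real Project Open Data tool wraps this in a web service.

def csv_to_json(csv_text):
    """Convert CSV text into a JSON array of row objects."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows, indent=2)

# Invented example data standing in for an uploaded spreadsheet.
spreadsheet = """agency,datasets
Department of Commerce,417
Department of Energy,289
"""
print(csv_to_json(spreadsheet))
```

Exposing data this way lets developers consume a spreadsheet programmatically without the publishing agency building a custom API by hand.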
The open parliament in the age of the internet
The World Bank’s Tiago Peixoto, reviewing Cristiano Faria’s book on Open Parliaments: “There are many reasons to read Cristiano’s piece, one of them being the scarcity of literature dealing with the usage of ICT by the legislative branch. I was honoured to be invited to write the preface to this book, in which I list a few other reasons why I think this book is very worthwhile reading. I have reproduced the preface below, with the addition of some hyperlinks.
***
Towards the end of the 18th century, not long after the French Revolution, the engineer Claude Chappe invented the optical telegraph. Also known as the Napoleonic Telegraph, this technological innovation enabled the transmission of messages over great distances at unprecedented speeds for its time. This novelty did not go unnoticed by the intellectuals of the period: the possibility of establishing a telegraph network that could connect individuals at high speed and low cost was seen as a unique opportunity for direct democracy to flourish.
The difficulties associated with direct democracy, so eloquently expressed by Rousseau just a few years earlier, no longer seemed relevant: simply opening the code used by the telegraph operators would suffice for a whirlpool of ideas to flow between citizens and government, bringing a new era of participatory decision-making. Events, however, took a different turn, and as time went by the enthusiasm for a democratic renewal faded away.
In the course of the centuries that followed, similar stories abounded. The emergence of each new ICT gave rise to a period of enthusiasm surrounding a renewal in politics and government, only to be followed by bitter disillusionment. While the causes of these historical experiences are multiple, it is safe to say that the failure of these technologies to deliver their much-heralded potential is underscored by a lack of understanding of the role of political institutions. These institutions are, inexorably, sources of obstacles and challenges that go beyond the reach of technological solutions.
Indeed, one could argue that despite the historical evidence, even today a certain naïveté permeates the majority of academic works in the domain of electronic democracy and open government, overestimating technological innovation and neglecting the role of institutions, actors, and their respective strategies.
Not falling prey to the techno-determinist temptation but rather carrying out an analysis grounded in institutions, organizational processes and actors’ strategies, is one of the many strengths of Cristiano Faria’s work…”