Ty McCormick in Foreign Policy: “If you’re checking in on Foursquare or ramping up the ‘strength’ of your LinkedIn profile, you’ve just been gamified — whether or not you know it. ‘Gamification,’ today’s hottest business buzzword, is gaining traction everywhere from corporate boardrooms to jihadi chat forums, and its proponents say it can revolutionize just about anything, from education to cancer treatment to ending poverty. While the global market for gamification is expected to explode from $242 million in 2012 to $2.8 billion in 2016, according to market analysis firm M2 Research, there is a growing chorus of critics who think it’s little more than a marketing gimmick. So is the application of game mechanics to everyday life more than just a passing fad? You decide.
Kellogg’s offers its first cereal “premium,” the Funny Jungleland Moving-Pictures book, free with every two boxes. Two years later, Cracker Jack starts putting prizes, from stickers to baseball cards, in its boxes of caramel-coated corn snacks. “A prize in every box” is an instant hit; over the next 100 years, Cracker Jack gives away more than 23 billion in-package treasures. By the 1950s, the concept of gamification is yet to be born, but its primary building block — fun — is motivating billions of consumers around the world.
Duke University sociologist Donald F. Roy publishes “Banana Time,” an ethnographic study of garment workers in Chicago. Roy chronicles how workers use “fun” and “fooling” on the factory room floor — including a daily ritual game in which workers steal a banana — to stave off the “beast of monotony.” The notion that fun can enhance job satisfaction and productivity inspires reams of research on games in the workplace….”
Geoff Mulgan’s blog: “We’ve often discussed the role of failure in innovation – and have started running FailureFests and other devices to get practitioners talking honestly about what they learned from things that didn’t work. We all know how hard this is.
There’s a new book out by the guru of failure in engineering, Henry Petroski: To Forgive Design: Understanding Failure. He argues that the best way of achieving lasting success is by understanding failure and that a single failure may show ‘weaknesses in reasoning, knowledge, and performance that all the successful designs may not even hint at’. For him the best examples are collapsing bridges. Here’s a very different, but helpful, example of trying to extract some useful lessons from a well-intentioned project that didn’t quite work in a field very distant from bridges. It’s a reminder of why it’s so important that the new What Works centres are brave enough to set out clearly the ideas that they think have been tested and shown not to work – that may be just as useful as the recommendations on best or proven practice.
Of course it’s not enough to say we should celebrate failure. No organisation or system can do that. Instead there is an unavoidable ambiguity in the relationship between innovation and failure. On the one hand if you’re not failing often, you’re probably not taking enough creative risks. On the other hand, if you fail too much don’t expect to keep your job, or your funding. “
A new book by Lev Manovich: “This new book from the celebrated author of The Language of New Media is the first to offer a rigorous theory of the technology we all use daily – software for media authoring, access, and sharing.
What motivated developers in the 1960s and ‘70s to create the concepts and techniques that now underlie contemporary applications like Photoshop, Illustrator, and Final Cut?
How do these tools shape the visual aesthetics of contemporary media and design? What happens to the idea of a “medium” after previously media-specific tools have been simulated and extended into software?
Lev Manovich answers these questions through detailed analysis of key media applications such as Photoshop and After Effects, popular web services such as Google Earth, and milestone projects in design, motion graphics, and interactive environments.
Software Takes Command is a must for scholars, designers, technologists, and artists concerned with contemporary media and digital culture.”
Ronald Brownstein in The National Journal: “Washington may be paralyzed by partisanship, but across the country, grassroots innovators are crafting solutions to our problems….This special issue of National Journal celebrates these pragmatic problem-solvers in business, the civic sector, local government, and partnerships that creatively combine all three. At a time of endemic stalemate in the nation’s capital, think of it as a report from the America that works (to borrow a recent phrase from The Economist)….
Another significant message is that the communications revolution, by greatly accelerating the sharing of ideas, has produced a “democratization of innovation,” as author Vijay Vaitheeswaran put it in his 2012 book, Need, Speed, and Greed. This dynamic has simultaneously allowed breakthroughs to disseminate faster than ever and empowered more people inside companies and communities to tackle problems previously left to elites. “One of the most interesting stories in social change today is how much creative problem-solving is emerging from citizens scattered far and wide who are taking it upon themselves to fix things and who, in many cases, are outperforming traditional organizations,” David Bornstein, founder of the Dowser.org website that tracks social innovation, wrote in The New York Times last year. Our honoree Eric Greitens, the former Navy SEAL who founded The Mission Continues for other post-9/11 veterans, personifies this trend. Across the categories, many honorees insist they have pursued new approaches in part because they could no longer wait for Washington to address the problems they face. In a world where barriers to the dispersal of ideas are crumbling, waiting for elites to propose answers may soon seem as outdated as waiting for a dial-up connection to the Internet.
The third conclusion limits the first two. Even many of the most dynamic grassroots innovations will remain isolated islands of excellence in this continent-sized society without energy and amplification from the top. Donald Kettl, dean of the University of Maryland’s School of Public Policy, notes the federal government is unavoidably a major force on many of the challenges facing America, particularly reforming education, health care, and training; developing regional economic strategies; and providing physical and digital infrastructure. Washington need not direct or control the response to these problems, but change on a massive scale is always harder without stronger signals and incentives than the federal government has provided in recent years. “It is possible to feed change aggressively from the bottom,” Kettl says. “[But] the federal government, for better or worse, inevitably is involved…. There’s a natural limit in what’s possible to bubble up from the bottom….
Special issue at https://web.archive.org/web/2013/http://www.nationaljournal.com/back-in-business ”
Jeremy Rozansky, assistant editor of National Affairs, in The New Atlantis: “In his debut book Uncontrolled, entrepreneur and policy analyst Jim Manzi argues that social scientists and policymakers should instead adopt the “experimental method.” The essential tool of this method is the randomized field trial (RFT), a technique that already informs many of our successful private enterprises. Perhaps the best known example of RFTs — one that Manzi uses to illustrate the concept — is the kind of clinical trial performed to test new medicines, wherein researchers “undertake a painstaking series of replicated controlled experiments to measure the effects of various interventions under various conditions,” as he puts it.
The central argument of Uncontrolled is that RFTs should be adopted more widely by businesses as well as government. The book is helpful and holds much wisdom — although the approach he recommends is ultimately just another streetlamp in the night, casting a pale light that tapers off after a few yards. Much still lies beyond its glow….
The econometric method now dominates the social sciences because it helps to cope with the problem of high causal density. It begins with a large data set: economic records, election results, surveys, and other similar big pools of data. Then the social scientist uses statistical techniques to model the interactions of sundry independent variables (causes) and a dependent variable (the effect). But for this method to work properly, social scientists must know all the causally important variables beforehand, because a hidden conditional could easily yield a false positive.
The experimental method, which Manzi prefers, offers a different way of coping with high causal density: sidestepping the problem of isolating exact causes. To sort out whether a given treatment or policy works, a scientist or social scientist can try it out on a random section of a population, and compare the results to a different section of the population where the treatment or policy was not implemented. So while econometric models aim to identify which particular variables are responsible for different results, RFTs have more modest aims, as they do not seek to identify every hidden conditional. By using the RFT approach, we may not know precisely why we achieved a desired effect, since we do not model all possible variables. But we can gain some ability to know that we will achieve a desired effect, at least under certain conditions.
Strictly speaking, even a randomized field trial only tells us with certainty that some exact technique worked with some specific population on some specific date in the past when conducted by some specific experimenters. We cannot know whether a given treatment or policy will work again under the same conditions at a later date, much less on a different population, much less still on the population as a whole. But scientists must always be cautious about moving from particular results to general conclusions; this is why experiments need to be replicated. And the more we do replicate them, the more information we can gain from those particular results, and the more reliably they can build toward teaching us which treatments or policies might work or (more often) which probably won’t. The result is that the RFT approach is very well suited to the business of government, since policymakers usually only need to know whether a given policy will work — whether it will produce a desired outcome.”
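The contrast Rozansky draws between econometric modeling and the RFT approach can be made concrete with a small simulation (a hypothetical sketch of ours, not an example from Manzi's book): we give each person an unobserved baseline that we never model, randomly assign half to a treatment, and recover the treatment effect from the difference in group means alone.

```python
import random

random.seed(0)

def run_rft(population_size=10_000, true_effect=2.0):
    """Simulate a randomized field trial: random assignment lets us
    estimate an effect without modeling every hidden variable."""
    treatment, control = [], []
    for _ in range(population_size):
        # Each person has an unobserved baseline (the "hidden
        # conditionals" an econometric model would need) plus noise.
        baseline = random.gauss(50, 10)
        if random.random() < 0.5:  # random assignment to treatment
            treatment.append(baseline + true_effect + random.gauss(0, 5))
        else:
            control.append(baseline + random.gauss(0, 5))
    # The difference in means estimates the treatment effect even
    # though we never identified which hidden variables drive outcomes.
    return sum(treatment) / len(treatment) - sum(control) / len(control)

estimate = run_rft()
print(round(estimate, 1))  # close to the true effect of 2.0
```

Note what the simulation does not tell us, echoing the excerpt: it says nothing about why the treatment worked, only that it did for this population under these conditions, which is why replication carries the weight it does in Manzi's argument.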
Steve Ressler in GovTech: “Although government 2.0 has been around since Bill Eggers’ 2005 book Government 2.0, the term truly took over in 2008. After President Barack Obama’s 2008 election, his first memorandum in office was the Open Government Directive with its three pillars of creating a more transparent, participatory and collaborative government. This framework quickly spread from federal government down to state and local government and across the nation.
So fast-forward five years and let’s ask: what have we learned?
1. It’s about mission problems…
2. It’s about sustainability…
3. It’s about human capital…
4. It’s not static…
5. It’s more than open data…
Overall, a lot of progress has been made in five years. Beyond the items above, there is a cultural and mindset shift that we are seeing grow throughout government each year. Individuals and agencies are focusing on how to make important systemic change with new technology and approaches to improve government.”
Larry Lessig in The Daily Beast: “Almost 15 years ago, as I was just finishing a book about the relationship between the Net (we called it “cyberspace” then) and civil liberties, a few ideas seemed so obvious as to be banal: First, life would move to the Net. Second, the Net would change as it did so. Gone would be simple privacy, the relatively anonymous default infrastructure for unmonitored communication; in its place would be a perpetually monitored, perfectly traceable system supporting both commerce and the government. That, at least, was the future that then seemed most likely, as business raced to make commerce possible and government scrambled to protect us (or our kids) from pornographers, and then pirates, and now terrorists.
But what astonishes me is that today, more than a decade into the 21st century, the world has remained mostly oblivious to these obvious points about the relationship between law and code….
the fact is that there is technology that could be deployed that would give many the confidence that none of us now have. “Trust us” does not compute. But trust and verify, with high-quality encryption, could. And there are companies, such as Palantir, developing technologies that could give us, and more importantly, reviewing courts, a very high level of confidence that data collected or surveilled was not collected or used in an improper way. Think of it as a massive audit log, recording how and who used what data for what purpose. We could code the Net in a string of obvious ways to give us even better privacy, while also enabling better security.
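Lessig's "massive audit log, recording how and who used what data for what purpose" can be sketched as a tamper-evident, hash-chained log (a hypothetical illustration of the general technique, not a description of Palantir's actual technology): each entry's hash covers the previous entry, so a reviewing court can recompute the chain and detect any after-the-fact alteration.

```python
import hashlib
import json

def append_entry(log, who, data_id, purpose):
    """Record a data access; chain it to the previous entry's hash
    so the log is tamper-evident."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"who": who, "data": data_id, "purpose": purpose, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

def verify(log):
    """Recompute the chain and confirm no entry was altered or
    silently removed -- the "trust and verify" step."""
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev"] != prev_hash:
            return False
        body = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "analyst-7", "record-123", "terrorism investigation")
append_entry(log, "analyst-9", "record-123", "routine review")
print(verify(log))   # True: chain intact
log[1]["purpose"] = "edited after the fact"
print(verify(log))   # False: tampering detected
```

A production system would add signatures and secure timestamps, but even this minimal chain shows the point of the passage: oversight can rest on verification rather than on "trust us."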
Tom Slee in The New Inquiry: “Since the earliest days of Linux and of Wikipedia, conflicting attitudes to profit have co-existed with a commitment to digital sharing. Whether it’s source code, text, artistic works, or government data, some see the open digital commons as an ethical alternative to corporate production, while others believe that sharing and profit go together like wine and cheese. And now, as massively open online courses bring the rhetoric of digital openness to education and Web-based startups are making it easy to share apartments and cars and unused parking spaces and jobs, the seeds have been planted for a sharing economy whose flowering is welcomed both by idealists who value authenticity, sustainability and community sharing over commodity ownership and by venture capitalists looking to make their next fortune. Strange bedfellows.
Cities have long been sites of commons and commerce: full of trade and private enterprise but shaped by parks and streetscapes, neighborhoods and rhythms of daily life that grow from non-commodified sharing. In his 2012 book Rebel Cities, David Harvey observes how, in cities, “people of all sorts and classes mingle … to produce a common of perpetually changing and transitory life,” from the irrepressible energy of Manhattan to the café culture of Rome to Barcelona’s distinctive architecture to the symbolic meaning of modern Berlin. Yes, by 2009, volunteers had spent a hundred million hours building Wikipedia, but cities put this dramatic number into perspective: Every year the citizens of Canada alone volunteer roughly 20 Wikipedias for hospitals and children’s sports, for charities and the arts — the equivalent of more than a million full-time jobs in a population of 30 million — and there is no reason to believe that the count is complete or that Canada is exceptional.
The similarities between urban and digital worlds are not incidental. Both are cultural spaces, and cultural spaces have always been iceberg-like. Above the surface, market forces and state interventions; beneath, a mass of noncommercial activity organized, at least in part, as open commons. But while digital entrepreneurs look to the “Internet’s way of working” to disrupt the bricks and mortar of our cities, urban experiences have sober lessons for the digerati if they will listen: The relationship between commons and commerce is fraught with contradictions. Harvey never once mentions computer technology in his book, but his reflections on cities make a compelling case that money-making and sharing are far from natural allies, and that the role of openness must be questioned if commons-based production is to be a real alternative.”
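The "20 Wikipedias" comparison in the excerpt above rests on simple arithmetic: if Wikipedia represents roughly 100 million volunteer hours, then 20 Wikipedias a year is about 2 billion hours, which at a full-time working year comes to just over a million jobs. A quick check (the Wikipedia and Canada figures are the article's; the 1,900 hours-per-job assumption is ours):

```python
# Figures from the article
WIKIPEDIA_HOURS = 100_000_000                    # volunteer hours in Wikipedia by 2009
CANADA_VOLUNTEER_HOURS = 20 * WIKIPEDIA_HOURS    # "20 Wikipedias" per year

# Assumed: hours in a full-time working year (hypothetical round figure)
HOURS_PER_FULL_TIME_JOB = 1_900

full_time_jobs = CANADA_VOLUNTEER_HOURS / HOURS_PER_FULL_TIME_JOB
print(round(full_time_jobs))  # just over a million, matching the article's claim
```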
Jason Hibbets in Opensource.com: “How can you apply the concepts of open source to a living, breathing city? An open source city is a blend of open culture, open government policies, and economic development. I derived these characteristics based on my experiences and while writing my book, The Foundation for an Open Source City. Characteristics such as collaboration, participation, transparency, rapid prototyping, and many others can be applied to any city that wants to create an open source culture. Let’s take a look at these characteristics in more detail.
Five characteristics of an open source city
- Fostering a culture of citizen participation
- Having an effective open government policy
- Having an effective open data initiative
- Promoting open source user groups and conferences
- Being a hub for innovation and open source businesses
In my book, I take a look at how these five principles are being actively applied in Raleigh, North Carolina. I also incorporate other experiences from my open government adventures such as CityCamps and my first Code for America Summit. Although Raleigh is the case study, the book is a guide for how cities across the country, and world, can implement the open source city brand.”