Wired: “Have you ever thought you could do a better job writing the laws of our country than those jokers on Capitol Hill? Or have you at least felt the urge to scratch a few lines out of a bill and replace them with something else? Here’s your chance.
Every bill currently being debated in the U.S. House of Representatives is available from a single website, and anyone can comment on the legislation or annotate it.
The site is powered by Madison Project, an open source software platform for writing, publishing, and annotating legislation. Like the site itself, the software was created by the OpenGov Foundation, a non-partisan, nonprofit organization co-founded by Rep. Darrell Issa, a Republican from California….
Any government agency or advocacy group can use Madison to gather public feedback on legislation. It’s slated to be used in Baltimore and San Francisco, where everything from building codes to LSD laws will be open to public comment. Meanwhile, CrunchGov, a tech politics site run by the blog TechCrunch, and a lobbying firm called the Internet Association use Madison to gather policy ideas from the public.
Madison is a lot like a wiki or a content management system such as Drupal or WordPress, but instead of juggling blog posts or technical documentation, its users manage policy.
For now, the San Francisco and Baltimore sites only let you comment on laws using Disqus (Kraft describes this as a “baby step” toward a full Madison roll-out). And though the CrunchGov and House of Representatives sites let you edit policy as well, the changes you make to a bill or law can’t yet be shared with others. Kraft says future versions will include tools for sharing custom versions of a law and a Wikipedia-style system for tracking changes. He also says it will integrate with GitHub, a site originally designed for software developers to share and collaborate on code but now used for a wide variety of other purposes, from wedding planning to public policy.”
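The change tracking Kraft describes maps naturally onto the diff model GitHub already uses for code. As a minimal sketch of the idea (the bill text and version labels below are invented for illustration), Python’s standard difflib can show exactly which lines a proposed edit touches:

```python
import difflib

# Two hypothetical versions of a bill section (invented for illustration).
original = """SEC. 2. DEFINITIONS.
(a) The term 'public data' means any data collected by an agency.
(b) Agencies shall publish public data within 90 days.
""".splitlines(keepends=True)

amended = """SEC. 2. DEFINITIONS.
(a) The term 'public data' means any non-personal data collected by an agency.
(b) Agencies shall publish public data within 30 days.
""".splitlines(keepends=True)

# A unified diff marks removed lines with "-" and added lines with "+",
# the same representation Git and GitHub use for source code.
diff = list(difflib.unified_diff(original, amended,
                                 fromfile="bill-v1", tofile="bill-v2"))
print("".join(diff))
```

Storing each citizen’s edits as diffs like this is what would make custom versions of a law shareable and comparable, in the same way forks and pull requests work for software.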
Alpheus Bingham, co-founder and a member of the board of directors at InnoCentive, in Wired: “But over the course of a decade, what we now call cloud-based or software-as-a-service (SaaS) applications have taken the world by storm and become mainstream. Today, cloud computing is an umbrella term that applies to a wide variety of successful technologies (and business models), from business apps like Salesforce.com, to infrastructure like Amazon Elastic Compute Cloud (Amazon EC2), to consumer apps like Netflix. It took years for all these things to become mainstream, and if the last decade saw the emergence (and eventual dominance) of the cloud over previous technologies and models, this decade will see the same thing with crowdsourcing.
Both an art and a science, crowdsourcing taps into the global experience and wisdom of individuals, teams, communities, and networks to accomplish tasks and work. It doesn’t matter who you are, where you live, or what you do or believe — in fact, the more diversity of thought and perspective, the better. Diversity is king and it’s common for people on the periphery of — or even completely outside of — a discipline or science to end up solving important problems.
The specific nature of the work offers few constraints – from a small business needing a new logo, to a large consumer goods company looking to ideate marketing programs, to a nonprofit research organization seeking a biomarker for ALS – in each case the value is clear.
To get to the heart of the matter on why crowdsourcing is this decade’s cloud computing, several immediate reasons come to mind:
Crowdsourcing Is Disruptive
Much as cloud computing has created a new guard that in many ways threatens the old guard, so too has crowdsourcing. …
Crowdsourcing Provides On-Demand Talent Capacity
Labor is expensive and good talent is scarce. Think about the cost of adding ten additional researchers to a 100-person R&D team. You’ve increased your research capacity by 10% (more or less), but at a significant cost – and a significant FIXED cost at that. …
Crowdsourcing Enables Pay-for-Performance
You pay as you go with cloud computing — gone are the days of massive upfront capital expenditures followed by years of ongoing maintenance and upgrade costs. Crowdsourcing does even better: you pay for solutions, not effort, which predictably sometimes results in failure. In fact, with crowdsourcing, the marketplace bears the cost of failure, not you….
Crowdsourcing “Consumerizes” Innovation
Crowdsourcing can provide a platform for bi-directional communication and collaboration with diverse individuals and groups, whether internal or external to your organization — employees, customers, partners and suppliers. Much as cloud computing has consumerized technology, crowdsourcing has the same potential to consumerize innovation, and more broadly, how we collaborate to bring new ideas, products and services to market.
Crowdsourcing Provides Expert Services and Skills That You Don’t Possess
One of the early value propositions of cloud-based business apps was that you didn’t need to engage IT to deploy them or Finance to help procure them, thereby allowing general managers and line-of-business heads to do their jobs more fluently and more profitably…”
New paper from the Brennan Center for Justice: “After the attacks of September 11, 2001, the government’s authority to collect, keep, and share information about Americans with little or no basis to suspect wrongdoing dramatically expanded. While the risks and benefits of this approach are the subject of intense debate, one thing is certain: it results in the accumulation of large amounts of innocuous information about law-abiding citizens. But what happens to this data? In the search to find the needle, what happens to the rest of the haystack? For the first time in one report, the Brennan Center takes a comprehensive look at the multiple ways U.S. intelligence agencies collect, share, and store data on average Americans. The report, which surveys five intelligence agencies, finds that non-terrorism-related data can be kept for up to 75 years or more, clogging national security databases and creating opportunities for abuse, and recommends multiple reforms that seek to tighten control over the government’s handling of Americans’ information.”
New and forthcoming book by Cass Sunstein: “Based on a series of pathbreaking lectures given at Yale University in 2012, this powerful, thought-provoking work by national best-selling author Cass R. Sunstein combines legal theory with behavioral economics to make a fresh argument about the legitimate scope of government, bearing on obesity, smoking, distracted driving, health care, food safety, and other highly volatile, high-profile public issues. Behavioral economists have established that people often make decisions that run counter to their best interests—producing what Sunstein describes as “behavioral market failures.” Sometimes we disregard the long term; sometimes we are unrealistically optimistic; sometimes we do not see what is in front of us. With this evidence in mind, Sunstein argues for a new form of paternalism, one that protects people against serious errors but also recognizes the risk of government overreaching and usually preserves freedom of choice.
Against those who reject paternalism of any kind, Sunstein shows that “choice architecture”—government-imposed structures that affect our choices—is inevitable, and hence that a form of paternalism cannot be avoided. He urges that there are profoundly moral reasons to ensure that choice architecture is helpful rather than harmful—and that it makes people’s lives better and longer.”
New paper by Ewan Sutherland: “While attention has been given to the uses of big data by network operators and to the provision of open data by governments, there has been no systematic attempt to re-examine the regulatory systems for telecommunications. The power of public authorities to access the big data held by operators could transform regulation by simplifying proof of bias or discrimination, making operators more susceptible to behavioural remedies, while it could also be used to deliver much finer granularity of decision making. By opening up data held by government and its agencies to enterprises, think tanks and research groups it should be possible to transform market regulation.”
Tom Simonite in MIT Technology Review: “The sixth most widely used website in the world is not run anything like the others in the top 10. It is not operated by a sophisticated corporation but by a leaderless collection of volunteers who generally work under pseudonyms and habitually bicker with each other. It rarely tries new things in the hope of luring visitors; in fact, it has changed little in a decade. And yet every month 10 billion pages are viewed on the English version of Wikipedia alone. When a major news event takes place, such as the Boston Marathon bombings, complex, widely sourced entries spring up within hours and evolve by the minute. Because there is no other free information source like it, many online services rely on Wikipedia. Look something up on Google or ask Siri a question on your iPhone, and you’ll often get back tidbits of information pulled from the encyclopedia and delivered as straight-up facts.
Yet Wikipedia and its stated ambition to “compile the sum of all human knowledge” are in trouble. The volunteer workforce that built the project’s flagship, the English-language Wikipedia—and must defend it against vandalism, hoaxes, and manipulation—has shrunk by more than a third since 2007 and is still shrinking. Those participants left seem incapable of fixing the flaws that keep Wikipedia from becoming a high-quality encyclopedia by any standard, including the project’s own. Among the significant problems that aren’t getting resolved is the site’s skewed coverage: its entries on Pokemon and female porn stars are comprehensive, but its pages on female novelists or places in sub-Saharan Africa are sketchy. Authoritative entries remain elusive. Of the 1,000 articles that the project’s own volunteers have tagged as forming the core of a good encyclopedia, most don’t earn even Wikipedia’s own middle-ranking quality scores.
The main source of those problems is not mysterious….”
New paper by Seth A. Marvel, Travis Martin, Charles R. Doering, David Lusseau, M. E. J. Newman: “The “small-world effect” is the observation that one can find a short chain of acquaintances, often of no more than a handful of individuals, connecting almost any two people on the planet. It is often expressed in the language of networks, where it is equivalent to the statement that most pairs of individuals are connected by a short path through the acquaintance network. Although the small-world effect is well-established empirically for contemporary social networks, we argue here that it is a relatively recent phenomenon, arising only in the last few hundred years: for most of mankind’s tenure on Earth the social world was large, with most pairs of individuals connected by relatively long chains of acquaintances, if at all. Our conclusions are based on observations about the spread of diseases, which travel over contact networks between individuals and whose dynamics can give us clues to the structure of those networks even when direct network measurements are not available. As an example we consider the spread of the Black Death in 14th-century Europe, which is known to have traveled across the continent in well-defined waves of infection over the course of several years. Using established epidemiological models, we show that such wave-like behavior can occur only if contacts between individuals living far apart are exponentially rare. We further show that if long-distance contacts are exponentially rare, then the shortest chain of contacts between distant individuals is on average a long one. The observation of the wave-like spread of a disease like the Black Death thus implies a network without the small-world effect.”
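The intuition behind the result can be illustrated with a toy network: on a pure ring of local contacts, the shortest chains between distant individuals are long, and adding even a few long-range ties collapses the average path length. A minimal sketch in plain Python (the 60-node ring and the three shortcut edges are invented for illustration, not taken from the paper’s data):

```python
from collections import deque

def avg_path_length(n, edges):
    """Mean shortest-path length over all reachable pairs, via BFS from every node."""
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    total, pairs = 0, 0
    for src in range(n):
        dist = {src: 0}
        q = deque([src])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        total += sum(d for node, d in dist.items() if node != src)
        pairs += len(dist) - 1
    return total / pairs

n = 60
ring = [(i, (i + 1) % n) for i in range(n)]   # local contacts only
shortcuts = [(0, 30), (15, 45), (7, 52)]      # a few rare long-range ties

print(avg_path_length(n, ring))               # ring only: 900/59, about 15.25
print(avg_path_length(n, ring + shortcuts))   # noticeably shorter on average
```

This is the small-world effect in miniature: when long-distance contacts become exponentially rare (here, absent), average chains grow long, which is exactly the regime the authors infer from the wave-like spread of the Black Death.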
Initiative 14 of the Chicago Tech Plan: “The City will continue to increase and improve the quality of City data available internally and externally, and facilitate methods for analyzing that data to help create a smarter and more efficient city.”
Releasing data is a crucial component of creating an open and transparent government. Chicago is currently a leader in open data, capturing and publishing more than 400 machine-readable datasets to date. In 2012, Mayor Emanuel issued an executive order ensuring that the City continues to release new data, and empowering the Chief Data Officer to work with other City departments and agencies to develop new datasets. The City is following an aggressive schedule for releasing new datasets to the public and updating existing sets. It is also working to facilitate ways the City and others can use data to help improve City operations.
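As a small illustration of what “machine-readable” buys developers: a published CSV can be aggregated in a few lines of code. The column names and rows below are invented sample data in the style of a 311 export, not an actual City of Chicago dataset (the real ones live on the city’s data portal):

```python
import csv
import io
from collections import Counter

# Invented sample rows mimicking a machine-readable 311 service-request export.
sample_csv = """service_request_type,status,ward
Pothole in Street,Open,47
Street Light Out,Completed,5
Pothole in Street,Completed,47
Graffiti Removal,Open,25
"""

# DictReader maps each row to its column names, so aggregation is one line.
reader = csv.DictReader(io.StringIO(sample_csv))
counts = Counter(row["service_request_type"] for row in reader)
print(counts.most_common())  # [('Pothole in Street', 2), ('Street Light Out', 1), ('Graffiti Removal', 1)]
```

Apps like ChicagoWorks are built on exactly this kind of structured access: because the data arrives with stable column names rather than as a PDF, third parties can count, map, and track requests without any help from the city.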
Open Data Success Story: ChicagoWorks
A collaboration between Alderman Ameya Pawar and local graphic design company 2pensmedia, ChicagoWorks is a free app that is changing the way Chicagoans interact with government. Using the app, residents can submit service requests directly to 311 and track the progress of reported issues. So far, more than 3,000 residents have downloaded the app.
Open Data Success Story: SpotHero and Techstars Chicago
The app SpotHero makes residents’ lives easier by helping them find and reserve parking spots online. Developed in Chicago, the app had its start at Excelerate Labs, a Chicago start-up accelerator, now Techstars Chicago, that provides mentorship, training, and networking opportunities to 10 selected start-ups each year. After graduating from the program, ranked as one of the top 3 accelerators nationally, SpotHero attracted $2.5 million in VC funding. With this funding, the company is hiring new staff and working to expand to other cities.
Open Data Success Story: OpenGov Hack Night
Chicago boasts a community of “civic hackers” who are passionate about using technology to improve the city. An example of this passion in action is the OpenGov Hack Night. Organized by Open City, an organization that builds web apps and other tools using open government data, the Hack Night attracts civic hackers and curious residents eager to explore the intersection of open government data, smart cities, and technology. Every week, the Hack Night provides a collaborative environment where residents can learn about open data, work on cutting-edge projects, and network with passionate civic technologists.
by R. Arunachalam and S. Sarkar: “Governments across the world are facing more unique challenges today than ever before. The recent Arab Spring phenomenon is an example of how governments can be impacted if they ignore citizen sentiment. There is a growing trend of governments moving toward a citizen-centric model, in which priorities and services are driven by citizen needs rather than government capability. Such trends are forcing governments to rethink and reshape their policies on citizen interaction. New disruptive technologies such as cloud and mobile are opening up new opportunities for governments to enable innovation in these interactions.
The advent of social media is a recent addition to these disruptive socio-technical enablers. Governments are fast realizing that it can be a great vehicle for getting closer to citizens, as it can provide deep insight into what citizens want. Thus, in the current gloomy climate of the world economy, governments can reorganize and reprioritize the allocation of limited funds, thereby creating maximum impact on citizens’ lives. Building such insight is a non-trivial task because of the huge volume of information that social media can generate. However, sentiment analysis, or opinion mining, can be a useful vehicle in this journey.
In this work, we present a model and a case study that analyze citizen sentiment from social media to help governments make decisions.”
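The paper’s specific model is not described in this excerpt; as a generic sketch of the lexicon-based approach commonly used for opinion mining, each citizen post can be scored by counting positive versus negative words. The word lists and posts below are invented for illustration, not the authors’ lexicon:

```python
# Minimal lexicon-based sentiment sketch (not the authors' actual model):
# score each post as (# positive words) - (# negative words).
POSITIVE = {"good", "great", "helpful", "fast", "clean"}
NEGATIVE = {"bad", "slow", "broken", "unfair", "corrupt"}

def sentiment(post: str) -> int:
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

posts = [
    "The new permit portal is fast and helpful",
    "Road repairs are slow and the process feels unfair",
]
scores = [sentiment(p) for p in posts]
print(scores)  # [2, -2]
```

Real systems handle negation, sarcasm, and non-dictionary slang, which is part of why the authors call building such insight non-trivial; but even this crude aggregate, computed over millions of posts, can surface which services citizens are unhappy with.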
Antony Williams in EMBnet.journal: “Science has evolved from the isolated individual tinkering in the lab, through the era of the “gentleman scientist” with his or her assistant(s), to group-based then expansive collaboration and now to an opportunity to collaborate with the world. With the advent of the internet the opportunity for crowd-sourced contribution and large-scale collaboration has exploded and, as a result, scientific discovery has been further enabled. The contributions of enormous open data sets, liberal licensing policies and innovative technologies for mining and linking these data have given rise to platforms that are beginning to deliver on the promise of semantic technologies and nanopublications, facilitated by the unprecedented computational resources available today, especially the increasing capabilities of handheld devices. The speaker will provide an overview of his experiences in developing a crowdsourced platform for chemists allowing for data deposition, annotation and validation. The challenges of mapping chemical and pharmacological data, especially in regards to data quality, will be discussed. The promise of distributed participation in data analysis is already in place.”