Coase’s theories predicted the Internet’s impact on how business is done


Don Tapscott in The Globe and Mail: “Renowned economist Ronald Coase died last week at the age of 102. Among his many achievements, Mr. Coase was awarded the 1991 Nobel Prize in Economics, largely for his inspiring 1937 paper The Nature of the Firm. The Nobel committee applauded the academic for his “discovery and clarification of the significance of transaction costs … for the institutional structure and functioning of the economy.”
Mr. Coase’s enduring legacy may well be that more than 75 years later, his paper and theories help us understand the Internet’s impact on business, the economy and all our institutions… Mr. Coase wondered why there was no market within the firm. Why is it unprofitable to have each worker, each step in the production process, become an independent buyer and seller? Why doesn’t the draftsperson auction their services to the engineer? Why is it that the engineer does not sell designs to the highest bidder? Mr. Coase argued that what prevented this from happening was marketplace friction.
Mr. Coase argued that this friction gave rise to transaction costs – or to put it more broadly, collaboration or relationship costs. There are three types of these relationship costs. First are search costs, such as the hunt for appropriate suppliers. Second are contractual costs, including price and contract negotiations. Third are the co-ordination costs of meshing the different products and processes.
The upshot is that most vertically integrated corporations found it cheaper and simpler to perform most functions in-house, rather than incurring the cost, hassle and risk of constant transactions with outside partners… This is no longer the case. Many behemoths have lost market share to more supple competitors. Digital technologies slash transaction and collaboration costs. Smart companies are making their boundaries porous, using the Internet to harness knowledge, resources and capabilities outside the company. Everywhere, leading firms set a context for innovation and then invite their customers, partners and other third parties to co-create their products and services.
Today’s economic engines are Internet-based clusters of businesses. While each company retains its identity, companies function together, creating more wealth than they could ever hope to create individually. Where corporations were once gigantic, new business ecosystems tend toward the amorphous.
Procter & Gamble now gets 60 per cent of its innovation from outside corporate walls. Boeing has built a massive ecosystem to design and manufacture jumbo jets. China’s motorcycle industry, which consists of dozens of companies collaborating with no single company pulling the strings, now accounts for 40 per cent of global motorcycle production.
Looked at one way, Amazon.com is a website with many employees that ships books. Looked at another way, however, Amazon is a vast ecosystem that includes authors, publishers, customers who write reviews for the site, delivery companies like UPS, and tens of thousands of affiliates that market products and arrange fulfilment through the Amazon network. Hundreds of thousands of people are involved in Amazon’s viral marketing network.
This is leading to the biggest change to the corporation in a century and altering how we orchestrate capability to innovate, create goods and services and engage with the world. From now on, the ecosystem itself, not the corporation per se, should serve as the point of departure for every business strategist seeking to understand the new economy – and for every manager, entrepreneur and investor seeking to prosper in it.
Nor does the Internet tonic apply only to corporations. The Web is dropping transaction costs everywhere – enabling networked approaches to almost every institution in society, from government, media, science and health care to our energy grid, transportation systems and institutions for global problem solving.
Governments can change from being vertically integrated, industrial-age bureaucracies to become networks. By releasing their treasures of raw data, governments can now become platforms upon which companies, NGOs, academics, foundations, individuals and other government agencies can collaborate to create public value…”
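Coase’s make-or-buy comparison lends itself to a back-of-the-envelope calculation. Below is a minimal sketch (all numbers invented for illustration) showing the decision flip once digital tools shrink the three relationship costs Tapscott lists:

```python
# Toy model of Coase's make-or-buy logic: buy from the market only when the
# market price plus the three relationship costs undercuts the in-house cost.
def total_market_cost(price, search, contracting, coordination):
    """Market price plus Coase's search, contracting and co-ordination costs."""
    return price + search + contracting + coordination

IN_HOUSE = 100.0      # invented cost of doing the work inside the firm
MARKET_PRICE = 80.0   # invented price an outside supplier would charge

# High pre-Internet friction vs. slashed post-Internet friction.
scenarios = {
    "pre-Internet": total_market_cost(MARKET_PRICE, search=12, contracting=10, coordination=15),
    "post-Internet": total_market_cost(MARKET_PRICE, search=1, contracting=2, coordination=3),
}

for label, cost in scenarios.items():
    decision = "buy from the market" if cost < IN_HOUSE else "keep it in-house"
    print(f"{label}: market total {cost:.0f} vs in-house {IN_HOUSE:.0f} -> {decision}")
```

Nothing about the firm changes between the two scenarios; only the friction does, which is the whole point of the argument.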

Assessing Zuckerberg’s Idea That Facebook Could Help Citizens Re-Make Their Government


Gregory Ferenstein in TechCrunch: “Mark Zuckerberg has a grand vision that Facebook will help citizens in developing countries decide their own governments. It’s a lofty and partially attainable goal. While Egypt probably won’t let citizens vote for their next president with a Like, it is theoretically possible to use Facebook to crowdsource expertise. Governments around the world are experimenting with radical online direct democracy, but it doesn’t always work out.

Very briefly, Zuckerberg laid out his broad vision for e-government to Wired’s Steven Levy, while defending Internet.org, a new consortium to bring broadband to the developing world.

“People often talk about how big a change social media has been for our culture here in the U.S. But imagine how much bigger a change it will be when a developing country comes online for the first time ever. We use things like Facebook to share news and keep in touch with our friends, but in those countries, they’ll use this for deciding what kind of government they want to have. Getting access to health care information for the first time ever.”

When he references “deciding … government,” Zuckerberg could be talking about voting, sharing ideas, or crafting a constitution. We decided to assess the possibilities of them all….
For citizens in the exciting/terrifying position of constructing a brand-new government, American-style democracy is one of many options. Britain, for instance, has a parliamentary system and no written constitution. In other cases, a government may want to heed political scientists’ advice and develop a “consensus democracy,” where more than two political parties are incentivized to work collaboratively with citizens, business, and different branches of government to craft laws.
At least once, choosing a new style of democracy has been attempted through the Internet. After the global financial meltdown wrecked Iceland’s economy, the happy citizens of the grass-covered country decided to redo their government and solicit suggestions from the public (950 Icelanders chosen by lottery and general calls for ideas through social networks). After much press about Iceland’s “crowdsourced” constitution, it crashed miserably after most of the elected leaders rejected it.
Crafting law, especially a constitution, is legally complex; unless there is a systematic way to translate haphazard citizen suggestions into legalese, the results are disastrous.
“Collaborative drafting, at large scale, at low costs, and that is inclusive, is something that we still don’t know how to do,” says Tiago Peixoto, a World Bank Consultant on participatory democracy (and one of our Most Innovative People In Democracy).
Peixoto, who helps the Brazilian government conduct some of the world’s only online policymaking, says he’s optimistic that Facebook could be helpful, but he wouldn’t use it to draft laws just yet.
While technically it is possible for social networks to craft a new government, we just don’t know how to do it very well, and, therefore, leaders are likely to reject the idea. In other words, don’t expect Egypt to decide its future through Facebook likes.”

Mapping the Twitterverse


Phys.org: “What does your Twitter profile reveal about you? More than you know, according to Chris Weidemann. The GIST master’s student has developed an application that follows geospatial footprints.
You start your day at your favorite breakfast spot. When your order of strawberry waffles with extra whipped cream arrives, it’s too delectable not to share with your Twitter followers. You snap a photo with your smartphone and hit send. Then, it’s time to hit the books.
You tweet your friends that you’ll be at the library on campus. Later that day, palm trees silhouette a neon-pink sunset. You can’t resist. You tweet a picture with the hashtag #ILoveLA.
You may not realize that when you tweet those breezy updates and photos of food, you are sharing information about your location.
Chris Weidemann, a graduate student in the Geographic Information Science and Technology (GIST) online master’s program at USC Dornsife, investigated just how much public geospatial data was generated by Twitter users and how their information—available through Twitter’s application programming interface (API)—could potentially be used by third parties. His study was published in June 2013 in the International Journal of Geoinformatics.
Twitter has approximately 500 million active users, and reports show that 6 percent of users opt in to allow the platform to broadcast their location using global positioning technology with each tweet they post. That’s about 30 million people sending geo-tagged data out into the Twitterverse. In their tweets, people can choose whether their location is displayed as a city and state, an address, or their precise latitude and longitude.
That’s only part of their geospatial footprint. Information contained in a post may reveal a user’s location. Depending upon how the account is set up, profiles may include details about their hometown, time zone and language.”
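As a rough illustration of the footprint the study describes, here is a minimal sketch that pulls the location-bearing fields out of a single tweet object. The field names follow the v1.1-era Twitter REST API (coordinates, place, and the user profile); exact availability depends on API version and account settings:

```python
# Sketch: assemble a geospatial footprint from one tweet object (a parsed
# JSON dict as returned by Twitter's v1.1-era REST API).
def geospatial_footprint(tweet: dict) -> dict:
    footprint = {}

    # Precise GPS point, present only for users who opt in to geotagging.
    # GeoJSON stores the pair as [longitude, latitude].
    if tweet.get("coordinates"):
        lon, lat = tweet["coordinates"]["coordinates"]
        footprint["gps"] = (lat, lon)

    # Coarser place tag, e.g. a city and state the user chose to attach.
    if tweet.get("place"):
        footprint["place"] = tweet["place"].get("full_name")

    # Profile fields leak location hints even without geotagging.
    user = tweet.get("user", {})
    for field in ("location", "time_zone", "lang"):
        if user.get(field):
            footprint[field] = user[field]

    return footprint

# The article's arithmetic: 500M active users, 6 percent opted in.
print(f"{500_000_000 * 0.06:,.0f} accounts broadcasting coordinates")  # 30,000,000
```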

The Multistakeholder Model in Global Technology Governance: A Cross-Cultural Perspective


New APSA 2013 Annual Meeting Paper by Nanette S. Levinson: “This paper examines two key but often overlooked analytic dimensions related to global technology governance: the roles of culture and cross-cultural communication processes and the broader framework of Multistakeholderism. Each of these dimensions has a growing tradition of research/conceptual frameworks that can inform the analysis of Internet governance and related complex and power-related processes that may be present in a multistakeholder setting. The use of the term ‘multistakeholder’ itself has grown exponentially in discussing Internet governance and related governance domains; yet there are few rigorous studies within Internet governance that treat actual multistakeholder processes, especially from a cross-cultural and comparative perspective.
Using research on cross-cultural communication and related factors (at small group, occupational, organizational, interorganizational or cross-sector, national, and regional levels), this paper provides and applies an analytic framework, especially to the 2012 WCIT, the 2013 WSIS+10, and the World Telecommunication Policy Forum, that goes beyond the rhetoric of ‘multistakeholder’ as a term. It includes an examination of variables found to be important in studies from environmental governance, public administration, and private sector partnership domains, including trust, absorptive capacity, and power in knowledge transfer processes.”

The Global Database of Events, Language, and Tone (GDELT)


“The Global Database of Events, Language, and Tone (GDELT) is an initiative to construct a catalog of human societal-scale behavior and beliefs across all countries of the world over the last two centuries down to the city level globally, to make all of this data freely available for open research, and to provide daily updates to create the first “realtime social sciences earth observatory.” Nearly a quarter-billion georeferenced events capture global behavior in more than 300 categories covering 1979 to present with daily updates. GDELT is designed to help support new theories and descriptive understandings of the behaviors and driving forces of global-scale social systems, from the micro-level of the individual through the macro-level of the entire planet, by offering realtime synthesis of global societal-scale behavior into a rich quantitative database allowing realtime monitoring and analytical exploration of those trends.
GDELT’s goal is to help uncover previously-obscured spatial, temporal, and perceptual evolutionary trends through new forms of analysis of the vast textual repositories that capture global societal activity, from news and social media archives to knowledge repositories.”
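GDELT’s daily updates are plain tab-separated files, so a few lines suffice to pull one day of georeferenced events. A minimal sketch, assuming the GDELT 1.0 download URL pattern and the column positions given in its event codebook (verify both against gdeltproject.org before relying on them):

```python
# Sketch: fetch one GDELT 1.0 daily-update file and keep georeferenced events.
import io
import urllib.request
import zipfile

import pandas as pd

DAY = "20130901"  # hypothetical example date
URL = f"http://data.gdeltproject.org/events/{DAY}.export.CSV.zip"

# Assumed column positions, per the GDELT 1.0 event codebook.
COLS = {0: "event_id", 1: "date", 28: "root_code", 34: "avg_tone", 53: "lat", 54: "lon"}

raw = urllib.request.urlopen(URL).read()
with zipfile.ZipFile(io.BytesIO(raw)) as zf:
    events = pd.read_csv(
        zf.open(zf.namelist()[0]),
        sep="\t",
        header=None,
        usecols=list(COLS),
        dtype={28: str},  # CAMEO root codes are zero-padded strings
    ).rename(columns=COLS)

located = events.dropna(subset=["lat", "lon"])
protests = located[located["root_code"] == "14"]  # CAMEO root code 14 = protest
print(f"{len(located):,} located events, of which {len(protests):,} protests on {DAY}")
```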

Index: The Data Universe


The Living Library Index – inspired by the Harper’s Index – provides important statistics and highlights global trends in governance innovation. This installment focuses on the data universe and was originally published in 2013.

  • How much data exists in the digital universe as of 2012: 2.7 zettabytes*
  • Increase in the quantity of Internet data from 2005 to 2012: +1,696%
  • Percent of the world’s data created in the last two years: 90
  • Number of exabytes (=1 billion gigabytes) created every day in 2012: 2.5; that number doubles about every 40 months
  • Percent of the digital universe in 2005 created by the U.S. and western Europe vs. emerging markets: 48 vs. 20
  • Percent of the digital universe in 2012 created by emerging markets: 36
  • Percent of the digital universe in 2020 predicted to be created by China alone: 21
  • Share of information in the digital universe created and consumed by consumers (video, social media, photos, etc.) in 2012: 68%
  • Percent of that consumer-created information for which enterprises have liability or responsibility (copyright, privacy, compliance with regulations, etc.): 80
  • Amount included in the Obama Administration’s 2012 Big Data initiative: over $200 million
  • Amount the Department of Defense is investing annually on Big Data projects as of 2012: over $250 million
  • Data created per day in 2012: 2.5 quintillion bytes
  • How many terabytes* of data collected by the U.S. Library of Congress as of April 2011: 235
  • How many terabytes of data collected by Walmart per hour as of 2012: 2,560, or 2.5 petabytes*
  • Projected growth in global data generated per year, as of 2011: 40%
  • Number of IT jobs created globally by 2015 to support big data: 4.4 million (1.9 million in the U.S.)
  • Potential shortage of data scientists in the U.S. alone predicted for 2018: 140,000-190,000, in addition to 1.5 million managers and analysts with the know-how to use the analysis of big data to make effective decisions
  • Time needed to sequence the complete human genome (analyzing 3 billion base pairs) in 2003: ten years
  • Time needed in 2013: one week
  • The world’s annual effective capacity to exchange information through telecommunication networks in 1986, 2007, and (predicted) 2013: 281 petabytes, 65 exabytes, 667 exabytes
  • Projected share of digital information created annually that will either live in or pass through the cloud: 1/3
  • Increase in data collection volume year-over-year in 2012: 400%
  • Increase in number of individual data collectors from 2011 to 2012: nearly double (over 300 data collection parties in 2012)

*1 zettabyte = 1 billion terabytes | 1 petabyte = 1,000 terabytes | 1 terabyte = 1,000 gigabytes | 1 gigabyte = 1 billion bytes
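The footnote’s unit ladder is easy to encode, which makes a few of the index’s figures checkable by simple arithmetic; a minimal sketch:

```python
# Sanity-check the index's figures against the footnote's unit definitions.
BYTES = {"GB": 10**9, "TB": 10**12, "PB": 10**15, "EB": 10**18, "ZB": 10**21}

assert BYTES["ZB"] == 10**9 * BYTES["TB"]  # 1 zettabyte = 1 billion terabytes
assert BYTES["PB"] == 10**3 * BYTES["TB"]  # 1 petabyte = 1,000 terabytes

# "2.5 quintillion bytes per day" and "2.5 exabytes per day" are the same figure.
assert 2.5e18 == 2.5 * BYTES["EB"]

# Walmart: 2,560 TB per hour is indeed about 2.5 petabytes per hour.
print(2560 * BYTES["TB"] / BYTES["PB"], "PB per hour")  # -> 2.56
```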


Civic Innovation Fellowships Go Global


Some thoughts from Panthea Lee from Reboot: “In recent years, civic innovation fellowships have shown great promise to improve the relationships between citizens and government. In the United States, Code for America and the Presidential Innovation Fellows have demonstrated the positive impact a small group of technologists can have working hand-in-hand with government. With the launch of Code for All, Code for Europe, Code4Kenya, and Code4Africa, among others, the model is going global.
But despite the increasing popularity of civic innovation fellowships, there are few templates for how a “Code for” program can be adapted to a different context. In the US, the success of Code for America has drawn on a wealth of tech talent eager to volunteer skills, public and private support, and the active participation of municipal governments. Elsewhere, new “Code for” programs are surely going to have to operate within a different set of capacities and constraints.”

Five myths about big data


Samuel Arbesman, senior scholar at the Ewing Marion Kauffman Foundation and the author of “The Half-Life of Facts” in the Washington Post: “Big data holds the promise of harnessing huge amounts of information to help us better understand the world. But when talking about big data, there’s a tendency to fall into hyperbole. It is what compels contrarians to write such tweets as “Big Data, n.: the belief that any sufficiently large pile of s— contains a pony.” Let’s deflate the hype.
1. “Big data” has a clear definition.
The term “big data” has been in circulation since at least the 1990s, when it is believed to have originated in Silicon Valley. IBM offers a seemingly simple definition: Big data is characterized by the four V’s of volume, variety, velocity and veracity. But the term is thrown around so often, in so many contexts — science, marketing, politics, sports — that its meaning has become vague and ambiguous….
2. Big data is new.
By many accounts, big data exploded onto the scene quite recently. “If wonks were fashionistas, big data would be this season’s hot new color,” a Reuters report quipped last year. In a May 2011 report, the McKinsey Global Institute declared big data “the next frontier for innovation, competition, and productivity.”
It’s true that today we can mine massive amounts of data — textual, social, scientific and otherwise — using complex algorithms and computer power. But big data has been around for a long time. It’s just that exhaustive datasets were more exhausting to compile and study in the days when “computer” meant a person who performed calculations….
3. Big data is revolutionary.
In their new book, “Big Data: A Revolution That Will Transform How We Live, Work, and Think,” Viktor Mayer-Schonberger and Kenneth Cukier compare “the current data deluge” to the transformation brought about by the Gutenberg printing press.
If you want more precise advertising directed toward you, then yes, big data is revolutionary. Generally, though, it’s likely to have a modest and gradual impact on our lives….
4. Bigger data is better.
In science, some admittedly mind-blowing big-data analyses are being done. In business, companies are being told to “embrace big data before your competitors do.” But big data is not automatically better.
Really big datasets can be a mess. Unless researchers and analysts can reduce the number of variables and make the data more manageable, they get quantity without a whole lot of quality. Give me some quality medium data over bad big data any day…
5. Big data means the end of scientific theories.
Chris Anderson argued in a 2008 Wired essay that big data renders the scientific method obsolete: Throw enough data at an advanced machine-learning technique, and all the correlations and relationships will simply jump out. We’ll understand everything.
But you can’t just go fishing for correlations and hope they will explain the world. If you’re not careful, you’ll end up with spurious correlations. Even more important, to contend with the “why” of things, we still need ideas, hypotheses and theories. If you don’t have good questions, your results can be silly and meaningless.
Having more data won’t substitute for thinking hard, recognizing anomalies and exploring deep truths.”
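Arbesman’s warning about fishing for correlations is easy to demonstrate: among enough unrelated random series, some will correlate impressively with anything. A minimal sketch:

```python
# Demonstrate spurious correlation: screen many random "predictors" against a
# random target and report the best match, which is strong by chance alone.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_vars = 50, 1000

target = rng.normal(size=n_obs)                # the outcome we pretend to explain
predictors = rng.normal(size=(n_vars, n_obs))  # 1,000 meaningless series

best = max(abs(np.corrcoef(target, x)[0, 1]) for x in predictors)
print(f"strongest correlation found: {best:.2f}")  # typically ~0.5 here
# Impressive-looking, and entirely meaningless: no predictor is related to the
# target, so the "finding" is pure multiple-comparisons noise.
```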

Announcing Project Open Data from Cloudant Labs


Yuriy Dybskiy from Cloudant: “There has been an emerging pattern over the last few years of more and more government datasets becoming available for public access. Earlier this year, the White House announced official policy on such data – Project Open Data.

Available resources

Here are four resources on the topic:

  1. Tim Berners-Lee: Open, Linked Data for a Global Community – [10 min video]
  2. Rufus Pollock: Open Data – How We Got Here and Where We’re Going – [24 min video]
  3. Open Knowledge Foundation Datasets – http://data.okfn.org/data
  4. Max Ogden: Project dat – collaborative data – [github repo]

One of the main challenges is access to the datasets. If only there were a database that had easy access to its data baked right into it.
Luckily, there are CouchDB and Cloudant, which share the same HTTP API for accessing data. That makes them a really great option for storing interesting datasets.

Cloudant Open Data

Today we are happy to announce a Cloudant Labs project – Cloudant Open Data!
Several datasets are available at the moment, for example, businesses_sf (data regarding businesses registered in San Francisco) and sf_pd_incidents (a collection of incident reports, criminal and non-criminal, made by the San Francisco Police Department).
We’ll add more, but if you have one you’d like us to add faster, drop us a line at open-data@cloudant.com.
Create an account and play with these datasets yourself.”
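Because CouchDB and Cloudant expose databases over plain HTTP, reading one of these datasets needs nothing beyond a GET request. A minimal sketch: the account name below is hypothetical, while the businesses_sf database name comes from the announcement above.

```python
# Sketch: list documents from a Cloudant/CouchDB database via the standard
# _all_docs endpoint. Replace ACCOUNT with the real account hosting the data.
import json
import urllib.request

ACCOUNT = "examples"  # hypothetical Cloudant account name
url = (
    f"https://{ACCOUNT}.cloudant.com/businesses_sf/"
    "_all_docs?include_docs=true&limit=5"
)

with urllib.request.urlopen(url) as resp:
    body = json.load(resp)

print(f"{body['total_rows']} documents in businesses_sf; first five ids:")
for row in body["rows"]:
    print(" ", row["id"])
```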

Internet Governance is Our Shared Responsibility


New paper by Vint Cerf, Patrick Ryan and Max Senges: “This essay looks at the different roles that multistakeholder institutions play in the Internet governance ecosystem. We propose a model for thinking of Internet governance within the context of the Internet’s layered model. We use the example of the negotiations in Dubai in 2012 at the World Conference on International Telecommunications as an illustration of why it is important for different institutions within the governance system to focus on their respective areas of expertise (e.g., the ITU, ICANN, and IGF). Several areas of conflict (a “tussle”) are reviewed, such as the desire to promote more broadband infrastructure, a topic that is in the remit of the International Telecommunication Union, but also the recurring desire of countries like Russia and China to use the ITU to regulate content and restrict free expression on the Internet through onerous cybersecurity and spam provisions. We conclude that it is folly to try to regulate all these areas through an international treaty, and encourage further development of mechanisms for global debate like the Internet Governance Forum (IGF).”