Using the Wisdom of the Crowd to Democratize Markets


David Weidner at the Wall Street Journal: “For years investors have largely depended on three sources to distill the relentless onslaught of information about public companies: the companies themselves, Wall Street analysts and the media.
Each of these has its strengths, but they may have even bigger weaknesses. Companies spin. Analysts have conflicts of interest. The financial media is under deadline pressure and ill-equipped to act as a catch-all watchdog.
But in recent years, the tech whizzes out of Silicon Valley have been trying to democratize the markets. In 2010 I wrote about an effort called Moxy Vote, an online system for shareholders to cast ballots in proxy contests. Moxy Vote had some initial success but ran into regulatory trouble and failed to gain traction.
Some newer efforts are more promising, mostly because they depend on users, or some form of crowdsourcing, for their content. Crowdsourcing is when a need is turned over to a large group, usually an online community, rather than traditional paid employees or outside providers….
Estimize.com is one. It was founded in 2011 by former trader Leigh Drogen, but recently has undergone some significant expansion, adding a crowd-sourced prediction for mergers and acquisitions. Estimize also boasts a track record. It claims it beats Wall Street analysts 65.9% of the time during earnings season. Like SeekingAlpha, Estimize does, however, lean heavily on pros or semi-pros. Nearly 5,000 of its contributors are analysts.
Closer to the social networking world there’s scutify.com, a website and mobile app that aggregates what’s being said about individual stocks on social networks, blogs and other sources. It highlights trending stocks and links to chatter on social networks. (The site is owned by Cody Willard, a contributor to MarketWatch, which is owned by Dow Jones, the publisher of The Wall Street Journal.)
Perhaps the most intriguing startup is TwoMargins.com. The site allows investors, analysts, average Joes — anyone, really — to annotate company releases. In that way, Two Margins potentially can tap the power of the crowd to provide a fourth source for the marketplace.
Two Margins, a startup funded by Bloomberg L.P.’s venture capital fund, borrows annotation technology that’s already in use on other sites such as genius.com and scrible.com. Participants can sign in with their Twitter or Facebook accounts and post to those networks from the site. (Dow Jones competes with Bloomberg in the provision of news and financial data.)
At this moment, Two Margins isn’t a game changer. Founders Gniewko Lubecki and Akash Kapur said the site is in a pre-beta phase, which is to say it’s sort of up and running and being constantly tweaked.
Right now there’s nothing close to the critical mass needed for an exhaustive look at company filings. There’s just a handful of users and fewer than a dozen company releases and filings available.
Still, in the first moments after Twitter Inc.’s earnings were released Tuesday, Two Margins’ most loyal users began to scour the release. “Looks like Twitter is getting significantly better at monetizing users,” wrote a user named “George” who had annotated the revenue line from the company’s financial statement. Another user, “Scott Paster,” noted Twitter’s stock option grants to executives were nearly as high as its reported loss.
“The sum is greater than its parts when you pull together a community of users,” Mr. Kapur said. “Widening access to these documents is one goal. The other goal is broadening the pool of knowledge that’s brought to bear on these documents.”
In the end, this new wave of tech-driven services may never capture enough users to make it into the investing mainstream. They all struggle with uninformed and inaccurate content, especially if they gain critical mass. Vetting is a problem.
For those reasons, it’s hard to predict whether these new entries will flourish or even survive. That’s not a bad thing. The march of technology will either improve on the idea or come up with a new one.
Ultimately, technology is making possible what hasn’t been possible before: free discussion, access and analysis of information. Some may see it as a threat to Wall Street, which has always charged for expert analysis. Really, though, these efforts are good for markets, which pride themselves on being fair and transparent.
It’s not just companies that should compete, but ideas too.”

How Thousands Of Dutch Civil Servants Built A Virtual 'Government Square' For Online Collaboration


Federico Guerrini at Forbes: “Democracy needs a reboot, or as the founders of DemocracyOS, an open source platform for political debate, say, “a serious upgrade”. They are not alone in trying to change the way citizens and governments communicate with each other. Not long ago, I covered on this blog a Greek platform, VouliWatch, which aims at boosting civic engagement following the model of other similar initiatives in countries like Germany, France and Austria, all running thanks to a software called Parliament Watch.
Other decision-making tools used by activists and organizations that try to reduce the distance between the people and their representatives include Liquid Feedback and Airesis. But the quest for disintermediation isn’t limited to the relationship between governments and citizens: it’s changing the way public organisations work internally as well. Civil servants are starting to develop and use their own internal “social networks” to exchange ideas, discuss issues and collaborate on projects.
One such experiment is underway in the Netherlands: thousands of civil servants from across government organizations have built their own “intranet” using Pleio (“government square” in Dutch), a platform that runs on the open source networking engine Elgg.
It all started in 2010, thanks to the work of a group of four founders, Davied van Berlo, Harrie Custers, Wim Essers and Marcel Ziemerink. Growth has been steady and now Pleio can count on some 75,000 users spread across about 800 subsites. The nice thing about the platform, in fact, is that it is modular: subscribers can collaborate in a group and then start a subgroup to go into more depth with a smaller team. To learn a little more about this unique experience, I reached out to van Berlo, who kindly answered a few questions. Check the interview below.
Where did the Pleio idea come from? Were you inspired by other experiences?

The idea came mainly from the developments around us: the whole web 2.0 movement at the time. This has shown us the power of platforms to connect people, bring them together and let them cooperate. I noticed that civil servants were looking for ways of collaborating across organisational borders and many were using the new online tools. That’s why I started the Civil Servant 2.0 network, so they could exchange ideas and experiences in this new way of working.
However, these tools are not always the ideal solution. They’re commercial for one, which can get in the way of the public goals we work for. They’re often American, where other laws and practices apply. You can’t change them or add to them. Usually you have to get another tool (and login) for different functionalities. And they were outright forbidden by some government agencies. I noticed there was a need for a platform where different tools were integrated, where people from different organisations and outside government could work together and where all information would remain in the Netherlands and in the hands of the original owner. Since there was no such platform we started one of our own….”

Selected Readings on Sentiment Analysis


The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of sentiment analysis was originally published in 2014.

Sentiment Analysis is a field of Computer Science that uses techniques from natural language processing, computational linguistics, and machine learning to identify and extract subjective meaning from text. The term opinion mining is often used interchangeably with Sentiment Analysis, although it is technically a subfield focusing on the extraction of opinions (the umbrella under which sentiment, evaluation, appraisal, attitude, and emotion all lie).

The rise of Web 2.0 and increased information flow has led to growing interest in Sentiment Analysis — especially as applied to social networks and media. Events causing large spikes in media — such as the 2012 Presidential Election Debates — are especially ripe for analysis. Such analyses raise a variety of implications for the future of crowd participation, elections, and governance.
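As a concrete (if drastically simplified) illustration of the lexicon-based end of this field, the sketch below scores the polarity of a text against small hand-made word lists with a one-token negation rule. The word lists and the negation heuristic are stand-ins of my own for real resources such as SentiWordNet or the VADER lexicon, not part of any system described in these readings.

```python
# Minimal lexicon-based polarity scorer (illustrative only).
# The word lists are tiny stand-ins for real sentiment lexicons.
POSITIVE = {"good", "great", "excellent", "love", "improve", "transparent"}
NEGATIVE = {"bad", "poor", "noisy", "inaccurate", "loss", "wrong"}
NEGATORS = {"not", "no", "never"}

def polarity(text: str) -> float:
    """Score in [-1, 1]: > 0 leans positive, < 0 leans negative."""
    tokens = text.lower().split()
    score, flip = 0, 1
    for tok in tokens:
        word = tok.strip(".,!?;:\"'")
        if word in NEGATORS:
            flip = -1  # a negator inverts the next sentiment-bearing word
            continue
        if word in POSITIVE:
            score += flip
            flip = 1
        elif word in NEGATIVE:
            score -= flip
            flip = 1
    return score / max(len(tokens), 1)

print(polarity("The analysis was not bad, actually quite good"))  # prints 0.25
```

Real systems layer part-of-speech tagging, intensifiers, and supervised classifiers on top of (or instead of) such lexicons; this only shows the basic shape of the task.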

Annotated Selected Reading List (in alphabetical order)

Choi, Eunsol et al. “Hedge detection as a lens on framing in the GMO debates: a position paper.” Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics 13 Jul. 2012: 70-79. http://bit.ly/1wweftP

  • Understanding the ways in which participants in public discussions frame their arguments is important for understanding how public opinion is formed. This paper adopts the position that it is time for more computationally-oriented research on problems involving framing. In the interests of furthering that goal, the authors propose the following question: In the controversy regarding the use of genetically-modified organisms (GMOs) in agriculture, do pro- and anti-GMO articles differ in whether they choose to adopt a more “scientific” tone?
  • Prior work on the rhetoric and sociology of science suggests that hedging may distinguish popular-science text from text written by professional scientists for their colleagues. The paper proposes a detailed approach to studying whether hedge detection can be used to understand scientific framing in the GMO debates, and provides corpora to facilitate this study. Some of the preliminary analyses suggest that hedges occur less frequently in scientific discourse than in popular text, a finding that contradicts prior assertions in the literature.

Michael, Christina, Francesca Toni, and Krysia Broda. “Sentiment analysis for debates.” (Unpublished MSc thesis). Department of Computing, Imperial College London (2013). http://bit.ly/Wi86Xv

  • This project aims to expand on existing solutions used for automatic sentiment analysis on text in order to capture support/opposition and agreement/disagreement in debates. In addition, it looks at visualizing the classification results for enhancing the ease of understanding the debates and for showing underlying trends. Finally, it evaluates proposed techniques on an existing debate system for social networking.

Murakami, Akiko, and Rudy Raymond. “Support or oppose?: classifying positions in online debates from reply activities and opinion expressions.” Proceedings of the 23rd International Conference on Computational Linguistics: Posters 23 Aug. 2010: 869-875. https://bit.ly/2Eicfnm

  • In this paper, the authors propose a method for the task of identifying the general positions of users in online debates, i.e., whether they support or oppose the main topic of an online debate, by exploiting local information in their remarks within the debate. An online debate is a forum where each user posts an opinion on a particular topic while other users state their positions by posting their remarks within the debate. The supporting or opposing remarks are made by directly replying to the opinion, or indirectly to other remarks (to express local agreement or disagreement), which makes the task of identifying users’ general positions difficult.
  • A prior study has shown that a link-based method, which completely ignores the content of the remarks, can achieve higher accuracy for the identification task than methods based solely on the contents of the remarks. In this paper, it is shown that utilizing the textual content of the remarks into the link-based method can yield higher accuracy in the identification task.
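A toy version of the combination the authors describe — reply-graph structure plus textual agreement cues — might look like the following. The cue list, the "a reply flips the parent's stance" rule, and the assumption that root posts support the topic are simplifications of mine, not the paper's actual method.

```python
# Toy stance propagation over a reply graph (not the paper's algorithm).
# Assumption: a reply opposes its parent unless its text contains an
# agreement cue, in which case it inherits the parent's stance.
AGREE_CUES = {"agree", "exactly", "+1"}

def classify_stances(posts, replies):
    """posts: {post_id: text}; replies: {child_id: parent_id}.
    Root posts (no parent) are assumed to support the debate topic."""
    stance = {}

    def resolve(pid):
        if pid in stance:
            return stance[pid]
        if pid not in replies:  # root post
            stance[pid] = "support"
            return "support"
        parent = resolve(replies[pid])
        agrees = any(cue in posts[pid].lower() for cue in AGREE_CUES)
        if agrees:
            stance[pid] = parent
        else:
            stance[pid] = "oppose" if parent == "support" else "support"
        return stance[pid]

    for pid in posts:
        resolve(pid)
    return stance

posts = {1: "We should adopt X", 2: "I agree completely", 3: "That is wrong"}
replies = {2: 1, 3: 1}
print(classify_stances(posts, replies))
```

The paper's contribution is precisely that blending such textual signals into the link-based method beats either signal alone; this sketch only illustrates how the two sources of evidence can interact.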

Pang, Bo, and Lillian Lee. “Opinion mining and sentiment analysis.” Foundations and trends in information retrieval 2.1-2 (2008): 1-135. http://bit.ly/UaCBwD

  • This survey covers techniques and approaches that promise to directly enable opinion-oriented information-seeking systems. Its focus is on methods that seek to address the new challenges raised by sentiment-aware applications, as compared to those that are already present in more traditional fact-based analysis. It includes material on summarization of evaluative text and on broader issues regarding privacy, manipulation, and economic impact that the development of opinion-oriented information-access services gives rise to. To facilitate future work, a discussion of available resources, benchmark datasets, and evaluation campaigns is also provided.

Ranade, Sarvesh et al. “Online debate summarization using topic directed sentiment analysis.” Proceedings of the Second International Workshop on Issues of Sentiment Discovery and Opinion Mining 11 Aug. 2013: 7. http://bit.ly/1nbKtLn

  • Social networking sites provide users a virtual community interaction platform to share their thoughts, life experiences and opinions. Online debate forums are one such platform, where people can take a stance and argue in support or opposition of debate topics. An important feature of such forums is that they are dynamic and grow rapidly. In such situations, effective opinion summarization approaches are needed so that readers need not go through the entire debate.
  • This paper aims to summarize online debates by extracting highly topic-relevant and sentiment-rich sentences. The proposed approach takes into account topic-relevant, document-relevant and sentiment-based features to capture topic opinionated sentences. ROUGE (Recall-Oriented Understudy for Gisting Evaluation, a set of metrics and a software package for comparing an automatically produced summary or translation against human-produced ones) scores are used to evaluate the system. The system significantly outperforms several baseline systems and shows improvement over the state-of-the-art opinion summarization system. The results verify that topic-directed sentiment features are most important for generating effective debate summaries.
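ROUGE-1 recall, the simplest member of the ROUGE family, can be sketched as unigram overlap normalized by reference length. The version below omits stemming, stopword handling, and multi-reference aggregation that the full toolkit supports:

```python
from collections import Counter

def rouge1_recall(candidate: str, reference: str) -> float:
    """Fraction of the reference's unigrams covered by the candidate
    (clipped counts), i.e. simplified ROUGE-1 recall."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[w], ref[w]) for w in ref)
    return overlap / max(sum(ref.values()), 1)

print(rouge1_recall("the debate favors reform",
                    "the debate strongly favors reform"))  # prints 0.8
```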

Schneider, Jodi. “Automated argumentation mining to the rescue? Envisioning argumentation and decision-making support for debates in open online collaboration communities.” http://bit.ly/1mi7ztx

  • Argumentation mining, a relatively new area of discourse analysis, involves automatically identifying and structuring arguments. Following a basic introduction to argumentation, the authors describe a new possible domain for argumentation mining: debates in open online collaboration communities.
  • Based on their experience with manual annotation of arguments in debates, the authors propose argumentation mining as the basis for three kinds of support tools: for authoring more persuasive arguments, finding weaknesses in others’ arguments, and summarizing a debate’s overall conclusions.

Selected Readings on Crowdsourcing Expertise


The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of crowdsourcing was originally published in 2014.

Crowdsourcing enables leaders and citizens to work together to solve public problems in new and innovative ways. New tools and platforms enable citizens with differing levels of knowledge, expertise, experience and abilities to collaborate and solve problems together. Identifying experts, or individuals with specialized skills, knowledge or abilities with regard to a specific topic, and incentivizing their participation in crowdsourcing information, knowledge or experience to achieve a shared goal can enhance the efficiency and effectiveness of problem solving.

Annotated Selected Reading List (in alphabetical order)

Börner, Katy, Michael Conlon, Jon Corson-Rikert, and Ying Ding. “VIVO: A Semantic Approach to Scholarly Networking and Discovery.” Synthesis Lectures on the Semantic Web: Theory and Technology 2, no. 1 (October 17, 2012): 1–178. http://bit.ly/17huggT.

  • This e-book “provides an introduction to VIVO…a tool for representing information about research and researchers — their scholarly works, research interests, and organizational relationships.”
  • VIVO is a response to the fact that, “Information for scholars — and about scholarly activity — has not kept pace with the increasing demands and expectations. Information remains siloed in legacy systems and behind various access controls that must be licensed or otherwise negotiated before access. Information representation is in its infancy. The raw material of scholarship — the data and information regarding previous work — is not available in common formats with common semantics.”
  • Providing access to structured information on the work and experience of a diversity of scholars enables improved expert finding — “identifying and engaging experts whose scholarly works is of value to one’s own. To find experts, one needs rich data regarding one’s own work and the work of potential related experts. The authors argue that expert finding is of increasing importance since, “[m]ulti-disciplinary and inter-disciplinary investigation is increasingly required to address complex problems.”

Bozzon, Alessandro, Marco Brambilla, Stefano Ceri, Matteo Silvestri, and Giuliano Vesci. “Choosing the Right Crowd: Expert Finding in Social Networks.” In Proceedings of the 16th International Conference on Extending Database Technology, 637–648. EDBT  ’13. New York, NY, USA: ACM, 2013. http://bit.ly/18QbtY5.

  • This paper explores the challenge of selecting experts within the population of social networks by considering the following problem: “given an expertise need (expressed for instance as a natural language query) and a set of social network members, who are the most knowledgeable people for addressing that need?”
  • The authors come to the following conclusions:
    • “profile information is generally less effective than information about resources that they directly create, own or annotate;
    • resources which are produced by others (resources appearing on the person’s Facebook wall or produced by people that she follows on Twitter) help increasing the assessment precision;
    • Twitter appears the most effective social network for expertise matching, as it very frequently outperforms all other social networks (either combined or alone);
    • Twitter appears as well very effective for matching expertise in domains such as computer engineering, science, sport, and technology & games, but Facebook is also very effective in fields such as locations, music, sport, and movies & tv;
    • surprisingly, LinkedIn appears less effective than other social networks in all domains (including computer science) and overall.”

Brabham, Daren C. “The Myth of Amateur Crowds.” Information, Communication & Society 15, no. 3 (2012): 394–410. http://bit.ly/1hdnGJV.

  • Unlike most of the related literature, this paper focuses on bringing attention to the expertise already being tapped by crowdsourcing efforts rather than determining ways to identify more dormant expertise to improve the results of crowdsourcing.
  • Brabham comes to two central conclusions: “(1) crowdsourcing is discussed in the popular press as a process driven by amateurs and hobbyists, yet empirical research on crowdsourcing indicates that crowds are largely self-selected professionals and experts who opt-in to crowdsourcing arrangements; and (2) the myth of the amateur in crowdsourcing ventures works to label crowds as mere hobbyists who see crowdsourcing ventures as opportunities for creative expression, as entertainment, or as opportunities to pass the time when bored. This amateur/hobbyist label then undermines the fact that large amounts of real work and expert knowledge are exerted by crowds for relatively little reward and to serve the profit motives of companies.”

Dutton, William H. Networking Distributed Public Expertise: Strategies for Citizen Sourcing Advice to Government. One of a Series of Occasional Papers in Science and Technology Policy, Science and Technology Policy Institute, Institute for Defense Analyses, February 23, 2011. http://bit.ly/1c1bpEB.

  • In this paper, a case is made for more structured and well-managed crowdsourcing efforts within government. Specifically, the paper “explains how collaborative networking can be used to harness the distributed expertise of citizens, as distinguished from citizen consultation, which seeks to engage citizens — each on an equal footing.” Instead of looking for answers from an undefined crowd, Dutton proposes “networking the public as advisors” by seeking to “involve experts on particular public issues and problems distributed anywhere in the world.”
  • Dutton argues that expert-based crowdsourcing can be used successfully by government for a number of reasons:
    • Direct communication with a diversity of independent experts
    • The convening power of government
    • Compatibility with open government and open innovation
    • Synergy with citizen consultation
    • Building on experience with paid consultants
    • Speed and urgency
    • Centrality of documents to policy and practice.
  • He also proposes a nine-step process for government to foster bottom-up collaboration networks:
    • Do not reinvent the technology
    • Focus on activities, not the tools
    • Start small, but capable of scaling up
    • Modularize
    • Be open and flexible in finding and going to communities of experts
    • Do not concentrate on one approach to all problems
    • Cultivate the bottom-up development of multiple projects
    • Experience networking and collaborating — be a networked individual
    • Capture, reward, and publicize success.

Goel, Gagan, Afshin Nikzad and Adish Singla. “Matching Workers with Tasks: Incentives in Heterogeneous Crowdsourcing Markets.” Under review by the International World Wide Web Conference (WWW). 2014. http://bit.ly/1qHBkdf

  • Combining the notions of crowdsourcing expertise and crowdsourcing tasks, this paper focuses on the challenge within platforms like Mechanical Turk related to intelligently matching tasks to workers.
  • The authors’ call for more strategic assignment of tasks in crowdsourcing markets is based on the understanding that “each worker has certain expertise and interests which define the set of tasks she can and is willing to do.”
  • Focusing on developing meaningful incentives based on varying levels of expertise, the authors sought to create a mechanism that, “i) is incentive compatible in the sense that it is truthful for agents to report their true cost, ii) picks a set of workers and assigns them to the tasks they are eligible for in order to maximize the utility of the requester, iii) makes sure total payments made to the workers doesn’t exceed the budget of the requester.”

Gubanov, D., N. Korgin, D. Novikov and A. Kalkov. E-Expertise: Modern Collective Intelligence. Springer, Studies in Computational Intelligence 558, 2014. http://bit.ly/U1sxX7

  • In this book, the authors focus on “organization and mechanisms of expert decision-making support using modern information and communication technologies, as well as information analysis and collective intelligence technologies (electronic expertise or simply e-expertise).”
  • The book, which “addresses a wide range of readers interested in management, decision-making and expert activity in political, economic, social and industrial spheres,” is broken into five chapters:
    • Chapter 1 (E-Expertise) discusses the role of e-expertise in decision-making processes. The procedures of e-expertise are classified, their benefits and shortcomings are identified, and the efficiency conditions are considered.
    • Chapter 2 (Expert Technologies and Principles) provides a comprehensive overview of modern expert technologies. A special emphasis is placed on the specifics of e-expertise. Moreover, the authors study the feasibility and reasonability of employing well-known methods and approaches in e-expertise.
    • Chapter 3 (E-Expertise: Organization and Technologies) describes some examples of up-to-date technologies to perform e-expertise.
    • Chapter 4 (Trust Networks and Competence Networks) deals with the problems of expert finding and grouping by information and communication technologies.
    • Chapter 5 (Active Expertise) treats the problem of expertise stability against any strategic manipulation by experts or coordinators pursuing individual goals.

Holst, Cathrine. “Expertise and Democracy.” ARENA Report No 1/14, Center for European Studies, University of Oslo. http://bit.ly/1nm3rh4

  • This report contains a set of 16 papers focused on the concept of “epistocracy,” meaning the “rule of knowers.” The papers inquire into the role of knowledge and expertise in modern democracies and especially in the European Union (EU). Major themes are: expert-rule and democratic legitimacy; the role of knowledge and expertise in EU governance; and the European Commission’s use of expertise.
    • Expert-rule and democratic legitimacy
      • Papers within this theme concentrate on issues such as the “implications of modern democracies’ knowledge and expertise dependence for political and democratic theory.” Topics include the accountability of experts, the legitimacy of expert arrangements within democracies, the role of evidence in policy-making, how expertise can be problematic in democratic contexts, and “ethical expertise” and its place in epistemic democracies.
    • The role of knowledge and expertise in EU governance
      • Papers within this theme concentrate on “general trends and developments in the EU with regard to the role of expertise and experts in political decision-making, the implications for the EU’s democratic legitimacy, and analytical strategies for studying expertise and democratic legitimacy in an EU context.”
    • The European Commission’s use of expertise
      • Papers within this theme concentrate on how the European Commission uses expertise and in particular the European Commission’s “expert group system.” Topics include the European Citizen’s Initiative, analytic-deliberative processes in EU food safety, the operation of EU environmental agencies, and the autonomy of various EU agencies.

King, Andrew and Karim R. Lakhani. “Using Open Innovation to Identify the Best Ideas.” MIT Sloan Management Review, September 11, 2013. http://bit.ly/HjVOpi.

  • In this paper, King and Lakhani examine different methods for opening innovation, where, “[i]nstead of doing everything in-house, companies can tap into the ideas cloud of external expertise to develop new products and services.”
  • The three types of open innovation discussed are: opening the idea-creation process (competitions where prizes are offered and designers bid with possible solutions); opening the idea-selection process (‘approval contests’ in which outsiders vote to determine which entries should be pursued); and opening both idea generation and selection (an option used especially by organizations focused on quickly changing needs).

Long, Chengjiang, Gang Hua and Ashish Kapoor. Active Visual Recognition with Expertise Estimation in Crowdsourcing. 2013 IEEE International Conference on Computer Vision. December 2013. http://bit.ly/1lRWFur.

  • This paper is focused on improving the crowdsourced labeling of visual datasets from platforms like Mechanical Turk. The authors note that, “Although it is cheap to obtain large quantity of labels through crowdsourcing, it has been well known that the collected labels could be very noisy. So it is desirable to model the expertise level of the labelers to ensure the quality of the labels. The higher the expertise level a labeler is at, the lower the label noises he/she will produce.”
  • Based on the need for identifying expert labelers upfront, the authors developed an “active classifier learning system which determines which users to label which unlabeled examples” from collected visual datasets.
  • The researchers’ experiments in identifying expert visual dataset labelers led to findings demonstrating that the “active selection” of expert labelers is beneficial in cutting through the noise of crowdsourcing platforms.

Noveck, Beth Simone. “’Peer to Patent’: Collective Intelligence, Open Review, and Patent Reform.” Harvard Journal of Law & Technology 20, no. 1 (Fall 2006): 123–162. http://bit.ly/HegzTT.

  • This law review article introduces the idea of crowdsourcing expertise to mitigate the challenge of patent processing. Noveck argues that, “access to information is the crux of the patent quality problem. Patent examiners currently make decisions about the grant of a patent that will shape an industry for a twenty-year period on the basis of a limited subset of available information. Examiners may neither consult the public, talk to experts, nor, in many cases, even use the Internet.”
  • Peer-to-Patent, which launched three years after this article, is based on the idea that, “The new generation of social software might not only make it easier to find friends but also to find expertise that can be applied to legal and policy decision-making. This way, we can improve upon the Constitutional promise to promote the progress of science and the useful arts in our democracy by ensuring that only worthy ideas receive that ‘odious monopoly’ of which Thomas Jefferson complained.”

Ober, Josiah. “Democracy’s Wisdom: An Aristotelian Middle Way for Collective Judgment.” American Political Science Review 107, no. 01 (2013): 104–122. http://bit.ly/1cgf857.

  • In this paper, Ober argues that, “A satisfactory model of decision-making in an epistemic democracy must respect democratic values, while advancing citizens’ interests, by taking account of relevant knowledge about the world.”
  • Ober describes an approach to decision-making that aggregates expertise across multiple domains. This “Relevant Expertise Aggregation (REA) enables a body of minimally competent voters to make superior choices among multiple options, on matters of common interest.”

Sims, Max H., Jeffrey Bigham, Henry Kautz and Marc W. Halterman. “Crowdsourcing medical expertise in near real time.” Journal of Hospital Medicine 9, no. 7, July 2014. http://bit.ly/1kAKvq7.

  • In this article, the authors discuss the development of a mobile application called DocCHIRP, which was developed due to the fact that, “although the Internet creates unprecedented access to information, gaps in the medical literature and inefficient searches often leave healthcare providers’ questions unanswered.”
  • The DocCHIRP pilot project used a “system of point-to-multipoint push notifications designed to help providers problem solve by crowdsourcing from their peers.”
  • Healthcare providers (HCPs) sought to gain intelligence from the crowd, which included 85 registered users, on questions related to medication, complex medical decision making, standard of care, administrative issues, testing and referrals.
  • The authors believe that, “if future iterations of the mobile crowdsourcing applications can address…adoption barriers and support the organic growth of the crowd of HCPs,” then “the approach could have a positive and transformative effect on how providers acquire relevant knowledge and care for patients.”

Spina, Alessandro. “Scientific Expertise and Open Government in the Digital Era: Some Reflections on EFSA and Other EU Agencies.” in Foundations of EU Food Law and Policy, eds. A. Alemmano and S. Gabbi. Ashgate, 2014. http://bit.ly/1k2EwdD.

  • In this paper, Spina “presents some reflections on how the collaborative and crowdsourcing practices of Open Government could be integrated in the activities of EFSA [European Food Safety Authority] and other EU agencies,” with a particular focus on “highlighting the benefits of the Open Government paradigm for expert regulatory bodies in the EU.”
  • Spina argues that the “crowdsourcing of expertise and the reconfiguration of the information flows between European agencies and the public could represent a concrete possibility of modernising the role of agencies with a new model that has a low financial burden and an almost immediate effect on the legal governance of agencies.”
  • He concludes that, “It is becoming evident that in order to guarantee that the best scientific expertise is provided to EU institutions and citizens, EFSA should strive to use the best organisational models to source science and expertise.”

GitHub: A Swiss Army knife for open government


FCW: “Today, more than 300 government agencies are using the platform for public and private development. Cities (Chicago, Philadelphia, San Francisco), states (New York, Washington, Utah) and countries (United Kingdom, Australia) are sharing code and paving a new road to civic collaboration….

In addition to a rapidly growing code collection, the General Services Administration’s new IT development shop has created a “/Developer program” to “provide comprehensive support for any federal agency engaged in the production or use of APIs.”
The Consumer Financial Protection Bureau has built a full-blown website on GitHub to showcase the software and design work its employees are doing.
Most of the White House’s repos relate to Drupal-driven websites, but the Obama administration has also shared its iOS and Android apps, which together have been forked nearly 400 times.

Civic-focused organizations — such as the OpenGov Foundation, the Sunlight Foundation and the Open Knowledge Foundation — are also actively involved with original projects on GitHub. Those projects include the OpenGov Foundation’s Madison document-editing tool touted by the likes of Rep. Darrell Issa (R-Calif.) and the Open Knowledge Foundation’s CKAN, which powers hundreds of government data platforms around the world.
According to GovCode, an aggregator of public government open-source projects hosted on GitHub, there have been hundreds of individual contributors and nearly 90,000 code commits, which involve making a set of tentative changes permanent.
The nitty-gritty
Getting started on GitHub is similar to the process for other social networking platforms. Users create individual accounts and can set up “organizations” for agencies or cities. They can then create repositories (or repos) to collaborate on projects through an individual or organizational account. Other developers or organizations can download repo code for reuse or repurpose it in their own repositories (called forking), and make it available to others to do the same.
Collaborative aspects of GitHub include pull requests that allow developers to submit and accept updates to repos that build on and grow an open-source project. There are wikis, gists (code snippet sharing) and issue tracking for bugs, feature requests, or general questions and answers.
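As a sketch of how a civic developer might tap that openness programmatically: GitHub’s public REST API lists an organization’s repositories at `https://api.github.com/orgs/<org>/repos`. To stay self-contained, the snippet below parses a canned, hypothetical response (repo names and star counts are invented) rather than making a live HTTP call, and separates an agency’s original projects from repos it forked from others.

```python
import json

# Hypothetical excerpt of the JSON GitHub's API returns for
# GET https://api.github.com/orgs/<org>/repos (fields abbreviated;
# "name", "fork", and "stargazers_count" are real API fields).
sample_response = json.dumps([
    {"name": "project-open-data", "fork": False, "stargazers_count": 920},
    {"name": "api-standards", "fork": False, "stargazers_count": 640},
    {"name": "ckan-deploy", "fork": True, "stargazers_count": 12},
])

repos = json.loads(sample_response)

# Original projects vs. forks of someone else's repository.
originals = [r["name"] for r in repos if not r["fork"]]
forks = [r["name"] for r in repos if r["fork"]]

print("original projects:", originals)
print("forked projects:", forks)
```

A real inventory would simply fetch that URL (the endpoint is paginated) instead of using the canned string.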
GitHub provides free code hosting for all public repos. Upgrade offerings include personal and organizational plans based on the number of private repos. For organizations that want to run GitHub in their own environment, GitHub Enterprise, used by the likes of CFPB, allows for self-hosted, private repos behind a firewall.
GitHub’s core user interface can be unwelcoming or even intimidating to the nondeveloper, but GitHub’s Pages package offers Web-hosting features that include domain mapping and lightweight content management tools such as static site generator Jekyll and text editor Atom.
Notable government projects that use Pages are the White House’s Project Open Data, 18F’s /Developer Program, CFPB’s Open Tech website and New York’s Open Data Handbook. Indeed, Wired recently commented that the White House’s open-data GitHub efforts “could help fix government.”…
See also: GitHub for Government (GovLab)

Twiplomacy Study 2014


Twiplomacy: “World leaders vie for attention, connections and followers on Twitter. That is the latest finding of Burson-Marsteller’s Twiplomacy study 2014, an annual global study looking at the use of Twitter by heads of state and government and ministers of foreign affairs.
While some heads of state and government continue to amass large followings, foreign ministers have established a virtual diplomatic network by following each other on the social media platform.
For many diplomats Twitter has become a powerful channel for digital diplomacy and 21st-century statecraft. Not all Twitter exchanges are diplomatic, however; real-world differences spill over onto Twitter and sometimes end up in hashtag wars.
“I am a firm believer in the power of technology and social media to communicate with people across the world,” India’s new Prime Minister Narendra Modi wrote in the inaugural message on his new website. Within weeks of his election in May 2014, the @NarendraModi account moved into the top four most followed Twitter accounts of world leaders, with close to five million followers.
More than half of the world’s foreign ministers and their institutions are active on the social networking site. Twitter has become an indispensable diplomatic networking and communication tool. As Finnish Prime Minister @AlexStubb wrote in a tweet in March 2014: “Most people who criticize Twitter are often not on it. I love this place. Best source of info. Great way to stay tuned and communicate.”
As of 25 June 2014, the vast majority (83 percent) of the 193 UN member countries have a presence on Twitter. More than two-thirds (68 percent) of all heads of state and heads of government have personal accounts on the social network.

Most Followed World Leaders

Since his election in late May 2014, India’s new Prime Minister @NarendraModi has skyrocketed into fourth place, surpassing the @WhiteHouse on 25 June 2014 and dropping Turkey’s President Abdullah Gül (@cbabdullahgul) and Prime Minister Recep Tayyip Erdoğan (@RT_Erdogan), who have more than 4 million followers each, into sixth and seventh place.
Twiplomacy - Top 50 Most Followed
Modi still has a ways to go to best U.S. President @BarackObama, who tops the world-leader list with a colossal 43.7 million followers, ahead of Pope Francis (@Pontifex), with 14 million followers across his nine different language accounts, and Indonesia’s President Susilo Bambang Yudhoyono (@SBYudhoyono), who has more than five million followers and surpassed President Obama’s official administration account @WhiteHouse on 13 February 2014.
In Latin America, Argentina’s President Cristina Fernández de Kirchner (@CFKArgentina) is slightly ahead of Colombia’s President @JuanManSantos, with 2,894,864 and 2,885,752 followers respectively. Mexico’s President Enrique Peña Nieto @EPN, Brazil’s Dilma Rousseff @dilmabr and Venezuela’s @NicolasMaduro complete the Latin American top five, with more than two million followers each.
Kenya’s Uhuru Kenyatta @UKenyatta is Africa’s most followed president with 457,307 followers, ahead of Rwanda’s @PaulKagame (407,515 followers) and South Africa’s Jacob Zuma (@SAPresident) (325,876 followers).
Turkey’s @Ahmet_Davutoglu is the most followed foreign minister with 1,511,772 followers, ahead of India’s @SushmaSwaraj (1,274,704 followers) and the Foreign Minister of the United Arab Emirates @ABZayed (1,201,364 followers)…”

Index: The Networked Public


The Living Library Index – inspired by the Harper’s Index – provides important statistics and highlights global trends in governance innovation. This installment focuses on the networked public and was originally published in 2014.

Global Overview

  • The proportion of global population who use the Internet in 2013: 38.8%, up 3 percentage points from 2012
  • Increase in average global broadband speeds from 2012 to 2013: 17%
  • Percent of internet users surveyed globally that access the internet at least once a day in 2012: 96
  • Hours spent online in 2012 each month across the globe: 35 billion
  • Country with the highest online population, as a percent of total population in 2012: United Kingdom (85%)
  • Country with the lowest online population, as a percent of total population in 2012: India (8%)
  • Trend with the highest growth rate in 2012: Location-based services (27%)
  • Years to reach 50 million users: telephone (75), radio (38), TV (13), internet (4)

Growth Rates in 2014

  • Rate at which the total number of Internet users is growing: less than 10% a year
  • Worldwide annual smartphone growth: 20%
  • Tablet growth: 52%
  • Mobile phone growth: 81%
  • Percentage of all mobile users who are now smartphone users: 30%
  • Amount of all web usage in 2013 accounted for by mobile: 14%
  • Amount of all web usage in 2014 accounted for by mobile: 25%
  • Percentage of money spent on mobile used for app purchases: 68%
  • Growth in Bitcoin wallets between 2013 and 2014: an 8-fold increase
  • Number of listings on AirBnB in 2014: 550k, 83% growth year on year
  • How many buyers are on Alibaba in 2014: 231MM buyers, 44% growth year on year

Social Media

  • Number of Whatsapp messages on average sent per day: 50 billion
  • Number sent per day on Snapchat: 1.2 billion
  • How many restaurants are registered on GrubHub in 2014: 29,000
  • Amount the sale of digital songs fell in 2013: 6%
  • How much song streaming grew in 2013: 32%
  • Number of photos uploaded and shared every day on Flickr, Snapchat, Instagram, Facebook and Whatsapp combined in 2014: 1.8 billion
  • How many online adults in the U.S. use a social networking site of some kind: 73%
  • Those who use multiple social networking sites: 42%
  • Dominant social networking platform: Facebook, with 71% of online adults
  • Number of Facebook users in 2004, its founding year: 1 million
  • Number of monthly active users on Facebook in September 2013: 1.19 billion, an 18% increase year-over-year
  • How many Facebook users log in to the site daily: 63%
  • Instagram users who log into the service daily: 57%
  • Twitter users who are daily visitors: 46%
  • Number of photos uploaded to Facebook every minute: over 243,000, up 16% from 2012
  • How much of the global internet population is actively using Twitter every month: 21%
  • Number of tweets per minute: 350,000, up 250% from 2012
  • Fastest growing demographic on Twitter: 55-64 year age bracket, up 79% from 2012
  • Fastest growing demographic on Facebook: 45-54 year age bracket, up 46% from 2012
  • How many LinkedIn accounts are created every minute: 120, up 20% from 2012
  • The number of Google searches in 2013: 3.5 million, up 75% from 2012
  • Percent of internet users surveyed globally that use social media in 2012: 90
  • Percent of internet users surveyed globally that use social media daily: 60
  • Time spent social networking, the most popular online activity: 22%, followed by searches (21%), reading content (20%), and emails/communication (19%)
  • The average age at which a child acquires an online presence through their parents in 10 mostly Western countries: six months
  • Share of children in those countries who have a digital footprint by age 2: 81%
  • How many new American marriages between 2005-2012 began by meeting online, according to a nationally representative study: more than one-third 
  • How many of the world’s 505 leaders are on Twitter: 3/4
  • Combined Twitter followers of 505 world leaders: 106 million
  • Combined Twitter followers of Justin Bieber, Katy Perry, and Lady Gaga: 122 million
  • How many times all Wikipedias are viewed per month: nearly 22 billion times
  • How many hits per second: more than 8,000 
  • English Wikipedia’s share of total page views: 47%
  • Number of articles in the English Wikipedia in December 2013: over 4,395,320 
  • Platform that reaches more U.S. adults between ages 18-34 than any cable network: YouTube
  • Number of unique users who visit YouTube each month: more than 1 billion
  • How many hours of video are watched on YouTube each month: over 6 billion, 50% more than 2012
  • Proportion of YouTube traffic that comes from outside the U.S.: 80%
  • Most common activity online, based on an analysis of over 10 million web users: social media
  • People on Twitter who recommend products in their tweets: 53%
  • People who trust online recommendations from people they know: 90%

Mobile and the Internet of Things

  • Number of global smartphone users in 2013: 1.5 billion
  • Number of global mobile phone users in 2013: over 5 billion
  • Percent of U.S. adults that have a cell phone in 2013: 91
  • Portion of those that are smartphones: almost two-thirds
  • Mobile Facebook users in March 2013: 751 million, 54% increase since 2012
  • Global mobile traffic as a percentage of global internet traffic as of May 2013: 15%, up from 0.9% in 2009
  • How many smartphone owners ages 18–44 “keep their phone with them for all but two hours of their waking day”: 79%
  • Those who reach for their smartphone immediately upon waking up: 62%
  • Those who couldn’t recall a time their phone wasn’t within reach or in the same room: 1 in 4
  • Facebook users who access the service via a mobile device: 73.44%
  • Those who are “mobile only”: 189 million
  • Amount of YouTube’s global watch time that is on mobile devices: almost 40%
  • Number of objects connected globally in the “internet of things” in 2012: 8.7 billion
  • Number of connected objects so far in 2013: over 10 billion
  • Years from tablet introduction for tablets to surpass desktop PC and notebook shipments: less than 3 (over 55 million global units shipped in 2013, vs. 45 million notebooks and 35 million desktop PCs)
  • Number of wearable devices estimated to have been shipped worldwide in 2011: 14 million
  • Projected number of wearable devices in 2016: between 39-171 million
  • How much of the wearable technology market is in the healthcare and medical sector in 2012: 35.1%
  • How many devices in the wearable tech market are fitness or activity trackers: 61%
  • The value of the global wearable technology market in 2012: $750 million
  • The forecasted value of the market in 2018: $5.8 billion
  • How many Americans are aware of wearable tech devices in 2013: 52%
  • Devices that have the highest level of awareness: wearable fitness trackers
  • Level of awareness for wearable fitness trackers amongst American consumers: 1 in 3
  • Value of digital fitness category in 2013: $330 million
  • How many American consumers surveyed are aware of smart glasses: 29%
  • Smart watch awareness amongst those surveyed: 36%

Access

  • How much of the developed world has mobile broadband subscriptions in 2013: 3/4
  • How much of the developing world has broadband subscriptions in 2013: 1/5
  • Percent of U.S. adults that had a laptop in 2012: 57
  • How many American adults did not use the internet at home, at work, or via mobile device in 2013: one in five
  • Amount President Obama initiated spending in 2009 in an effort to expand access: $7 billion
  • Number of Americans potentially shut off from jobs, government services, health care and education, among other opportunities due to digital inequality: 60 million
  • American adults with a high-speed broadband connection at home as of May 2013: 7 out of 10
  • Americans aged 18-29 vs. 65+ with a high-speed broadband connection at home as of May 2013: 80% vs. 43%
  • American adults with college education (or more) vs. adults with no high school diploma that have a high-speed broadband connection at home as of May 2013: 89% vs. 37%
  • Percent of U.S. adults with college education (or more) that use the internet in 2011: 94
  • Those with no high school diploma that used the internet in 2011: 43
  • Percent of white American households that used the internet in 2013: 67
  • Black American households that used the internet in 2013: 57
  • States with lowest internet use rates in 2013: Mississippi, Alabama and Arkansas
  • How many American households have only wireless telephones as of the second half of 2012: nearly two in five
  • States with the highest prevalence of wireless-only adults according to predictive modeling estimates: Idaho (52.3%), Mississippi (49.4%), Arkansas (49%)
  • Those with the lowest prevalence of wireless-only adults: New Jersey (19.4%), Connecticut (20.6%), Delaware (23.3%) and New York (23.5%)


The Promise of a New Internet


Adrienne Lafrance in the Atlantic: “People tend to talk about the Internet the way they talk about democracy—optimistically, and in terms that describe how it ought to be rather than how it actually is.

This idealism is what buoys much of the network neutrality debate, and yet many of what are considered to be the core issues at stake—like payment for tiered access, for instance—have already been decided. For years, Internet advocates have been asking what regulatory measures might help save the open, innovation-friendly Internet.
But increasingly, another question comes up: What if there were a technical solution instead of a regulatory one? What if the core architecture of how people connect could make an end run on the centralization of services that has come to define the modern net?
It’s a question that reflects some of the Internet’s deepest cultural values, and the idea that this network—this place where you are right now—should distribute power to people. In the post-NSA, post-Internet-access-oligopoly world, more and more people are thinking this way, and many of them are actually doing something about it.
Among them, there is a technology that’s become a kind of shorthand code for a whole set of beliefs about the future of the Internet: “mesh networking.” These words have become a way to say that you believe in a different, freer Internet.
*  *  *
Mesh networks promise the things we already expect but don’t always get from the Internet: they’re fast, reliable, and relatively inexpensive. But before we get into the particulars of what this alternate Internet might look like, a quick refresher on how the one we have works:
Your computer is connected to an Internet service provider like Comcast, which sends packets of your data (the binary stuff of emails, tweets, Facebook status updates, web addresses, etc.) back and forth across the network. The packets that move across the Internet encounter a series of checkpoints including routers and servers along the paths your data travels. You can’t control these paths or these checkpoints, so your data is subject to all kinds of security threats like hackers and snooping NSA agents.
So the idea behind mesh networking is to skip those checkpoints and cut out the middleman service provider whenever possible. This can work when each device in a network connects to the other devices, rather than each device connecting to the ISP.
It helps to visualize it. The image on the left shows a network built around a centralized hub, like the Internet as we know it. The image on the right is what a mesh network looks like:

Think of it this way: with a mesh network, each device is like a mini cell phone tower. So instead of multiple devices relying on a single, centralized hub, the devices rely on one another. And with information ricocheting more unpredictably between those devices, the network as a whole is harder to take out.
“You end up with a network that is much harder to disrupt,” said Stanislav Shalunov, co-founder of Open Garden, a startup that develops peer-to-peer and mesh networking apps. “There is no single point where you can unplug and expect that there will be a large impact.”
Plus, a mesh network forms itself based on an algorithm—which again reduces opportunities for disruption. “There is no human intervention involved, even from the users of the devices and certainly not from any administrative entity that needs to arrange the topology of this network or how people are connected or how the network is used,” Shalunov told me. “It is entirely up to the people participating and the software that runs this network to make everything work.”
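The resilience claim above can be sketched with a toy connectivity check: build a hub-and-spoke graph and a mesh graph, remove one node from each, and see who can still reach whom. All node names and topologies here are illustrative, not drawn from any real network.

```python
from collections import deque

def reachable(adj, start, removed=None):
    """Breadth-first search over an adjacency map, skipping one failed node."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in adj.get(node, ()):
            if nbr != removed and nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

# Hub-and-spoke: every device talks only to the central hub (the ISP).
hub = {"hub": ["a", "b", "c", "d"],
       "a": ["hub"], "b": ["hub"], "c": ["hub"], "d": ["hub"]}

# Mesh: each device links directly to several neighbors (a ring here).
mesh = {"a": ["b", "d"], "b": ["a", "c"],
        "c": ["b", "d"], "d": ["c", "a"]}

# Kill the hub: every spoke is stranded.
print(sorted(reachable(hub, "a", removed="hub")))   # ['a']

# Kill any single mesh node: the remaining devices still reach each other.
print(sorted(reachable(mesh, "a", removed="c")))    # ['a', 'b', 'd']
```

The hub graph has a single point whose failure partitions everything; the mesh degrades gracefully, which is the "no single point where you can unplug" property Shalunov describes.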

Your regular old smartphone already has the power to connect to other smartphones without being hooked up to the Internet through a traditional carrier. All you need is the radio frequency of your phone’s bluetooth connection, and you can send and receive data over a mesh network from anyone in relatively close proximity—say, a person in the same neighborhood or office building. (Mesh networks can also be built around cheap wireless routers or roof antennae.)…
For now, there’s no nationwide device-to-device mesh network. So if you want to communicate with someone across the country, someone—but not everyone—in the mesh network will need to be connected to the Internet through a traditional provider. That’s true locally, too, if you want the mesh network hooked up to the rest of the Internet. Mesh networks are more reliable in a crowd because devices can rely on one another—rather than each device trying to ping the same overburdened cell phone tower. “The important thing is we can use any of the Internet connections that anybody in that mesh network is connected to,” Shalunov said. “So maybe you are connected to AT&T and I am connected to Comcast and my phone is on Verizon and there is a Sprint subscriber nearby. If any of these will let the traffic through, all of it will get through.”
* * *
Mesh networks have been around, at least theoretically, for at least as long as the Internet has existed…”

How Helsinki Became the Most Successful Open-Data City in the World


Olli Sulopuisto in Atlantic Cities:  “If there’s something you’d like to know about Helsinki, someone in the city administration most likely has the answer. For more than a century, this city has funded its own statistics bureaus to keep data on the population, businesses, building permits, and most other things you can think of. Today, that information is stored and made freely available on the internet by an appropriately named agency, City of Helsinki Urban Facts.
There’s a potential problem, though. Helsinki may be Finland’s capital and largest city, with 620,000 people. But it’s only one of more than a dozen municipalities in a metropolitan area of almost 1.5 million. So in terms of urban data, if you’re only looking at Helsinki, you’re missing out on more than half of the picture.
Helsinki and three of its neighboring cities are now banding together to solve that problem. Through an entity called Helsinki Region Infoshare, they are bringing together their data so that a fuller picture of the metro area can come into view.
That’s not all. At the same time these datasets are going regional, they’re also going “open.” Helsinki Region Infoshare publishes all of its data in formats that make it easy for software developers, researchers, journalists and others to analyze, combine or turn into web-based or mobile applications that citizens may find useful. In four years of operation, the project has produced more than 1,000 “machine-readable” data sources such as a map of traffic noise levels, real-time locations of snow plows, and a database of corporate taxes.
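A sketch of what “machine-readable” buys a developer: a dataset published as plain CSV can be loaded and queried in a few lines, with no scraping or manual transcription. The field names and values below are invented for illustration, loosely modeled on the snow-plow example, not taken from an actual Helsinki Region Infoshare dataset.

```python
import csv
import io

# Hypothetical excerpt of an open dataset of real-time snow-plow
# locations (all fields and values invented for this sketch).
raw = """plow_id,lat,lon,last_seen
12,60.1699,24.9384,2014-02-11T08:15
17,60.2055,24.6559,2014-02-11T08:17
"""

# csv.DictReader turns each row into a dict keyed by the header line.
rows = list(csv.DictReader(io.StringIO(raw)))

plow_ids = [row["plow_id"] for row in rows]
print(len(rows), plow_ids)  # → 2 ['12', '17']
```

Because the format is standardized, the same few lines work for any of the project’s thousand-plus datasets; that uniformity is what lets third-party developers build apps like the ones described below.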
A global leader
All of this has put the Helsinki region at the forefront of the open-data movement that is sweeping cities across much of the world. The concept is that all kinds of good things can come from assembling city data, standardizing it and publishing it for free. Last month, Helsinki Region Infoshare was presented with the European Commission’s prize for innovation in public administration.

The project is creating transparency in government and a new digital commons. It’s also fueling a small industry of third-party application developers who take all this data and turn it into consumer products.
For example, Helsinki’s city council has a paperless system called Ahjo for handling its agenda items, minutes and exhibits that accompany council debates. Recently, the datasets underlying Ahjo were opened up. The city built a web-based interface for browsing the documents, but a software developer who doesn’t even live in Helsinki created a smartphone app for it. Now anyone who wants to keep up with just about any decision Helsinki’s leaders have before them can do so easily.
Another example is a product called BlindSquare, a smartphone app that helps blind people navigate the city. An app developer took the Helsinki region’s data on public transport and services, and mashed it up with location data from the social networking app Foursquare as well as mapping tools and the GPS and artificial voice capabilities of new smartphones. The product now works in dozens of countries and languages and sells for about €17 ($24 U.S.).

Helsinki also runs competitions for developers who create apps with public-sector data. That’s nothing new — BlindSquare won the Apps4Finland and European OpenCities app challenges in 2012. But this year, they’re trying a new approach to the app challenge concept, funded by the European Commission’s prize money and Sitra.
It’s called Datademo. Instead of looking for polished but perhaps random apps to heap fame and prize money on, Datademo is trying to get developers to aim their creative energies toward general goals city leaders think are important. The current competition specifies that apps have to use open data from the Helsinki region or from Finland to make it easier for citizens to find information and participate in democracy. The competition also gives developers seed funding upfront.
Datademo received more than 40 applications in its first round. Of those, the eight best suggestions were given three months and €2,000 ($2,770 U.S.) to implement their ideas. The same process will be repeated twice, resulting in dozens of new app ideas that will receive a total of €48,000 ($66,000 U.S.) in development subsidies. Keeping with the spirit of transparency, the voting and judging process is open to all who submit an idea for each round….”

This is what happens when you give social networking to doctors


in PandoDaily: “Dr. Gregory Kurio will never forget the time he was called to the ER because an epileptic girl was brought in suffering a cardiac arrest of sorts (HIPAA mandates that he not give out the specific details of the situation). In the briefing, he learned the name of her cardiac physician, whom he happened to know through the industry. He subsequently called the other doctor and asked him to send over any available information on the patient — latest meds, EKGs, recent checkups, etc.

The scene in the ER was, as expected, one of chaos, with trainees and respiratory nurses running around grabbing machinery and meds. Crucial seconds were ticking past, and Dr. Kurio quickly realized the fax machine was not the best way to receive the records he needed. ER fax machines are often on the opposite side of the emergency room, take a while to print lengthy records, frequently run out of paper, and aren’t always reliable – not exactly the sort of technology you want when a patient’s life hangs in the balance.

Email wasn’t an option either, because HIPAA mandates that sensitive patient files are only sent through secure channels. With precious little time to waste, Dr. Kurio decided to take a chance on a new technology service he had just signed up for — Doximity.

Doximity is a LinkedIn for Doctors of sorts. It has, as one feature, a secure e-fax system that turns faxes into digital messages and sends them to a user’s mobile device. Dr. Kurio gave the other physician his e-fax number, and a little bit of techno-magic happened.

….

With a third of the nation’s doctors on the platform, today Doximity announced a $54 million Series C from DFJ,  T. Rowe Price Associates, Morgan Stanley, and existing investors. The funding news isn’t particularly important, in and of itself, aside from the fact that the company is attracting the attention of private market investors very early in its growth trajectory. But it’s a good opportunity to take a look at Doximity’s business model, how it mirrors the upwards growth of other vertical professional social networks (say that five times fast), and the way it’s transforming our healthcare providers’ jobs.

Doximity works, in many ways, just like LinkedIn. Doctors have profiles with pictures and their resume, and recruiters pay the company to message medical professionals. “If you think it’s hard to find a Ruby developer in San Francisco, try to find an emergency room physician in Indiana,” Doximity CEO Jeff Tangney says. One recruiter’s pain is a smart entrepreneur’s pleasure — a simple, straightforward monetization strategy.

But unlike LinkedIn, Doximity can dive much deeper on meeting doctors’ needs through specialized features like the e-fax system. It’s part of the reason Konstantin Guericke, one of LinkedIn’s “forgotten” co-founders, was attracted to the company and decided to join the board as an advisor. “In some ways, it’s a lot like LinkedIn,” Guericke says, when asked why he decided to help out. “But for me it’s the pleasure of focusing on a more narrow audience and making more of an impact on their life.”

In another such high-impact, specialized feature, doctors can access Doximity’s Google Alerts-like system for academic articles. They can sign up to receive notifications when stories are published about their obscure specialties. That means time-strapped physicians gain a more efficient way to stay up to date on all the latest research and information in their field. You can imagine that might impact the quality of the care they provide.

Lastly, Doximity offers a secure messaging system, allowing doctors to email one another regarding a mutual patient. Such communication is a thorny issue for doctors given HIPAA-related privacy requirements. There are limited ways to legally update, say, a primary care physician when a specialist learns one of their patients has colon cancer. It turns into a big game of phone tag to relay what should be relatively straightforward information. Furthermore, leaving voicemails and sending faxes can result in details getting lost in a system that isn’t searchable.

The platform is free for doctors, and they have quickly joined in droves. Doximity co-founder and CEO Jeff Tangney estimates that last year the platform had added 15 to 16 percent of US doctors. But this year, the company claims it’s “on track to have half of US physicians as members by this summer.” That is a fairly impressive growth rate and market penetration.

With great market penetration comes great power. And dollars. Although the company is only monetizing through recruitment at the moment, the real money to be made with this service is through targeted advertising. Think about how much big pharma and medtech companies would be willing to cough up to communicate at scale with the doctors who make purchasing decisions. Plus, this is an easy way for them to target industry thought leaders or professionals with certain specialties.

Doximity’s founders’ and investors’ eyes might be seeing dollar signs, but they haven’t rolled anything out yet on the advertising front. They’re wary and want to do so in a way that adds value for all parties while avoiding pissing off medical professionals. When they finally pull the trigger, however, it has the potential to be a Gold Rush.

Doximity isn’t the only company to have discovered there’s big money to be made in vertical professional social networks. As Pando has written, there’s a big trend in this regard. Spiceworks, the social network for IT professionals that claims to have a third of the world’s IT professionals on the site, just raised $57 million in a round led by none other than Goldman Sachs. Why does the firm have such faith in a free social network for IT pros — seemingly the most mundane and unprofitable of endeavors? Well, just as pharma companies pay to reach doctors, IT companies are willing to shell out big to market their wares directly to those IT pros.

Although the monetization strategies differ from business to business, ResearchGate is building a similar community with scientists around the world, Edmodo is doing it with educators, GitHub with developers, and GrabCAD with mechanical engineers. I’ve argued that such vertical professional social networks are a threat to LinkedIn, stealing business out from under it in large industry swaths. LinkedIn cofounder Konstantin Guericke disagrees.

“I don’t think it’s stealing revenue from them. Would it make sense for LinkedIn to add a profile subset about what insurance someone takes? That would just be clutter,” Guericke says. “It’s more going after an opportunity LinkedIn isn’t well positioned to capitalize on. They could do everything Doximity does, but they’d have to give up something else.”

All businesses come with their own challenges, and Doximity will certainly face its share of them as it scales. It has overcome the initial hurdle of achieving the network effects that come with penetrating a large segment of the market. Next will come monetizing sensitively and continuing to protect users’ — and patients’ — privacy.

There are plenty of data minefields in a sector as closely regulated as healthcare, as fellow medical startup Practice Fusion recently found out. Doximity has to make sure its system for onboarding and verifying new doctors is airtight. The company has already encountered instances of individuals trying to pose as medical professionals to get access to another person’s records — specifically, a former lover trying to chase down an ex-spouse’s STI tests. One blowup where the company approves someone it shouldn’t, or where hackers break into the system, and doctors could lose trust in the safety of the technology….”