The Impact of the Social Sciences


New book:  The Impact of the Social Sciences: How Academics and Their Research Make a Difference by Simon Bastow, Jane Tinkler and Patrick Dunleavy. 
The three-year Impact of Social Sciences Project has culminated in a monograph published by SAGE. The book presents a thorough analysis of how academic research in the social sciences achieves public policy impacts, contributes to economic prosperity, and informs public understanding of policy issues as well as economic and social change. This book is essential reading for academics, researchers, university administrators, government and private funders, and anyone interested in the global conversation about joining up and increasing the societal value and impact of social science knowledge and research.
Resources:

  • View the data visualisations that appear in the book here.
  • Browse our Living Bibliography with links to further resources.
  • Research Design and Methods Appendix [PDF]
  • “Assessing the Impacts of Academic Social Science Research: Modelling the economic impact on the UK economy of UK-based academic social science research” [PDF] A report prepared for the LSE Public Policy Group by Cambridge Econometrics.

Research Blogs:

It’s the Neoliberalism, Stupid: Why instrumentalist arguments for Open Access, Open Data, and Open Science are not enough.


Eric Kansa at LSE Blog: “…However, I’m increasingly convinced that advocating for openness in research (or government) isn’t nearly enough. There’s been too much of an instrumentalist justification for open data and open access. Many advocates talk about how it will cut costs and speed up research and innovation. They also argue that it will make research more “reproducible” and transparent so interpretations can be better vetted by the wider community. Advocates for openness, particularly in open government, also talk about the wonderful commercial opportunities that will come from freeing research…
These are all very big policy issues, but they need to be asked if the Open Movement really stands for reform and not just a further expansion and entrenchment of Neoliberalism. I’m using the term “Neoliberalism” because it resonates as a convenient label for describing how and why so many things seem to suck in Academia. Exploding student debt, vanishing job security, increasing compensation for top administrators, expanding bureaucracy and committee work, corporate management methodologies (Taylorism), and intensified competition for ever-shrinking public funding all fall under the general rubric of Neoliberalism. Neoliberal universities primarily serve the needs of commerce. They need to churn out technically skilled human resources (made desperate for any work by high loads of debt) and easily monetized technical advancements….
“Big Data,” “Data Science,” and “Open Data” are now hot topics at universities. Investments are flowing into dedicated centers and programs to establish institutional leadership in all things related to data. I welcome the new Data Science effort at UC Berkeley to explore how to make research data professionalism fit into the academic reward systems. That sounds great! But will these new data professionals have any real autonomy in shaping how they conduct their research and build their careers? Or will they simply be part of an expanding class of harried and contingent employees — hired and fired through the whims of creative destruction fueled by the latest corporate-academic hype-cycle?
Researchers, including #AltAcs and “data professionals”, need a large measure of freedom. Miriam Posner’s discussion about the career and autonomy limits of Alt-academic-hood helps highlight these issues. Unfortunately, there’s only one area where innovation and failure seem survivable, and that’s the world of the start-up. I’ve noticed how the “Entrepreneurial Spirit” gets celebrated a lot in this space. I’m guilty of basking in it myself (10 years as a quasi-independent #altAc in a nonprofit I co-founded!).
But in the current Neoliberal setting, being an entrepreneur requires a singular focus on monetizing innovation. PeerJ and Figshare are nice, since they have business models that are less “evil” than Elsevier’s. But we need to stop fooling ourselves that the only institutions and programs that we can and should sustain are the ones that can turn a profit. For every PeerJ or Figshare (and these are ultimately just as dependent on continued public financing of research as any grant-driven project), we also need more innovative organizations like the Internet Archive, wholly dedicated to the public good and not the relentless pressure to commoditize everything (especially their patrons’ privacy). We need to be much more critical about the kinds of programs, organizations, and financing strategies we (as a society) can support. I raised the political economy of sustainability issue at a recent ThatCamp and hope to see more discussion.
In reality, so many of the Academy’s dysfunctions are driven by our new Gilded Age’s artificial scarcity of money. With wealth concentrated in so few hands, it is very hard to finance risk taking and entrepreneurialism in the scholarly community, especially to finance any form of entrepreneurialism that does not turn a profit in a year or two.
Open Access and Open Data would make so much more of a difference if we had the same kind of dynamism in the academic and nonprofit sector as we have in the for-profit start-up sector. After all, Open Access and Open Data can be key enablers of much broader participation in research and education. However, broader participation still needs to be financed: you cannot eat an open access publication. We cannot gloss over this key issue.
We need more diverse institutional forms so that researchers can find (or found) the kinds of organizations that best channel their passions into contributions that enrich us all. We need more diverse sources of financing (new foundations, better financed Kickstarters) to connect innovative ideas with the capital needed to see them implemented. Such institutional reforms will make life in the research community much more livable, creative, and dynamic. It would give researchers more options for diverse and varied career trajectories (for-profit or not-for-profit) suited to their interests and contributions.
Making the case to reinvest in the public good will require a long, hard slog. It will be much harder than the campaign for Open Access and Open Data because it will mean contesting Neoliberal ideologies and constituencies that are deeply entrenched in our institutions. However, the constituencies harmed by Neoliberalism, particularly the student community now burdened by over $1 trillion in debt and the middle class more generally, are much larger and very much aware that something is badly amiss. As we celebrate the impressive strides made by the Open Movement in the past year, it’s time we broaden our goals to tackle the needs for wider reform in the financing and organization of research and education.
This post originally appeared on Digging Digitally and is reposted under a CC-BY license.”

Big Data’s Dangerous New Era of Discrimination


Michael Schrage in HBR blog: “Congratulations. You bought into Big Data and it’s paying off Big Time. You slice, dice, parse and process every screen-stroke, clickstream, Like, tweet and touch point that matters to your enterprise. You now know exactly who your best — and worst — customers, clients, employees and partners are.  Knowledge is power.  But what kind of power does all that knowledge buy?
Big Data creates Big Dilemmas. Greater knowledge of customers creates new potential and power to discriminate. Big Data — and its associated analytics — dramatically increase both the dimensionality and degrees of freedom for detailed discrimination. So where, in your corporate culture and strategy, does value-added personalization and segmentation end and harmful discrimination begin?
Let’s say, for example, that your segmentation data tells you the following:
Your most profitable customers by far are single women between the ages of 34 and 55, closely followed by “happily married” women with at least one child. Divorced women are slightly more profitable than “never marrieds.” Gay males — single and in relationships — are also disproportionately profitable. The “sweet spot” is urban and 28 to 50. These segments collectively account for roughly two-thirds of your profitability. (Unexpected factoid: Your most profitable customers are overwhelmingly Amazon Prime subscribers. What might that mean?)
Going more granular, as Big Data does, offers even sharper ethno-geographic insight into customer behavior and influence:

  • Single Asian, Hispanic, and African-American women with urban post codes are most likely to complain about product and service quality to the company. Asian and Hispanic complainers happy with resolution/refund tend to be in the top quintile of profitability. African-American women do not.
  • Suburban Caucasian mothers are most likely to use social media to share their complaints, followed closely by Asian and Hispanic mothers. But if resolved early, they’ll promote the firm’s responsiveness online.
  • Gay urban males receiving special discounts and promotions are the most effective at driving traffic to your sites.

My point here is that these data are explicit, compelling and undeniable. But how should sophisticated marketers and merchandisers use them?
Campaigns, promotions and loyalty programs targeting women and gay males seem obvious. But should Asian, Hispanic and white females enjoy preferential treatment over African-American women when resolving complaints? After all, they tend to be both more profitable and measurably more willing to effectively use social media. Does it make more marketing sense to encourage African-American female customers to become more social media savvy? Or are resources better invested in getting more from one’s best customers? Similarly, how much effort and ingenuity should go into making more gay male customers better social media evangelists? What kinds of offers and promotions could go viral on their networks?…
Of course, the difference between price discrimination and discrimination positively correlated with gender, ethnicity, geography, class, personality and/or technological fluency is vanishingly small. Indeed, the entire epistemological underpinning of Big Data for business is that it cost-effectively makes informed segmentation and personalization possible…..
But the main source of concern won’t be privacy, per se — it will be whether and how companies and organizations like your own use Big Data analytics to justify their segmentation/personalization/discrimination strategies. The more effective Big Data analytics are in profitably segmenting and serving customers, the more likely those algorithms will be audited by regulators or litigators.
Tomorrow’s Big Data challenge isn’t technical; it’s whether managements have algorithms and analytics that are both fairly transparent and transparently fair. Big Data champions and practitioners had better be discriminating about how discriminating they want to be.”
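The segment-level profitability ranking Schrage describes can be sketched in a few lines. A minimal illustration with invented segment labels and profit figures (nothing here comes from the excerpt, whose data is itself hypothetical):

```python
from collections import defaultdict

# Hypothetical transaction records: (segment, profit). In a real pipeline these
# would come from a CRM or data warehouse; all values here are invented.
records = [
    ("single_f_34_55", 120.0), ("single_f_34_55", 95.0),
    ("married_f_child", 80.0), ("married_f_child", 70.0),
    ("divorced_f", 60.0), ("never_married_f", 55.0),
    ("gay_m_urban", 90.0), ("other", 20.0), ("other", 15.0),
]

def profit_by_segment(rows):
    """Total profit per segment, sorted from most to least profitable."""
    totals = defaultdict(float)
    for segment, profit in rows:
        totals[segment] += profit
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

def top_segment_share(rows, n=3):
    """Fraction of total profit contributed by the n most profitable segments."""
    ranked = profit_by_segment(rows)
    total = sum(p for _, p in ranked)
    return sum(p for _, p in ranked[:n]) / total

ranked = profit_by_segment(records)
print(ranked[0][0])                      # most profitable segment
print(round(top_segment_share(records), 2))
```

The analytics are trivial; as the excerpt argues, the hard questions begin once the ranking is in hand and segments start receiving different treatment.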

Report “Big and open data in Europe: A growth engine or a missed opportunity?”


Press Release: “Big data and open data are not just trendy issues, they are the concern of the government institutions at the highest level. On January 29th, 2014 a Conference concerning Big & Open Data in Europe 2020 was held in the European Parliament.
Questions like these were asked and discussed: Is Big & Open Data a truly transformative phenomenon or just ‘hot air’? Does it matter for Europe? How big is the economic potential of Big and Open Data for Europe until 2020? How might each of the 28 Member States benefit from it?…
The conference complemented a research project by demosEUROPA – Centre for European Strategy on Big and Open Data in Europe that aims at fostering and facilitating policy debate on the socioeconomic impact of data. The key outcome of the project, a pan-European macroeconomic study titled “Big and open data in Europe: A growth engine or a missed opportunity?” carried out by the Warsaw Institute for Economic Studies (WISE), was presented.
We have the pleasure of being among the first to present some of the findings of the report and to offer the report for download.
The report analyses how these technologies have the potential to influence various aspects of European society, their substantial long-term impact on our wealth and quality of life, and the new developmental challenges for the EU as a whole – as well as for its member states and their regions.
You will learn from the report:

  • the economic gains resulting from business applications of big data
  • how to structure big data to move from Big Trouble to Big Value
  • the costs and benefits for data holders of opening their data
  • three challenges that Europeans face with respect to big and open data
  • key areas, growth opportunities and challenges for big and open data in Europe, region by region.
The study also elaborates on the key principle of the open data philosophy: open by default.
Europe by 2020. What will happen?
The report contains a prognosis, for the 28 EU countries, of the impact of big and open data by 2020 – the additional output it will generate and how it will affect trade, health, manufacturing, information and communication, finance & insurance, and public administration in different regions. It foresees that the EU economy will grow by 1.9% by 2020 thanks to big and open data, and describes the increase in general GDP levels by country and sector.
One of the many interesting findings of the report is that the positive impact of the data revolution will be felt more acutely in Northern Europe, while most of the New Member States and Southern European economies will benefit significantly less, with two notable exceptions being the Czech Republic and Poland. If you would like to have first-hand up-to-date information about the impact of big and open data on the future of Europe – download the report.”

The Age of ‘Infopolitics’


Colin Koopman in the New York Times: “We are in the midst of a flood of alarming revelations about information sweeps conducted by government agencies and private corporations concerning the activities and habits of ordinary Americans. After the initial alarm that accompanies every leak and news report, many of us retreat to the status quo, quieting ourselves with the thought that these new surveillance strategies are not all that sinister, especially if, as we like to say, we have nothing to hide.
One reason for our complacency is that we lack the intellectual framework to grasp the new kinds of political injustices characteristic of today’s information society. Everyone understands what is wrong with a government’s depriving its citizens of freedom of assembly or liberty of conscience. Everyone (or most everyone) understands the injustice of government-sanctioned racial profiling or policies that produce economic inequality along color lines. But though nearly all of us have a vague sense that something is wrong with the new regimes of data surveillance, it is difficult for us to specify exactly what is happening and why it raises serious concern, let alone what we might do about it.
Our confusion is a sign that we need a new way of thinking about our informational milieu. What we need is a concept of infopolitics that would help us understand the increasingly dense ties between politics and information. Infopolitics encompasses not only traditional state surveillance and data surveillance, but also “data analytics” (the techniques that enable marketers at companies like Target to detect, for instance, if you are pregnant), digital rights movements (promoted by organizations like the Electronic Frontier Foundation), online-only crypto-currencies (like Bitcoin or Litecoin), algorithmic finance (like automated micro-trading) and digital property disputes (from peer-to-peer file sharing to property claims in the virtual world of Second Life). These are only the tip of an enormous iceberg that is drifting we know not where.
Surveying this iceberg is crucial because atop it sits a new kind of person: the informational person. Politically and culturally, we are increasingly defined through an array of information architectures: highly designed environments of data, like our social media profiles, into which we often have to squeeze ourselves. The same is true of identity documents like your passport and individualizing dossiers like your college transcripts. Such architectures capture, code, sort, fasten and analyze a dizzying number of details about us. Our minds are represented by psychological evaluations, education records, credit scores. Our bodies are characterized via medical dossiers, fitness and nutrition tracking regimens, airport security apparatuses. We have become what the privacy theorist Daniel Solove calls “digital persons.” As such we are subject to infopolitics (or what the philosopher Grégoire Chamayou calls “datapower,” the political theorist Davide Panagia “datapolitik” and the pioneering thinker Donna Haraway “informatics of domination”).
Today’s informational person is the culmination of developments stretching back to the late 19th century. It was in those decades that a number of early technologies of informational identity were first assembled. Fingerprinting was implemented in colonial India, then imported to Britain, then exported worldwide. Anthropometry — the measurement of persons to produce identifying records — was developed in France in order to identify recidivists. The registration of births, which has since become profoundly important for initiating identification claims, became standardized in many countries, with Massachusetts pioneering the way in the United States before a census initiative in 1900 led to national standardization. In the same era, bureaucrats visiting rural districts complained that they could not identify individuals whose names changed from context to context, which led to initiatives to universalize standard names. Once fingerprints, biometrics, birth certificates and standardized names were operational, it became possible to implement an international passport system, a social security number and all other manner of paperwork that tells us who someone is. When all that paper ultimately went digital, the reams of data about us became radically more assessable and subject to manipulation, which has made us even more informational.
We like to think of ourselves as somehow apart from all this information. We are real — the information is merely about us. But what is it that is real? What would be left of you if someone took away all your numbers, cards, accounts, dossiers and other informational prostheses? Information is not just about you — it also constitutes who you are….”

Online Video Game Plugs Players Into Real Biochemistry Lab


Science Now: “Crowdsourcing is the latest research rage—Kickstarter to raise funding, screen savers that number-crunch, and games to find patterns in data—but most efforts have been confined to the virtual lab of the Internet. In a new twist, researchers have now crowdsourced their experiments by connecting players of a video game to an actual biochemistry lab. The game, called EteRNA, allows players to remotely carry out real experiments to verify their predictions of how RNA molecules fold. The first big result: a study published this week in the Proceedings of the National Academy of Sciences, bearing the names of more than 37,000 authors—only 10 of them professional scientists. “It’s pretty amazing stuff,” says Erik Winfree, a biophysicist at the California Institute of Technology in Pasadena.
Some see EteRNA as a sign of the future for science, not only for crowdsourcing citizen scientists but also for giving them remote access to a real lab. “Cloud biochemistry,” as some call it, isn’t just inevitable, Winfree says: It’s already here. DNA sequencing, gene expression testing, and many biochemical assays are already outsourced to remote companies, and any “wet lab” experiment that can be automated will be automated, he says. “Then the scientists can focus on the non-boring part of their work.”
EteRNA grew out of an online video game called Foldit. Created in 2008 by a team led by David Baker and Zoran Popović, a molecular biologist and computer scientist, respectively, at the University of Washington, Seattle, Foldit focuses on predicting the shape into which a string of amino acids will fold. By tweaking virtual strings, Foldit players can surpass the accuracy of the fastest computers in the world at predicting the structure of certain proteins. Two members of the Foldit team, Adrien Treuille and Rhiju Das, conceived of EteRNA back in 2009. “The idea was to make a version of Foldit for RNA,” says Treuille, who is now based at Carnegie Mellon University in Pittsburgh, Pennsylvania. Treuille’s doctoral student Jeehyung Lee developed the needed software, but then Das persuaded them to take it a giant step further: hooking players up directly to a real-world, robot-controlled biochemistry lab. After all, RNA can be synthesized and its folded-up structure determined far more cheaply and rapidly than protein can.
Lee went back to the drawing board, redesigning the game so that it had not only a molecular design interface like Foldit, but also a laboratory interface for designing RNA sequences for synthesis, keeping track of hypotheses for RNA folding rules, and analyzing data to revise those hypotheses. By 2010, Lee had a prototype game ready for testing. Das had the RNA wet lab ready to go at Stanford University in Palo Alto, California, where he is now a professor. All they lacked were players.
A message to the Foldit community attracted a few hundred players. Then in early 2011, The New York Times wrote about EteRNA and tens of thousands of players flooded in.
The game comes with a detailed tutorial and a series of puzzles involving known RNA structures. Only after winning 10,000 points do you unlock the ability to join EteRNA’s research team. There the goal is to design RNA sequences that will fold into a target structure. Each week, eight sequences are chosen by vote and sent to Stanford for synthesis and structure determination. The data that come back reveal how well the sequences’ true structures matched their targets. That way, Treuille says, “reality keeps score.” The players use that feedback to tweak a set of hypotheses: design rules for determining how an RNA sequence will fold.
Two years and hundreds of RNA structures later, the players of EteRNA have proven themselves to be a potent research team. Of the 37,000 who played, about 1,000 graduated to participating in the lab for the study published today. (EteRNA now has 133,000 players, 4,000 of them doing research.) They generated 40 new rules for RNA folding. For example, the players discovered that the junctions between different parts of the RNA structure—such as between a loop and an arm—are far more stable if enriched with guanines and cytosines, the strongest bonding of the RNA base pairs. To see how well those rules describe reality, the humans then competed toe to toe against computers in a new series of RNA structure challenges. The researchers distilled the humans’ 40 rules into an algorithm called EteRNA Bot.”
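The feedback loop in which “reality keeps score” boils down to comparing each designed sequence’s measured fold against its target. A toy sketch of such a comparison, using dot-bracket notation for RNA secondary structure; this is an assumed, simplified stand-in, not EteRNA’s actual scoring function:

```python
def structure_match_score(target, measured):
    """Fraction of positions where the measured secondary structure (in
    dot-bracket notation: '.' = unpaired, '(' / ')' = paired) agrees with
    the target. A simplified stand-in for the feedback score players
    receive; EteRNA's real scoring is more involved."""
    if len(target) != len(measured):
        raise ValueError("structures must be the same length")
    matches = sum(t == m for t, m in zip(target, measured))
    return matches / len(target)

# Target hairpin vs. a hypothetical measured structure whose stem closed
# one base pair short (both strings are invented examples).
target   = "(((((....)))))"
measured = "((((......))))"
print(structure_match_score(target, measured))
```

Feeding scores like this back to players, week after week, is what lets them refine their folding rules against laboratory ground truth rather than against a simulator.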

The Moneyball Effect: How smart data is transforming criminal justice, healthcare, music, and even government spending


TED: “When Anne Milgram became the Attorney General of New Jersey in 2007, she was stunned to find out just how little data was available on who was being arrested, who was being charged, who was serving time in jails and prisons, and who was being released. “It turns out that most big criminal justice agencies like my own didn’t track the things that matter,” she says in today’s talk, filmed at TED@BCG. “We didn’t share data, or use analytics, to make better decisions and reduce crime.”
Milgram’s idea for how to change this: “I wanted to moneyball criminal justice.”
Moneyball, of course, is the name of a 2011 movie starring Brad Pitt and the book it’s based on, written by Michael Lewis in 2003. The term refers to a practice adopted by the Oakland A’s general manager Billy Beane in 2002 — the organization began basing decisions not on star power or scout instinct, but on statistical analysis of measurable factors like on-base and slugging percentages. This worked exceptionally well. On a tiny budget, the Oakland A’s made it to the playoffs in 2002 and 2003, and — since then — nine other major league teams have hired sabermetric analysts to crunch these types of numbers.
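The two measurable factors mentioned, on-base percentage and slugging percentage, have standard definitions. A minimal sketch using the conventional MLB formulas, with an invented season line for illustration:

```python
def on_base_percentage(hits, walks, hit_by_pitch, at_bats, sac_flies):
    """OBP = (H + BB + HBP) / (AB + BB + HBP + SF): how often a batter reaches base."""
    return (hits + walks + hit_by_pitch) / (at_bats + walks + hit_by_pitch + sac_flies)

def slugging_percentage(singles, doubles, triples, home_runs, at_bats):
    """SLG = total bases / AB: weights extra-base hits more heavily than singles."""
    total_bases = singles + 2 * doubles + 3 * triples + 4 * home_runs
    return total_bases / at_bats

# Illustrative season line (invented numbers, not a real player's statistics).
obp = on_base_percentage(hits=150, walks=70, hit_by_pitch=5, at_bats=500, sac_flies=5)
slg = slugging_percentage(singles=95, doubles=35, triples=2, home_runs=18, at_bats=500)
print(round(obp, 3), round(slg, 3))
```

Beane’s insight was not the formulas themselves, which were well known, but that players scoring well on them were systematically underpriced relative to players with flashier traditional statistics.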
Milgram is working hard to bring smart statistics to criminal justice. To hear the results she’s seen so far, watch this talk. And below, take a look at a few surprising sectors that are getting the moneyball treatment as well.

Moneyballing music. Last year, Forbes magazine profiled the firm Next Big Sound, a company using statistical analysis to predict how musicians will perform in the market. The idea is that — rather than relying on the instincts of A&R reps — past performance on Pandora, Spotify, Facebook, etc. can be used to predict future potential. The article reads, “For example, the company has found that musicians who gain 20,000 to 50,000 Facebook fans in one month are four times more likely to eventually reach 1 million. With data like that, Next Big Sound promises to predict album sales within 20% accuracy for 85% of artists, giving labels a clearer idea of return on investment.”
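The “within 20% accuracy for 85% of artists” claim has the shape of a simple coverage metric: the fraction of predictions whose relative error stays inside a tolerance band. A hedged sketch of how such a figure could be computed (Next Big Sound’s actual methodology is not public, and the sales numbers below are invented):

```python
def within_tolerance_rate(predicted, actual, tolerance=0.20):
    """Fraction of predictions whose absolute error is within `tolerance`
    (as a share of the actual value). This mirrors the shape of the
    'within 20% for 85% of artists' claim; the real methodology is unknown."""
    hits = sum(
        abs(p - a) <= tolerance * a
        for p, a in zip(predicted, actual)
    )
    return hits / len(actual)

# Invented album-sales figures for five hypothetical artists.
predicted = [12_000, 50_000, 8_000, 150_000, 30_000]
actual    = [10_000, 55_000, 15_000, 140_000, 29_000]
print(within_tolerance_rate(predicted, actual))  # 4 of 5 within 20% -> 0.8
```

A label evaluating such a vendor would want exactly this kind of backtest on its own catalog before trusting the headline figure.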
Moneyballing human resources. In November, The Atlantic took a look at the practice of “people analytics” and how it’s affecting employers. (Billy Beane had something to do with this idea — in 2012, he gave a presentation at the TLNT Transform Conference called “The Moneyball Approach to Talent Management.”) The article describes how Bloomberg reportedly logs its employees’ keystrokes and the casino, Harrah’s, tracks employee smiles. It also describes where this trend could be going — for example, how a video game called Wasabi Waiter could be used by employers to judge potential employees’ ability to take action, solve problems and follow through on projects. The article looks at the ways these types of practices are disconcerting, but also how they could level an inherently unequal playing field. After all, the article points out that gender, race, age and even height biases have been demonstrated again and again in our current hiring landscape.
Moneyballing healthcare. Many have wondered: what about a moneyball approach to medicine? (See this call out via Common Health, this piece in Wharton Magazine or this op-ed on The Huffington Post from the President of the New York State Health Foundation.) In his TED Talk, “What doctors can learn from each other,” Stefan Larsson proposed an idea that feels like something of an answer to this question. In the talk, Larsson gives a taste of what can happen when doctors and hospitals measure their outcomes and share this data with each other: they are able to see which techniques are proving the most effective for patients and make adjustments. (Watch the talk for a simple way surgeons can make hip surgery more effective.) He imagines a continuous learning process for doctors — that could transform the healthcare industry to give better outcomes while also reducing cost.
Moneyballing government. This summer, John Bridgeland (the director of the White House Domestic Policy Council under President George W. Bush) and Peter Orszag (the director of the Office of Management and Budget in Barack Obama’s first term) teamed up to pen a provocative piece for The Atlantic called, “Can government play moneyball?” In it, the two write, “Based on our rough calculations, less than $1 out of every $100 of government spending is backed by even the most basic evidence that the money is being spent wisely.” The two explain how, for example, there are 339 federally funded programs for at-risk youth, the great majority of which haven’t been evaluated for effectiveness. And while many of these programs might show great results, some that have been evaluated show troubling results. (For example, Scared Straight has been shown to increase criminal behavior.) Yet some of these ineffective programs continue because a powerful politician champions them. While Bridgeland and Orszag show why Washington is so averse to making data-based appropriation decisions, the two also see the ship beginning to turn around. They applaud the Obama administration for a 2014 budget with an “unprecedented focus on evidence and results.” The pair also gave a nod to the nonprofit Results for America, which advocates that for every $99 spent on a program, $1 be spent on evaluating it. The pair even suggest a “Moneyball Index” to encourage politicians not to support programs that don’t show results.
In any industry, figuring out what to measure, how to measure it and how to apply the information gleaned from those measurements is a challenge. Which of the applications of statistical analysis has you the most excited? And which has you the most terrified?”

Innovation in the Government Industry


in Huffington Post: “Government may be susceptible to the same forces that are currently changing many major industries. Software is eating government, too. Therefore government must use customer development to better serve customers, or else it risks becoming the next Blockbuster or Borders, or what the large publishing and financial services companies are at risk of becoming…
Government is currently one-size-fits-all. In a free market, there is unbundling and there are multiple offerings for different segments of a market. For example, there’s Natural Light, Budweiser, and Guinness. Competition forces companies to serve customers because if customers don’t like one offering they will simply choose a different one. If you don’t like your laundromat, restaurant, or job, you can simply go somewhere else. In contrast, switching governments is really hard.
Why Now
Government has been able to go a very long time without significant innovation. However now is the time for government to begin adapting because the forces changing nearly every industry may do the same to government. I will reiterate a few themes that Fred Wilson cited in a talk at LeWeb while talking about several different industries and add some more thoughts.
1. Organization: Technology driven networks replacing bureaucratic hierarchies
Bureaucratic hierarchies involve chains of command with lower levels of management making more detailed decisions and reporting back to higher levels of management. These systems often entail long communication lags, high costs, and principal/agent problems.
Technology driven networks are providing more efficient systems for organization and communication. For example, Amazon has changed the publishing industry by enabling anyone to publish content and enabling customers to decide what they want. Twitter has created a network around communication and news, enabling anyone who people want to hear to be heard.
2. Competition: Unbundling of product and service offerings
Technology advancements have made it cheaper and easier than ever before to produce a product and bring it to market. One result is that it’s become easier for an entrepreneur to take one component of a larger bundle and provide it as a standalone offering. This gives customers the option to buy what they want without having to pay more for stuff they don’t want. In addition, the offerings can be improved because producers are completely focused on that specific offering. For example, we used to buy one newspaper and get world, local, sports, etc. Now it all comes from different sources.
Bundling exists because it was more efficient than attempting to contract in the market for every tiny service. However, some of the technology-driven networks described above are helping markets become more efficient and giving customers more customizable buying options. For example, you can buy a half hour of education, or borrow money from a peer.
We're starting to see some of the government's offerings being unbundled. For example, Uber and Hyperloop are providing transportation. A neighborhood in Oakland crowdfunded private security.
3. Finance: Lower payment transaction fees and crowdfunding
Innovation in payments, including Bitcoin, has made it cheaper and easier than ever to transfer money. It’s as easy as sending an email, clicking a hyperlink, or scanning a QR code. In addition, Bitcoin is not controlled by any regulators or intermediaries like the government, credit card companies, or even PayPal.
Crowdfunding enables the collective efforts of individuals to connect and pool their money to back initiatives, make purchases, or fund new projects. A school in Houston crowdfunded some exercise equipment instead of using government funding.
4. Communication: We are all connected and graphed
Mobile devices have become nearly as powerful as desktops or laptops, and there are many things we can do with a phone that we can't do on a desktop or laptop. Smartphones have sensors, are location aware, can be carried with us at all times, and are cheaper than desktops or laptops. These factors have led to mass adoption of mobile devices across the world, including in countries with high poverty where people could not previously afford a desktop or laptop. Mobile is making innovative offerings like Uber and mobile payments possible.
Platforms like Facebook and Twitter provide everyone with access to millions of people. In addition, companies like Klout and Quora are measuring our reputation and social graph, improving our ability to transact with each other. For example, when market participants trust one another (through the vehicle of a reputation system), many transactions that wouldn't otherwise happen can now happen. This is illustrated by the rise in popularity of collaborative consumption platforms and peer-to-peer marketplaces.
Serving Customers
The current government duopoly inhibits us from selecting the government that we want, as well as from receiving the best possible service, because the incentive to improve is lacking. However, the technologies described above are making it possible to get services previously provided by the government through more efficient and effective means. They're enabling a freer market for government services….
If government were to take the customer development route, it could try things like unbundling (see above) so that people could opt for the specific solutions they desire. Given the US government’s current balance sheet, it may actually need to start relying on other providers.
It could also rely more on "economic feedback" to inform its actions. Currently, economic feedback is given through voting. Most people vote once every two or four years and then hope they get what they "paid" for. Can you imagine paying for a college without knowing which one you would be going to, what it would be providing, or whether you could request a refund or switch colleges? With more economic incentive, services would need to improve. For example, if there were a free market for roads, people would pay for and use the roads that were safest."

Belonging: Solidarity and Division in Modern Societies


New book by Montserrat Guibernau: “It is commonly assumed that we live in an age of unbridled individualism, but in this important new book Montserrat Guibernau argues that the need to belong to a group or community – from peer groups and local communities to ethnic groups and nations – is a pervasive and enduring feature of modern social life.
The power of belonging stems from the potential to generate an emotional attachment capable of fostering a shared identity, loyalty and solidarity among members of a given community. It is this strong emotional dimension that enables belonging to act as a trigger for political mobilization and, in extreme cases, to underpin collective violence.
Among the topics examined in this book are identity as a political instrument; emotions and political mobilization; the return of authoritarianism and the rise of the new radical right; symbols and the rituals of belonging; loyalty, the nation and nationalism. It includes case studies from Britain, Spain, Catalonia, Germany, the Middle East and the United States.”

Open Data and Clinical Trials


Editorial by Jeffrey M. Drazen, M.D., at NEJM.org: "In the fall of 2013, the Institute of Medicine (IOM) convened a committee, on which I serve, to examine the sharing of data in the setting of clinical trials. The committee is charged with reviewing current practices on data sharing in the context of randomized, controlled trials and with making recommendations for future data-sharing standards. Over the past few months, the committee has prepared a draft report that reviews current practices on data sharing and lays out a number of potential data-sharing models. Full details regarding the committee's charge and the interim report are available at www.iom.edu/activities/research/sharingclinicaltrialdata.aspx….
Open-data advocates argue that all the study data should be available to anyone at the time the first report is published or even earlier. Others argue that to maintain an incentive for researchers to pursue clinical investigations and to give those who gathered the data a chance to prepare and publish further reports, there should be a period of some specified length during which the data gatherers would have exclusive access to the information. Since these researchers could always agree to collaborate with others who were not involved in the study in order to use the data to help answer a scientific question, the period of exclusivity would really apply only to noncollaborative use of the data. That is, there would be a defined period during which the data would not be available to those who wanted to perform their own analyses and draw conclusions that could, for example, provide them with a scientific or commercial competitive advantage over the researchers who had originally gathered the data or allow them to derive conclusions that are potentially at odds with those drawn in the original publication.
As members of a community that either produces or uses data, what approach do you think serves our community best? There is no need to reply to the Journal, but please read the interim report and let the IOM know how you feel about this and the many other critical issues related to data sharing that are reviewed in the document. The IOM is collecting comments until March 24, 2014, at www8.nationalacademies.org/cp/projectview.aspx?key=49578.”