ideas42: “People are presented with many choices throughout their day, from what to have for lunch to where to go on vacation to how much money to save for emergencies. In many situations, this ability to choose enhances our lives. However, having too many choices can sometimes feel like a burden, especially if the choices are complex or the decisions we’re making are important. In these instances, we often make poor decisions, or sometimes even fail to choose at all. This can create real problems, for example when people fail to save enough for retirement or don’t make the right choices when it comes to staying healthy.
So why is it that so much effort has been spent trying to improve decision-making by giving people even more information about the choices available – often complicating the choice even further?
In a new paper from ideas42, co-founder Antoinette Schoar of MIT’s Sloan School of Management and ideas42’s Saugato Datta argue that this approach of providing more information to help individuals make better decisions is flawed, “since it does not take into account the psychological or behavioral barriers that prevent people from making better decisions.” The solution, they propose, is using effective rules of thumb, or ‘heuristics’, to “enable people to make ‘reasonably good’ decisions without needing to understand all the complex nuances of the situation.” The paper explores the effectiveness of heuristics as a tool to simplify information during decision-making and to help people follow through on their intentions. The authors offer powerful examples of effective heuristics-based methods in three domains: financial education, agriculture, and medicine….(More)”
Pantheon: A Dataset for the Study of Global Cultural Production
Measuring government impact in a social media world
Arthur Mickoleit & Ryan Androsoff at OECD Insights: “There is hardly a government around the world that has not yet felt the impact of social media on how it communicates and engages with citizens. And while the most prominent early adopters in the public sector have tended to be politicians (think of US President Barack Obama’s impressive use of social media during his 2008 campaign), government offices are also increasingly jumping on the bandwagon. Yes, we are talking about those – mostly bricks-and-mortar – institutions that often toil away from the public gaze, managing the public administration in our countries. As the world changes, they too are increasingly engaging in a very public way through social media.
Research from our recent OECD working paper “Social Media Use by Governments” shows that as of November 2014, out of 34 OECD countries, 28 have a Twitter account for the office representing the top executive institution (head of state, head of government, or government as a whole), and 21 have a Facebook account….
But what is the impact governments can or should expect from social media? Is it all just vanity and peer pressure? Surely not.
Take the Spanish national police force (e.g. on Twitter, Facebook & YouTube), a great example of using social media to build long-term engagement, trust and a better public service. That kind of engagement is something so many governments yearn for, and in this case the Spanish police seem to have managed it well.
Or take the Danish “tax daddy” on Twitter – @Skattefar. It started out as the national tax administration’s quest to make it easier for everyone to submit correct tax filings; it is now one of the best examples around of a tax agency gone social.
Government administrations can use social media for internal purposes too. The Government of Canada used public platforms like Twitter and internal platforms like GCpedia and GCconnex to conduct a major employee engagement exercise (Blueprint 2020) to develop a vision for the future of the Canadian federal public service.
And when it comes to raising efficiency in the public sector, read this account of a Dutch research facility’s Director who decided to stop email. Not reduce it, but stop it altogether and replace it with social media.
There are so many other examples that could be cited. But the major question is how can we even begin to appraise the impact of these different initiatives? Because as we’ve known since the 19th century, “if you cannot measure it, you cannot improve it” (quote usually attributed to Lord Kelvin). Some aspects of impact measurement for social media can be borrowed from the private sector with regards to presence, popularity, penetration, and perception. But it’s around purpose that impact measurement agendas will split between the private sector and government. Virtually all companies will want to calculate the return on social media investments based on whether it helps them improve their financial returns. That’s different in the public sector where purpose is rarely defined in commercial terms.
A good impact assessment for social media in the public sector therefore needs to be built around its unique purpose-orientation. This is much more difficult to measure and it will involve a mix of quantitative data (e.g. reach of target audience) and qualitative data (e.g. case studies describing tangible impact). Social Media Use by Governments proposes a framework to start looking at social media measurement in gradual steps – from measuring presence, to popularity, to penetration, to perception, and finally, to purpose-orientation. The aim of this framework is to help governments develop truly relevant metrics and start treating social media activity by governments with the same public management rigour that is applied to other government activities.
This is far from an exact science, but we are beginning the work collaborating with member and partner governments to develop a toolkit that will help decision-makers implement the OECD Recommendation on Digital Government Strategies, including on the issue of social media metrics…(More)”.
Urban technology analysis matrix
New Paper by Pablo Emilio Branchi, Carlos Fernández-Valdivielso, and Ignacio Raúl Matías: “Our objective is to develop a method for better analyzing the utility and impact of new technologies on Smart Cities. We have designed a tool that will evaluate new technologies according to a three-pronged scoring system that considers the impact on physical space, environmental issues, and city residents. The purpose of this tool is to be used by city planners as part of a strategic approach to the implementation of a Smart City initiative in order to reduce unnecessary public spending and ensure the optimal allocation of city resources….
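To give a sense of the kind of scoring the matrix implies, here is a minimal sketch in which each candidate technology is rated on the three dimensions named above and a weighted average yields a final score for comparison. The weights, ratings, and technology names are invented for illustration only; the paper defines its own elements and evaluation methodology.

```python
# Minimal, hypothetical sketch of a three-dimension scoring matrix for urban
# technologies. Dimensions follow the paper's three prongs; all numbers are
# invented for illustration and are not the authors' actual weights or ratings.
from typing import Dict

DIMENSIONS = ("physical_space", "environment", "residents")

def final_score(ratings: Dict[str, int], weights: Dict[str, int]) -> float:
    """Weighted average of per-dimension ratings (e.g. each on a 0-10 scale)."""
    total_weight = sum(weights[d] for d in DIMENSIONS)
    return sum(ratings[d] * weights[d] for d in DIMENSIONS) / total_weight

# Toy comparison of two hypothetical candidate technologies.
weights = {"physical_space": 3, "environment": 4, "residents": 3}
smart_lighting = {"physical_space": 7, "environment": 9, "residents": 6}
sensor_parking = {"physical_space": 5, "environment": 6, "residents": 8}

print(final_score(smart_lighting, weights))  # 7.5
print(final_score(sensor_parking, weights))  # 6.3
```

A planner could then rank candidate technologies by this final score before committing public spending, which is the use the authors envisage for the tool.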
The paper provides a list of the different elements to be analyzed in Smart Cities in the form of a matrix and develops the methodology to evaluate them in order to obtain a final score for technologies prior to their application in cities….Traditional technological scenarios have been challenged, and Smart Cities have become the center of urban competitiveness. A lack of clarity has been detected in the way Smart Cities are described, and we try to establish a methodology for urban policy makers to do so. Because this is a dynamic process that affects several aspects, researchers are encouraged to test the proposed solution further. (More)”
We Need To Innovate The Science Business Model
Greg Satell at Forbes: “In 1945, Vannevar Bush, the man who led the nation’s scientific efforts during World War II, delivered a proposal to President Truman for funding scientific research in the post-war world. Titled Science, The Endless Frontier, it led to the formation of the NSF, NIH, DARPA and other agencies….
One assumption inherent in Bush’s proposal was that institutions would be at the center of scientific life. Scientists from disparate labs could read each other’s papers and meet at an occasional conference, but for the most part, they would be dependent on the network of researchers within their organization and those close by.
Sometimes, the interplay between institutions had major, even historical, impacts, such as John von Neumann’s sponsorship of Alan Turing, but mostly the work you did was largely a function of where you did it. The proximity of Watson, Crick, Rosalind Franklin and Maurice Wilkins, for example, played a major role in the discovery of the structure of DNA.
Yet today, digital technology is changing not only the speed and ease of how we communicate, but the very nature of how we are able to collaborate. When I spoke to Jonathan Adams, Chief Scientist at Digital Science, which develops and invests in software that makes science more efficient, he noted that there is a generational shift underway and said this:
When you talk to people like me, we’re established scientists who are still stuck in the old system of institutions and conferences. But the younger scientists are using technology to access networks and they do so on an ongoing, rather than a punctuated basis. Today, you don’t have to go to a conference or write a paper to exchange ideas.
Evidence would seem to bear this out. The prestigious journal Nature recently noted that the average scientific paper has four times as many authors as it did in the 1950s, when Bush’s career was at its height. Moreover, it’s become common for co-authors to work at far-flung institutions. Scientific practice needs to adapt to this reality.
There has been some progress in this area. The Internet, in fact, was created for the explicit purpose of scientific collaboration. Yet the way in which scientists report and share their findings remains much the same as a century ago.
Moving From Publications To Platforms For Discovery
One especially ripe area for innovation is publishing. Typically, a researcher with a new discovery waits six months to a year for the peer review process to run its course before the work can be published. Even then, many of the results are questionable at best. Nature recently reported that the overwhelming majority of studies can’t be replicated…(More)”
City Governments Are Using Yelp to Tell You Where Not to Eat
Michael Luca and Luther Lowe at HBR Blog: “…in recent years consumer-feedback platforms like TripAdvisor, Foursquare, and Chowhound have transformed the restaurant industry (as well as the hospitality industry), becoming important guides for consumers. Yelp has amassed about 67 million reviews in the last decade. So it’s logical to think that these platforms could transform hygiene awareness too — after all, people who contribute to review sites focus on some of the same things inspectors look for.
It turns out that one way user reviews can transform hygiene awareness is by helping health departments better utilize their resources. The deployment of inspectors is usually fairly random, which means time is often wasted on spot checks at clean, rule-abiding restaurants. Social media can help narrow the search for violators.
Within a given city or area, it’s possible to merge the entire history of Yelp reviews and ratings — some of which contain telltale words or phrases such as “dirty” and “made me sick” — with the history of hygiene violations and feed them into an algorithm that can predict the likelihood of finding problems at reviewed restaurants. Thus inspectors can be allocated more efficiently.
In San Francisco, for example, we broke restaurants into the top half and bottom half of hygiene scores. In a recent paper, one of us (Michael Luca, with coauthor Yejin Choi and her graduate students) showed that we could correctly classify more than 80% of restaurants into these two buckets using only Yelp text and ratings. In the next month, we plan to hold a contest on DrivenData to get even better algorithms to help cities out (we are jointly running the contest). Similar algorithms could be applied in any city and in other sorts of prediction tasks.
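As an illustration only, and not the authors’ actual model, the following minimal sketch shows how review text and average star ratings might be combined to classify restaurants into the top or bottom half of hygiene scores. The input file and column names are assumptions made for the example.

```python
# Hypothetical sketch: predict whether a restaurant falls in the bottom half of
# hygiene scores from its Yelp review text and average rating. File and column
# names are assumptions; this is not the model from the Luca and Choi paper.
import pandas as pd
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# One row per restaurant: concatenated review text, average star rating, and a
# label marking whether its hygiene score is in the bottom half (1) or top half (0).
df = pd.read_csv("restaurants.csv")  # hypothetical input file

text_features = TfidfVectorizer(ngram_range=(1, 2), min_df=5).fit_transform(df["review_text"])
numeric_features = csr_matrix(df[["avg_rating"]].values)
X = hstack([text_features, numeric_features]).tocsr()
y = df["bottom_half_hygiene"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

A health department could sort reviewed restaurants by the model’s predicted probability of violations and send inspectors to the highest-risk ones first, which is the allocation idea described above.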
Another means for transforming hygiene awareness is through the sharing of health-department data with online review sites. The logic is simple: Diners should be informed about violations before they decide on a destination, rather than after.
Over the past two years, we have been working with cities to help them share inspection data with Yelp through an open-data standard that Yelp created in 2012 to encourage officials to put their information in places that are more useful to consumers. In San Francisco, Los Angeles, Raleigh, and Louisville, Kentucky, customers now see hygiene data alongside Yelp reviews. There’s evidence that users are starting to pay attention to this data — click-through rates are similar to those for other features on Yelp….
And there’s no reason this type of data sharing should be limited to restaurant-inspection reports. Why not disclose data about dentists’ quality and regulatory compliance via Yelp? Why not use data from TripAdvisor to help spot bedbugs? Why not use Twitter to understand what citizens are concerned about, and what cities can do about it? Uses of social media data for policy, and widespread dissemination of official data through social media, have the potential to become important means of public accountability. (More)”
Data for good
Key Findings
- Citizens Advice (CAB) and DataKind partnered to develop the Civic Dashboard, a tool that mines data from CAB consultations to understand emerging social issues in the UK.
- Shooting Star Chase volunteers streamlined the referral paths by which children come to be at its hospices, a refinement of the referral system that could save children’s hospices around the country up to £90,000.
- In a study of open grant funding data, NCVO identified 33,000 ‘below the radar’ organisations that do not currently appear in registers and databases on the third sector.
- In their social media analysis of tweets related to the Somerset Floods, Demos found that 39,000 tweets were related to social action.
New ways of capturing, sharing and analysing data have the potential to transform how community and voluntary sector organisations work and how social action happens. However, while analysing and using data is core to how some of the world’s fastest growing businesses understand their customers and develop new products and services, civil society organisations are still some way off from making the most of this potential.
Over the last 12 months Nesta has grant funded a number of research projects that explore two dimensions of how big and open data can be used for the common good. Firstly, how it can be used by charities to develop better products and services and secondly, how it can help those interested in civil society better understand social action and civil society activity.
- Citizens Advice Bureau (CAB) and DataKind, a global community of data scientists interested in how data can be used for a social purpose, were grant funded to explore how a data-driven approach to mining the rich data that CAB holds on social issues in the UK could be used to develop a real-time dashboard to identify emerging social issues. The project also explored how data-driven methods could better help other charities such as St Mungo’s and Buttle UK, and how data could be shared more effectively between charities as part of this process, to create collaborative data-driven projects.
- Five organisations (The RSA, Cardiff University, The Demos Centre for Analysis of Social Media, NCVO and European Alternatives) were grant funded to explore how data-driven methods, such as open data analysis and social media analysis, can help us understand informal social action, often referred to as ‘below the radar’ activity, in new ways.
This paper is not the definitive story of the opportunities in using big and open data for the common good, but it can hopefully provide insight on what can be done and lessons for others interested in exploring the opportunities in these methods….(More).”
'From Atoms to Bits': A Visual History of American Ideas
Derek Thompson in The Atlantic: “A new paper employs a simple technique—counting words in patent texts—to trace the history of American invention, from chemistry to computers…. In a new paper, Mikko Packalen at the University of Waterloo and Jay Bhattacharya of Stanford University, devised a brilliant way to address this question empirically. In short, they counted words in patent texts.
In a series of papers studying the history of American innovation, Packalen and Bhattacharya indexed every one-word, two-word, and three-word phrase that appeared in more than 4 million patent texts in the last 175 years. To focus their search on truly new concepts, they recorded the year those phrases first appeared in a patent. Finally, they ranked each concept’s popularity based on how many times it reappeared in later patents. Essentially, they trawled the billion-word literature of patents to document the birth-year and the lifespan of American concepts, from “plastic” to “world wide web” and “instant messaging.”
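To make the technique concrete, here is a simplified sketch, not Packalen and Bhattacharya’s actual pipeline, of how one might index the phrases in a collection of patent texts, record the year each phrase first appears, and count its reappearances in later patents. The input format is an assumption for the example.

```python
# Illustrative sketch of the word-counting idea: extract 1-, 2-, and 3-word
# phrases from patent texts, record each phrase's first-appearance year, and
# count how often it reappears in later patents. Toy data, not the authors' corpus.
import re
from collections import defaultdict

def ngrams(text, max_n=3):
    """Yield every 1- to max_n-word phrase in the text, lowercased."""
    words = re.findall(r"[a-z]+", text.lower())
    for n in range(1, max_n + 1):
        for i in range(len(words) - n + 1):
            yield " ".join(words[i:i + n])

def index_patents(patents):
    """patents: iterable of (grant_year, text) pairs, assumed sorted by year."""
    first_year = {}                    # phrase -> year it first appeared in a patent
    later_mentions = defaultdict(int)  # phrase -> reappearances in later patents
    for year, text in patents:
        for phrase in set(ngrams(text)):
            if phrase not in first_year:
                first_year[phrase] = year
            elif year > first_year[phrase]:
                later_mentions[phrase] += 1
    return first_year, later_mentions

# Toy usage with invented patent snippets.
patents = [
    (1982, "method for amplifying dna by polymerase chain reaction"),
    (1987, "improved polymerase chain reaction apparatus"),
    (1995, "system for instant messaging over a network"),
]
first_year, later_mentions = index_patents(patents)
print(first_year["polymerase chain reaction"], later_mentions["polymerase chain reaction"])
# -> 1982 1
```

Run over millions of real patents, the birth year and reappearance count of each phrase give exactly the kind of birth-and-lifespan record of concepts that the excerpt describes.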
Here are the 20 most popular sequences of words in each decade from the 1840s to the 2000s. You can see polymerase chain reactions in the middle of the 1980s stack. Since the timeline, as it appears in the paper, is too wide to be visible on this article page, I’ve chopped it up and inserted the color code both above and below the timeline….
Another theme of Packalen and Bhattacharya’s research is that innovation has become more collaborative. Indeed, computers have not only taken over the world of inventions, but also they have changed the geography of innovation, Bhattacharya said. Larger cities have historically held an innovative advantage, because (the theory goes) their density of smarties speeds up debate on the merits of new ideas, which are often born raw and poorly understood. But the researchers found that in the last few decades, larger cities are no more likely to produce new ideas in patents than smaller cities that can just as easily connect online with their co-authors. “Perhaps due to the Internet, the advantage of larger cities appears to be eroding,” Packalen wrote in an email….(More)”
Ad hoc encounters with big data: Engaging citizens in conversations around tabletops
Morten Fjeld, Paweł Woźniak, Josh Cowls, Bonnie Nardi at FirstMonday: “The increasing abundance of data creates new opportunities for communities of interest and communities of practice. We believe that interactive tabletops will allow users to explore data in familiar places such as living rooms, cafés, and public spaces. We propose informal, mobile possibilities for future generations of flexible and portable tabletops. In this paper, we build upon current advances in sensing and in organic user interfaces to propose how tabletops in the future could encourage collaboration and engage users in socially relevant data-oriented activities. Our work focuses on the socio-technical challenges of future democratic deliberation. As part of our vision, we suggest switching from fixed to mobile tabletops and provide two examples of hypothetical interface types: TableTiles and Moldable Displays. We consider how tabletops could foster future civic communities, expanding modes of participation originating in the Greek Agora and in European notions of cafés as locales of political deliberation….(More)”
Fifty Shades of Manipulation
New paper by Cass Sunstein: “A statement or action can be said to be manipulative if it does not sufficiently engage or appeal to people’s capacity for reflective and deliberative choice. One problem with manipulation, thus understood, is that it fails to respect people’s autonomy and is an affront to their dignity. Another problem is that if they are products of manipulation, people’s choices might fail to promote their own welfare, and might instead promote the welfare of the manipulator. To that extent, the central objection to manipulation is rooted in a version of Mill’s Harm Principle: People know what is in their best interests and should have a (manipulation-free) opportunity to make that decision. On welfarist grounds, the norm against manipulation can be seen as a kind of heuristic, one that generally works well, but that can also lead to serious errors, at least when the manipulator is both informed and genuinely interested in the welfare of the chooser.
For the legal system, a pervasive puzzle is why manipulation is rarely policed. The simplest answer is that manipulation has so many shades, and in a social order that values free markets and is committed to freedom of expression, it is exceptionally difficult to regulate manipulation as such. But as the manipulator’s motives become more self-interested or venal, and as efforts to bypass people’s deliberative capacities become more successful, the ethical objections to manipulation become very forceful, and the argument for a legal response is fortified. The analysis of manipulation bears on emerging First Amendment issues raised by compelled speech, especially in the context of graphic health warnings. Importantly, it can also help orient the regulation of financial products, where manipulation of consumer choices is an evident but rarely explicit concern….(More)”.