By giving students information-driven suggestions that lead to smarter actions, technology nudges are intended to tackle a range of problems surrounding the process by which students begin college and make their way to graduation.
New approaches are certainly needed….
There are many reasons for low rates of persistence and graduation, including financial problems, the difficulty of juggling non-academic responsibilities such as work and family, and, for some first-generation students, culture shock. But academic engagement and success are major contributors. That’s why colleges are using behavioral nudges, drawing on data analytics and behavioral psychology, to focus on problems that occur along the academic pipeline:
• Poor student organization around the logistics of going to college
• Unwise course selections that increase the risk of failure and extend time to degree
• Inadequate information about academic progress and the need for academic help
• Unfocused support systems that identify struggling students but don’t directly engage with them
• Difficulty tapping into counseling services
These new ventures, whether originating within colleges or created by outside entrepreneurs, are doing things with data that just couldn’t be done in the past—creating giant databases of student course records, for example, to find patterns of success and failure that result when certain kinds of students take certain kinds of courses.”
Visualizing the legislative process with Sankey diagrams
Kamil Gregor at OpeningParliament.org: “The process of shaping the law often resembles an Indiana Jones maze. Bills and amendments run through an elaborate system of committees, sessions and hearings filled with booby traps before finally reaching the golden idol of a final approval.
Parliamentary monitoring organizations and researchers are often interested in how various pieces of legislation survive in this environment and what strategies can either kill or aid them. This specifically means answering two questions: What is the probability of a bill being approved, and what factors determine this probability?
The legislative process is usually hierarchical: Successful completion of a step in the process is conditional on the completion of all previous steps. Therefore, we may also want to know the probability of completion at each consecutive step, and its determinants.
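As a rough sketch with hypothetical counts (not real legislative data), these step-wise conditional probabilities can be computed directly from how many bills survive each consecutive stage:

```python
import math

# Hypothetical counts of bills surviving each consecutive stage
stages = ["introduced", "passed committee", "passed floor vote", "approved"]
counts = [200, 120, 90, 60]

# Conditional probability of completing each step, given the previous one
step_probs = [counts[i + 1] / counts[i] for i in range(len(counts) - 1)]

# The overall approval probability is the product of the conditional steps,
# which equals the final count divided by the initial count
overall = counts[-1] / counts[0]
assert math.isclose(math.prod(step_probs), overall)
```

Because the process is hierarchical, the overall approval probability factors exactly into the chain of per-step conditional probabilities, which is what a Sankey diagram lets us read off visually.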
A simple way to give a satisfying answer to these questions without wandering into the land of nonlinear logistic regressions is the Sankey diagram: a well-known type of flow chart in which a process is visualized using arrows, with the relative quantities of outcomes represented by the arrows’ widths.
A famous example is a Sankey diagram of Napoleon’s invasion of Russia. We can clearly see how the Grand Army was gradually shrinking as French soldiers were dying or defecting. Another well-known example is the Google Analytics flow chart. It shows how many visitors enter a webpage and then either leave or continue to a different page on the same website. As the number of consecutive steps increases, the number of visitors remaining on the website decreases.
The legislative process can be visualized in the same way. The progress of bills is represented by streams between the various steps in the process, and the width of each stream corresponds to the quantity of bills. A bill can either complete all the steps of the process, or it can “drop out” at some point if it gets rejected.
Let’s take a look…”
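As an illustrative sketch (with made-up data), the source/target/value encoding that Sankey tools such as Plotly or d3-sankey consume can be aggregated from per-bill records of how far each bill progressed:

```python
from collections import Counter

# Hypothetical legislative stages and, for each bill, the index of the
# furthest stage it reached (3 means it completed the whole process).
stages = ["Introduced", "Committee", "Floor vote", "Approved"]
furthest_stage = [3, 3, 1, 2, 0, 3, 2, 1, 3, 0]

# Count every completed transition: a bill that reached stage k
# contributes to each stream 0->1, 1->2, ..., (k-1)->k.
links = Counter()
for furthest in furthest_stage:
    for step in range(furthest):
        links[(step, step + 1)] += 1

# Source/target/value triples; stream widths shrink as bills drop out.
source = [s for s, _ in links]
target = [t for _, t in links]
value = [links[k] for k in links]
```

Feeding these three lists to any Sankey renderer draws the narrowing stream of surviving bills; rejected bills could additionally be routed to explicit “drop-out” sink nodes at each stage.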
The Tech Intellectuals
New Essay by Henry Farrell in Democracy: “A quarter of a century ago, Russell Jacoby lamented the demise of the public intellectual. The cause of death was an improvement in material conditions. Public intellectuals—Dwight Macdonald, I.F. Stone, and their like—once had little choice but to be independent. They had difficulty getting permanent well-paying jobs. However, as universities began to expand, they offered new opportunities to erstwhile unemployables. The academy demanded a high price. Intellectuals had to turn away from the public and toward the practiced obscurities of academic research and prose. In Jacoby’s description, these intellectuals “no longer need[ed] or want[ed] a larger public…. Campuses [were] their homes; colleagues their audience; monographs and specialized journals their media.”
Over the last decade, conditions have changed again. New possibilities are opening up for public intellectuals. Internet-fueled media such as blogs have made it much easier for aspiring intellectuals to publish their opinions. They have fostered the creation of new intellectual outlets (Jacobin, The New Inquiry, The Los Angeles Review of Books), and helped revitalize some old ones too (The Baffler, Dissent). Finally, and not least, they have provided the meat for a new set of arguments about how communications technology is reshaping society.
These debates have created opportunities for an emergent breed of professional argument-crafters: technology intellectuals. Like their predecessors of the 1950s and ’60s, they often make a living without having to work for a university. Indeed, the professoriate is being left behind. Traditional academic disciplines (except for law, which has a magpie-like fascination with new and shiny things) have had a hard time keeping up. New technologies, to traditionalists, are suspect: They are difficult to pin down within traditional academic boundaries, and they look a little too fashionable to senior academics, who are often nervous that their fields might somehow become publicly relevant.
Many of these new public intellectuals are more or less self-made. Others are scholars (often with uncomfortable relationships with the academy, such as Clay Shirky, an unorthodox professor who is skeptical that the traditional university model can survive). Others still are entrepreneurs, like technology and media writer and podcaster Jeff Jarvis, working the angles between public argument and emerging business models….
Different incentives would lead to different debates. In a better world, technology intellectuals might think more seriously about the relationship between technological change and economic inequality. Many technology intellectuals think of the culture of Silicon Valley as inherently egalitarian, yet economist James Galbraith argues that income inequality in the United States “has been driven by capital gains and stock options, mostly in the tech sector.”
They might think more seriously about how technology is changing politics. Current debates are still dominated by pointless arguments between enthusiasts who believe the Internet is a model for a radically better democracy, and skeptics who claim it is the dictator’s best friend.
Finally, they might pay more attention to the burgeoning relationship between technology companies and the U.S. government. Technology intellectuals like to think that a powerful technology sector can enhance personal freedom and constrain the excesses of government. Instead, we are now seeing how a powerful technology sector may enable government excesses. Without big semi-monopolies like Facebook, Google, and Microsoft to hoover up personal information, surveillance would be far more difficult for the U.S. government.
Debating these issues would require a more diverse group of technology intellectuals. The current crop are not diverse in some immediately obvious ways—there are few women, few nonwhites, and few non-English speakers who have ascended to the peak of attention. Yet there is also far less intellectual diversity than there ought to be. The core assumptions of public debates over technology get less attention than they need and deserve.”
Design for Multiple Motivations
Tom Hulme in the Huffington Post: “Community is king for any collaboration platform or social network: Outcomes are primarily a function of the participating users and any software is strictly in their service.
With this in mind, when creating new platforms we define success criteria and ask ourselves who we might attract and engage, where they congregate (online or offline), and what might motivate them to participate.
It’s often claimed that everyone’s incentives need to be the same in any successful system. However, no communities or individuals are identical, and their needs and interests will also differ. Instead, our experience has taught us that a site’s users might have wildly differing incentives and motivations, and if you value diverse input, that’s healthy…
I was fortunate to go to a lecture by Karim Lakhani of Harvard Business School when we began imagining OpenIDEO. He did a wonderful job of showing the range of potential incentives and motivations of different community members in this framework:

Given that we agreed inclusivity was a design principle for OpenIDEO, it followed that we should design as many of these intrinsic and extrinsic motivations into the platform as possible, and that our primary job was to design a system in which they were all aligned toward the same common goal…. More recently, as we have begun applying the OI Engine platform to enterprises, we have been pleasantly surprised at the power of these non-financial motivations in driving contributions from employees. In fact, one client’s toughest challenge when selling the platform into his international bank was convincing the managers that prizes weren’t required to drive employee contribution and might instead compromise collaboration. He was vindicated when thousands of employees actively engaged.”
Patients Take Control of Their Health Care Online
MIT Technology Review: “Patients are collaborating for better health — and, just maybe, radically reduced health-care costs…. Not long ago, Sean Ahrens managed flare-ups of his Crohn’s disease—abdominal pain, vomiting, diarrhea—by calling his doctor and waiting a month for an appointment, only to face an inconclusive array of possible prescriptions. Today, he can call on 4,210 fellow patients in 66 countries who collaborate online to learn which treatments—drugs, diets, acupuncture, meditation, even do-it-yourself infusions of intestinal parasites—bring the most relief.
The online community Ahrens created and launched two years ago, Crohnology.com, is one of the most closely watched experiments in digital health. It lets patients with Crohn’s, colitis, and other inflammatory bowel conditions track symptoms, trade information on different diets and remedies, and generally care for themselves.
The site is at the vanguard of the growing “e-patient” movement that is letting patients take control over their health decisions—and behavior—in ways that could fundamentally change the economics of health care. Investors are particularly interested in the role “peer-to-peer” social networks could play in the $3 trillion U.S. health-care market.

“Patients sharing data about how they feel, the type of treatments they’re using, and how well they’re working is a new behavior,” says Malay Gandhi, chief strategy officer of Rock Health, a San Francisco incubator for health-care startups that invested in Crohnology.com. “If you can get consumers to engage in their health for 15 to 30 minutes a day, there’s the largest opportunity in digital health care.”
Experts say when patients learn from each other, they tend to get fewer tests, make fewer doctors’ visits, and also demand better treatment. “It can lead to better quality, which in many cases will be way more affordable,” says Bob Kocher, an oncologist and former adviser to the Obama administration on health policy.”
Frontiers in Massive Data Analysis
New report from the National Academy of Sciences: “Data mining of massive data sets is transforming the way we think about crisis response, marketing, entertainment, cybersecurity and national intelligence. Collections of documents, images, videos, and networks are being thought of not merely as bit strings to be stored, indexed, and retrieved, but as potential sources of discovery and knowledge, requiring sophisticated analysis techniques that go far beyond classical indexing and keyword counting, aiming to find relational and semantic interpretations of the phenomena underlying the data.
Frontiers in Massive Data Analysis examines the frontier of analyzing massive amounts of data, whether in a static database or streaming through a system. Data at that scale—terabytes and petabytes—is increasingly common in science (e.g., particle physics, remote sensing, genomics), Internet commerce, business analytics, national security, communications, and elsewhere. The tools that work to infer knowledge from data at smaller scales do not necessarily work, or work well, at such massive scale. New tools, skills, and approaches are necessary, and this report identifies many of them, plus promising research directions to explore. Frontiers in Massive Data Analysis discusses pitfalls in trying to infer knowledge from massive data, and it characterizes seven major classes of computation that are common in the analysis of massive data. Overall, this report illustrates the cross-disciplinary knowledge—from computer science, statistics, machine learning, and application disciplines—that must be brought to bear to make useful inferences from massive data.”
How to make a city great
New video and report by McKinsey: “What makes a great city? It is a pressing question because by 2030, 5 billion people—60 percent of the world’s population—will live in cities, compared with 3.6 billion today, turbocharging the world’s economic growth. Leaders in developing nations must cope with urbanization on an unprecedented scale, while those in developed ones wrestle with aging infrastructures and stretched budgets. All are fighting to secure or maintain the competitiveness of their cities and the livelihoods of the people who live in them. And all are aware of the environmental legacy they will leave if they fail to find more sustainable, resource-efficient ways of managing these cities.
To understand the core processes and benchmarks that can transform cities into superior places to live and work, McKinsey developed and analyzed a comprehensive database of urban economic, social, and environmental performance indicators. The research included interviewing 30 mayors and other leaders in city governments on four continents and synthesizing the findings from more than 80 case studies that sought to understand what city leaders did to improve processes and services from urban planning to financial management and social housing.
The result is How to make a city great (PDF–2.1MB), a new report arguing that leaders who make important strides in improving their cities do three things really well:
- They achieve smart growth. …
- They do more with less. Great cities secure all revenues due, explore investment partnerships, embrace technology, make organizational changes that eliminate overlapping roles, and manage expenses. Successful city leaders have also learned that, if designed and executed well, private–public partnerships can be an essential element of smart growth, delivering lower-cost, higher-quality infrastructure and services.
- They win support for change. Change is not easy, and its momentum can even attract opposition. Successful city leaders build a high-performing team of civil servants, create a working environment where all employees are accountable for their actions, and take every opportunity to forge a stakeholder consensus with the local population and business community. They take steps to recruit and retain top talent, emphasize collaboration, and train civil servants in the use of technology.”
Understanding the impact of releasing and re-using open government data
New Report by the European Public Sector Information Platform: “While the last decade has seen a rapid proliferation of open data portals and of tools and applications for re-using data, research and understanding about the impact of opening up public sector information and open government data (OGD hereinafter) has lagged behind.
Until now, there have been some research efforts to structure the concept of the impact of OGD, suggesting various theories of change, methodologies for measuring them, or, in some cases, concrete calculations of the financial benefits that opening government data brings to the table. For instance, the European Commission conducted a study on the pricing of public sector information, which attempted to evaluate the direct and indirect economic impact of opening public data and identified key indicators to monitor the effects of open data portals. Also, the Open Data Research Network issued a background report in April 2012 suggesting a general framework of key indicators to measure the impact of open data initiatives at both the provision and re-use stages.
Building on the research efforts to date, this report will reflect upon the main types of impact OGD may have and will also present key measuring frameworks to observe the change OGD initiatives may bring about.”
Connecting Grassroots to Government for Disaster Management
New Report by the Commons Lab (Wilson Center): “The growing use of social media and other mass collaboration technologies is opening up new opportunities in disaster management efforts, but is also creating new challenges for policymakers looking to incorporate these tools into existing frameworks, according to our latest report.
The Commons Lab, part of the Wilson Center’s Science & Technology Innovation Program, hosted a September 2012 workshop bringing together emergency responders, crisis mappers, researchers, and software programmers to discuss issues surrounding the adoption of these new technologies.
We are now proud to unveil “Connecting Grassroots to Government for Disaster Management: Workshop Summary,” a report discussing the key findings, policy suggestions, and success stories that emerged during the workshop. The report’s release coincides with the tenth annual Disaster Preparedness Month, sponsored by the Federal Emergency Management Agency in the Department of Homeland Security to help educate the public about preparing for emergencies. The report can be downloaded here.”
Open data for accountable governance: Is data literacy the key to citizen engagement?
Camilla Monckton at UNDP’s Voices of Eurasia blog: “How can technology connect citizens with governments, and how can we foster, harness, and sustain the citizen engagement that is so essential to anti-corruption efforts?
UNDP has worked on a number of projects that use technology to make it easier for citizens to report corruption to authorities:
- Serbia’s SMS corruption reporting in the health sector
- Montenegro’s ‘be responsible app’
- Kosovo’s online corruption reporting site kallxo.com
These projects are showing some promising results, and provide insights into how a more participatory, interactive government could develop.
At the heart of the projects is the ability to use citizen generated data to identify and report problems for governments to address….
Wanted: Citizen experts
As Kenneth Cukier, The Economist’s Data Editor, has discussed, data literacy will become the new computer literacy. Big data is still nascent and it is impossible to predict exactly how it will affect society as a whole. What we do know is that it is here to stay and data literacy will be integral to our lives.
It is essential that we understand how to interact with big data and the possibilities it holds.
Data literacy needs to be integrated into the education system. Educating non-experts to analyze data is critical to enabling broad participation in this new data age.
As technology advances, key government functions become automated, and government data sharing increases, newer ways for citizens to engage will multiply.
Technology changes rapidly, but the human mind and societal habits do not. After years of closed government and bureaucratic inefficiency, adopting a new approach to governance will take time and education.
We need to bring up a generation that sees being involved in government decisions as normal, and that views participatory government as a right, not an ‘innovative’ service extended by governments.
What now?
In the meantime, while data literacy lies in the hands of a few, we must continue to connect those who have the technological skills with citizen experts seeking to change their communities for the better – as has been done at many Social Innovation Camps recently (in Montenegro, Ukraine, and Armenia at Mardamej and Mardamej Reloaded, and across the region at Hurilab).
The social innovation camp and hackathon models are an increasingly debated topic (covered by Susannah Vila, David Eaves, Alex Howard and Clay Johnson).
On the whole, evaluations are leading to newer models that focus on greater integration of mentorship to increase sustainability – which I readily support. However, I do have one comment:
Social innovation camps are often criticized for a lack of sustainability – a claim based on the limited number of apps that go beyond the prototype phase. I find a certain sense of irony in this, for isn’t this what innovation is about: Opening oneself up to the risk of failure in the hope of striking something great?
In the words of Vinod Khosla:
“No failure means no risk, which means nothing new.”
As more data is released, the opportunity for new apps and new ways for citizen interaction will multiply and, who knows, someone might come along and transform government just as TripAdvisor transformed the travel industry.”