Could a tweet or a text increase college enrollment or student achievement?


At the Conversation: “Can a few text messages, a timely email or a letter increase college enrollment and student achievement? Such “nudges,” designed carefully using behavioral economics, can be effective.

But when do they work – and when not?

Barriers to success

Consider students who have just graduated high school intending to enroll in college. Even among those who have been accepted to college, 15 percent of low-income students do not enroll by the next fall. For the large share who intend to enroll in community colleges, this number can be as high as 40 percent….

Can a few text messages or a timely email overcome these barriers? My research uses behavioral economics to design low-cost, scalable interventions aimed at improving education outcomes. Behavioral economics suggests several important features to make a nudge effective: simplify complex information, make tasks easier to complete and ensure that support is timely.

So, what makes for an effective nudge?

Improving college enrollment

In 2012, researchers Ben Castleman and Lindsay Page sent 10 text messages to nearly 2,000 college-intending students the summer after high school graduation. These messages provided just-in-time reminders on key financial aid, housing and enrollment deadlines from early July to mid-August.

Instead of set meetings with counselors, students could reply to messages and receive on-demand support from college guidance counselors to complete key tasks.

In another intervention – the Expanding College Opportunities Project (ECO) – researchers Caroline Hoxby and Sarah Turner worked to help high-achieving, low-income students enroll in colleges on par with their achievement. The intervention reached students as a packet in the mail.

The mailer simplified information by providing a list of colleges tailored to each student’s location along with information about net costs, graduation rates, and application deadlines. Moreover, the mailer included easy-to-claim application fee waivers. All these features reduced both the complexity and cost in applying to a wider range of colleges.

In both cases, researchers found that the interventions significantly improved college outcomes. College enrollment went up by 15 percent in the intervention designed to reduce summer melt for community college students. The ECO project increased the likelihood of admission to a selective college by 78 percent.

When there is no impact

While these interventions are promising, there are important caveats.

For instance, our preliminary findings from ongoing research show that information alone may not be enough. We sent emails and letters to more than one hundred thousand college applicants about financial aid and education-related tax benefits. However, we didn’t provide any additional support to help families through the process of claiming these benefits.

In other words, we didn’t provide any support to complete the tasks – no fee waivers, no connection to guidance counselors – just the email and the letter. Without this support to answer questions or help families complete forms to claim the benefits, we found no impact, even when students opened the emails.

More generally, “nudges” often lead to modest impacts and should be considered only a part of the solution. But there’s a dearth of low-cost, scalable interventions in education, and behavioral economics can help.

Identifying the crucial decision points – when applications are due, forms need to be filled out or school choices are made – and supplying the just-in-time support to families is key….(More).”

Moneyballing Criminal Justice


Anne Milgram in the Atlantic: “…One area in which the potential of data analysis is still not adequately realized, however, is criminal justice. This is somewhat surprising given the success of CompStat, a law enforcement management tool that uses data to figure out how police resources can be used to reduce crime and hold law enforcement officials accountable for results. CompStat is widely credited with contributing to New York City’s dramatic reduction in serious crime over the past two decades. Yet data-driven decision-making has not expanded to the whole of the criminal justice system.

But it could. And, in this respect, the front end of the system — the part of the process that runs from arrest through sentencing — is particularly important. At this stage, police, prosecutors, defenders, and courts make key choices about how to deal with offenders — choices that, taken together, have an enormous impact on crime. Yet most jurisdictions do not collect or analyze the data necessary to know whether these decisions are being made in a way that accomplishes the most important goals of the criminal justice system: increased public safety, decreased recidivism, reduced cost, and the fair, efficient administration of justice.

Even in jurisdictions where good data exists, a lack of technology is often an obstacle to using it effectively. Police, jails, courts, district attorneys, and public defenders each keep separate information systems, the data from which is almost never pulled together and analyzed in a way that could answer the questions that matter most: Who is in our criminal justice system? What crimes have been charged? What risks do individual offenders pose? And which option would best protect the public and make the best use of our limited resources?
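
To make the point concrete, here is a minimal sketch – in Python, with entirely invented records and field names, not any real jurisdiction’s schema – of the kind of cross-system analysis the article calls for. Even a simple join on a shared identifier begins to answer “who is in our criminal justice system, and what happened to them?”:

```python
import pandas as pd

# Hypothetical extracts from three siloed systems; identifiers and
# field names are invented for illustration.
arrests = pd.DataFrame({
    "person_id": [101, 102, 103],
    "charge": ["burglary", "assault", "drug possession"],
})
jail = pd.DataFrame({
    "person_id": [101, 103],
    "days_detained_pretrial": [45, 3],
})
courts = pd.DataFrame({
    "person_id": [101, 102, 103],
    "disposition": ["convicted", "dismissed", "pending"],
})

# Pull the silos together on the shared identifier, keeping people
# who never appear in a given system (how="left").
merged = (arrests
          .merge(jail, on="person_id", how="left")
          .merge(courts, on="person_id", how="left"))
print(merged)
```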

While debates about prison over-crowding, three strikes laws, and mandatory minimum sentences have captured public attention, the importance of what happens between arrest and sentencing has gone largely unnoticed. Even though I ran the criminal justice system in New Jersey, one of the largest states in the country, I had not realized the magnitude of the pretrial issues until I was tasked by the Laura and John Arnold Foundation with figuring out which aspects of criminal justice had the most need and presented the greatest opportunity for reform….

Technology could help us leverage data to identify offenders who will pose unacceptable risks to society if they are not behind bars and distinguish them from those defendants who will have lower recidivism rates if they are supervised in the community or given alternatives to incarceration before trial. Likewise, it could help us figure out which terms of imprisonment, alternatives to incarceration, and other interventions work best — and for whom. And the list does not end there.
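
As a sketch of the general technique only – not the Arnold Foundation’s actual risk assessment – such predictions are typically built by fitting a statistical model to historical pretrial outcomes and then scoring new defendants:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up history: [prior_arrests, age, prior_failures_to_appear],
# labelled 1 if the person was re-arrested before trial.
X = np.array([[0, 19, 0], [5, 32, 2], [1, 45, 0],
              [8, 27, 3], [2, 23, 1], [0, 51, 0]])
y = np.array([0, 1, 0, 1, 1, 0])

model = LogisticRegression().fit(X, y)

# Estimated pretrial re-arrest risk for a hypothetical new defendant
# with 3 prior arrests, age 29, and 1 prior failure to appear.
print(model.predict_proba([[3, 29, 1]])[0, 1])
```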

The truth is our criminal justice system already makes these decisions every day. But it makes them without knowing whether they’re the right ones. That needs to change. If data is powerful enough to transform baseball, health care, and education, it can do the same for criminal justice….(More)”

The Small World Initiative: An Innovative Crowdsourcing Platform for Antibiotics


Ana Maria Barral et al in FASEB Journal: “The Small World Initiative™ (SWI) is an innovative program that encourages students to pursue careers in science and sets forth a unique platform to crowdsource new antibiotics. It centers on an introductory biology course through which students perform original hands-on field and laboratory research in the hunt for new antibiotics. Through a series of student-driven experiments, students collect soil samples, isolate diverse bacteria, test their bacteria against clinically relevant microorganisms, and characterize those showing inhibitory activity. This is particularly relevant since over two-thirds of antibiotics originate from soil bacteria or fungi. SWI’s approach also provides a platform to crowdsource antibiotic discovery by tapping into the intellectual power of many people concurrently addressing a global challenge and advances promising candidates into the drug development pipeline. This unique class approach harnesses the power of active learning to achieve both educational and scientific goals…..We will discuss our preliminary student evaluation results, which show the compelling impact of the program in comparison to traditional introductory courses. Ultimately, the mission of the program is to provide an evidence-based approach to teaching introductory biology concepts in the context of a real-world problem. This approach has been shown to be particularly impactful on underrepresented STEM talent pools, including women and minorities….(More)”

Scientists Are Just as Confused About the Ethics of Big-Data Research as You


Sarah Zhang at Wired: “When a rogue researcher last week released 70,000 OkCupid profiles, complete with usernames and sexual preferences, people were pissed. When Facebook researchers manipulated stories appearing in Newsfeeds for a mood contagion study in 2014, people were really pissed. OkCupid filed a copyright claim to take down the dataset; the journal that published Facebook’s study issued an “expression of concern.” Outrage has a way of shaping ethical boundaries. We learn from mistakes.

Shockingly, though, the researchers behind both of those big data blowups never anticipated public outrage. (The OkCupid research does not seem to have gone through any kind of ethical review process, and a Cornell ethics review board approved the Facebook experiment.) And that shows just how untested the ethics of this new field of research is. Unlike medical research, which has been shaped by decades of clinical trials, the risks—and rewards—of analyzing big, semi-public databases are just beginning to become clear.

And the patchwork of review boards responsible for overseeing those risks is only slowly inching into the 21st century. Under the Common Rule in the US, federally funded research has to go through ethical review. Rather than one unified system, though, every single university has its own institutional review board, or IRB. Most IRB members are researchers at the university, most often in the biomedical sciences. Few are professional ethicists.

Even fewer have computer science or security expertise, which may be necessary to protect participants in this new kind of research. “The IRB may make very different decisions based on who is on the board, what university it is, and what they’re feeling that day,” says Kelsey Finch, policy counsel at the Future of Privacy Forum. There are hundreds of these IRBs in the US—and they’re grappling with research ethics in the digital age largely on their own….

Or maybe other institutions, like the open science repositories asking researchers to share data, should be picking up the slack on ethical issues. “Someone needs to provide oversight, but the optimal body is unlikely to be an IRB, which usually lacks subject matter expertise in de-identification and re-identification techniques,” Michelle Meyer, a bioethicist at Mount Sinai, writes in an email.

Even among Internet researchers familiar with the power of big data, attitudes vary. When Katie Shilton, an information technology researcher at the University of Maryland, interviewed 20 online data researchers, she found “significant disagreement” over issues like the ethics of ignoring Terms of Service and obtaining informed consent. Surprisingly, the researchers also said that ethical review boards had never challenged the ethics of their work—but peer reviewers and colleagues had. Various groups like the Association of Internet Researchers and the Center for Applied Internet Data Analysis have issued guidelines, but the people who actually have power—those on institutional review boards—are only just catching up.

Outside of academia, companies like Microsoft have started to institute their own ethical review processes. In December, Finch at the Future of Privacy Forum organized a workshop called Beyond IRBs to consider processes for ethical review outside of federally funded research. After all, modern tech companies like Facebook, OkCupid, Snapchat, and Netflix sit atop a trove of data that 20th-century social scientists could only have dreamed of.

Of course, companies experiment on us all the time, whether it’s websites A/B testing headlines or grocery stores changing the configuration of their checkout line. But as these companies hire more data scientists out of PhD programs, academics are seeing an opportunity to bridge the divide and use that data to contribute to public knowledge. Maybe updated ethical guidelines can be forged out of those collaborations. Or it just might be a mess for a while….(More)”

BeMyEye: Crowdsourcing is making it easier to gather data fast


Jack Torrance at Management Today: “The era of big data is upon us. Dozens of well-funded start-ups have sprung up of late claiming to be able to turn raw data into ‘actionable insights’ that would have been unimaginable a few years ago. But the process of actually collecting data is still not always straightforward….

London-based start-up BeMyEye (not to be confused with Be My Eyes, an iPhone app that claims to ‘help the blind see’) has built an army of casual data gatherers that report back via their phones. ‘For companies that sell their product to high street retailers or supermarkets, being able to verify the presence of their product, the adequacy of the promotions, the positioning in relation to competitors, this is all invaluable intelligence,’ CEO Luca Pagano tells MT. ‘Our crowd is able to observe and feed this information back to these brands very, very quickly.’…

They can do more than check prices in shops. Some of its clients (which include Heineken, Illy and Three) have used the service to check billboards they are paying for have actually been put up correctly. ‘We realised the level of [billboard] compliance is actually below 90%,’ says Pagano. It can also be used to generate sales leads….

BeMyEye isn’t the only company that’s exploring this business model. San Francisco company Premise is using a similar network of data gatherers to monitor food prices and other metrics in developing countries for NGOs and governments as well as commercial organisations. It’s not hard to see why they would be an attractive proposition for clients, but the challenge for both of these businesses will be ensuring they can find enough reliable and effective data gatherers to keep the information flowing in at a high enough quality….(More)”
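
One plausible way to tackle that quality problem – a generic crowdsourcing technique, not necessarily what BeMyEye or Premise actually do – is redundancy: assign each task to several observers and accept only answers a majority agree on. A minimal sketch with invented billboard-compliance reports:

```python
from collections import Counter

# Invented reports: three observers per billboard, each recording
# whether the paid-for poster was actually displayed correctly.
reports = {
    "billboard_17": ["compliant", "compliant", "not_compliant"],
    "billboard_42": ["not_compliant", "not_compliant", "not_compliant"],
}

for site, answers in reports.items():
    answer, votes = Counter(answers).most_common(1)[0]
    if votes >= 2:  # accept only when at least two of three agree
        print(f"{site}: {answer} ({votes}/{len(answers)} agree)")
    else:
        print(f"{site}: no consensus, send another observer")
```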

Building Data Responsibility into Humanitarian Action


Stefaan Verhulst at The GovLab: “Next Monday, May 23rd, governments, non-profit organizations and citizen groups will gather in Istanbul at the first World Humanitarian Summit. A range of important issues will be on the agenda, not least of which is the refugee crisis confronting the Middle East and Europe. Also on the agenda will be an issue of growing importance and relevance, even if it does not generate front-page headlines: the increasing potential (and use) of data in the humanitarian context.

To explore this topic, a new paper, “Building Data Responsibility into Humanitarian Action,” is being released today, and will be presented tomorrow at the Understanding Risk Forum. This paper is the result of a collaboration between the United Nations Office for the Coordination of Humanitarian Affairs (OCHA), The GovLab (NYU Tandon School of Engineering), the Harvard Humanitarian Initiative, and Leiden University Centre for Innovation. It seeks to identify the potential benefits and risks of using data in the humanitarian context, and begins to outline an initial framework for the responsible use of data in humanitarian settings.

Both anecdotal and more rigorously researched evidence points to the growing use of data to address a variety of humanitarian crises. The paper discusses a number of data risk case studies, including the use of call data to fight malaria in Africa; satellite imagery to identify security threats on the border between Sudan and South Sudan; and transaction data to increase the efficiency of food delivery in Lebanon. These early examples (along with a few others discussed in the paper) have begun to show the opportunities offered by data and information. More importantly, they also help us better understand the risks, including and especially those posed to privacy and security.

One of the broader goals of the paper is to integrate the specific and the theoretical, in the process building a bridge between the deep, contextual knowledge offered by initiatives like those discussed above and the broader needs of the humanitarian community. To that end, the paper builds on its discussion of case studies to begin establishing a framework for the responsible use of data in humanitarian contexts. It identifies four “Minimum Humanitarian standards for the Responsible use of Data” and four “Characteristics of Humanitarian Organizations that use Data Responsibly.” Together, these eight attributes can serve as a roadmap or blueprint for humanitarian groups seeking to use data. In addition, the paper also provides a four-step practical guide for a data responsibility framework (see also earlier blog)….(More)” Full Paper: Building Data Responsibility into Humanitarian Action

Virtual memory: the race to save the information age


Review by Richard Ovenden in the Financial Times of:
You Could Look It Up: The Reference Shelf from Ancient Babylon to Wikipedia, by Jack Lynch, Bloomsbury, RRP£25/$30, 464 pages

When We Are No More: How Digital Memory Is Shaping Our Future, by Abby Smith Rumsey, Bloomsbury, RRP£18.99/$28, 240 pages

Ctrl + Z: The Right to Be Forgotten, by Meg Leta Jones, NYU Press, RRP£20.99/$29.95, 284 pages

“…For millions of people, technological devices have become essential tools in keeping memories alive — to the point where it can feel as though events without an impression in silicon have somehow not been fully experienced. In under three decades, the web has expanded to contain more than a billion sites. Every day about 300m digital photographs, more than 100 terabytes’ worth, are uploaded to Facebook. An estimated 204m emails are sent every minute and, with 5bn mobile devices in existence, the generation of new content looks set to continue its rapid growth.
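
(The arithmetic behind those figures is easy to check; this back-of-the-envelope sketch simply takes the excerpt’s numbers at face value.)

```python
# Figures from the excerpt: ~300m photos (~100 TB) per day, 204m emails/min.
photos_per_day = 300e6
bytes_per_day = 100e12  # 100 terabytes
print(f"average photo: {bytes_per_day / photos_per_day / 1e3:.0f} KB")  # ~333 KB

emails_per_minute = 204e6
print(f"emails per day: {emails_per_minute * 60 * 24:,.0f}")  # ~294 billion
```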

We celebrate this growth, and rightly. Today knowledge is created and consumed at a rate that would have been inconceivable a generation ago; instant access to the fruits of millennia of civilisation now seems like a natural state of affairs. Yet we overlook — at our peril — just how unstable and transient much of this information is. Amid the proliferation there is also constant decay: phenomena such as “bit rot” (the degradation of software programs over time), “data rot” (the deterioration of digital storage media) and “link rot” (web links pointing to online resources that have become permanently unavailable) can render information inaccessible. This affects everything from holiday photos and email correspondence to official records: to give just one example, a Harvard study published in 2013 found that 50 per cent of links on the US Supreme Court opinions website were broken.
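
Link rot of the kind that Harvard study measured is straightforward to detect programmatically. A minimal checker might look like the sketch below (Python with the requests library; the URLs are placeholders, and a real survey would also need to catch pages that still load but no longer contain the cited content):

```python
import requests

# Placeholder URLs standing in for citations harvested from a corpus
# of opinions; a real study would extract these from the documents.
links = [
    "https://example.com/cited-source",
    "https://example.org/another-citation",
]

broken = 0
for url in links:
    try:
        resp = requests.head(url, timeout=10, allow_redirects=True)
        if resp.status_code >= 400:
            broken += 1
    except requests.RequestException:
        broken += 1  # unreachable hosts count as rot too

print(f"{broken}/{len(links)} links broken ({100 * broken / len(links):.0f}%)")
```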

Are we creating a problem that future generations will not be able to solve? Could the early decades of the 21st century even come to seem, in the words of the internet pioneer Vint Cerf, like a “digital Dark Age”? Whether or not such fears are realised, it is becoming increasingly clear that the migration of knowledge to formats permitting rapid and low-cost copying and dissemination, but in which the base information cannot survive without complex and expensive intervention, requires that we choose, more actively than ever before, what to remember and what to forget….(More)”

Post, Mine, Repeat: Social Media Data Mining Becomes Ordinary


In this book, Helen Kennedy argues that as social media data mining becomes more and more ordinary, as we post, mine and repeat, new data relations emerge. These new data relations are characterised by a widespread desire for numbers and the troubling consequences of this desire, and also by the possibility of doing good with data and resisting data power, by new and old concerns, and by instability and contradiction. Drawing on action research with public sector organisations, interviews with commercial social insights companies and their clients, focus groups with social media users and other research, Kennedy provides a fascinating and detailed account of living with social media data mining inside the organisations that make up the fabric of everyday life….(More)”

Outstanding Challenges in Recent Open Government Data Initiatives


Paper by Usamah A. Algemili: “In recent years, we have witnessed increasing interest in government data. Many governments around the world have sensed the value of their passive data sets. These governments have started Open Data policies, yet many countries are still working to convert raw data into useful representations. This paper surveys previous Open Data initiatives. It discusses the various challenges that open data projects may encounter during the transformation from passive data sets towards an Open Data culture, and it reaches out to project teams for their practical assessments. To that end, an online form was distributed among project teams. The questionnaire was developed in alignment with the previous literature on data-integration challenges. 138 eligible professionals participated, and their responses were analyzed by the researcher. The results section identifies the most critical challenges from the project teams’ point of view; the findings show four obstacles that stand out as critical challenges facing project teams. This paper casts light on these challenges and attempts to bridge the gap between current guidelines and practical experience. Accordingly, it presents the current infrastructure of the Open Data framework, followed by additional recommendations that may lead to successful implementation of Open Data development….(More)”

Twelve principles for open innovation 2.0


Martin Curley in Nature: “A new mode of innovation is emerging that blurs the lines between universities, industry, governments and communities. It exploits disruptive technologies — such as cloud computing, the Internet of Things and big data — to solve societal challenges sustainably and profitably, and more quickly and ably than before. It is called open innovation 2.0 (ref. 1).

Such innovations are being tested in ‘living labs’ in hundreds of cities. In Dublin, for example, the city council has partnered with my company, the technology firm Intel (of which I am a vice-president), to install a pilot network of sensors to improve flood management by measuring local rainfall and river levels, and detecting blocked drains. Eindhoven in the Netherlands is working with electronics firm Philips and others to develop intelligent street lighting. Communications-technology firm Ericsson, the KTH Royal Institute of Technology, IBM and others are collaborating to test self-driving buses in Kista, Sweden.

Yet many institutions and companies remain unaware of this radical shift. They often confuse invention and innovation. Invention is the creation of a technology or method. Innovation concerns the use of that technology or method to create value. The agile approaches needed for open innovation 2.0 conflict with the ‘command and control’ organizations of the industrial age (see ‘How innovation modes have evolved’). Institutional or societal cultures can inhibit user and citizen involvement. Intellectual-property (IP) models may inhibit collaboration. Government funders can stifle the emergence of ideas by requiring that detailed descriptions of proposed work are specified before research can begin. Measures of success, such as citations, discount innovation and impact. Policymaking lags behind the marketplace….

Keys to collaborative innovation

  1. Purpose. Efforts and intellects aligned through commitment rather than compliance deliver an impact greater than the sum of their parts. A great example is former US President John F. Kennedy’s vision of putting a man on the Moon. Articulating a shared value that can be created is important. A win–win scenario is more sustainable than a win–lose outcome.
  2. Partner. The ‘quadruple helix’ of government, industry, academia and citizens joining forces aligns goals, amplifies resources, attenuates risk and accelerates progress. A collaboration between Intel, University College London, Imperial College London and Innovate UK’s Future Cities Catapult is working in the Intel Collaborative Research Institute to improve people’s well-being in cities, for example to enable reduction of air pollution.
  3. Platform. An environment for collaboration is a basic requirement. Platforms should be integrated and modular, allowing a plug-and-play approach. They must be open to ensure low barriers to use, catalysing the evolution of a community. Challenges in security, standards, trust and privacy need to be addressed. For example, the Open Connectivity Foundation is securing interoperability for the Internet of Things.
  4. Possibilities. Returns may not come from a product but from the business model that enabled it, a better process or a new user experience. Strategic tools are available, such as industrial designer Larry Keeley’s breakdown of innovations into ten types in four categories: finance, process, offerings and delivery.
  5. Plan. Adoption and scale should be the focus of innovation efforts, not product creation. Around 20% of value is created when an innovation is established; more than 80% comes when it is widely adopted (ref. 7). Focus on the ‘four Us’: utility (value to the user); usability; user experience; and ubiquity (designing in network effects).
  6. Pyramid. Enable users to drive innovation. They inspired two-thirds of innovations in semiconductors and printed circuit boards, for example. Lego Ideas encourages children and others to submit product proposals — submitters must get 10,000 supporters for their idea to be reviewed. Successful inventors get 1% of royalties.
  7. Problem. Most innovations come from a stated need. Ethnographic research with users, customers or the environment can identify problems and support brainstorming of solutions. Create a road map to ensure the shortest path to a solution.
  8. Prototype. Solutions need to be tested and improved through rapid experimentation with users and citizens. Prototyping shows how applicable a solution is, reduces the risks of failures and can reveal pain points. ‘Hackathons’, where developers come together to rapidly try things, are increasingly common.
  9. Pilot. Projects need to be implemented in the real world on small scales first. The Intel Collaborative Research Institute runs research projects in London’s parks, neighbourhoods and schools. Barcelona’s Laboratori — which involves the quadruple helix — is pioneering open ‘living lab’ methods in the city to boost culture, knowledge, creativity and innovation.
  10. Product. Prototypes need to be converted into viable commercial products or services through scaling up and new infrastructure globally. Cloud computing allows even small start-ups to scale with volume, velocity and resilience.
  11. Product service systems. Organizations need to move from just delivering products to also delivering related services that improve sustainability as well as profitability. Rolls-Royce sells ‘power by the hour’ — hours of flight time rather than jet engines — enabled by advanced telemetry. The ultimate goal of open innovation 2.0 is a circular or performance economy, focused on services and reuse rather than consumption and waste.
  12. Process. Innovation is a team sport. Organizations, ecosystems and communities should measure, manage and improve their innovation processes to deliver results that are predictable, probable and profitable. Agile methods supported by automation shorten the time from idea to implementation….(More)”