Reforms to improve U.S. government accountability


Alexander B. Howard and Patrice McDermott in Science: “Five decades after the United States first enacted the Freedom of Information Act (FOIA), Congress has voted to make the first major reforms to the statute since 2007. President Lyndon Johnson signed the first FOIA on 4 July 1966, enshrining in law the public’s right to access information from executive branch government agencies. Scientists and others around the world can use the FOIA to learn what the U.S. government has done in its policies and practices. Proposed reforms should be a net benefit to public understanding of the scientific process and knowledge, by increasing scientists’ access to archival materials and reducing the likelihood of science and scientists being suppressed by official secrecy or bureaucracy.

Although the FOIA has been important for accountability, reform is sorely needed. An analysis of the 15 federal government agencies that received the most FOIA requests found poor to abysmal compliance rates (1, 2). In 2016, the Associated Press found that the Obama Administration had set a new record for unfulfilled FOIA requests (3). Although that has to be considered in the context of a rise in request volume without commensurate increases in resources to address them, researchers have found that most agencies simply ignore routine requests for travel schedules (4). An audit of 165 federal government agencies found that only 40% complied with the E-FOIA Act of 1996; just 67 of them had online libraries that were regularly updated with a substantial number of documents released under FOIA (5).

In the face of growing concerns about compliance, FOIA reform was one of the few recent instances of bicameral bipartisanship in Congress, with the House and Senate each passing bills this spring with broad support. Now that Congress has moved to send the Senate bill on to the president to sign into law, implementation of specific provisions will bear close scrutiny, including the potential impact of disclosure upon scientists who work in or with government agencies (6). Proposed revisions to the FOIA statute would improve how government discloses information to the public, while leaving intact exemptions for privacy, proprietary information, deliberative documents, and national security.

Features of Reforms

One of the major reforms in the House and Senate bills was to codify the “presumption of openness” outlined by President Obama the day after he took office in January 2009, when he declared that FOIA should be administered with a clear presumption: In the face of doubt, “openness” would prevail. This presumption of openness was affirmed by U.S. Attorney General Holder in March 2009. Although these declarations have had limited effect in the agencies (as described above), codifying these reforms into law is crucial not only to ensure that this remains executive branch policy after this president leaves office but also to provide requesters with legal force beyond an executive order….(More)”

The Surprising History of the Infographic


Clive Thompson at the Smithsonian magazine: “As the 2016 election approaches, we’re hearing a lot about “red states” and “blue states.” That idiom has become so ingrained that we’ve almost forgotten where it originally came from: a data visualization.

In the 2000 presidential election, the race between Al Gore and George W. Bush was so razor close that broadcasters pored over electoral college maps—which they typically colored red and blue. What’s more, they talked about those shadings. NBC’s Tim Russert wondered aloud how George Bush would “get those remaining 61 electoral red states, if you will,” and that language became lodged in the popular imagination. America became divided into two colors—data spun into pure metaphor. Now Americans even talk routinely about “purple” states, a mental visualization of political information.

We live in an age of data visualization. Go to any news website and you’ll see graphics charting support for the presidential candidates; open your iPhone and the Health app will generate personalized graphs showing how active you’ve been this week, month or year. Sites publish charts showing how the climate is changing, how schools are segregating, how much housework mothers do versus fathers. And newspapers are increasingly finding that readers love “dataviz”: In 2013, the New York Times’ most-read story for the entire year was a visualization of regional accents across the United States. It makes sense. We live in an age of Big Data. If we’re going to understand our complex world, one powerful way is to graph it.

But this isn’t the first time we’ve discovered the pleasures of making information into pictures. Over a hundred years ago, scientists and thinkers found themselves drowning in their own flood of data—and to help understand it, they invented the very idea of infographics.

**********

The idea of visualizing data is old: After all, that’s what a map is—a representation of geographic information—and we’ve had maps for about 8,000 years. But it was rare to graph anything other than geography. Only a few examples exist: Around the 11th century, a now-anonymous scribe created a chart of how the planets moved through the sky. By the 18th century, scientists were warming to the idea of arranging knowledge visually. The British polymath Joseph Priestley produced a “Chart of Biography,” plotting the lives of about 2,000 historical figures on a timeline. A picture, he argued, conveyed the information “with more exactness, and in much less time, than it [would take] by reading.”

Still, data visualization was rare because data was rare. That began to change rapidly in the early 19th century, because countries began to collect—and publish—reams of information about their weather, economic activity and population. “For the first time, you could deal with important social issues with hard facts, if you could find a way to analyze it,” says Michael Friendly, a professor of psychology at York University who studies the history of data visualization. “The age of data really began.”

An early innovator was the Scottish inventor and economist William Playfair. As a teenager, he was apprenticed to James Watt, the Scottish inventor who perfected the steam engine. Playfair was tasked with drawing up patents, which required him to develop excellent drafting and picture-drawing skills. After he left Watt’s lab, Playfair became interested in economics and was convinced that he could use his facility for illustration to make data come alive.

“An average political economist would have certainly been able to produce a table for publication, but not necessarily a graph,” notes Ian Spence, a psychologist at the University of Toronto who’s writing a biography of Playfair. Playfair, who understood both data and art, was perfectly positioned to create this new discipline.

In one famous chart, he plotted the price of wheat in the United Kingdom against the cost of labor. People often complained about the high cost of wheat and thought wages were driving the price up. Playfair’s chart showed this wasn’t true: Wages were rising much more slowly than the cost of the product.

Playfair’s trade-balance time-series chart, published in his Commercial and Political Atlas, 1786 (Wikipedia)

“He wanted to discover,” Spence notes. “He wanted to find regularities or points of change.” Playfair’s illustrations often look amazingly modern: In one, he drew pie charts—his invention, too—and lines that compared the size of various countries’ populations against their tax revenues. Once again, the chart produced a new, crisp analysis: The British paid far higher taxes than citizens of other nations.

Neurology was not yet a robust science, but Playfair seemed to intuit some of its principles. He suspected the brain processed images more readily than words: A picture really was worth a thousand words. “He said things that sound almost like a 20th-century vision researcher,” Spence adds. Data, Playfair wrote, should “speak to the eyes”—because they were “the best judge of proportion, being able to estimate it with more quickness and accuracy than any other of our organs.” A really good data visualization, he argued, “produces form and shape to a number of separate ideas, which are otherwise abstract and unconnected.”

Soon, intellectuals across Europe were using data visualization to grapple with the travails of urbanization, such as crime and disease….(More)”

DARPA wants to design an army of ultimate automated data scientists


Michael Cooney in NetworkWorld: “Because of a plethora of data from sensor networks, Internet of Things devices and big data resources combined with a dearth of data scientists to effectively mold that data, we are leaving many important applications – from intelligence to science and workforce management – on the table.

It is a situation the researchers at DARPA want to remedy with a new program called Data-Driven Discovery of Models (D3M). The goal of D3M is to develop algorithms and software that help overcome the data-science expertise gap by enabling non-experts to construct complex empirical models through automation of large parts of the model-creation process. If successful, researchers using D3M tools will effectively have access to an army of “virtual data scientists,” DARPA stated.
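
The article does not describe D3M’s methods in any technical detail, but one familiar form of model-creation automation is a search over candidate model families scored by cross-validation, so that a non-expert never has to choose an estimator by hand. The sketch below uses Python and scikit-learn (neither is mentioned in the article; the dataset and candidate list are illustrative assumptions) to show that idea at its smallest.

```python
# Illustrative sketch only: D3M's actual algorithms are not public.
# One simple form of "virtual data scientist": try several model families,
# score each by cross-validation, and keep the best performer.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Stand-in dataset; a real system would ingest the user's sensor or IoT data.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "k_nearest_neighbors": KNeighborsClassifier(n_neighbors=5),
}

# Score every candidate with 5-fold cross-validation and select the best.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(f"selected model: {best} (mean CV accuracy {scores[best]:.3f})")
```

Production systems in this vein also automate feature engineering and hyperparameter tuning, but the principle is the same: replace hand-picked modeling decisions with an automated search.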


This army of virtual data scientists is needed because some experts project deficits of 140,000 to 190,000 data scientists worldwide in 2016 alone, and increasing shortfalls in coming years. Also, because the process of building empirical models is so manual, the models’ relative sophistication and value are often limited, DARPA stated.

“We have an urgent need to develop machine-based modeling for users with no data-science background. We believe it’s possible to automate certain aspects of data science, and specifically to have machines learn from prior example how to construct new models,” said Wade Shen, program manager in DARPA’s Information Innovation Office, in a statement….(More)”

Big Data Challenges: Society, Security, Innovation and Ethics


Book edited by Bunnik, A., Cawley, A., Mulqueen, M., and Zwitter, A.: “This book brings together an impressive range of academic and intelligence professional perspectives to interrogate the social, ethical and security upheavals in a world increasingly driven by data. Written in a clear and accessible style, it offers fresh insights into the deep-reaching implications of Big Data for communication, privacy and organisational decision-making. It seeks to demystify developments around Big Data before evaluating their current and likely future implications for areas as diverse as corporate innovation, law enforcement, data science, journalism, and food security. The contributors call for a rethinking of the legal, ethical and philosophical frameworks that inform the responsibilities and behaviours of state, corporate, institutional and individual actors in a more networked, data-centric society. In doing so, the book addresses the real-world risks, opportunities and potentialities of Big Data….(More)”

Better research through video games


Simon Parkin at the New Yorker: “…it occurred to Szantner and Revaz that the tremendous amount of time and energy that people put into games could be co-opted in the name of human progress. That year, they founded Massively Multiplayer Online Science, a company that pairs game makers with scientists.

This past March, the first fruits of their conversation in Geneva appeared in EVE Online, a complex science-fiction game set in a galaxy composed of tens of thousands of stars and planets, and inhabited by half a million or so people from across the Internet, who explore and do battle daily. EVE was launched in 2003 by C.C.P., a studio based in Reykjavík, but players have only recently begun to contribute to scientific research. Their task is to assist with the Human Protein Atlas (H.P.A.), a Swedish-run effort to catalogue proteins and the genes that encode them, in both normal tissue and cancerous tumors. “Humans are, by evolution, very good at quickly recognizing patterns,” Emma Lundberg, the director of the H.P.A.’s Subcellular Atlas, a database of high-resolution images of fluorescently dyed cells, told me. “This is what we exploit in the game.”

The work, dubbed Project Discovery, fits snugly into EVE Online’s universe. At any point, players can take a break from their dogfighting, trading, and political machinations to play a simple game within the game, finding commonalities and differences between some thirteen million microscope images. In each one, the cell’s innards have been color-coded—blue for the nucleus (the cell’s brain), red for microtubules (the cell’s scaffolding), and green for anywhere that a protein has been detected. After completing a tutorial, players tag the image using a list of twenty-nine options, including “nucleus,” “cytoplasm,” and “mitochondria.” When enough players reach a consensus on a single image, it is marked as “solved” and handed off to the scientists at the H.P.A. “In terms of the pattern recognition and classification, it resembles what we are doing as researchers,” Lundberg said. “But the game interface is, of course, much cooler than our laboratory information-management system. I would love to work in-game only.”
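
The article gives the ingredients of the pipeline (a fixed tag vocabulary, many players per image, a consensus rule) without stating the actual thresholds. A minimal sketch of consensus aggregation, with assumed values for the vote minimum and agreement fraction, might look like this:

```python
# Minimal sketch of consensus aggregation for crowd-sourced image tags.
# MIN_VOTES and CONSENSUS_FRACTION are illustrative assumptions; the article
# does not state Project Discovery's actual rule.
from collections import Counter

MIN_VOTES = 10             # wait until enough players have tagged the image
CONSENSUS_FRACTION = 0.8   # share of votes the winning tag must capture

def aggregate(tags):
    """Return the consensus tag for one image, or None if still unresolved.

    `tags` is the list of labels submitted by different players, drawn from
    the game's fixed vocabulary ("nucleus", "cytoplasm", "mitochondria", ...).
    """
    if len(tags) < MIN_VOTES:
        return None
    label, count = Counter(tags).most_common(1)[0]
    if count / len(tags) >= CONSENSUS_FRACTION:
        return label       # image is "solved"; hand it off to the H.P.A.
    return None

votes = ["nucleus"] * 9 + ["cytoplasm"] * 2
print(aggregate(votes))    # "nucleus": 11 votes, 9/11 agreement >= 0.8
```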

Rather than presenting the project as a worthy extracurricular activity, EVE Online’s designers have cast it as an extension of the game’s broader fiction. Players work for the Sisters of EVE, a religious humanitarian-aid organization, which rewards their efforts with virtual currency. This can be used to purchase items in the game, including a unique set of armor designed by one of C.C.P.’s artists, Andrei Cristea. (The armor is available only to players who participate in Project Discovery, and therefore, like a rare Coco Chanel frock, is desirable as much for its scarcity as for its design.) Ensuring that the mini-game be thought of as more than a short-term novelty or diversion was an issue that Linzi Campbell, Project Discovery’s lead designer, considered carefully. “The hardest challenge has been turning the image-analysis process into a game that is strong enough to motivate the player to continue playing,” Campbell told me. “The fun comes from the feeling of mastery.”

Evidently, her efforts were successful. On the game’s first day of release, there were four hundred thousand submissions from players. According to C.C.P., some people have been so caught up in the task that they have played for fifteen hours without interruption. “EVE players turned out to be a perfect crowd for this type of citizen science,” Lundberg said. She anticipates that the first phase of the project will be completed this summer. If the work meets this target, players will be presented with more advanced images and tasks, such as the classification of protein patterns in complex tumor-tissue samples. Eventually, their efforts could aid in the development of new cancer drugs….(More)”

The Behavioral Economics Guide 2016


Guide edited by Alain Samson: “Since the publication of last year’s edition of the Behavioral Economics (BE) Guide, behavioral science has continued to exert its influence in various domains of scholarship and practical applications. The Guide’s host, behavioraleconomics.com, has grown to become a popular online hub for behavioral science ideas and resources. Our domain’s new blog publishes articles from academics and practitioners alike, reflecting the wide range of areas in which BE ideas are generated and used. …

Past editions of the BE Guide focused on BE theory (2014) and behavioral science practice (2015). The aim of this year’s issue is to provide different perspectives on the field and novel applications. This editorial offers a selection of recent (often critical) thinking around behavioral economics research and applications. It is followed by Q&As with Richard Thaler and Varun Gauri. The subsequent section provides a range of absorbing contributions from authors who work in applied behavioral science. The final section includes a further expanded encyclopedia of BE (and related) concepts, a new listing of behavioral science events, more graduate programs, and a larger selection of journals, reflecting the growth of the field and our continued efforts to compile relevant information….(More)”

Nudging for Success


Press Release: “A groundbreaking report published today by ideas42 reveals several innovations that college administrators and policymakers can leverage to significantly improve college graduation rates at a time when completion is more out of reach than ever for millions of students.

The student path through college to graduation day is strewn with subtle, often invisible barriers that, over time, hinder students’ progress and cause some of them to drop out entirely. In Nudging for Success: Using Behavioral Science to Improve the Postsecondary Student Journey, ideas42 focuses on simple, low-cost ways to combat these unintentional obstacles and support student persistence and success at every stage in the college experience, from pre-admission to post-graduation. Teams worked with students, faculty and administrators at colleges around the country.

Even for students whose tuition is covered by financial aid, whose academic preparation is exemplary, and who are able to commit themselves full-time to their education, the subtle logistical and psychological sticking points can have a huge impact on their ability to persist and fully reap the benefits of a higher education.

Less than 60% of full-time students graduate from four-year colleges within six years, and less than 30% graduate from community colleges within three years. There are a myriad of factors often cited as deterrents to finishing school, such as the cost of tuition or the need to juggle family and work obligations, but behavioral science and the results of this report demonstrate that lesser-known dynamics like self-perception are also at play.

From increasing financial aid filing to fostering positive friend groups and a sense of belonging on campus, the 16 behavioral solutions outlined in Nudging for Success represent the potential for significant impact on the student experience and persistence. At Arizona State University, sending behaviorally designed email reminders to students and parents about the Free Application for Federal Student Aid (FAFSA) priority deadline increased submissions by 72% and led to an increase in grant awards. Freshman retention among low-income, first-generation, under-represented, or other students most at risk of dropping out increased by 10% at San Francisco State University with the use of a testimonial video, self-affirming exercises, and monthly messaging aimed at first-time students.

“This evidence demonstrates how behavioral science can be the key to uplifting millions of Americans through education,” said Alissa Fishbane, Managing Director at ideas42. “By approaching the completion crisis from the whole experience of students themselves, administrators and policymakers have the opportunity to reduce the number of students who start, but do not finish, college—students who take on the financial burden of tuition but miss out on the substantial benefits of earning a degree.”

The results of this work drive home the importance of examining the college experience from the student perspective and through the lens of human behavior. College administrators and policymakers can replicate these gains at institutions across the country to make it simpler for students to complete the degree they started in ways that are often easier and less expensive to implement than existing alternatives—paving the way to stronger economic futures for millions of Americans….(More)”

Using Behavioral Science to Combat Climate Change


Cass R. Sunstein and Lucia A. Reisch in the Oxford Research Encyclopedia of Climate Science (Forthcoming): “Careful attention to choice architecture promises to open up new possibilities for reducing greenhouse gas emissions – possibilities that go well beyond, and that may supplement or complement, the standard tools of economic incentives, mandates, and bans. How, for example, do consumers choose between climate-friendly products or services and alternatives that are potentially damaging to the climate but less expensive? The answer may well depend on the default rule. Indeed, climate-friendly default rules may well be a more effective tool for altering outcomes than large economic incentives. The underlying reasons include the power of suggestion; inertia and procrastination; and loss aversion. If well-chosen, climate-friendly defaults are likely to have large effects in reducing the economic and environmental harms associated with various products and activities. In deciding whether to establish climate-friendly defaults, choice architects (subject to legal constraints) should consider both consumer welfare and a wide range of other costs and benefits. Sometimes that assessment will argue strongly in favor of climate-friendly defaults, particularly when both economic and environmental considerations point in their direction. Notably, surveys in the United States and Europe show that majorities in many nations are in favor of climate-friendly defaults….(More)”

Finding Pathways to More Equitable and Meaningful Public-Scientist Partnerships


Daniela Soleri et al. in Citizen Science: Theory and Practice: “For many, citizen science is exciting because of the possibility for more diverse, equitable partnerships in scientific research with outcomes considered meaningful and useful by all, including public participants. This was the focus of a symposium we organized at the 2015 conference of the Citizen Science Association. Here we synthesize points made by symposium participants and our own reflections.

Professional science has a participation problem that is part of a larger equity problem in society. Inequity in science has negative consequences, including a failure to address the needs and goals arising from diverse human and social experiences (for example, lack of attention to issues such as environmental contamination that disproportionately impact under-represented populations) and a failure to recognize the pervasive effects of structural racism. Inequity also encourages mistrust of science and scientists. A perception that science is practiced for the sole benefit of dominant social groups is reinforced when investigations of urgent community concerns such as hydraulic fracturing are questioned as being biased endeavors.

Defined broadly, citizen science can challenge and change this inequity and mistrust, but only if it reflects the diversity of publics, and if it doesn’t reinforce existing inequities in science and society. Key will be the way that science is portrayed: Acknowledging the presence of bias in all scientific research and the tools available for minimizing this, and demonstrating the utility of science for local problem solving and policy change. Symposium participants called for reflexive research, mutual learning, and other methods for supporting more equitable engagement in practice and in the activities of the Citizen Science Association…(More)”.

Is artificial intelligence key to dengue prevention?


BreakDengue: “Dengue fever outbreaks are increasing in both frequency and magnitude. Not only that, the number of countries that could potentially be affected by the disease is growing all the time.

This growth has led to renewed efforts to address the disease, and a pioneering Malaysian researcher was recently recognized for his efforts to harness the power of big data and artificial intelligence to accurately predict dengue outbreaks.

Dr. Dhesi Baha Raja received the Pistoia Alliance Life Science Award at King’s College London in April of this year, for developing a disease prediction platform that employs technology and data to give people advance warning of disease outbreaks. The medical doctor and epidemiologist has spent years working to develop AIME (Artificial Intelligence in Medical Epidemiology)…

it relies on a complex algorithm that analyzes a wide range of data collected by local governments, as well as satellite image recognition systems. Over 20 variables, such as weather, wind speed, wind direction, thunderstorms, solar radiation and rainfall schedules, are included and analyzed. Population models and geographical terrain are also included. The ultimate result of this intersection between epidemiology, public health and technology is a map that clearly illustrates the probability and location of the next dengue outbreak.
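
AIME’s model itself is not published, but the description above has a recognizable shape: environmental and demographic variables per location go in, and an outbreak probability per location comes out. A toy sketch of that shape, with entirely hypothetical features and data (the real platform uses more than 20 variables and its own proprietary methods), could look like this:

```python
# Toy sketch only: AIME's actual model is not public. This illustrates the
# general shape described in the article: per-location environmental
# variables in, a per-location outbreak probability out.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical historical records, one row per 400m grid cell per month.
# Columns: rainfall (mm), mean temperature (C), wind speed (km/h),
# population density (people per km^2).
X_hist = rng.random((1000, 4)) * np.array([300, 35, 40, 20000])
y_hist = rng.integers(0, 2, size=1000)  # 1 = outbreak recorded for that cell

model = LogisticRegression(max_iter=1000).fit(X_hist, y_hist)

# Score current conditions for a few grid cells; plotting these
# probabilities on a map gives the kind of output the article describes.
X_now = np.array([[250.0, 31.0, 8.0, 12000.0],
                  [40.0, 28.0, 20.0, 3000.0]])
for cell, p in enumerate(model.predict_proba(X_now)[:, 1]):
    print(f"grid cell {cell}: outbreak probability {p:.2f}")
```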

The ground-breaking platform can predict dengue fever outbreaks up to two or three months in advance, with an accuracy approaching 88.7 per cent and within a 400m radius. Dr. Dhesi has just returned from Rio de Janeiro, where the platform was employed in a bid to fight dengue in advance of this summer’s Olympics. In Brazil, its perceived accuracy was around 84 per cent, whereas in Malaysia it was over 88 per cent, giving it an average accuracy of 86.37 per cent.

The web-based application has been tested in two states within Malaysia, Kuala Lumpur, and Selangor, and the first ever mobile app is due to be deployed across Malaysia soon. Once its capability is adequately tested there, it will be rolled out globally. Dr. Dhesi’s team are working closely with mobile digital service provider Webe on this.

Making the app free to download will ensure the service is accessible to all, Dr. Dhesi explains.
“With the web-based application, this could only be used by public health officials and agencies. We recognized the need for us to democratize this health service to the community, and the only way to do this is to provide the community with the mobile app.”
This will also enable the gathering of even greater knowledge on the possibility of dengue outbreaks in high-risk areas, as well as monitoring the changing risks as people move to different areas, he adds….(More)”