How to Convince Men to Help the Poor


At Pacific Standard: “Please give. It’s a plea we are confronted with constantly, as a variety of charities implore us to help them help the less fortunate.

Whether we get out our checkbook or throw the request in the recycling bin is determined, in part, by the specific way the request is framed. But a new study suggests non-profits might want to create two separate appeals: One aimed at men, and another at women.

A research team led by Stanford University sociologist Robb Willer reports that empathy-based appeals tend to be effective with women. But as a rule, men—who traditionally give somewhat less to anti-poverty charities—need to be convinced that their self-interest aligns with that of the campaign.

“Framing poverty as an issue that negatively affects all Americans increased men’s willingness to donate to the cause, eliminating the gender gap,” the researchers write in the journal Social Science Research….

“While this reframing resonated with men, who were otherwise less likely to spontaneously express concern about poverty,” Willer and his colleagues write, “it had the opposite effect for women, who might have felt less motivated to express concern about poverty when doing so seemed inconsistent with feeling empathy for the poor.”…(More)”

The Cathedral of Computation


At The Atlantic: “We’re not living in an algorithmic culture so much as a computational theocracy. Algorithms are everywhere, supposedly. We are living in an “algorithmic culture,” to use the author and communication scholar Ted Striphas’s name for it. Google’s search algorithms determine how we access information. Facebook’s News Feed algorithms determine how we socialize. Netflix’s and Amazon’s collaborative filtering algorithms choose products and media for us. You hear it everywhere. “Google announced a change to its algorithm,” a journalist reports. “We live in a world run by algorithms,” a TED talk exhorts. “Algorithms rule the world,” a news report threatens. Another upgrades rule to dominion: “The 10 Algorithms that Dominate Our World.”…
It’s part of a larger trend. The scientific revolution was meant to challenge tradition and faith, particularly a faith in religious superstition. But today, Enlightenment ideas like reason and science are beginning to flip into their opposites. Science and technology have become so pervasive and so distorted that they have turned into a new type of theology.
The worship of the algorithm is hardly the only example of the theological reversal of the Enlightenment—for another sign, just look at the surfeit of nonfiction books promising insights into “The Science of…” anything, from laughter to marijuana. But algorithms hold a special station in the new technological temple because computers have become our favorite idols….
Once you adopt skepticism toward the algorithmic- and the data-divine, you can no longer construe any computational system as merely algorithmic. Think about Google Maps, for example. It’s not just mapping software running via computer—it also involves geographical information systems, geolocation satellites and transponders, human-driven automobiles, roof-mounted panoramic optical recording systems, international recording and privacy law, physical- and data-network routing systems, and web/mobile presentational apparatuses. That’s not algorithmic culture—it’s just, well, culture….(More).”

Would You Share Private Data for the Good of City Planning?


Henry Grabar at NextCity: “The proliferation of granular data on automobile movement, drawn from smartphones, cab companies, sensors and cameras, is sharpening our sense of how cars travel through cities. Panglossian seers believe the end of traffic jams is nigh.
This information will change cities beyond their roads. Real-time traffic data may lead to reworked intersections and new turning lanes, but understanding cars is in some ways a stand-in for understanding people. There’s traffic as traffic and traffic as proxy, notes Brett Goldstein, an urban science fellow at the University of Chicago who served as that city’s first data officer from 2011 to 2013. “We’d be really naive, in thinking about how we make cities better,” he says, “to only consider traffic for what it is.”
Even a small subset of a city’s car data goes a long way. Consider the raft of discrete findings that have emerged from the records of New York City taxis.
Researchers at the Massachusetts Institute of Technology, led by Paolo Santi, showed that cab-sharing could reduce taxi mileage by 40 percent. Their counterparts at NYU, led by Claudio Silva, mapped activity around hubs like train stations and airports and during hurricanes.
“You start to build actual models of how people move, and where they move,” observes Silva, the head of disciplines at NYU’s Center for Urban Science and Progress (CUSP). “The uses of this data for non-traffic engineering are really substantial.”…
Many of these ideas are hypothetical, for the moment, because so-called “granular” data is so hard to come by. That’s one reason the release of New York’s taxi cab data spurred so many studies — it’s an oasis of information in a desert of undisclosed records. Corporate entreaties, like Uber’s pending data offering to Boston, don’t always meet researchers’ standards. “It’s going to be a lot of superficial data, and it’s not clear how usable it’ll be at this point,” explains Sarah Kaufman, the digital manager at NYU’s Rudin Center for Transportation….
Yet Americans seem much more alarmed by the collection of location data than by other privacy breaches.
How can data utopians convince the hoi polloi to share their comings and goings? One thought: Make them secure. Mike Flowers, the founder of New York City’s Office of Data Analytics and a fellow at NYU’s CUSP, told me it might be time to consider establishing a quasi-governmental body that people would trust to make their personal data anonymous before they are channeled into government projects. (New York City’s Taxi and Limousine Commission did not do a very good job at this, which led to Gawker publishing a dozen celebrity cab rides.)
Another idea is to frame open data as a beneficial trade-off. “When people provide information, they want to realize the benefit of the information,” Goldstein says.
Users tell the routing company Waze where they are and get a smoother commute in return. Progressive Insurance offers drivers a “Snapshot” tracker. If it likes the way you drive, the company will lower your rates. It’s not hard to imagine that, in the long run, drivers will be penalized for refusing such a device…. (More).”
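
A note on the TLC episode mentioned above: the trip records were reportedly “anonymized” by publishing an unsalted MD5 hash of each taxi’s medallion number. Because valid medallions follow a few short, known patterns, the entire keyspace can be hashed in seconds and every record reversed by lookup. Here is a minimal Python sketch of that attack idea (the two formats and function names are illustrative, not the exact released schema):

```python
import hashlib
import string
from itertools import product

# NYC medallion numbers follow a few short, known patterns
# (e.g., digit-letter-digit-digit such as "5X55"). The whole keyspace
# is well under a million strings, so unsalted MD5 is no protection.
def candidate_medallions():
    digits, letters = string.digits, string.ascii_uppercase
    for d1, l, d2, d3 in product(digits, letters, digits, digits):
        yield f"{d1}{l}{d2}{d3}"            # e.g. "5X55"
    for l1, l2, d1, d2, d3 in product(letters, letters, digits, digits, digits):
        yield f"{l1}{l2}{d1}{d2}{d3}"       # e.g. "AB123"

# Hash the entire keyspace once; each "anonymized" record then falls
# to a single dictionary lookup.
lookup = {hashlib.md5(m.encode()).hexdigest(): m
          for m in candidate_medallions()}

def deanonymize(hashed_id):
    return lookup.get(hashed_id)  # original medallion, or None
```

The design lesson for any trusted data intermediary of the kind Flowers proposes: hashing a small identifier space is not anonymization. A keyed hash (e.g., HMAC with a secret key) or outright aggregation is the minimum.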

Study: Complaining on Twitter correlates with heart disease risks


At Ars Technica: “Tweets prove a better regional heart-disease predictor than many classic factors. This week, researchers at the University of Pennsylvania released a study that found a surprising correlation between two kinds of maps: those that mapped the county-level frequency of cardiac disease, and those that mapped the emotional state of an area’s Twitter posts.
In all, researchers sifted through over 826 million tweets, made available by Twitter’s research-friendly “garden hose” server access, then narrowed those down to roughly 146 million tweets posted with geolocation data from over 1,300 counties (each county needed at least 50,000 tweets to qualify). The team then measured each county’s expected “health” level based on the frequency of certain phrases, using dictionaries that had been validated for gauging emotional states. Negative statements about health, jobs, and attractiveness—along with a bump in curse words—would put a county in the “risk” camp, while words like “opportunities,” “overcome,” and “weekend” added more points to a county’s “protective” rating.
Not only did this measure correlate strongly with age-adjusted heart-disease rates, it turned out to be a more effective predictor of higher or lower disease likelihood than “ten classical predictors” combined, including education, obesity, and smoking. Twitter beat that data by a rate of 42 percent to 36 percent….Psychological Science, 2014. DOI: 10.1177/0956797614557867….(More)”
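
The pipeline described above is simple to sketch: tokenize each county’s tweets, tally hits against “risk” and “protective” dictionaries, score the county, and correlate the scores with age-adjusted heart-disease rates. A toy Python version under those assumptions (the word lists are placeholders, not the study’s validated dictionaries):

```python
from collections import Counter

# Toy stand-ins for the study's validated emotional-language dictionaries.
RISK_WORDS = {"hate", "bored", "tired", "annoyed"}           # negativity/hostility
PROTECTIVE_WORDS = {"opportunities", "overcome", "weekend"}  # engagement/optimism

def county_score(tweets):
    """Fraction of risk words minus fraction of protective words
    across all tokens in one county's tweets."""
    counts = Counter(word for tweet in tweets for word in tweet.lower().split())
    total = sum(counts.values()) or 1
    risk = sum(counts[w] for w in RISK_WORDS) / total
    protective = sum(counts[w] for w in PROTECTIVE_WORDS) / total
    return risk - protective

def pearson_r(xs, ys):
    """Pearson correlation between county scores and disease rates."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Usage: r = pearson_r([county_score(t) for t in tweets_by_county], disease_rates)
```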

Citizen Science in America’s DNA


Keynote by NOAA Chief Scientist Dr. Richard Spinrad at the forum “Tracking a Changing Climate”: “Citizen science is part of America’s DNA. For centuries, citizens not trained in science have helped shape our understanding of Earth.
Thomas Jefferson turned Lewis and Clark into citizen scientists when he asked them to explore the landscape, wildlife and weather during their journeys of the West. They investigated plants, animals and geography, and came back with maps, sketches and journals. These new data were some of the first pieces of environmental intelligence defining our young nation. President Jefferson instilled citizen science in my own agency’s DNA by creating the Survey of the Coast, a NOAA legacy agency focused on charting and protecting the entire coast of our Nation.
The National Weather Service’s Cooperative Observer Program, begun in 1890, is an outstanding example of citizen science.  Last year, NOAA honored an observer who has provided weather observations every day for 80 years. Volunteer citizen scientists have transcribed more than 68,000 pages of Arctic ship logs, adding to the long-term climate record by populating a database with historic weather and sea ice observations. Also, citizen scientists are providing new estimates of cyclone intensity by interpreting satellite images.
There is tremendous value in the capability of citizen scientists to feed local data into their own communities’ forecasts. In September 2013, for example, extreme floods struck Colorado and New Mexico, washing out formal observation systems and tracking instruments. The reports of about 200 citizen scientists ensured that real-time data still flowed into the National Weather Service Flood Warning System, contributing to what has been called the best-mapped extreme rain event in Colorado history, and possibly nationwide.
The Community Collaborative Rain, Hail and Snow (CoCoRaHS) Network played a pivotal role in this mapping. CoCoRaHS also shows how citizen science can help make data collection straightforward and inexpensive. To measure the impact and size of hail, for example, it uses a Styrofoam sheet covered with tin foil, creating a “hail pad” that has proven to be quite accurate.
The recognized value of citizen science is growing rapidly. NOAA has an app to crowdsource real-time precipitation data. If you feel a raindrop or spot a snowflake, report it through NOAA’s mPING app. Precipitation reports have already topped 600,000, and the National Weather Service uses them to fine-tune forecasts…(More).”

Big Data Now


At O’Reilly Radar: “In the four years we’ve been producing Big Data Now, our wrap-up of important developments in the big data field, we’ve seen tools and applications mature, multiply, and coalesce into new categories. This year’s free wrap-up of Radar coverage is organized around eight themes:

  • Cognitive augmentation: As data processing and data analytics become more accessible, jobs that can be automated will go away. But to be clear, there are still many tasks where the combination of humans and machines produces superior results.
  • Intelligence matters: Artificial intelligence is now playing a bigger and bigger role in everyone’s lives, from sorting our email to rerouting our morning commutes, from detecting fraud in financial markets to predicting dangerous chemical spills. The computing power and algorithmic building blocks to put AI to work have never been more accessible.
  • The convergence of cheap sensors, fast networks, and distributed computation: The amount of quantified data available is increasing exponentially — and aside from tools for centrally handling huge volumes of time-series data as it arrives, devices and software are getting smarter about placing their own data accurately in context, extrapolating without needing to ‘check in’ constantly.
  • Reproducing, managing, and maintaining data pipelines: The coordination of processes and personnel within organizations to gather, store, analyze, and make use of data.
  • The evolving, maturing marketplace of big data components: Open-source components like Spark, Kafka, Cassandra, and ElasticSearch are reducing the need for companies to build in-house proprietary systems. On the other hand, vendors are developing industry-specific suites and applications optimized for the unique needs and data sources in a field.
  • The value of applying techniques from design and social science: While data science knows human behavior in the aggregate, design works in the particular, where A/B testing won’t apply — you only get one shot to communicate your proposal to a CEO, for example. Similarly, social science enables extrapolation from sparse data. Both sets of tools enable you to ask the right questions, and scope your problems and solutions realistically.
  • The importance of building a data culture: An organization that is comfortable with gathering data, curious about its significance, and willing to act on its results will perform demonstrably better than one that doesn’t. These priorities must be shared throughout the business.
  • The perils of big data: From poor analysis (driven by false correlation or lack of domain expertise) to intrusiveness (privacy invasion, price profiling, self-fulfilling predictions), big data has negative potential.

Download our free snapshot of big data in 2014, and follow the story this year on Radar.”

Competition-Based Innovation: The Case of the X Prize Foundation


Paper by Mokter Hossain and Ilkka Kauranen in the Journal of Organization Design/SSRN: “The use of competition-based processes for the development of innovations is increasing. In parallel with the increasing use of competition-based innovation in business firms, this model of innovation is successfully being used by non-profit organizations for advancing the development of science and technology. One such non-profit organization is the X Prize Foundation, which designs and manages innovation competitions to encourage scientific and technological development. The objective of this article is to analyze the X Prize Foundation and three of the competitions it has organized in order to identify the challenges of competition-based innovation and how to overcome them….(More)”.

Doing Social Network Research: Network-based Research Design for Social Scientists


New book by Garry Robins: “Are you struggling to design your social network research? Are you looking for a book that covers more than social network analysis? If so, this is the book for you! With straightforward guidance on research design and data collection, as well as social network analysis, this book takes you start to finish through the whole process of doing network research. Open the book and you’ll find practical, ‘how to’ advice and worked examples relevant to PhD students and researchers from across the social and behavioural sciences. The book covers:

  • Fundamental network concepts and theories
  • Research questions and study design
  • Social systems and data structures
  • Network observation and measurement
  • Methods for data collection
  • Ethical issues for social network research
  • Network visualization
  • Methods for social network analysis
  • Drawing conclusions from social network results

This is a perfect guide for all students and researchers looking to do empirical social network research…(More)”

The Cobweb: Can the Internet be archived?


In The New Yorker: “….The average life of a Web page is about a hundred days. ….Web pages don’t have to be deliberately deleted to disappear. Sites hosted by corporations tend to die with their hosts. When MySpace, GeoCities, and Friendster were reconfigured or sold, millions of accounts vanished. …
The Web dwells in a never-ending present. It is—elementally—ethereal, ephemeral, unstable, and unreliable. Sometimes when you try to visit a Web page what you see is an error message: “Page Not Found.” This is known as “link rot,” and it’s a drag, but it’s better than the alternative. More often, you see an updated Web page; most likely the original has been overwritten. (To overwrite, in computing, means to destroy old data by storing new data in their place; overwriting is an artifact of an era when computer storage was very expensive.) Or maybe the page has been moved and something else is where it used to be. This is known as “content drift,” and it’s more pernicious than an error message, because it’s impossible to tell that what you’re seeing isn’t what you went to look for: the overwriting, erasure, or moving of the original is invisible. For the law and for the courts, link rot and content drift, which are collectively known as “reference rot,” have been disastrous. In providing evidence, legal scholars, lawyers, and judges often cite Web pages in their footnotes; they expect that evidence to remain where they found it as their proof, the way that evidence on paper—in court records and books and law journals—remains where they found it, in libraries and courthouses. But a 2013 survey of law- and policy-related publications found that, at the end of six years, nearly fifty per cent of the URLs cited in those publications no longer worked. According to a 2014 study conducted at Harvard Law School, “more than 70% of the URLs within the Harvard Law Review and other journals, and 50% of the URLs within United States Supreme Court opinions, do not link to the originally cited information.” The overwriting, drifting, and rotting of the Web is no less catastrophic for engineers, scientists, and doctors. Last month, a team of digital library researchers based at Los Alamos National Laboratory reported the results of an exacting study of three and a half million scholarly articles published in science, technology, and medical journals between 1997 and 2012: one in five links provided in the notes suffers from reference rot. It’s like trying to stand on quicksand.
The footnote, a landmark in the history of civilization, took centuries to invent and to spread. It has taken mere years nearly to destroy. A footnote used to say, “Here is how I know this and where I found it.” A footnote that’s a link says, “Here is what I used to know and where I once found it, but chances are it’s not there anymore.” It doesn’t matter whether footnotes are your stock-in-trade. Everybody’s in a pinch. Citing a Web page as the source for something you know—using a URL as evidence—is ubiquitous. Many people find themselves doing it three or four times before breakfast and five times more before lunch. What happens when your evidence vanishes by dinnertime?… (More)”.
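
Measuring reference rot, at least crudely, amounts to requesting each cited URL and classifying what comes back. A minimal sketch using Python’s `requests` library (the classification here is deliberately rough; studies like the Harvard one also compare what loads against archived copies to detect drift):

```python
import requests

def check_url(url, timeout=10):
    """Crudely classify a cited URL: link rot (gone or erroring),
    possible content drift (answers from a different address), or ok."""
    try:
        resp = requests.get(url, timeout=timeout, allow_redirects=True)
    except requests.RequestException:
        return "link rot (unreachable)"
    if resp.status_code >= 400:
        return "link rot (HTTP %d)" % resp.status_code
    if resp.url.rstrip("/") != url.rstrip("/"):
        # The server answered, but not at the cited address: a weak
        # signal of drift. Robust studies diff content, not just URLs.
        return "possible content drift"
    return "ok"

# Usage: print(check_url("http://example.com/cited-page"))
```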

New Journal: Citizen Science: Theory and Practice


“Citizen Science: Theory and Practice is an open-access, peer-reviewed journal published by Ubiquity Press on behalf of the Citizen Science Association. It focuses on advancing the field of citizen science by providing a venue for citizen science researchers and practitioners – scientists, information technologists, conservation biologists, community health organizers, educators, evaluators, urban planners, and more – to share best practices in conceiving, developing, implementing, evaluating, and sustaining projects that facilitate public participation in scientific endeavors in any discipline.”