Millions of LinkedIn Members Want to Volunteer Their Skills for Good [INFOGRAPHIC]


LinkedIn: “What do 140,000 marketers, 4,000 Googlers, and 170,000 C-level executives have in common?
Answer: They all want to use their skills for good!
More than 4 million professionals like them have expressed interest in joining a nonprofit board or doing skills-based volunteering – or both.
GreatNonprofits, an organization that helps people find and rate nonprofits, recently found two of these professionals on LinkedIn. Both are product managers who are spending six months developing critical web-based tools to help the organization expand its reach.
“Both volunteers are creative, analytical, and really engaging deeply with staff,” says GreatNonprofits Founder and CEO Perla Ni. “We’re encouraged by how many talented, passionate people there are on LinkedIn who are willing to commit time to help nonprofits.”
As a professional, you can signal you want to use your skills for good on your LinkedIn profile by checking the boxes for “Joining a nonprofit board” and/or “Skills-based volunteering” in the Volunteer Experience & Causes section. If you’ve already done that, try searching for volunteer opportunities in your area.
For nonprofit organizations, we’ve made it quick and easy to find professionals who are eager to help. Use our free Advanced Search tool to find these talented needles in the haystack.
We took a closer look at the millions of LinkedIn members who’ve raised their hands for volunteer or board service, and found some interesting insights. For starters, the stereotype that millennials are hungry for purpose actually rings true. Also, the highest concentration of would-be volunteers is in the Midwest. And they have a ton of valuable skills – from strategic planning to marketing – for nonprofits to leverage.
Check out the infographic below for more, or visit nonprofit.linkedin.com.”

With a Few Bits of Data, Researchers Identify ‘Anonymous’ People


in the New York Times: “Even when real names and other personal information are stripped from big data sets, it is often possible to use just a few pieces of the information to identify a specific person, according to a study to be published Friday in the journal Science.

In the study, titled “Unique in the Shopping Mall: On the Reidentifiability of Credit Card Metadata,” a group of data scientists analyzed credit card transactions made by 1.1 million people in 10,000 stores over a three-month period. The data set contained details including the date of each transaction, amount charged and name of the store.

Although the information had been “anonymized” by removing personal details like names and account numbers, the uniqueness of people’s behavior made it easy to single them out.

In fact, knowing just four random pieces of information was enough to reidentify 90 percent of the shoppers as unique individuals and to uncover their records, researchers calculated. And that uniqueness of behavior — or “unicity,” as the researchers termed it — combined with publicly available information, like Instagram or Twitter posts, could make it possible to reidentify people’s records by name.
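
To make “unicity” concrete, here is a minimal Python sketch using invented toy data rather than the study’s credit card records: given a handful of transactions known to belong to a target person, it estimates how often those points match exactly one trace in the whole dataset.

```python
# Minimal sketch of "unicity" (not the study's code; toy, invented data).
# Given k transactions known to belong to a target, how often do those k points
# match exactly one person's trace in the dataset?
import random

def unicity(traces, k=4, trials=500, seed=0):
    """traces: dict person_id -> list of (shop, day, price_band) tuples."""
    rng = random.Random(seed)
    eligible = [p for p, recs in traces.items() if len(recs) >= k]
    unique = 0
    for _ in range(trials):
        target = rng.choice(eligible)
        known = rng.sample(traces[target], k)          # the k points an attacker knows
        matches = sum(1 for recs in traces.values()
                      if all(point in recs for point in known))
        unique += (matches == 1)
    return unique / trials

# Toy data: three people, a few coarse transactions each (entirely made up).
traces = {
    "A": [("cafe", 1, "$"), ("grocer", 1, "$$"), ("gym", 3, "$$"), ("cafe", 5, "$")],
    "B": [("cafe", 1, "$"), ("cinema", 2, "$$"), ("grocer", 4, "$$"), ("bar", 5, "$$")],
    "C": [("grocer", 1, "$$"), ("cafe", 2, "$"), ("gym", 3, "$$"), ("cafe", 5, "$")],
}
print(unicity(traces, k=4, trials=200))                # fraction uniquely identified
```

With the real credit card traces, the authors report that this fraction stays around 90 percent when only four points are known.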

“The message is that we ought to rethink and reformulate the way we think about data protection,” said Yves-Alexandre de Montjoye, a graduate student in computational privacy at the M.I.T. Media Lab who was the lead author of the study. “The old model of anonymity doesn’t seem to be the right model when we are talking about large-scale metadata.”

The analysis of large data sets containing details on people’s behavior holds great potential to improve public health, city planning and education.

But the study calls into question the standard methods many companies, hospitals and government agencies currently use to anonymize their records. It may also give ammunition to some technologists and privacy advocates who have challenged the consumer-tracking processes used by advertising software and analytics companies to tailor ads to so-called anonymous users online….(More).”

Waze a danger to cops? Police reveal their own location on social media


The Architecture of Privacy


Book by “Technology’s influence on privacy has become a matter of everyday concern for millions of people, from software architects designing new products to political leaders and consumer groups. This book explores the issue from the perspective of technology itself: how privacy-protective features can become a core part of product functionality, rather than added on late in the development process.
The Architecture of Privacy will not only help empower software engineers, but also show policymakers, academics, and advocates that, through an arsenal of technical tools, engineers can form the building blocks of nuanced policies that maximize privacy protection and utility—a menu of what to demand in new technology.
Topics include:

  • How technology and privacy policy interact and influence one another
  • Privacy concerns about government and corporate data collection practices
  • Approaches to federated systems as a component of privacy-protecting architecture
  • Alternative approaches to compartmentalized access to data
  • Methods to limit the amount of data revealed in searches, sidestepping all-or-nothing choices
  • Techniques for data purging and responsible data retention
  • Keeping and analyzing audit logs as part of a program of comprehensive system oversight
  • … (More)

The Cathedral of Computation


at the Atlantic: “We’re not living in an algorithmic culture so much as a computational theocracy.  Algorithms are everywhere, supposedly. We are living in an “algorithmic culture,” to use the author and communication scholar Ted Striphas’s name for it. Google’s search algorithms determine how we access information. Facebook’s News Feed algorithms determine how we socialize. Netflix’s and Amazon’s collaborative filtering algorithms choose products and media for us. You hear it everywhere. “Google announced a change to its algorithm,” a journalist reports. “We live in a world run by algorithms,” a TED talk exhorts. “Algorithms rule the world,” a news report threatens. Another upgrades rule to dominion: “The 10 Algorithms that Dominate Our World.”…
It’s part of a larger trend. The scientific revolution was meant to challenge tradition and faith, particularly a faith in religious superstition. But today, Enlightenment ideas like reason and science are beginning to flip into their opposites. Science and technology have become so pervasive and distorted, they have turned into a new type of theology.
The worship of the algorithm is hardly the only example of the theological reversal of the Enlightenment—for another sign, just look at the surfeit of nonfiction books promising insights into “The Science of…” anything, from laughter to marijuana. But algorithms hold a special station in the new technological temple because computers have become our favorite idols….
Once you adopt skepticism toward the algorithmic- and the data-divine, you can no longer construe any computational system as merely algorithmic. Think about Google Maps, for example. It’s not just mapping software running via computer—it also involves geographical information systems, geolocation satellites and transponders, human-driven automobiles, roof-mounted panoramic optical recording systems, international recording and privacy law, physical- and data-network routing systems, and web/mobile presentational apparatuses. That’s not algorithmic culture—it’s just, well, culture….(More).”

Would You Share Private Data for the Good of City Planning?


Henry Grabar at NextCity: “The proliferation of granular data on automobile movement, drawn from smartphones, cab companies, sensors and cameras, is sharpening our sense of how cars travel through cities. Panglossian seers believe the end of traffic jams is nigh.
This information will change cities beyond their roads. Real-time traffic data may lead to reworked intersections and new turning lanes, but understanding cars is in some ways a stand-in for understanding people. There’s traffic as traffic and traffic as proxy, notes Brett Goldstein, an urban science fellow at the University of Chicago who served as that city’s first data officer from 2011 to 2013. “We’d be really naive, in thinking about how we make cities better,” he says, “to only consider traffic for what it is.”
Even a small subset of a city’s car data goes a long way. Consider the raft of discrete findings that have emerged from the records of New York City taxis.
Researchers at the Massachusetts Institute of Technology, led by Paolo Santi, showed that cab-sharing could reduce taxi mileage by 40 percent. Their counterparts at NYU, led by Claudio Silva, mapped activity around hubs like train stations and airports and during hurricanes.
“You start to build actual models of how people move, and where they move,” observes Silva, the head of disciplines at NYU’s Center for Urban Science and Progress (CUSP). “The uses of this data for non-traffic engineering are really substantial.”…
Many of these ideas are hypothetical, for the moment, because so-called “granular” data is so hard to come by. That’s one reason the release of New York’s taxi cab data spurred so many studies — it’s an oasis of information in a desert of undisclosed records. Corporate entreaties, like Uber’s pending data offering to Boston, don’t always meet researchers’ standards. “It’s going to be a lot of superficial data, and it’s not clear how usable it’ll be at this point,” explains Sarah Kaufman, the digital manager at NYU’s Rudin Center for Transportation….
Yet Americans seem much more alarmed by the collection of location data than by other privacy breaches.
How can data utopians convince the hoi polloi to share their comings and goings? One thought: Make them secure. Mike Flowers, the founder of New York City’s Office of Data Analytics and a fellow at NYU’s CUSP, told me it might be time to consider establishing a quasi-governmental body that people would trust to make their personal data anonymous before it is channeled into government projects. (New York City’s Taxi and Limousine Commission did not do a very good job at this, which led to Gawker publishing a dozen celebrity cab rides.)
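
The TLC failure is worth a concrete illustration. Here is a minimal sketch, assuming the widely reported detail that the released taxi records used unsalted MD5 hashes of medallion numbers: because medallions come from a tiny, structured space, the whole space can be hashed in seconds and every “anonymized” identifier reversed.

```python
# Minimal sketch of why hashing low-entropy identifiers is not anonymization.
# Assumes (as widely reported) that the released taxi data contained unsalted MD5
# hashes of medallion numbers, which are drawn from a small, structured space.
import hashlib
import itertools
import string

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

# One of the real medallion formats is roughly digit-letter-digit-digit (e.g. "5X55"),
# so enumerating every candidate takes only 10 * 26 * 10 * 10 = 26,000 hashes.
def candidate_medallions():
    for d1, letter, d2, d3 in itertools.product(string.digits, string.ascii_uppercase,
                                                string.digits, string.digits):
        yield f"{d1}{letter}{d2}{d3}"

# Precompute hash -> medallion once; every "anonymized" record is then reversible.
lookup = {md5_hex(m): m for m in candidate_medallions()}

anonymized_field = md5_hex("5X55")      # stands in for a value in the released data
print(lookup.get(anonymized_field))     # -> "5X55"
```

A salted hash, or better, random replacement tokens, would have closed this particular hole, though the behavioral reidentification problem described in the study above would remain.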
Another idea is to frame open data as a beneficial trade-off. “When people provide information, they want to realize the benefit of the information,” Goldstein says.
Users tell the routing company Waze where they are and get a smoother commute in return. Progressive Insurance offers drivers a “Snapshot” tracker. If it likes the way you drive, the company will lower your rates. It’s not hard to imagine that, in the long run, drivers will be penalized for refusing such a device…. (More).”

The Modern Beauty of 19th-Century Data Visualizations


Laura Bliss at CityLab: “The Library of Congress’ online presence is a temple of American history, an unmatched, searchable collection of digitized photographs, maps, recordings, sheet music, and documents in the millions, dating back to the 15th century.
 
Sifting through these treasures isn’t so easy, though. When you do manage the clunky search interface and stumble across a gorgeous 1870s statistical atlas, it’s hard to zoom in closely on its pages and properly marvel at the antique gem.
Problem solved, thanks to the info-nerds at Vintage Visualizations, a project of the Brooklyn Brainery. They’ve reproduced a number of the LOC’s Civil War-era data visualizations in high-quality poster prints, and they are mouthwateringly cool. For example, I really wish we still ranked city populations like this chart does, which traces a century of census data in colorful Jenga towers (NYC, forever the biggest apple!).

Behold, the ratio of “church accommodation” by state, circa 1870, displayed like wallpaper swatches….(More).”

Study: Complaining on Twitter correlates with heart disease risks


at ArsTechnica: “Tweets prove better regional heart disease predictor than many classic factors. This week, a study was released by researchers at the University of Pennsylvania that found a surprising correlation when studying two kinds of maps: those that mapped the county-level frequency of cardiac disease, and those that mapped the emotional state of an area’s Twitter posts.
In all, researchers sifted through over 826 million tweets, made available by Twitter’s research-friendly “garden hose” server access, then narrowed those down to roughly 146 million tweets that had been posted with geolocation data from over 1,300 counties (each county needed to have at least 50,000 tweets to sift through to qualify). The team then measured an individual county’s expected “health” level based on frequency of certain phrases, using dictionaries that had been put through scrutiny over their application to emotional states. Negative statements about health, jobs, and attractiveness—along with a bump in curse words—would put a county in the “risk” camp, while words like “opportunities,” “overcome,” and “weekend” added more points to a county’s “protective” rating.
Not only did this measure correlate strongly with age-adjusted heart disease rate data, it turned out to be a more efficient predictor of higher or lower disease likelihood than “ten classical predictors” combined, including education, obesity, and smoking. Twitter beat that data by a rate of 42 percent to 36 percent….Psychological Science, 2014. DOI: 10.1177/0956797614557867….(More).”
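
For readers curious what the dictionary approach looks like in practice, here is a minimal sketch with invented word lists and invented data (not the Penn team’s lexica, tweet corpus, or CDC rates): score each county’s tweets against “risk” and “protective” lists, then correlate the county scores with heart-disease rates.

```python
# Minimal sketch of dictionary-based county scoring (invented word lists and data,
# not the study's lexica, tweets, or disease rates).

RISK_WORDS = {"hate", "tired", "bored", "jealous"}                 # stand-ins only
PROTECTIVE_WORDS = {"opportunities", "overcome", "weekend", "friends"}

def county_scores(tweets_by_county):
    """tweets_by_county: dict county -> list of tweet strings."""
    scores = {}
    for county, tweets in tweets_by_county.items():
        risk = protective = 0
        for tweet in tweets:
            words = set(tweet.lower().split())
            risk += len(words & RISK_WORDS)
            protective += len(words & PROTECTIVE_WORDS)
        total = max(risk + protective, 1)
        scores[county] = (risk - protective) / total   # higher = more "risk" language
    return scores

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy usage: two counties, made-up tweets and made-up age-adjusted disease rates.
tweets = {"A": ["so tired and bored", "hate mondays"],
          "B": ["great weekend with friends", "new opportunities ahead"]}
rates = {"A": 210.0, "B": 150.0}                       # deaths per 100k (invented)
scores = county_scores(tweets)
counties = sorted(scores)
print(pearson([scores[c] for c in counties], [rates[c] for c in counties]))
```

The actual study used validated emotional lexica and more sophisticated language modeling rather than raw word counts, but the aggregate-by-county-then-correlate structure is the same.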

At Universities, a Push for Data-Driven Career Services


at The New York Times: “Officials at the University of California, San Diego, had sparse information on the career success of their graduates until they set up a branded page for the university on LinkedIn a couple of years ago.

“Back then, we had records on 125,000 alumni, but we had good employment information on less than 10,000 of them,” recalled Armin Afsahi, who oversees alumni relations as the university’s associate vice chancellor for advancement. “Aside from Qualcomm, which is in our back yard, we didn’t know who employed our alumni.”

Within three months of setting up the university page, LinkedIn connections surfaced information on 92,000 alumni, Mr. Afsahi said.

The LinkedIn page of University of California, San Diego.

….

“The old models of alumni relations don’t work,” Mr. Afsahi said. “We have to be a data-driven, intelligence-oriented organization to create the engagement and value” that students and alumni expect.

In an article on Sunday, I profiled two analytics start-ups, EverTrue and Graduway, which aim to help colleges and universities identify their best prospective donors or student mentors by scanning their graduates’ social networking activities. Each start-up taps into LinkedIn profiles of alumni — albeit in different ways — to help institutions of higher education stay up-to-date with their graduates’ contact information and careers.

Since 2013, however, LinkedIn has offered its own proprietary service, called University Pages, where schools can create hubs for alumni outreach and networking. About 25,000 institutions of higher learning around the world now have official university pages on the site…(More).”

Big Data Now


at Radar – O’Reilly: “In the four years we’ve been producing Big Data Now, our wrap-up of important developments in the big data field, we’ve seen tools and applications mature, multiply, and coalesce into new categories. This year’s free wrap-up of Radar coverage is organized around eight themes:

  • Cognitive augmentation: As data processing and data analytics become more accessible, jobs that can be automated will go away. But to be clear, there are still many tasks where the combination of humans and machines produces superior results.
  • Intelligence matters: Artificial intelligence is now playing a bigger and bigger role in everyone’s lives, from sorting our email to rerouting our morning commutes, from detecting fraud in financial markets to predicting dangerous chemical spills. The computing power and algorithmic building blocks to put AI to work have never been more accessible.
  • The convergence of cheap sensors, fast networks, and distributed computation: The amount of quantified data available is increasing exponentially — and aside from tools for centrally handling huge volumes of time-series data as it arrives, devices and software are getting smarter about placing their own data accurately in context, extrapolating without needing to ‘check in’ constantly.
  • Reproducing, managing, and maintaining data pipelines: The coordination of processes and personnel within organizations to gather, store, analyze, and make use of data.
  • The evolving, maturing marketplace of big data components: Open-source components like Spark, Kafka, Cassandra, and ElasticSearch are reducing the need for companies to build in-house proprietary systems. On the other hand, vendors are developing industry-specific suites and applications optimized for the unique needs and data sources in a field.
  • The value of applying techniques from design and social science: While data science knows human behavior in the aggregate, design works in the particular, where A/B testing won’t apply — you only get one shot to communicate your proposal to a CEO, for example. Similarly, social science enables extrapolation from sparse data. Both sets of tools enable you to ask the right questions, and scope your problems and solutions realistically.
  • The importance of building a data culture: An organization that is comfortable with gathering data, curious about its significance, and willing to act on its results will perform demonstrably better than one that doesn’t. These priorities must be shared throughout the business.
  • The perils of big data: From poor analysis (driven by false correlation or lack of domain expertise) to intrusiveness (privacy invasion, price profiling, self-fulfilling predictions), big data has negative potential.

Download our free snapshot of big data in 2014, and follow the story this year on Radar.”