Are Some Tweets More Interesting Than Others? #HardQuestion


New paper by Microsoft Research (Omar Alonso, Catherine C. Marshall, and Marc Najork): “Twitter has evolved into a significant communication nexus, coupling personal and highly contextual utterances with local news, memes, celebrity gossip, headlines, and other microblogging subgenres. If we take Twitter as a large and varied dynamic collection, how can we predict which tweets will be interesting to a broad audience in advance of lagging social indicators of interest such as retweets? The telegraphic form of tweets, coupled with the subjective notion of interestingness, makes it difficult for human judges to agree on which tweets are indeed interesting.
In this paper, we address two questions: Can we develop a reliable strategy that results in high-quality labels for a collection of tweets, and can we use this labeled collection to predict a tweet’s interestingness?
To answer the first question, we performed a series of studies using crowdsourcing to reach a diverse set of workers who served as a proxy for an audience with variable interests and perspectives. This method allowed us to explore different labeling strategies, including varying the judges, the labels they applied, the datasets, and other aspects of the task.
To address the second question, we used crowdsourcing to assemble a set of tweets rated as interesting or not; we scored these tweets using textual and contextual features; and we used these scores as inputs to a binary classifier. We were able to achieve moderate agreement (kappa = 0.52) between the best classifier and the human assessments, a figure which reflects the challenges of the judgment task.”
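As a rough illustration of the pipeline the abstract describes (crowdsourced interesting/not labels, feature scores, a binary classifier, and kappa as the agreement measure), here is a minimal sketch in Python. The data, features, and model below are placeholders for illustration, not the authors' actual setup, which also used contextual features alongside textual ones.

```python
# Minimal sketch: binary "interesting vs. not" classification, evaluated
# with Cohen's kappa against human labels. Toy data and textual features
# only; the paper's contextual features are omitted here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

# Stand-ins for crowdsourced judgments (1 = interesting, 0 = not).
tweets = [
    "Breaking: major earthquake reported off the coast",
    "just had a sandwich lol",
    "New study finds link between sleep and memory",
    "ugh monday again",
    "City council votes to publish its budget data",
    "so bored right now",
]
labels = [1, 0, 1, 0, 1, 0]

train_txt, test_txt, y_train, y_test = train_test_split(
    tweets, labels, test_size=2, random_state=0, stratify=labels)

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(train_txt), y_train)

# Agreement between classifier output and held-out human judgments;
# the paper reports kappa = 0.52 for its best classifier.
print("kappa:", cohen_kappa_score(y_test, clf.predict(vec.transform(test_txt))))
```

Kappa corrects raw agreement for chance, which makes it a stricter yardstick than plain accuracy on a task where even human judges disagree.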

Technology Can Expose Government Sins, But You Need Humans to Fix Them


Lorelei Kelly: “We can’t bring accountability to the NSA unless we figure out how to give the whole legislative branch modern methods for policy oversight. Those modern methods can include technology, but the primary requirement is figuring out how to supply Congress with unbiased subject matter experts—not just industry lobbyists or partisan think tank analysts. Why? Because trusted and available expertise inside the process of policymaking is what is missing today.
According to calculations by the Sunlight Foundation, today’s Congress is operating with about 40 percent less staff than in 1979. According to the Congressional Management Foundation, it’s also contending with at least 800 percent more incoming communications. Yet, instead of helping Congress gain insight in new ways, instead of helping it sort and filter, curate and authenticate, technology has mostly created disorganized information overload. And the information Congress receives is often sentiment, not substance. Elected leaders should pay attention to both, but need the latter for policymaking.
The result? Congress defaults to what it knows. And that means slapping a “national security” label on policy questions that instead deserve to be treated as broad public conversations about the evolution of American democracy. This is a Congress that categorizes questions about our freedoms on the Internet as “cyber security.”
What can we do? First, recognize that Congress is an obsolete and incapacitated system, and treat it as such. Technology and transparency can help modernize our legislature, but they can’t fix the system of governance.
Activists, even tech-savvy ones, need to talk directly with Congressional members and staff at home. Hackers, you should invite your representatives to wherever you do your hacking. And then offer your skills to help them in any way possible. You may create some great data maps and visualization tools, but the real point is to make friends in Congress. There’s no substitute for repeated conversations, and long-haul engagement. In politics, relationships will leverage the technology. All technology can do is help you find one another.
Without our help and our knowledge, our elected leaders and governing institutions won’t have the bandwidth to cope with our complex world. This will be a steep climb. But, like nearly every good outcome in politics, the climb starts with an outstretched hand, not one that’s poised at a keyboard, ready to tweet.”

How to Make All Apps More Civic


Nick Grossman in Idea Lab: “The big idea in all of this is that through open data and standards and API-based interoperability, it’s possible not just to build more “civic apps,” but to make all apps more civic.
So in a perfect world, I’d not only be able to get my transit information from anywhere (say, Citymapper), I’d be able to read restaurant inspection data from anywhere (say, Foursquare), be able to submit a 311 request from anywhere (say, Twitter), etc.
These examples only scratch the surface of how apps can “become more civic” (i.e., integrate with government/civic information and services). And that’s only really describing one direction: apps tapping into government information and services.
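To make the interoperability idea concrete, here is a hypothetical sketch of the “submit a 311 request from anywhere” scenario against an Open311-style (GeoReport v2) endpoint. The base URL, API key, and field values are placeholders, and real city deployments differ in the services and fields they expose.

```python
# Hypothetical sketch of a third-party app filing a 311 request through
# an Open311 GeoReport v2-style API. Endpoint, key, and values are
# placeholders, not a real deployment.
import requests

BASE = "https://example-city.gov/open311/v2"   # placeholder endpoint

# Any app can discover which request types the city accepts...
services = requests.get(f"{BASE}/services.json").json()

# ...and file a service request on the user's behalf.
report = requests.post(f"{BASE}/requests.json", data={
    "api_key": "YOUR_KEY",                        # placeholder credential
    "service_code": services[0]["service_code"],  # e.g. pothole reports
    "lat": 40.7128,
    "long": -74.0060,
    "description": "Pothole at the corner of Main and 3rd",
})
print(report.status_code, report.text)
```

The point is less the specific calls than the shared standard: because any app can speak it, the “311 from anywhere” scenario stops depending on a single official app.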
Another, even more powerful direction is the reverse: helping governments tap into the people-power in web networks. In fact, I heard an amazing stat earlier this year:
It’s incredible to think about how web-enabled networks can extend the reach and increase the leverage of public-interest programs and government services, even when (perhaps especially when) that is not their primary function — i.e., Waze is a traffic avoidance app, not a “civic” app. Other examples include the Airbnb community coming together to provide emergency housing after Sandy, and the Etsy community helping to “craft a comeback” in Rockford, Ill.
In other words, helping all apps “be more civic,” rather than just building more civic apps. I think there is a ton of leverage there, and it’s a direction that has just barely begun to be explored.”

Defining Open Data


Open Knowledge Foundation Blog: “Open data is data that can be freely used, shared and built-on by anyone, anywhere, for any purpose. This is the summary of the full Open Definition which the Open Knowledge Foundation created in 2005 to provide both a succinct explanation and a detailed definition of open data.
As the open data movement grows, and even more governments and organisations sign up to open data, it becomes ever more important that there is a clear and agreed definition for what “open data” means if we are to realise the full benefits of openness, and avoid the risks of creating incompatibility between projects and splintering the community.

Open can apply to information from any source and about any topic. Anyone can release their data under an open licence for free use by and benefit to the public. Although we may think mostly about government and public sector bodies releasing public information such as budgets or maps, or researchers sharing their results data and publications, any organisation can open information (corporations, universities, NGOs, startups, charities, community groups and individuals).

Read more about different kinds of data in our one page introduction to open data
There is open information in transport, science, products, education, sustainability, maps, legislation, libraries, economics, culture, development, business, design, finance …. So the explanation of what open means applies to all of these information sources and types. Open may also apply both to data – big data and small data – and to content, like images, text and music!
So here we set out clearly what open means, and why this agreed definition is vital for us to collaborate, share and scale as open data and open content grow and reach new communities.

What is Open?

The full Open Definition provides a precise definition of what open data is. There are 2 important elements to openness:

  • Legal openness: you must be allowed to get the data legally, to build on it, and to share it. Legal openness is usually provided by applying an appropriate (open) license which allows for free access to and reuse of the data, or by placing data into the public domain.
  • Technical openness: there should be no technical barriers to using that data. For example, providing data as printouts on paper (or as tables in PDF documents) makes the information extremely difficult to work with. So the Open Definition has various requirements for “technical openness,” such as requiring that data be machine readable and available in bulk.”…
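As a small, hypothetical illustration of the technical-openness point: a dataset published in bulk as CSV can be consumed in a few lines of code, while the same table printed in a PDF would first need slow, error-prone extraction. The URL and column name below are placeholders for any open-data portal download.

```python
# Illustration of "machine readable": an openly licensed CSV can be
# parsed, aggregated, and reused directly. The URL and the "amount"
# column are placeholders, not a real dataset.
import csv
import io
import urllib.request

URL = "https://data.example.gov/budget-2013.csv"   # placeholder dataset

with urllib.request.urlopen(URL) as resp:
    rows = list(csv.DictReader(io.TextIOWrapper(resp, encoding="utf-8")))

# Once parsed, the data can be summed, joined, or republished freely.
total = sum(float(row["amount"]) for row in rows)
print(f"{len(rows)} line items, total {total:,.2f}")
```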

New crowdsourcing platform links tech-skilled volunteers with charities


Charity Digital News: “The Atlassian Foundation today previewed its innovative crowdsourcing platform, MakeaDiff.org, which will allow nonprofits to coordinate with technically-skilled volunteers who want to help convert ideas into successful projects…
Once vetted, nonprofits will be able to list their volunteer jobs on the site. Skilled volunteers such as developers, designers, business analysts and project managers will then be able to go online and quickly search the site for opportunities relevant and convenient to them.
Atlassian Foundation manager, Melissa Beaumont Lee, said: “We started hearing from nonprofits that what they valued even more than donations was access to Atlassian’s technology expertise. Similarly, we had lots of employees who were keen to volunteer, but didn’t know how to get involved; coordinating volunteers for all these amazing projects was just not scalable. Thus, MakeaDiff.org was born to benefit both nonprofits and volunteers. We wanted to reduce the friction in coordinating efforts so more time can be spent doing really meaningful work.”
 

The Science Behind Using Online Communities To Change Behavior


Sean D. Young in TechCrunch: “Although social media and online communities might have been developed for people to connect and share information, recent research shows that these technologies are really helpful in changing behaviors. My colleagues and I in the medical school, for instance, created online communities designed to improve health by getting people to do things, such as test for HIV, stop using methamphetamines, and just de-stress and relax. We don’t handpick people to join because we think they’ll love the technology; that’s not how science works. We invite them because the technology is relevant to them — they’re engaging in drugs, sex and other behaviors that might put themselves and others at risk. It’s our job to create the communities in a way that engages them enough to want to stay and participate. Yes, we do offer to pay them $30 to complete an hour-long survey, but then they are free to collect their money and never talk to us again. But for some reason, they stay in the group and decide to be actively engaged with strangers.
So how do we create online communities that keep people engaged and change their behaviors? Our starting point is to understand and address their psychological needs….
Throughout our research, we find that newly created online communities can change people’s behaviors by addressing the following psychological needs:
The Need to Trust. Sharing our thoughts, experiences, and difficulties with others makes us feel closer to others and increases our trust. When we trust people, we’re more open-minded, more willing to learn, and more willing to change our behavior. In our studies, we found that sharing personal information (even something as small as describing what you did today) can help increase trust and change behavior.
The Need to Fit In. Most of us inherently strive to fit in. Social norms, or other people’s attitudes and behaviors, heavily influence our own attitudes and behaviors. Each time a new online community or group forms, it creates its own set of social norms and expectations for how people should behave. Most people are willing to change their attitudes and/or behavior to fit these group norms and fit in with the community.
The Need for Self-Worth. When people feel good about themselves, they are more open to change and feel empowered to be able to change their behavior. When an online community is designed to have people support and care for each other, they can help to increase self-esteem.
The Need to Be Rewarded for Good Behavior. Anyone who has trained a puppy knows that you can get him to keep sitting as long as you keep the treats flowing to reward him, but if you want to wean him off the treats and really train him then you’ll need to begin spacing out the treats to make them less predictable. Well, people aren’t that different from animals in that way and can be trained with reinforcements too. For example, “liking” people’s communications when they immediately join a network, and then progressively spacing out the time that their posts are liked (psychologists call this variable reinforcement) can be incorporated onto social network platforms to encourage them to keep posting content. Eventually, these behaviors become habits.
The Need to Feel Empowered. While increasing self-esteem makes people feel good about themselves, increasing empowerment helps them know they have the ability to change. Creating a sense of empowerment is one of the most powerful predictors of whether people will change their behavior. Belonging to a network of people who are changing their own behaviors, who support our needs, and who are confident in our ability to change empowers us to change our own behavior.”
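The “variable reinforcement” schedule described a few paragraphs above (reward every early post, then space rewards out unpredictably) can be sketched as a simple rule. This is an illustrative toy under arbitrary thresholds, not a recipe from the research.

```python
# Illustrative sketch of a variable reinforcement schedule for "liking"
# a new member's posts: every early post is rewarded, later posts are
# rewarded less often and unpredictably. Thresholds are arbitrary.
import random

def should_like(post_count: int) -> bool:
    """Decide whether to 'like' a member's n-th post."""
    if post_count <= 5:
        return True                      # reward every early post
    # Afterwards the chance of a like decays, so rewards arrive on an
    # unpredictable (variable) schedule rather than with every post.
    return random.random() < 5.0 / post_count

likes = [should_like(n) for n in range(1, 31)]
print(f"{sum(likes)} of the first 30 posts were liked")
```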

Best Practices for Government Crowdsourcing Programs


Anton Root: “Crowdsourcing helps communities connect and organize, so it makes sense that governments are increasingly making use of crowd-powered technologies and processes.
Just recently, for instance, we wrote about the Malaysian government’s initiative to crowdsource the national budget. Closer to home, we’ve seen government agencies from USAID to NASA make use of the crowd.
Daren Brabham, professor at the University of Southern California, recently published a report titled “Using Crowdsourcing In Government” that introduces readers to the basics of crowdsourcing, highlights effective use cases, and establishes best practices when it comes to governments opening up to the crowd. Below, we take a look at a few of the suggestions Brabham makes to those considering crowdsourcing.
Brabham splits up his ten best practices into three phases: planning, implementation, and post-implementation. The first suggestion in the planning phase he makes may be the most critical of all: “Clearly define the problem and solution parameters.” If the community isn’t absolutely clear on what the problem is, the ideas and solutions that users submit will be equally vague and largely useless.
This applies not only to government agencies, but also to SMEs and large enterprises making use of crowdsourcing. At Massolution NYC 2013, for instance, we heard again and again the importance of meticulously defining a problem. And open innovation platform InnoCentive’s CEO Andy Zynga stressed the big role his company plays in helping organizations do away with the “curse of knowledge.”
Brabham also has advice for projects in their implementation phase, the key bit being: “Launch a promotional plan and a plan to grow and sustain the community.” Simply put, crowdsourcing cannot work without a crowd, so it’s important to build up the community before launching a campaign. It does take some balance, however, as a community that’s too large by the time a campaign launches can turn off newcomers who “may not feel welcome or may be unsure how to become initiated into the group or taken seriously.”
Brabham’s key advice for the post-implementation phase is: “Assess the project from many angles.” The author suggests tracking website traffic patterns, asking users to volunteer information about themselves when registering, and doing original research through surveys and interviews. The results of follow-up research can help to better understand the responses submitted, and also make it easier to show the successes of the crowdsourcing campaign. This is especially important for organizations partaking in ongoing crowdsourcing efforts.”

The Solution Revolution


New book by William D. Eggers and Paul Macmillan from Deloitte: “Where tough societal problems persist, citizens, social enterprises, and yes, even businesses, are relying less and less on government-only solutions. More likely, they are crowdfunding, ride-sharing, app-developing or impact-investing to design lightweight solutions for seemingly intractable problems. No challenge is too daunting, from malaria in Africa to traffic congestion in California.
These wavemakers range from edgy social enterprises growing at a clip of 15% a year, to mega-foundations that are eclipsing development aid, to Fortune 500 companies delivering social good on the path to profit. The collective force of these new problem solvers is creating dynamic and rapidly evolving markets for social good. They trade solutions instead of dollars to fill the gap between what government can provide and what citizens need. By erasing public-private sector boundaries, they are unlocking trillions of dollars in social benefit and commercial value.
The Solution Revolution explores how public and private are converging to form the Solution Economy. By examining scores of examples, Eggers and Macmillan reveal the fundamentals of this new – globally prevalent – economic and social order. The book is designed to help guide those willing to invest time, knowledge or capital toward sustainable, social progress.”

Imagining Data Without Division


Thomas Lin in Quanta Magazine: “As science dives into an ocean of data, the demands of large-scale interdisciplinary collaborations are growing increasingly acute…Seven years ago, when David Schimel was asked to design an ambitious data project called the National Ecological Observatory Network, it was little more than a National Science Foundation grant. There was no formal organization, no employees, no detailed science plan. Emboldened by advances in remote sensing, data storage and computing power, NEON sought answers to the biggest question in ecology: How do global climate change, land use and biodiversity influence natural and managed ecosystems and the biosphere as a whole?…
For projects like NEON, interpreting the data is a complicated business. Early on, the team realized that its data, while mid-size compared with the largest physics and biology projects, would be big in complexity. “NEON’s contribution to big data is not in its volume,” said Steve Berukoff, the project’s assistant director for data products. “It’s in the heterogeneity and spatial and temporal distribution of data.”
Unlike the roughly 20 critical measurements in climate science or the vast but relatively structured data in particle physics, NEON will have more than 500 quantities to keep track of, from temperature, soil and water measurements to insect, bird, mammal and microbial samples to remote sensing and aerial imaging. Much of the data is highly unstructured and difficult to parse — for example, taxonomic names and behavioral observations, which are sometimes subject to debate and revision.
And, as daunting as the looming data crush appears from a technical perspective, some of the greatest challenges are wholly nontechnical. Many researchers say the big science projects and analytical tools of the future can succeed only with the right mix of science, statistics, computer science, pure mathematics and deft leadership. In the big data age of distributed computing — in which enormously complex tasks are divided across a network of computers — the question remains: How should distributed science be conducted across a network of researchers?
Part of the adjustment involves embracing “open science” practices, including open-source platforms and data analysis tools, data sharing and open access to scientific publications, said Chris Mattmann, 32, who helped develop a precursor to Hadoop, a popular open-source data analysis framework that is used by tech giants like Yahoo, Amazon and Apple and that NEON is exploring. Without developing shared tools to analyze big, messy data sets, Mattmann said, each new project or lab will squander precious time and resources reinventing the same tools. Likewise, sharing data and published results will obviate redundant research.
To this end, international representatives from the newly formed Research Data Alliance met this month in Washington to map out their plans for a global open data infrastructure.”

User-Generated Content Is Here to Stay


In the Huffington Post: “The way media are transmitted has changed dramatically over the last 10 years. User-generated content (UGC) has completely changed the landscape of social interaction, media outreach, consumer understanding, and everything in between. Today, UGC is media generated by the consumer rather than by traditional journalists and reporters. This is a movement defying and redefining traditional norms at the same time. Current events are largely publicized on Twitter and Facebook by the average person, and not by a photojournalist hired by a news organization. In the past, these large news corporations dominated the headlines — literally — and owned the monopoly on public media. Yet with the advent of smartphones and the spread of social media, everything has changed. The entire industry has been transformed; smartphones have changed how information is collected, packaged, edited, and conveyed for mass distribution. UGC allows for raw and unfiltered movement of content at lightning speed. With the way that the world works today, it is the most reliable way to get information out. One thing is certain: UGC is here to stay whether we like it or not, and it is driving much more of modern journalistic content than the average person realizes.
Think about recent natural disasters where images are captured by citizen journalists using their iPhones. During Hurricane Sandy, 800,000 photos were uploaded to Instagram with the hashtag “#Sandy.” Time magazine even hired five iPhoneographers to photograph the wreckage for its Instagram page. During the May 2013 Oklahoma City tornadoes, the first photo released was actually captured by a smartphone. This real-time footage brings environmental chaos to your doorstep in a chillingly personal way, especially considering the photographer of the first tornado photos ultimately died because of the tornado. UGC has been monumental for criminal investigations and man-made catastrophes. Most notably, the Boston Marathon bombing was covered by UGC in the most unforgettable way. Dozens of images poured in identifying possible Boston bombers, to both the detriment and benefit of public officials and investigators. Though these images inflicted considerable damage to innocent bystanders sporting suspicious backpacks, ultimately it was also smartphone images that highlighted the presence of the Tsarnaev brothers. This phenomenon isn’t limited to America. Would the so-called Arab Spring have happened without social media and UGC? Syrians, Egyptians, and citizens from numerous nations facing protests can easily publicize controversial images and statements to be shared worldwide….
This trend is not temporary but will only expand. The first iPhone launched in 2007, and the world has never been the same. New smartphones are released each month with better cameras and faster processors than computers had even just a few years ago….”