

David Chapman, Katrina Miller-Stevens, John C. Morris, and Brendan O’Hallarn in First Monday: “In an age when electronic communication is ubiquitous, non-profit organizations are actively using social media platforms as a way to deliver information to end users. In spite of the broad use of these platforms, little scholarship has focused on the internal processes these organizations employ to implement these tools. A limited number of studies offer models to help explain an organization’s use of social media from initiation to outcomes, yet few studies address a non-profit organization’s mission as the driver to employ social media strategies and tactics. Furthermore, the effectiveness of social media use is difficult for non-profit organizations to measure. Studies that attempt to address this question have done so by viewing social media platform analytics (e.g., Facebook analytics) or analyzing written content by users of social media (Nah and Saxton, 2013; Auger, 2013; Uzunoğlu and Misci Kip, 2014; or Guo and Saxton, 2014). The value added of this study is to present a model for practice (Weil, 1997) that explores social media use and its challenges from a non-profit organization’s mission through its desired outcome, in this case an outcome of advocacy and civic engagement.

We focus on one non-profit organization, Blue Star Families, that actively engages in advocacy and civic engagement. Blue Star Families was formed in 2009 to “raise the awareness of the challenges of military family life with our civilian communities and leaders” (Blue Star Families, 2010). Blue Star Families is a virtual organization with no physical office location. Thus, the organization relies on its Web presence and social media tools to advocate for military families and engage service members and their families, communities, and citizens in civic engagement activities (Blue Star Families, 2010).

The study aims to provide organizational-level insights into the successes and challenges of working in the social media environment. Specifically, the study asks: What are the processes non-profit organizations follow to link organizational mission to outcomes when using social media platforms? What are the successes and challenges of using social media platforms for advocacy and civic engagement purposes? In our effort to answer these questions, we present a new model to explore non-profit organizations’ use of social media platforms by building on previous models and frameworks developed to explore the use of social media in the public, private, and non-profit sectors.

This research is important for three reasons. First, most previous studies of social media tend to employ models that focus on the satisfaction of the social media tools for organizational members, rather than the utility of social media as a tool to meet organizational goals. Our research offers a means to explore the utility of social media from an organizational perspective. Second, the exemplar case for our research, Blue Star Families, Inc., is a non-profit organization whose mission is to create and nurture a virtual community spread over a large geographical — if not global — area. Because Blue Star Families was founded as an online organization that could not exist without social media, it provides a case for which social media is a critical component of the organization’s activity. Finally, we offer some “lessons learned” from our case to identify issues for other organizations seeking to create a significant social media presence.

This paper is organized as follows: first, the growth of social media is briefly addressed to provide background context. Second, previous models and frameworks exploring social media are discussed. This is followed by a presentation of a new model exploring the use of social media from an organizational perspective, starting with the driver of a non-profit organization’s mission, to its desired outcomes of advocacy and civic engagement. Third, the case study methodology is explained. Next, we present an analysis and discussion applying the new model to Blue Star Families’ use of social media platforms. We conclude by discussing the challenges of social media revealed in the case study analysis, and we offer recommendations to address these challenges….(More)”

A new model to explore non-profit social media use for advocacy and civic engagement

Twitter Blog: “After the disastrous Sichuan earthquake in 2008, people turned to Twitter to share firsthand information about the earthquake. What amazed many was the impression that Twitter was faster at reporting the earthquake than the U.S. Geological Survey (USGS), the official government organization in charge of tracking such events.

This Twitter activity wasn’t a big surprise to the USGS. The USGS National Earthquake Information Center (NEIC) processes data from about 2,000 real-time earthquake sensors, with the majority based in the United States. That leaves a lot of empty space in the world with no sensors. On the other hand, there are hundreds of millions of people using Twitter who can report earthquakes. At first, the USGS staff was a bit skeptical that Twitter could be used as a detection system for earthquakes – but when they looked into it, they were surprised at the effectiveness of Twitter data for detection.

USGS staffers Paul Earle, a seismologist, and Michelle Guy, a software developer, teamed up to look at how Twitter data could be used for earthquake detection and verification. Using Twitter’s Public API, they applied the same time-series event-detection method they use when detecting earthquakes. This gave them a baseline for earthquake-related chatter, but they decided to dig in even further. They found that people Tweeting about actual earthquakes kept their Tweets really short, even just to ask, “earthquake?” Concluding that people who are experiencing earthquakes aren’t very chatty, they started filtering out Tweets with more than seven words. They also recognized that people sharing links or the size of the earthquake were significantly less likely to be offering firsthand reports, so they filtered out any Tweets sharing a link or a number. Ultimately, this filtered stream proved highly effective at determining when earthquakes occurred globally.
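
Reduced to code, the filter is just a handful of checks. Below is a minimal Python sketch of the heuristic as described (short Tweets, no links, no numbers); function and variable names are hypothetical, since the USGS pipeline itself is not published in this form.

```python
import re

def is_likely_firsthand(tweet_text: str) -> bool:
    """Heuristics described above: firsthand earthquake Tweets
    tend to be short and to contain no links or numbers."""
    if len(tweet_text.split()) > 7:             # more than seven words: too chatty
        return False
    if re.search(r"https?://\S+", tweet_text):  # links suggest secondhand reports
        return False
    if re.search(r"\d", tweet_text):            # numbers suggest magnitude reports
        return False
    return True

sample = [
    "earthquake?",
    "did you feel that??",
    "M4.8 quake near Valparaiso http://example.com",
]
print([t for t in sample if is_likely_firsthand(t)])  # keeps the first two
```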

USGS Modeling Twitter Data to Detect Earthquakes

While I was at the USGS office in Golden, Colo., interviewing Michelle and Paul, three earthquakes happened in a relatively short time. Using Twitter data, their system was able to pick up on an aftershock in Chile within one minute and 20 seconds – and it only took 14 Tweets from the filtered stream to trigger an email alert. The other two earthquakes, off Easter Island and Indonesia, weren’t picked up because they were not widely felt…

The USGS monitors for earthquakes in many languages, and the words used can be a clue as to the magnitude and location of the earthquake. Chile has two words for earthquakes: terremoto and temblor; terremoto is used to indicate a bigger quake. This one in Chile started with people asking if it was a terremoto, but others realized that it was a temblor.

As the USGS team notes, Twitter data augments their own detection work on felt earthquakes. If they’re getting reports of an earthquake in a populated area but no Tweets from there, that’s a good indicator to them that it’s a false alarm. It’s also very cost effective for the USGS, because they use Twitter’s Public API and open-source software such as Kibana and Elasticsearch to help determine when earthquakes occur….(More)”
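
For readers curious how those open-source pieces fit together: filtered Tweets can be indexed into Elasticsearch and charted in Kibana. Here is a minimal sketch using the official Python client; the index name and document fields are assumptions, not the USGS’s actual setup.

```python
from datetime import datetime, timezone
from elasticsearch import Elasticsearch  # pip install elasticsearch (8.x client)

es = Elasticsearch("http://localhost:9200")  # assumes a local cluster

# Hypothetical document shape for one filtered Tweet.
doc = {
    "text": "earthquake?",
    "lang": "es",
    "created_at": datetime.now(timezone.utc).isoformat(),
}
es.index(index="quake-tweets", document=doc)

# A Kibana date histogram over "quake-tweets" then makes detection
# spikes visible at a glance.
```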

How the USGS uses Twitter data to track earthquakes

White House Fact Sheet: “Today, the Administration is celebrating the five-year anniversary of Challenge.gov, a historic effort by the Federal Government to collaborate with members of the public through incentive prizes to address our most pressing local, national, and global challenges. True to the spirit of the President’s charge from his first day in office, Federal agencies have collaborated with more than 200,000 citizen solvers—entrepreneurs, citizen scientists, students, and more—in more than 440 challenges, on topics ranging from accelerating the deployment of solar energy, to combating breast cancer, to increasing resilience after Hurricane Sandy.

Highlighting continued momentum from the President’s call to harness the ingenuity of the American people, the Administration is announcing:

  • Nine new challenges from Federal agencies, ranging from commercializing NASA technology, to helping students navigate their education and career options, to protecting marine habitats.
  • Expanded support for the use of challenges and prizes, including new mentoring support from the General Services Administration (GSA) for interested agencies and a new $244 million innovation platform opened by the U.S. Agency for International Development (USAID) with over 70 partners.

In addition, multiple non-governmental institutions are announcing 14 new challenges, ranging from improving cancer screenings, to developing better technologies to detect, remove, and recover excess nitrogen and phosphorus from water, to increasing the resilience of island communities….

Expanding the Capability for Prize Designers to Find One Another

The Governance Lab (GovLab) and MacArthur Foundation Research Network on Opening Governance will develop and launch the Network of Innovators (NoI), an expert networking platform for prizes and challenges. NoI will make easily searchable the know-how of innovators on topics ranging from developing prize-backed challenges to opening up data to using crowdsourcing for the public good. Platform users will answer questions about their skills and experiences, creating a profile that enables them to be matched to those with complementary knowledge to enable mutual support and learning. A beta version for user testing within the Federal prize community will launch in early October, with a full launch at the end of October. NoI will be open to civil servants around the world…(More)”
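
The matching step lends itself to a simple illustration. Here is a sketch of complementary-skill matching in Python; the profile data and function name are invented, as the source does not describe NoI’s actual algorithm.

```python
# Invented profiles; NoI's real skill taxonomy is not described in the source.
profiles = {
    "alice": {"prize-backed challenges", "open data"},
    "bob":   {"crowdsourcing", "open data"},
    "carol": {"crowdsourcing", "prize-backed challenges"},
}

def complementary_matches(user, all_profiles):
    """Rank other users by how many skills they have that `user` lacks."""
    own = all_profiles[user]
    gaps = {other: skills - own
            for other, skills in all_profiles.items() if other != user}
    return sorted(((other, gap) for other, gap in gaps.items() if gap),
                  key=lambda pair: len(pair[1]), reverse=True)

print(complementary_matches("alice", profiles))
# [('bob', {'crowdsourcing'}), ('carol', {'crowdsourcing'})]
```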

US Administration Celebrates Five-Year Anniversary of Challenge.gov

European Commission: “A JRC study, “Nudges to Privacy Behaviour: Exploring an Alternative Approach to Privacy Notices”, used behavioural sciences to look at how individuals react to different types of privacy notices. Specifically, the authors analysed users’ reactions to modified choice architecture (i.e. the environment in which decisions take place) of web interfaces.

Two types of privacy behaviour were measured: passive disclosure, when people unwittingly disclose personal information, and direct disclosure, when people make an active choice to reveal personal information. After testing different designs with over 3,000 users from the UK, Italy, Germany and Poland, the results show that web interface design affects decisions on disclosing personal information. The study also explored differences related to country of origin, gender, education level and age.

A depiction of a person’s face on the website led people to reveal more personal information. Also, this design choice and the visualisation of the user’s IP or browsing history had an impact on people’s awareness of a privacy notice. If confirmed, these features are particularly relevant for habitual and instinctive online behaviour.

With regard to education, users who had attended (though not necessarily graduated from) college felt significantly less observed or monitored and more comfortable answering questions than those who never went to college. This result challenges the assumption that the better educated are more aware of information-tracking practices. Further investigation, perhaps of a qualitative nature, could help dig deeper into this issue. On the other hand, people with a lower level of education were more likely to reveal personal information unwittingly, apparently because non-college attendees were simply less aware that some online behaviour revealed personal information about themselves.

Strong differences between countries were noticed, indicating a relation between cultures and information disclosure. Even though participants in Italy revealed the most personal information in passive disclosure, in direct disclosure they revealed less than in other countries. Approximately 75% of participants in Italy chose to answer positively to at least one stigmatised question, compared to 81% in Poland, 83% in Germany and 92% in the UK.

Approximately 73% of women answered ‘never’ to the questions asking whether they had ever engaged in socially stigmatised behaviour, compared to 27% of males. This large difference could be due to the nature of the questions (e.g. about alcohol consumption, which might be more acceptable for males). It could also suggest women feel under greater social scrutiny or are simply more cautious when disclosing personal information.

These results could offer valuable insights to inform European policy decisions, despite the fact that the study has targeted a sample of users in four countries in an experimental setting. Major web service providers are likely to have extensive amounts of data on how slight changes to their services’ privacy controls affect users’ privacy behaviour. The authors of the study suggest that collaboration between web providers and policy-makers can lead to recommendations for web interface design that allow for conscientious disclosure of privacy information….(More)”

Web design plays a role in how much we reveal online

Constantine E. Kontokosta: “This paper presents the conceptual framework and justification for a “Quantified Community” (QC) and a networked experimental environment of neighborhood labs. The QC is a fully instrumented urban neighborhood that uses an integrated, expandable, and participatory sensor network to support the measurement, integration, and analysis of neighborhood conditions, social interactions and behavior, and sustainability metrics to support public decision-making. Through a diverse range of sensor and automation technologies — combined with existing data generated through administrative records, surveys, social media, and mobile sensors — information on human, physical, and environmental elements can be processed in real-time to better understand the interaction and effects of the built environment on human well-being and outcomes. The goal is to create an “informatics overlay” that can be incorporated into future urban development and planning that supports the benchmarking and evaluation of neighborhood conditions, provides a test-bed for measuring the impact of new technologies and policies, and responds to the changing needs and preferences of the local community….(More)”
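
To make the idea concrete, the “informatics overlay” can be pictured as a stream of heterogeneous sensor readings rolled up into per-neighborhood benchmarks. A minimal Python sketch follows; the sensor types and field names are entirely hypothetical.

```python
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    neighborhood: str
    sensor_type: str   # e.g. "noise_db", "pm25", "pedestrian_count"
    value: float
    timestamp: float   # Unix epoch seconds

def benchmark(readings):
    """Aggregate raw readings into per-neighborhood averages per metric."""
    buckets = defaultdict(lambda: defaultdict(list))
    for r in readings:
        buckets[r.neighborhood][r.sensor_type].append(r.value)
    return {hood: {metric: mean(vals) for metric, vals in metrics.items()}
            for hood, metrics in buckets.items()}

readings = [
    Reading("red_hook", "noise_db", 62.0, 1_700_000_000),
    Reading("red_hook", "noise_db", 58.5, 1_700_000_060),
    Reading("red_hook", "pm25", 9.1, 1_700_000_000),
]
print(benchmark(readings))  # {'red_hook': {'noise_db': 60.25, 'pm25': 9.1}}
```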

The Quantified Community and Neighborhood Labs: A Framework for Computational Urban Planning and Civic Technology Innovation

Philipp Hacker: “This essay is both a review of the excellent book “Nudge and the Law. A European Perspective”, edited by Alberto Alemanno and Anne-Lise Sibony, and an assessment of the major themes and challenges that the behavioural analysis of law will and should face in the immediate future.

The book makes important and novel contributions in a range of topics, both on a theoretical and a substantive level. Regarding theoretical issues, four themes stand out: First, it highlights the differences between the EU and the US nudging environments. Second, it questions the reliance on expertise in rulemaking. Third, it unveils behavioural trade-offs that have too long gone unnoticed in behavioural law and economics. And fourth, it discusses the requirement of the transparency of nudges and the related concept of autonomy. Furthermore, the different authors discuss the impact of behavioural regulation on a number of substantive fields of law: health and lifestyle regulation, privacy law, and the disclosure paradigm in private law.

This paper aims to take some of the book’s insights one step further in order to point at crucial challenges – and opportunities – for the future of the behavioural analysis of law. In recent years, the movement has gained tremendously in breadth and depth. It is now time to make it scientifically even more rigorous, e.g. by openly embracing empirical uncertainty and by moving beyond the neo-classical/behavioural dichotomy. Simultaneously, the field ought to discursively readjust its normative compass. Finally and perhaps most strikingly, however, the power of big data holds the promise of taking behavioural interventions to an entirely new level. If these challenges can be overcome, this paper argues, the intersection between law and behavioural sciences will remain one of the most fruitful approaches to legal analysis in Europe and beyond….(More)”

Nudge 2.0

Brentin Mock at CityLab: “…In New York City, 40 percent of the jailed population are there because they couldn’t afford bail—most of them for nonviolent drug crimes. The city spends $42 million on average annually incarcerating non-felony defendants….

On Wednesday, NYC Mayor Bill de Blasio signed into law legislation aimed at helping correct these bail problems, providing inmates a bill of rights for when they’re detained and addressing other problems that lead to overstuffing city jails with poor people of color.

The omnibus package of criminal justice reform bills will require the city to produce better accounting of how many people are in city jails, what their average incarceration time is while waiting for trial, the average bail amounts imposed on defendants, and a whole host of other data points on incarceration. Under the new legislation, the city will have to release reports quarterly and semi-annually to the public—much of it from data now sheltered within the city’s Department of Corrections.
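
The mandated metrics amount to a straightforward aggregation over detention records. Here is a hypothetical sketch of the quarterly roll-up in Python; the column names are invented, since the Department of Corrections’ actual schema is not described in the piece.

```python
import pandas as pd

# Invented detention records for illustration only.
records = pd.DataFrame({
    "defendant_id": [1, 2, 3, 4],
    "days_detained_pretrial": [3, 45, 12, 120],
    "bail_amount_usd": [500, 2500, 1000, 7500],
    "quarter": ["2015Q3"] * 4,
})

quarterly = records.groupby("quarter").agg(
    people_in_jail=("defendant_id", "count"),
    avg_days_pretrial=("days_detained_pretrial", "mean"),
    avg_bail_usd=("bail_amount_usd", "mean"),
)
print(quarterly)
```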

“This is bringing sunshine to information that is already being looked at internally, but is better off being public data,” New York City council member Helen Rosenthal tells CityLab. “We can better understand what policies we need to change if we have the data to understand what’s going on in the system.”…

The city passed a package of transparency bills last month that focused on Rikers, but the legislation passed Wednesday will focus on the city’s courts and jails system as a whole….(More)”

As a Start to NYC Prison Reform, Jail Data Will Be Made Public

John C. Havens at Mashable: “….While welcoming the feedback that sensors, data and Artificial Intelligence provide, we’re at a critical inflection point. Demarcating the parameters between assistance and automation has never been more central to human well-being. But today, beauty is in the AI of the beholder. Desensitized to the value of personal data, we hemorrhage precious insights regarding our identity that define the moral nuances necessary to navigate algorithmic modernity.

If no values-based standards exist for Artificial Intelligence, then the biases of its manufacturers will define our universal code of human ethics. But this should not be their cross to bear alone. It’s time to stop vilifying the AI community and start defining in concert with their creations what the good life means surrounding our consciousness and code.

The intention of the ethics

“Begin as you mean to go forward.” Michael Stewart is founder, chairman & CEO of Lucid, an Artificial Intelligence company based in Austin that recently announced the formation of the industry’s first Ethics Advisory Panel (EAP). While Google claimed creation of a similar board when acquiring AI firm DeepMind in January 2014, no public realization of its efforts currently exists (as confirmed by a PR rep from Google for this piece). Lucid’s Panel, by comparison, has already begun functioning as a separate organization from the analytics side of the business and provides oversight for the company and its customers. “Our efforts,” Stewart says, “are guided by the principle that our ethics group is obsessed with making sure the impact of our technology is good.”

Kay Firth-Butterfield is chief officer of the EAP, and is charged with being on the vanguard of the ethical issues affecting the AI industry and society as a whole. Internally, the EAP provides the hub of ethical behavior for the company. Someone from Firth-Butterfield’s office even sits on all core product development teams. “Externally,” she notes, “we plan to apply Cyc intelligence (shorthand for ‘encyclopedia,’ Lucid’s AI causal reasoning platform) for research to demonstrate the benefits of AI and to advise Lucid’s leadership on key decisions, such as the recent signing of the LAWS letter and the end use of customer applications.”

Ensuring the impact of AI technology is positive doesn’t happen by default. But as Lucid is demonstrating, ethics doesn’t have to stymie innovation by dwelling solely in the realm of risk mitigation. Ethical processes aligning with a company’s core values can provide more deeply relevant products and increased public trust. Transparently including your customer’s values in these processes puts the person back into personalization….(Mashable)”

The importance of human innovation in A.I. ethics

Liron Lavi at Democratic Audit: “Dēmokratía, literally ‘the rule of the people’, is the basis for democracy as a political regime. However, ‘the people’ is a heterogeneous, open, and dynamic entity. So, how can we think about democracy without the people as a coherent entity, yet as the source of democracy? I employ a performative theorisation of democracy in order to answer this question. Democracy, I suggest, is an effect produced by repetitive performative acts and ‘the people’ is produced as the source of democratic sovereignty.

A quick search on ‘democratic performance’ will usually yield results (and concerns) regarding voter competence, government accountability, liberal values, and legitimacy. However, from the perspective of performative theory, the term gains a rather different meaning (as has been discussed at length by Judith Butler). It suggests that democracy is not a pre-given structure but rather needs to be constructed repeatedly. Thus, for a democracy to be recognised and maintained as such it needs to be performed by citizens, institutions, office-holders, the media, etc. Acts made by these players – voting, demonstrating, decision- and law-making, etc. – give form to the abstract concept of democracy, thus producing it as their (imagined) source. There is, therefore, no finite set of actions that can determine once and for all that a social structure is indeed a democracy, for the regime is not a stable and pre-given structure, but rather produced and imagined through a multitude of acts and procedures.

Elections, for example, are a democratic performance insofar as they are perceived as an effective tool for expressing the public’s preferences and choosing its representatives and desired policies. Polling stations are therefore the site in which democracy is constituted insofar as all eligible members (can) participate in the act of voting, and therefore are constructed as the source of sovereignty. In this way, elections produce democracy as their effect, as their source, and hold together the political imagination of democracy. And they do this periodically, thus opening options for new variations (and failures) in the democratic effect they produce. Elections are therefore not only an opportunity to replace representatives and incumbents, but also an opportunity to perform democracy, shape it, alter it, and load it with various meanings….(More)”

Understanding democracy as a product of citizen performances reduces the need for a defined ‘people’

Jake Porway at O’Reilly: “….Every week, a data or technology company declares that it wants to “do good” and there are countless workshops hosted by major foundations musing on what “big data can do for society.” Add to that a growing number of data-for-good programs, from Data Science for Social Good’s fantastic summer program to Bayes Impact’s data science fellowships to DrivenData’s data-science-for-good competitions, and you can see how quickly this idea of “data for good” is growing.

Yes, it’s an exciting time to be exploring the ways new datasets, new techniques, and new scientists could be deployed to “make the world a better place.” We’ve already seen deep learning applied to ocean health, satellite imagery used to estimate poverty levels, and cellphone data used to elucidate Nairobi’s hidden public transportation routes. And yet, for all this excitement about the potential of this “data for good movement,” we are still desperately far from creating lasting impact. Many efforts will not only fall short of lasting impact — they will make no change at all….

So how can these well-intentioned efforts reach their full potential for real impact? Embracing the following five principles can drastically accelerate a world in which we truly use data to serve humanity.

1. “Statistics” is so much more than “percentages”

We must convey what constitutes data, what it can be used for, and why it’s valuable.

There was a packed house for the March 2015 release of the No Ceilings Full Participation Report. Hillary Clinton, Melinda Gates, and Chelsea Clinton stood on stage and lauded the report, the culmination of a year-long effort to aggregate and analyze new and existing global data, as the biggest, most comprehensive data collection effort about women and gender ever attempted. One of the most trumpeted parts of the effort was the release of the data in an open and easily accessible way.

I ran home and excitedly pulled up the data from the No Ceilings GitHub, giddy to use it for our DataKind projects. As I downloaded each file, my heart sank. The 6MB size of the entire global dataset told me what I would find inside before I even opened the first file. Like a familiar ache, the first row of the spreadsheet said it all: “USA, 2009, 84.4%.”

What I’d encountered was a common situation when it comes to data in the social sector: the prevalence of inert, aggregate data. ….

2. Finding problems can be harder than finding solutions

We must scale the process of problem discovery through deeper collaboration between the problem holders, the data holders, and the skills holders.

In the immortal words of Henry Ford, “If I’d asked people what they wanted, they would have said a faster horse.” Right now, the field of data science is in a similar position. Framing data solutions for organizations that don’t realize how much is now possible can be a frustrating search for faster horses. If data cleaning is 80% of the hard work in data science, then problem discovery makes up nearly the remaining 20% when doing data science for good.

The plague here is one of education. …

3. Communication is more important than technology

We must foster environments in which people can speak openly, honestly, and without judgment. We must be constantly curious about each other.

At the conclusion of one of our recent DataKind events, one of our partner nonprofit organizations lined up to hear the results from their volunteer team of data scientists. Everyone was all smiles — the nonprofit leaders had loved the project experience, the data scientists were excited with their results. The presentations began. “We used Amazon RedShift to store the data, which allowed us to quickly build a multinomial regression. The p-value of 0.002 shows …” Eyes glazed over. The nonprofit leaders furrowed their brows in telegraphed concentration. The jargon was standing in the way of understanding the true utility of the project’s findings. It was clear that, like so many other well-intentioned efforts, the project was at risk of gathering dust on a shelf if the team of volunteers couldn’t help the organization understand what they had learned and how it could be integrated into the organization’s ongoing work…..

4. We need diverse viewpoints

To tackle sector-wide challenges, we need a range of voices involved.

One of the most challenging aspects to making change at the sector level is the range of diverse viewpoints necessary to understand a problem in its entirety. In the business world, profit, revenue, or output can be valid metrics of success. Rarely, if ever, are metrics for social change so cleanly defined….

Challenging this paradigm requires diverse, or “collective impact,” approaches to problem solving. The idea has been around for a while (h/t Chris Diehl), but has not yet been widely implemented due to the challenges in successful collective impact. Moreover, while there are many diverse collectives committed to social change, few have the voice of expert data scientists involved. DataKind is piloting a collective impact model called DataKind Labs, that seeks to bring together diverse problem holders, data holders, and data science experts to co-create solutions that can be applied across an entire sector-wide challenge. We just launched our first project with Microsoft to increase traffic safety and are hopeful that this effort will demonstrate how vital a role data science can play in a collective impact approach.

5. We must design for people

Data is not truth, and tech is not an answer in-and-of-itself. Without designing for the humans on the other end, our work is in vain.

So many of the data projects making headlines — a new app for finding public services, a new probabilistic model for predicting weather patterns for subsistence farmers, a visualization of government spending — are great and interesting accomplishments, but don’t seem to have an end user in mind. The current approach appears to be “get the tech geeks to hack on this problem, and we’ll have cool new solutions!” I’ve opined that, though there are many benefits to hackathons, you can’t just hack your way to social change….(More)”

Five principles for applying data science for social good
