Citizen Science Profile: SeaSketch


Blog entry from the Commons Lab within the Science and Technology Innovation Program of the Woodrow Wilson International Center for Scholars: “As part of the Commons Lab’s ongoing initiative to highlight the intersection of emerging technologies and citizen science, we present a profile of SeaSketch, a marine management software that makes complex spatial planning tools accessible to everyone. This was prepared with the gracious assistance of Will McClintock, director of the McClintock Lab.
The SeaSketch initiative highlights key components of successful citizen science projects. The end product is a result of an iterative process where the developers applied previous successes and learned from mistakes. The tool was designed to allow people without technical training to participate, expanding access to stakeholders. MarineMap had a quantifiable impact on California marine protected areas, increasing their size from 1 percent to 16 percent of the coastline. The subsequent version, SeaSketch, is uniquely suited to scale out worldwide, addressing coastal and land management challenges. By emphasizing iterative development, non-expert accessibility and scalability, SeaSketch offers a model of successful citizen science….
SeaSketch succeeded as a citizen science initiative by focusing on three project priorities:

  • Iterative Development: The current version of SeaSketch’s PGIS software is the result of seven years of trial and error. Doris and MarineMap helped the project team learn what worked and adjust accordingly. The final result would have been impossible without a sustained commitment to the project and regular product assessments.
  • Non-Expert Accessibility: GIS software is traditionally limited to those with technical expertise. SeaSketch was developed anticipating that stakeholders without GIS training would use the software. New features allow users to contribute spatial surveys, sharing their knowledge of the area to better inform planning. This ease of use means the project is outward facing: More people can participate, meaning the analyses better reflect community priorities.
  • Scalability: Although MarineMap was built specifically to guide the MLPA process, the concept is highly flexible. SeaSketch is being used to support oceanic management issues worldwide, including in areas of international jurisdiction. The software can support planning with legal implications as well as cooperative agreements. SeaSketch’s project team believes it can also be used for freshwater and terrestrial management issues.”

Analyzing the Analyzers


An Introspective Survey of Data Scientists and Their Work, by Harlan Harris, Sean Murphy, and Marck Vaisman: “There has been intense excitement in recent years around activities labeled “data science,” “big data,” and “analytics.” However, the lack of clarity around these terms and, particularly, around the skill sets and capabilities of their practitioners has led to inefficient communication between “data scientists” and the organizations requiring their services. This lack of clarity has frequently led to missed opportunities. To address this issue, we surveyed several hundred practitioners via the Web to explore the varieties of skills, experiences, and viewpoints in the emerging data science community.

We used dimensionality reduction techniques to divide potential data scientists into five categories based on their self-ranked skill sets (Statistics, Math/Operations Research, Business, Programming, and Machine Learning/Big Data), and four categories based on their self-identification (Data Researchers, Data Businesspeople, Data Engineers, and Data Creatives). Further examining the respondents based on their division into these categories provided additional insights into the types of professional activities, educational background, and even scale of data used by different types of Data Scientists.
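The pipeline the authors describe — reduce high-dimensional self-ranked skill scores, then group respondents — can be sketched in a few lines. This is an illustrative toy, not the authors' actual method or data: the ratings below are invented, and the grouping here is a crude strongest-skill label rather than the survey's clustering.

```python
import numpy as np

# Hypothetical self-ranked skill scores (0-100) for six respondents
# across the five skill groups named in the survey. All numbers are
# invented for illustration.
skills = ["Statistics", "Math/OR", "Business", "Programming", "ML/Big Data"]
ratings = np.array([
    [90, 80, 20, 40, 60],   # leans Statistics/Math
    [85, 90, 30, 35, 50],
    [30, 20, 95, 40, 25],   # leans Business
    [25, 30, 90, 50, 30],
    [40, 35, 30, 95, 85],   # leans Programming/ML
    [35, 40, 25, 90, 90],
], dtype=float)

# Center the data and project onto the top two principal components
# via SVD -- a standard dimensionality-reduction step before grouping.
centered = ratings - ratings.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
components = centered @ Vt[:2].T    # each respondent as a 2-D point

# Crude grouping: label each respondent by their strongest skill area.
labels = [skills[i] for i in ratings.argmax(axis=1)]
print(components.shape)             # (6, 2)
print(labels)
```

In the actual survey, respondents self-identified into four types (Data Researchers, Data Businesspeople, Data Engineers, Data Creatives); the point of the projection step is that respondents with similar skill mixes land near each other in the reduced space, making such groupings visible.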
In this report, we combine our results with insights and data from others to provide a better understanding of the diversity of practitioners, and to argue for the value of clearer communication around roles, teams, and careers.”

Sensing and Shaping Emerging Conflicts


A new Report of a Joint Workshop of the National Academy of Engineering and the United States Institute of Peace: Roundtable on Technology, Science, and Peacebuilding: “Technology has revolutionized many aspects of modern life, from how businesses operate, to how people get information, to how countries wage war. Certain technologies in particular, including not only cell phones and the Internet but also satellites, drones, and sensors of various kinds, are transforming the work of mitigating conflict and building peaceful societies. Rapid increases in the capabilities and availability of digital technologies have put powerful communications devices in the hands of most of the world’s population.
These technologies enable one-to-one and one-to-many flows of information, connecting people in conflict settings to individuals and groups outside those settings and, conversely, linking humanitarian organizations to people threatened by violence. Communications within groups have also intensified and diversified as the group members use new technologies to exchange text, images, video, and audio. Monitoring and analysis of the flow and content of this information can yield insights into how violence can be prevented or mitigated. In this way technologies and the resulting information can be used to detect and analyze, or sense, impending conflict or developments in ongoing conflict.”

Can Silicon Valley Save the World?


Charles Kenny and Justin Sandefur in Foreign Policy: “Not content with dominating IPOs on Wall Street, Silicon Valley entrepreneurs are taking their can-do, failure-conquering, technology-enabled tactics to the challenge of global poverty. And why not? If we can look up free Khan Academy math lectures using the cheap, kid-friendly computers handed out by the folks at One Laptop per Child, who needs to worry about the complexities of education reform? With a lamp lit up by an electricity-generating soccer ball in every hut, who needs coal-fired power stations and transmission lines? And if even people in refugee camps can make money transcribing outsourced first-world dental records, who needs manufacturing or the roads and port systems required to export physical goods? No wonder the trendiest subject these days for TED talks is cracking the code on digital-era do-gooding, with 100 recent talks and counting just on the subjects of Africa and development…
But entrepreneurial spirit and even the fanciest of gadgets will only get you so far. All the technological transformation of the last 200 years hasn’t come close to wiping out global poverty. More than half the planet still lives on less than $4 a day, and 2.4 billion people live on less than $2 a day. And that’s after a decade that saw the biggest drop in extreme poverty ever. What’s more, millions and millions of people still die annually from easily and cheaply preventable or treatable diseases like diarrhea and pneumonia. None of this is for a lack of science; often it isn’t even for lack of money. It is because parents don’t follow simple health practices like washing their hands, government bureaucrats can’t or won’t provide basic water and sanitation programs, and arbitrary immigration restrictions prevent the poor from moving to places with better opportunities.
Sorry, but no iPhone, even one loaded with the coolest apps, is going to change all that….
SO WHAT CAN BE DONE to harness technological innovation, filter the good ideas from the bad, and spread a little of Silicon Valley’s fairy dust on the world’s poorer regions? The answer, according to Harvard economist Michael Kremer, is market discipline and rigorous testing. Kremer is a MacArthur “genius” grant winner whose name pops up in speculation about future Nobel Prize contenders. He thinks that technological fixes can dramatically improve the lives of the global poor, but markets won’t provide the right innovations without support.”

Targeting Transparency


New paper by David Weil, Mary Graham, and Archon Fung in Science Magazine: “When rules, taxes, or subsidies prove impractical as policy tools, governments increasingly employ “targeted transparency,” compelling disclosure of information as an alternative means of achieving specific objectives. For example, the U.S. Affordable Care Act of 2010 requires calories be posted on menus to enlist both restaurants and patrons in the effort to reduce obesity. It is crucial to understand when and how such targeted transparency works, as well as when it is inappropriate. Research about its use and effectiveness has begun to take shape, drawing on social and behavioral scientists, economists, and legal scholars. We explore questions central to the performance of targeted transparency policies.

Targeted transparency differs from broader “right-to-know” and “open-government” policies that span from the 1966 Freedom of Information Act to the Obama Administration’s “open-government” initiative encouraging officials to make existing data sets readily available and easy to parse as an end in itself (1, 2). Targeted transparency offers a more focused approach often used to introduce new scientific evidence of public risks into market choices. Government compels companies or agencies to disclose information in standardized formats to reduce specific risks, to ameliorate externalities arising from a failure of consumers or producers to fully consider social costs associated with a product, or to improve provision of public goods and services. Such policies are more light-handed than conventional regulation, relying on the power of information rather than on enforcement of rules and standards or financial inducements….”

See also the Transparency Policy Project at http://transparencypolicy.net/

Mozilla Science Lab


Mark Surman in Mozilla Blog: “We’re excited to announce the launch of the Mozilla Science Lab, a new initiative that will help researchers around the world use the open web to shape science’s future.
Scientists created the web — but the open web still hasn’t transformed scientific practice to the same extent we’ve seen in other areas like media, education and business. For all of the incredible discoveries of the last century, science is still largely rooted in the “analog” age. Credit systems in science are still largely based around “papers,” for example, and as a result researchers are often discouraged from sharing, learning, reusing, and adopting the type of open and collaborative learning that the web makes possible.
The Science Lab will foster dialog between the open web community and researchers to tackle this challenge. Together they’ll share ideas, tools, and best practices for using next-generation web solutions to solve real problems in science, and explore ways to make research more agile and collaborative….
With support from the Alfred P. Sloan Foundation, Mozilla Science Lab will start by convening a broad conversation about open web approaches and skills training, working with existing tool developers and supporting a global community of researchers.
Get involved
Stay tuned for more about how you can join the conversation. In the meantime, you can:

Is Cybertopianism Really Such a Bad Thing?


Ethan Zuckerman in Slate: “As the historian and technology scholar Langdon Winner suggests, “The arrival of any new technology that has significant power and practical potential always brings with it a wave of visionary enthusiasm that anticipates the rise of a utopian social order.” Technologies that connect individuals to one another—like the airplane, the telegraph, and the radio—appear particularly powerful at helping us imagine a smaller, more connected world. Seen through this lens, the Internet’s underlying architecture—it is no more and no less than a network that connects networks—and the sheer amount written about it in the past decade guaranteed that the network would be placed at the center of visions for a world made better through connection. These visions are so abundant that they’ve even spawned a neologism: “cyberutopianism.”

The term “cyberutopian” tends to be used only in the context of critique. Calling someone a cyberutopian implies that he or she has an unrealistic and naïvely overinflated sense of what technology makes possible and an insufficient understanding of the forces that govern societies. Curiously, the commonly used term for an opposite stance, a belief that Internet technologies are weakening society, coarsening discourse, and hastening conflict, is a less weighted one: “cyberskepticism.” Whether or not either of these terms adequately serves us in this debate, we should consider cyberutopianism’s appeal, and its merits….

If we reject the notion that technology makes certain changes inevitable, but accept that the aspirations of the “cyberutopians” are worthy ones, we are left with a challenge: How do we rewire the tools we’ve built to maximize our impact on an interconnected world? Accepting the shortcomings of the systems we’ve built as inevitable and unchangeable is lazy. As Benjamin Disraeli observed in Vivian Grey, “Man is not the creature of circumstances, circumstances are the creatures of men. We are free agents, and man is more powerful than matter.” And, as Rheingold suggests, believing that people can use technology to build a world that’s more just, fair, and inclusive isn’t merely defensible. It’s practically a moral imperative.


Excerpted from Rewire: Digital Cosmopolitans in the Age of Connection by Ethan Zuckerman.

UK launches Information Economy Strategy


Open Data Institute: “The Information Economy Strategy sets out a range of key actions, including:

  • Digitally transforming 25 of the top 50 UK public services over the next 300 days, including plans to give businesses a single, online view of their tax records.
  • Launching a new programme to help 1.6 million SMEs scale up their business online over the next five years.
  • Publishing a data capability strategy in October 2013, developed in partnership with government, industry and academia. The strategy will build on the recommendations in Stephan Shakespeare’s review of Public Sector Information and the Prime Minister’s Council for Science and Technology’s report on algorithms, and will be published alongside the Open Government Partnership National Action Plan.
  • Establishing the world’s first facility for testing state of the art 5G mobile technology, working with industry and the University of Surrey.”

Why Are We Signing Our Emails With “Thank You”?


Krystal D’Costa on Anthropology in Practice in Scientific American: “These types of linguistic structures are known as “politeness formulae.” … These patterns of responses are deeply nuanced and reflect the nature of the relationship between participants: degree of intimacy, relative status, and length of contact or expected duration of separation all influence how these interactions are carried out.

In the age of texting, these practices may seem antiquated, but the need for those sorts of rituals remains important, particularly in electronic communication where tone is hard to read. We end our communiques with “talk later,” “talk 2 u tomorrow,” or even simply “bye.” “Thanks” and “Thank you” have worked their way into this portion of the formula particularly in emails. More traditional valedictions have been replaced with “Thank you” so subtly that it’s now a common sign-off in this medium. But what does it mean? And why is it more acceptable than “Sincerely” or “Yours truly”?

It is in part a reflection of our times. Email offers a speedier means of contact than an actual letter (and in some cases, a telephone), but that speed also means we’re sending more messages through this medium for both personal and professional reasons, and reading and responding to these messages requires a commitment of time. So it’s more important that the sender recognize the burden they’ve placed on the recipient. In a time when letters took time to write, send, and respond to, it was important for the sender to attest to her reliability. Responses and actions were not so easy to take back. “Sincerely” and “Yours truly” were meant to build trust between communicants. Credibility was an important determinant of whether a response would be issued. Today, as the web enables strangers to contact each other with little effort, credibility is less of a factor in determining responses (SPAM mail aside) when weighed against time.”

Big Data Is Not Our Master. Humans create technology. Humans can control it.


Chris Hughes in New Republic: “We’ve known for a long time that big companies can stalk our every digital move and customize our every Web interaction. Our movements are tracked by credit cards, Gmail, and tollbooths, and we haven’t seemed to care all that much.
That is, until this week’s news of government eavesdropping, with the help of these very same big companies—Verizon, Facebook, and Google, among others. For the first time, America is waking up to the realities of what all this information—known in the business as “big data”—enables governments and corporations to do….
We are suddenly wondering, Can the rise of enormous data systems that enable this surveillance be stopped or controlled? Is it possible to turn back the clock?
Technologists see the rise of big data as the inevitable march of history, impossible to prevent or alter. Viktor Mayer-Schönberger and Kenneth Cukier’s recent book Big Data is emblematic of this argument: They say that we must cope with the consequences of these changes, but they never really consider the role we play in creating and supporting these technologies themselves….
But these well-meaning technological advocates have forgotten that as a society, we determine our own future and set our own standards, norms, and policy. Talking about technological advancements as if they are pre-ordained science erases the role of human autonomy and decision-making in inventing our own future. Big data is not a Leviathan that must be coped with, but a technological trend that we have made possible and support through social and political policy.”