Volta: “Visualising arguments helps people assemble their thoughts and get to grips with complex problems, according to The Argumentation Factory, based in Amsterdam. Their Argument Maps, constructed for government agencies, NGOs and commercial organizations, are designed to enable people to make better decisions and share and communicate information.
Dutch research organisation TNO, in association with The Argumentation Factory, has launched the European Shale Gas Argument Map detailing the pros and cons of the production of shale gas for EU member states with shale gas resources. Their map is designed to provide the foundation for an open discussion and help the user make a balanced assessment.”
Ben Shneiderman, the founding director of the Human-Computer Interaction Lab, in The Atlantic: “The choice between basic and applied research is a false one….The belief that basic or pure research lays the foundation for applied research was fixed in science policy circles by Vannevar Bush’s 1945 report on Science: The Endless Frontier. Unfortunately, his unsubstantiated beliefs have remained attractive to powerful advocates of basic research who seek funding for projects that may or may not advance innovation and economic growth. Shifting the policy agenda to recognize that applied research goals often trigger more effective basic research could accelerate both applied and basic research….the highest payoffs often come when there is a healthy interaction of basic and applied research (Figure 3). This ecological model also suggests that basic and applied research are embedded in a rich context of large development projects and continuing efforts to refine production & operations.”
Paper by Jeffrey Johnson for the Annual Conference of the Midwest Political Science Association: “This paper argues for subsuming the question of open data within a larger question of information justice. I show that there are several problems of justice that emerge as a consequence of opening data to full public accessibility, and are generally a consequence of the failure of the open data movement to understand the constructed nature of data. I examine three such problems: the embedding of social privilege in datasets as the data is constructed, the differential capabilities of data users (especially differences between citizens and “enterprise” users), and the norms that data systems impose through their function as disciplinary systems.
In each case I show that open data has the quite real potential to exacerbate rather than alleviate injustices. This necessitates a theory of information justice. I briefly suggest two complementary directions in which such a theory might be developed: one leading toward moral principles that can be used to evaluate the justness of data practices, and another exploring the practices and structures that a social movement promoting information justice might pursue.”
Paper by NetLab (University of Toronto) scholars in the latest issue of the Journal of Computer-Mediated Communication: “We review the evidence from a number of surveys in which our NetLab has been involved about the extent to which the Internet is transforming or enhancing community. The studies show that the Internet is used for connectivity locally as well as globally, although the nature of its use varies in different countries. Internet use is adding on to other forms of communication, rather than replacing them. Internet use is reinforcing the pre-existing turn to societies in the developed world that are organized around networked individualism rather than group or local solidarities. The result has important implications for civic involvement.”
New Scientist: “Diagnosing rare illnesses could get easier, thanks to new web-based tools that pool information from a wide variety of sources…CrowdMed, launched on 16 April at the TedMed conference in Washington DC, uses crowds to solve tough medical cases.
Anyone can join CrowdMed and analyse cases, regardless of their background or training. Participants are given points that they can then use to bet on the correct diagnosis from lists of suggestions. This creates a prediction market, with diagnoses falling and rising in value based on their popularity, like stocks in a stock market. Algorithms then calculate the probability that each diagnosis will be correct. In 20 initial test cases, around 700 participants identified each of the mystery diseases as one of their top three suggestions….
Frustrated patients and doctors can also turn to FindZebra, a recently launched search engine for rare diseases. It lets users search an index of rare disease databases looked after by a team of researchers. In initial trials, FindZebra returned more helpful results than Google on searches within this same dataset.”
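The CrowdMed entry above describes a parimutuel-style mechanism: participants stake points on candidate diagnoses, and a diagnosis’s share of the staked points is read as its crowd-estimated probability. CrowdMed’s actual algorithms are not public, so the following sketch is an illustrative assumption only; the function name, the bet format, and the simple points-share weighting are all hypothetical.

```python
# Hypothetical sketch of a parimutuel prediction market for diagnoses.
# Not CrowdMed's real algorithm: names and weighting are illustrative.
from collections import defaultdict

def diagnosis_probabilities(bets):
    """bets: iterable of (participant, diagnosis, points) tuples.
    Returns each diagnosis's share of all points staked, read as a
    crowd-estimated probability (shares sum to 1)."""
    totals = defaultdict(float)
    for _participant, diagnosis, points in bets:
        totals[diagnosis] += points
    staked = sum(totals.values())
    return {d: pts / staked for d, pts in totals.items()}

# Two participants back one diagnosis, one backs another:
probs = diagnosis_probabilities([
    ("alice", "diagnosis A", 60),
    ("bob",   "diagnosis A", 20),
    ("carol", "diagnosis B", 20),
])
# diagnosis A carries 80 of 100 staked points, so its share is 0.8
```

A real system would also weight bettors by track record and update prices as bets arrive; this sketch only shows the core idea of popularity-as-probability.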
Tom Kalil, Deputy Director for Technology and Innovation at OSTP: “As we enter the second year of the Big Data Initiative, the Obama Administration is encouraging multiple stakeholders, including federal agencies, private industry, academia, state and local government, non-profits, and foundations to develop and participate in Big Data initiatives across the country. Of particular interest are partnerships designed to advance core Big Data technologies; harness the power of Big Data to advance national goals such as economic growth, education, health, and clean energy; use competitions and challenges; and foster regional innovation.
The National Science Foundation has issued a request for information encouraging stakeholders to identify Big Data projects they would be willing to support to achieve these goals. And, later this year, OSTP, NSF, and other partner agencies in the Networking and Information Technology R&D (NITRD) program plan to convene an event that highlights high-impact collaborations and identifies areas for expanded collaboration between the public and private sectors.”
Steve Lohr from the New York Times: “Work-force science, in short, is what happens when Big Data meets H.R….Today, every e-mail, instant message, phone call, line of written code and mouse-click leaves a digital signal. These patterns can now be inexpensively collected and mined for insights into how people work and communicate, potentially opening doors to more efficiency and innovation within companies.
Digital technology also makes it possible to conduct and aggregate personality-based assessments, often using online quizzes or games, in far greater detail and numbers than ever before. In the past, studies of worker behavior were typically based on observing a few hundred people at most. Today, studies can include thousands or hundreds of thousands of workers, an exponential leap ahead.
“The heart of science is measurement,” says Erik Brynjolfsson, director of the Center for Digital Business at the Sloan School of Management at M.I.T. “We’re seeing a revolution in measurement, and it will revolutionize organizational economics and personnel economics.”
The data-gathering technology, to be sure, raises questions about the limits of worker surveillance. “The larger problem here is that all these workplace metrics are being collected when you as a worker are essentially behind a one-way mirror,” says Marc Rotenberg, executive director of the Electronic Privacy Information Center, an advocacy group. “You don’t know what data is being collected and how it is used.”
Experian: “Insights from Experian, the global information services company, reveal that if the time spent on the Internet were distilled into an hour, then a quarter of it would be spent on social networking and forums across the UK, US and Australia. In the UK, 13 minutes out of every hour online is spent on social networking and forums, nine minutes on entertainment sites and six minutes shopping.”
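Experian’s framing converts minutes per online hour into shares of total time. As a quick arithmetic check of the UK figures quoted above (a sketch only; the category labels are paraphrased from the entry):

```python
# Convert Experian's UK minutes-per-online-hour figures into shares of an hour.
uk_minutes = {
    "social networking and forums": 13,
    "entertainment": 9,
    "shopping": 6,
}
shares = {category: minutes / 60 for category, minutes in uk_minutes.items()}
# 13/60 is about 0.217, i.e. roughly the "quarter of an hour" headline figure,
# which Experian states as an average across the UK, US and Australia.
```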
A new paper on “Local governance in the new information ecology” in the journal Public Money & Management calls for the creation of “interpretative communities” to make sense locally of open government data. In particular, the authors argue that:
“The availability of this open government data… solves nothing: as many writers have pointed out, such data needs to be interpreted and interpretation is always a function of a collective—what has been called an ‘interpretative’ or ‘epistemic’ community.”
The call mirrors the emerging view that the next stage for open data is to focus on making sense of the data, and on using it to serve the public good.
The authors identify “three different models of ‘interpretative communities’ that have emerged over the last few decades, drawing on, respectively, literary theory, science and technology studies and international politics” – including reference groups, epistemic communities and expert networks. To develop these communities locally, the paper states, will require us…
“to rethink the resources and institutions that could support a local use of OGD and other resources. Such institutional support for local interpretation cannot be wholly local but needs to draw, in a critical and interactive manner, on wider knowledge bases. In the end, the model of the local that needs to be mobilized is not one based on spatial propinquity alone, or on a bounded sense of local, but one in which the local is seen as relational, connected and dynamic…we might look to the new technologies, and the newly-emerged social networks or Web 2.0 in particular, for such a balance of the local and the extra local”.
Ultimately, the paper is a critique of the current movement to provide open data that is presented as objective knowledge but is based on a “view from nowhere”…
“To make it into the view from somewhere will require the construction of powerful, yet open, interpretative communities to enact local governance with all of the subsequent questions arising about current modes of democratic representation, sectional interests in the third and private sector and central-local government dynamics. The prospects of such an information ecology to support interpretative community building emerging in the current environment in anything other than a piecemeal and reactive way do not appear promising.”
NYT on the integration of data science within universities: “In the last few years, dozens of programs under a variety of names have sprung up in response to the excitement about Big Data, not to mention the six-figure salaries for some recent graduates.”