Request for Proposals: Exploring the Implications of Government Release of Large Datasets


“The Berkeley Center for Law & Technology and Microsoft are issuing this request for proposals (RFP) to fund scholarly inquiry to examine the civil rights, human rights, security and privacy issues that arise from recent initiatives to release large datasets of government information to the public for analysis and reuse.  This research may help ground public policy discussions and drive the development of a framework to avoid potential abuses of this data while encouraging greater engagement and innovation.
This RFP seeks to:

    • Gain knowledge of the impact of the online release of large amounts of data generated by citizens’ interactions with government
    • Imagine new possibilities for technical, legal, and regulatory interventions that avoid abuse
    • Begin building a body of research that addresses these issues

– BACKGROUND –

 
Governments at all levels are releasing large datasets for analysis by anyone for any purpose—“Open Data.”  Entrepreneurs may use Open Data to create new products and services, and citizens may use it to gain insight into government.  A plethora of time-saving and other useful applications have emerged from Open Data feeds, including more accurate traffic information, real-time arrival of public transportation, and information about crimes in neighborhoods.  Sometimes governments release large datasets in order to encourage the development of unimagined new applications.  For instance, New York City has made over 1,100 databases available, some of which contain information that can be linked to individuals, such as a parking violation database containing license plate numbers and car descriptions.
Data held by the government is often implicitly or explicitly about individuals—acting in roles that have recognized constitutional protection, such as lobbyist, signatory to a petition, or donor to a political cause; in roles that require special protection, such as victim of, witness to, or suspect in a crime; in the role of businessperson submitting proprietary information to a regulator or obtaining a business license; and in the role of ordinary citizen.  While open government is often presented as an unqualified good, sometimes Open Data can identify individuals or groups, leading to a more transparent citizenry.  The citizen who foresees this growing transparency may be less willing to engage with government, as these transactions may be documented and released in a dataset to anyone to use for any imaginable purpose—including to deanonymize the database—forever.  Moreover, some groups of citizens may have few options or no choice as to whether to engage in governmental activities.  Hence, Open Data sets may have a disparate impact on certain groups.
The potential impact of large-scale data and analysis on civil rights is an area of growing concern.  A number of civil rights and media justice groups banded together in February 2014 to endorse the “Civil Rights Principles for the Era of Big Data,” and the potential of new data systems to undermine longstanding civil rights protections was flagged as a “central finding” of a recent policy review by White House adviser John Podesta.
The Berkeley Center for Law & Technology (BCLT) and Microsoft are issuing this request for proposals in an effort to better understand the implications and potential impact of the release of data related to U.S. citizens’ interactions with their local, state and federal governments. BCLT and Microsoft will fund up to six grants, with a combined total of $300,000.  Grantees will be required to participate in a workshop to present and discuss their research at the Berkeley Technology Law Journal (BTLJ) Spring Symposium.  All grantees’ papers will be published in a dedicated monograph.  Grantees’ papers that approach the issues from a legal perspective may also be published in the BTLJ. We may also hold a follow-up workshop in New York City or Washington, DC.
While we are primarily interested in funding proposals that address issues related to the policy impacts of Open Data, many of these issues are intertwined with general societal implications of “big data.” As a result, proposals that explore Open Data from a big data perspective are welcome; however, proposals solely focused on big data are not.  We are open to proposals that address the following difficult questions.  We are also open to a wide range of methods and disciplines, and are particularly interested in proposals from cross-disciplinary teams.

    • To what extent does existing Open Data made available by city and state governments affect individual profiling?  Do the effects change depending on the level of aggregation (neighborhood vs. cities)?  What releases of information could foreseeably cause discrimination in the future? Will different groups in society be disproportionately impacted by Open Data?
    • Should the use of Open Data be governed by a code of conduct or subject to a review process before being released? In order to enhance citizen privacy, should governments develop guidelines to release sampled or perturbed data, instead of entire datasets? When datasets contain potentially identifiable information, should there be a notice-and-comment proceeding that includes proposed technological solutions to anonymize, de-identify or otherwise perturb the data?
    • Is there something fundamentally different about government services and the government’s collection of citizens’ data for basic needs in modern society such as power and water that requires governments to exercise greater due care than commercial entities?
    • Companies have legal and practical mechanisms to shield data submitted to government from public release.  What mechanisms do individuals have, or should they have, to address misuse of Open Data?  Could developments in the constitutional right to informational privacy, as articulated in Whalen and Westinghouse Electric Co., address Open Data privacy issues?
    • Collecting data costs money, and its release could affect civil liberties.  Yet it is being given away freely, sometimes to immensely profitable firms.  Should governments license data for a fee and/or impose limits on its use, given its value?
    • The privacy principle of “collection limitation” is under siege, with many arguing that use restrictions will be more efficacious for protecting privacy and more workable for big data analysis.  Does the potential of Open Data justify eroding state and federal privacy act collection limitation principles?   What are the ethical dimensions of a government system that deprives the data subject of the ability to obscure or prevent the collection of data about a sensitive issue?  A move from collection restrictions to use regulation raises a number of related issues, detailed below.
    • Are use restrictions efficacious in creating accountability?  Consumer reporting agencies are regulated by use restrictions, yet they are not known for their accountability.  How could use regulations be implemented in the context of Open Data efficaciously?  Can a self-learning algorithm honor data use restrictions?
    • If an Open Dataset were regulated by a use restriction, how could individuals police wrongful uses?   How would plaintiffs overcome the likely defenses or proof of facts in a use regulation system, such as a burden to prove that data were analyzed and the product of that analysis was used in a certain way to harm the plaintiff?  Will plaintiffs ever be able to beat First Amendment defenses?
    • The President’s Council of Advisors on Science and Technology big data report emphasizes that analysis is not a “use” of data.  Such an interpretation suggests that NSA metadata analysis and large-scale scanning of communications do not raise privacy issues.  What are the ethical and legal implications of the “analysis is not use” argument in the context of Open Data?
    • Open Data celebrates the idea that information collected by the government can be used by another person for various kinds of analysis.  When analysts are not involved in the collection of data, they are less likely to understand its context and limitations.  How do we ensure that this knowledge is maintained in a use regulation system?
    • Former President William Clinton was admitted under a pseudonym for a procedure at a New York hospital in 2004.  The hospital detected 1,500 attempts by its own employees to access the President’s records.  With snooping such a tempting activity, how could incentives be crafted to cause self-policing of government data and the self-disclosure of inappropriate uses of Open Data?
    • It is clear that data privacy regulation could hamper some big data efforts.  However, many examples of big data successes hail from highly regulated environments, such as health care and financial services—areas with statutory, common law, and IRB protections.  What are the contours of privacy law that are compatible with big data and Open Data success and which are inherently inimical to it?
    • In recent years, the problem of “too much money in politics” has been addressed with increasing disclosure requirements.  Yet, distrust in government remains high, and individuals identified in donor databases have been subjected to harassment.  Is the answer to problems of distrust in government even more Open Data?
    • What are the ethical and epistemological implications of encouraging government decision-making based upon correlation analysis, without a rigorous understanding of cause and effect?  Are there decisions that should not be left to just correlational proof? While enthusiasm for data science has increased, scientific journals are elevating their standards, with special scrutiny focused on hypothesis-free, multiple comparison analysis. What could legal and policy experts learn from experts in statistics about the nature and limits of open data?…
      To submit a proposal, visit the Conference Management Toolkit (CMT) here.
      Once you have created a profile, the site will allow you to submit your proposal.
      If you have questions, please contact Chris Hoofnagle, principal investigator on this project.”

Towards Timely Public Health Decisions to Tackle Seasonal Diseases With Open Government Data


Paper by Vandana Srivastava and Biplav Srivastava for the Workshops at the Twenty-Eighth AAAI Conference on Artificial Intelligence: “Improving public health is a major responsibility of any government, and is of major interest to citizens and scientific communities around the world. Here, one sees two extremes. On one hand, tremendous progress has been made in recent years in the understanding of causes, spread and remedies of common and regularly occurring diseases like Dengue, Malaria and Japanese Encephalitis (JE). On the other hand, public agencies treat these diseases in an ad hoc manner without learning from the experiences of previous years. Specifically, they get alerted only once reported cases have already risen substantially in the known disease season, reactively initiate a few actions and then document the disease impact (cases, deaths) for that period, only to forget this learning in the next season. As a result, they miss the opportunity to reduce preventable deaths and sickness, and their corresponding economic impact, which scientific progress could have enabled. The gap is universal but very prominent in developing countries like India.
In this paper, we show that if public agencies provide historical disease impact information openly, it can be analyzed with statistical and machine learning techniques, correlated with best emerging practices in disease control, and used in simulation to optimize social benefits, providing timely guidance for new disease seasons and regions. We illustrate this using open data for mosquito-borne communicable diseases and published public-health results on the efficacy of Dengue control methods, applying them to a simulated typical city to obtain maximal benefits with available resources. The exercise further helps us suggest strategies for new regions, which may be anywhere in the world, how data could be better recorded by city agencies, and which prevention methods the medical community should focus on for wider impact.
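To make concrete the kind of analysis that openly published historical case data enables, here is a minimal Python sketch that flags unusually high monthly case counts against a historical baseline. This is only an illustration, not the authors’ pipeline; the file name and its columns are hypothetical stand-ins for an open city dataset.

```python
# Illustrative sketch only: flag months whose case counts exceed a simple
# historical baseline (mean + 2 standard deviations). The CSV name and its
# columns (year, month, cases) are hypothetical stand-ins for open city data.
import pandas as pd

df = pd.read_csv("dengue_cases.csv")

history = df[df["year"] < 2014]     # past seasons form the baseline
current = df[df["year"] == 2014]    # season being monitored

baseline = history.groupby("month")["cases"].agg(["mean", "std"]).reset_index()

merged = current.merge(baseline, on="month")
merged["alert"] = merged["cases"] > merged["mean"] + 2 * merged["std"]
print(merged[["month", "cases", "alert"]])
```

An agency running a check like this as reports arrive could trigger vector-control measures early in the season rather than documenting the impact after the fact.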
Full Text: PDF

Sharing Data Is a Form of Corporate Philanthropy


Matt Stempeck in HBR Blog:  “Ever since the International Charter on Space and Major Disasters was signed in 1999, satellite companies like DMC International Imaging have had a clear protocol with which to provide valuable imagery to public actors in times of crisis. In a single week this February, DMCii tasked its fleet of satellites on flooding in the United Kingdom, fires in India, floods in Zimbabwe, and snow in South Korea. Official crisis response departments and relevant UN departments can request on-demand access to the visuals captured by these “eyes in the sky” to better assess damage and coordinate relief efforts.

DMCii is a private company, yet it provides enormous value to the public and social sectors simply by periodically sharing its data.
Back on Earth, companies create, collect, and mine data in their day-to-day business. This data has quickly emerged as one of this century’s most vital assets. Public sector and social good organizations may not have access to the same amount, quality, or frequency of data. This imbalance has inspired a new category of corporate giving foreshadowed by the 1999 Space Charter: data philanthropy.
The satellite imagery example is an area of obvious societal value, but data philanthropy holds even stronger potential closer to home, where a wide range of private companies could give back in meaningful ways by contributing data to public actors. Consider two promising contexts for data philanthropy: responsive cities and academic research.
The centralized institutions of the 20th century allowed for the most sophisticated economic and urban planning to date. But in recent decades, the information revolution has helped the private sector speed ahead in data aggregation, analysis, and applications. It’s well known that there’s enormous value in real-time usage of data in the private sector, but there are similarly huge gains to be won in the application of real-time data to mitigate common challenges.
What if sharing economy companies shared their real-time housing, transit, and economic data with city governments or public interest groups? For example, Uber maintains a “God’s Eye view” of every driver on the road in a city:
Imagine combining this single data feed with an entire portfolio of real-time information. An early leader in this space is the City of Chicago’s urban data dashboard, WindyGrid. The dashboard aggregates an ever-growing variety of public datasets to allow for more intelligent urban management.
Over time, we could design responsive cities that react to this data. A responsive city is one where services, infrastructure, and even policies can flexibly respond to the rhythms of its denizens in real-time. Private sector data contributions could greatly accelerate these nascent efforts.
Data philanthropy could similarly benefit academia. Access to data remains an unfortunate barrier to entry for many researchers. The result is that only researchers with access to certain data, such as full-volume social media streams, can analyze and produce knowledge from this compelling information. Twitter, for example, sells access to a range of real-time APIs to marketing platforms, but the price point often exceeds researchers’ budgets. To accelerate the pursuit of knowledge, Twitter has piloted a program called Data Grants offering access to segments of their real-time global trove to select groups of researchers. With this program, academics and other researchers can apply to receive access to relevant bulk data downloads, such as a period of time before and after an election, or a certain geographic area.
Humanitarian response, urban planning, and academia are just three sectors within which private data can be donated to improve the public condition. There are many more possible applications, but few examples to date. For companies looking to expand their corporate social responsibility initiatives, sharing data should be part of the conversation…
Companies considering data philanthropy can take the following steps:

  • Inventory the information your company produces, collects, and analyzes. Consider which data would be easy to share and which data will require long-term effort.
  • Think about who could benefit from this information. Who in your community doesn’t have access to this information?
  • Who could be harmed by the release of this data? If the datasets are about people, have they consented to its release? (i.e. don’t pull a Facebook emotional manipulation experiment).
  • Begin conversations with relevant public agencies and nonprofit partners to get a sense of the sort of information they might find valuable and their capacity to work with the formats you might eventually make available.
  • If you expect an onslaught of interest, an application process can help qualify partnership opportunities to maximize positive impact relative to time invested in the program.
  • Consider how you’ll handle distribution of the data to partners. Even if you don’t have the resources to set up an API, regular releases of bulk data could still provide enormous value to organizations used to relying on less-frequently updated government indices.
  • Consider your needs regarding privacy and anonymization. Strip the data of anything remotely resembling personally identifiable information (here are some guidelines); a minimal sketch of this step follows this list.
  • If you’re making data available to researchers, plan to allow researchers to publish their results without obstruction. You might also require them to share the findings with the world under Open Access terms….”
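As a rough sketch of the anonymization step mentioned above (an illustration only, not a sufficient de-identification procedure; all column names are hypothetical), a release-preparation function might look like this in Python:

```python
# Minimal sketch: drop obviously identifying columns and replace a raw
# identifier with a salted hash before a bulk release. Column names are
# hypothetical; hashing alone is NOT sufficient anonymization, since
# quasi-identifiers (ZIP code, birth date, etc.) can still re-identify people.
import hashlib
import pandas as pd

PII_COLUMNS = ["full_name", "email", "phone", "street_address"]

def prepare_release(df: pd.DataFrame, salt: str) -> pd.DataFrame:
    released = df.drop(columns=[c for c in PII_COLUMNS if c in df.columns])
    if "user_id" in released.columns:
        # Keep records linkable within the release without exposing raw IDs.
        released["user_id"] = released["user_id"].astype(str).map(
            lambda v: hashlib.sha256((salt + v).encode()).hexdigest()[:16]
        )
    return released
```

In practice this would be one part of a broader review that also considers aggregation, sampling, and the quasi-identifiers left in the released columns.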

'Big Data' Will Change How You Play, See the Doctor, Even Eat


We’re entering an age of personal big data, and its impact on our lives will surpass that of the Internet. Data will answer questions we could never before answer with certainty—everyday questions like whether that dress actually makes you look fat, or profound questions about precisely how long you will live.

Every 20 years or so, a powerful technology moves from the realm of backroom expertise and into the hands of the masses. In the late 1970s, computing made that transition—from mainframes in glass-enclosed rooms to personal computers on desks. In the late 1990s, the first web browsers made networks, which had been for science labs and the military, accessible to any of us, giving birth to the modern Internet.

Each transition touched off an explosion of innovation and reshaped work and leisure. In 1975, 50,000 PCs were in use worldwide. Twenty years later: 225 million. The number of Internet users in 1995 hit 16 million. Today it’s more than 3 billion. In much of the world, it’s hard to imagine life without constant access to both computing and networks.

The 2010s will be the coming-out party for data. Gathering, accessing and gleaning insights from vast and deep data has been a capability locked inside enterprises long enough. Cloud computing and mobile devices now make it possible to stand in a bathroom line at a baseball game while tapping into massive computing power and databases. On the other end, connected devices such as the Nest thermostat or Fitbit health monitor and apps on smartphones increasingly collect new kinds of information about everyday personal actions and habits, turning it into data about ourselves.

More than 80 percent of data today is unstructured: tangles of YouTube videos, news stories, academic papers, social network comments. Unstructured data has been almost impossible to search for, analyze and mix with other data. A new generation of computers—cognitive computing systems that learn from data—will read tweets or e-books or watch video, and comprehend their content. Somewhat like brains, these systems can link diverse bits of data to come up with real answers, not just search results.

Such systems can work in natural language. The progenitor is the IBM Watson computer that won on Jeopardy in 2011. Next-generation Watsons will work like a super-powered Google. (Google today is a data-searching wimp compared with what’s coming.)

Sports offers a glimpse into the data age. Last season the NBA installed in every arena technology that can “watch” a game and record, in 48 minutes of action, more than 4 million data points about every movement and shot. That alone could yield new insights for NBA coaches, such as which group of five players most efficiently passes the ball around….

Think again about life before personal computing and the Internet. Even if someone told you that you’d eventually carry a computer in your pocket that was always connected to global networks, you would’ve had a hard time imagining what that meant—imagining WhatsApp, Siri, Pandora, Uber, Evernote, Tinder.

As data about everything becomes ubiquitous and democratized, layered on top of computing and networks, it will touch off the most spectacular technology explosion yet. We can see the early stages now. “Big data” doesn’t even begin to describe the enormity of what’s coming next.”

Chief Executive of Nesta on the Future of Government Innovation


Interview between Rahim Kanani and Geoff Mulgan, CEO of Nesta and member of the MacArthur Research Network on Opening Governance: “Our aspiration is to become a global center of expertise on all kinds of innovation, from how to back creative business start-ups and how to shape innovations tools such as challenge prizes, to helping governments act as catalysts for new solutions,” explained Geoff Mulgan, chief executive of Nesta, the UK’s innovation foundation. In an interview with Mulgan, we discussed their new report, published in partnership with Bloomberg Philanthropies, which highlights 20 of the world’s top innovation teams in government. Mulgan and I also discussed the founding and evolution of Nesta over the past few years, and leadership lessons from his time inside and outside government.
Rahim Kanani: When we talk about ‘innovations in government’, isn’t that an oxymoron?
Geoff Mulgan: Governments have always innovated. The Internet and World Wide Web both originated in public organizations, and governments are constantly developing new ideas, from public health systems to carbon trading schemes, online tax filing to high speed rail networks.  But they’re much less systematic at innovation than the best in business and science.  There are very few job roles, especially at senior levels, few budgets, and few teams or units.  So although there are plenty of creative individuals in the public sector, they succeed despite, not because of, the systems around them. Risk-taking is punished, not rewarded.   Over the last century, by contrast, the best businesses have learned how to run R&D departments, product development teams, open innovation processes and reasonably sophisticated ways of tracking investments and returns.
Kanani: This new report, published in partnership with Bloomberg Philanthropies, highlights 20 of the world’s most effective innovation teams in government working to address a range of issues, from reducing murder rates to promoting economic growth. Before I get to the results, how did this project come about, and why is it so important?
Mulgan: If you fail to generate new ideas, test them and scale the ones that work, it’s inevitable that productivity will stagnate and governments will fail to keep up with public expectations, particularly when waves of new technology—from smart phones and the cloud to big data—are opening up dramatic new possibilities.  Mayor Bloomberg has been a leading advocate for innovation in the public sector, and in New York he showed the virtues of energetic experiment, combined with rigorous measurement of results.  In the UK, organizations like Nesta have approached innovation in a very similar way, so it seemed timely to collaborate on a study of the state of the field, particularly since we were regularly being approached by governments wanting to set up new teams and asking for guidance.
Kanani: Where are some of the most effective innovation teams working on these issues, and how did you find them?
Mulgan: In our own work at Nesta, we’ve regularly sought out the best innovation teams that we could learn from and this study made it possible to do that more systematically, focusing in particular on the teams within national and city governments.  They vary greatly, but all the best ones are achieving impact with relatively slim resources.  Some are based in central governments, like Mindlab in Denmark, which has pioneered the use of design methods to reshape government services, from small business licensing to welfare.  SITRA in Finland has been going for decades as a public technology agency, and more recently has switched its attention to innovation in public services, for example providing mobile tools to help patients manage their own healthcare.   In the city of Seoul, the Mayor set up an innovation team to accelerate the adoption of ‘sharing’ tools, so that people could share things like cars, freeing money for other things.  In South Australia the government set up an innovation agency that has been pioneering radical ways of helping troubled families, mobilizing families to help other families.
Kanani: What surprised you the most about the outcomes of this research?
Mulgan: Perhaps the biggest surprise has been the speed with which this idea is spreading.  Since we started the research, we’ve come across new teams being created in dozens of countries, from Canada and New Zealand to Cambodia and Chile.  China has set up a mobile technology lab for city governments.  Mexico City and many others have set up labs focused on creative uses of open data.  A batch of cities across the US supported by Bloomberg Philanthropies—from Memphis and New Orleans to Boston and Philadelphia—are now showing impressive results and persuading others to copy them.
 

Open Data for economic growth: the latest evidence


Andrew Stott at the World Bank OpenData Blog: “One of the key policy drivers for Open Data has been to drive economic growth and business innovation. There’s a growing amount of evidence and analysis not only for the total potential economic benefit but also for some of the ways in which this is coming about. This evidence is summarised and reviewed in a new World Bank paper published today.
There’s a range of studies that suggest that the potential prize from Open Data could be enormous – including an estimate of $3-5 trillion a year globally from McKinsey Global Institute and an estimate of $13 trillion cumulative over the next 5 years in the G20 countries.  There are supporting studies of the value of Open Data to certain sectors in certain countries – for instance $20 billion a year to Agriculture in the US – and of the value of key datasets such as geospatial data.  All these support the conclusion that the economic potential is at least significant – although with a range from “significant” to “extremely significant”!
At least some of this benefit is already being realised by new companies that have sprung up to deliver new, innovative, data-rich services and by older companies improving their efficiency by using open data to optimise their operations. Five main business archetypes have been identified – suppliers, aggregators, enrichers, application developers and enablers. What’s more, there are at least four companies which did not exist ten years ago, which are driven by Open Data, and which are each now valued at around $1 billion or more. Somewhat surprisingly, the drive to exploit Open Data is coming from outside the traditional “ICT sector” – although the ICT sector is supplying many of the tools required.
It’s also becoming clear that if countries want to maximise their gain from Open Data the role of government needs to go beyond simply publishing some data on a website. Governments need to be:

  • Suppliers – of the data that businesses need
  • Leaders – making sure that municipalities, state-owned enterprises and public services operated by the private sector also release important data
  • Catalysts – nurturing a thriving ecosystem of data users, coders and application developers and incubating new, data-driven businesses
  • Users – using Open Data themselves to overcome the barriers to using data within government and innovating new ways to use the data they collect to improve public services and government efficiency.

Nevertheless, most of the evidence for big economic benefits for Open Data comes from the developed world. So on Wednesday the World Bank is holding an open seminar to examine critically “Can Open Data Boost Economic Growth and Prosperity” in developing countries. Please join us and join the debate!

Selected Readings on Sentiment Analysis


The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of sentiment analysis was originally published in 2014.

Sentiment Analysis is a field of Computer Science that uses techniques from natural language processing, computational linguistics, and machine learning to predict subjective meaning from text. The term opinion mining is often used interchangeably with Sentiment Analysis, although it is technically a subfield focusing on the extraction of opinions (the umbrella under which sentiment, evaluation, appraisal, attitude, and emotion all lie).

The rise of Web 2.0 and increased information flow has led to an increase in interest in Sentiment Analysis — especially as applied to social networks and media. Events causing large spikes in media — such as the 2012 Presidential Election Debates — are especially ripe for analysis. Such analyses raise a variety of implications for the future of crowd participation, elections, and governance.
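To make the basic idea concrete, here is a deliberately simple lexicon-based scorer; it is only an illustration, since the systems discussed in the readings below rely on trained models and far richer lexicons.

```python
# Toy lexicon-based sentiment scorer: counts positive and negative cue words.
POSITIVE = {"good", "great", "win", "strong", "agree", "support"}
NEGATIVE = {"bad", "weak", "lose", "wrong", "disagree", "oppose"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]; positive values lean positive."""
    tokens = [t.strip(".,!?;:").lower() for t in text.split()]
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

print(sentiment_score("The candidate made a strong case and I agree"))   #  1.0
print(sentiment_score("A weak answer, and wrong on the facts"))          # -1.0
```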


Annotated Selected Reading List (in alphabetical order)

Choi, Eunsol et al. “Hedge detection as a lens on framing in the GMO debates: a position paper.” Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics 13 Jul. 2012: 70-79. http://bit.ly/1wweftP

  • Understanding the ways in which participants in public discussions frame their arguments is important for understanding how public opinion is formed. This paper adopts the position that it is time for more computationally-oriented research on problems involving framing. In the interests of furthering that goal, the authors propose the following question: In the controversy regarding the use of genetically-modified organisms (GMOs) in agriculture, do pro- and anti-GMO articles differ in whether they choose to adopt a more “scientific” tone?
  • Prior work on the rhetoric and sociology of science suggests that hedging may distinguish popular-science text from text written by professional scientists for their colleagues. The paper proposes a detailed approach to studying whether hedge detection can be used to understand scientific framing in the GMO debates, and provides corpora to facilitate this study. Some of the preliminary analyses suggest that hedges occur less frequently in scientific discourse than in popular text, a finding that contradicts prior assertions in the literature.
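Purely as an illustration of what a crude hedge-frequency measure looks like (Choi et al. work with annotated corpora and trained classifiers, not a fixed word list), a minimal sketch:

```python
# Crude hedge-cue rate: fraction of tokens drawn from a small list of hedges.
HEDGES = {"may", "might", "could", "suggest", "suggests", "appears", "possibly", "likely"}

def hedge_rate(text: str) -> float:
    tokens = [t.strip(".,;:!?").lower() for t in text.split()]
    return sum(t in HEDGES for t in tokens) / max(len(tokens), 1)

print(hedge_rate("The results suggest the effect may be smaller than reported."))
print(hedge_rate("These crops are dangerous and the studies prove it."))
```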

Michael, Christina, Francesca Toni, and Krysia Broda. “Sentiment analysis for debates.” (Unpublished MSc thesis). Department of Computing, Imperial College London (2013). http://bit.ly/Wi86Xv

  • This project aims to expand on existing solutions used for automatic sentiment analysis on text in order to capture support/opposition and agreement/disagreement in debates. In addition, it looks at visualizing the classification results for enhancing the ease of understanding the debates and for showing underlying trends. Finally, it evaluates proposed techniques on an existing debate system for social networking.

Murakami, Akiko, and Rudy Raymond. “Support or oppose?: classifying positions in online debates from reply activities and opinion expressions.” Proceedings of the 23rd International Conference on Computational Linguistics: Posters 23 Aug. 2010: 869-875. https://bit.ly/2Eicfnm

  • In this paper, the authors propose a method for the task of identifying the general positions of users in online debates, i.e., support or oppose the main topic of an online debate, by exploiting local information in their remarks within the debate. An online debate is a forum where each user posts an opinion on a particular topic while other users state their positions by posting their remarks within the debate. The supporting or opposing remarks are made by directly replying to the opinion, or indirectly to other remarks (to express local agreement or disagreement), which makes the task of identifying users’ general positions difficult.
  • A prior study has shown that a link-based method, which completely ignores the content of the remarks, can achieve higher accuracy for the identification task than methods based solely on the contents of the remarks. In this paper, it is shown that incorporating the textual content of the remarks into the link-based method can yield higher accuracy in the identification task.

Pang, Bo, and Lillian Lee. “Opinion mining and sentiment analysis.” Foundations and trends in information retrieval 2.1-2 (2008): 1-135. http://bit.ly/UaCBwD

  • This survey covers techniques and approaches that promise to directly enable opinion-oriented information-seeking systems. Its focus is on methods that seek to address the new challenges raised by sentiment-aware applications, as compared to those that are already present in more traditional fact-based analysis. It includes material on summarization of evaluative text and on broader issues regarding privacy, manipulation, and economic impact that the development of opinion-oriented information-access services gives rise to. To facilitate future work, a discussion of available resources, benchmark datasets, and evaluation campaigns is also provided.

Ranade, Sarvesh et al. “Online debate summarization using topic directed sentiment analysis.” Proceedings of the Second International Workshop on Issues of Sentiment Discovery and Opinion Mining 11 Aug. 2013: 7. http://bit.ly/1nbKtLn

  • Social networking sites provide users a virtual community interaction platform to share their thoughts, life experiences and opinions. Online debate forums are one such platform, where people can take a stance and argue in support or opposition of debate topics. An important feature of such forums is that they are dynamic and grow rapidly. In such situations, effective opinion summarization approaches are needed so that readers need not go through the entire debate.
  • This paper aims to summarize online debates by extracting highly topic-relevant and sentiment-rich sentences. The proposed approach takes into account topic-relevant, document-relevant and sentiment-based features to capture topic-opinionated sentences. ROUGE (Recall-Oriented Understudy for Gisting Evaluation, a set of metrics and a software package for comparing an automatically produced summary or translation against human-produced ones) scores are used to evaluate the system. The system significantly outperforms several baseline systems and shows improvement over the state-of-the-art opinion summarization system. The results verify that topic-directed sentiment features are most important for generating effective debate summaries.
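For readers unfamiliar with the metric, a hand-rolled ROUGE-1 recall (unigram overlap with the reference summary) can be written in a few lines; the real ROUGE toolkit adds further variants such as ROUGE-2 and ROUGE-L and more careful tokenization.

```python
# Illustrative ROUGE-1 recall: share of reference unigrams recovered by the system summary.
def rouge1_recall(system_summary: str, reference_summary: str) -> float:
    sys_tokens = system_summary.lower().split()
    ref_tokens = reference_summary.lower().split()
    overlap = sum(min(sys_tokens.count(t), ref_tokens.count(t)) for t in set(ref_tokens))
    return overlap / len(ref_tokens) if ref_tokens else 0.0

reference = "the ban reduces pollution in the city"
system = "the ban cuts pollution in the city"
print(rouge1_recall(system, reference))  # 6 of 7 reference unigrams matched, about 0.86
```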

Schneider, Jodi. “Automated argumentation mining to the rescue? Envisioning argumentation and decision-making support for debates in open online collaboration communities.” http://bit.ly/1mi7ztx

  • Argumentation mining, a relatively new area of discourse analysis, involves automatically identifying and structuring arguments. Following a basic introduction to argumentation, the authors describe a new possible domain for argumentation mining: debates in open online collaboration communities.
  • Based on their experience with manual annotation of arguments in debates, the authors propose argumentation mining as the basis for three kinds of support tools: for authoring more persuasive arguments, finding weaknesses in others’ arguments, and summarizing a debate’s overall conclusions.

Recent progress in Open Data production and consumption


Examples from a governmental institute (SMHI) and a collaborative EU research project (SWITCH-ON), by Berit Arheimer and Esa Falkenroth: “The Swedish Meteorological and Hydrological Institute (SMHI) has a long tradition both in producing and consuming open data on a national, European and global scale. It is also promoting community building among water scientists in Europe by participating in and initiating collaborative projects. This presentation will exemplify the contemporary European movement imposed by the INSPIRE directive and the Open Data Strategy, by showing the progress in openness and shift in attitudes during the last decade when handling Research Data and Public Sector Information at a national European institute. Moreover, the presentation will inform about a recently started collaborative project (EU FP7 project No 603587) coordinated by SMHI and called SWITCH-ON http://water-switch-on.eu/. The project addresses water concerns and the currently untapped potential of open data for improved water management across the EU. The overall goal of the project is to make use of open data, and add value to society by repurposing and refining data from various sources. SWITCH-ON will establish new forms of water research and facilitate the development of new products and services based on principles of sharing and community building in the water society. The SWITCH-ON objectives are to use open data for implementing: 1) an innovative spatial information platform with open data tailored for direct water assessments, 2) an entirely new form of collaborative research for water-related sciences, 3) fourteen new operational products and services dedicated to appointed end-users, 4) new business and knowledge to inform individual and collective decisions in line with Europe’s smart growth and environmental objectives. The presentation will discuss challenges, progress and opportunities with the open data strategy, based on the experiences from working both at a governmental institute and being part of the global research community.”

When Technologies Combine, Amazing Innovation Happens


FastCoexist: “Innovation occurs both within fields, and in combinations of fields. It’s perhaps the latter that ends up being most groundbreaking. When people of disparate expertise, mindset and ideas work together, new possibilities pop up.
In a new report, the Institute for the Future argues that “technological change is increasingly driven by the combination and recombination of foundational elements.” So, when we think about the future, we need to consider not just fundamental advances (say, in computing, materials, bioscience) but also the intersections of these technologies.
The report uses combination-analysis in the form of a map. IFTF selects 13 “territories”–what it calls “frontiers of innovation”–and then examines the linkages and overlaps. The result is 20 “combinational forecasts.” “These are the big stories, hot spots that will shape the landscape of technology in the coming decade,” the report explains. “Each combinatorial forecast emerges from the intersection of multiple territories.”…

Quantified Experiences

Advances in brain-imaging techniques will bring new transparency to our thoughts and feelings. “Assigning precise measurements to feelings like pain through neurofeedback and other techniques could allow for comparison, modulation, and manipulation of these feelings,” the report says. “Direct measurement of our once-private thoughts and feelings can help us understand other people’s experience but will also present challenges regarding privacy and definition of norms.”…

Code Is The Law

The law enforcement of the future may increasingly rely on sensors and programmable devices. “Governance is shifting from reliance on individual responsibility and human policing toward a system of embedded protocols and automatic rule enforcement,” the report says. That in turn means greater power for programmers who are effectively laying down the parameters of the new relationship between government and governed….”

Privacy-Invading Technologies and Privacy by Design


New book by Demetrius Klitou: “Challenged by rapidly developing privacy-invading technologies (PITs), this book provides a convincing set of potential policy recommendations and practical solutions for safeguarding both privacy and security. It shows that benefits such as public security do not necessarily come at the expense of privacy and liberty overall.
Backed up by a comprehensive study of four specific PITs – Body scanners; Public space CCTV microphones; Public space CCTV loudspeakers; and Human-implantable microchips (RFID implants/GPS implants) – the author shows how laws that regulate the design and development of PITs may more effectively protect privacy than laws that only regulate data controllers and the use of such technologies. New rules and regulations should therefore incorporate fundamental privacy principles through what is known as ‘Privacy by Design’.
The numerous sources explored by the author provide a workable overview of the positions of academia, industry, government and relevant international organizations and NGOs.

  • Explores a relatively novel approach of protecting privacy
  • Offers a convincing set of potential policy recommendations and practical solutions
  • Provides a workable overview of the positions of academia, industry, government and relevant international organizations and NGOs”