What future do you want? Commission invites votes on what Europe could look like in 2050 to help steer future policy and research planning


European Commission – MEMO: “Vice-President Neelie Kroes, responsible for the Digital Agenda, is inviting people to join a voting and ranking process on 11 visions of what the world could look like in 20-40 years. The Commission is seeking views on living and learning, leisure and working in Europe in 2050, to steer long-term policy or research planning.
The visions have been gathered over the past year through the Futurium, an online debate platform that allows policymakers not only to consult citizens but to collaborate and “co-create” with them, and at events throughout Europe. Thousands of thinkers – from high school students to the Erasmus Students Network, from entrepreneurs and internet pioneers to philosophers and university professors – have engaged in a collective inquiry, a means of crowd-sourcing what our future world could look like.
Eleven over-arching themes have been drawn together from more than 200 ideas for the future. From today, everyone is invited to join the debate and offer their ratings and rankings of the various ideas. The results of the feedback will help the European Commission make better decisions about how to fund projects and ideas that both shape the future and get Europe ready for that future….
The Futurium is a foresight project run by DG CONNECT, based on an open source approach. It develops visions of society, technologies, attitudes and trends in 2040-2050 and uses these, for example, as potential blueprints for future policy choices or EU research and innovation funding priorities.
It is an online platform developed to capture emerging trends and enable interested citizens to co-create compelling visions of the futures that matter to them.

This crowd-sourcing approach provides useful insights on:

  1. vision: where people want to go, how desirable and likely are the visions posted on the platform;
  2. policy ideas: what should ideally be done to realise the futures; the possible impacts and plausibility of policy ideas;
  3. evidence: scientific and other evidence to support the visions and policy ideas.

….
Connecting policy making to people: in an increasingly connected society, online outreach and engagement is an essential response to the growing demand for participation, helping to capture new ideas and to broaden the legitimacy of the policy making process (IP/10/1296). The Futurium is an early prototype of a more general policy-making model described in the paper “The Futurium—a Foresight Platform for Evidence-Based and Participatory Policymaking”.

The Futurium was developed to lay the groundwork for future policy proposals which could be considered by the European Parliament and the European Commission under their new mandates as of 2014. But the Futurium’s open, flexible architecture makes it easily adaptable to any policy-making context where thinking ahead, stakeholder participation and scientific evidence are needed.”

Mirroring the real world in social media: Twitter, geolocation, and sentiment analysis


Paper by E Baucom, A Sanjari, X Liu, and M Chen as part of the proceedings of UnstructureNLP ’13: “In recent years social media has been used to characterize and predict real world events, and in this research we seek to investigate how closely Twitter mirrors the real world. Specifically, we wish to characterize the relationship between the language used on Twitter and the results of the 2011 NBA Playoff games. We hypothesize that the language used by Twitter users will be useful in classifying the users’ locations combined with the current status of which team is in the lead during the game. This is based on the common assumption that “fans” of a team have more positive sentiment and will accordingly use different language when their team is doing well. We investigate this hypothesis by labeling each tweet according to the location of the user along with the team that is in the lead at the time of the tweet. The hypothesized difference in language (as measured by tf-idf) should then have predictive power over the tweet labels. We find that indeed it does and we experiment further by adding semantic orientation (SO) information as part of the feature set. The SO does not offer much improvement over tf-idf alone. We discuss the relative strengths of the two types of features for our data.”
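To make the approach concrete, the sketch below (not the authors’ code; the tweets, labels, and label scheme are invented for illustration) shows the general shape of such a pipeline: tweets are labeled with a user location plus which team currently leads, tf-idf features are extracted, and a linear classifier predicts the label. Semantic-orientation scores, if used, would simply be appended as extra feature columns.

```python
# Minimal sketch (not the paper's implementation) of labeling tweets by
# "user location + team currently in the lead" and classifying them with
# tf-idf features. All data below is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "what a block, our defense wins games",
    "refs are blind tonight, unbelievable",
    "huge three, we are rolling now",
    "cannot watch this, too many turnovers",
]
# Hypothetical labels: <user city>_<which team is leading>
labels = ["MIA_lead_home", "DAL_lead_home", "DAL_lead_away", "MIA_lead_away"]

# tf-idf turns each tweet into a weighted bag-of-words vector;
# logistic regression then learns which wording goes with which label.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(tweets, labels)

print(model.predict(["that was a terrible call by the refs"]))
```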

Open government and conflicts with public trust and privacy: Recent research ideas


Article by John Wihbey:  “Since the Progressive Era, ideas about the benefits of government openness — crystallized by Justice Brandeis’s famous phrase about the disinfectant qualities of “sunlight” — have steadily grown more popular and prevalent. Post-Watergate reforms further embodied these ideas. Now, notions of “open government” and dramatically heightened levels of transparency have taken hold as zero-cost digital dissemination has become a reality. Many have advocated switching the “default” of government institutions so information and data are no longer available just “on demand” but rather are publicized as a matter of course in usable digital form.
As academic researchers point out, we don’t yet have a great deal of long-term, valid data for many of the experiments in this area to weigh civic outcomes and the overall advance of democracy. Anecdotally, though, it seems that more problems — from potholes to corruption — are being surfaced, enabling greater accountability. This “new fuel” of data also creates opportunities for businesses and organizations; and so-called “Big Data” projects frequently rely on large government datasets, as do “news apps.”
But are there other logical limits to open government in the digital age? If so, what are the rationales for these limits? And what are the latest academic insights in this area?
Most open-records laws, including the federal Freedom of Information Act, still provide exceptions that allow public institutions to guard information that might interfere with pending legal proceedings or jeopardize national security. In addition, the internal decision-making and deliberation processes of government agencies as well as documents related to personnel matters are frequently off limits. These exceptions remain largely untouched in the digital age (notwithstanding extralegal actions by WikiLeaks and Edward Snowden, or confidential sources who disclose things to the press). At a practical level, experts say that the functioning of FOIA laws is still uneven, and some states continue to threaten rollbacks.
Limits of transparency?
A key moment in the rethinking of openness came in 2009, when Harvard University legal scholar Lawrence Lessig published an essay in The New Republic titled “Against Transparency.” In it, Lessig — a well-known advocate for greater access to information and knowledge of many kinds — warned that transparency in and of itself could lead to diminished trust in government and must be tied to policies that can also rebuild public confidence in democratic institutions.
In recent years, more political groups have begun leveraging open records laws as a kind of tool to go after opponents, a phenomenon that has even touched the public university community, which is typically subject to disclosure laws….

Privacy and openness
If there is a tension between transparency and public trust, there is also an uneasy balance between government accountability and privacy. A 2013 paper in the American Review of Public Administration, “Public Pay Disclosure in State Government: An Ethical Analysis,” examines a standard question of disclosure faced in every state: How much should even low-level public servants be subject to personal scrutiny about their salaries? The researchers, James S. Bowman and Kelly A. Stevens of Florida State University, evaluate issues of transparency based on three competing values: rules (justice or fairness), results (what does the greatest good), and virtue (promoting integrity)…”

When Nudges Fail: Slippery Defaults


New paper by Lauren E. Willis: “Inspired by the success of “automatic enrollment” in increasing participation in defined contribution retirement savings plans, policymakers have put similar policy defaults in place in a variety of other contexts, from checking account overdraft coverage to home-mortgage escrows. Internet privacy appears poised to be the next arena. But how broadly applicable are the results obtained in the retirement savings context? Evidence from other contexts indicates two problems with this approach: the defaults put in place by the law are not always sticky, and the people who opt out may be those who would benefit the most from the default. Examining the new default for consumer checking account overdraft coverage reveals that firms can systematically undermine each of the mechanisms that might otherwise operate to make defaults sticky. Comparing the retirement-savings default to the overdraft default, four boundary conditions on the use of defaults as a policy tool are apparent: policy defaults will not be sticky when (1) motivated firms oppose them, (2) these firms have access to the consumer, (3) consumers find the decision environment confusing, and (4) consumer preferences are uncertain. Due to constitutional and institutional constraints, government regulation of the libertarian-paternalism variety is unlikely to be capable of overcoming these bounds. Therefore, policy defaults intended to protect individuals when firms have the motivation and means to move consumers out of the default are unlikely to be effective unless accompanied by substantive regulation. Moreover, the same is likely to be true of “nudges” more generally, when motivated firms oppose them.”

Selected Readings on Linked Data and the Semantic Web


The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of linked data and the semantic web was originally published in 2013.

Linked Data and the Semantic Web movement are seeking to make our growing body of digital knowledge and information more interconnected, searchable, machine-readable and useful. First introduced by the W3C, Linked Data is defined by Sir Tim Berners-Lee, Christian Bizer and Tom Heath as “data published to the Web in such a way that it is machine-readable, its meaning is explicitly defined, it is linked to other external data sets, and can in turn be linked to from external data sets.” In other words, Linked Data and the Semantic Web seek to do for data what the Web did for documents. Additionally, the evolving capability of linking together different forms of data is fueling the potentially transformative rise of social machines – “processes in which the people do the creative work and the machine does the administration.”
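As a concrete illustration of those principles (machine-readable statements, explicitly defined meaning, links out to external data sets), the short Python sketch below uses the rdflib library to publish a few RDF triples and link a local resource to DBpedia. The URIs and vocabulary choices are purely illustrative and are not drawn from the readings that follow.

```python
# Illustrative Linked Data sketch using rdflib; the example.org URIs are made up.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/dataset/")
g = Graph()

city = EX["Southampton"]
g.add((city, RDF.type, EX["City"]))                # typed, machine-readable statement
g.add((city, RDFS.label, Literal("Southampton")))  # human-readable label
# The "linked" part: owl:sameAs points at the same entity in an external
# data set (DBpedia), so data held elsewhere can be discovered and merged.
g.add((city, OWL.sameAs, URIRef("http://dbpedia.org/resource/Southampton")))

print(g.serialize(format="turtle"))
```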

Annotated Selected Reading List (in alphabetical order)

Alani, Harith, David Dupplaw, John Sheridan, Kieron O’Hara, John Darlington, Nigel Shadbolt, and Carol Tullo. “Unlocking the Potential of Public Sector Information with Semantic Web Technology,” 2007. http://bit.ly/17fMbCt.

  • This paper explores the potential of using Semantic Web technology to increase the value of public sector information already in existence.
  • The authors note that, while “[g]overnments often hold very rich data and whilst much of this information is published and available for re-use by others, it is often trapped by poor data structures, locked up in legacy data formats or in fragmented databases. One of the great benefits that Semantic Web (SW) technology offers is facilitating the large scale integration and sharing of distributed data sources.”
  • They also argue that Linked Data and the Semantic Web are growing in use and visibility in other sectors, but government has been slower to adapt: “The adoption of Semantic Web technology to allow for more efficient use of data in order to add value is becoming more common where efficiency and value-added are important parameters, for example in business and science. However, in the field of government there are other parameters to be taken into account (e.g. confidentiality), and the cost-benefit analysis is more complex.” In spite of that complexity, the authors’ work “was intended to show that SW technology could be valuable in the governmental context.”

Berners-Lee, Tim, James Hendler, and Ora Lassila. “The Semantic Web.” Scientific American 284, no. 5 (2001): 28–37. http://bit.ly/Hhp9AZ.

  • In this article, Sir Tim Berners-Lee, James Hendler and Ora Lassila introduce the Semantic Web, “a new form of Web content that is meaningful to computers [and] will unleash a revolution of new possibilities.”
  • The authors argue that the evolution of linked data and the Semantic Web “lets anyone express new concepts that they invent with minimal effort. Its unifying logical language will enable these concepts to be progressively linked into a universal Web. This structure will open up the knowledge and workings of humankind to meaningful analysis by software agents, providing a new class of tools by which we can live, work and learn together.”

Bizer, Christian, Tom Heath, and Tim Berners-Lee. “Linked Data – The Story So Far.” International Journal on Semantic Web and Information Systems (IJSWIS) 5, no. 3 (2009): 1–22. http://bit.ly/HedpPO.

  • In this paper, the authors take stock of Linked Data’s challenges, potential and successes close to a decade after its introduction. They build their argument for increasingly linked data by referring to the incredible value creation of the Web: “Despite the inarguable benefits the Web provides, until recently the same principles that enabled the Web of documents to flourish have not been applied to data.”
  • The authors expect that “Linked Data will enable a significant evolutionary step in leading the Web to its full potential” if a number of research challenges can be adequately addressed, both technical, like interaction paradigms and data fusion; and non-technical, like licensing, quality and privacy.

Ding, Li, Dominic Difranzo, Sarah Magidson, Deborah L. Mcguinness, and Jim Hendler. Data-Gov Wiki: Towards Linked Government Data, n.d. http://bit.ly/1h3ATHz.

  • In this paper, the authors “investigate the role of Semantic Web technologies in converting, enhancing and using linked government data” in the context of Data-gov Wiki, a project that attempts to integrate datasets found at Data.gov into the Linking Open Data (LOD) cloud.
  • The paper features discussion and “practical strategies” based on four key issue areas: Making Government Data Linkable, Linking Government Data, Supporting the Use of Linked Government Data and Preserving Knowledge Provenance.

Kalampokis, Evangelos, Michael Hausenblas, and Konstantinos Tarabanis. “Combining Social and Government Open Data for Participatory Decision-Making.” In Electronic Participation, edited by Efthimios Tambouris, Ann Macintosh, and Hans de Bruijn, 36–47. Lecture Notes in Computer Science 6847. Springer Berlin Heidelberg, 2011. http://bit.ly/17hsj4a.

  • This paper presents a proposed data architecture for “supporting participatory decision-making based on the integration and analysis of social and government data.” The authors believe that their approach will “(i) allow decision makers to understand and predict public opinion and reaction about specific decisions; and (ii) enable citizens to inadvertently contribute in decision-making.”
  • The proposed approach, “based on the use of the linked data paradigm,” draws on subjective social data and objective government data in two phases: Data Collection and Filtering and Data Analysis. “The aim of the former phase is to narrow social data based on criteria such as the topic of the decision and the target group that is affected by the decision. The aim of the latter phase is to predict public opinion and reactions using independent variables related to both subjective social and objective government data.”

Rady, Kaiser. Publishing the Public Sector Legal Information in the Era of the Semantic Web. SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, 2012. http://bit.ly/17fMiOp.

  • Following an EU directive calling for the release of public sector information by member states, this study examines the “uniqueness” of creating and publishing primary legal source documents on the web and highlights “the most recent technological strategy used to structure, link and publish data online (the Semantic Web).”
  • Rady argues for public sector legal information to be published as “open-linked-data in line with the new approach for the web.” He believes that if data is created and published in this form, “the data will be more independent from devices and applications and could be considered as a component of [a] big information system. That because, it will be well-structured, classified and has the ability to be used and utilized in various combinations to satisfy specific user requirements.”

Shadbolt, Nigel, Kieron O’Hara, Tim Berners-Lee, Nicholas Gibbins, Hugh Glaser, Wendy Hall, and m.c. schraefel. “Linked Open Government Data: Lessons from Data.gov.uk.” IEEE Intelligent Systems 27, no. 3 (May 2012): 16–24. http://bit.ly/1cgdH6R.

  • In this paper, the authors view Open Government Data (OGD) as an “opportunity and a challenge for the LDW [Linked Data Web]. The opportunity is to grow by linking with PSI [Public Sector Information] – real-world, useful information with good provenance. The challenge is to manage the sudden influx of heterogeneous data, often with minimal semantics and structure, tailored to highly specific task contexts.”
  • As the linking of OGD continues, the authors argue that, “Releasing OGD is not solely a technical problem, although it presents technical challenges. OGD is not a rigid government IT specification, but it demands productive dialogue between data providers, users, and developers. We should expect a ‘perpetual beta,’ in which best practice, technical development, innovative use of data, and citizen-centric politics combine to drive data-release programs.”
  • Despite challenges, the authors believe that, “Integrating OGD onto the LDW will vastly increase the scope and richness of the LDW. A reciprocal benefit is that the LDW will provide additional resources and context to enrich OGD. Here, we see the network effect in action, with resources mutually adding value to one another.”

Vitale, Michael, Anni Rowland-Campbell, Valentina Cardo, and Peter Thompson. “The Implications of Government as a ‘Social Machine’ for Making and Implementing Market-based Policy.” Intersticia, September 2013. http://bit.ly/HhMzqD.

  • This report from the Australia and New Zealand School of Government (ANZSOG) explores the concept of government as a social machine. The authors draw on the definition of a social machine proposed by Sir Nigel Shadbolt et al. – a system where “human and computational intelligence coalesce in order to achieve a given purpose” – to describe a “new approach to the relationship between citizens and government, facilitated by technological systems which are increasingly becoming intuitive, intelligent and ‘social.'”
  • The authors argue that beyond providing more and varied data to government, the evolving concept of government as a social machine has the potential to alter power dynamics, address the growing lack of trust in public institutions and facilitate greater public involvement in policy-making.

Seizing the data opportunity: UK data capability strategy


New UK Policy Paper by the Department for Business, Innovation & Skills: “In the information economy, the ability to handle and analyse data is essential for the UK’s competitive advantage and business transformation. The volume, velocity and variety of data being created and analysed globally is rising every day, and using data intelligently has the potential to transform public sector organisations, drive research and development, and enable market-changing products and services. The social and economic potential is significant, and the UK is well placed to compete in the global market for data analytics. Through this strategy, the government aims to place the UK at the forefront of this process by building our capability to exploit data for the benefit of citizens, business, and academia. This is our action plan for making the UK a data success story.

Working in partnership with business and academia, the government has developed a shared vision for the UK’s data capability, with the aim of making the UK a world leader in extracting insight and value from data for the benefit of citizens and consumers, business and academia, the public and the private sectors. The Information Economy Council and the E-infrastructure Leadership Council will oversee delivery of the actions in this strategy, and continue to develop additional plans to support this vision.
Data capability: This strategy focuses on three overarching aspects to data capability. The first is human capital – a skilled workforce, and data-confident citizens. The second covers the tools and infrastructure which are available to store and analyse data. The third is data itself as an enabler – data capability is underpinned by the ability of consumers, businesses and academia to access and share data appropriately…”

 

Peer Production: A Modality of Collective Intelligence


New paper by Yochai Benkler, Aaron Shaw and Benjamin Mako Hill: “Peer production is the most significant organizational innovation that has emerged from Internet-mediated social practice and among the most visible and important examples of collective intelligence. Following Benkler, we define peer production as a form of open creation and sharing performed by groups online that: (1) sets and executes goals in a decentralized manner; (2) harnesses a diverse range of participant motivations, particularly non-monetary motivations; and (3) separates governance and management relations from exclusive forms of property and relational contracts (i.e., projects are governed as open commons or common property regimes and organizational governance utilizes combinations of participatory, meritocratic and charismatic, rather than proprietary or contractual, models). For early scholars of peer production, the phenomenon was both important and confounding for its ability to generate high quality work products in the absence of formal hierarchies and monetary incentives. However, as peer production has become increasingly established in society, the economy, and scholarship, merely describing the success of some peer production projects has become less useful. In recent years, a second wave of scholarship has emerged to challenge assumptions in earlier work; probe nuances glossed over by earlier framings of the phenomena; and identify the necessary dynamics, structures, and conditions for peer production success.
Peer production includes many of the largest and most important collaborative communities on the Internet….
Much of this academic interest in peer production stemmed from the fact that the phenomena resisted straightforward explanations in terms of extant theories of the organization and production of functional information goods like software or encyclopedias. Participants in peer production projects join and contribute valuable resources without the hierarchical bureaucracies or strong leadership structures common to state agencies or firms, and in the absence of clear financial incentives or rewards. As a result, foundational research on peer production was focused on (1) documenting and explaining the organization and governance of peer production communities, (2) understanding the motivation of contributors to peer production, and (3) establishing and evaluating the quality of peer production’s outputs.
In the rest of this chapter, we describe the development of the academic literature on peer production in these three areas – organization, motivation, and quality.”

Making government simpler is complicated


Mike Konczal in The Washington Post: “Here’s something a politician would never say: “I’m in favor of complex regulations.” But what would the opposite mean? What would it mean to have “simple” regulations?

There are two definitions of "simple" that have come to dominate liberal conversations about government. One is the idea that we should make use of "nudges" in regulation. The other is the idea that we should avoid "kludges." As it turns out, however, these two definitions conflict with each other — and the battle between them will dominate conversations about the state in the years ahead.

The case for “nudges”

The first definition of a “simple” regulation is one emphasized in Cass Sunstein’s recent book titled Simpler: The Future of Government (also see here). A simple policy is one that simply “nudges” people into one choice or another using a variety of default rules, disclosure requirements, and other market structures. Think, for instance, of rules that require fast-food restaurants to post calories on their menus, or a mortgage that has certain terms clearly marked in disclosures.

These sorts of regulations are deemed “choice preserving.” Consumers are still allowed to buy unhealthy fast-food meals or sign up for mortgages they can’t reasonably afford. The regulations are just there to inform people about their choices. These rules are designed to keep the market “free,” where all possibilities are ultimately possible, although there are rules to encourage certain outcomes.
In his book, however, Sunstein adds that there’s another very different way to understand the term “simple.” What most people mean when they think of simple regulations is a rule that is “simple to follow.” Usually a rule is simple to follow because it outright excludes certain possibilities and thus ensures others. Which means, by definition, it limits certain choices.

The case against “kludges”
This second definition of simple plays a key role in political scientist Steve Teles’ excellent recent essay, “Kludgeocracy in America.” For Teles, a “kludge” is a “clumsy but temporarily effective” fix for a policy problem. (The term comes from computer science.) These kludges tend to pile up over time, making government cumbersome and inefficient overall.
Teles focuses on several ways that kludges are introduced into policy, with a particularly sharp focus on overlapping jurisdictions and the related mess of federal and state overlap in programs. But, without specifically invoking it, he also suggests that a reliance on “nudge” regulations can lead to more kludges.
After all, a non-kludge policy proposal is one that will be simple to follow and will clearly cause a certain outcome, with an obvious causality chain. This is in contrast to a web of “nudges” and incentives designed to try to guide certain outcomes.

Why “nudges” aren’t always simpler
The distinction between the two is clear if we take a specific example core to both definitions: retirement security.
For Teles, “one of the often overlooked benefits of the Social Security program… is that recipients automatically have taxes taken out of their paychecks, and, then without much effort on their part, checks begin to appear upon retirement. It’s simple and direct. By contrast, 401(k) retirement accounts… require enormous investments of time, effort, and stress to manage responsibly.”

Yet 401(k)s are the ultimate fantasy laboratory for nudge enthusiasts. A whole cottage industry has grown up around figuring out ways to default people into certain contributions, designing the architecture of investment choices, and trying to effortlessly and painlessly guide people into certain savings.
Each approach emphasizes different things. If you want to focus your energy on making people better consumers and market participants, expanding our government’s resources and energy into 401(k)s is a good choice. If you want to focus on providing retirement security directly, expanding Social Security is a better choice.
The first is “simple” in that it doesn’t exclude any possibility but encourages market choices. The second is “simple” in that it is easy to follow, and the result is simple as well: a certain amount of security in old age is provided directly. This second approach understands the government as playing a role in stopping certain outcomes, and providing for the opposite of those outcomes, directly….

Why it’s hard to create “simple” regulations
Like all supposed binaries, this is really a continuum. Taxes, for instance, sit somewhere in the middle of the two definitions of “simple.” They tend to preserve the market as it is but raise (or lower) the price of certain goods, influencing choices.
And reforms and regulations are often most effective when there’s a combination of these two types of “simple” rules.
Consider an important new paper, “Regulating Consumer Financial Products: Evidence from Credit Cards,” by Sumit Agarwal, Souphala Chomsisengphet, Neale Mahoney and Johannes Stroebel. The authors analyze the CARD Act of 2009, which regulated credit cards. They found that the nudge-type disclosure rules “increased the number of account holders making the 36-month payment value by 0.5 percentage points.” However, more direct regulations on fees had an even bigger effect, saving U.S. consumers $20.8 billion per year with no notable reduction in credit access…..
The balance between these two approaches of making regulations simple will be front and center as liberals debate the future of government, whether they’re trying to pull back on the “submerged state” or consider the implications for privacy. The debate over the best way for government to be simple is still far from over.”

Google’s flu fail shows the problem with big data


Adam Kucharski in The Conversation: “When people talk about ‘big data’, there is an oft-quoted example: a proposed public health tool called Google Flu Trends. It has become something of a pin-up for the big data movement, but it might not be as effective as many claim.
The idea behind big data is that large amounts of information can help us do things which smaller volumes cannot. Google first outlined the Flu Trends approach in a 2008 paper in the journal Nature. Rather than relying on disease surveillance used by the US Centers for Disease Control and Prevention (CDC) – such as visits to doctors and lab tests – the authors suggested it would be possible to predict epidemics through Google searches. When suffering from flu, many Americans will search for information related to their condition….
Between 2003 and 2008, flu epidemics in the US had been strongly seasonal, appearing each winter. However, in 2009, the first cases (as reported by the CDC) started around Easter. Flu Trends had already made its predictions when the CDC data was published, but it turned out that the Google model didn’t match reality. It had substantially underestimated the size of the initial outbreak.
The problem was that Flu Trends could only measure what people search for; it didn’t analyse why they were searching for those words. By removing human input, and letting the raw data do the work, the model had to make its predictions using only search queries from the previous handful of years. Although those 45 terms matched the regular seasonal outbreaks from 2003–8, they didn’t reflect the pandemic that appeared in 2009.
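Google has not released the Flu Trends code, but the basic mechanism described here – fitting past search-query volumes to official influenza-like-illness (ILI) rates and then projecting forward – can be sketched with an ordinary regression. The data below are synthetic and the model is deliberately naive; the point is only that a model trained purely on historical query/ILI correlations pushes any unusual query pattern (an off-season pandemic, or a news-driven search spike) through the same fitted coefficients, so its estimates can be badly off in either direction.

```python
# Synthetic sketch of a Flu Trends-style model (not Google's actual method):
# regress weekly ILI rates on search-term frequencies from past seasons,
# then apply the fitted model to a week unlike anything in the training data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
weeks, n_terms = 5 * 52, 45   # five seasons, 45 flu-related search terms

# Seasonal signal: query volume and ILI both peak each winter in training data.
season = np.clip(np.sin(np.linspace(0, 5 * 2 * np.pi, weeks)), 0, None)
queries = season[:, None] * rng.uniform(0.5, 1.5, n_terms) \
          + rng.normal(0, 0.05, (weeks, n_terms))
ili = 2.0 * season + rng.normal(0, 0.1, weeks)

model = LinearRegression().fit(queries, ili)

# A week whose searches are driven by something the model never saw
# (e.g. pandemic news coverage) is still mapped straight to an ILI estimate.
unusual_week = rng.uniform(0.8, 1.2, (1, n_terms))
print("Estimated ILI for the unusual week:", model.predict(unusual_week)[0])
```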
Six months after the pandemic started, Google – who now had the benefit of hindsight – updated their model so that it matched the 2009 CDC data. Despite these changes, the updated version of Flu Trends ran into difficulties again last winter, when it overestimated the size of the influenza epidemic in New York State. The incidents in 2009 and 2012 raised the question of how good Flu Trends is at predicting future epidemics, as opposed to merely finding patterns in past data.
In a new analysis, published in the journal PLOS Computational Biology, US researchers report that there are “substantial errors in Google Flu Trends estimates of influenza timing and intensity”. This is based on a comparison of Google Flu Trends predictions and the actual epidemic data at the national, regional and local level between 2003 and 2013.
Even when search behaviour was correlated with influenza cases, the model sometimes misestimated important public health metrics such as peak outbreak size and cumulative cases. The predictions were particularly wide of the mark in 2009 and 2012:

[Figure: Original and updated Google Flu Trends (GFT) model compared with CDC influenza-like illness (ILI) data. Source: PLOS Computational Biology 9:10]

Although they criticised certain aspects of the Flu Trends model, the researchers think that monitoring internet search queries might yet prove valuable, especially if it were linked with other surveillance and prediction methods.
Other researchers have also suggested that other sources of digital data – from Twitter feeds to mobile phone GPS – have the potential to be useful tools for studying epidemics. As well as helping to analyse outbreaks, such methods could allow researchers to analyse human movement and the spread of public health information (or misinformation).
Although much attention has been given to web-based tools, there is another type of big data that is already having a huge impact on disease research. Genome sequencing is enabling researchers to piece together how diseases transmit and where they might come from. Sequence data can even reveal the existence of a new disease variant: earlier this week, researchers announced a new type of dengue fever virus….”

Making regulations easier to use


From the Consumer Financial Protection Bureau (CFPB): “We write rules to protect consumers, but what actually protects consumers is people: advocates knowing what rights people have; government agencies’ supervision and enforcement staff having a clear view of what potential violations to look out for; and responsible industry employees following the rules.
Today, we’re releasing a new open source tool we built, eRegulations, to help make regulations easier to understand. Check it out: consumerfinance.gov/eregulations
One thing that’s become clear during our two years as an agency is that federal regulations can be difficult to navigate. Finding answers to questions about a regulation is hard. Frequently, it means connecting information from different places, spread throughout a regulation, often separated by dozens or even hundreds of pages. As a result, we found people were trying to understand regulations by using paper editions, several different online tools to piece together the relevant information, or even paid subscription services that still don’t make things easy, and are expensive.

Here’s hoping that even more people who work with regulations will have the same reaction as this member of our bank supervision team:
 “The eRegulations site has been very helpful to my work. It has become my go-to resource on Reg. E and the Official Interpretations. I use it several times a week in the course of completing regulatory compliance evaluations. My prior preference was to use the printed book or e-CFR, but I’ve found the eRegulations (tool) to be easier to read, search, and navigate than the printed book, and more efficient than the e-CFR because of the way eRegs incorporates the commentary.”
New rules about international money transfers – also called “remittances” – in Regulation E will take effect on October 28, 2013, and you can now use the eRegulations tool to check out the regulation.

We need your help

There are two ways we’d love your help with our work to make regulations easier to use. First, the tool is a work in progress.  If you have comments or suggestions, please write to us at CFPB_eRegs_Team@cfpb.gov. We read every message and would love to hear what you think.
Second, the tool is open source, so we’d love for other agencies, developers, or groups to use it and adapt it. And remember, the first time a citizen developer suggested a change to our open source software, it was to fix a typo (thanks again, by the way!), so no contribution is too small.”