The Missing Information That Municipal-Bond Investors Need


Marc Joffe at Governing: “…There are many reasons why the municipal market lacks sophistication in this area, but a big part of the problem has been a lack of free (or even low-cost) financial-statement data. In this regard, some strides are being made. First, the 2009 launch by the Municipal Securities Rulemaking Board (MSRB) of its Electronic Municipal Market Access (EMMA) system gave investors a one-stop shop for municipal financial disclosure. But as the Securities and Exchange Commission (SEC) observed recently, a large number of municipal-bond issuers have been posting their statements late or not at all. The commission’s Municipal Continuing Disclosure Cooperation Initiative has greatly increased the number of statements on EMMA. Finally, late this year the Census Bureau is expected to begin posting federal single-audit submissions online. These packages include the same basic financial statements typically found in municipal market disclosure.

But the simple publication of thousands of voluminous PDFs does not provide the degree of transparency needed to raise the level of municipal-bond-market financial literacy. The vast majority of investors and analysts lack the patience and/or technical skills needed to extract the valuable needles of insight from this haystack of disclosure.

Investors in corporate securities do not face these difficulties. For the last 20 years, company financial reports have been available in textual form on the SEC’s Electronic Data Gathering, Analysis and Retrieval system. As a result, corporate financial-statement data is freely available in convenient forms around the Internet: Yahoo Finance, MarketWatch, Morningstar and your broker’s website are just a few of the places you can find this data.

So while corporate investors can readily compare the financial statistics of a safe company like Apple to an insolvent one like Radio Shack, municipal investors cannot easily perform the same exercise for Dallas and Detroit.

It wasn’t always this way. Between 1909 and 1931, the Census Bureau published an annual volume entitled “Financial Statistics of Cities Having a Population of Over 30,000.” The final edition — available at the St. Louis Federal Reserve’s website — covered 311 American cities and included hundreds of revenue, expenditure, asset and liability data points for each municipality. Unfortunately, ever since 1931, Census financial data on local governments has become less comprehensive, less timely and less comprehensible to the lay user.

In the years after 1931, we lost the understanding that comparative local-government financial statistics were a public good. While we might look to the federal government to once again offer this information in today’s era of heightened need, it may be challenged to take on this role in an era of sequesters.

But while we may need the private sector to provide this public good, the federal government can greatly reduce the cost of compiling a local-government financial-statement database. The SEC has required companies to file financial statements in text form — rather than via PDF — since the mid-1990s. In 2008, the SEC further standardized company financial reporting by requiring firms to file their statements in the form of eXtensible Business Reporting Language (XBRL), which imposes a consistent format on all filings. To date, neither the SEC nor the MSRB has pursued a similar course with respect to municipal financial disclosure.
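To illustrate why tagged XBRL filings are easier to analyze than PDFs: because each financial fact is a machine-readable XML element, standard tooling can extract and compute over it directly. The sketch below uses an invented instance fragment; the namespace URIs, concept names and dollar values are hypothetical stand-ins, not real filing data.

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical XBRL instance fragment: each fact is a tagged,
# machine-readable value, unlike a number buried in a PDF table.
XBRL_SAMPLE = """<?xml version="1.0"?>
<xbrl xmlns="http://www.xbrl.org/2003/instance"
      xmlns:us-gaap="http://fasb.org/us-gaap/2023">
  <us-gaap:Assets contextRef="FY2023" unitRef="USD">352583000000</us-gaap:Assets>
  <us-gaap:Liabilities contextRef="FY2023" unitRef="USD">290437000000</us-gaap:Liabilities>
</xbrl>"""

GAAP = "{http://fasb.org/us-gaap/2023}"

def extract_facts(instance_xml):
    """Return {concept: numeric value} for every us-gaap fact in the instance."""
    root = ET.fromstring(instance_xml)
    facts = {}
    for el in root:
        if el.tag.startswith(GAAP):
            facts[el.tag[len(GAAP):]] = float(el.text)
    return facts

facts = extract_facts(XBRL_SAMPLE)
net_position = facts["Assets"] - facts["Liabilities"]  # trivially computable once tagged
```

A comparable extraction from thousands of municipal PDF filings would require manual keying or error-prone text scraping, which is the cost the article argues standardization would eliminate.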

Next week, the Data Transparency Coalition, a group that advocates for the use of XBRL, will hold a Financial Regulation Summit featuring numerous congressional representatives and regulators. Perhaps the extension of XBRL to the municipal-bond market can find its way onto the agenda….(More)

Gamification harnesses the power of games to motivate


Kevin Werbach at the Conversation: “Walk through any public area and you’ll see people glued to their phones, playing mobile games like Game of War and Candy Crush Saga. They aren’t alone. 59% of Americans play video games, and contrary to stereotypes, 48% of gamers are women. The US$100 billion video game industry is among the least-appreciated business phenomena in the world today.

But this isn’t an article about video games. It’s about where innovative organizations are applying the techniques that make those games so powerfully engaging: everywhere else.

Gamification is the perhaps-unfortunate name for the growing practice of applying structural elements, design patterns, and psychological insights from game design to business, education, health, marketing, crowdsourcing and other fields. Over the past four years, gamification has gone through a cycle of (over-)hype and (overblown) disappointment common for technological trends. Yet if you look carefully, you’ll see it everywhere.

Tapping into pieces of games

Gamification involves two primary mechanisms. The first is to take design structures from games, such as levels, achievements, points, and leaderboards — in my book, For the Win, my co-author and I label them “game elements” — and incorporate them into activities. The second, more subtle but ultimately more effective, is to mine the rich vein of design techniques that game designers have developed over many years. Good games pull you in and carry you through a journey that remains engaging, using an evolving balance of challenges and a stream of well-crafted, actionable feedback.

Many enterprises now use tools built on top of Salesforce.com’s customer relationship management platform to motivate employees through competitions, points and leaderboards. Online learning platforms such as Khan Academy commonly challenge students to “level up” by sprinkling game elements throughout the process. Even games are now gamified: Microsoft’s Xbox One and Sony’s PS4 consoles offer a meta-layer of achievements and trophies to promote greater game-play.

The differences between a gamified system that incorporates good design principles and one that doesn’t aren’t always obvious on the surface. They show up in the results.

Duolingo is an online language-learning app. It’s pervasively and thoughtfully gamified: points, levels, achievements, bonuses for “streaks,” visual progression indicators, even a virtual currency with various ways to spend it. The well-integrated gamification is a major differentiator for Duolingo, which happens to be the most successful tool of its kind. With over 60 million registered users, it teaches languages to more people than the entire US public school system.

Most of the initial high-profile cases of gamification were for marketing: for example, USA Network ramped up its engagement numbers with web-based gamified challenges for fans of its shows, and Samsung gave points and badges for learning about its products.

Soon it became clear that other applications were equally promising. Today, organizations are using gamification to enhance employee performance, promote health and wellness activities, improve retention in online learning, help kids with cancer endure their treatment regimen, and teach people how to code, to name just a few examples. Gamification has potential anywhere that motivation is an important element of success.

Gamification works because our responses to games are deeply hard-wired into our psychology. Game design techniques can activate our innate desires to recognize patterns, solve puzzles, master challenges, collaborate with others, and be in the driver’s seat when experiencing the world around us. They can also create a safe space for experimentation and learning. After all, why not try something new when you know that even if you fail, you’ll get another life?…(More)

What Your Tweets Say About You


at the New Yorker: “How much can your tweets reveal about you? Judging by the last nine hundred and seventy-two words that I used on Twitter, I’m about average when it comes to feeling upbeat and being personable, and I’m less likely than most people to be depressed or angry. That, at least, is the snapshot provided by AnalyzeWords, one of the latest creations from James Pennebaker, a psychologist at the University of Texas who studies how language relates to well-being and personality. One of Pennebaker’s most famous projects is a computer program called Linguistic Inquiry and Word Count (L.I.W.C.), which looks at the words we use, and in what frequency and context, and uses this information to gauge our psychological states and various aspects of our personality….

Take a study, out last month, from a group of researchers based at the University of Pennsylvania. The psychologist Johannes Eichstaedt and his colleagues analyzed eight hundred and twenty-six million tweets across fourteen hundred American counties. (The counties contained close to ninety per cent of the U.S. population.) Then, using lists of words—some developed by Pennebaker, others by Eichstaedt’s team—that can be reliably associated with anger, anxiety, social engagement, and positive and negative emotions, they gave each county an emotional profile. Finally, they asked a simple question: Could those profiles help determine which counties were likely to have more deaths from heart disease?
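The core of this kind of dictionary-based scoring is simple to sketch: count how often a collection of tweets uses words from each category list. The mini-lexicons and tweets below are invented toy examples; the real LIWC-style lists are far larger and psychometrically validated.

```python
from collections import Counter
import re

# Hypothetical mini-lexicons in the spirit of LIWC-style category word lists.
LEXICONS = {
    "anger": {"hate", "annoyed", "furious", "jealous"},
    "positive": {"great", "wonderful", "friends", "hope"},
}

def emotion_profile(tweets):
    """Return the share of words in each emotion category across all tweets."""
    words = [w for t in tweets for w in re.findall(r"[a-z']+", t.lower())]
    total = len(words) or 1  # avoid division by zero on empty input
    counts = Counter()
    for w in words:
        for category, lexicon in LEXICONS.items():
            if w in lexicon:
                counts[category] += 1
    return {category: counts[category] / total for category in LEXICONS}

profile = emotion_profile([
    "I hate this traffic, so annoyed",
    "Great day with wonderful friends",
])
```

Aggregating such per-tweet scores over all tweets geolocated to a county yields the kind of county-level emotional profile the Pennsylvania study correlated with health outcomes.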

The answer, it turned out, was yes….

The researchers have a theory: they suggest that “the language of Twitter may be a window into the aggregated and powerful effects of the community context.” They point to other epidemiological studies which have shown that general facts about a community, such as its “social cohesion and social capital,” have consequences for the health of individuals. Broadly speaking, people who live in poorer, more fragmented communities are less healthy than people living in richer, integrated ones. “When we do a sub-analysis, we find that the power that Twitter has is in large part accounted for by community and socioeconomic variables,” Eichstaedt told me when we spoke over Skype. In short, a young person’s negative, angry, and stressed-out tweets might reflect his or her stress-inducing environment—and that same environment may have negative health repercussions for other, older members of the same community….(More)”

Secrecy versus openness: Internet security and the limits of open source and peer production


Dissertation by Andreas Schmidt: “Open source and peer production have been praised as organisational models that could change the world for the better. It is commonly asserted that almost any societal activity could benefit from distributed, bottom-up collaboration — by making societal interaction more open, more social, and more democratic. However, we also need to be mindful of the limits of these models. How could they function in environments hostile to openness? Security is a societal domain more prone to secrecy than any other, except perhaps for romantic love. In light of the destructive capacity of contemporary cyber attacks, how has the Internet survived without a comprehensive security infrastructure? Secrecy vs. openness describes the realities of Internet security production through the lenses of open source and peer production theories. The study offers a glimpse into the fascinating communities of technical experts, who played a pivotal role when the chips were down for the Internet after large-scale attacks. After an initial flirtation with openness in the early years, operational Internet security communities have put in place institutional mechanisms that have resulted in less open forms of social production…(More)”

Using open legislative data to map bill co-sponsorship networks in 15 countries


François Briatte at OpeningParliament.org: “A few years back, Kamil Gregor published a post under the title “Visualizing politics: Network analysis of bill sponsors”. His post, which focused on the lower chamber of the Czech Parliament, showed how basic social network analysis can support the exploration of parliamentary work, by revealing the ties that members of parliament create between each other through the co-sponsorship of private bills….In what follows, I would like to quickly report on a small research project that I have developed over the years, under the name “parlnet”.

Legislative data on bill co-sponsorship

This project looks at bill co-sponsorship networks in European countries. Many parliaments allow their members to co-sponsor each other’s private bills, which makes it possible to represent these parliaments as collaborative networks, where a tie exists between two MPs if they have co-sponsored legislation together.
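The construction described here can be sketched in a few lines: every pair of sponsors on the same bill gets an edge, weighted by how many bills the pair shares. The bill IDs and MP names below are invented for illustration (the parlnet project itself is written in R).

```python
from itertools import combinations

# Hypothetical bill records: each bill lists its sponsors.
bills = [
    {"id": "B1", "sponsors": ["Dupont", "Peeters", "Janssens"]},
    {"id": "B2", "sponsors": ["Dupont", "Peeters"]},
    {"id": "B3", "sponsors": ["Claes"]},  # single-sponsor bills create no ties
]

def cosponsorship_edges(bills):
    """Weighted edge list: (MP, MP) -> number of bills co-sponsored together."""
    edges = {}
    for bill in bills:
        # sort so each pair of MPs maps to a single canonical edge key
        for a, b in combinations(sorted(bill["sponsors"]), 2):
            edges[(a, b)] = edges.get((a, b), 0) + 1
    return edges

edges = cosponsorship_edges(bills)
# ("Dupont", "Peeters") co-sponsor two bills, so that edge has weight 2
```

Feeding such an edge list into a graph library is then enough to compute the party clustering and cross-party ties discussed below.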

This idea is not new: it was pioneered by James Fowler in the United States, and has been the subject of extensive research in American politics, both on the U.S. Congress and on state legislatures. Similar research also exists on the bill co-sponsorship networks of parliaments in Argentina, Chile and Romania.

Inspired by this research and by Baptiste Coulmont’s visualisation of the French lower chamber, I surveyed the parliamentary websites of the following countries:

  • all 28 current members of the European Union;
  • 4 members of the EFTA: Iceland, Liechtenstein, Norway, and Switzerland.

This search returned 19 parliamentary chambers from 15 countries for which it was (relatively) easy to extract legislative data, either through open data portals like data.riksdagen.se in Sweden or data.stortinget.no in Norway, or from official parliamentary websites directly….After splitting the data into legislative periods separated by nationwide elections, I was able to draw a large collection of networks showing bill co-sponsorship in these 19 chambers….In this graph, each point (or node) is a Belgian MP, and each tie between two MPs indicates that they have co-sponsored at least one bill together. The colors and abbreviations used in the graph are party-related codes, which combine information on the parliamentary group and linguistic community of each MP. Because this kind of graph can be interesting to explore in more detail, I have also built interactive visualizations out of them, in order to show more detailed information on the MPs who participate in bill co-sponsorship…

The parlnet project was coded in R, and its code is public so that it might benefit from external contributions. The list of countries and chambers that it covers is not exhaustive: in some cases like Portugal, I simply failed to retrieve the data. More talented coders might therefore be able to add to the current database.

Bill co-sponsorship networks illustrate how open legislative data provided by parliaments can be turned into interactive tools that easily convey some information about parliamentary work, including, but not limited to:

  • the role of parliamentary party leaders in managing the legislation produced by their groups
  • the impact of partisan discipline and ideology on legislative collaboration between MPs
  • the extent of cross-party cooperation in various parliamentary environments and chambers… (More)

UNESCO demonstrates global impact through new transparency portal


“Opendata.UNESCO.org is intended to present comprehensive, quality and timely information about UNESCO’s projects, enabling users to find information by country/region, funding source, and sector and providing comprehensive project data, including budget, expenditure, completion status, implementing organization, project documents, and more. It publishes programme and financial information in line with the IATI (International Aid Transparency Initiative) standard and other relevant transparency initiatives, drawing on experience across the UN system. UNESCO is now one of more than 230 organizations that have published to the IATI Registry, which brings together donor and developing countries, civil society organizations and other experts in aid information who are committed to working together to increase the transparency of aid.

Since its creation 70 years ago, UNESCO has tirelessly championed the causes of education, culture, natural sciences, social and human sciences, communication and information, globally. For instance, the programme for the Enhancement of Literacy in Afghanistan (ELA), started in March 2010, benefited from a $19.5 million contribution by Japan. It aimed to improve the level of literacy, numeracy and vocational skills of the adult population in 70 districts of 15 provinces of Afghanistan. Over the next three years, until April 2013, the ELA programme helped some 360,000 adult learners achieve general literacy competency. An interactive map allows for easy identification of UNESCO’s high-impact programmes, and up-to-date information on current and future aid allocations within and across countries.

Public participation and interactivity are key to the success of any open data project. http://Opendata.UNESCO.org will evolve as Member States and partners get involved, by displaying data on their own websites and sharing data among different networks, building and sharing applications, providing feedback, comments, and recommendations. …(More)”

Institutional isomorphism, policy networks, and the analytical depreciation of measurement indicators: The case of the EU e-government benchmarking


Paper by Cristiano Codagnone et al: “This article discusses the socio-political dimension of measurement in the context of benchmarking e-government within the European Union’s Open Method of Coordination. It provides empirical evidence of how this has resulted in institutional isomorphism within the self-referential policy network community involved in the benchmarking process. It argues that the policy prominence retained by supply-side benchmarking of e-government has probably indirectly limited efforts made to measure and evaluate more tangible impacts. High scores in EU benchmarking have contributed to increasing the institutionally-perceived quality but not necessarily the real quality and utility of e-government services. The article concludes by outlining implications for policy and practical recommendations for filling the gaps identified in measurement and evaluation of e-government. It proposes a more comprehensive policy benchmarking framework, which aims to ensure a gradual improvement in measurement activities with indicators that reflect and follow the pace of change, align measurement activities to evaluation needs and, eventually, reduce measurement error….(More)”

Data for policy: when the haystack is made of needles. A call for contributions


Diana Vlad-Câlcic at the European Commission: “If policy-making is ‘whatever government chooses to do or not to do’ (Th. Dye), then how do governments actually decide? Evidence-based policy-making is not a new answer to this question, but it is constantly challenging both policy-makers and scientists to sharpen their thinking, their tools and their responsiveness.  The European Commission has recognised this and has embedded in its processes, namely through Impact Assessment, policy monitoring and evaluation, an evidence-informed decision-making approach.

With four parameters I can fit an elephant, and with five I can make him wiggle his trunk. (John von Neumann)

New data technologies raise the bar high for advanced modelling, dynamic visualisation, real-time data flows and a variety of data sources, from sensors, to cell phones or the Internet as such. An abundance of (big) data, a haystack made of needles, but do public administrations have the right tools and skills to exploit it? How much of it adds real value to established statistics and to scientific evidence? Are the high hopes and the high expectations partly just hype? And what lessons can we learn from experience?

To explore these questions, the European Commission is launching a study with the Oxford Internet Institute, Technopolis and CEPS on ‘Data for policy: big data and other innovative data-driven approaches for evidence-informed policymaking’. As a first step, the study will collect examples of initiatives in public institutions at national and international level, where innovative data technologies contribute to the policy process. It will eventually develop case-studies for EU policies.

Contribute to the collective reflection by sharing with us good practices and examples you have from other public administrations. Follow the developments of the study also on Twitter @data4policyEU.

How to Fight the Next Epidemic


Bill Gates in the New York Times: “The Ebola Crisis Was Terrible. But Next Time Could Be Much Worse….Much of the public discussion about the world’s response to Ebola has focused on whether the World Health Organization, the Centers for Disease Control and Prevention and other groups could have responded more effectively. These are worthwhile questions, but they miss the larger point. The problem isn’t so much that the system didn’t work well enough. The problem is that we hardly have a system at all.

To begin with, most poor countries, where a natural epidemic is most likely to start, have no systematic disease surveillance in place. Even once the Ebola crisis was recognized last year, there were no resources to effectively map where cases occurred, or to use people’s travel patterns to predict where the disease might go next….

Data is another crucial problem. During the Ebola epidemic, the database that tracks cases has not always been accurate. This is partly because the situation is so chaotic, but also because much of the case reporting has been done on paper and then sent to a central location for data entry….

I believe that we can solve this problem, just as we’ve solved many others — with ingenuity and innovation.

We need a global warning and response system for outbreaks. It would start with strengthening poor countries’ health systems. For example, when you build a clinic to deliver primary health care, you’re also creating part of the infrastructure for fighting epidemics. Trained health care workers not only deliver vaccines; they can also monitor disease patterns, serving as part of the early warning systems that will alert the world to potential outbreaks. Some of the personnel who were in Nigeria to fight polio were redeployed to work on Ebola — and that country was able to contain the disease very quickly.

We also need to invest in disease surveillance. We need a case database that is instantly accessible to the relevant organizations, with rules requiring countries to share their information. We need lists of trained personnel, from local leaders to global experts, prepared to deal with an epidemic immediately. … (More)”

Big Data Is an Economic Justice Issue, Not Just a Privacy Problem


in the Huffington Post: “The control of personal data by “big data” companies is not just an issue of privacy but is becoming a critical issue of economic justice, argues a new report issued by the organization Data Justice, which itself is being publicly launched in conjunction with the report…

At the same time, big data is fueling economic concentration across our economy. As a handful of data platforms generate massive amounts of user data, the barriers to entry rise, since potential competitors have little data themselves to entice advertisers compared with the incumbents, who have both the concentrated processing power and the supply of user data to dominate particular sectors. With little competition, companies end up with little incentive to either protect user privacy or share the economic value of that user data with the consumers generating those profits.

The report argues for a threefold approach to making big data work for everyone in the economy, not just for the big data platforms’ shareholders:

  • First, regulators need to strengthen user control of their own data by both requiring explicit consent for all uses of the data and better informing users of how it’s being used and how companies profit from that data.
  • Second, regulators need to factor control of data into merger review, and to initiate antitrust actions against companies like Google where monopoly control of a sector like search advertising has been established.
  • Third, policymakers should restrict practices that harm consumers, including banning price discrimination where consumers are not informed of all discount options available and bringing the participation of big data platforms in marketing financial services under the regulation of the Consumer Financial Protection Bureau.

Data Justice itself has been founded as an organization “to promote public education and new alliances to challenge the danger of big data to workers, consumers and the public.” It will work to educate the public, policymakers and organizational allies on how big data is contributing to economic inequality in the economy. Its new website at datajustice.org is intended to bring together a wide range of resources highlighting the economic justice aspects of big data.”