E-government and organisational transformation of government: Black box revisited?


New paper in Government Information Quarterly: “During the e-government era the role of technology in the transformation of public sector organisations has significantly increased, whereby the relationship between ICT and organisational change in the public sector has become the subject of increasingly intensive research over the last decade. However, an overview of the literature to date indicates that the impacts of e-government on the organisational transformation of administrative structures and processes are still relatively poorly understood and vaguely defined.

The main purpose of the paper is therefore the following: (1) to examine the interdependence of e-government development and organisational transformation in public sector organisations and propose a clearer explanation of ICT’s role as a driving force of organisational transformation in further e-government development; and (2) to specify the main characteristics of organisational transformation in the e-government era through the development of a new framework. This framework describes organisational transformation in two dimensions, i.e. the ‘depth’ and the ‘nature’ of changes, and specifies the key attributes related to the three typical organisational levels.”

Tech Policy Is Not A Religion


Opinion Piece by Robert Atkinson: “‘Digital libertarians’ and ‘digital technocrats’ want us to believe their way is the truth and the light. It’s not that black and white. Manichaeism, an ancient religion, took a dualistic view of the world. It described the struggle between a good, spiritual world of light, and an evil, material world of darkness. Listening to tech policy debates, especially in America, one would presume that Manichaeism is alive and well.
On one side (light or dark, depending on your view) are the folks who embrace free markets, bottom-up processes, multi-stakeholderism, open-source systems, and crowdsourced innovations. On the other are those who embrace government intervention, top-down processes, additional regulation, proprietary systems, and expert-based innovations.
For the first group, whom I’ll call the digital libertarians, government is the problem, not the solution. Tech enables freedom, and statist actions can only limit it.
According to this camp, tech is moving so fast that government can’t hope to keep up — the only workable governance system is a nimble one based on multi-stakeholder processes, such as ICANN and W3C. With Web 2.0, everyone can be a contributor, and it is through the proliferation of multiple and disparate voices that we discover the truth. And because of the ability of communities of coders to add their contributions, the only viable tech systems are based on open-source models.
For the second group, the digital technocrats, the problem is the anarchic, lawless, corporate-dominated nature of the digital world. Tech is so disruptive, including to long-established norms and laws, it needs to be limited and shaped, and only the strong hand of the state can do that. Because of the influence of tech on all aspects of society, any legitimate governance process must stem from democratic institutions — not from a select group of insiders — and that can only happen with government oversight such as through the UN’s International Telecommunication Union.
According to this camp, because there are so many uninformed voices on the Internet spreading urban myths like wildfire, we need carefully vetted experts, whether in media or other organizations, to sort through the mass of information and provide expert, unbiased analysis. And because IT systems are so critical to the safety and well-functioning of society, we need companies to build and profit from them through a closed-source model.
Of course, just as religious Manichaeism leads to distorted practices of faith, tech Manichaeism leads to distorted policy practices and views. Take Internet governance. The process of ensuring Internet governance and evolution is complex and rapidly changing. A strong case can be made for the multi-stakeholder process as the driving force.
But this situation doesn’t mean, as digital libertarians would assert, that governments should stay out of the Internet altogether. Governments are not, as digital libertarian John Perry Barlow arrogantly asserts, “weary giants of flesh and steel.” Governments can and do play legitimate roles in many Internet policy issues, from establishing cybersecurity guidelines to setting online sales tax policy to combatting spam and digital piracy to setting rules governing unfair and deceptive online marketing practices.
This assertion doesn’t mean governments always get things right. They don’t. But as the Information Technology and Innovation Foundation writes in its recent response to Barlow’s manifesto, to deny people the right to regulate Internet activity through their government officials ignores the significant role the government can play in promoting the continued development of the Internet and digital economy.
At the same time, the digital technocrats must understand that the digital world is different from the analog one, and that old rules, regulations, and governing structures simply don’t apply. When ITU Secretary General Hamadoun Toure argues that “at the behest of all the world’s nations, the UN must lead this effort” to manage the global Internet, and that “for big commercial interests, it’s about maximizing the bottom line,” he’s ignoring the critical role that tech companies and other non-government stakeholders play in the Internet ecosystem.
Because digital technology is such a vastly complex system, digital libertarians claim that their “light” approach is superior to the “dark,” controlling, technocratic approach. In fact, this very complexity requires that we base Internet policy on pragmatism, not religion.
Conversely, because technology is so important to opportunity and the functioning of societies, digital technocrats assert that only governments can maximize these benefits. In fact, its importance requires us to respect its complexity and the role of private sector innovators in driving digital progress.
In short, the belief that one or the other of these approaches is sufficient in itself to maximize tech innovation is misleading at best and damaging at worst.”

Garbage In, Garbage Out… Or, How to Lie with Bad Data


Medium: For everyone who slept through Stats 101, Charles Wheelan’s Naked Statistics is a lifesaver. From batting averages and political polls to Schlitz ads and medical research, Wheelan “illustrates exactly why even the most reluctant mathophobe is well advised to achieve a personal understanding of the statistical underpinnings of life” (New York Times). What follows is adapted from the book, out now in paperback.
Behind every important study there are good data that made the analysis possible. And behind every bad study . . . well, read on. People often speak about “lying with statistics.” I would argue that some of the most egregious statistical mistakes involve lying with data; the statistical analysis is fine, but the data on which the calculations are performed are bogus or inappropriate. Here are some common examples of “garbage in, garbage out.”

Selection Bias

….Selection bias can be introduced in many other ways. A survey of consumers in an airport is going to be biased by the fact that people who fly are likely to be wealthier than the general public; a survey at a rest stop on Interstate 90 may have the opposite problem. Both surveys are likely to be biased by the fact that people who are willing to answer a survey in a public place are different from people who would prefer not to be bothered. If you ask 100 people in a public place to complete a short survey, and 60 are willing to answer your questions, those 60 are likely to be different in significant ways from the 40 who walked by without making eye contact.
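To make the arithmetic of this concrete, here is a toy simulation (all numbers invented, not from Wheelan’s book): frequent flyers are assumed to earn more and to be far more likely to answer an airport survey, so the survey mean drifts well above the true population mean.

```python
import random

random.seed(0)

# Hypothetical population: a small group of frequent flyers with higher incomes
# mixed into a larger general public. The proportions and figures are invented.
population = ([random.gauss(45_000, 10_000) for _ in range(9_000)] +   # general public
              [random.gauss(95_000, 20_000) for _ in range(1_000)])    # frequent flyers

true_mean = sum(population) / len(population)

# Airport intercept survey: assume flyers are far more likely to be present
# and willing to answer than everyone else.
def responds(income):
    return random.random() < (0.6 if income > 70_000 else 0.05)

airport_sample = [x for x in population if responds(x)]
survey_mean = sum(airport_sample) / len(airport_sample)

print(f"True mean income:           {true_mean:,.0f}")
print(f"Airport-survey mean income: {survey_mean:,.0f}")   # biased noticeably upward
```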

Publication Bias

Positive findings are more likely to be published than negative findings, which can skew the results that we see. Suppose you have just conducted a rigorous, longitudinal study in which you find conclusively that playing video games does not prevent colon cancer. You’ve followed a representative sample of 100,000 Americans for twenty years; those participants who spend hours playing video games have roughly the same incidence of colon cancer as the participants who do not play video games at all. We’ll assume your methodology is impeccable. Which prestigious medical journal is going to publish your results?

Most things don’t prevent cancer.

None, for two reasons. First, there is no strong scientific reason to believe that playing video games has any impact on colon cancer, so it is not obvious why you were doing this study. Second, and more relevant here, the fact that something does not prevent cancer is not a particularly interesting finding. After all, most things don’t prevent cancer. Negative findings are not especially sexy, in medicine or elsewhere.
The net effect is to distort the research that we see, or do not see. Suppose that one of your graduate school classmates has conducted a different longitudinal study. She finds that people who spend a lot of time playing video games do have a lower incidence of colon cancer. Now that is interesting! That is exactly the kind of finding that would catch the attention of a medical journal, the popular press, bloggers, and video game makers (who would slap labels on their products extolling their health benefits). It wouldn’t be long before Tiger Moms all over the country were “protecting” their children from cancer by snatching books out of their hands and forcing them to play video games instead.
Of course, one important recurring idea in statistics is that unusual things happen every once in a while, just as a matter of chance. If you conduct 100 studies, one of them is likely to turn up results that are pure nonsense—like a statistical association between playing video games and a lower incidence of colon cancer. Here is the problem: The 99 studies that find no link between video games and colon cancer will not get published, because they are not very interesting. The one study that does find a statistical link will make it into print and get loads of follow-on attention. The source of the bias stems not from the studies themselves but from the skewed information that actually reaches the public. Someone reading the scientific literature on video games and cancer would find only a single study, and that single study will suggest that playing video games can prevent cancer. In fact, 99 studies out of 100 would have found no such link.
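A back-of-the-envelope simulation of Wheelan’s 100-study scenario (significance threshold invented) shows how a literature made up only of “interesting” results misleads:

```python
import random

random.seed(1)

# 100 studies of a link that does not exist. Under the null hypothesis each
# study's p-value is uniform on [0, 1]; call a result "interesting" if p < 0.05.
ALPHA = 0.05
p_values = [random.random() for _ in range(100)]

spurious_links = [p for p in p_values if p < ALPHA]   # significant purely by chance
published = spurious_links                            # journals print only these

print(f"Studies conducted:         {len(p_values)}")
print(f"Chance 'positive' results: {len(spurious_links)}")   # a handful, by luck alone
print(f"Studies published:         {len(published)}")
# A reader of the journals sees only the spurious positives, never the null results.
```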

Recall Bias

Memory is a fascinating thing—though not always a great source of good data. We have a natural human impulse to understand the present as a logical consequence of things that happened in the past—cause and effect. The problem is that our memories turn out to be “systematically fragile” when we are trying to explain some particularly good or bad outcome in the present. Consider a study looking at the relationship between diet and cancer. In 1993, a Harvard researcher compiled a data set comprising a group of women with breast cancer and an age-matched group of women who had not been diagnosed with cancer. Women in both groups were asked about their dietary habits earlier in life. The study produced clear results: The women with breast cancer were significantly more likely to have had diets that were high in fat when they were younger.
Ah, but this wasn’t actually a study of how diet affects the likelihood of getting cancer. This was a study of how getting cancer affects a woman’s memory of her diet earlier in life. All of the women in the study had completed a dietary survey years earlier, before any of them had been diagnosed with cancer. The striking finding was that women with breast cancer recalled a diet that was much higher in fat than what they actually consumed; the women with no cancer did not.

Women with breast cancer recalled a diet that was much higher in fat than what they actually consumed; the women with no cancer did not.

The New York Times Magazine described the insidious nature of this recall bias:

The diagnosis of breast cancer had not just changed a woman’s present and the future; it had altered her past. Women with breast cancer had (unconsciously) decided that a higher-fat diet was a likely predisposition for their disease and (unconsciously) recalled a high-fat diet. It was a pattern poignantly familiar to anyone who knows the history of this stigmatized illness: these women, like thousands of women before them, had searched their own memories for a cause and then summoned that cause into memory.

Recall bias is one reason that longitudinal studies are often preferred to cross-sectional studies. In a longitudinal study the data are collected contemporaneously. At age five, a participant can be asked about his attitudes toward school. Then, thirteen years later, we can revisit that same participant and determine whether he has dropped out of high school. In a cross-sectional study, in which all the data are collected at one point in time, we must ask an eighteen-year-old high school dropout how he or she felt about school at age five, which is inherently less reliable.
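A minimal sketch of why the contemporaneous data matter (all values invented): both groups are given identical true diets, but the diagnosis is assumed to inflate recalled fat intake in the cases only.

```python
import random

random.seed(2)

# True past fat intake is drawn from the same distribution for both groups,
# so a longitudinal (contemporaneous) comparison finds no difference.
def true_intake():
    return random.gauss(35.0, 5.0)       # hypothetical "% of calories from fat"

cases    = [true_intake() for _ in range(1_000)]   # later diagnosed with cancer
controls = [true_intake() for _ in range(1_000)]

# Cross-sectional design: diet is recalled AFTER diagnosis, and the diagnosis
# is assumed to bias recall upward for the cases only.
recalled_cases    = [x + random.gauss(4.0, 2.0) for x in cases]
recalled_controls = [x + random.gauss(0.0, 2.0) for x in controls]

mean = lambda xs: sum(xs) / len(xs)
print(f"True intake     (cases vs controls): {mean(cases):.1f} vs {mean(controls):.1f}")
print(f"Recalled intake (cases vs controls): {mean(recalled_cases):.1f} vs {mean(recalled_controls):.1f}")
# The recalled data manufacture a diet-cancer 'link' that the true data do not contain.
```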

Survivorship Bias

Suppose a high school principal reports that test scores for a particular cohort of students have risen steadily for four years. The sophomore scores for this class were better than their freshman scores. The scores from junior year were better still, and the senior year scores were best of all. We’ll stipulate that there is no cheating going on, and not even any creative use of descriptive statistics. Every year this cohort of students has done better than it did the preceding year, by every possible measure: mean, median, percentage of students at grade level, and so on. Would you (a) nominate this school leader for “principal of the year” or (b) demand more data?

If you have a room of people with varying heights, forcing the short people to leave will raise the average height in the room, but it doesn’t make anyone taller.

I say “b.” I smell survivorship bias, which occurs when some or many of the observations are falling out of the sample, changing the composition of the observations that are left and therefore affecting the results of any analysis. Let’s suppose that our principal is truly awful. The students in his school are learning nothing; each year half of them drop out. Well, that could do very nice things for the school’s test scores—without any individual student testing better. If we make the reasonable assumption that the worst students (with the lowest test scores) are the most likely to drop out, then the average test scores of those students left behind will go up steadily as more and more students drop out. (If you have a room of people with varying heights, forcing the short people to leave will raise the average height in the room, but it doesn’t make anyone taller.)
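The principal’s rising scores can be reproduced with a few lines of arithmetic (scores invented): nobody improves, the weakest half simply vanishes from the sample each year.

```python
# No individual score ever changes; the cohort average rises only because the
# lowest-scoring half of the class drops out each year. Scores are invented.
scores = list(range(40, 100))              # one hypothetical test score per student

for year in ["freshman", "sophomore", "junior", "senior"]:
    average = sum(scores) / len(scores)
    print(f"{year:>9}: {len(scores):3d} students, average score {average:.1f}")
    scores = scores[len(scores) // 2:]     # the weakest half leaves the sample
```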

Healthy User Bias

People who take vitamins regularly are likely to be healthy—because they are the kind of people who take vitamins regularly! Whether the vitamins have any impact is a separate issue. Consider the following thought experiment. Suppose public health officials promulgate a theory that all new parents should put their children to bed only in purple pajamas, because that helps stimulate brain development. Twenty years later, longitudinal research confirms that having worn purple pajamas as a child does have an overwhelmingly large positive association with success in life. We find, for example, that 98 percent of entering Harvard freshmen wore purple pajamas as children (and many still do) compared with only 3 percent of inmates in the Massachusetts state prison system.

The purple pajamas do not matter.

Of course, the purple pajamas do not matter; but having the kind of parents who put their children in purple pajamas does matter. Even when we try to control for factors like parental education, we are still going to be left with unobservable differences between those parents who obsess about putting their children in purple pajamas and those who don’t. As New York Times health writer Gary Taubes explains, “At its simplest, the problem is that people who faithfully engage in activities that are good for them—taking a drug as prescribed, for instance, or eating what they believe is a healthy diet—are fundamentally different from those who don’t.” This effect can potentially confound any study trying to evaluate the real effect of activities perceived to be healthful, such as exercising regularly or eating kale. We think we are comparing the health effects of two diets: kale versus no kale. In fact, if the treatment and control groups are not randomly assigned, we are comparing two diets that are being eaten by two different kinds of people. We have a treatment group that is different from the control group in two respects, rather than just one.
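A toy model of the pajama scenario (all effects invented) makes the confounding visible: the pajamas are given zero effect, yet a naive comparison still finds a large gap.

```python
import random

random.seed(3)

# "Success" depends only on an unobserved parenting factor; purple pajamas have
# no effect at all, but engaged parents are far more likely to use them.
def simulate_child():
    engaged_parents = random.random() < 0.5
    purple_pajamas  = random.random() < (0.9 if engaged_parents else 0.1)
    success_score   = random.gauss(10 if engaged_parents else 0, 3)
    return purple_pajamas, success_score

children = [simulate_child() for _ in range(10_000)]
mean = lambda xs: sum(xs) / len(xs)

wearers     = [s for wore, s in children if wore]
non_wearers = [s for wore, s in children if not wore]
print(f"Average success, pajama wearers: {mean(wearers):.1f}")
print(f"Average success, non-wearers:    {mean(non_wearers):.1f}")
# The gap is real, but it belongs to the parents, not the pajamas.
```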

If statistics is detective work, then the data are the clues. My wife spent a year teaching high school students in rural New Hampshire. One of her students was arrested for breaking into a hardware store and stealing some tools. The police were able to crack the case because (1) it had just snowed and there were tracks in the snow leading from the hardware store to the student’s home; and (2) the stolen tools were found inside. Good clues help.
Like good data. But first you have to get good data, and that is a lot harder than it seems.

Open data movement faces fresh hurdles


SciDevNet: “The open-data community made great strides in 2013 towards increasing the reliability of and access to information, but more efforts are needed to increase its usability on the ground and the general capacity of those using it, experts say.
An international network of innovation hubs, the first extensive open data certification system and a data for development partnership are three initiatives launched last year by the fledgling Open Data Institute (ODI), a UK-based not-for-profit firm that champions the use of open data to aid social, economic and environmental development.
Before open data can be used effectively the biggest hurdles to be cleared are agreeing common formats for data sets and improving their trustworthiness and searchability, says the ODI’s chief statistician, Ulrich Atz.
“As it is so new, open data is often inconsistent in its format, making it difficult to reuse. We see a great need for standards and tools,” he tells SciDev.Net. Data that is standardised is of “incredible value” he says, because this makes it easier and faster to use and gives it a longer useable lifetime.
The ODI — which celebrated its first anniversary last month — is attempting to achieve this with a first-of-its-kind certification system that gives publishers and users important details about online data sets, including publishers’ names and contact information, the type of sharing licence, the quality of information and how long it will be available.
Certificates encourage businesses and governments to make use of open data by guaranteeing their quality and usability, and making them easier to find online, says Atz.
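As a rough, purely illustrative sketch of what such a certificate records (the field names below are invented for illustration, not the ODI’s actual schema), a reuser might see something like:

```python
# Hypothetical record mirroring the details the article says a certificate
# captures: publisher, contact, licence, quality and availability.
dataset_certificate = {
    "dataset":          "city-restaurant-inspections",
    "publisher":        "Example City Department of Health",
    "contact":          "opendata@example.gov",
    "licence":          "Creative Commons Attribution 4.0",
    "quality_level":    "standard",
    "available_until":  "2020-12-31",
    "machine_readable": True,
}

# A reuser could check the record before building on the data set:
if dataset_certificate["machine_readable"]:
    print(f"Reusable under: {dataset_certificate['licence']}")
```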
Finding more and better ways to apply open data will also be supported by a growing network of ODI ‘nodes’: centres that bring together companies, universities and NGOs to support open-data projects and communities….
Because lower-income countries often lack well-established data collection systems, they have greater freedom to rethink how data are collected and how they flow between governments and civil society, he says.
But there is still a long way to go. Open-data projects currently rely on governments and other providers sharing their data on online platforms, whereas in a truly effective system, information would be published in an open format from the start, says Davies.
Furthermore, even where advances are being made at a strategic level, open-data initiatives are still having only a modest impact in the real world, he says.
“Transferring [progress at a policy level] into availability of data on the ground and the capacity to use it is a lot tougher and slower,” Davies says.”

Open Development (Networked Innovations in International Development)


New book edited by Matthew L. Smith and Katherine M. A. Reilly (Foreword by Yochai Benkler) : “The emergence of open networked models made possible by digital technology has the potential to transform international development. Open network structures allow people to come together to share information, organize, and collaborate. Open development harnesses this power, to create new organizational forms and improve people’s lives; it is not only an agenda for research and practice but also a statement about how to approach international development. In this volume, experts explore a variety of applications of openness, addressing challenges as well as opportunities.
Open development requires new theoretical tools that focus on real world problems, consider a variety of solutions, and recognize the complexity of local contexts. After exploring the new theoretical terrain, the book describes a range of cases in which open models address such specific development issues as biotechnology research, improving education, and access to scholarly publications. Contributors then examine tensions between open models and existing structures, including struggles over privacy, intellectual property, and implementation. Finally, contributors offer broader conceptual perspectives, considering processes of social construction, knowledge management, and the role of individual intent in the development and outcomes of social models.”

From Faith-Based to Evidence-Based: The Open Data 500 and Understanding How Open Data Helps the American Economy


Beth Noveck in Forbes: “Public funds have, after all, paid for their collection, and the law says that federal government data are not protected by copyright. By the end of 2009, the US and the UK had the only two open data one-stop websites where agencies could post and citizens could find open data. Now there are over 300 such portals for government data around the world with over 1 million available datasets. This kind of Open Data — including weather, safety and public health information as well as information about government spending — can serve the country by increasing government efficiency, shedding light on regulated industries, and driving innovation and job creation.

It’s becoming clear that open data has the potential to improve people’s lives. With huge advances in data science, we can take this data and turn it into tools that help people choose a safer hospital, pick a better place to live, improve the performance of their farm or business by having better climate models, and know more about the companies with whom they are doing business. Done right, people can even contribute data back, giving everyone a better understanding, for example of nuclear contamination in post-Fukushima Japan or incidences of price gouging in America’s inner cities.

The promise of open data is limitless. (see the GovLab index for stats on open data) But it’s important to back up our faith with real evidence of what works. Last September the GovLab began the Open Data 500 project, funded by the John S. and James L. Knight Foundation, to study the economic value of government Open Data extensively and rigorously. A recent McKinsey study pegged the annual global value of Open Data (including free data from sources other than government) at $3 trillion a year or more. We’re digging in and talking to those companies that use Open Data as a key part of their business model. We want to understand whether and how open data is contributing to the creation of new jobs, the development of scientific and other innovations, and adding to the economy. We also want to know what government can do better to help industries that want high quality, reliable, up-to-date information that government can supply. Of those 1 million datasets, for example, 96% are not updated on a regular basis.

The GovLab just published an initial working list of 500 American companies that we believe to be using open government data extensively.  We’ve also posted in-depth profiles of 50 of them — a sample of the kind of information that will be available when the first annual Open Data 500 study is published in early 2014. We are also starting a similar study for the UK and Europe.

Even at this early stage, we are learning that Open Data is a valuable resource. As my colleague Joel Gurin, author of Open Data Now: the Secret to Hot Start-Ups, Smart Investing, Savvy Marketing and Fast Innovation, who directs the project, put it, “Open Data is a versatile and powerful economic driver in the U.S. for new and existing businesses around the country, in a variety of ways, and across many sectors. The diversity of these companies in the kinds of data they use, the way they use it, their locations, and their business models is one of the most striking things about our findings so far.” Companies are paradoxically building value-added businesses on top of public data that anyone can access for free….”

The full article can be found here.

Entrepreneurs Shape Free Data Into Money


Angus Loten in the Wall Street Journal: “More cities are putting information on everything from street-cleaning schedules to police-response times and restaurant inspection reports in the public domain, in the hope that people will find a way to make money off the data.
Supporters of such programs often see them as a local economic stimulus plan, allowing software developers and entrepreneurs in cities ranging from San Francisco to South Bend, Ind., to New York, to build new businesses based on the information they get from government websites.
When Los Angeles Mayor Eric Garcetti issued an executive directive last month to launch the city’s open-data program, he cited entrepreneurs and businesses as important beneficiaries. Open data promotes innovation and “gives companies, individuals, and nonprofit organizations the opportunity to leverage one of government’s greatest assets: public information,” according to the Dec. 18 directive.
A poster child for the movement might be 34-year-old Matt Ehrlichman of Seattle, who last year built an online business in part using Seattle work permits, professional licenses and other home-construction information gathered up by the city’s Department of Planning and Development.
While his website is free, his business, called Porch.com, has more than 80 employees and charges a $35 monthly fee to industry professionals who want to boost the visibility of their projects on the site.
The site gathers raw public data—such as addresses for homes under renovation, what they are doing, who is doing the work and how much they are charging—and combines it with photos and other information from industry professionals and homeowners. It then creates a searchable database for users to compare ideas and costs for projects near their own neighborhood.
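The kind of merge described here is easy to picture with a toy example (records and field names invented, not Porch.com’s actual data model):

```python
# Invented permit records and contractor profiles, joined into one searchable view.
permits = [
    {"address": "123 Elm St", "work": "kitchen remodel",  "contractor": "Acme Build", "cost": 42_000},
    {"address": "77 Oak Ave", "work": "roof replacement", "contractor": "TopRoof",    "cost": 18_500},
]
profiles = {
    "Acme Build": {"photos": 12, "rating": 4.6},
    "TopRoof":    {"photos": 3,  "rating": 4.1},
}

def search(projects, keyword):
    """Return permit records matching a keyword, enriched with the contractor's profile."""
    return [{**p, **profiles.get(p["contractor"], {})}
            for p in projects if keyword in p["work"]]

for hit in search(permits, "remodel"):
    print(hit)
```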
…Ian Kalin, director of open-data services at Socrata, a Seattle-based software firm that makes the back-end applications for many of these government open-data sites, says he’s worked with hundreds of companies that were formed around open data.
Among them is Climate Corp., a San Francisco-based firm that collects weather and yield-forecasting data to help farmers decide when and where to plant crops. Launched in 2006, the firm was acquired in October by Monsanto Co., the seed-company giant, for $930 million.
Overall, the rate of new business formation declined nationally between 2006 and 2010. But according to the latest data from the Ewing Marion Kauffman Foundation, an entrepreneurship advocacy group in Kansas City, Mo., the rate of new business formation in Seattle rose 9.41% in 2011, compared with the national average of 3.9%.
Other cities where new business formation was ahead of the national average include Chicago, Austin, Texas, Baltimore, and South Bend, Ind.—all cities that also have open-data programs. Still, how effective the ventures are in creating jobs is difficult to gauge.
One wrinkle: privacy concerns about the potential for information—such as property tax and foreclosure data—to be misused.
Some privacy advocates fear that government data that include names, addresses and other sensitive information could be used by fraudsters to target victims.”

The Emergence Of The Connected City


Glen Martin at Forbes: “If the modern city is a symbol for randomness — even chaos — the city of the near future is shaping up along opposite metaphorical lines. The urban environment is evolving rapidly, and a model is emerging that is more efficient, more functional, more — connected, in a word.
This will affect how we work, commute, and spend our leisure time. It may well influence how we relate to one another, and how we think about the world. Certainly, our lives will be augmented: better public transportation systems, quicker responses from police and fire services, more efficient energy consumption. But there could also be dystopian impacts: dwindling privacy and imperiled personal data. We could even lose some of the ferment that makes large cities such compelling places to live; chaos is stressful, but it can also be stimulating.
It will come as no surprise that converging digital technologies are driving cities toward connectedness. When conjoined, ISM band transmitters, sensors, and smart phone apps form networks that can make cities pretty darn smart — and maybe more hygienic. This latter possibility, at least, is proposed by Samrat Saha of the DCI Marketing Group in Milwaukee. Saha suggests “crowdsourcing” municipal trash pick-up via BLE modules, proximity sensors and custom mobile device apps.
“My idea is a bit tongue in cheek, but I think it shows how we can gain real efficiencies in urban settings by gathering information and relaying it via the Cloud,” Saha says. “First, you deploy sensors in garbage cans. Each can provides a rough estimate of its fill level and communicates that to a BLE 112 Module.”
As pedestrians who have downloaded custom “garbage can” apps on their BLE-capable iPhone or Android devices pass by, continues Saha, the information is collected from the module and relayed to a Cloud-hosted service for action — garbage pick-up for brimming cans, in other words. The process will also allow planners to optimize trash can placement, redeploying receptacles from areas where need is minimal to more garbage-rich environs….
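A minimal sketch of the data flow Saha describes, with invented names and thresholds (a real system would read the fill level over BLE and POST it to a hosted service rather than append to a list):

```python
from dataclasses import dataclass

FILL_THRESHOLD = 0.85            # hypothetical "schedule a pickup" level

@dataclass
class CanReading:
    can_id: str                  # identifier broadcast by the can's BLE module
    fill_level: float            # 0.0 empty .. 1.0 brimming, as estimated by the sensor
    reported_by: str             # anonymised identifier of the passing phone app

def relay_to_cloud(reading, dispatch_queue):
    """Stand-in for the app's upload: queue a pickup when a can is nearly full."""
    if reading.fill_level >= FILL_THRESHOLD:
        dispatch_queue.append(reading.can_id)

pickups = []
for reading in [CanReading("can-17", 0.92, "app-a1"),
                CanReading("can-09", 0.40, "app-b7"),
                CanReading("can-23", 0.88, "app-c3")]:
    relay_to_cloud(reading, pickups)

print("Cans queued for pickup:", pickups)    # ['can-17', 'can-23']
```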
Garbage can connectivity has larger implications than just, well, garbage. Brett Goldstein, the former Chief Data and Information Officer for the City of Chicago and a current lecturer at the University of Chicago, says city officials found clear patterns between damaged or missing garbage cans and rat problems.
“We found areas that showed an abnormal increase in missing or broken receptacles started getting rat outbreaks around seven days later,” Goldstein said. “That’s very valuable information. If you have sensors on enough garbage cans, you could get a temporal leading edge, allowing a response before there’s a problem. In urban planning, you want to emphasize prevention, not reaction.”
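Goldstein’s “temporal leading edge” amounts to a simple alert rule; a toy version (thresholds and counts invented) might look like this:

```python
from statistics import mean

LEAD_DAYS = 7                               # approximate lead time cited by Goldstein

def flag_areas(daily_broken_can_reports):
    """Flag areas whose latest count of missing/broken cans jumps well above baseline."""
    flagged = []
    for area, counts in daily_broken_can_reports.items():
        baseline, today = mean(counts[:-1]), counts[-1]
        if today > 2 * baseline:            # a crude stand-in for "abnormal increase"
            flagged.append(area)
    return flagged

reports = {"ward-3":  [2, 3, 2, 2, 9],      # sudden spike -> flag
           "ward-12": [4, 5, 4, 6, 5]}      # steady -> no flag

for area in flag_areas(reports):
    print(f"{area}: schedule rodent control within about {LEAD_DAYS} days")
```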
Such Cloud-based app-centric systems aren’t suited only for trash receptacles, of course. Companies such as Johnson Controls are now marketing apps for smart buildings — the base component for smart cities. (Johnson’s Metasys management system, for example, feeds data to its app-based Panoptix Platform to maximize energy efficiency in buildings.) In short, instrumented cities already are emerging. Smart nodes — including augmented buildings, utilities and public service systems — are establishing connections with one another, like axon-linked neurons.
But Goldstein, who was best known in Chicago for putting tremendous quantities of the city’s data online for public access, emphasizes instrumented cities are still in their infancy, and that their successful development will depend on how well we “parent” them.
“I hesitate to refer to ‘Big Data,’ because I think it’s a terribly overused term,” Goldstein said. “But the fact remains that we can now capture huge amounts of urban data. So, to me, the biggest challenge is transitioning the fields — merging public policy with computer science into functional networks.”…”

Engaging Citizens in Co-Creation in Public Services


New report by Professors Nambisan and Nambisan for the IBM Center for the Business of Government: “The term “co-creation” refers to the development of new public services by citizens in partnership with governments. The authors present four roles that citizen co-creators often assume: explorer, ideator, designer, and diffuser.

  • Explorers identify/discover and define emerging and existing problems.
  • Ideators conceptualize novel solutions to well-defined problems.
  • Designers design and/or develop implementable solutions to well-defined problems.
  • Diffusers directly support or facilitate the adoption and diffusion of public service innovations and solutions among well-defined target populations.

Report authors Drs. Satish and Priya Nambisan of the University of Wisconsin-Milwaukee provide detailed examples of citizens playing each of these roles. They note that numerous forces contribute to the trend of citizens participating in government activities, “a shift from that of a passive service beneficiary to that of an active, informed partner or co-creator in public service innovation and problem-solving.”
To help government leaders craft successful co-creation programs, the report outlines four strategies to encourage citizen co-creation:

  1. Fit the approach to the innovation context
  2. Manage citizen expectations
  3. Link the internal organization with the external partners
  4. Embed citizen engagement in the broader context”

Building Creative Commons: The Five Pillars Of Open Source Finance


Brett Scott: “This is an article about Open Source Finance. It’s an idea I first sketched out at a talk I gave at the Open Data Institute in London. By ‘Open Source Finance’, I don’t just mean open source software programmes. Rather, I’m referring to something much deeper and broader. It’s a way of framing an overall change we might want to see in the financial system….

You can thus take on five conceptually separate, but mutualistic roles: Producer, consumer, validator, community member, or (competitive or complementary) breakaway. And these same five elements can underpin a future system of Open Source Finance. I’m framing this as an overall change we might want to see in the financial system, but perhaps we are already seeing it happening. So let’s look briefly at each pillar in turn.
Pillar 1: Access to the means of financial production
Very few of us perceive ourselves as offering financial services when we deposit our money in banks. Mostly we perceive ourselves as passive recipients of services. Put another way, we frequently don’t imagine we have the capability to produce financial services, even though the entire financial system is foundationally constructed from the actions of small-scale players depositing money into banks and funds, buying the products of companies that receive loans, and culturally validating the money system that the banks uphold. Let’s look though, at a few examples of prototypes that are breaking this down:

  1. Peer-to-peer finance models: If you decide to lend money to your friend, you directly perceive yourself as offering them a service. P2P finance platforms extend that concept far beyond your circle of close contacts, so that you can directly offer a financial service to someone who needs it. In essence, such platforms offer you access to an active, direct role in producing financial services, rather than an indirect, passive one.
  2. There are many interesting examples of actual open source financial software aimed at helping to fulfil the overall mission of an open source financial system. Check out Mifos, Cyclos, and Hamlets (developed by Community Forge’s Matthew Slater and others), all of which are designed to help people set up their own financial institutions
  3. Alternative currencies: There’s a reason why the broader public are suddenly interested in understanding Bitcoin. It’s a currency that people have produced themselves. As a member of the Bitcoin community, I am much more aware of my role in upholding – or producing – the system, than I am when using normal money, which I had no conscious role in producing. The scope to invent your own currency goes far beyond crypto-currencies though: local currencies, time-banks, and mutual credit systems are emerging all over
  4. The Open Bank Project is trying to open up banks to third party apps that would allow a depositor to have much greater customisability of their bank account. It’s not aimed at bypassing banks in the way that P2P is, but it’s seeking to create an environment where an ecosystem of alternative systems can plug into the underlying infrastructure provided by banks

Pillar 2: Widespread distribution
Financial intermediaries like banks and funds serve as powerful gatekeepers to access to financing. To some extent this is a valid role – much like a publisher or music label will attempt to only publish books or music that they believe are high quality enough – but on the other hand, this leads to excessive power vested in the intermediaries, and systematic bias in what gets to survive. When combined with a lack of democratic accountability on the part of the intermediaries, you can have whole societies held hostage to the (arbitrary) whims, prejudices and interests of such intermediaries. Expanding access to financial services is thus a big front in the battle for financial democratisation. In addition to more traditional means to building financial inclusion – such as credit unions and microfinance – here are two areas to look at:

  • Crowdfunding: In the dominant financial system, you have to suck up to a single set of gatekeepers to get financing, hoping they won’t exclude you. Crowdfunding though, has expanded access to receiving financial services to a whole host of people who previously wouldn’t have access, such as artists, small-scale filmmakers, activists, and entrepreneurs with no track record. Crowdfunding can serve as a micro redistribution system in society, offering people a direct way to transfer wealth to areas that traditional welfare systems might neglect
  • Mobile banking: This is a big area, with important implications for international development and ICT4D. Check out innovations like M-Pesa in Kenya, a technology to use mobile phones as proto-bank accounts. This in itself doesn’t necessarily guarantee inclusion, but it expands potential access to the system to people that most banks ignore

Pillar 3: The ability to monitor
Do you know where the money in the big banks goes? No, of course not. They don’t publish it, under the guise of commercial secrecy and confidentiality. It’s like they want to have their cake and eat it: “We’ll act as intermediaries on your behalf, but don’t ever ask for any accountability”. And what about the money in your pension fund? Also very little accountability. The intermediary system is incredibly opaque, but attempts to make it more transparent are emerging. Here are some examples:

  • Triodos Bank and Charity Bank are examples of banks that publish exactly what projects they lend to. This gives you the ability to hold them to account in a way that no other bank will allow you to do
  • Corporations are vehicles for extracting value out of assets and then distributing that value via financial instruments to shareholders and creditors. Corporate structures though, including those used by banks themselves, have reached a level of complexity approaching pure obfuscation. There can be no democratic accountability when you can’t even see who owns what, and how the money flows. Groups like OpenCorporates and Open Oil though, are offering new open data tools to shine a light on the shadowy world of tax havens, ownership structures and contracts
  • Embedded in peer-to-peer models is a new model of accountability too. When people are treated as mere account numbers with credit scores by banks, the people in return feel little accountability towards the banks. On the other hand, if an individual has directly placed trust in me, I feel much more compelled to respect that

Pillar 4: An ethos of non-prescriptive DIY collaboration
At the heart of open source movements is a deep DIY ethos. This is in part about the sheer joy of producing things, but also about asserting individual power over institutionalised arrangements and pre-established officialdom. Alongside this, and deeply tied to the DIY ethos, is the search to remove individual alienation: You are not a cog in a wheel, producing stuff you don’t have a stake in, in order to consume stuff that you don’t know the origins of. Unalienated labour includes the right to produce where you feel most capable or excited.
This ethos of individual responsibility and creativity stands in contrast to the traditional passive frame of finance that is frequently found on both the Right and Left of the political spectrum. Indeed, the debates around ‘socially useful finance’ are seldom about reducing the alienation of people from their financial lives. They’re mostly about turning the existing financial sector into a slightly more benign dictatorship. The essence of DIY though, is to band together, not via the enforced hierarchy of the corporation or bureaucracy, but as part of a likeminded community of individuals creatively offering services to each other. So let’s take a look at a few examples of this:

  1. BrewDog’s ‘Equity for Punks’ share offering is probably only going to attract beer-lovers, but that’s the point – you get together as a group who has a mutual appreciation for a project, and you finance it, and then when you’re drinking the beer you’ll know you helped make it happen in a small way
  2. Community shares offer local groups the ability to finance projects that are meaningful to them in a local area. Here’s one for a solar co-operative, a pub, and a ferry boat service in Bristol
  3. We’ve already discussed how crowdfunding platforms open access to finance to people excluded from it, but they do this by offering would-be crowdfunders the chance to support things that excite them. I don’t have much cash, so I’m not in a position to actively finance people, but in my Indiegogo profile you can see I make an effort to help publicise campaigns that I want to see receive financing

Pillar 5: The right to fork
The right to dissent is a crucial component of a democratic society. But for dissent to be effective, it has to be informed and constructive, rather than reactive and regressive. There is much dissent towards the current financial system, but while people are free to voice their displeasure, they find it very difficult to actually act on their displeasure. We may loathe the smug banking oligopoly, but we’re frequently compelled to use them.
Furthermore, much dissent doesn’t have a clear vision of what alternative is sought. This is partially due to the fact that access to financial ‘source code’ is so limited. It’s hard to articulate ideas about what’s wrong when one cannot articulate how the current system operates. Most financial knowledge is held in proprietary formulations and obscure jargon-laden language within the financial sector, and this needs to change. It’s for this reason that I’m building the London School of Financial Activism, so ordinary people can explore the layers of financial code, from the deepest layer – the money itself – and then on to the institutions, instruments and networks that move it around….”