Building Creative Commons: The Five Pillars Of Open Source Finance


Brett Scott: “This is an article about Open Source Finance. It’s an idea I first sketched out at a talk I gave at the Open Data Institute in London. By ‘Open Source Finance’, I don’t just mean open source software programmes. Rather, I’m referring to something much deeper and broader. It’s a way of framing an overall change we might want to see in the financial system….

You can thus take on five conceptually separate, but mutualistic roles: Producer, consumer, validator, community member, or (competitive or complementary) breakaway. And these same five elements can underpin a future system of Open Source Finance. I’m framing this as an overall change we might want to see in the financial system, but perhaps we are already seeing it happening. So let’s look briefly at each pillar in turn.
Pillar 1: Access to the means of financial production
Very few of us perceive ourselves as offering financial services when we deposit our money in banks. Mostly we perceive ourselves as passive recipients of services. Put another way, we frequently don’t imagine we have the capability to produce financial services, even though the entire financial system is foundationally constructed from the actions of small-scale players depositing money into banks and funds, buying the products of companies that receive loans, and culturally validating the money system that the banks uphold. Let’s look, though, at a few examples of prototypes that are breaking this down:

  1. Peer-to-peer finance models: If you decide to lend money to your friend, you directly perceive yourself as offering them a service. P2P finance platforms extend that concept far beyond your circle of close contacts, so that you can directly offer a financial service to someone who needs it. In essence, such platforms offer you access to an active, direct role in producing financial services, rather than an indirect, passive one.
  2. Open source financial software: There are many interesting examples of actual open source financial software aimed at helping to fulfil the overall mission of an open source financial system. Check out Mifos, Cyclos, and Hamlets (developed by Community Forge’s Matthew Slater and others), all of which are designed to help people set up their own financial institutions
  3. Alternative currencies: There’s a reason why the broader public are suddenly interested in understanding Bitcoin. It’s a currency that people have produced themselves. As a member of the Bitcoin community, I am much more aware of my role in upholding – or producing – the system than I am when using normal money, which I had no conscious role in producing. The scope to invent your own currency goes far beyond crypto-currencies though: local currencies, time-banks, and mutual credit systems are emerging all over
  4. The Open Bank Project is trying to open up banks to third party apps that would allow a depositor to have much greater customisability of their bank account. It’s not aimed at bypassing banks in the way that P2P is, but it’s seeking to create an environment where an ecosystem of alternative systems can plug into the underlying infrastructure provided by banks
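The plug-in model described above can be pictured as a third-party app reading account data through an API the bank exposes. A minimal sketch, with the caveat that the endpoint, field names, and token handling below are invented for illustration and are not the Open Bank Project’s actual interface:

```python
import json
import urllib.request

# Hypothetical base URL -- a stand-in, not a real open-banking endpoint.
BASE_URL = "https://api.examplebank.com/v1"

def get_transactions(account_id, token):
    """Fetch an account's transactions through a bank-provided open API."""
    req = urllib.request.Request(
        f"{BASE_URL}/accounts/{account_id}/transactions",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def monthly_spend(transactions):
    """A toy third-party 'app': total outgoing payments per month,
    assuming each transaction has an 'amount' and an ISO 'date'."""
    totals = {}
    for t in transactions:
        if t["amount"] < 0:                # negative amount = money out
            month = t["date"][:7]          # "YYYY-MM"
            totals[month] = totals.get(month, 0.0) - t["amount"]
    return totals
```

The point of the sketch is the separation of roles: the bank supplies only the raw data feed, while the customisation (here, a spending summary) lives in an independent app that any depositor could swap for another.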

Pillar 2: Widespread distribution
Financial intermediaries like banks and funds serve as powerful gatekeepers controlling access to financing. To some extent this is a valid role – much like a publisher or music label will attempt to publish only books or music that they believe are of high enough quality – but on the other hand, this leads to excessive power vested in the intermediaries, and systematic bias in what gets to survive. When combined with a lack of democratic accountability on the part of the intermediaries, you can have whole societies held hostage to the (arbitrary) whims, prejudices and interests of such intermediaries. Expanding access to financial services is thus a big front in the battle for financial democratisation. In addition to more traditional means of building financial inclusion – such as credit unions and microfinance – here are two areas to look at:

  • Crowdfunding: In the dominant financial system, you have to suck up to a single set of gatekeepers to get financing, hoping they won’t exclude you. Crowdfunding though, has expanded access to receiving financial services to a whole host of people who previously wouldn’t have access, such as artists, small-scale filmmakers, activists, and entrepreneurs with no track record. Crowdfunding can serve as a micro redistribution system in society, offering people a direct way to transfer wealth to areas that traditional welfare systems might neglect
  • Mobile banking: This is a big area, with important implications for international development and ICT4D. Check out innovations like M-Pesa in Kenya, a technology to use mobile phones as proto-bank accounts. This in itself doesn’t necessarily guarantee inclusion, but it expands potential access to the system to people that most banks ignore

Pillar 3: The ability to monitor
Do you know where the money in the big banks goes? No, of course not. They don’t publish it, under the guise of commercial secrecy and confidentiality. It’s like they want to have their cake and eat it: “We’ll act as intermediaries on your behalf, but don’t ever ask for any accountability”. And what about the money in your pension fund? Also very little accountability. The intermediary system is incredibly opaque, but attempts to make it more transparent are emerging. Here are some examples:

  • Triodos Bank and Charity Bank are examples of banks that publish exactly what projects they lend to. This gives you the ability to hold them to account in a way that no other bank will allow you to do
  • Corporations are vehicles for extracting value out of assets and then distributing that value via financial instruments to shareholders and creditors. Corporate structures though, including those used by banks themselves, have reached a level of complexity approaching pure obfuscation. There can be no democratic accountability when you can’t even see who owns what, and how the money flows. Groups like OpenCorporates and Open Oil though, are offering new open data tools to shine a light on the shadowy world of tax havens, ownership structures and contracts
  • Embedded in peer-to-peer models is a new model of accountability too. When people are treated as mere account numbers with credit scores by banks, the people in return feel little accountability towards the banks. On the other hand, if an individual has directly placed trust in me, I feel much more compelled to respect that

Pillar 4: An ethos of non-prescriptive DIY collaboration
At the heart of open source movements is a deep DIY ethos. This is in part about the sheer joy of producing things, but also about asserting individual power over institutionalised arrangements and pre-established officialdom. Alongside this, and deeply tied to the DIY ethos, is the search to remove individual alienation: You are not a cog in a wheel, producing stuff you don’t have a stake in, in order to consume stuff that you don’t know the origins of. Unalienated labour includes the right to produce where you feel most capable or excited.
This ethos of individual responsibility and creativity stands in contrast to the traditional passive frame of finance that is frequently found on both the Right and Left of the political spectrum. Indeed, the debates around ‘socially useful finance’ are seldom about reducing the alienation of people from their financial lives. They’re mostly about turning the existing financial sector into a slightly more benign dictatorship. The essence of DIY though, is to band together, not via the enforced hierarchy of the corporation or bureaucracy, but as part of a likeminded community of individuals creatively offering services to each other. So let’s take a look at a few examples of this:

  1. BrewDog’s ‘Equity for Punks’ share offering is probably only going to attract beer-lovers, but that’s the point – you get together as a group that shares an appreciation for a project, you finance it, and then when you’re drinking the beer you’ll know you helped make it happen in a small way
  2. Community shares offer local groups the ability to finance projects that are meaningful to them in a local area. Examples include a solar co-operative, a pub, and a ferry boat service in Bristol
  3. We’ve already discussed how crowdfunding platforms open access to finance to people excluded from it, but they do this by offering would-be crowdfunders the chance to support things that excite them. I don’t have much cash, so I’m not in a position to actively finance people, but in my Indiegogo profile you can see I make an effort to help publicise campaigns that I want to see financed

Pillar 5: The right to fork
The right to dissent is a crucial component of a democratic society. But for dissent to be effective, it has to be informed and constructive, rather than reactive and regressive. There is much dissent towards the current financial system, but while people are free to voice their displeasure, they find it very difficult to actually act on their displeasure. We may loathe the smug banking oligopoly, but we’re frequently compelled to use them.
Furthermore, much dissent doesn’t have a clear vision of what alternative is sought. This is partially due to the fact that access to financial ‘source code’ is so limited. It’s hard to articulate ideas about what’s wrong when one cannot articulate how the current system operates. Most financial knowledge is held in proprietary formulations and obscure jargon-laden language within the financial sector, and this needs to change. It’s for this reason that I’m building the London School of Financial Activism, so ordinary people can explore the layers of financial code, from the deepest layer – the money itself – and then on to the institutions, instruments and networks that move it around….”

How Big Should Your Network Be?


Michael Simmons at Forbes: “There is a debate happening between software developers and scientists: How large can and should our networks be in this evolving world of social media? The answer to this question has dramatic implications for how we look at our own relationship building…

To better understand our limits, I connected with the famous British anthropologist and evolutionary psychologist Robin Dunbar, creator of his namesake, Dunbar’s number.

Dunbar’s number, 150, is the suggested cognitive limit to the number of relationships we can maintain where both parties are willing to do favors for each other.


Dunbar’s discovery was a very high correlation between the size of a species’ neocortex and its average social group size. The theory predicted 150 for humans, and this number is found throughout human communities over time….
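Dunbar’s approach can be sketched as a regression of group size on neocortex ratio, linear in log-log space. The species data below are invented for illustration (they are not Dunbar’s measurements), but the fitting-and-extrapolation logic mirrors the method described:

```python
import math

# Illustrative pairs of (neocortex ratio, mean group size) for
# hypothetical primate species -- NOT Dunbar's actual data.
data = [(2.0, 14), (2.4, 25), (2.8, 42), (3.2, 70), (3.6, 110)]

# Dunbar's relationship is linear in log-log space:
#   log(group size) = a + b * log(neocortex ratio)
# Fit a and b by ordinary least squares.
xs = [math.log(r) for r, _ in data]
ys = [math.log(g) for _, g in data]
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
a = my - b * mx

def predicted_group_size(neocortex_ratio):
    """Extrapolate the fitted log-log regression to a new species."""
    return math.exp(a + b * math.log(neocortex_ratio))

# Extrapolating to a human-like neocortex ratio (around 4) yields a
# group size on the order of Dunbar's 150 with these toy numbers.
print(round(predicted_group_size(4.1)))
```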
Does Dunbar’s Number Still Apply In Today’s Connected World?
There are two camps when it comes to Dunbar’s number. The first camp is embodied by David Morin, the founder of Path, who built a whole social network predicated on the idea that you cannot have more than 150 friends. Robin Dunbar falls into this camp and even did an academic study on social media’s impact on Dunbar’s number. When I asked for his opinion, he replied:

The 150 limit applies to internet social networking sites just as it does in face-to-face life. Facebook’s own data shows that the average number of friends is 150-250 (within the range of variation in the face-to-face world). Remember that the 150 figure is just the average for the population as a whole. However, those who have more seem to have weaker friendships, suggesting that the amount of social capital is fixed and you can choose to spread it thickly or thinly.

Zvi Band, the founder of Contactually, a rapidly growing, venture-backed, relationship management tool, disagrees with both Morin and Dunbar, “We have the ability as a society to bust through Dunbar’s number. Current software can extend Dunbar’s number by at least 2-3 times.” To understand the power of Contactually and tools like it, we must understand the two paradigms people currently use when keeping in touch: broadcast & one-on-one.

While broadcast email makes it extremely easy to reach lots of people who want to hear from us, it is missing personalization. Personalization is what transforms information diffusion into personal relationship building. To make matters worse, broadcast email open rates have halved over the last decade.

On the other end of the spectrum is one-on-one outreach. Research performed by Facebook data scientists shows that one-on-one outreach is extremely effective and explains why:

Both the offering and the receiving of the intimate information increases relationship strength. Providing a partner with personal information expresses trust, encourages reciprocal self-disclosure, and engages the partner in at least some of the details of one’s daily life. Directed communication evokes norms of reciprocity, and so may obligate the partner to reply. The mere presence of the communication, which is relatively effortful compared to broadcast messages, also signals the importance of the relationship….”

When Tech Culture And Urbanism Collide


John Tolva: “…We can build upon the success of the work being done at the intersection of technology and urban design, right now.

For one, the whole realm of social enterprise — for-profit startups that seek to solve real social problems — has a huge overlap with urban issues. Impact Engine in Chicago, for instance, is an accelerator squarely focused on meaningful change and profitable businesses. One of their companies, Civic Artworks, has set as its goal rebalancing the community planning process.

The Code for America Accelerator and Tumml, both located in San Francisco, morph the concept of social innovation into civic/urban innovation. The companies nurtured by CfA and Tumml are filled with technologists and urbanists working together to create profitable businesses. Like WorkHands, a kind of LinkedIn for blue collar trades. Would something like this work outside a city? Maybe. Are its effects outsized and scale-ready in a city? Absolutely. That’s the opportunity in urban innovation.

Scale is what powers the sharing economy and it thrives because of the density and proximity of cities. In fact, shared resources at critical density is one of the only good definitions for what a city is. It’s natural that entrepreneurs have overlaid technology on this basic fact of urban life to amplify its effects. Would TaskRabbit, Hailo or LiquidSpace exist in suburbia? Probably, but their effects would be minuscule and investors would get restless. The city in this regard is the platform upon which sharing economy companies prosper. More importantly, companies like this change the way the city is used. It’s not urban planning, but it is urban (re)design and it makes a difference.

A twist that many in the tech sector who complain about cities often miss is that change in a city is not the same thing as change in city government. Obviously they are deeply intertwined; change is mighty hard when it is done at cross-purposes with government leadership. But it happens all the time. Non-government actors — foundations, non-profits, architecture and urban planning firms, real estate developers, construction companies — contribute massively to the shape and health of our cities.

Often this contribution is powered through policies of open data publication by municipal governments. Open data is the raw material of a city, the vital signs of what has happened there, what is happening right now, and the deep pool of patterns for what might happen next.

Tech entrepreneurs would do well to look at the organizations and companies capitalizing on this data as the real change agents, not government itself. Even the data in many cases is generated outside government. Citizens often do the most interesting data-gathering, with tools like LocalData. The most exciting thing happening at the intersection of technology and cities today — what really makes them “smart” — is what is happening at the periphery of city government. It’s easy to belly-ache about government and certainly there are administrations that do not make data public (or shut it down), but tech companies who are truly interested in city change should know that there are plenty of examples of how to start up and do it.

And yet, the somewhat staid world of architecture and urban-scale design presents the most opportunity to a tech community interested in real urban change. While technology obviously plays a role in urban planning — 3D visual design tools like Revit and mapping services like ArcGIS are foundational for all modern firms — data analytics as a serious input to design matters has only been used in specialized (mostly energy efficiency) scenarios. Where are the predictive analytics, the holistic models, the software-as-a-service providers for the brave new world of urban informatics and The Internet of Things? Technologists, it’s our move.

Something’s amiss when some city governments — rarely the vanguard in technological innovation — have more sophisticated tools for data-driven decision-making than the private sector firms who design the city. But some understand the opportunity. Vannevar Technology is working on it, as is Synthicity. There’s plenty of room for the most positive aspects of tech culture to remake the profession of urban planning itself. (Look to NYU’s Center for Urban Science and Progress and the University of Chicago’s Urban Center for Computation and Data for leadership in this space.)…”

Brainlike Computers, Learning From Experience


The New York Times: “Computers have entered the age when they are able to learn from their own mistakes, a development that is about to turn the digital world on its head.

The first commercial version of the new kind of computer chip is scheduled to be released in 2014. Not only can it automate tasks that now require painstaking programming — for example, moving a robot’s arm smoothly and efficiently — but it can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.

The new computing approach, already in use by some large technology companies, is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.

In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That can hold enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.

Designers say the computing style can clear the way for robots that can safely walk and drive in the physical world, though a thinking or conscious computer, a staple of science fiction, is still far off on the digital horizon.

“We’re moving from engineering computing systems to something that has many of the characteristics of biological computing,” said Larry Smarr, an astrophysicist who directs the California Institute for Telecommunications and Information Technology, one of many research centers devoted to developing these new kinds of computer circuits.

Conventional computers are limited by what they have been programmed to do. Computer vision systems, for example, only “recognize” objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation.

But last year, Google researchers were able to get a machine-learning algorithm, known as a neural network, to perform an identification task without supervision. The network scanned a database of 10 million images, and in doing so trained itself to recognize cats.
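The “without supervision” idea — structure discovered from unlabeled data alone — can be illustrated with a far simpler algorithm than Google’s neural network. Below is a toy k-means clustering sketch: the algorithm is never told which group a point belongs to, yet it recovers the groups on its own (an illustrative stand-in, not the actual cat-recognition system):

```python
import random

random.seed(0)

# Unlabeled 2-D points drawn from two blobs; nothing tells the
# algorithm which blob a point came from.
data = ([(random.gauss(0, 0.5), random.gauss(0, 0.5)) for _ in range(50)] +
        [(random.gauss(5, 0.5), random.gauss(5, 0.5)) for _ in range(50)])

def kmeans(points, iters=20):
    """Plain 2-means: alternately assign each point to its nearest
    centroid, then move each centroid to its cluster's mean."""
    centroids = [points[0], points[-1]]   # deterministic starting guesses
    for _ in range(iters):
        clusters = [[], []]
        for p in points:
            d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
            clusters[d.index(min(d))].append(p)
        centroids = [(sum(p[0] for p in c) / len(c),
                      sum(p[1] for p in c) / len(c)) for c in clusters]
    return sorted(centroids)

centers = kmeans(data)
print(centers)  # two centroids, one near (0, 0) and one near (5, 5)
```

Google’s network did something analogous at vastly greater scale: it discovered the “cat” cluster in 10 million images without ever being shown a labeled example.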

In June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately.

The new approach, used in both hardware and software, is being driven by the explosion of scientific knowledge about the brain. Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, said that is also its limitation, as scientists are far from fully understanding how brains function.”

Can a Better Taxonomy Help Behavioral Energy Efficiency?


Article at GreenTechEfficiency: “Hundreds of behavioral energy efficiency programs have sprung up across the U.S. in the past five years, but the effectiveness of the programs — both in terms of cost savings and reduced energy use — can be difficult to gauge.
Of nearly 300 programs, a new report from the American Council for an Energy-Efficient Economy was able to accurately calculate the cost of saved energy from only ten programs….
To help utilities and regulators better define and measure behavioral programs, ACEEE offers a new taxonomy of utility-run behavior programs that breaks them into three major categories:
  • Cognition: Programs that focus on delivering information to consumers. (This includes general communication efforts, enhanced billing and bill inserts, social media and classroom-based education.)
  • Calculus: Programs that rely on consumers making economically rational decisions. (This includes real-time and asynchronous feedback, dynamic pricing, games, incentives and rebates and home energy audits.)
  • Social interaction: Programs whose key drivers are social interaction and belonging. (This includes community-based social marketing, peer champions, online forums and incentive-based gifts.)
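The taxonomy above is essentially a classification scheme, and could be encoded as a simple lookup table. In the sketch below the category names come from the report, while the tactic strings and the `classify` helper are purely illustrative:

```python
# ACEEE's three categories as a lookup table (tactic lists are
# illustrative examples, not an exhaustive encoding of the report).
TAXONOMY = {
    "cognition": {"general communication", "enhanced billing",
                  "bill inserts", "social media", "classroom education"},
    "calculus": {"real-time feedback", "dynamic pricing", "games",
                 "incentives", "rebates", "home energy audits"},
    "social interaction": {"community-based social marketing",
                           "peer champions", "online forums",
                           "incentive-based gifts"},
}

def classify(tactics):
    """Return the taxonomy categories a program's tactics fall under.
    A 'stacked' program, in the report's terms, spans more than one."""
    return sorted(cat for cat, known in TAXONOMY.items()
                  if known & set(tactics))

print(classify({"dynamic pricing", "peer champions"}))
# → ['calculus', 'social interaction']  -- i.e., a stacked program
```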
….
While the report was mostly preliminary, it also offered four steps forward for utilities that want to make the most of behavioral programs.
  • Stack. The types of programs might fit into three broad categories, but judiciously blending cues based on emotion, reason and social interaction into programs is key, according to ACEEE. Even though the report recommends stacked programs that have a multi-modal approach, the authors acknowledge, “This hypothesis will remain untested until we see more stacked programs in the marketplace.”
  • Track. Just like other areas of grid modernization, utilities need to rethink how they collect, analyze and report the data coming out of behavioral programs. This should include metrics that go beyond just energy savings.
  • Share. As with other utility programs, behavior-based energy efficiency programs can be improved upon if utilities share results and if reporting is standardized across the country instead of varying by state.
  • Coordinate. Sharing is only the first step. Programs that merge water, gas and electricity efficiency can often gain better results than siloed programs. That approach, however, requires a coordinated effort by regional utilities and a change to how programs are funded and evaluated by regulators.”

6 New Year’s Strategies for Open Data Entrepreneurs


The GovLab’s Senior Advisor Joel Gurin: “Open Data has fueled a wide range of startups, including consumer-focused websites, business-to-business services, data-management tech firms, and more. Many of the companies in the Open Data 500 study are new ones like these. New Year’s is a classic time to start new ventures, and with 2014 looking like a hot year for Open Data, we can expect more startups using this abundant, free resource. For my new book, Open Data Now, I interviewed dozens of entrepreneurs and distilled six of the basic strategies that they’ve used.
1. Learn how to add value to free Open Data. We’re seeing an inversion of the value proposition for data. It used to be that whoever owned the data—particularly Big Data—had greater opportunities than those who didn’t. While this is still true in many areas, it’s also clear that successful businesses can be built on free Open Data that anyone can use. The value isn’t in the data itself but rather in the analytical tools, expertise, and interpretation that’s brought to bear. One oft-cited example: The Climate Corporation, which built a billion-dollar business out of government weather and satellite data that’s freely available for use.
2. Focus on big opportunities: health, finance, energy, education. A business can be built on just about any kind of Open Data. But the greatest number of startup opportunities will likely be in the four big areas where the federal government is focused on Open Data release. Last June’s Health Datapalooza showcased the opportunities in health. Companies like Opower in energy, GreatSchools in education, and Calcbench, SigFig, and Capital Cube in finance are examples in these other major sectors.
3. Explore choice engines and Smart Disclosure apps. Smart Disclosure – releasing data that consumers can use to make marketplace choices – is a powerful tool that can be the basis for a new sector of online startups. No one, it seems, has quite figured out how to make this form of Open Data work best, although sites like CompareTheMarket in the UK may be possible models. Business opportunities await anyone who can find ways to provide these much-needed consumer services. One example: Kayak, which competed in the crowded travel field by providing a great consumer interface, and which was sold to Priceline for $1.8 billion last year.
4. Help consumers tap the value of personal data. In a privacy-conscious society, more people will be interested in controlling their personal data and sharing it selectively for their own benefit. The value of personal data is just being recognized, and opportunities remain to be developed. There are business opportunities in setting up and providing “personal data vaults” and more opportunity in applying the many ways they can be used. Personal and Reputation.com are two leaders in this field.
5. Provide new data solutions to governments at all levels. Government datasets at the federal, state, and local level can be notoriously difficult to use. The good news is that these governments are now realizing that they need help. Data management for government is a growing industry, as Socrata, OpenGov, 3RoundStones, and others are finding, while companies like Enigma.io are turning government data into a more usable resource.
6. Look for unusual Open Data opportunities. Building a successful business by gathering data on restaurant menus and recipes is not an obvious route to success. But it’s working for Food Genius, whose founders showed a kind of genius in tapping an opportunity others had missed. While the big areas for Open Data are becoming clear, there are countless opportunities to build more niche businesses that can still be highly successful. If you have expertise in an area and see a customer need, there’s an increasingly good chance that the Open Data to help meet that need is somewhere to be found.”

The Postmodernity of Big Data


Essay in the New Inquiry: “Big Data fascinates because its presence has always been with us in nature. Each tree, drop of rain, and the path of each grain of sand, both responds to and creates millions of data points, even on a short journey. Nature is the original algorithm, the most efficient and powerful. Mathematicians since the ancients have looked to it for inspiration; techno-capitalists now look to unlock its mysteries for private gain. Playing God has become all the more brisk and profitable thanks to cloud computing.
But beyond economic motivations for Big Data’s rise, are there also epistemological ones? Has Big Data come to try to fill the vacuum of certainty left by postmodernism? Does data science address the insecurities of the postmodern thought?
It turns out that trying to explain Big Data is like trying to explain postmodernism. Neither can be summarized effectively in a phrase, despite their champions’ efforts. Broad epistemological developments are compressed into cursory, ex post facto descriptions. Attempts to define Big Data, such as IBM’s marketing copy, which promises “insights gleaned” from “enterprise data warehouses that implement massively parallel processing,” “real-time scalability” and “parsing structured and unstructured sources,” focus on its implementation at the expense of its substance, decontextualizing it entirely. Similarly, definitions of postmodernism, like art critic Thomas McEvilley’s claim that it is “a renunciation that involves recognition of the relativity of the self—of one’s habit systems, their tininess, silliness, and arbitrariness” are accurate but abstract to the point of vagueness….
Big Data might come to be understood as Big Postmodernism: the period in which the influx of unstructured, non-teleological, non-narrative inputs ceased to destabilize the existing order and was instead finally mastered and processed by a sufficiently complex, distributed, and pluralized algorithmic regime. If Big Data has a skepticism built in, how this differs from the skepticism of postmodernism is perhaps impossible yet to comprehend”.

Open data policies, their implementation and impact: A framework for comparison


Paper by A Zuiderwijk, M Janssen in the Government Information Quarterly: “In developing open data policies, governments aim to stimulate and guide the publication of government data and to gain advantages from its use. Currently there is a multiplicity of open data policies at various levels of government, whereas very little systematic and structured research has been done on the issues that are covered by open data policies, their intent and actual impact. Furthermore, no suitable framework for comparing open data policies is available, as open data is a recent phenomenon and is thus in an early stage of development. In order to help bring about a better understanding of the common and differentiating elements in the policies and to identify the factors affecting the variation in policies, this paper develops a framework for comparing open data policies. The framework includes the factors of environment and context, policy content, performance indicators and public values. Using this framework, seven Dutch governmental policies at different government levels are compared. The comparison shows both similarities and differences among open data policies, providing opportunities to learn from each other’s policies. The findings suggest that current policies are rather inward looking; open data policies can be improved by collaborating with other organizations, focusing on the impact of the policy, stimulating the use of open data and looking at the need to create a culture in which publicizing data is incorporated in daily working processes. The findings could contribute to the development of new open data policies and the improvement of existing open data policies.”

A Bottom-Up Smart City?


Alicia Rouault at Data-Smart City Solutions: “America’s shrinking cities face a tide of disinvestment, abandonment, vacancy, and a shift toward deconstruction and demolition followed by strategic reinvestment, rightsizing, and a host of other strategies designed to renew once-great cities. Thriving megacity regions are experiencing rapid growth in population, offering a different challenge for city planners to redefine density, housing, and transportation infrastructure. As cities shrink and grow, policymakers are increasingly called to respond to these changes by making informed, data-driven decisions. What is the role of the citizen in this process of collecting and understanding civic data?
Writing for Forbes in “Open Sourcing the Neighborhood,” Professor of Sociology at Columbia University Saskia Sassen calls for “open source urbanism” as an antidote to the otherwise top-down smart city movement. This form of urbanism involves opening traditional verticals of information within civic and governmental institutions. Citizens can engage with and understand the logic behind decisions by exploring newly opened administrative data. Beyond opening these existing datasets, Sassen points out that citizen experts hold invaluable institutional memory that can serve as an alternate and legitimate resource for policymakers, economists, and urban planners alike.
In 2012, we created a digital platform called LocalData to address the production and use of community-generated data in a municipal context. LocalData is a digital mapping service used globally by universities, non-profits, and municipal governments to gather and understand data at a neighborhood scale. In contrast to traditional Census or administrative data, which is produced by a central agency and collected infrequently, our platform provides a simple method for both community-based organizations and municipal employees to gather real-time data on project-specific indicators: property conditions, building inspections, environmental issues or community assets. Our platform then visualizes data and exports it into formats integrated with existing systems in government to seamlessly provide accurate and detailed information for decision makers.
LocalData began as a project in Detroit, Michigan, where the city was tackling a very real lack of standard, up-to-date, and consistent information on the condition and status of vacant and abandoned properties. Many of these properties were owned by the city and county due to high foreclosure rates. One of Detroit’s strategies for combating crime and stabilizing neighborhoods is to demolish property in a targeted fashion. This strategy serves as a political win as much as an effective way to curb the secondary effects of vacancy: crime, drug use, and arson. Using LocalData, the city mapped critical corridors of emergent commercial property as an analysis tool for deciding where to place investment, and documented thousands of vacant properties to understand where to target demolition.
Vacancy is not unique to the Midwest. Following our work with the Detroit Mayor’s office and planning department, LocalData has been used in dozens of other cities in the U.S. and abroad. Currently the Smart Chicago Collaborative is using LocalData to conduct a similar audit of vacant and abandoned property in southwest Chicago. Though it is an effective tool for capturing building-specific information, LocalData has also been used to capture the behavior and movement of goods. The MIT Megacities Logistics Lab has used LocalData to map and understand the intensity of urban supply chains by interviewing shop owners and mapping delivery routes in global megacities in Mexico, Colombia, Brazil, and the U.S. The resulting information has been used with analytical models to help both city officials and companies design better city logistics policies and operations….”
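The workflow Rouault describes (collecting parcel-level indicators in the field, then aggregating them at a neighborhood scale and exporting the result in formats that government systems can ingest) can be sketched in a few lines. This is not LocalData’s actual code or API; the record layout and field names below are invented for illustration.

```python
import csv
import io
from collections import Counter

# Hypothetical sample of community-collected survey records, in the spirit
# of the parcel-level data described above (field names are invented).
records = [
    {"neighborhood": "Springwells", "condition": "vacant"},
    {"neighborhood": "Springwells", "condition": "occupied"},
    {"neighborhood": "Springwells", "condition": "vacant"},
    {"neighborhood": "Jefferson-Chalmers", "condition": "vacant"},
]

def summarize(rows):
    """Count surveyed conditions per neighborhood."""
    return Counter((r["neighborhood"], r["condition"]) for r in rows)

def to_csv(counts):
    """Export the summary as flat CSV, the kind of layout that
    spreadsheet-based municipal workflows can ingest directly."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["neighborhood", "condition", "count"])
    for (hood, cond), n in sorted(counts.items()):
        writer.writerow([hood, cond, n])
    return buf.getvalue()

summary = summarize(records)
print(to_csv(summary))
```

The point of the sketch is the shape of the pipeline, not the code itself: raw field observations go in, a neighborhood-level rollup comes out in a format existing decision-making tools already understand.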

Using Social Media in Rulemaking: Possibilities and Barriers


New paper by Michael Herz (Cardozo Legal Studies Research Paper No. 417): “‘Web 2.0’ is characterized by interaction, collaboration, non-static web sites, use of social media, and creation of user-generated content. In theory, these Web 2.0 tools can be harnessed not only in the private sphere but as tools for an e-topia of citizen engagement and participatory democracy. Notice-and-comment rulemaking is the pre-digital government process that most approached (while still falling far short of) the e-topian vision of public participation in deliberative governance. The notice-and-comment process for federal agency rulemaking has now changed from a paper process to an electronic one. Expectations for this switch were high; many anticipated a revolution that would make rulemaking not just more efficient, but also more broadly participatory, democratic, and dialogic. In the event, the move online has not produced a fundamental shift in the nature of notice-and-comment rulemaking. At the same time, the online world in general has come to be increasingly characterized by participatory and dialogic activities, with a move from static, text-based websites to dynamic, multi-media platforms with large amounts of user-generated content. This shift has not left agencies untouched. To the contrary, agencies at all levels of government have embraced social media – by late 2013 there were over 1,000 registered federal agency Twitter feeds and over 1,000 registered federal agency Facebook pages, for example – but these have been used much more as tools for broadcasting the agency’s message than for dialogue or obtaining input. All of which invites the question whether agencies could or should directly rely on social media in the rulemaking process.
This study reviews how federal agencies have been using social media to date and considers the practical and legal barriers to using social media in rulemaking, not just to raise the visibility of rulemakings, which is certainly happening, but to gather relevant input and help formulate the content of rules.
The study was undertaken for the Administrative Conference of the United States and is the basis for a set of recommendations adopted by ACUS in December 2013. Those recommendations overlap with but are not identical to the recommendations set out herein.”