Steven Hodas @ The Lean Startup Conference 2013 – Steven runs a procurement-innovation program in one of the world’s most notorious bureaucracies: the New York City Department of Education. In a fear-driven atmosphere, with lots of incentive not to be embarrassed, he’ll talk about the challenges he’s faced and the progress he’s made testing new ideas.
Building Creative Commons: The Five Pillars Of Open Source Finance
Brett Scott: “This is an article about Open Source Finance. It’s an idea I first sketched out at a talk I gave at the Open Data Institute in London. By ‘Open Source Finance’, I don’t just mean open source software programmes. Rather, I’m referring to something much deeper and broader. It’s a way of framing an overall change we might want to see in the financial system….
Pillar 1: Access to the means of financial production
Very few of us perceive ourselves as offering financial services when we deposit our money in banks. Mostly we perceive ourselves as passive recipients of services. Put another way, we frequently don’t imagine we have the capability to produce financial services, even though the entire financial system is foundationally constructed from the actions of small-scale players depositing money into banks and funds, buying the products of companies that receive loans, and culturally validating the money system that the banks uphold. Let’s look, though, at a few examples of prototypes that are breaking this down:
- Peer-to-peer finance models: If you decide to lend money to your friend, you directly perceive yourself as offering them a service. P2P finance platforms extend that concept far beyond your circle of close contacts, so that you can directly offer a financial service to someone who needs it. In essence, such platforms offer you access to an active, direct role in producing financial services, rather than an indirect, passive one.
- Open source financial software: There are many interesting examples of actual open source financial software aimed at helping to fulfil the overall mission of an open source financial system. Check out Mifos, Cyclos, and Hamlets (developed by Community Forge’s Matthew Slater and others), all of which are designed to help people set up their own financial institutions.
- Alternative currencies: There’s a reason why the broader public are suddenly interested in understanding Bitcoin. It’s a currency that people have produced themselves. As a member of the Bitcoin community, I am much more aware of my role in upholding – or producing – the system than I am when using normal money, which I had no conscious role in producing. The scope to invent your own currency goes far beyond crypto-currencies, though: local currencies, time-banks, and mutual credit systems are emerging all over.
- The Open Bank Project is trying to open up banks to third party apps that would allow a depositor to have much greater customisability of their bank account. It’s not aimed at bypassing banks in the way that P2P is, but it’s seeking to create an environment where an ecosystem of alternative systems can plug into the underlying infrastructure provided by banks
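The mutual credit systems mentioned above run on a simple idea: money is created at the moment of trade, as matching positive and negative balances that always sum to zero. A minimal sketch, with illustrative names and limits not taken from any specific platform such as Hamlets or Cyclos:

```python
# Minimal sketch of a mutual credit ledger: every account starts at
# zero, and a payment simply moves balance from payer to payee, with
# the payer allowed to go negative up to a community-set credit limit.

class MutualCreditLedger:
    def __init__(self, credit_limit=100):
        self.balances = {}          # account name -> balance (starts at 0)
        self.credit_limit = credit_limit

    def open_account(self, name):
        self.balances.setdefault(name, 0)

    def pay(self, payer, payee, amount):
        """Record a payment; reject it if the payer would exceed the limit."""
        if self.balances[payer] - amount < -self.credit_limit:
            raise ValueError(f"{payer} would exceed the credit limit")
        self.balances[payer] -= amount
        self.balances[payee] += amount

    def total(self):
        # The sum of all balances is always zero: credit and debt cancel out.
        return sum(self.balances.values())
```

The invariant that `total()` stays at zero is the defining property of mutual credit: no central issuer mints the currency, and the community’s trust (expressed as the credit limit) is what backs it.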
Pillar 2: Widespread distribution
Financial intermediaries like banks and funds serve as powerful gatekeepers of access to financing. To some extent this is a valid role – much like a publisher or music label will attempt to only publish books or music that they believe are high quality enough – but on the other hand, this leads to excessive power vested in the intermediaries, and systematic bias in what gets to survive. When combined with a lack of democratic accountability on the part of the intermediaries, you can have whole societies held hostage to the (arbitrary) whims, prejudices and interests of such intermediaries. Expanding access to financial services is thus a big front in the battle for financial democratisation. In addition to more traditional means of building financial inclusion – such as credit unions and microfinance – here are two areas to look at:
- Crowdfunding: In the dominant financial system, you have to suck up to a single set of gatekeepers to get financing, hoping they won’t exclude you. Crowdfunding though, has expanded access to receiving financial services to a whole host of people who previously wouldn’t have access, such as artists, small-scale filmmakers, activists, and entrepreneurs with no track record. Crowdfunding can serve as a micro redistribution system in society, offering people a direct way to transfer wealth to areas that traditional welfare systems might neglect
- Mobile banking: This is a big area, with important implications for international development and ICT4D. Check out innovations like M-Pesa in Kenya, a technology to use mobile phones as proto-bank accounts. This in itself doesn’t necessarily guarantee inclusion, but it expands potential access to the system to people that most banks ignore.
Pillar 3: The ability to monitor
Do you know where the money in the big banks goes? No, of course not. They don’t publish it, under the guise of commercial secrecy and confidentiality. It’s like they want to have their cake and eat it: “We’ll act as intermediaries on your behalf, but don’t ever ask for any accountability”. And what about the money in your pension fund? Also very little accountability. The intermediary system is incredibly opaque, but attempts to make it more transparent are emerging. Here are some examples:
- Triodos Bank and Charity Bank are examples of banks that publish exactly what projects they lend to. This gives you the ability to hold them to account in a way that no other bank will allow you to do
- Corporations are vehicles for extracting value out of assets and then distributing that value via financial instruments to shareholders and creditors. Corporate structures though, including those used by banks themselves, have reached a level of complexity approaching pure obfuscation. There can be no democratic accountability when you can’t even see who owns what, and how the money flows. Groups like OpenCorporates and Open Oil, though, are offering new open data tools to shine a light on the shadowy world of tax havens, ownership structures and contracts.
- Embedded in peer-to-peer models is a new model of accountability too. When people are treated as mere account numbers with credit scores by banks, the people in return feel little accountability towards the banks. On the other hand, if an individual has directly placed trust in me, I feel much more compelled to respect that
Pillar 4: An ethos of non-prescriptive DIY collaboration
At the heart of open source movements is a deep DIY ethos. This is in part about the sheer joy of producing things, but also about asserting individual power over institutionalised arrangements and pre-established officialdom. Alongside this, and deeply tied to the DIY ethos, is the search to remove individual alienation: You are not a cog in a wheel, producing stuff you don’t have a stake in, in order to consume stuff that you don’t know the origins of. Unalienated labour includes the right to produce where you feel most capable or excited.
This ethos of individual responsibility and creativity stands in contrast to the traditional passive frame of finance that is frequently found on both the Right and Left of the political spectrum. Indeed, the debates around ‘socially useful finance’ are seldom about reducing the alienation of people from their financial lives. They’re mostly about turning the existing financial sector into a slightly more benign dictatorship. The essence of DIY though, is to band together, not via the enforced hierarchy of the corporation or bureaucracy, but as part of a likeminded community of individuals creatively offering services to each other. So let’s take a look at a few examples of this:
- BrewDog’s ‘Equity for Punks’ share offering is probably only going to attract beer-lovers, but that’s the point – you get together as a group that shares an appreciation for a project, you finance it, and then when you’re drinking the beer you’ll know you helped make it happen in a small way.
- Community shares offer local groups the ability to finance projects that are meaningful to them in a local area. Here’s one for a solar co-operative, a pub, and a ferry boat service in Bristol
- We’ve already discussed how crowdfunding platforms open access to finance to people excluded from it, but they do this by offering would-be crowdfunders the chance to support things that excite them. I don’t have much cash, so I’m not in a position to actively finance people, but in my Indiegogo profile you can see I make an effort to publicise campaigns that I want to see financed.
Pillar 5: The right to fork
The right to dissent is a crucial component of a democratic society. But for dissent to be effective, it has to be informed and constructive, rather than reactive and regressive. There is much dissent towards the current financial system, but while people are free to voice their displeasure, they find it very difficult to actually act on their displeasure. We may loathe the smug banking oligopoly, but we’re frequently compelled to use them.
Furthermore, much dissent doesn’t have a clear vision of what alternative is sought. This is partially due to the fact that access to financial ‘source code’ is so limited. It’s hard to articulate ideas about what’s wrong when one cannot articulate how the current system operates. Most financial knowledge is held in proprietary formulations and obscure jargon-laden language within the financial sector, and this needs to change. It’s for this reason that I’m building the London School of Financial Activism, so ordinary people can explore the layers of financial code, from the deepest layer – the money itself – and then on to the institutions, instruments and networks that move it around….”
How Big Should Your Network Be?
Michael Simmons at Forbes: “There is a debate happening between software developers and scientists: How large can and should our networks be in this evolving world of social media? The answer to this question has dramatic implications for how we look at our own relationship building…
To better understand our limits, I connected with the famous British anthropologist and evolutionary psychologist Robin Dunbar, creator of his namesake, Dunbar’s number.
Dunbar’s number, 150, is the suggested cognitive limit to the number of relationships we can maintain where both parties are willing to do favors for each other.
Dunbar’s discovery was a very high correlation between the size of a species’ neocortex and its average social group size. The theory predicted 150 for humans, and this number is found throughout human communities over time….
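The correlation Dunbar found was between log neocortex ratio and log group size across primate species. A toy illustration of that kind of calculation, using made-up numbers that merely mimic the shape of his data (they are not his actual dataset):

```python
# Illustrative only: invented values standing in for primate data
# (neocortex ratio vs. mean social group size), chosen to show how
# a strong log-log correlation is computed, not to reproduce Dunbar.
import math

neocortex_ratio = [2.1, 2.4, 2.8, 3.2, 3.8, 4.1]
group_size      = [12,  18,  27,  40,  65,  80]

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Dunbar's regression relates the logs of the two quantities;
# extrapolating the fitted line to the human neocortex ratio
# is what yields the famous prediction of roughly 150.
r = pearson_r([math.log(x) for x in neocortex_ratio],
              [math.log(y) for y in group_size])
```

With numbers shaped like these, `r` comes out very close to 1, which is the sense in which the neocortex is said to "predict" group size.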
Does Dunbar’s Number Still Apply In Today’s Connected World?
There are two camps when it comes to Dunbar’s number. The first camp is embodied by David Morin, the founder of Path, who built a whole social network predicated on the idea that you cannot have more than 150 friends. Robin Dunbar falls into this camp and even did an academic study on social media’s impact on Dunbar’s number. When I asked for his opinion, he replied:
The 150 limit applies to internet social networking sites just as it does in face-to-face life. Facebook’s own data shows that the average number of friends is 150-250 (within the range of variation in the face-to-face world). Remember that the 150 figure is just the average for the population as a whole. However, those who have more seem to have weaker friendships, suggesting that the amount of social capital is fixed and you can choose to spread it thickly or thinly.
Zvi Band, the founder of Contactually, a rapidly growing, venture-backed, relationship management tool, disagrees with both Morin and Dunbar, “We have the ability as a society to bust through Dunbar’s number. Current software can extend Dunbar’s number by at least 2-3 times.” To understand the power of Contactually and tools like it, we must understand the two paradigms people currently use when keeping in touch: broadcast & one-on-one.
While broadcast email makes it extremely easy to reach lots of people who want to hear from us, it lacks personalization. Personalization is what transforms information diffusion into personal relationship building. To make matters worse, email broadcast open rates have halved over the last decade.
On the other end of the spectrum is one-on-one outreach. Research performed by Facebook data scientists shows that one-on-one outreach is extremely effective and explains why:
Both the offering and the receiving of the intimate information increases relationship strength. Providing a partner with personal information expresses trust, encourages reciprocal self-disclosure, and engages the partner in at least some of the details of one’s daily life. Directed communication evokes norms of reciprocity, so may obligate partner to reply. The mere presence of the communication, which is relatively effortful compared to broadcast messages, also signals the importance of the relationship….”
When Tech Culture And Urbanism Collide
John Tolva: “…We can build upon the success of the work being done at the intersection of technology and urban design, right now.
For one, the whole realm of social enterprise — for-profit startups that seek to solve real social problems — has a huge overlap with urban issues. Impact Engine in Chicago, for instance, is an accelerator squarely focused on meaningful change and profitable businesses. One of their companies, Civic Artworks, has set as its goal rebalancing the community planning process.
The Code for America Accelerator and Tumml, both located in San Francisco, morph the concept of social innovation into civic/urban innovation. The companies nurtured by CfA and Tumml are filled with technologists and urbanists working together to create profitable businesses. Like WorkHands, a kind of LinkedIn for blue collar trades. Would something like this work outside a city? Maybe. Are its effects outsized and scale-ready in a city? Absolutely. That’s the opportunity in urban innovation.
Scale is what powers the sharing economy and it thrives because of the density and proximity of cities. In fact, shared resources at critical density is one of the only good definitions for what a city is. It’s natural that entrepreneurs have overlaid technology on this basic fact of urban life to amplify its effects. Would TaskRabbit, Hailo or LiquidSpace exist in suburbia? Probably, but their effects would be minuscule and investors would get restless. The city in this regard is the platform upon which sharing economy companies prosper. More importantly, companies like this change the way the city is used. It’s not urban planning, but it is urban (re)design and it makes a difference.
A twist that many in the tech sector who complain about cities often miss is that change in a city is not the same thing as change in city government. Obviously they are deeply intertwined; change is mighty hard when it is done at cross-purposes with government leadership. But it happens all the time. Non-government actors — foundations, non-profits, architecture and urban planning firms, real estate developers, construction companies — contribute massively to the shape and health of our cities.
Often this contribution is powered through policies of open data publication by municipal governments. Open data is the raw material of a city, the vital signs of what has happened there, what is happening right now, and the deep pool of patterns for what might happen next.
Tech entrepreneurs would do well to look at the organizations and companies capitalizing on this data as the real change agents, not government itself. Even the data in many cases is generated outside government. Citizens often do the most interesting data-gathering, with tools like LocalData. The most exciting thing happening at the intersection of technology and cities today — what really makes them “smart” — is what is happening at the periphery of city government. It’s easy to belly-ache about government and certainly there are administrations that do not make data public (or shut it down), but tech companies who are truly interested in city change should know that there are plenty of examples of how to start up and do it.
And yet, the somewhat staid world of architecture and urban-scale design presents the most opportunity to a tech community interested in real urban change. While technology obviously plays a role in urban planning — 3D visual design tools like Revit and mapping services like ArcGIS are foundational for all modern firms — data analytics as a serious input to design matters has only been used in specialized (mostly energy efficiency) scenarios. Where are the predictive analytics, the holistic models, the software-as-a-service providers for the brave new world of urban informatics and The Internet of Things? Technologists, it’s our move.
Something’s amiss when some city governments — rarely the vanguard in technological innovation — have more sophisticated tools for data-driven decision-making than the private sector firms who design the city. But some understand the opportunity. Vannevar Technology is working on it, as is Synthicity. There’s plenty of room for the most positive aspects of tech culture to remake the profession of urban planning itself. (Look to NYU’s Center for Urban Science and Progress and the University of Chicago’s Urban Center for Computation and Data for leadership in this space.)…”
Brainlike Computers, Learning From Experience
The New York Times: “Computers have entered the age when they are able to learn from their own mistakes, a development that is about to turn the digital world on its head.
The first commercial version of the new kind of computer chip is scheduled to be released in 2014. Not only can it automate tasks that now require painstaking programming — for example, moving a robot’s arm smoothly and efficiently — but it can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.
The new computing approach, already in use by some large technology companies, is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.
In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That can hold enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.
Designers say the computing style can clear the way for robots that can safely walk and drive in the physical world, though a thinking or conscious computer, a staple of science fiction, is still far off on the digital horizon.
“We’re moving from engineering computing systems to something that has many of the characteristics of biological computing,” said Larry Smarr, an astrophysicist who directs the California Institute for Telecommunications and Information Technology, one of many research centers devoted to developing these new kinds of computer circuits.
Conventional computers are limited by what they have been programmed to do. Computer vision systems, for example, only “recognize” objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation.
But last year, Google researchers were able to get a machine-learning algorithm, known as a neural network, to perform an identification task without supervision. The network scanned a database of 10 million images, and in doing so trained itself to recognize cats.
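Google's cat experiment is a dramatic instance of unsupervised learning: finding structure in data that carries no labels. A far simpler algorithm in the same family is k-means clustering, sketched below on a tiny made-up one-dimensional dataset (the data and cluster count are illustrative, and this is nothing like the scale of a neural network trained on 10 million images):

```python
# k-means clustering: groups unlabeled points around k centers by
# alternating two steps until the centers stop moving.
import random

def kmeans(points, k=2, iterations=20, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iterations):
        # Step 1: assign each point to its nearest center (no labels needed)
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: (p - centers[i]) ** 2)
            clusters[idx].append(p)
        # Step 2: move each center to the mean of its assigned cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two obvious groups, around 1.0 and 10.0, emerge without supervision
data = [0.9, 1.0, 1.1, 9.8, 10.0, 10.2]
centers = kmeans(data, k=2)
```

The algorithm is never told which points belong together; the grouping falls out of the data itself, which is the essence of what "without supervision" means in the article.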
In June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately.
The new approach, used in both hardware and software, is being driven by the explosion of scientific knowledge about the brain. Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, said that is also its limitation, as scientists are far from fully understanding how brains function.”
Rethinking Why People Participate
Tiago Peixoto: “Having a refined understanding of what leads people to participate is one of the main concerns of those working with citizen engagement. But particularly when it comes to participatory democracy, that understanding is only partial and, most often, the cliché “more research is needed” is definitely applicable. This is so for a number of reasons, four of which are worth noting here.
- The “participatory” label is applied to greatly varied initiatives, raising obvious methodological challenges for comparative research and cumulative learning. For instance, while both participatory budgeting and online petitions can be roughly categorized as “participatory” processes, they are entirely different in terms of fundamental aspects such as their goals, institutional design and expected impact on decision-making.
- The fact that many participatory initiatives are conceived as “pilots” or one-off events gives researchers little time to understand the phenomenon, come up with sound research questions, and test different hypotheses over time. The “pilotitis” syndrome in the tech4accountability space is a good example of this.
- When designing and implementing participatory processes, in the face of budget constraints the first victims are documentation, evaluation and research. Apart from a few exceptions, this leads to a scarcity of data and basic information that undermines even the most heroic “archaeological” efforts of retrospective research and evaluation (a far from ideal approach).
- The semantic extravaganza that currently plagues the field of citizen engagement, technology and open government makes cumulative learning all the more difficult.
Precisely for the opposite reasons, our knowledge of electoral participation is in better shape. First, despite the differences between elections, comparative work is relatively easy, which is attested by the high number of cross-country studies in the field. Second, the fact that elections (for the most part) are repeated regularly and following a similar design enables the refinement of hypotheses and research questions over time, and specific time-related analysis (see an example here [PDF]). Third, when compared to the funds allocated to research in participatory initiatives, the relative amount of resources channeled into electoral studies and voting behavior is significantly higher. Here I am not referring to academic work only but also to the substantial resources invested by the private sector and parties towards a better understanding of elections and voting behavior. This includes a growing body of knowledge generated by get-out-the-vote (GOTV) research, with fascinating experimental evidence from interventions that seek to increase participation in elections (e.g. door-to-door campaigns, telemarketing, e-mail). Add to that the wealth of electoral data that is available worldwide (in machine-readable formats) and you have some pretty good knowledge to tap into. Finally, both conceptually and terminologically, the field of electoral studies is much more consistent than the field of citizen engagement which, in the long run, tends to drastically impact how knowledge of a subject evolves.
These reasons should be sufficient to capture the interest of those who work with citizen engagement. While the extent to which the knowledge from the field of electoral participation can be transferred to non-electoral participation remains an open question, it should at least provide citizen engagement researchers with cues and insights that are very much worth considering…”
Can a Better Taxonomy Help Behavioral Energy Efficiency?
Article at GreenTechEfficiency: “Hundreds of behavioral energy efficiency programs have sprung up across the U.S. in the past five years, but the effectiveness of the programs — both in terms of cost savings and reduced energy use — can be difficult to gauge.
Of nearly 300 programs, a new report from the American Council for an Energy-Efficient Economy was able to accurately calculate the cost of saved energy from only ten programs….
To help utilities and regulators better define and measure behavioral programs, ACEEE offers a new taxonomy of utility-run behavior programs that breaks them into three major categories:
Cognition: Programs that focus on delivering information to consumers. (This includes general communication efforts, enhanced billing and bill inserts, social media and classroom-based education.)
Calculus: Programs that rely on consumers making economically rational decisions. (This includes real-time and asynchronous feedback, dynamic pricing, games, incentives and rebates and home energy audits.)
Social interaction: Programs whose key drivers are social interaction and belonging. (This includes community-based social marketing, peer champions, online forums and incentive-based gifts.)
….
While the report was mostly preliminary, it also offered four steps forward for utilities that want to make the most of behavioral programs.
Stack. The types of programs might fit into three broad categories, but judiciously blending cues based on emotion, reason and social interaction into programs is key, according to ACEEE. Even though the report recommends stacked programs that have a multi-modal approach, the authors acknowledge, “This hypothesis will remain untested until we see more stacked programs in the marketplace.”
Track. Just like other areas of grid modernization, utilities need to rethink how they collect, analyze and report the data coming out of behavioral programs. This should include metrics that go beyond just energy savings.
Share. As with other utility programs, behavior-based energy efficiency programs can be improved upon if utilities share results and if reporting is standardized across the country instead of varying by state.
Coordinate. Sharing is only the first step. Programs that merge water, gas and electricity efficiency can often gain better results than siloed programs. That approach, however, requires a coordinated effort by regional utilities and a change to how programs are funded and evaluated by regulators.”
Crowdsourcing drug discovery: Antitumour compound identified
David Bradley in Spectroscopy.now: “American researchers have used “crowdsourcing” – the cooperation of a large number of interested non-scientists via the internet – to help them identify a new fungus. The species contains unusual metabolites, which were isolated and characterized with the help of vibrational circular dichroism (VCD). One compound reveals itself to have potential antitumour activity.
So far, a mere 7 percent of the more than 1.5 million species of fungi thought to exist have been identified and an even smaller fraction of these have been the subject of research seeking bioactive natural products. …Robert Cichewicz of the University of Oklahoma, USA, and his colleagues hoped to remedy this situation by working with a collection of several thousand fungal isolates from three regions: Arctic Alaska, tropical Hawaii, and subtropical to semiarid Oklahoma. Collaborator Susan Mooberry of the University of Texas at San Antonio carried out biological assays on many fungal isolates looking for antitumor activity among the metabolites in Cichewicz’s collection. A number of interesting substances were identified…
However, the researchers realized quickly enough that the efforts of a single research team were inadequate if samples representing the immense diversity of the thousands of fungi they hoped to test were to be obtained and tested. They thus turned to the help of citizen scientists in a “crowdsourcing” initiative. In this approach, lay people with an interest in science, and even fellow scientists in other fields, were recruited to collect and submit soil from their gardens.
As the samples began to arrive, the team quickly found among them a previously unknown fungal strain – a Tolypocladium species – growing in a soil sample from Alaska. Colleague Andrew Miller of the University of Illinois did the identification of this new fungus, which was found to be highly responsive to making new compounds based on changes in its laboratory growth conditions. Moreover, extraction of the active chemicals from the isolate revealed a unique metabolite which was shown to have significant antitumour activity in laboratory tests. The team suggests that this novel substance may represent a valuable new approach to cancer treatment because it precludes certain biochemical mechanisms that lead to the emergence of drug resistance in cancer with conventional drugs…
The researchers point out the essential roles that citizen scientists can play. “Many of the groundbreaking discoveries, theories, and applied research during the last two centuries were made by scientists operating from their own homes,” Cichewicz says. “Although much has changed, the idea that citizen scientists can still participate in research is a powerful means for reinvigorating the public’s interest in science and making important discoveries,” he adds.”
6 New Year’s Strategies for Open Data Entrepreneurs
The GovLab’s Senior Advisor Joel Gurin: “Open Data has fueled a wide range of startups, including consumer-focused websites, business-to-business services, data-management tech firms, and more. Many of the companies in the Open Data 500 study are new ones like these. New Year’s is a classic time to start new ventures, and with 2014 looking like a hot year for Open Data, we can expect more startups using this abundant, free resource. For my new book, Open Data Now, I interviewed dozens of entrepreneurs and distilled six of the basic strategies that they’ve used.
1. Learn how to add value to free Open Data. We’re seeing an inversion of the value proposition for data. It used to be that whoever owned the data—particularly Big Data—had greater opportunities than those who didn’t. While this is still true in many areas, it’s also clear that successful businesses can be built on free Open Data that anyone can use. The value isn’t in the data itself but rather in the analytical tools, expertise, and interpretation that’s brought to bear. One oft-cited example: The Climate Corporation, which built a billion-dollar business out of government weather and satellite data that’s freely available for use.
2. Focus on big opportunities: health, finance, energy, education. A business can be built on just about any kind of Open Data. But the greatest number of startup opportunities will likely be in the four big areas where the federal government is focused on Open Data release. Last June’s Health Datapalooza showcased the opportunities in health. Companies like Opower in energy, GreatSchools in education, and Calcbench, SigFig, and Capital Cube in finance are examples in these other major sectors.
3. Explore choice engines and Smart Disclosure apps. Smart Disclosure – releasing data that consumers can use to make marketplace choices – is a powerful tool that can be the basis for a new sector of online startups. No one, it seems, has quite figured out how to make this form of Open Data work best, although sites like CompareTheMarket in the UK may be possible models. Business opportunities await anyone who can find ways to provide these much-needed consumer services. One example: Kayak, which competed in the crowded travel field by providing a great consumer interface, and which was sold to Priceline for $1.8 billion last year.
4. Help consumers tap the value of personal data. In a privacy-conscious society, more people will be interested in controlling their personal data and sharing it selectively for their own benefit. The value of personal data is just being recognized, and opportunities remain to be developed. There are business opportunities in setting up and providing “personal data vaults” and more opportunity in applying the many ways they can be used. Personal and Reputation.com are two leaders in this field.
5. Provide new data solutions to governments at all levels. Government datasets at the federal, state, and local level can be notoriously difficult to use. The good news is that these governments are now realizing that they need help. Data management for government is a growing industry, as Socrata, OpenGov, 3RoundStones, and others are finding, while companies like Enigma.io are turning government data into a more usable resource.
6. Look for unusual Open Data opportunities. Building a successful business by gathering data on restaurant menus and recipes is not an obvious route to success. But it’s working for Food Genius, whose founders showed a kind of genius in tapping an opportunity others had missed. While the big areas for Open Data are becoming clear, there are countless opportunities to build more niche businesses that can still be highly successful. If you have expertise in an area and see a customer need, there’s an increasingly good chance that the Open Data to help meet that need is somewhere to be found.”
The Postmodernity of Big Data
Essay by Michael Pepi in the New Inquiry: “Big Data fascinates because its presence has always been with us in nature. Each tree, drop of rain, and the path of each grain of sand, both responds to and creates millions of data points, even on a short journey. Nature is the original algorithm, the most efficient and powerful. Mathematicians since the ancients have looked to it for inspiration; techno-capitalists now look to unlock its mysteries for private gain. Playing God has become all the more brisk and profitable thanks to cloud computing.
But beyond economic motivations for Big Data’s rise, are there also epistemological ones? Has Big Data come to try to fill the vacuum of certainty left by postmodernism? Does data science address the insecurities of the postmodern thought?
It turns out that trying to explain Big Data is like trying to explain postmodernism. Neither can be summarized effectively in a phrase, despite their champions’ efforts. Broad epistemological developments are compressed into cursory, ex post facto descriptions. Attempts to define Big Data, such as IBM’s marketing copy, which promises “insights gleaned” from “enterprise data warehouses that implement massively parallel processing,” “real-time scalability” and “parsing structured and unstructured sources,” focus on its implementation at the expense of its substance, decontextualizing it entirely. Similarly, definitions of postmodernism, like art critic Thomas McEvilley’s claim that it is “a renunciation that involves recognition of the relativity of the self—of one’s habit systems, their tininess, silliness, and arbitrariness” are accurate but abstract to the point of vagueness….
Big Data might come to be understood as Big Postmodernism: the period in which the influx of unstructured, non-teleological, non-narrative inputs ceased to destabilize the existing order and was instead finally mastered and processed by a sufficiently complex, distributed, and pluralized algorithmic regime. If Big Data has a skepticism built in, how it differs from the skepticism of postmodernism is perhaps impossible yet to comprehend”.