Crowd-Sourced Augmented Realities: Social Media and the Power of Digital Representation


Pre-publication version of a chapter by Matthew Zook, Mark Graham and Andrew Boulton in S. Mains, J. Cupples, and C. Lukinbeal. Mediated Geographies/Geographies of Media. Springer Science International Handbooks in Human Geography, (Forthcoming): “A key and distinguishing feature of society today is that it is increasingly documented by crowd-sourced social media discourse about public experiences. Much of this social media content is geo-referenced and exists in layers of information draped over the physical world, invisible to the naked eye but accessible to a range of digital (and often mobile) devices. When we access these information layers, they mediate the mundane practices of everyday life (e.g., What or who is nearby? How do I move from point A to B?) through the creation of augmented realities, i.e., unstable, context-dependent representations of places brought temporarily into being by combining the space of material and virtual experience.
These augmented realities, as particular representations of locations, places and events, are vigorously promoted or contested and thus become important sites in which power is exercised, much in the same way that maps have long had power to reinforce or challenge the status quo. However, because many of the processes and practices behind the creation of augmented realities are unseen, their power is often overlooked in the process of representation or place-making. This paper highlights the points at which power acts and demonstrates that all representations of place – including augmented realities derived from social media – are products of, and productive of, social relationships and associated power relations.”
Building upon a case study of Abbottabad, Pakistan, after the raid on Osama bin Laden’s compound, we construct a four-part typology of the power relations emerging from social practices that enact augmented realities:

    • Distributed power: the complex and socially/spatially distributed authorship of user-generated geospatial content.
    • Communication power: the ways in which particular representations gain prominence, with language a particularly important variable.
    • Code power: the autonomy of software code to regulate actions, mediate content, or order representations in particular ways.
    • Timeless power: the ways in which digital representations of place reconfigure temporal relationships between people and events, particularly sequence and duration.

Business Models That Take Advantage of Open Data Opportunities


Mark Boyd at ProgrammableWeb: “At last week’s OKFestival in Berlin, Kat Borlongan and Chloé Bonnet from Parisian open data startup Five By Five moderated an interactive speed-geek session to examine how startups are building viability using open data and open data APIs. The picture that emerged revealed a variety of composite approaches being used, with all those presenting having just one thing in common: a commitment to fostering ecosystems that will allow other startups to build alongside them.
The OKFestival—hosted by the Open Knowledge Foundation—brought together more than 1,000 participants from around the globe working on various aspects of the open data agenda: the use of corporate data, open science research, government open data and crowdsourced data projects.
In a session held on the first day of the event, Borlongan facilitated an interactive workshop to help would-be entrepreneurs understand how startups are building business models that take advantage of open data opportunities to create sustainable, employment-generating businesses.
Citing research from the McKinsey Global Institute that calculates the value of open data at $3 trillion globally, Borlongan said: “So the understanding of the open data process is usually: We throw open data over the wall, then we hold a hackathon, and then people will start making products off it, and then we make the $3 trillion.”
Borlongan argued that it is actually a “blurry identity to be an open data startup” and encouraged participants to unpack, with each of the startups presenting, exactly how income can be generated and a viable business built in this space.
Jeni Tennison, from the U.K.’s Open Data Institute (which supports 15 businesses in its Startup Programme), categorized two types of business models:

  1. Businesses that publish (but do not sell) open data.
  2. Businesses built on top of using open data.

Businesses That Publish but Do Not Sell Open Data

At the Open Data Institute, Tennison is investigating the possibility of an open address database that would provide street address data for every property in the U.K. She describes three types of business models that could be created by projects that generated and published such data:
Freemium: In this model, the bulk data of open addresses could be made available freely, “but if you want an API service, then you would pay for it.” Tennison also pointed to opportunities to degrade the freemium-level data—for example, making it available in bulk but not at a particularly granular level (unless you pay), or provisioning reuse on a share-only basis, with payment required for corporate use cases (similar to how OpenCorporates sells access to its data). A minimal code sketch of this freemium gating appears after the three models below.
Cross-subsidy: In this approach, the data would be available, and the opportunities to generate income would come from providing extra services, like consultancy or white labeling data services alongside publishing the open data.
Network: In this business model, value is created by generating a network effect around the core business interest, which may not be the open data itself. As an example, Tennison suggested that if a post office or delivery company were to create the open address database, it might be interested in encouraging private citizens to collaboratively maintain or crowdsource the quality of the data. The return on this open data would then come from reductions in the cost of delivery services as the data’s accuracy improved.
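As referenced above, here is a minimal sketch of the freemium gating Tennison describes: the bulk file stays free for everyone while granular, per-record API access is metered by plan. The quotas, plan labels and names are hypothetical illustrations, not an ODI implementation.

    from dataclasses import dataclass

    FREE_MONTHLY_CALLS = 100  # hypothetical free-tier quota

    @dataclass
    class User:
        plan: str                  # "free" or "paid" (illustrative labels)
        calls_this_month: int = 0

    def can_call_api(user: User) -> bool:
        """Gate granular API lookups: paid users are unmetered; free users
        get a monthly quota and are pointed at the free bulk file after."""
        if user.plan == "paid":
            return True
        if user.calls_this_month < FREE_MONTHLY_CALLS:
            user.calls_this_month += 1
            return True
        return False  # over quota: serve the free bulk download instead

    hobbyist = User(plan="free", calls_this_month=100)
    company = User(plan="paid")
    print(can_call_api(hobbyist))  # False -> degraded to the free bulk data
    print(can_call_api(company))   # True  -> granular, paid API response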

Businesses Built on Top of Open Data

Six startups working in unique ways to make use of available open data also presented their business models to OKFestival attendees: Development Seed, Mapbox, OpenDataSoft, Enigma.io, Open Bank Project, and Snips.

Startup: Development Seed
What it does: Builds solutions for development, public health and citizen democracy challenges by creating open source tools and utilizing open data.
Open data API focus: Regularly uses open data APIs in its projects. For example, it worked with the World Bank to create a data visualization website built on top of the World Bank API.
Type of business model: Consultancy, but it has also created new businesses out of the products developed as part of its work, most notably Mapbox (see below).
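The World Bank's open data API is public; below is a minimal sketch of the kind of call such a visualization site can be built on. The country and indicator codes are real World Bank identifiers, but the parameter choices and output handling are our illustration, not Development Seed's code.

    import json
    import urllib.request

    # Query the World Bank open data API (v2) for Brazil's total population,
    # 2010-2012, as JSON. The response is a two-element array: request
    # metadata first, then the data points themselves.
    URL = ("https://api.worldbank.org/v2/country/BR/indicator/SP.POP.TOTL"
           "?format=json&date=2010:2012")

    with urllib.request.urlopen(URL) as resp:
        metadata, points = json.load(resp)

    for point in points:
        print(point["date"], point["value"])  # year, population value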

Startup: Enigma.io
What it does: Open data platform with advanced discovery and search functions.
Open data API focus: Provides the Enigma API to allow programmatic access to all data sets and some analytics from the Enigma platform.
Type of business model: SaaS including a freemium plan with no degradation of data and with access to API calls; some venture funding; some contracting services to particular enterprises; creating new products in Enigma Labs for potential later sale.

Startup: Mapbox
What it does: Enables users to design and publish maps based on crowdsourced OpenStreetMap data.
Open data API focus: Uses OpenStreetMap APIs to draw data into its map-creation interface; provides the Mapbox API to allow programmatic creation of maps using Mapbox web services.
Type of business model: SaaS including freemium plan; some tailored contracts for big map users such as Foursquare and Evernote.
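To make “programmatic creation of maps” concrete, here is a hedged sketch against Mapbox's Static Images API. The endpoint shape reflects current documentation and has changed across API versions; the style id and YOUR_TOKEN are placeholders you would supply.

    import urllib.request

    # Request a rendered 600x400 map image centred on San Francisco from
    # the Mapbox Static Images API. YOUR_TOKEN is a placeholder, and the
    # endpoint shape differs in older versions of the API.
    lon, lat, zoom = -122.4194, 37.7749, 12
    url = (f"https://api.mapbox.com/styles/v1/mapbox/streets-v12/static/"
           f"{lon},{lat},{zoom}/600x400?access_token=YOUR_TOKEN")

    with urllib.request.urlopen(url) as resp, open("map.png", "wb") as out:
        out.write(resp.read())  # a PNG built on OpenStreetMap-derived data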

Startup: Open Bank Project
What it does: Creates an open source API for use by banks.
Open data API focus: Its core product is an open source API that banks can adopt as a standard tool when creating applications and web services for their clients.
Type of business model: Contract license with tiered SLAs depending on the number of applications built using the API; IT consultancy projects.

Startup: OpenDataSoft
What it does: Provides an open data publishing platform so that cities, governments, utilities and companies can publish their own data portal for internal and public use.
Open data API focus: It’s able to route data sources into the portal from a publisher’s APIs; provides automatic API-creation tools so that any data set uploaded to the portal is then available as an API.
Type of business model: SaaS model with freemium plan, pricing by number of data sets published and number of API calls made against the data, with free access for academic and civic initiatives.
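A sketch of what those auto-generated APIs look like from a consumer's side, using OpenDataSoft's v1 records search endpoint; the portal domain and dataset id below are hypothetical placeholders.

    import json
    import urllib.request

    # Query a dataset published on a (hypothetical) OpenDataSoft portal via
    # the auto-generated v1 records search API.
    url = ("https://example-city.opendatasoft.com/api/records/1.0/search/"
           "?dataset=street-trees&rows=5")

    with urllib.request.urlopen(url) as resp:
        results = json.load(resp)

    for record in results.get("records", []):
        print(record["fields"])  # each uploaded row is served back as JSON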

Startup: Snips
What it does: Predictive modeling for smart cities.
Open data API focus: Channels open data and clients’ proprietary data into its modeling algorithm calculations via API; provides a predictive modeling API that clients can use to programmatically generate solutions based on their data.
Type of business model: Creating one B2C app for sale as a revenue-generating product; individual contracts with cities and companies to solve particular pain points, such as using predictive modeling to help a post office company better manage staff rosters (matched to sales needs) and a consultancy project to create a visualization mapping tool that predicts the risk of car accidents for a city….”

Portugal: Municipal Transparency Portal


“The Municipal Transparency Portal is an initiative of the XIX Constitutional Government to increase the transparency of local public administration toward citizens. It presents and makes available a set of indicators on the management of the 308 Portuguese municipalities, as well as their aggregation into inter-municipal entities (metropolitan areas and intermunicipal communities) where applicable.
Indicators
The indicators are organized in 6 groups:

    • Financial management: financial indicators relating to indebtedness, municipal revenue and expenditure
    • Administrative management: indicators relating to municipal human resources, public procurement and transparency of municipal information
    • Fiscal decisions of the municipality: rates determined by the municipalities for IMI and for the IRS and IRC surcharges
    • Economic dynamics of the municipality: indicators about local economic activity of citizens and businesses
    • Municipal services: indicators regarding the main public services with relevant intervention of municipalities (water and waste treatment, education and housing)
    • Municipal electoral turnout: citizens taking part in local elections and voting results.

More: http://www.portalmunicipal.pt/”
 

Americans hate Congress. They will totally teach it a lesson by not voting.


In the Washington Post: “Americans are angry at Congress — more so than basically ever before. So it’s time to throw the bums out, right?
Well, not really. In fact, Americans appear prepared to deal with their historic unhappiness using perhaps the least-productive response: Staying home.
A new study shows that Americans are on track to set a new low for turnout in a midterm election, and a record number of states could set their own records for the lowest percentage of eligible citizens casting ballots.
The study, from the Center for the Study of the American Electorate, shows turnout in the 25 states that have held statewide primaries for both parties is down by nearly one-fifth from the last midterm, in 2010. While 18.3 percent of eligible voters cast ballots back then, it has been just 14.8 percent so far this year. Similarly, 15 of the 25 states that have held statewide primaries so far have recorded record-low turnout….
This is all the more depressing when you realize that, less than 50 years ago, primary turnout was twice as high.


[Chart: midterm primary election turnout over time. Courtesy: Center for the Study of the American Electorate]

But, really, this isn’t all that new. As you can see above, turnout has been dropping steadily for years….
More than that, though, the poll reinforces that, no matter how upset people are with Congress, they still aren’t really feeling the need to do much of anything about it. Some might argue that they feel powerless to effect real change, but failure to even vote suggests they’re not really interested in trying — or maybe they’re not really all that mad.”

A framework for measuring smart cities


Paper by Félix Herrera Priano and Cristina Fajardo Guerra for the Proceedings of the 15th Annual International Conference on Digital Government Research: “Smart cities are an international phenomenon. Many cities are actively working to build or transform their models toward that of a Smart City. There is a constant stream of research and reports devoted to measuring the intelligence of cities by establishing specific methodologies and indicators (grouped by various criteria).
We believe the subject lacks a certain uniformity, which we aim to redress in this paper by suggesting a framework for properly measuring the smart level of a city.
Cities are complex and heterogeneous structures, which complicates comparisons between them. To address this, we propose an N-dimensional measurement framework where each level or dimension supplies information of interest that is evaluated independently. As a result, the measure of a city’s intelligence is the combination of the evaluations obtained for each of these levels.
To this end, we have typified the transformation (city to smart city) and the measurement (smart city ranking) processes.”
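The paper's formulas are not reproduced in this excerpt, but the core idea (score each level or dimension independently, then combine the evaluations) can be sketched. The dimension names, indicator values and equal weighting below are illustrative assumptions, not the authors' model.

    # Illustrative sketch only: score each dimension of a city on its own,
    # then aggregate. Dimensions, indicator values and the equal weighting
    # are invented for illustration; the paper does not specify them.
    CITY = {
        "mobility":    {"transit_coverage": 0.7, "ev_share": 0.2},
        "environment": {"air_quality": 0.8, "green_space": 0.6},
        "governance":  {"open_datasets": 0.5, "e_services": 0.9},
    }

    def dimension_score(indicators):
        """Evaluate one dimension independently of the others."""
        return sum(indicators.values()) / len(indicators)

    scores = {dim: dimension_score(ind) for dim, ind in CITY.items()}
    overall = sum(scores.values()) / len(scores)  # equal weights assumed

    print(scores)             # per-dimension evaluations, reported separately
    print(round(overall, 3))  # one aggregate "smart level" for the city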

Big Money, Uncertain Return


Mary K. Pratt in an MIT Technology Review Special Report on Data-Driven Health Care: “Hospitals are spending billions collecting and analyzing medical data. The one data point no one is tracking: the payoff…. Ten years ago, Kaiser Permanente began building a $4 billion electronic-health-record system that includes a comprehensive collection of health-care data ranging from patients’ treatment records to research-based clinical advice. Now Kaiser has added advanced analytics tools and data from more sources, including a pilot program that integrates information from patients’ medical devices.

Faced with new government regulations and insurer pressure to control costs, other health-care organizations are following Kaiser’s example and increasing their use of analytics. The belief: that mining their vast quantities of patient data will yield insights into the best treatments at the lowest cost.

But just how big will the financial payoff be? Terhilda Garrido, vice president of health IT transformation and analytics at Kaiser, admits she doesn’t know. Nor do other health-care leaders. The return on investment for health-care analytics programs remains elusive and nearly impossible for most to calculate…

Opportunities to identify the most effective treatments could slip away if CIOs and their teams aren’t able to quantify the return on their analytics investments. Health-care providers are under increasing pressure to cut costs in an era of capped billing, and executives at medical organizations won’t okay spending their increasingly limited dollars on data warehouses, analytics software, and data scientists if they can’t be sure they’ll see real benefit.

A new initiative at Cleveland Clinic shows the opportunities and challenges. By analyzing patients’ records on their overall health and medical conditions, the medical center determines which patients coming in for hip and knee replacements can get postoperative services in their own homes (the most cost-effective option), which ones will need a short stay in a skilled nursing facility, and which ones will have longer stints in a skilled nursing facility (the most costly option). The classifications control costs while still ensuring the best possible medical outcomes, says CIO C. Martin Harris.

That does translate into real—and significant—financial benefits, but Harris wonders how to calculate the payoff from his data investment. Should the costs of every system from which patient data is pulled be part of the equation in addition to the costs of the data warehouse and analytics tools? Calculating how much money is saved by implementing better protocols is not straightforward either. Harris hesitates to attribute better, more cost-effective patient outcomes solely to analytics when many other factors are also likely contributors…”
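Harris's accounting question can be made concrete with invented figures (every dollar amount below is hypothetical): whether the source systems from which patient data is pulled count toward the denominator can flip the apparent return of the same analytics program.

    # Hypothetical figures only, illustrating which costs belong in the ROI
    # equation for a health-care analytics program.
    warehouse_and_tools = 5.0    # $M: data warehouse + analytics software
    source_systems_share = 20.0  # $M: share of the EHR/source systems' cost
    estimated_savings = 8.0      # $M: savings attributed to better protocols

    # Narrow view: count only the warehouse and analytics tools.
    narrow_cost = warehouse_and_tools
    narrow_roi = (estimated_savings - narrow_cost) / narrow_cost

    # Broad view: also count the systems the patient data is pulled from.
    broad_cost = warehouse_and_tools + source_systems_share
    broad_roi = (estimated_savings - broad_cost) / broad_cost

    print(f"Narrow ROI: {narrow_roi:+.0%}")  # +60% -- looks like a clear win
    print(f"Broad ROI:  {broad_roi:+.0%}")   # -68% -- looks like a loss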

Power to Create


From the RSA: “In his 2014 Chief Executive’s lecture, Matthew Taylor will explore new thinking around the RSA’s core mission: to empower people to be capable, active participants in creating the world we want to live in.
The 21st century presents us with challenges of increasing scale and complexity, and yet we are failing to harness the ingenuity and skills of millions of individuals who could make a unique contribution towards our collective goals. Just as creativity is in ever greater demand, a vast resource of creative potential is going untapped.
In his lecture, Matthew will argue that we need to work towards a world that gives people the freedom to make the most of their capabilities. This will involve tackling the many constraints that limit individuals, and lock them out of the creative process.
Matthew argues that this can be done by combining new leadership and institutions that give us hope and excitement about the future, with a championing of individual creative endeavour and a 21st century spirit of solidarity and collaboration.
Listen to the audio (full recording including audience Q&A)

Read the transcript – Power to Create

The People’s Platform


Book Review by Tim Wu in the New York Times: “Astra Taylor is a documentary filmmaker who has described her work as the “steamed broccoli” in our cultural diet. Her last film, “Examined Life,” depicted philosophers walking around and talking about their ideas. She’s the kind of creative person who was supposed to benefit when the Internet revolution collapsed old media hierarchies. But two decades since that revolution began, she’s not impressed: “We are at risk of starving in the midst of plenty,” Taylor writes. “Free culture, like cheap food, incurs hidden costs.” Instead of serving as the great equalizer, the web has created an abhorrent cultural feudalism. The creative masses connect, create and labor, while Google, Facebook and Amazon collect the cash.
Taylor’s thesis is simply stated. The pre-Internet cultural industry, populated mainly by exploitative conglomerates, was far from perfect, but at least the ancien régime felt some need to cultivate cultural institutions, and to pay for talent at all levels. Along came the web, which swept away hierarchies — as well as paychecks, leaving behind creators of all kinds only the chance to be fleetingly “Internet famous.” And anyhow, she says, the web never really threatened to overthrow the old media’s upper echelons, whether defined as superstars, like Beyoncé, big broadcast television shows or Hollywood studios. Instead, it was the cultural industry’s middle classes that have been wiped out and replaced by new cultural plantations ruled over by the West Coast aggregators.
It is hard to know if the title, “The People’s Platform,” is aspirational or sarcastic, since Taylor believes the classless aura of the web masks an unfair power structure. “Open systems can be starkly inegalitarian,” she says, arguing that the web is afflicted by what the feminist scholar Jo Freeman termed a “tyranny of structurelessness.” Because there is supposedly no hierarchy, elites can happily deny their own existence. (“We just run a platform.”) But the effects are real: The web has reduced professional creators to begging for scraps of attention from a spoiled public, and forced creators to be their own brand.

The tech industry might be tempted to dismiss Taylor’s arguments as merely a version of typewriter manufacturers’ complaints circa 1984, but that would be a mistake. “The People’s Platform” should be taken as a challenge by the new media that have long claimed to be improving on the old order. Can they prove they are capable of supporting a sustainable cultural ecosystem, in a way that goes beyond just hosting parties at the Sundance Film Festival?
We see some of this in the tech firms that have begun to pay for original content, as with Netflix’s investments in projects like “Orange Is the New Black.” It’s also worth pointing out that the support of culture is actually pretty cheap. Consider the nonprofit ProPublica, which employs investigative journalists and has already won two Pulitzers, all on a budget of just over $10 million a year. That kind of money is a rounding error for much of Silicon Valley, where losing billions on bad acquisitions is routinely defended as “strategic.” If Google, Apple, Facebook and Amazon truly believe they’re better than the old guard, let’s see it.”
See: THE PEOPLE’S PLATFORM: Taking Back Power and Culture in the Digital Age, by Astra Taylor. 276 pp. Metropolitan Books/Henry Holt & Company.

Indonesian techies crowdsource election results


Ben Bland in the Financial Times: “Three Indonesian tech experts say they have used crowdsourcing to calculate an accurate result for the country’s contested presidential election in six days, while 4m officials have been beavering away for nearly two weeks counting the votes by hand.

The Indonesian techies, who work for multinational companies, were spurred into action after both presidential candidates claimed victory and accused each other of trying to rig the convoluted counting process, raising fears that the country’s young democracy was under threat.

“We did this to prevent the nation being ripped apart because of two claims to victory that nobody can verify,” said Ainun Najib, who is based in Singapore. “This solution was only possible because all the polling station data were openly available for public scrutiny and verification.”

Mr Najib and two friends took advantage of the decision by the national election commission (KPU) to upload the individual results from Indonesia’s 480,000 polling stations to its website for the first time, in an attempt to counter widespread fears about electoral fraud.

The three Indonesians scraped the voting data from the KPU website on to a database and then recruited 700 friends and acquaintances through Facebook to type in the results and check them. They uploaded the data to a website called kawalpemilu.org, which means “guard the election” in Indonesian.

Throughout the process, Mr Najib said he had to fend off hacking attacks, forcing him to shift data storage to a cloud-based service. The whole exercise cost $10 for a domain name and $0.10 for the data storage….”
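The article does not publish kawalpemilu.org's code, but the heart of the exercise, in which several volunteers transcribe each polling station form and a tally is accepted only when enough entries agree, can be sketched as below; the data structures and names are hypothetical illustrations.

    # Minimal sketch of crowdsourced double-entry verification: each polling
    # station tally is typed in by several volunteers and accepted only when
    # enough of their entries match. Names and structures are hypothetical.
    from collections import Counter

    def verify(entries, min_agree=2):
        """Return the tally if at least `min_agree` volunteers entered the
        same numbers for a station; otherwise return None for review."""
        counts = Counter(tuple(sorted(e.items())) for e in entries)
        tally, n = counts.most_common(1)[0]
        return dict(tally) if n >= min_agree else None

    station_entries = [                 # three volunteers, one with a typo
        {"candidate_a": 120, "candidate_b": 95},
        {"candidate_a": 120, "candidate_b": 95},
        {"candidate_a": 210, "candidate_b": 95},
    ]
    print(verify(station_entries))      # {'candidate_a': 120, 'candidate_b': 95}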

Can Experts Solve Poverty?


The #GlobalPOV Project: “We all have experts in our lives. Computer experts, plumbing experts, legal experts — you name the problem, and there is someone out there who specializes in addressing that problem. Whether it’s a broken car, a computer glitch, or even a broken heart — call the expert, they’ll fix us right up.
So who do we call when society is broken? Who do we call when over a billion people live in poverty, unable to meet the basic requirements to sustain their lives? Or when the wealthiest 2% of the world owns 50% of the world’s assets?
We call experts, of course: poverty experts. But — who is a poverty expert, and can experts solve poverty?”
VIDEO:

The #GlobalPOV Project is a program of the Global Poverty and Practice (GPP) Minor. Based at the Blum Center for Developing Economies, University of California, Berkeley, the GPP Minor creates new ways of thinking about poverty and inequality, and of undertaking poverty action.
Website: http://blumcenter.berkeley.edu/globalpov