“Open-washing”: The difference between opening your data and simply making them available


Christian Villum at the Open Knowledge Foundation Blog: “Last week, the Danish IT magazine Computerworld, in an article entitled “Check-list for digital innovation: These are the things you must know”, emphasised how more and more companies are discovering that giving your users access to your data is a good business strategy. Among other things they wrote:

(Translation from Danish) According to Accenture it is becoming clear to many progressive businesses that their data should be treated as any other supply chain: It should flow easily and unhindered through the whole organisation and perhaps even out into the whole ecosystem – for instance through fully open APIs.

They then use Google Maps as an example, which firstly isn’t entirely correct, as also pointed out by the Neogeografen, a geodata blogger, who explains how Google Maps isn’t offering raw data, but merely an image of the data. You are not allowed to download and manipulate the data – or run it off your own server.

But secondly I don’t think it’s very appropriate to highlight Google and their Maps project as a golden example of a business that lets its data flow unhindered to the public. It’s true that they are offering some data, but only in a very limited way – and definitely not as open data – and thereby not acting as progressively as the article suggests.

Surely it’s hard to accuse Google of not being progressive in general. The article states how Google Maps’ data are used by over 800,000 apps and businesses across the globe. So yes, Google has opened its silo a little bit, but only in a very controlled and limited way, which leaves these 800,000 businesses dependent on the continual flow of data from Google and thereby not allowing them to control the very commodities they’re basing their business on. This particular way of releasing data brings me to the problem that we’re facing: Knowing the difference between making data available and making them open.

Open data is characterized by not only being available, but being both legally open (released under an open license that allows full and free reuse, conditioned at most on giving credit to its source and sharing under the same license) and technically available in bulk and in machine-readable formats – contrary to the case of Google Maps. It may be that their data are available, but they’re not open. This – among other reasons – is why the global community around the 100% open alternative OpenStreetMap is growing rapidly and an increasing number of businesses choose to base their services on this open initiative instead.

But why is it important that data are open and not just available? Open data strengthens society and builds a shared resource, where all users, citizens and businesses are enriched and empowered, not just the data collectors and publishers. “But why would businesses spend money on collecting data and then give them away?” you ask. Opening your data and making a profit are not mutually exclusive. A quick Google search reveals many businesses that both offer open data and drive a business on them – and I believe these are the ones that should be highlighted as particularly progressive in articles such as the one from Computerworld….

We are seeing a rising trend of what can be termed “open-washing” (inspired by “greenwashing”) – meaning data publishers that claim their data is open, even when it’s not – but rather just available under limiting terms. If we – at this critical time in the formative period of the data-driven society – aren’t critically aware of the difference, we’ll end up putting our vital data streams in siloed infrastructure built and owned by international corporations. We will also end up giving our praise and support to the wrong kind of unsustainable technological development.”

This algorithm can predict a revolution


Russell Brandom at the Verge: “For students of international conflict, 2013 provided plenty to examine. There was civil war in Syria, ethnic violence in China, and riots to the point of revolution in Ukraine. For those working at Duke University’s Ward Lab, all specialists in predicting conflict, the year looks like a betting sheet, full of predictions that worked and others that didn’t pan out.

When the lab put out their semiannual predictions in July, they gave Paraguay a 97 percent chance of insurgency, largely based on reports of Marxist rebels. The next month, guerrilla campaigns intensified, proving out the prediction. In the case of China’s armed clashes between Uighurs and Hans, the models showed a 33 percent chance of violence, even as the cause of each individual flare-up was concealed by the country’s state-run media. On the other hand, the unrest in Ukraine didn’t start raising alarms until the action had already started, so the country was left off the report entirely.

According to Ward Lab’s staff, the purpose of the project isn’t to make predictions but to test theories. If a certain theory of geopolitics can predict an uprising in Ukraine, then maybe that theory is onto something. And even if these specialists could predict every conflict, it would only be half the battle. “It’s a success only if it doesn’t come at the cost of predicting a lot of incidents that don’t occur,” says Michael D. Ward, the lab’s founder and chief investigator, who also runs the blog Predictive Heuristics. “But it suggests that we might be on the right track.”

Forecasting the future of a country wasn’t always done this way. Traditionally, predicting revolution or war has been a secretive project, for the simple reason that any reliable prediction would be too valuable to share. But as predictions lean more on data, they’ve actually become harder to keep secret, ushering in a new generation of open-source prediction models that butt against the siloed status quo.

The story of automated conflict prediction starts at the Defense Advanced Research Projects Agency, known as the Pentagon’s R&D wing. In the 1990s, DARPA wanted to try out software-based approaches to anticipating which governments might collapse in the near future. The CIA was already on the case, with section chiefs from every region filing regular forecasts, but DARPA wanted to see if a computerized approach could do better. They looked at a simple question: will this country’s government face an acute existential threat in the next six months? When CIA analysts were put to the test, they averaged roughly 60 percent accuracy, so DARPA’s new system set the bar at 80 percent, looking at 29 different countries in Asia with populations over half a million. It was dubbed ICEWS, the Integrated Conflict Early Warning System, and it succeeded almost immediately, clearing 80 percent with algorithms built on simple regression analysis….
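To make “simple regression analysis” concrete: the paragraph above frames forecasting as a binary question answered with a regression model. The sketch below is purely illustrative, assuming scikit-learn and entirely synthetic country-month features (the feature names, coefficients and data are invented; they are not the actual ICEWS inputs or model):

```python
# Illustrative sketch only: a toy logistic-regression forecaster in the spirit
# of the simple regression models described above. Features and data are
# synthetic placeholders, not the actual ICEWS inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000  # hypothetical country-month observations

# Hypothetical predictors: protest counts, GDP growth, years since last conflict
X = np.column_stack([
    rng.poisson(3, n),          # protest_events_last_quarter
    rng.normal(2.0, 3.0, n),    # gdp_growth_pct
    rng.exponential(10.0, n),   # years_since_last_conflict
])
# Synthetic outcome: 1 = acute existential threat within six months
logits = 0.4 * X[:, 0] - 0.3 * X[:, 1] - 0.1 * X[:, 2] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Probabilistic forecasts, comparable in form to "a 97 percent chance of insurgency"
print("Predicted risk for first five test cases:",
      model.predict_proba(X_test)[:5, 1].round(2))
print("Held-out accuracy:", round(model.score(X_test, y_test), 2))
```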

On the data side, researchers at Georgetown University are cataloging every significant political event of the past century into a single database called GDELT, and leaving the whole thing open for public research. Already, projects have used it to map the Syrian civil war and diplomatic gestures between Japan and South Korea, looking at dynamics that had never been mapped before. And then, of course, there’s Ward Lab, releasing a new sheet of predictions every six months and tweaking its algorithms with every development. It’s a mirror of the same open-vs.-closed debate in software — only now, instead of fighting over source code and security audits, it’s a fight over who can see the future the best.”

Developing an open government plan in the open


Tim Hughes at OGP: “New laws, standards, policies, processes and technologies are critical for opening up government, but arguably just as (if not more) important are new cultures, behaviours and ways of working within government and civil society.
The development of an OGP National Action Plan, therefore, presents a twofold opportunity for opening up government: On the one hand it should be used to deliver a set of robust and ambitious commitments to greater transparency, participation and accountability. But just as importantly, the process of developing a NAP should also be used to model new forms of open and collaborative working within government and civil society. These two purposes of a NAP should be mutually reinforcing. An open and collaborative process can – as was the case in the UK – help to deliver a more robust and ambitious action plan, which in turn can demonstrate the efficacy of working in the open.
You could even go one step further and say that the development of a National Action Plan should present an (almost) “ideal” vision of what open government in a country could look like. If governments aren’t being open as they’re developing an open government action plan, then there’s arguably little hope that they’ll be open elsewhere.
As coordinators of the UK OGP civil society network, this was on our minds at the beginning and throughout the development of the UK’s 2013-15 National Action Plan. Crucially, it was also on the minds of our counterparts in the UK Government. From the start, therefore, the process was developed with the intention that it should itself model the principles of open government. Members of the UK OGP civil society network met with policy officials from the UK Government on a regular basis to scope out and develop the action plan, and we published regular updates of our discussions and progress for others to follow and engage with. The process wasn’t without its challenges – and there’s still much more we can do to open it up further in the future – but it was successful in moving far beyond the typical model of government deciding, announcing and defending its intentions and in delivering an action plan with some strong and ambitious commitments.
One of the benefits of working in an open and collaborative way is that it enabled us to conduct and publish a full – warts and all – review of what went well and what didn’t. So, consider this an invitation to delve into our successes and failures, a challenge to do it better and a request to help us do so too. Head over to the UK OGP civil society network blog to read about what we did, and tell us what you think: http://www.opengovernment.org.uk/national-action-plan/story-of-the-uk-national-action-plan-2013-15/

11 ways to rethink open data and make it relevant to the public


Miguel Paz at IJNET: “It’s time to transform open data from a trendy concept among policy wonks and news nerds into something tangible to everyday life for citizens, businesses and grassroots organizations. Here are some ideas to help us get there:
1. Improve access to data
Craig Hammer from the World Bank has tackled this issue, stating that “Open Data could be the game changer when it comes to eradicating global poverty”, but only if governments make available online data that become actionable intelligence: a launch pad for investigation, analysis, triangulation, and improved decision making at all levels.
2. Create open data for the end user
As Hammer wrote in a blog post for the Harvard Business Review, while the “opening” has generated excitement from development experts, donors, several government champions, and the increasingly mighty geek community, the hard reality is that much of the public has been left behind, or tacked on as an afterthought. Let’s get out of the building and start working for the end user.
3. Show, don’t tell
Regular folks don’t know what “open data” means. Actually, they probably don’t care what we call it and don’t know if they need it. Apple’s Steve Jobs said that a lot of times, people don’t know what they want until you show it to them. We need to stop telling them they need it and start showing them why they need it, through actionable user experience.
4. Make it relevant to people’s daily lives, not just to NGOs and policymakers’ priorities
A study of the use of open data and transparency in Chile showed the top 10 uses were for things that affect people’s lives directly, for better or for worse: data on government subsidies and support, legal certificates, information services, paperwork. If the data doesn’t speak to priorities at the household or individual level, we’ve lost the value of both the “opening” of data, and the data itself.
5. Invite the public into the sandbox
We need to give people “better tools to not only consume, but to create and manipulate data,” says my colleague Alvaro Graves, Poderopedia’s semantic web developer and researcher. This is what Code for America does, and it’s also what happened with the advent of Web 2.0, when the availability of better tools, such as blogging platforms, helped people create and share content.
6. Realize that open data are like QR codes
Everyone talks about open data the way they used to talk about QR codes – as something groundbreaking. But as with QR codes, open data only succeeds with the proper context to satisfy the needs of citizens. Context is the most important factor in driving the use and success of open data as a tool for global change.
7. Make open data sexy and pop, like Jess3.com
Geeks became popular because they made useful and cool things that could be embraced by end users. Open data geeks need to stick with that program.
8. Help journalists embrace open data
Jorge Lanata, a famous Argentinian journalist who is now being targeted by the Cristina Fernández administration due to his uncovering of government corruption scandals, once said that 50 percent of the success of a story or newspaper is assured if journalists like it.
That’s true of open data as well. If journalists understand its value for the public interest and learn how to use it, so will the public. And if they do, the winds of change will blow. Governments and the private sector will be forced to provide better, more up-to-date and standardized data. Open data will be understood not as a concept but as a public information source as relevant as any other. We need to teach Latin American journalists to be part of this.
9. News nerds can help you put your open data to good use
In order to boost the use of open data by journalists we need news nerds, teams of lightweight and tech-heavy armored journalist-programmers who can teach colleagues how open data brings us high-impact storytelling that can change public policies and hold authorities accountable.
News nerds can also help us with “institutionalizing data literacy across societies” as Hammer puts it. ICFJ Knight International Journalism Fellow and digital strategist Justin Arenstein calls these folks “mass mobilizers” of information. Alex Howard “points to these groups because they can help demystify data, to make it understandable by populations and not just statisticians.”
I call them News Ninja Nerds, accelerator taskforces that can foster innovations in news, data and transparency in a speedy way, saving governments and organizations time and a lot of money. Projects like ProPublica’s Dollars For Docs are great examples of what can be achieved if you mix FOIA, open data and the will to provide news in the public interest.
10. Rename open data
Part of the reason people don’t embrace concepts such as open data is that they are part of a lingo that has nothing to do with them. No empathy involved. Let’s start talking about people’s right to know and use the data generated by governments. As Tim O’Reilly puts it: “Government as a Platform for Greatness,” with examples we can relate to, instead of dead PDFs and dirty databases.
11. Don’t expect open data to substitute for thinking or reporting
Investigative reporting can benefit from it. But “there is no substitute for the kind of street-level digging, personal interviews, and detective work” that great journalism projects entail, says David Kaplan in a great post entitled Why Open Data is Not Enough.”

What makes a good API?


Joshua Tauberer’s Blog: “There comes a time in every dataset’s life when it wants to become an API. That might be because of consumer demand or an executive order. How are you going to make a good one?…
Let’s take the common case where you have a relatively static, large dataset that you want to provide read-only access to. Here are 19 common attributes of good APIs for this situation. …
Granular Access. If the user wanted the whole thing they’d download it in bulk, so an API must be good at providing access to the most granular level practical for data users (h/t Ben Balter for the wording on that). When the data comes from a table, this usually means the ability to read a small slice of it using filters, sorting, and paging (limit/offset), the ability to get a single row by identifying it with a persistent, unique identifier (usually a numeric ID), and the ability to select just which fields should be included in the result output (good for optimizing bandwidth in mobile apps, h/t Eric Mill). (But see “intents” below.)
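A minimal sketch of what granular access can look like in practice, assuming Flask and a small in-memory table (the resource name, fields and sample rows are invented for illustration):

```python
# Minimal sketch of granular access: equality filters, sorting, paging
# (limit/offset), field selection, and lookup by a persistent ID.
# Flask app over an in-memory "table"; all names are illustrative.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

ROWS = [
    {"id": 1, "name": "Alpha", "state": "CA", "population": 120000},
    {"id": 2, "name": "Beta", "state": "NY", "population": 85000},
    {"id": 3, "name": "Gamma", "state": "CA", "population": 43000},
]

@app.route("/records")
def list_records():
    results = ROWS
    # Simple equality filter, e.g. /records?state=CA
    state = request.args.get("state")
    if state:
        results = [r for r in results if r["state"] == state]
    # Sorting on a known column, e.g. /records?sort=population
    sort = request.args.get("sort")
    if sort in ROWS[0]:
        results = sorted(results, key=lambda r: r[sort])
    # Paging with limit/offset
    offset = int(request.args.get("offset", 0))
    limit = int(request.args.get("limit", 100))
    results = results[offset:offset + limit]
    # Field selection, e.g. /records?fields=id,name (saves bandwidth for mobile apps)
    fields = request.args.get("fields")
    if fields:
        wanted = set(fields.split(","))
        results = [{k: v for k, v in r.items() if k in wanted} for r in results]
    return jsonify(results)

@app.route("/records/<int:record_id>")
def get_record(record_id):
    # Lookup by a persistent, unique identifier
    for r in ROWS:
        if r["id"] == record_id:
            return jsonify(r)
    abort(404)

if __name__ == "__main__":
    app.run()
```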
Deep Filtering. An API should be good at needle-in-haystack problems. Full text search is hard to do, so an API that can do it relieves a big burden for developers — if your API has any big text fields. Filters that can span relations or cross tables (i.e. joins) can be very helpful as well. But don’t go overboard. (Again, see “intents” below.)
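One low-cost way to offer full text search is SQLite’s FTS5 module, sketched below on an invented two-column table; whether FTS5 is available depends on how your SQLite was built, though most standard Python distributions include it:

```python
# Sketch of "needle-in-haystack" full-text search using SQLite's FTS5 module.
# Table name, columns and text are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE documents USING fts5(title, body)")
conn.executemany(
    "INSERT INTO documents (title, body) VALUES (?, ?)",
    [
        ("Budget report", "Spending on roads rose sharply in the third quarter."),
        ("Health notice", "Clinics will extend opening hours during winter."),
    ],
)
# A full-text query the API could expose as, say, /documents?q=roads
for (title,) in conn.execute(
    "SELECT title FROM documents WHERE documents MATCH ?", ("roads",)
):
    print(title)
```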
Typed Values. Response data should be typed. That means that whether a field’s value is an integer, text, list, floating-point number, dictionary, null, or date should be encoded as a part of the value itself. JSON and XML with XSD are good at this. CSV and plain XML, on the other hand, are totally untyped. Types must be strictly enforced. Columns must choose a data type and stick with it, no exceptions. When encoding other sorts of data as text, the values must all absolutely be valid according to the most narrow regular expression that you can make. Provide that regular expression to the API users in documentation.
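A small sketch of typed, regex-validated output; the field names, JSON layout and the “case number” pattern are all invented for illustration:

```python
# Sketch of typed, validated output: values carry their JSON types, and text
# fields are checked against a narrow, documented regular expression before
# being served. Field names and the pattern are illustrative.
import json
import re

# Documented pattern for a hypothetical "case number": two letters, dash, six digits
CASE_NUMBER_RE = re.compile(r"^[A-Z]{2}-\d{6}$")

def serialize_record(record):
    if not CASE_NUMBER_RE.match(record["case_number"]):
        raise ValueError(f"case_number {record['case_number']!r} violates documented format")
    # Numbers, booleans and nulls are encoded as themselves, never as untyped
    # strings; dates are ISO 8601 text by convention.
    return json.dumps({
        "case_number": record["case_number"],   # string, constrained by regex
        "amount": float(record["amount"]),      # number
        "resolved": bool(record["resolved"]),   # boolean
        "closed_on": record["closed_on"],       # ISO 8601 date string or None -> null
    })

print(serialize_record({"case_number": "AB-123456", "amount": 41.5,
                        "resolved": True, "closed_on": None}))
```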
Normalize Tables, Then Denormalize. Normalization is the process of removing redundancy from tables by making multiple tables. You should do that. Have lots of primary keys that link related tables together. But… then… denormalize. The bottleneck of most APIs isn’t disk space but speed. Queries over denormalized tables are much faster than writing queries with JOINs over multiple tables. It’s faster to get data if it’s all in one response than if the user has to issue multiple API calls (across multiple tables) to get it. You still have to normalize first, though. Denormalized data is hard to understand and hard to maintain.
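A minimal sketch of normalize-then-denormalize, assuming SQLite and an invented schema: the normalized tables stay the source of truth, and a flattened table is rebuilt from them so read queries never need a JOIN:

```python
# Sketch of "normalize, then denormalize": keep normalized source tables,
# then build a flattened read-only table so API reads avoid JOINs.
# Schema and data are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Normalized source of truth
    CREATE TABLE agencies (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE grants  (id INTEGER PRIMARY KEY,
                          agency_id INTEGER REFERENCES agencies(id),
                          amount REAL);
    INSERT INTO agencies VALUES (1, 'Transport'), (2, 'Health');
    INSERT INTO grants   VALUES (10, 1, 250000.0), (11, 2, 90000.0);

    -- Denormalized table, rebuilt whenever the source data changes;
    -- API reads hit this flat table instead of joining at query time.
    CREATE TABLE grants_flat AS
        SELECT g.id AS grant_id, g.amount, a.name AS agency_name
        FROM grants g JOIN agencies a ON a.id = g.agency_id;
""")

for row in conn.execute("SELECT grant_id, agency_name, amount FROM grants_flat"):
    print(row)
```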
Be RESTful, And More. “REST” is a set of practices. There are whole books on this. Here it is in short. Every object named in the data (often that’s the rows of the table) gets its own URL. Hierarchical relationships in the data are turned into nice URL paths with slashes. Put the URLs of related resources in output too (HATEOAS, h/t Ed Summers). Use HTTP GET and normal query string processing (a=x&b=y) for filtering, sorting, and paging. The idea of REST is that these are patterns already familiar to developers, and reusing existing patterns — rather than making up entirely new ones — makes the API more understandable and reusable. Also, use HTTPS for everything (h/t Eric Mill), and provide the API’s status as an API itself possibly at the root URL of the API’s URL space (h/t Eric Mill again).
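A rough sketch of those conventions, assuming Flask and invented resource names: one URL per object, hierarchical paths, ordinary query-string filtering, related resources linked by URL, and the API’s status served at the root:

```python
# Rough sketch of the REST conventions described above. Resource names
# and data are illustrative only.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

BILLS = {
    "hr-1234": {"title": "Example Act", "sponsor_id": "m-42"},
}

@app.route("/")
def api_status():
    # The API's own status, served as an API at the root of the URL space
    return jsonify({"status": "ok", "version": "1.0"})

@app.route("/bills")
def list_bills():
    # Ordinary query-string filtering, e.g. /bills?sponsor=m-42
    sponsor = request.args.get("sponsor")
    matches = [bill_id for bill_id, bill in BILLS.items()
               if sponsor is None or bill["sponsor_id"] == sponsor]
    return jsonify([{"id": bill_id, "url": f"/bills/{bill_id}"} for bill_id in matches])

@app.route("/bills/<bill_id>")
def get_bill(bill_id):
    bill = BILLS.get(bill_id)
    if bill is None:
        abort(404)
    return jsonify({
        "id": bill_id,
        "title": bill["title"],
        # Related resources are linked by URL rather than by bare IDs (HATEOAS)
        "sponsor_url": f"/members/{bill['sponsor_id']}",
        "actions_url": f"/bills/{bill_id}/actions",  # hierarchical child resource
    })

if __name__ == "__main__":
    app.run()
```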
….
Never Require Registration. Don’t have authentication on your API to keep people out! In fact, having a requirement of registration may contradict other guidelines (such as the 8 Principles of Open Government Data). If you do use an API key, make it optional. A non-authenticated tier lets developers quickly test the waters, and that is really important for getting developers in the door, and, again, it may be important for policy reasons as well. You can have a carrot to incentivize voluntary authentication: raise the rate limit for authenticated queries, for instance. (h/t Ben Balter)
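One way the carrot might be implemented, as a sketch: anonymous callers are rate-limited by IP address, while callers who volunteer an API key get a higher limit (the key store, limits and time window are invented for illustration):

```python
# Sketch of an optional API key: anonymous callers get a low hourly limit,
# callers who volunteer a known key get a higher one. No registration is
# required just to try the API. Keys and limits are illustrative.
import time
from collections import defaultdict

KNOWN_KEYS = {"demo-key-123"}          # hypothetical registered keys
ANON_LIMIT, KEYED_LIMIT = 10, 1000     # requests per hour
_request_log = defaultdict(list)

def allow_request(client_ip, api_key=None):
    """Return True if this request is within the caller's hourly limit."""
    now = time.time()
    keyed = api_key in KNOWN_KEYS
    identity = api_key if keyed else client_ip
    limit = KEYED_LIMIT if keyed else ANON_LIMIT
    window = [t for t in _request_log[identity] if now - t < 3600]
    if len(window) >= limit:
        return False
    window.append(now)
    _request_log[identity] = window
    return True

print(allow_request("203.0.113.7"))                           # anonymous caller
print(allow_request("203.0.113.7", api_key="demo-key-123"))   # keyed caller, higher limit
```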
Interactive Documentation. An API explorer is a web page that users can visit to learn how to build API queries and see results for test queries in real time. It’s an interactive browser tool, like interactive documentation. Relatedly, an “explain mode” in queries, which instead of returning results says what the query was and how it would be processed, can help developers understand how to use the API (h/t Eric Mill).
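An “explain mode” can be as simple as a query parameter that returns the parsed query plan instead of the results; a minimal sketch with invented parameter names:

```python
# Sketch of an "explain mode": when the caller adds explain=true, the API
# describes how it would interpret the query instead of running it.
def handle_query(params):
    plan = {
        "filters": {k: v for k, v in params.items()
                    if k not in ("limit", "offset", "sort", "explain")},
        "sort": params.get("sort", "id"),
        "paging": {"limit": int(params.get("limit", 100)),
                   "offset": int(params.get("offset", 0))},
    }
    if params.get("explain") == "true":
        return {"explain": plan}          # describe the query...
    return {"results": run_query(plan)}   # ...or actually execute it

def run_query(plan):
    # Placeholder for the real data lookup
    return []

print(handle_query({"state": "CA", "limit": "10", "explain": "true"}))
```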
Developer Community. Life is hard. Coding is hard. The subject matter your data is about is probably very complex. Don’t make your API users wade into your API alone. Bring the users together, bring them to you, and sometimes go to them. Let them ask questions and report issues in a public place (such as github). You may find that users will answer other users’ questions. Wouldn’t that be great? Have a mailing list for longer questions and discussion about the future of the API. Gather case studies of how people are using the API and show them off to the other users. It’s not a requirement that the API owner participates heavily in the developer community — just having a hub is very helpful — but of course the more participation the better.
Create Virtuous Cycles. Create an environment around the API that makes the data and API stronger. For instance, other individuals within your organization who need the data should go through the public API to the greatest extent possible. Those users are experts and will help you make a better API, once they realize they benefit from it too. Create a feedback loop around the data, meaning find a way for API users to submit reports of data errors and have a process to carry out data updates, if applicable and possible. Do this in public as much as possible so that others see they can also join the virtuous cycle.”

"Natural Cities" Emerge from Social Media Location Data


Emerging Technology From the arXiv: “Nobody agrees on how to define a city. But the emergence of “natural cities” from social media data sets may change that, say computational geographers…
A city is a large, permanent human settlement. But try and define it more carefully and you’ll soon run into trouble. A settlement that qualifies as a city in Sweden may not qualify in China, for example. And the reasons why one settlement is classified as a town while another as a city can sometimes seem almost arbitrary.
City planners know this problem well.  They tend to define cities by administrative, legal or even historical boundaries that have little logic to them. Indeed, the same city can sometimes be defined in various different ways.
That causes all kinds of problems from counting the total population to working out who pays for the upkeep of the place.  Which definition do you use?
Now help may be at hand thanks to the work of Bin Jiang and Yufan Miao at the University of Gävle in Sweden. These guys have found a way to use people’s location recorded by social media to define the boundaries of so-called natural cities which have a close resemblance to real cities in the US.
Jiang and Miao began with a dataset from the Brightkite social network, which was active between 2008 and 2010. The site encouraged users to log in with their location details so that they could see other users nearby. So the dataset consists of almost 3 million locations in the US and the dates on which they were logged.
To start off, Jiang and Miao simply placed a dot on a map at the location of each login. They then connected these dots to their neighbours to form triangles that end up covering the entire mainland US.
Next, they calculated the size of each triangle on the map and plotted this size distribution, which turns out to follow a power law. So there are lots of tiny triangles but only a few large ones.
Finally, they calculated the average size of the triangles and then coloured in all those that were smaller than average. The coloured areas are “natural cities”, say Jiang and Miao.
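The excerpt doesn’t spell out the triangulation method, but the description maps naturally onto a Delaunay triangulation. The sketch below reproduces the idea on synthetic points rather than the Brightkite check-ins, assuming scipy: triangulate the locations, compute triangle areas, and keep the smaller-than-average triangles as candidate “natural cities”:

```python
# Rough sketch of the procedure described above, on synthetic points rather
# than the Brightkite check-ins: triangulate the locations, compute triangle
# areas, and keep the smaller-than-average triangles as candidate "natural
# city" areas. The point data is invented.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
# Synthetic "check-ins": a few dense clusters on a sparse background
clusters = [rng.normal(c, 0.05, size=(300, 2)) for c in [(0, 0), (2, 1), (1, 3)]]
background = rng.uniform(-1, 4, size=(200, 2))
points = np.vstack(clusters + [background])

tri = Delaunay(points)
a, b, c = (points[tri.simplices[:, i]] for i in range(3))
# Triangle area from the cross product of two edge vectors
areas = 0.5 * np.abs((b[:, 0] - a[:, 0]) * (c[:, 1] - a[:, 1])
                     - (b[:, 1] - a[:, 1]) * (c[:, 0] - a[:, 0]))

# The size distribution is heavy-tailed: many tiny triangles, few large ones
threshold = areas.mean()
city_triangles = tri.simplices[areas < threshold]
print(f"{len(city_triangles)} of {len(areas)} triangles fall below the mean "
      f"area and would be coloured in as 'natural cities'")
```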
It’s easy to imagine that the resulting map of triangles would be of little value. But to the evident surprise of the researchers, it produces a pretty good approximation of the cities in the US. “We know little about why the procedure works so well but the resulting patterns suggest that the natural cities effectively capture the evolution of real cities,” they say.
That’s handy because it suddenly gives city planners a way to study and compare cities on a level playing field. It allows them to see how cities evolve and change over time too. And it gives them a way to analyse how cities in different parts of the world differ.
Of course, Jiang and Miao will want to find out why this approach reveals city structures in this way. That’s still something of a puzzle but the answer itself may provide an important insight into the nature of cities (or at least into the nature of this dataset).
A few days ago, this blog wrote about how a new science of cities is emerging from the analysis of big data. This is another example, and we can expect to see more.
Ref:  http://arxiv.org/abs/1401.6756 : The Evolution of Natural Cities from the Perspective of Location-Based Social Media”

Civic Works Project translates data into community tools


The blog of the John S. and James L. Knight Foundation: “The Civic Works Project is a two-year effort to create apps and other tools to help increase the utility of local government data to benefit community organizations and the broader public.
This project looks systemically at public and private information that can be used to engage residents, solve community problems and increase government accountability. We believe that there is a new frontier where information can be used to improve public services and community building efforts that benefit local residents.
Through the Civic Works Project, we’re seeking to improve access to information and identify solutions to problems facing diverse communities. Uncovering the value of data—and the stories behind it—can enhance the provision of public services through the smart application of technology.
Here’s some of what we’ve accomplished.
Partnership with WBEZ Public Data Blog
The WBEZ Public Data Blog is dedicated to examining and promoting civic data in Chicago, Cook County and Illinois. WBEZ is partnering with the Smart Chicago Collaborative to provide news and analysis on open government by producing content items that explain and tell stories hidden in public data. The project seeks to increase the utility, understanding, awareness and availability of local civic data. It comprises blog postings on the hidden uses of data and stories from the data, while including diverse voices and discussions on how innovations can improve civic life. It also features interviews with community organizations, businesses, government leaders and residents on challenges that could be solved through more effective use of public data.
Crime and Punishment in Chicago
The Crime and Punishment in Chicago project will provide an index of data sources regarding the criminal justice system in Chicago. This site will aggregate sources of data, how this data is generated, how to get it and what data is unavailable.
Illinois OpenTech Challenge
The Illinois Open Technology Challenge aims to bring governments, developers and communities together to create digital tools that use public data to serve today’s civic needs and promote economic development. Smart Chicago and our partners worked with government officials to publish 138 new datasets (34 in Champaign, 15 in Rockford, 12 in Belleville, and 77 from the 42 municipalities in the South Suburban Mayors and Managers Association) on the State of Illinois data portal. Smart Chicago has worked with developers in meet-ups all over the state—in six locations in four cities with 149 people. The project has also allowed Smart Chicago to conduct outreach in each of our communities to reach regular residents with needs that can be addressed through data and technology.
LocalData + SWOP
The LocalData + SWOP project is part of our effort to help bridge technology gaps in high-capacity organizations. This effort helps the Southwest Organizing Project collect information about vacant and abandoned housing using the LocalData tool.
Affordable Care Act Outreach App
With the ongoing implementation of the Affordable Care Act, community organizations such as LISC-Chicago have been hard at work providing navigators to help residents register through the healthcare.gov site.
Currently, LISC-Chicago organizers are in neighborhoods contacting residents and encouraging them to go to their closest Center for Working Families. Using a combination of software, such as Wufoo and Twilio, Smart Chicago is helping LISC with its outreach by building a tool that enables organizers to send text reminders to sign up for health insurance to residents.
Texting Tools: Twilio and Textizen
Smart Chicago is expanding the Affordable Care Act outreach project to engage residents in other ways using SMS messaging.
Smart Chicago is also a local provider for Textizen,  an SMS-based survey tool that civic organizations can use to obtain resident feedback. Organizations can create a survey campaign and then place the survey options on posters, postcards or screens during live events. They can then receive real-time feedback as people text in their answers.
WikiChicago
WikiChicago will be a hyper-local Wikipedia-like website that anyone can edit. For this project, Smart Chicago is partnering with the Chicago Public Library to feature local authors and books about Chicago, and to publish more information about Chicago’s rich history.”

Open data: Strategies for impact


Harvey Lewis at the Open Government Partnership Blog: “When someone used to talk about “data for good”, chances are they were wondering whether the open data stream they relied on was still going to be available in the future. Similarly, “good with data” meant that experienced data scientists were being sought for a deeply technical project. Both interpretations reflect a state of being rather than of doing: data being around for good; people being good with data.
Important though these considerations are, they miss what should be an obvious and more profound alternative.
Right now, organisations like DataKind™  and Periscopic, and many other entrepreneurs, innovators and established social enterprises that use open data, see things differently. They are using these straplines to shake up the status quo, to demonstrate that data-driven businesses can do well by doing good.
And it’s the confluence of the many national and international open data initiatives, and the growing number of technically able, socially responsible organisations that use open data, that provides the opportunity for social as well as economic growth. The World Wide Web Foundation now estimates that there are over 370 open data initiatives around the world. Collectively, and through portals such as Quandl and datacatalogs.org, these initiatives have made a staggering quantity of data available – in excess of eight million data sets. In addition, several successful and data-rich companies are entering into a new spirit of philanthropy – by donating their data for the public good. There’s no doubt that opening up data signals a new willingness by governments and businesses all over the world to engage with their citizens and customers in a new and more transparent way.
The challenge, though, is ensuring that these popular national and international open data initiatives are cohesive and impactful. And that the plans drawn up by public sector bodies to release specific data sets are based on the potential the data has to achieve a beneficial outcome, not – or, at least, not solely – based on the cost or ease of publication. Despite the best of intentions, only a relatively small proportion of open data sets now available has the latent potential to create significant economic or social impact. In our push to open up data and government, it seems that we may have fallen into the trap of believing the ends are the same as the means; that effect is the same as cause…”

It’s the Neoliberalism, Stupid: Why instrumentalist arguments for Open Access, Open Data, and Open Science are not enough.


Eric Kansa at LSE Blog: “…However, I’m increasingly convinced that advocating for openness in research (or government) isn’t nearly enough. There’s been too much of an instrumentalist justification for open data and open access. Many advocates talk about how it will cut costs and speed up research and innovation. They also argue that it will make research more “reproducible” and transparent so interpretations can be better vetted by the wider community. Advocates for openness, particularly in open government, also talk about the wonderful commercial opportunities that will come from freeing research…
These are all very big policy issues, but they need to be asked if the Open Movement really stands for reform and not just a further expansion and entrenchment of Neoliberalism. I’m using the term “Neoliberalism” because it resonates as a convenient label for describing how and why so many things seem to suck in Academia. Exploding student debt, vanishing job security, increasing compensation for top administrators, expanding bureaucracy and committee work, corporate management methodologies (Taylorism), and intensified competition for ever-shrinking public funding all fall under the general rubric of Neoliberalism. Neoliberal universities primarily serve the needs of commerce. They need to churn out technically skilled human resources (made desperate for any work by high loads of debt) and easily monetized technical advancements….
“Big Data,” “Data Science,” and “Open Data” are now hot topics at universities. Investments are flowing into dedicated centers and programs to establish institutional leadership in all things related to data. I welcome the new Data Science effort at UC Berkeley to explore how to make research data professionalism fit into the academic reward systems. That sounds great! But will these new data professionals have any real autonomy in shaping how they conduct their research and build their careers? Or will they simply be part of an expanding class of harried and contingent employees – hired and fired through the whims of creative destruction fueled by the latest corporate-academic hype-cycle?
Researchers, including #AltAcs and "data professionals", need a large measure of freedom. Miriam Posner’s discussion about the career and autonomy limits of Alt-academic-hood helps highlight these issues. Unfortunately, there’s only one area where innovation and failure seem survivable, and that’s the world of the start-up. I’ve noticed how the “Entrepreneurial Spirit” gets celebrated lots in this space. I’m guilty of basking in it myself (10 years as a quasi-independent #altAc in a nonprofit I co-founded!).
But in the current Neoliberal setting, being an entrepreneur requires a singular focus on monetizing innovation. PeerJ and Figshare are nice, since they have business models that are less "evil" than Elsevier’s. But we need to stop fooling ourselves that the only institutions and programs that we can and should sustain are the ones that can turn a profit. For every PeerJ or Figshare (and these are ultimately just as dependent on continued public financing of research as any grant-driven project), we also need more innovative organizations like the Internet Archive, wholly dedicated to the public good and not the relentless pressure to commoditize everything (especially their patrons’ privacy). We need to be much more critical about the kinds of programs, organizations, and financing strategies we (as a society) can support. I raised the political economy of sustainability issue at a recent ThatCamp and hope to see more discussion.
In reality so much of the Academy’s dysfunctions are driven by our new Gilded Age’s artificial scarcity of money. With wealth concentrated in so few hands, it is very hard to finance risk taking and entrepreneurialism in the scholarly community, especially to finance any form of entrepreneurialism that does not turn a profit in a year or two.
Open Access and Open Data will make so much more of a difference if we had the same kind of dynamism in the academic and nonprofit sector as we have in the for-profit start-up sector. After all, Open Access and Open Data can be key enablers to allow much broader participation in research and education. However, broader participation still needs to be financed: you cannot eat an open access publication. We cannot gloss over this key issue.
We need more diverse institutional forms so that researchers can find (or found) the kinds of organizations that best channel their passions into contributions that enrich us all. We need more diverse sources of financing (new foundations, better financed Kickstarters) to connect innovative ideas with the capital needed to see them implemented. Such institutional reforms will make life in the research community much more livable, creative, and dynamic. It would give researchers more options for diverse and varied career trajectories (for-profit or not-for-profit) suited to their interests and contributions.
Making the case to reinvest in the public good will require a long, hard slog. It will be much harder than the campaign for Open Access and Open Data because it will mean contesting Neoliberal ideologies and constituencies that are deeply entrenched in our institutions. However, the constituencies harmed by Neoliberalism, particularly the student community now burdened by over $1 trillion in debt and the middle class more generally, are much larger and very much aware that something is badly amiss. As we celebrate the impressive strides made by the Open Movement in the past year, it’s time we broaden our goals to tackle the needs for wider reform in the financing and organization of research and education.
This post originally appeared on Digging Digitally and is reposted under a CC-BY license.”

Big Data’s Dangerous New Era of Discrimination


Michael Schrage in HBR blog: “Congratulations. You bought into Big Data and it’s paying off Big Time. You slice, dice, parse and process every screen-stroke, clickstream, Like, tweet and touch point that matters to your enterprise. You now know exactly who your best — and worst — customers, clients, employees and partners are.  Knowledge is power.  But what kind of power does all that knowledge buy?
Big Data creates Big Dilemmas. Greater knowledge of customers creates new potential and power to discriminate. Big Data — and its associated analytics — dramatically increase both the dimensionality and degrees of freedom for detailed discrimination. So where, in your corporate culture and strategy, does value-added personalization and segmentation end and harmful discrimination begin?
Let’s say, for example, that your segmentation data tells you the following:
Your most profitable customers by far are single women between the ages of 34 and 55, closely followed by “happily married” women with at least one child. Divorced women are slightly more profitable than “never marrieds.” Gay males — single and in relationships — are also disproportionately profitable. The “sweet spot” is urban and 28 to 50. These segments collectively account for roughly two-thirds of your profitability. (Unexpected factoid: Your most profitable customers are overwhelmingly Amazon Prime subscribers. What might that mean?)
Going more granular, as Big Data does, offers even sharper ethno-geographic insight into customer behavior and influence:

  • Single Asian, Hispanic, and African-American women with urban post codes are most likely to complain about product and service quality to the company. Asian and Hispanic complainers happy with resolution/refund tend to be in the top quintile of profitability. African-American women do not.
  • Suburban Caucasian mothers are most likely to use social media to share their complaints, followed closely by Asian and Hispanic mothers. But if resolved early, they’ll promote the firm’s responsiveness online.
  • Gay urban males receiving special discounts and promotions are the most effective at driving traffic to your sites.

My point here is that these data are explicit, compelling and undeniable. But how should sophisticated marketers and merchandisers use them?
Campaigns, promotions and loyalty programs targeting women and gay males seem obvious. But should Asian, Hispanic and white females enjoy preferential treatment over African-American women when resolving complaints? After all, they tend to be both more profitable and measurably more willing to effectively use social media. Does it make more marketing sense to encourage African-American female customers to become more social media savvy? Or are resources better invested in getting more from one’s best customers? Similarly, how much effort and ingenuity should go into making more gay male customers better social media evangelists? What kinds of offers and promotions could go viral on their networks?…
Of course, the difference between price discrimination and discrimination positively correlated with gender, ethnicity, geography, class, personality and/or technological fluency is vanishingly small. Indeed, the entire epistemological underpinning of Big Data for business is that it cost-effectively makes informed segmentation and personalization possible…..
But the main source of concern won’t be privacy, per se — it will be whether and how companies and organizations like your own use Big Data analytics to justify their segmentation/personalization/discrimination strategies. The more effective Big Data analytics are in profitably segmenting and serving customers, the more likely those algorithms will be audited by regulators or litigators.
Tomorrow’s Big Data challenge isn’t technical; it’s whether managements have algorithms and analytics that are both fairly transparent and transparently fair. Big Data champions and practitioners had better be discriminating about how discriminating they want to be.”