Procurement and Civic Innovation


Derek Eder: “Have you ever used a government website and had a not-so-awesome experience? In our slick 2014 world of Google, Twitter and Facebook, why does government tech feel like it’s stuck in the 1990s?
The culprit: bad technology procurement.
Procurement is the procedure a government follows to buy something: letting suppliers know what it wants, asking for proposals, restricting what kinds of proposals it will consider, limiting what kinds of firms it will do business with, and deciding whether it got what it paid for.
The City of Chicago buys technology about the same way that they buy health insurance, a bridge, or anything else in between. And that’s the problem.
Chicago’s government has a long history of corruption, nepotism and patronage. After each outrage, new rules are piled upon existing rules to prevent that crisis from happening again. Unfortunately, this accumulation of rules does not just protect against the bad guys, it also forms a huge barrier to entry for technology innovators.
So, the firms that end up building our city’s digital public services tend to be good at picking their way through the barriers of the procurement process, not at building good technology. Instead of making government tech contracting fair and competitive, procurement has unfortunately had the opposite effect.
So where does this leave us? Despite Chicago’s flourishing startup scene, and despite having one of the country’s largest communities of civic technologists, the Windy City’s digital public services are still terribly designed and far too expensive for the taxpayer.

The Technology Gap

The best way to see the gap between Chicago’s volunteer civic tech community and the technology the City pays for is to look at an entire class of civic apps that are essentially facelifts on existing government websites….
You may have noticed a clear difference in quality and usability between these three civic apps and their official government counterparts.
Now consider this: all of the government sites took months to build and cost hundreds of thousands of dollars. Was My Car Towed, 2nd City Zoning and CrimeAround.us were all built by one to two people in a matter of days, for no money.
Think about that for a second. Consider how much the City is overpaying for websites its citizens can barely use. And imagine how much better our digital city services would be if the City worked with the very same tech startups they’re trying to nurture.
Why do these civic apps exist? Well, with the City of Chicago releasing hundreds of high-quality datasets on its data portal over the past three years (for which it should be commended), highly passionate and skilled technologists have started using their skills to develop these apps and many others.
It’s mostly for fun, learning, and a sense of civic duty, but it demonstrates there’s no shortage of highly skilled developers who are interested in using technology to make their city a better place to live in…
Two years ago, in the Fall of 2011, I learned about procurement in Chicago for the first time. An awesome group of developers, designers and I had just built ChicagoLobbyists.org – our very first civic app – for the City of Chicago’s first open data hackathon….
Since then, the City has often cited ChicagoLobbyists.org as evidence of the innovation-sparking potential of open data.
Shortly after our site launched, a Request For Proposals, or RFP, was issued by the City for an ‘Online Lobbyist Disclosure System.’
Hey! We just built one of those! Sure, we would need to make some updates to it—adding a way for lobbyists to log in and submit their info—but we had a solid start. So, our scrappy group of tech volunteers decided to respond to the RFP.
After reading all 152 pages of the document, we realized we had no chance of getting the bid. It was impossible for the ChicagoLobbyists.org group to meet the legal requirements (as it would have been for any small software shop):

  • audited financial statements for the past 3 years
  • an economic disclosure statement (EDS) and affidavit
  • proof of $500k workers’ compensation and employers’ liability
  • proof of $2 million in professional liability insurance”

A Brief History of Databases


Stephen Fortune: “Databases are mundane, the epitome of the everyday in digital society. Despite the enthusiasm and curiosity that such a ubiquitous and important technology merits, arguably the only people to discuss databases are those curious enough to thumb through the dry and technical literature that chronicles their ascension.
Which is a shame, because the use of databases actually illuminates so much about how we come to terms with the world around us. The history of databases is a tale of experts at different times attempting to make sense of complexity. As a result, the first information explosions of the early computer era left an enduring impact on how we think about structuring information. The practices, frameworks, and uses of databases, so pioneering at the time, have since become intrinsic to how organizations manage data. If we are facing another data deluge (for there have been many), it’s different in kind to the ones that preceded it. The speed of today’s data production is precipitated not by a sudden appearance of entirely new technologies but by demand and accessibility steadily rising through the strata of society as databases become more and more ubiquitous and essential to aspects of our daily lives.

And it turns out we’re not drowning in data; we instead appear to have made a sort of unspoken peace with it, just as the Venetians and Dutch before us. We’ve built edifices to house the data and, witnessing that this did little to stem the flow, have subsequently created our enterprises atop and around them. Surveying the history of databases illuminates a lot about how we come to terms with the world around us, and how organizations have come to terms with us.

Unit Records & Punch Card Databases

The history of data processing is punctuated with many high water marks of data abundance. Each successive wave has been incrementally greater in volume, but all are united by the trope that data production exceeds what tabulators (whether machine or human) can handle. The growing amount of data gathered by the 1880 US Census (which took human tabulators 8 of the 10 years before the next census to compute) saw Herman Hollerith kickstart the data processing industry. He devised “Hollerith cards” (his personal brand of punchcard) and the keypunch, sorter, and tabulator unit record machines. The latter three machines were built for the sole purpose of crunching numbers, with the data represented by holes on the punch cards. Hollerith’s Tabulating Machine Company was later merged with three other companies into International Business Machines (IBM), an enterprise that casts a long shadow over this history of databases….”

The Challenges of Challenge.Gov: Adopting Private Sector Business Innovations in the Federal Government


I Mergel, SI Bretschneider, C Louis, and J Smith in the HICSS ’14 Proceedings of the 2014 47th Hawaii International Conference on System Sciences: “As part of the Open Government Initiative in the U.S. federal government, the White House has introduced a new policy instrument called “Challenges and Prizes,” implemented as Challenge.gov, that allows federal departments to run Open Innovation (OI) contests. This initiative was motivated by similar OI initiatives in the private sector and by the desire to enhance innovativeness and performance among federal agencies. Here we first define the underlying theoretical concepts of OI, crowdsourcing, and contests and apply them to the existing theory of publicness and the creation of public goods. We then analyze over 200 crowdsourcing contests on Challenge.gov and conclude that federal departments and agencies use this policy instrument for four different purposes: awareness, service, knowledge, and technical solutions. We conclude that Challenge.gov is currently used as an innovative format to inform and educate the public about public management problems and less frequently to solicit complex technological solutions from problem solvers.”

New Research Network to Study and Design Innovative Ways of Solving Public Problems



MacArthur Foundation Research Network on Opening Governance formed to gather evidence and develop new designs for governing 

NEW YORK, NY, March 4, 2014 – The Governance Lab (The GovLab) at New York University today announced the formation of a Research Network on Opening Governance, which will seek to develop blueprints for more effective and legitimate democratic institutions to help improve people’s lives.
Convened and organized by the GovLab, the MacArthur Foundation Research Network on Opening Governance is made possible by a three-year grant of $5 million from the John D. and Catherine T. MacArthur Foundation as well as a gift from Google.org, which will allow the Network to tap the latest technological advances to further its work.
Combining empirical research with real-world experiments, the Research Network will study what happens when governments and institutions open themselves to diverse participation, pursue collaborative problem-solving, and seek input and expertise from a range of people. Network members include twelve experts (see below) in computer science, political science, policy informatics, social psychology and philosophy, law, and communications. This core group is supported by an advisory network of academics, technologists, and current and former government officials. Together, they will assess existing innovations in governing and experiment with new practices and how institutions make decisions at the local, national, and international levels.
Support for the Network from Google.org will be used to build technology platforms to solve problems more openly and to run agile, real-world, empirical experiments with institutional partners such as governments and NGOs to discover what can enhance collaboration and decision-making in the public interest.
The Network’s research will be complemented by theoretical writing and compelling storytelling designed to articulate and demonstrate clearly and concretely how governing agencies might work better than they do today. “We want to arm policymakers and practitioners with evidence of what works and what does not,” says Professor Beth Simone Noveck, Network Chair and author of Wiki Government: How Technology Can Make Government Better, Democracy Stronger, and Citizens More Powerful, “which is vital to drive innovation, re-establish legitimacy and more effectively target scarce resources to solve today’s problems.”
“From prize-backed challenges to spur creative thinking to the use of expert networks to get the smartest people focused on a problem no matter where they work, this shift from top-down, closed, and professional government to decentralized, open, and smarter governance may be the major social innovation of the 21st century,” says Noveck. “The MacArthur Research Network on Opening Governance is the ideal crucible for helping the transition from closed and centralized to open and collaborative institutions of governance in a way that is scientifically sound and yields new insights to inform future efforts, always with an eye toward real-world impacts.”
MacArthur Foundation President Robert Gallucci added, “Recognizing that we cannot solve today’s challenges with yesterday’s tools, this interdisciplinary group will bring fresh thinking to questions about how our governing institutions operate, and how they can develop better ways to help address seemingly intractable social problems for the common good.”
Members
The MacArthur Research Network on Opening Governance comprises:
Chair: Beth Simone Noveck
Network Coordinator: Andrew Young
Chief of Research: Stefaan Verhulst
Faculty Members:

  • Sir Tim Berners-Lee (Massachusetts Institute of Technology (MIT)/University of Southampton, UK)
  • Deborah Estrin (Cornell Tech/Weill Cornell Medical College)
  • Erik Johnston (Arizona State University)
  • Henry Farrell (George Washington University)
  • Sheena S. Iyengar (Columbia Business School/Jerome A. Chazen Institute of International Business)
  • Karim Lakhani (Harvard Business School)
  • Anita McGahan (University of Toronto)
  • Cosma Shalizi (Carnegie Mellon/Santa Fe Institute)

Institutional Members:

  • Christian Bason and Jesper Christiansen (MindLab, Denmark)
  • Geoff Mulgan (National Endowment for Science, Technology and the Arts – NESTA, United Kingdom)
  • Lee Rainie (Pew Research Center)

The Network is eager to hear from and engage with the public as it undertakes its work. Please contact Stefaan Verhulst to share your ideas or identify opportunities to collaborate.”

Coordinating the Commons: Diversity & Dynamics in Open Collaborations


Dissertation by Jonathan T. Morgan: “The success of Wikipedia demonstrates that open collaboration can be an effective model for organizing geographically-distributed volunteers to perform complex, sustained work at a massive scale. However, Wikipedia’s history also demonstrates some of the challenges that large, long-term open collaborations face: the core community of Wikipedia editors—the volunteers who contribute most of the encyclopedia’s content and ensure that articles are correct and consistent—has been gradually shrinking since 2007, in part because Wikipedia’s social climate has become increasingly inhospitable for newcomers, female editors, and editors from other underrepresented demographics. Previous research on change over time within other work contexts, such as corporations, suggests that incremental processes such as bureaucratic formalization can make organizations more rule-bound and less adaptable—in effect, less open—as they grow and age. There has been little research on how open collaborations like Wikipedia change over time, and on the impact of those changes on the social dynamics of the collaborating community and the way community members prioritize and perform work. Learning from Wikipedia’s successes and failures can help researchers and designers understand how to support open collaborations in other domains—such as Free/Libre Open Source Software, Citizen Science, and Citizen Journalism.

In this dissertation, I examine the role of openness, and the potential antecedents and consequences of formalization, within Wikipedia through an analysis of three distinct but interrelated social structures: community-created rules within the Wikipedia policy environment, coordination work and group dynamics within self-organized open teams called WikiProjects, and the socialization mechanisms that Wikipedia editors use to teach new community members how to participate. To inquire further, I have designed a peer support space for new editors, the Wikipedia Teahouse, based on the findings from my empirical studies. The Teahouse is a volunteer-driven project that provides a welcoming and engaging environment in which new editors can learn how to be productive members of the Wikipedia community, with the goal of increasing the number and diversity of newcomers who go on to make substantial contributions to Wikipedia…”

True Collective Intelligence? A Sketch of a Possible New Field


Paper by Geoff Mulgan in Philosophy & Technology: “Collective intelligence is much talked about but remains very underdeveloped as a field. There are small pockets in computer science and psychology and fragments in other fields, ranging from economics to biology. New networks and social media also provide a rich source of emerging evidence. However, there are surprisingly few usable theories, and many of the fashionable claims have not stood up to scrutiny. The field of analysis should be how intelligence is organised at large scale—in organisations, cities, nations and networks. The paper sets out some of the potential theoretical building blocks, suggests an experimental and research agenda, shows how it could be analysed within an organisation or business sector and points to the possible intellectual barriers to progress.”

Big Data, Big New Businesses


Nigel Shadbolt and Michael Chui: “Many people have long believed that if government and the private sector agreed to share their data more freely, and allow it to be processed using the right analytics, previously unimaginable solutions to countless social, economic, and commercial problems would emerge. They may have no idea how right they are.

Even the most vocal proponents of open data appear to have underestimated how many profitable ideas and businesses stand to be created. More than 40 governments worldwide have committed to opening up their electronic data – including weather records, crime statistics, transport information, and much more – to businesses, consumers, and the general public. The McKinsey Global Institute estimates that the annual value of open data in education, transportation, consumer products, electricity, oil and gas, health care, and consumer finance could reach $3 trillion.

These benefits come in the form of new and better goods and services, as well as efficiency savings for businesses, consumers, and citizens. The range is vast. For example, drawing on data from various government agencies, the Climate Corporation (recently bought for $1 billion) has taken 30 years of weather data, 60 years of data on crop yields, and 14 terabytes of information on soil types to create customized insurance products.

Similarly, real-time traffic and transit information can be accessed on smartphone apps to inform users when the next bus is coming or how to avoid traffic congestion. And, by analyzing online comments about their products, manufacturers can identify which features consumers are most willing to pay for, and develop their business and investment strategies accordingly.

Opportunities are everywhere. A raft of open-data start-ups are now being incubated at the London-based Open Data Institute (ODI), which focuses on improving our understanding of corporate ownership, health-care delivery, energy, finance, transport, and many other areas of public interest.

Consumers are the main beneficiaries, especially in the household-goods market: better-informed buying decisions across sectors could capture an estimated $1.1 trillion in value annually. Third-party data aggregators are already allowing customers to compare prices across online and brick-and-mortar shops. Many also permit customers to compare quality ratings, safety data (drawn, for example, from official injury reports), information about the provenance of food, and producers’ environmental and labor practices.

Consider the book industry. Bookstores once regarded their inventory as a trade secret. Customers, competitors, and even suppliers seldom knew what stock bookstores held. Nowadays, by contrast, bookstores not only report what stock they carry but also when customers’ orders will arrive. If they did not, they would be excluded from the product-aggregation sites that have come to determine so many buying decisions.

The health-care sector is a prime target for achieving new efficiencies. By sharing the treatment data of a large patient population, for example, care providers can better identify practices that could save $180 billion annually.

The Open Data Institute-backed start-up Mastodon C uses open data on doctors’ prescriptions to differentiate among expensive patent medicines and cheaper “off-patent” varieties; when applied to just one class of drug, that could save around $400 million in one year for the British National Health Service. Meanwhile, open data on acquired infections in British hospitals has led to the publication of hospital-performance tables, a major factor in the 85% drop in reported infections.
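To make that mechanism concrete, here is a toy sketch (not Mastodon C’s actual code) of the kind of comparison involved: take open prescribing records, compute the cost per item for branded and generic versions of the same drug, and sum the gap. All figures, drug names, and column names below are invented for illustration.

```python
# Toy illustration (not Mastodon C's code) of comparing branded vs.
# generic prescribing costs in open data. All numbers are invented.
import pandas as pd

# Hypothetical rows in the style of open prescribing data.
rx = pd.DataFrame({
    "practice": ["A", "A", "B", "B"],
    "drug": ["Lipitor (branded)", "Atorvastatin (generic)",
             "Lipitor (branded)", "Atorvastatin (generic)"],
    "items": [120, 400, 300, 150],
    "cost_gbp": [3600.0, 800.0, 9000.0, 300.0],
})

rx["cost_per_item"] = rx["cost_gbp"] / rx["items"]

# Average unit cost of the generic equivalent.
generic_unit = rx.loc[rx["drug"].str.contains("generic"), "cost_per_item"].mean()

# Potential saving if every branded item were switched to the generic.
branded = rx[rx["drug"].str.contains("branded")]
saving = ((branded["cost_per_item"] - generic_unit) * branded["items"]).sum()
print(f"Potential saving: £{saving:,.0f}")  # £11,760 on this toy data
```

The real analysis runs the same logic over millions of prescribing rows, which is how a single drug class can surface hundreds of millions in potential savings.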

There are also opportunities to prevent lifestyle-related diseases and improve treatment by enabling patients to compare their own data with aggregated data on similar patients. This has been shown to motivate patients to improve their diet, exercise more often, and take their medicines regularly. Similarly, letting people compare their energy use with that of their peers could prompt them to save hundreds of billions of dollars in electricity costs each year, to say nothing of reducing carbon emissions.

Such benchmarking is even more valuable for businesses seeking to improve their operational efficiency. The oil and gas industry, for example, could save $450 billion annually by sharing anonymized and aggregated data on the management of upstream and downstream facilities.

Finally, the move toward open data serves a variety of socially desirable ends, ranging from the reuse of publicly funded research to support work on poverty, inclusion, or discrimination, to the disclosure by corporations such as Nike of their supply-chain data and environmental impact.

There are, of course, challenges arising from the proliferation and systematic use of open data. Companies fear for their intellectual property; ordinary citizens worry about how their private information might be used and abused. Last year, Telefónica, the world’s fifth-largest mobile-network provider, tried to allay such fears by launching a digital confidence program to reassure customers that innovations in transparency would be implemented responsibly and without compromising users’ personal information.

The sensitive handling of these issues will be essential if we are to reap the potential $3 trillion in value that usage of open data could deliver each year. Consumers, policymakers, and companies must work together, not just to agree on common standards of analysis, but also to set the ground rules for the protection of privacy and property.”

Mapping Twitter Topic Networks: From Polarized Crowds to Community Clusters


Pew Internet: “Conversations on Twitter create networks with identifiable contours as people reply to and mention one another in their tweets. These conversational structures differ, depending on the subject and the people driving the conversation. Six structures are regularly observed: divided, unified, fragmented, clustered, and inward and outward hub and spoke structures. These are created as individuals choose whom to reply to or mention in their Twitter messages and the structures tell a story about the nature of the conversation.
If a topic is political, it is common to see two separate, polarized crowds take shape. They form two distinct discussion groups that mostly do not interact with each other. Frequently these are recognizably liberal or conservative groups. The participants within each separate group commonly mention very different collections of website URLs and use distinct hashtags and words. The split is clearly evident in many highly controversial discussions: people in clusters that we identified as liberal used URLs for mainstream news websites, while groups we identified as conservative used links to conservative news websites and commentary sources. At the center of each group are discussion leaders, the prominent people who are widely replied to or mentioned in the discussion. In polarized discussions, each group links to a different set of influential people or organizations that can be found at the center of each conversation cluster.
While these polarized crowds are common in political conversations on Twitter, it is important to remember that the people who take the time to post and talk about political issues on Twitter are a special group. Unlike many other Twitter members, they pay attention to issues, politicians, and political news, so their conversations are not representative of the views of the full Twitterverse. Moreover, Twitter users are only 18% of internet users and 14% of the overall adult population. Their demographic profile is not reflective of the full population. Additionally, other work by the Pew Research Center has shown that tweeters’ reactions to events are often at odds with overall public opinion— sometimes being more liberal, but not always. Finally, forthcoming survey findings from Pew Research will explore the relatively modest size of the social networking population who exchange political content in their network.
Still, the structure of these Twitter conversations says something meaningful about political discourse these days and the tendency of politically active citizens to sort themselves into distinct partisan camps. Social networking maps of these conversations provide new insights because they combine analysis of the opinions people express on Twitter, the information sources they cite in their tweets, analysis of who is in the networks of the tweeters, and how big those networks are. And to the extent that these online conversations are followed by a broader audience, their impact may reach well beyond the participants themselves.
Our approach combines analysis of the size and structure of the network and its sub-groups with analysis of the words, hashtags and URLs people use. Each person who contributes to a Twitter conversation is located in a specific position in the web of relationships among all participants in the conversation. Some people occupy rare positions in the network that suggest that they have special importance and power in the conversation.
Social network maps of Twitter crowds and other collections of social media can be created with innovative data analysis tools that provide new insight into the landscape of social media. These maps highlight the people and topics that drive conversations and group behavior – insights that add to what can be learned from surveys or focus groups or even sentiment analysis of tweets. Maps of previously hidden landscapes of social media highlight the key people, groups, and topics being discussed.

Conversational archetypes on Twitter

The Polarized Crowd network structure is only one of several different ways that crowds and conversations can take shape on Twitter. There are at least six distinctive structures of social media crowds which form depending on the subject being discussed, the information sources being cited, the social networks of the people talking about the subject, and the leaders of the conversation. Each has a different social structure and shape: divided, unified, fragmented, clustered, and inward and outward hub and spokes.
After an analysis of many thousands of Twitter maps, we found six different kinds of network crowds.

Polarized Crowds in Twitter Conversations

Polarized Crowd: Polarized discussions feature two big and dense groups that have little connection between them. The topics being discussed are often highly divisive and heated political subjects. In fact, there is usually little conversation between these groups despite the fact that they are focused on the same topic. Polarized Crowds on Twitter are not arguing. They are ignoring one another while pointing to different web resources and using different hashtags.
Why this matters: It shows that partisan Twitter users rely on different information sources. While liberals link to many mainstream news sources, conservatives link to a different set of websites.

Tight Crowds in Twitter Conversations

Tight Crowd: These discussions are characterized by highly interconnected people with few isolated participants. Many conferences, professional topics, hobby groups, and other subjects that attract communities take this Tight Crowd form.
Why this matters: These structures show how networked learning communities function and how sharing and mutual support can be facilitated by social media.

Brand Clusters in Twitter Conversations

Brand Clusters: When well-known products or services or popular subjects like celebrities are discussed in Twitter, there is often commentary from many disconnected participants, or “isolates.” Well-known brands and other popular subjects can attract large, fragmented Twitter populations who tweet about the topic but not to each other. The larger the population talking about a brand, the less likely it is that participants are connected to one another. Brand-mentioning participants focus on a topic, but tend not to connect to each other.
Why this matters: There are still institutions and topics that command mass interest. Oftentimes, the Twitter chatter about these institutions and their messages is not among people connecting with each other. Rather, they are relaying or passing along the message of the institution or person, with no extra exchange of ideas.

Community Clusters in Twitter Conversations

Community Clusters: Some popular topics may develop multiple smaller groups, which often form around a few hubs, each with its own audience, influencers, and sources of information. These Community Clusters conversations look like bazaars with multiple centers of activity. Global news stories often attract coverage from many news outlets, each with its own following. That creates a collection of medium-sized groups—and a fair number of isolates.
Why this matters: Some information sources and subjects ignite multiple conversations, each cultivating its own audience and community. These can illustrate diverse angles on a subject based on its relevance to different audiences, revealing a diversity of opinion and perspective on a social media topic.

Broadcast Networks in Twitter Conversations

Broadcast Network: Twitter commentary around breaking news stories and the output of well-known media outlets and pundits has a distinctive hub and spoke structure in which many people repeat what prominent news and media organizations tweet. The members of the Broadcast Network audience are often connected only to the hub news source, without connecting to one another. In some cases there are smaller subgroups of densely connected people— think of them as subject groupies—who do discuss the news with one another.
Why this matters: There are still powerful agenda setters and conversation starters in the new social media world. Enterprises and personalities with loyal followings can still have a large impact on the conversation.

Support Networks in Twitter Conversations

Support Network: Customer complaints for a major business are often handled by a Twitter service account that attempts to resolve and manage customer issues around their products and services. This produces a hub and spoke structure that is different from the Broadcast Network pattern. In the Support Network structure, the hub account replies to many otherwise disconnected users, creating outward spokes. In contrast, in the Broadcast pattern, the hub gets replied to or retweeted by many disconnected people, creating inward spokes.
Why this matters: As government, businesses, and groups increasingly provide services and support via social media, support network structures become an important benchmark for evaluating the performance of these institutions. Customer support streams of advice and feedback can be measured in terms of efficiency and reach using social media network maps.

Why is it useful to map the social landscape this way?

Social media is increasingly home to civil society, the place where knowledge sharing, public discussions, debates, and disputes are carried out. As the new public square, social media conversations are as important to document as any other large public gathering. Network maps of public social media discussions in services like Twitter can provide insights into the role social media plays in our society. These maps are like aerial photographs of a crowd, showing the rough size and composition of a population. These maps can be augmented with on-the-ground interviews with crowd participants, collecting their words and interests. Insights from network analysis and visualization can complement survey or focus group research methods and can enhance sentiment analysis of the text of messages like tweets.
Like topographic maps of mountain ranges, network maps can also illustrate the points on the landscape that have the highest elevation. Some people occupy locations in networks that are analogous to positions of strategic importance on the physical landscape. Network measures of “centrality” can identify key people in influential locations in the discussion network, highlighting the people leading the conversation. The content these people create is often the most popular and widely repeated in these networks, reflecting the significant role these people play in social media discussions.
While the physical world has been mapped in great detail, the social media landscape remains mostly unknown. However, the tools and techniques for social media mapping are improving, allowing more analysts to get social media data, analyze it, and contribute to the collective construction of a more complete map of the social media world. A more complete map and understanding of the social media landscape will help interpret the trends, topics, and implications of these new communication technologies.”
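The map-and-measure workflow Pew describes can be illustrated with off-the-shelf network tooling. Below is a minimal Python sketch, using networkx and an invented reply/mention edge list (Pew’s own maps were produced with a dedicated tool, not this code), of how such a conversation map is built and how its central participants and separate crowds are identified.

```python
# Minimal sketch of the map-and-measure workflow: build a directed
# graph of who replied to or mentioned whom, then surface the most
# central participants and the separate "crowds". Edge list is invented.
import networkx as nx

edges = [  # (source, target): source replied to or mentioned target
    ("alice", "bob"), ("carol", "bob"), ("dave", "bob"),
    ("erin", "frank"), ("grace", "frank"), ("erin", "grace"),
]
G = nx.DiGraph(edges)

# Degree centrality highlights the hubs of the conversation.
top = sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:3]
for user, score in top:
    print(f"{user}: degree centrality {score:.2f}")

# Weakly connected components approximate distinct crowds on the map.
crowds = list(nx.weakly_connected_components(G))
print(f"{len(crowds)} separate conversation clusters")  # 2 here
```

On this toy data the two disconnected components mimic a miniature Polarized Crowd: two groups talking about the same subject without replying to each other, each with its own hub.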

Can We Balance Data Protection With Value Creation?


A “privacy perspective” by Sara Degli Esposti: “In the last few years there has been a dramatic change in the opportunities organizations have to generate value from the data they collect about customers or service users. Customers and users are rapidly becoming collections of “data points” and organizations can learn an awful lot from the analysis of this huge accumulation of data points, also known as “Big Data.”

Organizations are perhaps thrilled, dreaming about new potential applications of digital data, but also a bit concerned about hidden risks and unintended consequences. Take, for example, the human rights protections placed on personal data by the EU. Regulators are watching closely, intending to preserve the eight basic privacy principles without compromising the free flow of information.
Some may ask whether it’s even possible to balance the two.
Enter the Big Data Protection Project (BDPP): an Open University study on organizations’ ability to leverage Big Data while complying with EU data protection principles. The study represents a chance for you to contribute to, and learn about, the debate on the reform of the EU Data Protection Directive. It is open to staff with interests in data management or use, from all types of organizations, both for-profit and nonprofit, with interests in Europe.
Join us by visiting the study’s page on the Open University website. Participants will receive a report with all the results. The BDPP is a scientific project—no commercial organization is involved—with implications relevant to both policy-makers and industry representatives.
What kind of legislation do we need to create that positive system of incentive for organizations to innovate in the privacy field?
There is no easy answer.
That’s why we need to undertake empirical research into actual information management practices to understand the effects of regulation on people and organizations. Legal instruments conceived with the best intentions can be ineffective or detrimental in practice. However, other factors can also intervene and motivate business players to develop procedures and solutions which go far beyond compliance. Good legislation should complement market forces in bringing values and welfare to both consumers and organizations.
Is European data protection law keeping its promise of protecting users’ information privacy while contributing to the flourishing of the digital economy or not? Will the proposed General Data Protection Regulation (GDPR) be able to achieve this goal? What would you suggest to do to motivate organizations to invest in information security and take information privacy seriously?
Let’s consider for a second some basic ideas such as the eight fundamental data protection principles: notice, consent, purpose specification and limitation, data quality, respect of data subjects’ rights, information security and accountability. Many of these ideas are present in the EU 1995 Data Protection Directive, the U.S. Fair Information Practice Principles (FIPPs) and the 1980 OECD Guidelines. The fundamental question now is, should all these ideas be brought into the future, as suggested in the proposed new GDPR, or should we reconsider our approach and revise some of them, as recommended in the 21st century version of the 1980 OECD Guidelines?
As you may know, notice and consent are often taken as examples of how very good intentions can be transformed into actions of limited importance. Rather than increase people’s awareness of the growing data economy, notice and consent have produced a tick-box tendency accompanied by long and unintelligible privacy policies. Besides, consent is rarely freely granted. Individuals give their consent in exchange for some product or service or as part of a job relationship. The imbalance between the two goods traded—think about how youngsters perceive not having access to some social media as a form of social exclusion—and the lack of feasible alternatives often make an instrument, such as the current use made of consent, meaningless.
On the other hand, a principle such as data quality, which has received very limited attention, could offer opportunities to policy-makers and businesses to reopen the debate on users’ control of their personal data. Having updated, accurate data is something very valuable for organizations. Data quality is also key to the success of many business models. New partnerships between users and organizations could be envisioned under this principle.
Finally, data collection limitation and purpose specification could be other examples of the divide between theory and practice: The tendency we see is that people and businesses want to share, merge and reuse data over time and to do new and unexpected things. Of course, we all want to avoid function creep and prevent any detrimental use of our personal data. We probably need new, stronger mechanisms to ensure data are used for good purposes.
Digital data have become economic assets these days. We need good legislation to stop the black market for personal data and open the debate on how each of us wants to contribute to, and benefit from, the data economy.”

OpenRFPs


Clay Johnson: “Tomorrow at CodeAcross we’ll be launching our first community-based project, OpenRFPs. The goal is to liberate the data inside of every state RFP listing website in the country. We hope you’ll find your own state’s RFP site, and contribute a parser.
The Department of Better Technology’s goal is to improve the way government works by making it easier for small, innovative businesses to provide great technology to government. But those businesses can barely make it through the front door when the RFPs themselves are stored in archaic systems, with sloppy user interfaces and disparate data formats, or locked behind paywalls.
What are a business’s current choices? They can pay for subscription-based services that are either shady vendors disguised as officious-sounding associations or have less-than-stellar user interfaces.
The sum of these experiences? The barrier to entry is so high that the small companies who are great at technology are effectively turned away at the door. The only ones who get through are those that can afford to spend employee time scouring for the right RFPs.
But the harm to business isn’t the biggest harm. Government contracting becomes an area overlooked by the transparency community (especially at the state and local level) because the data isn’t available en masse unless it’s paid for commercially, which comes with a restrictive usage license.”
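To make the “contribute a parser” idea concrete, here is a minimal sketch of what a contributed state RFP scraper might look like. The URL, page structure, and output fields are hypothetical stand-ins, not the OpenRFPs spec; an actual contribution would target a real state procurement site and follow the project’s conventions.

```python
# Hypothetical sketch of a state RFP parser in the spirit of OpenRFPs.
# The URL and HTML structure are invented; a real parser would target
# an actual state procurement site and the project's data format.
import requests
from bs4 import BeautifulSoup

LISTING_URL = "https://procurement.example-state.gov/rfps"  # hypothetical

def parse_rfps(url=LISTING_URL):
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    rfps = []
    # Assume each posting is a table row: title link, agency, due date.
    for row in soup.select("table.rfp-listing tr")[1:]:  # skip header
        cells = row.find_all("td")
        if len(cells) < 3:
            continue
        link = cells[0].find("a")
        rfps.append({
            "title": cells[0].get_text(strip=True),
            "url": link["href"] if link else None,
            "agency": cells[1].get_text(strip=True),
            "due_date": cells[2].get_text(strip=True),
        })
    return rfps

if __name__ == "__main__":
    for rfp in parse_rfps():
        print(rfp["due_date"], rfp["title"])
```

The point of collecting one such parser per state is that each normalizes its state’s idiosyncratic listing format into a common record, so the combined dataset can be searched and reused freely.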