Procurement and Civic Innovation


Derek Eder: “Have you ever used a government website and had a not-so-awesome experience? In our slick 2014 world of Google, Twitter and Facebook, why does government tech feel like it’s stuck in the 1990s?
The culprit: bad technology procurement.
Procurement is the procedure a government follows to buy something: letting suppliers know what they want, asking for proposals, restricting what kinds of proposals they will consider, limiting what kinds of firms they will do business with, and deciding whether they got what they paid for.
The City of Chicago buys technology about the same way that they buy health insurance, a bridge, or anything else in between. And that’s the problem.
Chicago’s government has a long history of corruption, nepotism and patronage. After each outrage, new rules are piled upon existing rules to prevent that crisis from happening again. Unfortunately, this accumulation of rules does not just protect against the bad guys, it also forms a huge barrier to entry for technology innovators.
So, the firms that end up building our city’s digital public services tend to be good at picking their way through the barriers of the procurement process, not at building good technology. Instead of making government tech contracting fair and competitive, procurement has unfortunately had the opposite effect.
So where does this leave us? Despite Chicago’s flourishing startup scene, and despite having one of the country’s largest communities of civic technologists, the Windy City’s digital public services are still terribly designed and far too expensive for the taxpayer.

The Technology Gap

The best way to see the gap between Chicago’s volunteer civic tech community and the technology the City pays for is to look at an entire class of civic apps that are essentially facelifts on existing government websites….
You may have noticed a marked difference in quality and usability between these three civic apps and their official government counterparts.
Now consider this: all of the government sites took months to build and cost hundreds of thousands of dollars. Was My Car Towed, 2nd City Zoning and CrimeAround.us were all built by one to two people in a matter of days, for no money.
Think about that for a second. Consider how much the City is overpaying for websites its citizens can barely use. And imagine how much better our digital city services would be if the City worked with the very same tech startups they’re trying to nurture.
Why do these civic apps exist? Well, with the City of Chicago releasing hundreds of high quality datasets on their data portal over the past three years (for which they should be commended), a group of highly passionate and skilled technologists have started using their skills to develop these apps and many others.
It’s mostly for fun, learning, and a sense of civic duty, but it demonstrates there’s no shortage of highly skilled developers who are interested in using technology to make their city a better place to live in…
Two years ago, in the Fall of 2011, I learned about procurement in Chicago for the first time. An awesome group of developers, designers and I had just built ChicagoLobbyists.org – our very first civic app – for the City of Chicago’s first open data hackathon….
Since then, the City has often cited ChicagoLobbyists.org as evidence of the innovation-sparking potential of open data.
Shortly after our site launched, a Request For Proposals, or RFP, was issued by the City for an ‘Online Lobbyist Disclosure System.’
Hey! We just built one of those! Sure, we would need to make some updates to it—adding a way for lobbyists to log in and submit their info—but we had a solid start. So, our scrappy group of tech volunteers decided to respond to the RFP.
After reading all 152 pages of the document, we realized we had no chance of getting the bid. It was impossible for the ChicagoLobbyists.org group to meet the legal requirements (as it would have been for any small software shop):

  • audited financial statements for the past 3 years
  • an economic disclosure statement (EDS) and affidavit
  • proof of $500k in workers’ compensation and employers’ liability insurance
  • proof of $2 million in professional liability insurance”

A Brief History of Databases


Stephen Fortune: “Databases are mundane, the epitome of the everyday in digital society. Despite the enthusiasm and curiosity that such ubiquitous and important items merit, arguably the only people to discuss them are those curious enough to thumb through the dry and technical literature that chronicles the database’s ascension.
Which is a shame, because the use of databases actually illuminates so much about how we come to terms with the world around us. The history of databases is a tale of experts at different times attempting to make sense of complexity. As a result, the first information explosions of the early computer era left an enduring impact on how we think about structuring information. The practices, frameworks, and uses of databases, so pioneering at the time, have since become intrinsic to how organizations manage data. If we are facing another data deluge (for there have been many), it’s different in kind from the ones that preceded it. The speed of today’s data production is precipitated not by the sudden appearance of entirely new technologies but by demand and accessibility that have steadily risen through the strata of society as databases become more and more ubiquitous and essential to our daily lives. And it turns out we’re not drowning in data; we instead appear to have made a sort of unspoken peace with it, just as the Venetians and Dutch before us. We’ve built edifices to house the data and, witnessing that this did little to stem the flow, have subsequently created our enterprises atop and around them. Surveying the history of databases illuminates a lot about how we come to terms with the world around us, and how organizations have come to terms with us.

Unit Records & Punch Card Databases

The history of data processing is punctuated with many high water marks of data abundance. Each successive wave has been incrementally greater in volume, but all are united by the trope that data production exceeds what tabulators (whether machine or human) can handle. The growing amount of data gathered by the 1880 US Census (which took human tabulators 8 of the 10 years before the next census to compute) saw Herman Hollerith kickstart the data processing industry. He devised “Hollerith cards” (his personal brand of punchcard) and the keypunch, sorter, and tabulator unit record machines. The latter three machines were built for the sole purpose of crunching numbers, with the data represented by holes on the punch cards. Hollerith’s Tabulating Machine Company was later merged with three other companies into International Business Machines (IBM), an enterprise that casts a long shadow over this history of databases….”

A Framework for Benchmarking Open Government Data Efforts


DS Sayogo, TA Pardo, M Cook in the HICSS ’14 Proceedings of the 2014 47th Hawaii International Conference on System Sciences: “This paper presents a preliminary exploration of the status of open government data worldwide as well as an in-depth evaluation of selected open government data portals. Using web content analysis of the open government data portals from 35 countries, this study outlines the progress of open government data efforts at the national government level. The paper also presents an in-depth evaluation of selected cases to justify the application of a proposed framework for understanding the status of open government data initiatives. The findings of this exploration offer a new level of understanding of the depth, breadth, and impact of current open government data efforts. The review results also point to the different stages of open government data portal development in terms of data content, data manipulation capability, and participation and engagement capability. This finding suggests that the development of open government data portals follows an incremental approach similar to e-government development stages in general. Finally, the paper offers several observations on the policy and practical implications of open government data portal development, drawn from the application of the proposed framework.”

Social Media as Government Watchdog


Gordon Crovitz in the Wall Street Journal: “Two new data points for the debate on whether greater access to the Internet leads to more freedom and fewer authoritarian regimes:

According to reports last week, Facebook plans to buy a company that makes solar-powered drones that can hover for years at high altitudes without refueling, which it would use to bring the Internet to parts of the world not yet on the grid. In contrast to this futuristic vision, Russia evoked land grabs of the analog Soviet era by invading Crimea after Ukrainians forced out Vladimir Putin’s ally as president.
Internet idealists can point to another triumph in helping bring down Ukraine’s authoritarian government. Ukrainian citizens ignored intimidation including officious text messages: “Dear subscriber, you are registered as a participant in a mass disturbance.” Protesters made the most of social media to plan demonstrations and avoid attacks by security forces.
But Mr. Putin quickly delivered the message that social media only goes so far against a fully committed authoritarian. His claim that he had to invade to protect ethnic Russians in Crimea was especially brazen because there had been no loud outcry, on social media or otherwise, among Russian speakers in the region.
A new book reports the state of play on the Internet as a force for freedom. For a decade, Emily Parker, a former Wall Street Journal editorial-page writer and State Department staffer, has researched the role of the Internet in China, Cuba and Russia. The title of her book, “Now I Know Who My Comrades Are,” comes from a blogger in China who explained to Ms. Parker how the Internet helps people discover they are not alone in their views and aspirations for liberty.
Officials in these countries work hard to keep critics isolated and in fear. In Russia, Ms. Parker notes, there is also apathy because the Putin regime seems so entrenched. “Revolutions need a spark, often in the form of a political or economic crisis,” she observes. “Social media alone will not light that spark. What the Internet does create is a new kind of citizen: networked, unafraid, and ready for action.”
Asked about lessons from the invasion of Crimea, Ms. Parker noted that the Internet “chips away at Russia’s control over information.” She added: “Even as Russian state media tries to shape the narrative about Ukraine, ordinary Russians can go online to seek the truth.”
But this same shared awareness may also be accelerating a decline in U.S. influence. In the digital era, U.S. failure to make good on its promises reduces the stature of Washington faster than similar inaction did in the past.
Consider the Hungarian uprising of 1956, the first significant rebellion against Soviet control. The U.S. secretary of state, John Foster Dulles, said: “To all those suffering under communist slavery, let us say you can count on us.” Yet no help came as Soviet tanks rolled into Budapest, tens of thousands were killed, and the leader who tried to secede from the Warsaw Pact, Imre Nagy, was executed.
There were no Facebook posts or YouTube videos instantly showing the result of U.S. fecklessness. In the digital era, scenes of Russian occupation of Crimea are available 24/7. People can watch Mr. Putin’s brazen press conferences and see for themselves what he gets away with.
The U.S. stood by as Syrian civilians were massacred and gassed. There was instant global awareness when President Obama last year backed down from enforcing his “red line” when the Syrian regime used chemical weapons. American inaction in Syria sent a green light for Mr. Putin and others around the world to act with impunity.
Just in recent weeks, Iran tried to ship Syrian rockets to Gaza to attack Israel; Moscow announced it would use bases in Cuba, Venezuela and Nicaragua for its navy and bombers; and China budgeted a double-digit increase in military spending as President Obama cut back the U.S. military.
All institutions are more at risk in this era of instant communication and awareness. Reputations get lost quickly, whether it’s a misstep by a company, a gaffe by a politician, or a lack of resolve by an American president.
Over time, the power of the Internet to bring people together will help undermine authoritarian governments. But as Mr. Putin reminds us, in the short term a peaceful world depends more on a U.S. resolute in using its power and influence to deter aggression.”

openFDA


Dr. Taha Kass-Hout at the FDA: “Welcome to the new home of openFDA! We are incredibly excited to see so much interest in our work and hope that this site can be a valuable resource to those wishing to use public FDA data in both the public and private sector to spur innovation, further regulatory or scientific missions, educate the public, and save lives.
Through openFDA, developers and researchers will have easy access to high-value FDA public data through RESTful APIs and structured file downloads. In short, our goal is to make it simple for an application, mobile, or web developer, or researchers of all stripes, to use data from FDA in their work. We’ve done an extensive amount of research, both internally and with potential external developers, to identify which datasets are both in demand and have a high barrier to entry. As a result, our initial pilot project will cover a number of datasets from various areas within FDA, grouped into three broad focus areas: Adverse Events, Product Recalls, and Product Labeling. These APIs won’t map one-to-one to FDA’s internal data organizational structure; rather, we intend to abstract on top of a myriad of datasets and provide appropriate metadata and identifiers when possible. Of course, we’ll always make the raw source data available for people who prefer to work that way (and it’s worth mentioning that we will not be releasing any data that could potentially be used to identify individuals or other private information).
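By way of illustration, here is a minimal sketch of how a developer might query a RESTful endpoint of the kind openFDA describes. The endpoint URL, query parameters, and field names below are assumptions made for this example and are not taken from the announcement above.

```python
# Hypothetical sketch of calling an openFDA-style REST API for adverse event
# reports. The endpoint path, parameters, and field names are assumptions.
import json
import urllib.parse
import urllib.request

BASE_URL = "https://api.fda.gov/drug/event.json"  # assumed endpoint


def fetch_adverse_events(search_term: str, limit: int = 5) -> list:
    """Return up to `limit` adverse-event records matching a search expression."""
    query = urllib.parse.urlencode({"search": search_term, "limit": limit})
    with urllib.request.urlopen(f"{BASE_URL}?{query}") as response:
        payload = json.load(response)
    return payload.get("results", [])


if __name__ == "__main__":
    # e.g. reports listing aspirin among the patient's drugs (field name assumed)
    for record in fetch_adverse_events('patient.drug.medicinalproduct:"ASPIRIN"'):
        print(record.get("safetyreportid"), record.get("receiptdate"))
```

The same pattern (a base URL plus a search expression and a result limit) would apply to the Product Recalls and Product Labeling focus areas, whatever their final endpoints turn out to be.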
The openFDA initiative is one part of the larger Office of Informatics and Technology Innovation roadmap. As part of my role as FDA’s Chief Health Informatics Officer, I’m working to lead efforts to turn FDA into a cutting-edge technology organization. You’ll be hearing more about our other initiatives, including Cloud Computing, High Performance Computing, Next Generation Sequencing, and mobile-first deployment, in the near future.
As we work towards a release of openFDA we’ll begin to share more about our work and how you can get involved. In the meantime, I suggest you sign up for our listserv (on our home page) to get the latest updates on the project. You can also reach our team at [email protected] if there is a unique partnership opportunity or other collaboration you wish to discuss.”

Putting Crowdsourcing on the Map


MIT Technology Review: “Even in San Francisco, where Google’s roving Street View cars have mapped nearly every paved surface, there are still places that have remained untouched, such as the flights of stairs that serve as pathways between streets in some of the city’s hilliest neighborhoods.
It’s these places that a startup called Mapillary is focusing on. Cofounders Jan Erik Solem and Johan Gyllenspetz are attempting to build an open, crowdsourced, photographic map that lets smartphone users log all sorts of places, creating a richer view of the world than what is offered by Street View and other street-level mapping services. If contributors provide images often, that view could be more representative of how things look right now.
Google itself is no stranger to the benefits of crowdsourced map content: it paid $966 million last year for traffic and navigation app Waze, whose users contribute data. Google also lets people augment Street View content with their own images. But Solem and Gyllenspetz think there’s still plenty of room for Mapillary, which they say can be used for everything from tracking a nature hike to offering more up-to-date images to house hunters and Airbnb users.
Solem and Gyllenspetz have only been working on the project for four months; they released an iPhone app in November, and an Android app in January. So far, there are just a few hundred users who have shared about 100,000 photos on the service. While it’s free for anyone to use, the startup plans to eventually make money by licensing the data its users generate to companies.
With the app, a user can choose to collect images by walking, biking, or driving. Once you press a virtual shutter button within the app, it takes a photo every two seconds, until you press the button again. You can then upload the images to Mapillary’s service via Wi-Fi, where each photo’s location is noted through its GPS tag. Computer-vision software compares each photo with others that are within a radius of about 100 meters, searching for matching image features so it can find the geometric relationship between the photos. It then places those images properly on the map, and stitches them all together. When new images come in of an area that has already been mapped, Mapillary will add them to its database, too.
It can take less than 30 seconds for the images to show up on the Web-based map, but several minutes for the images to be fully processed. As with Google’s Street View photos, image-recognition software blurs out faces and license plate numbers.
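To make the matching step concrete, the sketch below shows the kind of 100-meter candidate search described above: each new photo is compared only against photos whose GPS tags fall within that radius. The Photo structure, field names, and threshold handling are assumptions for illustration, not Mapillary’s actual implementation.

```python
# Illustrative sketch of selecting nearby photos for feature matching.
# Data structures and the 100 m threshold are assumptions, not Mapillary code.
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000
MATCH_RADIUS_M = 100  # compare a new image only against photos this close


@dataclass
class Photo:
    photo_id: str
    lat: float
    lon: float


def haversine_m(a: Photo, b: Photo) -> float:
    """Great-circle distance between two photos' GPS tags, in meters."""
    dlat = radians(b.lat - a.lat)
    dlon = radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(h))


def candidates_for_matching(new_photo: Photo, existing: list) -> list:
    """Photos close enough that image-feature matching is worth attempting."""
    return [p for p in existing if haversine_m(new_photo, p) <= MATCH_RADIUS_M]
```

Only the photos returned by such a search would then go through the more expensive image-feature comparison and geometric placement described above.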
Users can edit Mapillary’s map by moving around the icons that correspond to images—to fix a misplaced image, for instance. Eventually, users will also be able to add comments and tags.
So far, Mapillary’s map is quite sparse. But the few hundred users trying out Mapillary include some map providers in Europe, and the 100,000 or so images contributed to the service range from a bike path on Venice Beach in California to a snow-covered ski slope in Sweden.
Street-level images can be viewed on the Web or through Mapillary’s smartphone apps (though the apps just pull up the Web page within the app). Blue lines and colored tags indicate where users have added photos to the map; you can zoom in to see them at the street level.

Navigating through photos is still quite rudimentary; you can tap or click to move from one image to the next with onscreen arrows, depending on the direction you want to explore.
Beyond technical and design challenges, the biggest issue Mapillary faces is convincing a large enough number of users to build up its store of images so that others will start using it and contributing as well, and then ensuring that these users keep coming back.”

How government can engage with citizens online – expert views


The Guardian: In our livechat on 28 February the experts discussed how to connect up government and citizens online. Digital public services are not just for ‘techno wizzy people’, so government should make them easier for everyone… Read the livechat in full
Michael Sanders, head of research for the behavioural insights team, @mike_t_sanders
It’s important that government is a part of people’s lives: when people interact with government it shouldn’t be a weird and alienating experience, but one that feels part of their everyday lives.
Online services are still too often difficult to use: most people who use the HMRC website will do so infrequently, and will forget its many nuances between visits. This is getting better but there’s a long way to go.
Digital by default keeps things simple: one of our main findings from our research on improving public services is that we should do all we can to “make it easy”.
There is always a risk of exclusion: we should avoid “digital by default” becoming “digital only”.
Ben Matthews, head of communications at Futuregov, @benrmatthews
We prefer digital by design to digital by default: sometimes people can use technology badly, under the guise of ‘digital by default’. We should take a more thoughtful approach to technology, using it as a means to an end – to help us be open, accountable and human.
Leadership is important: you can get enthusiasm from the frontline or younger workers who are comfortable with digital tools, but until they’re empowered by the top of the organisation to use them actively and effectively, we’ll see little progress.
Jargon scares people off: ‘big data’ or ‘open data’, for example….”

Predicting Individual Behavior with Social Networks


Article by Sharad Goel and Daniel Goldstein (Microsoft Research): “With the availability of social network data, it has become possible to relate the behavior of individuals to that of their acquaintances on a large scale. Although the similarity of connected individuals is well established, it is unclear whether behavioral predictions based on social data are more accurate than those arising from current marketing practices. We employ a communications network of over 100 million people to forecast highly diverse behaviors, from patronizing an off-line department store to responding to advertising to joining a recreational league. Across all domains, we find that social data are informative in identifying individuals who are most likely to undertake various actions, and moreover, such data improve on both demographic and behavioral models. There are, however, limits to the utility of social data. In particular, when rich transactional data were available, social data did little to improve prediction.”

The benefits—and limits—of decision models


Article by Phil Rosenzweig in McKinsey Quarterly: “The growing power of decision models has captured plenty of C-suite attention in recent years. Combining vast amounts of data and increasingly sophisticated algorithms, modeling has opened up new pathways for improving corporate performance. Models can be immensely useful, often making very accurate predictions or guiding knotty optimization choices, and in the process they can help companies to avoid some of the common biases that at times undermine leaders’ judgments.
Yet when organizations embrace decision models, they sometimes overlook the need to use them well. In this article, I’ll address an important distinction between outcomes leaders can influence and those they cannot. For things that executives cannot directly influence, accurate judgments are paramount and the new modeling tools can be valuable. However, when a senior manager can have a direct influence over the outcome of a decision, the challenge is quite different. In this case, the task isn’t to predict what will happen but to make it happen. Here, positive thinking—indeed, a healthy dose of management confidence—can make the difference between success and failure.

Where models work well

Examples of successful decision models are numerous and growing. Retailers gather real-time information about customer behavior by monitoring preferences and spending patterns. They can also run experiments to test the impact of changes in pricing or packaging and then rapidly observe the quantities sold. Banks approve loans and insurance companies extend coverage, basing their decisions on models that are continually updated, factoring in the most information to make the best decisions.
Some recent applications are truly dazzling. Certain companies analyze masses of financial transactions in real time to detect fraudulent credit-card use. A number of companies are gathering years of data about temperature and rainfall across the United States to run weather simulations and help farmers decide what to plant and when. Better risk management and improved crop yields are the result.
Other examples of decision models border on the humorous. Garth Sundem and John Tierney devised a model to shed light on what they described, tongues firmly in cheek, as one of the world’s great unsolved mysteries: how long will a celebrity marriage last? They came up with the Sundem/Tierney Unified Celebrity Theory, which predicted the length of a marriage based on the couple’s combined age (older was better), whether either had tied the knot before (failed marriages were not a good sign), and how long they had dated (the longer the better). The model also took into account fame (measured by hits on a Google search) and sex appeal (the share of those Google hits that came up with images of the wife scantily clad). With only a handful of variables, the model did a very good job of predicting the fate of celebrity marriages over the next few years.
Models have also shown remarkable power in fields that are usually considered the domain of experts. With data from France’s premier wine-producing regions, Bordeaux and Burgundy, Princeton economist Orley Ashenfelter devised a model that used just three variables to predict the quality of a vintage: winter rainfall, harvest rainfall, and average growing-season temperature. To the surprise of many, the model outperformed wine connoisseurs.
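To show how small such a model can be, here is a minimal sketch of a three-variable least-squares fit of the general kind Ashenfelter’s model represents. The function names and data handling are assumptions for illustration; no coefficients from the actual model are reproduced, and the caller supplies the historical data.

```python
# Minimal sketch of fitting and using a three-variable linear model:
# quality ~ winter rainfall + harvest rainfall + growing-season temperature.
# Function names and data handling are illustrative assumptions.
import numpy as np


def fit_vintage_model(winter_rain, harvest_rain, growing_temp, quality):
    """Ordinary least squares on the three weather variables named in the text."""
    X = np.column_stack([
        np.ones(len(quality)),  # intercept
        winter_rain,
        harvest_rain,
        growing_temp,
    ])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(quality), rcond=None)
    return coeffs  # [intercept, b_winter_rain, b_harvest_rain, b_growing_temp]


def predict_quality(coeffs, winter_rain, harvest_rain, growing_temp):
    """Score a new vintage from its weather figures."""
    return coeffs[0] + coeffs[1] * winter_rain + coeffs[2] * harvest_rain + coeffs[3] * growing_temp
```

The celebrity-marriage model above has the same shape: a handful of measurable inputs, a linear combination, and a prediction that can later be checked against what actually happened.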
Why do decision models perform so well? In part because they can gather vast quantities of data, but also because they avoid common biases that undermine human judgment. People tend to be overly precise, believing that their estimates will be more accurate than they really are. They suffer from the recency bias, placing too much weight on the most immediate information. They are also unreliable: ask someone the same question on two different occasions and you may get two different answers. Decision models have none of these drawbacks; they weigh all data objectively and evenly. No wonder they do better than humans.

Can we control outcomes?

With so many impressive examples, we might conclude that decision models can improve just about anything. That would be a mistake. Executives need not only to appreciate the power of models but also to be cognizant of their limits.
Look back over the previous examples. In every case, the goal was to make a prediction about something that could not be influenced directly. Models can estimate whether a loan will be repaid but won’t actually change the likelihood that payments will arrive on time, give borrowers a greater capacity to pay, or make sure they don’t squander their money before payment is due. Models can predict the rainfall and days of sunshine on a given farm in central Iowa but can’t change the weather. They can estimate how long a celebrity marriage might last but won’t help it last longer or cause another to end sooner. They can predict the quality of a wine vintage but won’t make the wine any better, reduce its acidity, improve the balance, or change the undertones. For these sorts of estimates, finding ways to avoid bias and maintain accuracy is essential.
Executives, however, are not concerned only with predicting things they cannot influence. Their primary duty—as the word execution implies—is to get things done. The task of leadership is to mobilize people to achieve a desired end. For that, leaders need to inspire their followers to reach demanding goals, perhaps even to do more than they have done before or believe is possible. Here, positive thinking matters. Holding a somewhat exaggerated level of self-confidence isn’t a dangerous bias; it often helps to stimulate higher performance.
This distinction seems simple but it’s often overlooked. In our embrace of decision models, we sometimes forget that so much of life is about getting things done, not predicting things we cannot control.

Improving models over time

Part of the appeal of decision models lies in their ability to make predictions, to compare those predictions with what actually happens, and then to evolve so as to make more accurate predictions. In retailing, for example, companies can run experiments with different combinations of price and packaging, then rapidly obtain feedback and alter their marketing strategy. Netflix captures rapid feedback to learn what programs have the greatest appeal and then uses those insights to adjust its offerings. Models are not only useful at any particular moment but can also be updated over time to become more and more accurate.
Using feedback to improve models is a powerful technique but is more applicable in some settings than in others. Dynamic improvement depends on two features: first, the observation of results should not make any future occurrence either more or less likely and, second, the feedback cycle of observation and adjustment should happen rapidly. Both conditions hold in retailing, where customer behavior can be measured without directly altering it and results can be applied rapidly, with prices or other features changed almost in real time. They also hold in weather forecasting, since daily measurements can refine models and help to improve subsequent predictions. The steady improvement of models that predict weather—from an average error (in the maximum temperature) of 6 degrees Fahrenheit in the early 1970s to 5 degrees in the 1990s and just 4 by 2010—is testimony to the power of updated models.
For other events, however, these two conditions may not be present. As noted, executives not only estimate things they cannot affect but are also charged with bringing about outcomes. Some of the most consequential decisions of all—including the launch of a new product, entry into a new market, or the acquisition of a rival—are about mobilizing resources to get things done. Furthermore, the results are not immediately visible and may take months or years to unfold. The ability to gather and insert objective feedback into a model, to update it, and to make a better decision the next time just isn’t present.
None of these caveats call into question the considerable power of decision analysis and predictive models in so many domains. They help underscore the main point: an appreciation of decision analytics is important, but an understanding of when these techniques are useful and of their limitations is essential, too…”

Trust, Computing, and Society


New book edited by Richard H. R. Harper: “The Internet has altered how people engage with each other in myriad ways, including offering opportunities for people to act distrustfully. This fascinating set of essays explores the question of trust in computing from technical, socio-philosophical, and design perspectives. Why has the identity of the human user been taken for granted in the design of the Internet? What difficulties ensue when it is understood that security systems can never be perfect? What role does trust have in society in general? How is trust to be understood when trying to describe activities as part of a user requirement program? What questions of trust arise in a time when data analytics are meant to offer new insights into user behavior and when users are confronted with different sorts of digital entities? These questions and their answers are of paramount interest to computer scientists, sociologists, philosophers, and designers confronting the problem of trust.

  • Brings together authors from a variety of disciplines
  • Can be adopted in multiple course areas: computer science, philosophy, sociology, anthropology
  • Integrated, multidisciplinary approach to understanding trust as it relates to modern computing”

Table of Contents

1. Introduction and overview Richard Harper
Part I. The Topography of Trust and Computing:
2. The role of trust in cyberspace David Clark
3. The new face of the internet Thomas Karagiannis
4. Trust as a methodological tool in security engineering George Danezis
Part II. Conceptual Points of View:
5. Computing and the search for trust Tom Simpson
6. The worry about trust Olli Lagerspetz
7. The inescapability of trust Bob Anderson and Wes Sharrock
8. Trust in interpersonal interaction and cloud computing Rod Watson
9. Trust, social identity, and computation Charles Ess
Part III. Trust in Design:
10. Design for trusted and trustworthy services M. Angela Sasse and Iacovos Kirlappos
11. Dialogues: trust in design Richard Banks
12. Trusting oneself Richard Harper and William Odom
13. Reflections on trust, computing and society Richard Harper
Bibliography.