Yo Yoshida, Founder & CEO, Appallicious in GovTech: “As Americans, we expect a certain standardization of basic services, infrastructure and laws — no matter where we call home. When you live in Seattle and take a business trip to New York, the electric outlet in the hotel you’re staying in is always compatible with your computer charger. When you drive from San Francisco to Los Angeles, I-5 doesn’t all of a sudden turn into a dirt country road because some cities won’t cover maintenance costs. If you take a 10-minute bus ride from Boston to the city of Cambridge, you know the money in your wallet is still considered legal tender.
Procurement and Civic Innovation
Derek Eder: “Have you ever used a government website and had a not-so-awesome experience? In our slick 2014 world of Google, Twitter and Facebook, why does government tech feel like it’s stuck in the 1990s?
The culprit: bad technology procurement.
Procurement is the procedure a government follows to buy something: letting suppliers know what it wants, asking for proposals, restricting what kinds of proposals it will consider, limiting what kinds of firms it will do business with, and deciding whether it got what it paid for.
The City of Chicago buys technology about the same way that they buy health insurance, a bridge, or anything else in between. And that’s the problem.
Chicago’s government has a long history of corruption, nepotism and patronage. After each outrage, new rules are piled upon existing rules to prevent that crisis from happening again. Unfortunately, this accumulation of rules does not just protect against the bad guys, it also forms a huge barrier to entry for technology innovators.
So, the firms that end up building our city’s digital public services tend to be good at picking their way through the barriers of the procurement process, not at building good technology. Instead of making government tech contracting fair and competitive, procurement has unfortunately had the opposite effect.
So where does this leave us? Despite Chicago’s flourishing startup scene, and despite having one of the country’s largest communities of civic technologists, the Windy City’s digital public services are still terribly designed and far too expensive for the taxpayer.
The Technology Gap
The best way to see the gap between Chicago’s volunteer civic tech community and the technology the City pays for is to look at an entire class of civic apps that are essentially facelifts on existing government websites….
You may have noticed a difference in quality and usability between these three civic apps and their official government counterparts.
Now consider this: all of the government sites took months to build and cost hundreds of thousands of dollars. Was My Car Towed, 2nd City Zoning and CrimeAround.us were all built by one to two people in a matter of days, for no money.
Think about that for a second. Consider how much the City is overpaying for websites its citizens can barely use. And imagine how much better our digital city services would be if the City worked with the very same tech startups they’re trying to nurture.
Why do these civic apps exist? Well, with the City of Chicago releasing hundreds of high-quality datasets on its data portal over the past three years (for which it should be commended), a group of highly passionate and skilled technologists have started using their skills to develop these apps and many others.
It’s mostly for fun, learning, and a sense of civic duty, but it demonstrates there’s no shortage of highly skilled developers who are interested in using technology to make their city a better place to live in…
Two years ago, in the Fall of 2011, I learned about procurement in Chicago for the first time. An awesome group of developers, designers and I had just built ChicagoLobbyists.org – our very first civic app – for the City of Chicago’s first open data hackathon….
Since then, the City has often cited ChicagoLobbyists.org as evidence of the innovation-sparking potential of open data.
Shortly after our site launched, a Request For Proposals, or RFP, was issued by the City for an ‘Online Lobbyist Disclosure System.’
Hey! We just built one of those! Sure, we would need to make some updates to it—adding a way for lobbyists to log in and submit their info—but we had a solid start. So, our scrappy group of tech volunteers decided to respond to the RFP.
After reading all 152 pages of the document, we realized we had no chance of getting the bid. It was impossible for the ChicagoLobbyists.org group to meet the legal requirements (as it would have been for any small software shop):
- audited financial statements for the past 3 years
- an economic disclosure statement (EDS) and affidavit
- proof of $500k in workers’ compensation and employers’ liability coverage
- proof of $2 million in professional liability insurance”
Social Media as Government Watchdog
Gordon Crovitz in the Wall Street Journal: “Two new data points for the debate on whether greater access to the Internet leads to more freedom and fewer authoritarian regimes:
According to reports last week, Facebook plans to buy a company that makes solar-powered drones that can hover for years at high altitudes without refueling, which it would use to bring the Internet to parts of the world not yet on the grid. In contrast to this futuristic vision, Russia evoked land grabs of the analog Soviet era by invading Crimea after Ukrainians forced out Vladimir Putin’s ally as president.
Internet idealists can point to another triumph in helping bring down Ukraine’s authoritarian government. Ukrainian citizens ignored intimidation including officious text messages: “Dear subscriber, you are registered as a participant in a mass disturbance.” Protesters made the most of social media to plan demonstrations and avoid attacks by security forces.
But Mr. Putin quickly delivered the message that social media only goes so far against a fully committed authoritarian. His claim that he had to invade to protect ethnic Russians in Crimea was especially brazen because there had been no loud outcry, on social media or otherwise, among Russian speakers in the region.
A new book reports the state of play on the Internet as a force for freedom. For a decade, Emily Parker, a former Wall Street Journal editorial-page writer and State Department staffer, has researched the role of the Internet in China, Cuba and Russia. The title of her book, “Now I Know Who My Comrades Are,” comes from a blogger in China who explained to Ms. Parker how the Internet helps people discover they are not alone in their views and aspirations for liberty.
Officials in these countries work hard to keep critics isolated and in fear. In Russia, Ms. Parker notes, there is also apathy because the Putin regime seems so entrenched. “Revolutions need a spark, often in the form of a political or economic crisis,” she observes. “Social media alone will not light that spark. What the Internet does create is a new kind of citizen: networked, unafraid, and ready for action.”
Asked about lessons from the invasion of Crimea, Ms. Parker noted that the Internet “chips away at Russia’s control over information.” She added: “Even as Russian state media tries to shape the narrative about Ukraine, ordinary Russians can go online to seek the truth.”
But this same shared awareness may also be accelerating a decline in U.S. influence. In the digital era, U.S. failure to make good on its promises reduces the stature of Washington faster than similar inaction did in the past.
Consider the Hungarian uprising of 1956, the first significant rebellion against Soviet control. The U.S. secretary of state, John Foster Dulles, said: “To all those suffering under communist slavery, let us say you can count on us.” Yet no help came as Soviet tanks rolled into Budapest, tens of thousands were killed, and the leader who tried to secede from the Warsaw Pact, Imre Nagy, was executed.
There were no Facebook posts or YouTube videos instantly showing the result of U.S. fecklessness. In the digital era, scenes of Russian occupation of Crimea are available 24/7. People can watch Mr. Putin’s brazen press conferences and see for themselves what he gets away with.
The U.S. stood by as Syrian civilians were massacred and gassed. There was instant global awareness when President Obama last year backed down from enforcing his “red line” when the Syrian regime used chemical weapons. American inaction in Syria gave a green light to Mr. Putin and others around the world to act with impunity.
Just in recent weeks, Iran tried to ship Syrian rockets to Gaza to attack Israel; Moscow announced it would use bases in Cuba, Venezuela and Nicaragua for its navy and bombers; and China budgeted a double-digit increase in military spending as President Obama cut back the U.S. military.
All institutions are more at risk in this era of instant communication and awareness. Reputations get lost quickly, whether it’s a misstep by a company, a gaffe by a politician, or a lack of resolve by an American president.
Over time, the power of the Internet to bring people together will help undermine authoritarian governments. But as Mr. Putin reminds us, in the short term a peaceful world depends more on a U.S. resolute in using its power and influence to deter aggression.”
Putting Crowdsourcing on the Map
MIT Technology Review: “Even in San Francisco, where Google’s roving Street View cars have mapped nearly every paved surface, there are still places that have remained untouched, such as the flights of stairs that serve as pathways between streets in some of the city’s hilliest neighborhoods.
It’s these places that a startup called Mapillary is focusing on. Cofounders Jan Erik Solem and Johan Gyllenspetz are attempting to build an open, crowdsourced, photographic map that lets smartphone users log all sorts of places, creating a richer view of the world than what is offered by Street View and other street-level mapping services. If contributors provide images often, that view could be more representative of how things look right now.
Google itself is no stranger to the benefits of crowdsourced map content: it paid $966 million last year for traffic and navigation app Waze, whose users contribute data. Google also lets people augment Street View content with their own images. But Solem and Gyllenspetz think there’s still plenty of room for Mapillary, which they say can be used for everything from tracking a nature hike to offering more up-to-date images to house hunters and Airbnb users.
Solem and Gyllenspetz have only been working on the project for four months; they released an iPhone app in November, and an Android app in January. So far, there are just a few hundred users who have shared about 100,000 photos on the service. While it’s free for anyone to use, the startup plans to eventually make money by licensing the data its users generate to companies.
With the app, a user can choose to collect images by walking, biking, or driving. Once you press a virtual shutter button within the app, it takes a photo every two seconds, until you press the button again. You can then upload the images to Mapillary’s service via Wi-Fi, where each photo’s location is noted through its GPS tag. Computer-vision software compares each photo with others that are within a radius of about 100 meters, searching for matching image features so it can find the geometric relationship between the photos. It then places those images properly on the map, and stitches them all together. When new images come in of an area that has already been mapped, Mapillary will add them to its database, too.
It can take less than 30 seconds for the images to show up on the Web-based map, but several minutes for the images to be fully processed. As with Google’s Street View photos, image-recognition software blurs out faces and license plate numbers.
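The matching step described above is straightforward to sketch. Below is a minimal, hypothetical Python illustration (not Mapillary’s actual code) of the two stages the article mentions: a roughly 100-meter GPS proximity test, followed by feature matching with OpenCV’s ORB detector to recover the geometric relationship between two overlapping photos.

```python
# A sketch, under assumed data structures, of GPS-gated feature matching.
import math
import cv2  # OpenCV

def nearby(photo_a, photo_b, radius_m=100):
    """Rough distance test between two GPS-tagged photos (dicts with a 'gps' key)."""
    lat1, lon1 = photo_a["gps"]
    lat2, lon2 = photo_b["gps"]
    # Equirectangular approximation is plenty accurate at 100 m scales.
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return 6371000 * math.hypot(x, y) <= radius_m

def match_features(img_a, img_b, min_matches=30):
    """ORB keypoint matching; returns a homography linking the two views, or None."""
    orb = cv2.ORB_create()
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return None  # one image has no usable features
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None  # not enough overlap to relate the photos
    src = cv2.KeyPoint_convert(kp_a, [m.queryIdx for m in matches])
    dst = cv2.KeyPoint_convert(kp_b, [m.trainIdx for m in matches])
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC)
    return H
```

A production system would then chain these pairwise relationships together to place and stitch images on the map, but the core idea is the same: spatial filtering first, visual matching second.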
Users can edit Mapillary’s map by moving around the icons that correspond to images—to fix a misplaced image, for instance. Eventually, users will also be able to add comments and tags.
So far, Mapillary’s map is quite sparse. But the few hundred users trying out Mapillary include some map providers in Europe, and the 100,000 or so images on the service range from a bike path on Venice Beach in California to a snow-covered ski slope in Sweden.
Street-level images can be viewed on the Web or through Mapillary’s smartphone apps (though the apps just pull up the Web page within the app). Blue lines and colored tags indicate where users have added photos to the map; you can zoom in to see them at the street level.
Navigating through photos is still quite rudimentary; you can tap or click to move from one image to the next with onscreen arrows, depending on the direction you want to explore.
Beyond technical and design challenges, the biggest issue Mapillary faces is convincing a large enough number of users to build up its store of images so that others will start using it and contributing as well, and then ensuring that these users keep coming back.”
New Research Network to Study and Design Innovative Ways of Solving Public Problems
MacArthur Foundation Research Network on Opening Governance formed to gather evidence and develop new designs for governing
NEW YORK, NY, March 4, 2014 – The Governance Lab (The GovLab) at New York University today announced the formation of a Research Network on Opening Governance, which will seek to develop blueprints for more effective and legitimate democratic institutions to help improve people’s lives.
Convened and organized by the GovLab, the MacArthur Foundation Research Network on Opening Governance is made possible by a three-year grant of $5 million from the John D. and Catherine T. MacArthur Foundation as well as a gift from Google.org, which will allow the Network to tap the latest technological advances to further its work.
Combining empirical research with real-world experiments, the Research Network will study what happens when governments and institutions open themselves to diverse participation, pursue collaborative problem-solving, and seek input and expertise from a range of people. Network members include twelve experts (see below) in computer science, political science, policy informatics, social psychology and philosophy, law, and communications. This core group is supported by an advisory network of academics, technologists, and current and former government officials. Together, they will assess existing innovations in governing and experiment with new practices in how institutions make decisions at the local, national, and international levels.
Support for the Network from Google.org will be used to build technology platforms to solve problems more openly and to run agile, real-world, empirical experiments with institutional partners such as governments and NGOs to discover what can enhance collaboration and decision-making in the public interest.
The Network’s research will be complemented by theoretical writing and compelling storytelling designed to articulate and demonstrate clearly and concretely how governing agencies might work better than they do today. “We want to arm policymakers and practitioners with evidence of what works and what does not,” says Professor Beth Simone Noveck, Network Chair and author of Wiki Government: How Technology Can Make Government Better, Democracy Stronger, and Citizens More Powerful, “which is vital to drive innovation, re-establish legitimacy and more effectively target scarce resources to solve today’s problems.”
“From prize-backed challenges to spur creative thinking to the use of expert networks to get the smartest people focused on a problem no matter where they work, this shift from top-down, closed, and professional government to decentralized, open, and smarter governance may be the major social innovation of the 21st century,” says Noveck. “The MacArthur Research Network on Opening Governance is the ideal crucible for helping transition from closed and centralized to open and collaborative institutions of governance in a way that is scientifically sound and yields new insights to inform future efforts, always with an eye toward real-world impacts.”
MacArthur Foundation President Robert Gallucci added, “Recognizing that we cannot solve today’s challenges with yesterday’s tools, this interdisciplinary group will bring fresh thinking to questions about how our governing institutions operate, and how they can develop better ways to help address seemingly intractable social problems for the common good.”
Members
The MacArthur Research Network on Opening Governance comprises:
Chair: Beth Simone Noveck
Network Coordinator: Andrew Young
Chief of Research: Stefaan Verhulst
Faculty Members:
- Sir Tim Berners-Lee (Massachusetts Institute of Technology (MIT)/University of Southampton, UK)
- Deborah Estrin (Cornell Tech/Weill Cornell Medical College)
- Erik Johnston (Arizona State University)
- Henry Farrell (George Washington University)
- Sheena S. Iyengar (Columbia Business School/Jerome A. Chazen Institute of International Business)
- Karim Lakhani (Harvard Business School)
- Anita McGahan (University of Toronto)
- Cosma Shalizi (Carnegie Mellon/Santa Fe Institute)
Institutional Members:
- Christian Bason and Jesper Christiansen (MindLab, Denmark)
- Geoff Mulgan (National Endowment for Science Technology and the Arts – NESTA, United Kingdom)
- Lee Rainie (Pew Research Center)
The Network is eager to hear from and engage with the public as it undertakes its work. Please contact Stefaan Verhulst to share your ideas or identify opportunities to collaborate.”
The benefits—and limits—of decision models
Article by Phil Rosenzweig in McKinsey Quarterly: “The growing power of decision models has captured plenty of C-suite attention in recent years. Combining vast amounts of data and increasingly sophisticated algorithms, modeling has opened up new pathways for improving corporate performance. Models can be immensely useful, often making very accurate predictions or guiding knotty optimization choices and, in the process, can help companies to avoid some of the common biases that at times undermine leaders’ judgments.
Yet when organizations embrace decision models, they sometimes overlook the need to use them well. In this article, I’ll address an important distinction between outcomes leaders can influence and those they cannot. For things that executives cannot directly influence, accurate judgments are paramount and the new modeling tools can be valuable. However, when a senior manager can have a direct influence over the outcome of a decision, the challenge is quite different. In this case, the task isn’t to predict what will happen but to make it happen. Here, positive thinking—indeed, a healthy dose of management confidence—can make the difference between success and failure.
Where models work well
Examples of successful decision models are numerous and growing. Retailers gather real-time information about customer behavior by monitoring preferences and spending patterns. They can also run experiments to test the impact of changes in pricing or packaging and then rapidly observe the quantities sold. Banks approve loans and insurance companies extend coverage, basing their decisions on models that are continually updated, factoring in the most information to make the best decisions.
Some recent applications are truly dazzling. Certain companies analyze masses of financial transactions in real time to detect fraudulent credit-card use. A number of companies are gathering years of data about temperature and rainfall across the United States to run weather simulations and help farmers decide what to plant and when. Better risk management and improved crop yields are the result.
Other examples of decision models border on the humorous. Garth Sundem and John Tierney devised a model to shed light on what they described, tongues firmly in cheek, as one of the world’s great unsolved mysteries: how long will a celebrity marriage last? They came up with the Sundem/Tierney Unified Celebrity Theory, which predicted the length of a marriage based on the couple’s combined age (older was better), whether either had tied the knot before (failed marriages were not a good sign), and how long they had dated (the longer the better). The model also took into account fame (measured by hits on a Google search) and sex appeal (the share of those Google hits that came up with images of the wife scantily clad). With only a handful of variables, the model did a very good job of predicting the fate of celebrity marriages over the next few years.
Models have also shown remarkable power in fields that are usually considered the domain of experts. With data from France’s premier wine-producing regions, Bordeaux and Burgundy, Princeton economist Orley Ashenfelter devised a model that used just three variables to predict the quality of a vintage: winter rainfall, harvest rainfall, and average growing-season temperature. To the surprise of many, the model outperformed wine connoisseurs.
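To make the flavor of such a model concrete, here is a toy Python sketch of an Ashenfelter-style regression. The data and the resulting coefficients are invented for illustration only; Ashenfelter fit his actual model on decades of Bordeaux vintages and auction prices.

```python
# A minimal ordinary-least-squares sketch with three weather variables.
import numpy as np

# Columns: winter rainfall (mm), harvest rainfall (mm), avg growing-season temp (C)
X = np.array([
    [600.0, 160.0, 17.1],
    [690.0,  80.0, 16.7],
    [502.0, 130.0, 17.2],
    [420.0, 110.0, 16.1],
    [582.0,  74.0, 17.5],
])
y = np.array([4.2, 4.8, 4.0, 3.1, 5.0])  # hypothetical quality scores

# Add an intercept column and solve the least-squares problem.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict the quality of a new vintage from its weather alone.
new_vintage = np.array([1.0, 550.0, 90.0, 17.3])
print("predicted quality:", new_vintage @ coef)
```

The striking point is how little machinery is involved: a handful of variables and a linear fit were enough to outperform expert tasters.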
Why do decision models perform so well? In part because they can gather vast quantities of data, but also because they avoid common biases that undermine human judgment. People tend to be overly precise, believing that their estimates will be more accurate than they really are. They suffer from the recency bias, placing too much weight on the most immediate information. They are also unreliable: ask someone the same question on two different occasions and you may get two different answers. Decision models have none of these drawbacks; they weigh all data objectively and evenly. No wonder they do better than humans.
Can we control outcomes?
With so many impressive examples, we might conclude that decision models can improve just about anything. That would be a mistake. Executives need not only to appreciate the power of models but also to be cognizant of their limits.
Look back over the previous examples. In every case, the goal was to make a prediction about something that could not be influenced directly. Models can estimate whether a loan will be repaid but won’t actually change the likelihood that payments will arrive on time, give borrowers a greater capacity to pay, or make sure they don’t squander their money before payment is due. Models can predict the rainfall and days of sunshine on a given farm in central Iowa but can’t change the weather. They can estimate how long a celebrity marriage might last but won’t help it last longer or cause another to end sooner. They can predict the quality of a wine vintage but won’t make the wine any better, reduce its acidity, improve the balance, or change the undertones. For these sorts of estimates, finding ways to avoid bias and maintain accuracy is essential.
Executives, however, are not concerned only with predicting things they cannot influence. Their primary duty—as the word execution implies—is to get things done. The task of leadership is to mobilize people to achieve a desired end. For that, leaders need to inspire their followers to reach demanding goals, perhaps even to do more than they have done before or believe is possible. Here, positive thinking matters. Holding a somewhat exaggerated level of self-confidence isn’t a dangerous bias; it often helps to stimulate higher performance.
This distinction seems simple but it’s often overlooked. In our embrace of decision models, we sometimes forget that so much of life is about getting things done, not predicting things we cannot control.
…
Improving models over time
Part of the appeal of decision models lies in their ability to make predictions, to compare those predictions with what actually happens, and then to evolve so as to make more accurate predictions. In retailing, for example, companies can run experiments with different combinations of price and packaging, then rapidly obtain feedback and alter their marketing strategy. Netflix captures rapid feedback to learn what programs have the greatest appeal and then uses those insights to adjust its offerings. Models are not only useful at any particular moment but can also be updated over time to become more and more accurate.
Using feedback to improve models is a powerful technique but is more applicable in some settings than in others. Dynamic improvement depends on two features: first, the observation of results should not make any future occurrence either more or less likely and, second, the feedback cycle of observation and adjustment should happen rapidly. Both conditions hold in retailing, where customer behavior can be measured without directly altering it and results can be applied rapidly, with prices or other features changed almost in real time. They also hold in weather forecasting, since daily measurements can refine models and help to improve subsequent predictions. The steady improvement of models that predict weather—from an average error (in the maximum temperature) of 6 degrees Fahrenheit in the early 1970s to 5 degrees in the 1990s and just 4 by 2010—is testimony to the power of updated models.
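As a concrete illustration of this feedback cycle, here is a minimal, hypothetical Python sketch: a forecaster that corrects a persistent bias by nudging itself toward each observed error, in the spirit of the steadily improving weather models mentioned above. The numbers are invented.

```python
# A toy online-updating forecaster: predict, observe, adjust, repeat.
def make_forecaster(learning_rate=0.1):
    bias = 0.0
    def forecast(raw_model_output):
        return raw_model_output + bias
    def update(predicted, observed):
        nonlocal bias
        bias += learning_rate * (observed - predicted)  # shrink future error
    return forecast, update

forecast, update = make_forecaster()
# (raw model output, actual observation) for three days of max temps (F)
for raw, actual in [(70, 75), (68, 73), (72, 76)]:
    pred = forecast(raw)
    update(pred, actual)
    print(f"predicted {pred:.1f}F, observed {actual}F")
```

Note that both of the article’s conditions are baked in: observing the temperature does not change tomorrow’s weather, and the feedback arrives quickly enough to fold into the next prediction.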
For other events, however, these two conditions may not be present. As noted, executives not only estimate things they cannot affect but are also charged with bringing about outcomes. Some of the most consequential decisions of all—including the launch of a new product, entry into a new market, or the acquisition of a rival—are about mobilizing resources to get things done. Furthermore, the results are not immediately visible and may take months or years to unfold. The ability to gather and insert objective feedback into a model, to update it, and to make a better decision the next time just isn’t present.
None of these caveats call into question the considerable power of decision analysis and predictive models in so many domains. They help underscore the main point: an appreciation of decision analytics is important, but an understanding of when these techniques are useful and of their limitations is essential, too…”
Open Government – Opportunities and Challenges for Public Governance
Disinformation Visualization: How to lie with datavis
It all sounds very sinister, and indeed sometimes it is. It’s hard to see through a lie unless you stare it right in the face, and what better way to do that than to get our minds dirty and look at some examples of creative and mischievous visual manipulation.
Over the past year I’ve had a few opportunities to run Disinformation Visualization workshops, encouraging activists, designers, statisticians, analysts, researchers, technologists and artists to visualize lies. During these sessions I have used the DIKW pyramid (Data > Information > Knowledge > Wisdom), a framework for thinking about how data gains context and meaning and becomes information. This information needs to be consumed and understood to become knowledge. And finally when knowledge influences our insights and our decision making about the future it becomes wisdom. Data visualization is one of the ways to push data up the pyramid towards wisdom in order to affect our actions and decisions. It would be wise then to look at visualizations suspiciously.
Centuries before big data, computer graphics and social media collided and gave us the datavis explosion, visualization was mostly a scientific tool for inquiry and documentation. This history gave the artform its authority as an integral part of the scientific process. Being a product of human brains and hands, a certain degree of bias was always there, no matter how scientific the process was. The effects of these early off-white lies are still felt today, as even our most celebrated interactive maps still echo the biases of the Mercator map projection, grounding Europe and North America at the top of the world, overemphasizing their size and perceived importance over the Global South. Our contemporary practice of programmatic, data-driven visualization hides both the human eyes and hands that produce it behind data sets, algorithms and computer graphics, but the same biases are still there, only they’re harder to decipher…”
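To give a flavor of what these workshops explore, here is a minimal matplotlib sketch (with invented numbers) of one of the oldest tricks in the genre: truncating the y-axis so a trivial difference looks dramatic.

```python
# Identical data, two very different impressions.
import matplotlib.pyplot as plt

categories = ["Us", "Them"]   # hypothetical data
values = [49.5, 50.5]

fig, (honest, dishonest) = plt.subplots(1, 2, figsize=(8, 3))
honest.bar(categories, values)
honest.set_ylim(0, 60)        # full axis: a ~1-point difference looks like what it is
honest.set_title("Axis from zero")

dishonest.bar(categories, values)
dishonest.set_ylim(49, 51)    # truncated axis: "Them" appears to tower over "Us"
dishonest.set_title("Truncated axis")
plt.tight_layout()
plt.show()
```

Nothing in the right-hand chart is false, which is exactly why the technique works: the lie lives in the framing, not the data.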
The Power to Give
Press Release: “HTC, a global leader in mobile innovation and design, today unveiled HTC Power To Give™, an initiative that aims to create a supercomputer by harnessing the collective processing power of Android smartphones.
Currently in beta, HTC Power To Give aims to galvanize smartphone owners to unlock their unused processing power in order to help answer some of society’s biggest questions. Currently, the fight against cancer, AIDS and Alzheimer’s; the drive to ensure every child has clean water to drink and even the search for extra-terrestrial life are all being tackled by volunteer computing platforms.
Empowering people to use their Android smartphones to offer tangible support for vital fields of research, including medicine, science and ecology, HTC Power To Give has been developed in partnership with Dr. David Anderson of the University of California, Berkeley. The project will support the world’s largest volunteer computing initiative and tap into the powerful processing capabilities of a global network of smartphones.
Strength in numbers
One million HTC One smartphones, working on a project via HTC Power To Give, could provide processing power similar to that of one of the world’s top 30 supercomputers (about one petaFLOPS). This could drastically shorten the research cycles for organizations that would otherwise have to spend years analyzing the same volume of data, potentially bringing forward important discoveries in vital subjects by weeks, months, years or even decades. For example, one of the programs available at launch is IBM’s World Community Grid, which gives anyone an opportunity to advance science by donating their computer, smartphone or tablet’s unused computing power to humanitarian research. To date, World Community Grid volunteers have contributed almost 900,000 years’ worth of processing time to cutting-edge research.
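The arithmetic behind the headline claim is easy to check. The sketch below assumes, hypothetically, about 1 GFLOPS of spare sustained capacity per handset; the real per-device figure depends on chipset, scheduling, and thermal limits.

```python
# Back-of-the-envelope check: one million phones ~ one petaFLOPS.
phones = 1_000_000
gflops_per_phone = 1.0  # assumed sustained GFLOPS per handset (hypothetical)

# 1 PFLOPS = 1e15 FLOPS = 1,000,000 GFLOPS
total_pflops = phones * gflops_per_phone / 1_000_000
print(f"aggregate capacity: {total_pflops:.1f} PFLOPS")  # -> 1.0 PFLOPS
```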
Limitless future potential
Cher Wang, Chairwoman, HTC commented, “We’ve often used innovation to bring about change in the mobile industry, but this programme takes our vision one step further. With HTC Power To Give, we want to make it possible for anyone to dedicate their unused smartphone processing power to contribute to projects that have the potential to change the world.”
“HTC Power To Give will support the world’s largest volunteer computing initiative, and the impact that this project will have on the world over the years to come is huge. This changes everything,” noted Dr. David Anderson, Inventor of the Shared Computing Initiative BOINC, University of California, Berkeley.
Cher Wang added, “We’ve been discussing the impact that just one million HTC Power To Give-enabled smartphones could make; however, analysts estimate that over 780 million Android phones were shipped in 2013 alone. Imagine the difference we could make to our children’s future if just a fraction of these Android users were able to divert some of their unused processing power to help find answers to the questions that concern us all.”
Opt-in with ease
After downloading the HTC Power To Give app from the Google Play™ store, smartphone owners can select the research programme to which they will divert a proportion of their phone’s processing power. HTC Power To Give will then run while the phone is charging and connected to a WiFi network, enabling people to change the world whilst sitting at their desk or relaxing at home.
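The contribution rule is simple to express in code. Below is a hypothetical Python sketch of the gating logic the release describes; the device methods are invented stand-ins, not HTC’s or BOINC’s actual API.

```python
# Contribute cycles only while the phone is charging and on Wi-Fi.
def should_compute(is_charging: bool, on_wifi: bool) -> bool:
    return is_charging and on_wifi

def run_volunteer_cycle(device):
    # 'device' and its methods are hypothetical stand-ins for the platform API.
    if should_compute(device.is_charging(), device.on_wifi()):
        device.process_next_work_unit()  # e.g., a World Community Grid task
```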
The beta version of HTC Power To Give will be available to download from the Google Play store and will initially be compatible with the HTC One family, HTC Butterfly and HTC Butterfly s. HTC plans to make the app more widely available to other Android smartphone owners in the coming six months as the beta trial progresses.”