We Need a Citizen Maker Movement


Lorelei Kelly at the Huffington Post: “It was hard to miss the giant mechanical giraffe grazing on the White House lawn last week. For the first time ever, the President organized a Maker Faire, inviting entrepreneurs and inventors from across the USA to celebrate American ingenuity in the service of economic progress.
The maker movement is a California original. Think R2D2 serving margaritas to a jester with an LED news scroll. The #nationofmakers Twitter feed has dozens of examples of collaborative production, of making, sharing and learning.
But since this was the White House, I still had to ask myself: what would the maker movement be if the economy were not the starting point? What if it were about civics? What if makers decided to create a modern, hands-on democracy?
What is democracy anyway but a never-ending remix of new prototypes? Last week’s White House Maker Faire heralded a new economic bonanza. This revolution’s poster child is 3-D printing: decentralized fabrication customized to meet local needs. On the government front, new design rules for democracy are already emerging in communities, where civics and technology have generated a front line of maker cities.
But the distance between California’s tech capacity and DC does seem 3,000 miles wide. The NSA’s over-collection/surveillance problem and Healthcare.gov’s doomed rollout are part of the same system-wide capacity deficit. How do we close the gap between California’s revolution and our institutions?

  • In California, disruption is a business plan. In DC, it’s a national security threat.
  • In California, hackers are artists. In DC, they are often viewed as criminals.
  • In California, “cyber” is a dystopian science fiction word. In DC, cyber security is in a dozen oversight plans for Congress.
  • In California, individuals are encouraged to “fail forward.” In DC, risk-aversion is bipartisan.

Scaling local solutions to big problems is a maker specialty. Government policymaking needs this kind of help.
Here’s the issue our nation is facing: the inability of the non-military side of our public institutions to process complex problems. Today, this competence, especially the capacity to solve technical challenges, often exists only in the private sector. If something is urgent and can’t be monetized, it becomes a national security problem, which increasingly means that critical decision-making that should be in the civilian remit instead migrates to the military. Look at our foreign policy. Good government is a counterterrorism strategy in Afghanistan. Decades of civilian inaction on climate change mean that Miami is now referred to as a battle space in policy conversations.
This rhetoric reflects an understandable but unacceptable disconnect for any democracy.
To make matters more confusing, much of the technology in civics (like list-building petitions) is suited for elections, not for governing. It is often antagonistic. The result? Policy-making looks like campaigning. We need some civic tinkering to generate governing technology that comes with relationships. Specifically, this means technology that includes many voices but has identifiable channels for expertise, channels that can sort complexity and are not compromised by financial self-interest.
Today, sorting and filtering information is a huge challenge for participation systems around the world. Information now ranks up there with money and people as a lever of power. On the people front, loud and often destructive individuals are showing up effectively. On the money front, our public institutions are at risk of becoming purely pay-to-play (wonks call this “transactional”).
Makers, ask yourselves, how can we turn big data into a political constituency for using real evidence–one that can compete with all the negative noise and money in the system? For starters, technologists out West must stop treating government like it’s a bad signal that can be automated out of existence. We are at a moment where our society requires an engineering mindset to develop modern, tech-savvy rules for democracy. We need civic makers….”

The Impact of Open: Keeping you healthy


From the Sunlight Foundation: “In healthcare, the goal set shared widely throughout the field is known as “the Triple Aim”: improving the individual experience of care, improving population health, and reducing the cost of care. Across the wide array of initiatives undertaken by healthcare data users, the great majority seem to fall within the scope of at least one aspect of the Triple Aim. Below is a set of examples that reveal how data — both open and not — is being used to achieve its elements.

The use of open data to reduce costs:

The use of open data to improve quality of care:

  • Using open data on a substantial series of individual hospital quality measures, CMS created a hospital comparison tool that allows consumers to compare average quality of care outcomes across their local hospitals.

  • Non-profit organizations survey hospitals and have used this data to provide another national measure of hospital quality that consumers can use to select a high-quality hospital.

  • In New York State, widely shared data on cardiac surgery outcomes associated with individual providers has led to improved outcomes and a better understanding of successful techniques.

  • In the UK, the National Health Service is actively working towards defining concrete metrics to evaluate how the system as a whole is moving towards improved quality. …

  • The broad cultural shift towards data-sharing in healthcare appears to have facilitated additional secured sharing in pursuit of the joint goal of improving healthcare quality and effectiveness. The current effort to securely network millions of patient data records through the federal PCORI system has the potential to advance understanding of disease treatment at an unprecedented pace.

  • Through third-party tools, people are able to use the products of aggregated patient data in order to begin diagnosing their own symptoms more accurately, giving them a head start in understanding how to optimize their visit to a provider.

The use of open data to improve population health:

  • Of the three elements of the Triple Aim, population health may have the longest and deepest relationship with open data. Public datasets like those collected by the Centers for Disease Control and the US Census have for decades been used to monitor disease prevalence, verify access to health insurance, and track mortality and morbidity statistics.

  • Population health improvement has been a major focus for newer developments as well. Health data has been a regular feature in tech efforts to improve the ways that governments — including local health departments — reach their constituencies. The use of data in new communication tools improves population health by increasing population awareness of local health trends and disease prevention opportunities. Two examples of this work in action include the Chicago Health Atlas, which combines health data and healthcare consumer problem-solving, and Philadelphia’s map interface to city data about available flu vaccines.

One final observation for open data advocates to take from health data concerns the way that the sector encourages the two-way information flow: it embraces the notion that data users can also be data producers. Open data ecosystems are properly characterized by multi-directional relationships among governmental and non-governmental actors, with opportunities for feedback, correction and augmentation of open datasets. That this happens at the scale of health data is important and meaningful for open data advocates who can face push-back when they ask their governments to ingest externally-generated data….”

Microsoft Unveils Machine Learning for the Masses


The service, called Microsoft Azure Machine Learning, was announced Monday but won’t be available until July. It combines Microsoft’s own software with publicly available open source software, packaged in a way that is easier to use than most of the arcane strategies currently in use.
“This is drag-and-drop software,” said Joseph Sirosh, vice president for machine learning at Microsoft. “My high schooler is using this.”
That would be a big step forward in popularizing what is currently a difficult process in increasingly high demand. It would also further the ambitions of Satya Nadella, Microsoft’s chief executive, of making Azure the center of Microsoft’s future.
Users of Azure Machine Learning will have to keep their data in Azure, and Microsoft will provide ways to move data from competing services, like Amazon Web Services. Pricing has not yet been finalized, Mr. Sirosh said, but will be based on a premium to Azure’s standard computing and transmission charges.
Machine learning computers examine historical data through different algorithms and programming languages to make predictions. The process is commonly used in Internet search, fraud detection, product recommendations and digital personal assistants, among other things.
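To make that loop concrete, here is a minimal sketch of training on historical data and predicting on new data, written with the open-source scikit-learn library rather than the Azure service described above (which was not yet available at the time); the fraud-detection features and labels are invented for illustration.

```python
# A minimal sketch of "examine historical data to make predictions".
# This uses scikit-learn, NOT Azure Machine Learning; the transaction
# features and labels below are invented for illustration.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historical transactions: [amount_usd, hour_of_day]; label 1 = fraud.
X_history = [[12.0, 14], [900.0, 3], [15.5, 12], [2500.0, 2], [40.0, 18], [1800.0, 4]]
y_history = [0, 1, 0, 1, 0, 1]

# "Examine historical data": fit a model to past, labelled examples.
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_history, y_history)

# "Make predictions": score a new, unseen transaction.
print(model.predict([[2000.0, 3]]))  # likely [1], i.e. flagged as probable fraud
```

Services like Azure Machine Learning aim to wrap exactly this fit-then-predict workflow in a drag-and-drop interface so that it no longer requires code at all.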
As more data is automatically stored online, there are opportunities to use machine learning for performing maintenance, scheduling hospital services, and anticipating disease outbreaks and crime, among other things. The methods have to become easier and cheaper to be popular, however.
That is the goal of Azure Machine Learning. “This is, as far as I know, the first comprehensive machine learning service in the cloud,” Mr. Sirosh said. “I’m leveraging every asset in Microsoft for this.” He is also providing ways of accessing an open-source version of R, a standard statistical language, from within Azure.
Microsoft is likely to face competition from rival cloud companies, including Google and Amazon. Both offer data frameworks used in building machine learning algorithms, as well as their own analysis services. IBM is eager to make use of its predictive software in its cloud business. Visualization companies like Tableau specialize in presenting the results so they can be acted on easily…”

The Data Revolution in Policy-Making


at the Open Institute: “There continues to be a great deal of dialogue and debate on what the data revolution from the report of the High-Level Panel on the Post-2015 Development Agenda is all about. However, some have raised concerns that the emerging narrative around opening up data, strengthening national statistics offices or building capacity for e-government may not be revolutionary enough. In thinking this through, it becomes clear that revolutions are highly contextual events. The Arab Spring happened due to the unique cultural and socio-economic factors of the Middle East and North Africa (MENA). A similar ‘spring’ may not happen in the same way in sub-Saharan Africa due to the peculiarities of the region. Attempting to replicate it is therefore an exercise in futility for those hoping for regime change.
We have just published a think piece on the role of public participation in policy-making and how a data revolution could play out in that space. None of the ideas are revolutionary. They have been proposed and piloted in various countries to various extents over time. For instance, while strengthening and safeguarding the autonomy of the national statistics office may not seem revolutionary in some contexts, in some countries it would be unprecedented (this is not part of the report). And that is okay. Nation states should be allowed, in their efforts to build capable and developmental institutions, to interpret the revolution for themselves.
In sub-Saharan Africa the availability of the underlying data used to develop public policy is almost non-existent. Even when citizens are expected to participate in the formulation and implementation of policies, the data is still difficult to find. This neuters public participation and is a disservice to open government. Making this detailed data and the accompanying rationale publicly available would therefore be a revolutionary change in both culture and policy on access to information, and could empower citizens to participate.
The data revolution is an opportunity to mainstream statistics into public discourse on public policy in ways that citizens can understand and engage with. I hope African countries will be willing to put in the effort to translate the data revolution into an African revolution. If not, there’s a risk we shall continue singing about a revolution and never actually have one.
Download the ThinkPiece here”

15 Ways to Bring Civic Innovation to Your City


Chris Moore at AcuitasGov: “In my previous blog post I wrote about a desire to see our governments transform to be part of the 21st century. I saw a recent reference to how governments across Canada have lost their global leadership, how government in Canada at all levels is providing analog services to a digital society. I couldn’t agree more. I have been thinking lately about some practical ways that mayors and city managers could innovate in their communities. I realize that there are a number of municipal elections happening this fall across Canada, a time when leadership changes and new ideas emerge. So this blog is also for mayoral candidates who have a sense that technology and innovation have a role to play in their city and in their administration.
I thought I would identify 15 initiatives that cities could pursue as part of their civic innovation strategy. For the last 50 years, technology in local government in Canada has been viewed as an expense, as a necessary evil not always understood by elected officials and senior administrators. Information and technology are part of every aspect of a city and critical to delivering services. It is time to treat them not just as an expense but as an investment: a way to innovate, reduce costs, enhance citizen service delivery and transform government operations.
Here are my top 15 ways to bring Civic Innovation to your city:
1. Build 21st Century Digital Infrastructure like the Chattanooga Gig City Project.
2. Build WiFi networks like the City of Edmonton on your own and in partnership with others.
3. Provide technology and internet to children and youth in need like the City of Toronto.
4. Connect to a national Education and Research network like Cybera in Alberta and CANARIE.
5. Create a Mayor’s Task Force on Innovation and Technology leveraging your city’s resources.
6. Run a hackathon or two or three like the City of Glasgow or maybe host a hacking health event like the City of Vancouver.
7. Launch a Startup incubator like Startup Edmonton or take it to the next level and create a civic lab like the City of Barcelona.
8. Develop an Open Government Strategy; I like the Open City Strategy from Edmonton.
9. If Open Government is too much, then just start with Open Data; Edmonton has one of the best.
10. Build a Citizen Dashboard to showcase your city’s services and commitment to the public.
11. Put your Crime data online like the Edmonton Police Service.
12. Consider a pilot project with sensor technology for parking like the City of Nice, or for waste management like the City of Barcelona.
13. Embrace Car2Go, Modo and UBER as ways to move people in your city.
14. Consider turning your IT department into the Innovation and Technology Department like they did at the City of Chicago.
15. Partner with other nearby local governments to create a shared Innovation and Technology agency.
Now more than ever, cities need to find ways to innovate, to transform and to create a foundation that is sustainable. Now is the time for both courage and innovation in government. What is your city doing to move into the 21st century?”

The Civil Service in an Age of Open Government


Tunji Olaopa at AllAfrica.com: “…The question then is: How does a bureaucratic administrative civil service structure respond to the challenge of modernisation? The first condition for modernisation is to target the locus of governance, the centre of public administration.
Public administration as governance derives from the recent transformation of the economy and government of industrial societies, which has led to (a) a radical change in internal modes of functioning; and (b) the expansion of governmental activities into a ‘governance network’ that brings non-state actors into the governance system. The second condition demanded by the modernising imperative is the urgency of opening up the government within the framework of an ‘open society’.
Both conditions are interrelated because governance requires the participation of non-state actors and the entire citizenry through a technologically-motivated open platform that facilitates transparency, collaboration and participation. The open society or open government paradigm has a philosophical antecedent: immediately after the horrors of the Second World War, the Austrian philosopher Karl Popper wrote a classic, The Open Society and Its Enemies (1945).
The open society and open government dynamics speak to the need for the eternal vigilance that guards the human race’s freedom and creativity, foreclosing the multiplication of the Hitlers of this world and, specifically, of those Popper regarded as totalitarian ideologues, namely Hegel, Marx and Plato. They also speak to the urgent and constant need to innovate and recreate ideas, paradigms and institutions in ways that transform our individual and collective wellbeing. The recent uproars of the Arab Spring in the Middle East are a negative indication of what follows from a refusal to open up the government or the society to constant interrogation.
In administrative reform terms, the ‘open society’ imagery challenges our civil services into a persistent and creative rethinking of institutional and structural dynamics, in a manner that transforms the system into a world-class performance mode. It insists on the principle that government, not just its laws and policies but the reasons and processes behind its decisions and the flows of money that fund their implementation, should be open.
Open government gives the civil service clear advantages: (a) First, it is a critical attempt to challenge administrative closure that locks the people out of the decisions and processes that govern their lives; (b) Second, open government deals with bureau-pathology by reversing the obscurity of brilliant public servants whose creative initiatives are usually left to disappear within the vast hierarchies that define the bureaucracy; (c) Third, open government helps the government regain its citizens’ trust and respect; and (d) Lastly, the open government initiative enables the civil service to move beyond its acute analogue/hierarchical/opaque status and become a cutting-edge digital/networked/open system that works.
The governance and open government reforms demand a reassessment of administrative reality, especially within a third-world context like Nigeria, where our postcolonial predicament has left us burdened and in anguish. However, our reassessment goes deeper than opening up the processes and functioning of government. Gary Francione, the American philosopher, counsels that ‘If we are ever going to see a paradigm shift, we have to be clear about how we want the present paradigm to shift.’ The open government initiative is just one indication of where we want to go. Other indications of the needed transformation will necessarily include:
o From resource-based to competency-based HRM;
o From ‘input-process’ to ‘output-results’ orientation;
o From Weberianism to a new institutional philosophy tantalisingly typified by the assumptions of neo-Weberianism…”

Privacy and Open Government


Paper by Teresa Scassa in Future Internet: “The public-oriented goals of the open government movement promise increased transparency and accountability of governments, enhanced citizen engagement and participation, improved service delivery, economic development and the stimulation of innovation. In part, these goals are to be achieved by making more and more government information public in reusable formats and under open licences. This paper identifies three broad privacy challenges raised by open government. The first is how to balance privacy with transparency and accountability in the context of “public” personal information. The second challenge flows from the disruption of traditional approaches to privacy based on a collapse of the distinctions between public and private sector actors. The third challenge is that of the potential for open government data—even if anonymized—to contribute to the big data environment in which citizens and their activities are increasingly monitored and profiled.”

LifeLogging: personal big data


Paper by Cathal Gurrin, Alan F. Smeaton and Aiden R. Doherty in Foundations and Trends in Information Retrieval: “We have recently observed a convergence of technologies fostering the emergence of lifelogging as a mainstream activity. Computer storage has become significantly cheaper, and advances in sensing technology allow for the efficient sensing of personal activities, locations and the environment. This is best seen in the growing popularity of the quantified-self movement, in which life activities are tracked using wearable sensors in the hope of better understanding human performance in a variety of tasks. This review aims to provide a comprehensive summary of lifelogging: its research history, current technologies, and applications. Thus far, most lifelogging research has focused predominantly on visual lifelogging in order to capture details of life activities, and we maintain this focus in this review. However, we also reflect on the challenges lifelogging poses to an information retrieval scientist. This review is a suitable reference for those seeking an information retrieval scientist’s perspective on lifelogging and the quantified self.”

How Crowdsourced Astrophotographs on the Web Are Revolutionizing Astronomy


Emerging Technology From the arXiv: “Astrophotography is currently undergoing a revolution thanks to the increased availability of high quality digital cameras and the software available to process the pictures after they have been taken.
Since photographs of the night sky are almost always better with long exposures that capture more light, this processing usually involves combining several images of the same part of the sky to produce one with a much longer effective exposure.
That’s all straightforward if you’ve taken the pictures yourself with the same gear under the same circumstances. But astronomers want to do better.
“The astrophotography group on Flickr alone has over 68,000 images,” say Dustin Lang at Carnegie Mellon University in Pittsburgh and a couple of pals. These and other images represent a vast source of untapped data for astronomers.
The problem is that it’s hard to combine images accurately when little is known about how they were taken. Astronomers take great care to use imaging equipment in which the pixels produce a signal that is proportional to the number of photons that hit.
But the same cannot be said of the digital cameras widely used by amateurs. All kinds of processes can end up influencing the final image.
So any algorithm that combines them has to cope with these variations. “We want to do this without having to infer the (possibly highly nonlinear) processing that has been applied to each individual image, each of which has been wrecked in its own loving way by its creator,” say Lang and co.
Now, these guys say they’ve cracked it. They’ve developed a system that automatically combines images from the same part of the sky to increase the effective exposure time of the resulting picture. And they say the combined images can rival those from much larger professional telescopes.
They’ve tested this approach by downloading images of two well-known astrophysical objects: the NGC 5907 Galaxy and the colliding pair of galaxies—Messier 51a and 51b.
For NGC 5907, they ended up with 4,000 images from Flickr, 1,000 from Bing and 100 from Google. They used an online system called astrometry.net that automatically aligns and registers images of the night sky and then combined the images using their new algorithm, which they call Enhance.
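For intuition, here is a heavily simplified sketch of the combining step in Python/NumPy. It assumes the frames have already been registered to a common pixel grid (the job astrometry.net does above) and stacks them with a plain per-pixel median; this is a stand-in illustration, not the authors’ Enhance algorithm, which must also cope with each image’s unknown nonlinear processing.

```python
# A simplified illustration of stacking registered sky images, NOT the
# authors' Enhance algorithm: a plain per-pixel median over frames that
# are assumed to be already aligned to a common pixel grid.
import numpy as np

def stack_frames(frames):
    """Combine aligned same-sky frames into one deeper image.

    A per-pixel median is robust to artifacts (satellite trails,
    hot pixels) that appear in only a few of the frames.
    """
    cube = np.stack(frames, axis=0)   # shape: (n_frames, height, width)
    return np.median(cube, axis=0)    # per-pixel median across frames

# Hypothetical usage: three noisy 100x100 frames of the same field.
rng = np.random.default_rng(seed=42)
frames = [100.0 + rng.normal(0.0, 10.0, size=(100, 100)) for _ in range(3)]
deep_image = stack_frames(frames)     # noise is reduced relative to any single frame
```

Note that a single stacking pass like this costs time proportional to the number of frames, the same linear scaling the authors report for Enhance below.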
The results are impressive. They say the combined images of NGC 5907 show some of the same faint features revealed by a single image taken with 11 hours of exposure on a 50 cm telescope. All the images reveal the same kind of fine detail, such as a faint stellar stream around the galaxy.
The combined image for the M51 galaxies is just as impressive, taking only 40 minutes to produce on a single processor. It reveals extended structures around both galaxies, which astronomers know to be debris from their gravitational interaction as they collide.
Lang and co say these faint features are hugely important because they allow astronomers to measure the age, mass ratios, and orbital configurations of the galaxies involved. Interestingly, many of these faint features are not visible in any of the input images taken from the Web. They emerge only once images have been combined.
One potential problem with algorithms like this is that they need to perform well as the number of images they combine increases. It’s no good if they grind to a halt as soon as a substantial amount of data becomes available.
On this score, Lang and co say astronomers can rest easy. The performance of their new Enhance algorithm scales linearly with the number of images it has to combine. That means it should perform well on large datasets.
The bottom line is that this kind of crowd-sourced astronomy has the potential to make a big impact, given that the resulting images rival those from large telescopes.
And it could also be used for historical images, say Lang and co. The Harvard Plate Archives, for example, contain half a million images dating back to the 1880s. These were all taken using different emulsions, with different exposures and developed using different processes. So the plates all have different responses to light, making them hard to compare.
That’s exactly the problem that Lang and co have solved for digital images on the Web. So it’s not hard to imagine how they could easily combine the data from the Harvard archives as well….”
Ref: arxiv.org/abs/1406.1528: Towards building a Crowd-Sourced Sky Map

Opening Public Transportation Data in Germany


Thesis by Stefan Kaufmann: “Open data has been recognized as a valuable resource, and public institutions have taken to publishing their data under open licenses, in Germany as well. However, German public transit agencies are still reluctant to publish their schedules as open data, and two widely used data exchange formats in German transit planning are proprietary, with no documentation publicly available. Through this work, one of the proprietary formats was reverse-engineered, and a transformation process into the open GTFS schedule format was developed. This process allowed a partnering transit operator to publish its schedule as open data. In addition, a survey of German transit authorities and operators evaluated the prevalence of transit data exchange formats and reservations concerning open transit data. The survey brought to light a series of issues that serve as obstacles to opening up transit data. Addressing these issues, and partnering with open-minded transit authorities to further develop transit data publishing processes, can serve as a foundation for wider adoption of open transit data publishing in Germany.”
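The GTFS side of that transformation is easy to illustrate, since the target format is a public specification: a zip archive of comma-separated text files with fixed names and columns. Below is a hedged sketch in Python that writes a minimal feed; the agency, stop and route rows are invented, and the reverse-engineered proprietary input format from the thesis is not reproduced here.

```python
# A hedged sketch of writing a minimal GTFS feed. The file and column
# names come from the public GTFS specification; the schedule rows
# themselves are invented sample data.
import zipfile

feed_tables = {
    "agency.txt": [
        ("agency_id", "agency_name", "agency_url", "agency_timezone"),
        ("1", "Example Verkehrsbetrieb", "https://example.org", "Europe/Berlin"),
    ],
    "stops.txt": [
        ("stop_id", "stop_name", "stop_lat", "stop_lon"),
        ("S1", "Hauptbahnhof", "48.3994", "9.9830"),
        ("S2", "Rathaus", "48.3967", "9.9912"),
    ],
    "routes.txt": [
        ("route_id", "agency_id", "route_short_name", "route_type"),
        ("R1", "1", "3", "3"),  # route_type 3 = bus in the GTFS spec
    ],
}

# A GTFS feed is just these CSV files bundled into one zip archive.
with zipfile.ZipFile("gtfs-feed.zip", "w") as feed:
    for filename, rows in feed_tables.items():
        csv_text = "\n".join(",".join(row) for row in rows) + "\n"
        feed.writestr(filename, csv_text)
```

A real feed would also need trips.txt, stop_times.txt and calendar.txt to carry the actual timetable, which is where data recovered from the reverse-engineered format would be mapped in.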