What is Impossible?


impossible.com is a new website and app that encourages people to do things for others for free. People can post wishes for things they want or need help with, and offer what they can give, whether goods or skills. Impossible displays these wishes and offers so that people can connect with one another. You can also create thank-you posts to send to people.

Innovating for the Global South: New book offers practical insights


Press Release: “Despite the vast wealth generated in the last half century, in today’s world inequality is worsening and poverty is becoming increasingly chronic. Hundreds of millions of people continue to live on less than $2 per day and lack basic human necessities such as nutritious food, shelter, clean water, primary health care, and education.
Innovating for the Global South: Towards an Inclusive Innovation Agenda, the latest book from Rotman-UTP Publishing and the first volume in the Munk Series on Global Affairs, offers fresh solutions for reducing poverty in the developing world. Highlighting the multidisciplinary expertise of the University of Toronto’s Global Innovation Group, leading experts from the fields of engineering, public health, medicine, management, and public policy examine the causes and consequences of endemic poverty and the challenges of mitigating its effects from the perspective of the world’s poorest of the poor.
Can we imagine ways to generate solar energy to run essential medical equipment in the countryside? Can we adapt information and communication technologies to provide up-to-the-minute agricultural market prices for remote farming villages? How do we create more inclusive innovation processes to hear the voices of those living in urban slums? Is it possible to reinvent a low-cost toilet that operates beyond the water and electricity grids?
Motivated by the imperatives of developing, delivering, and harnessing innovation in the developing world, Innovating for the Global South is essential reading for managers, practitioners, and scholars of development, business, and policy.
“As we see it, Innovating for the Global South is fundamentally about innovating scalable solutions that mitigate the effects of poverty and underdevelopment in the Global South. It is not about inventing some new gizmo for some untapped market in the developing world,” say Profs. Dilip Soman and Joseph Wong of the University of Toronto, two of the volume’s editors.
The book is edited by, and features contributions from, three leading University of Toronto thinkers, who tackle innovation in the Global South from three different academic perspectives.

  • Dilip Soman is Corus Chair in Communication Strategy and a professor of Marketing at the Rotman School of Management.
  • Janice Gross Stein is the Belzberg Professor of Conflict Management in the Department of Political Science and Director of the Munk School of Global Affairs.
  • Joseph Wong is Ralph and Roz Halbert Professor of Innovation at the Munk School of Global Affairs and Canada Research Chair in Democratization, Health, and Development in the Department of Political Science.

The chapters in the book address the process of innovation from a number of vantage points.
Introduction: Rethinking Innovation – Joseph Wong and Dilip Soman
Chapter 1: Poverty, Invisibility, and Innovation – Joseph Wong
Chapter 2: Behaviourally Informed Innovation – Dilip Soman
Chapter 3: Appropriate Technologies for the Global South – Yu-Ling Cheng (University of Toronto, Chemical Engineering and Applied Chemistry) and Beverly Bradley (University of Toronto, Centre for Global Engineering)
Chapter 4: Globalization of Biopharmaceutical Innovation: Implications for Poor-Market Diseases – Rahim Rezaie (University of Toronto, Munk School of Global Affairs, Research Fellow)
Chapter 5: Embedded Innovation in Health – Anita M. McGahan (University of Toronto, Rotman School of Management, Associate Dean of Research), Rahim Rezaie and Donald C. Cole (University of Toronto, Dalla Lana School of Public Health)
Chapter 6: Scaling Up: The Case of Nutritional Interventions in the Global South – Ashley Aimone Phillips (Registered Dietitian), Nandita Perumal (University of Toronto, Doctoral Fellow, Epidemiology), Carmen Ho (University of Toronto, Doctoral Fellow, Political Science), and Stanley Zlotkin (University of Toronto and the Hospital for Sick Children, Paediatrics, Public Health Sciences and Nutritional Sciences)
Chapter 7: New Models for Financing Innovative Technologies and Entrepreneurial Organizations in the Global South – Murray R. Metcalfe (University of Toronto, Centre for Global Engineering, Globalization)
Chapter 8: Innovation and Foreign Policy – Janice Gross Stein
Conclusion: Inclusive Innovation – Will Mitchell (University of Toronto, Rotman School of Management, Strategic Management), Anita M. McGahan”

Interactive map helps patients find the fastest emergency healthcare


Springwise: “Waiting is an activity that most people receiving healthcare have acutely experienced, whether it’s being put on a list for treatment or simply passing time before a GP visit. Cutting waiting time is something that can drastically improve patients’ experience, and in the past we’ve seen ideas such as HealthSpot use telemedicine to deliver healthcare advice remotely. For accidents and emergencies that require more hands-on attention however, a new platform called ER Wait Watcher enables users to determine which nearby hospital is likely to see them first.
Developed by journalism nonprofit ProPublica, the tool simply asks those who have had an accident or emergency to enter their zip code or street name. Using a number of factors — including average reported wait times for each hospital, how far away each hospital is, and live Google traffic reports — it then lists which institution is likely to see them first. Users can see how long they can expect to wait on average before they will be seen, sent home, receive pain medication for a broken bone or transferred to their room for more serious injuries. The site works on the premise that the nearest hospital isn’t necessarily the one that will get patients the treatment they need most speedily.
The site does advise patients to call up their hospital for a more accurate estimate, and users currently need to navigate to the website to use it. Are there ways to make this kind of platform more accurate or user-friendly? Website: www.propublica.org”
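The ranking logic the article describes — combining each hospital's average reported wait with current travel time — can be sketched in a few lines. The hospital names and numbers below are hypothetical, purely for illustration; ER Wait Watcher's actual data pipeline is more involved.

```python
# Illustrative sketch of ER Wait Watcher's core idea: rank nearby
# hospitals by estimated total time to be seen, i.e. travel time
# plus the hospital's average reported ER wait. Data is made up.

def rank_hospitals(hospitals):
    """Sort hospitals by travel time + average ER wait (minutes)."""
    return sorted(hospitals, key=lambda h: h["travel_min"] + h["avg_wait_min"])

nearby = [
    {"name": "City General",   "travel_min": 5,  "avg_wait_min": 55},
    {"name": "St. Mary's",     "travel_min": 15, "avg_wait_min": 20},
    {"name": "County Medical", "travel_min": 10, "avg_wait_min": 40},
]

for h in rank_hospitals(nearby):
    print(h["name"], h["travel_min"] + h["avg_wait_min"])
```

Note how the nearest hospital (City General, 5 minutes away) ranks last once its long wait is factored in, which is exactly the premise the site is built on.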

Big Data for Law


legislation.gov.uk: “The National Archives has received ‘big data’ funding from the Arts and Humanities Research Council (AHRC) to deliver the ‘Big Data for Law’ project. Just over £550,000 will enable the project to transform how we understand and use current legislation, delivering a new service – legislation.gov.uk Research – by March 2015.

There are an estimated 50 million words in the statute book, with 100,000 words added or changed every month. Search engines and services like legislation.gov.uk have transformed access to legislation. Law is accessed by a much wider group of people, the majority of whom are typically not legally trained or qualified. All users of legislation are confronted by the volume of legislation, its piecemeal structure, frequent amendments, and the interaction of the statute book with common law and European law. Not surprisingly, many find the law difficult to understand and comply with.

There has never been a more relevant time for research into the architecture and content of law, the language used in legislation and how, through interpretation by the courts, it is given effect – research that will underpin the drive to deliver good, clear and effective law. Researchers typically lack the raw data, the tools, and the methods to undertake research across the whole statute book. Meanwhile, the combination of low-cost cloud computing, open source software and new methods of data analysis – the enablers of the big data revolution – are transforming research in other fields. Big data research is perfectly possible with legislation if only the basic ingredients – the data, the tools and some tried and trusted methods – were as readily available as the computing power and the storage. The vision for this project is to address that gap by providing a new Legislation Data Research Infrastructure at research.legislation.gov.uk.
Specifically tailored to researchers’ needs, it will consist of downloadable data, online tools for end-users; and open source tools for researchers to download, adapt and use….
There are three main areas for research:

  • Understanding researchers’ needs: to ensure the service is based on evidenced need, capabilities and limitations, putting big data technologies in the hands of non-technical researchers for the first time.
  • Deriving new open data from closed data: no one has all the data that researchers might find useful. For example, the potentially personally identifiable data about users and usage of legislation.gov.uk cannot be made available as open data but is perfect for processing using existing big data tools; e.g. to identify clusters in legislation or “recommendations” datasets of “people who read Act A or B also looked at Act Y or Z”. The project will look at whether it is possible to create new open data sets from this type of closed data. An N-Grams dataset and appropriate user interface for legislation or related case law, for example, would contain sequences of words/phrases/statistics about their frequency of occurrence per document. N-Grams are useful for research in linguistics or history, and could be used to provide a predictive text feature in a drafting tool for legislation.
  • Pattern language for legislation: We need new ways of codifying and modelling the architecture of the statute book to make it easier to research its entirety using big data technologies. The project will seek to learn from other disciplines, applying the concept of a ‘pattern language’ to legislation. Pattern languages have revolutionised software engineering over the last twenty years and have the potential to do the same for our understanding of the statute book. A pattern language is simply a structured method of describing good design practices, providing a common vocabulary between users and specialists, structured around problems or issues, with a solution. Patterns are not created or invented – they are identified as ‘good design’ based on evidence about how useful and effective they are. Applied to legislation, this might lead to a common vocabulary between the users of legislation and legislative drafters, to identifying useful and effective drafting practices and solutions that deliver good law. This could enable a radically different approach to structuring teaching materials or guidance for legislators.”
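The N-Grams idea mentioned above — sequences of words with frequency statistics per document — is straightforward to sketch. Below is a minimal example over a hypothetical snippet of legislative text; the project's real dataset would of course be built over the full statute book.

```python
from collections import Counter

def ngram_counts(text, n=2):
    """Count n-grams (sequences of n consecutive words) in a text."""
    words = text.lower().split()
    grams = zip(*(words[i:] for i in range(n)))
    return Counter(" ".join(g) for g in grams)

# Hypothetical fragment in the style of UK legislation.
sample = "the secretary of state may by regulations make provision"
counts = ngram_counts(sample, n=2)
print(counts.most_common(3))
```

Aggregating such counts per Act, and across the whole statute book, is what would let linguists track drafting conventions over time or power a predictive-text drafting tool.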

Open Data is an Essential Ingredient for Better Development Research


Aiddata blogpost: “UNICEF is making data a priority by re-launching the “UNICEF Child Info” department as “UNICEF Data” and actively promoting the use and collection of data to guide development. While their data is not subnational, it is comprehensive and expansive in its indicators. UNICEF’s mission calls for the use of the power of statistics and data to tell a story about the quality of life for children around the world. The connection between improving data and improving lives is a critical one that, while sometimes overshadowed by technical discussions on providing better data, is at the core of open data and the data transparency initiatives. By using evidence to anchor their decision-making, the UNICEF Data initiative hopes to craft and inspire better ways of caring for and empowering children across the globe.”

Habermas and the Garants: Narrowing the gap between policy and practice in French organisation–citizen engagement


New paper by Judy Burnside-Lawry, Carolyne Lee and Sandrine Rui: “This article draws on a case study of organisation–citizen engagement during railway infrastructure planning in southwest France, to examine the nature of participatory democracy, both conceptually —as elucidated by Habermas and others— and empirically, as recently practised within the framework of a model established in one democratically governed country.
We analyse roles played by the state organisation responsible for building railway infrastructure; the National Commission for Public Debate; and the Garants, who oversee and facilitate the participatory process as laid down by the French law of Public Debate. We conclude by arguing that despite its normative aspects and its lack of provision for analysis of power relations, Habermas’s theory of communicative action can be used to evaluate the quality of organisation–citizen engagement, potentially providing a basis for informing actual models of democratic participation.”

New Programming Language Removes Human Error from Privacy Equation


MIT Technology Review: “Anytime you hear about Facebook inadvertently making your location public, or revealing who is stalking your profile, it’s likely because a programmer added code that introduced a bug.
But what if there was a system in place that could substantially reduce such privacy breaches and effectively remove human error from the equation?
One MIT PhD thinks she has the answer, and its name is Jeeves.
This past month, Jean Yang released an open-source Python version of “Jeeves,” a programming language with built-in privacy features that free programmers from having to provide on-the-go ad-hoc maintenance of privacy settings.
Given that somewhere between 10 and 20 percent of all code is related to privacy policy, Yang thinks that Jeeves will be an attractive option for social app developers who are looking to be more efficient in their use of programmer resources – as well as those who are hoping to assuage users’ privacy concerns about if and how their data is used.
For more information about Jeeves visit the project site.
For more information on Yang visit her CSAIL page.”
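Jeeves's central idea is that a sensitive value carries two views — a secret one and a public one — and a policy attached to the value, rather than scattered checks in application code, decides which view each observer gets. The toy sketch below illustrates that faceted-value concept in plain Python; it is not Jeeves's actual API, and the names (`Faceted`, `reveal`) are invented for illustration.

```python
# Toy illustration of the faceted-value idea behind Jeeves: a value
# carries a secret and a public facet, and a policy over the viewer
# decides which facet is revealed. This is NOT Jeeves's real API.

class Faceted:
    def __init__(self, secret, public, policy):
        self.secret = secret    # view shown to permitted viewers
        self.public = public    # view shown to everyone else
        self.policy = policy    # function: viewer -> bool

    def reveal(self, viewer):
        return self.secret if self.policy(viewer) else self.public

# A user's location is visible only to their friends.
friends = {"alice", "bob"}
location = Faceted("Cambridge, MA", "undisclosed",
                   policy=lambda viewer: viewer in friends)

print(location.reveal("alice"))    # a friend sees the real location
print(location.reveal("mallory"))  # others see the public facet
```

The point of centralizing the policy on the value itself is that no code path can forget to check it — which is how such a language removes the human error the article describes.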

The FDA is making adverse event and recall data available to app developers


in FierceBioTechIT: “When Beth Noveck arrived at the White House she had a clear, albeit unusual, mission–to apply the transparency and collaboration of the open-source movement to government. Noveck has now left the White House, but the ideas she brought are still percolating through the governmental machine. In 2014, the thinking is set to lead to a new, more open FDA.
Regulatory Focus reports the agency has quietly created a website and initiative called openFDA. At this stage the project is still in the prelaunch phase, but the FDA has already given a teaser of its plans. When the program opens for beta access later this year, users will gain access to structured data sets as application programming interfaces (APIs) and raw downloads. The ultimate scope of the project is unclear, but for now the FDA is working on making three data sets available.
The three data sets will give users unprecedented access to FDA archives of adverse events, product recalls and label information. Together the three data sets represent a substantial slice of what many people want to know about the FDA. The adverse event database contains details of millions of side effects and medication errors, while the recall information the FDA is preparing to share gathers all the public notices of products withdrawn from the market.
Making the data available as an API–a way for machines to talk to each other–means third parties can use the information as the basis for apps. The experience of the National Aeronautics and Space Administration (NASA) gives some indication of what might happen once the FDA opens up its data. One year after making its data available as an API in 2011, NASA began holding an annual Space Apps Challenge. At the event, people create apps and APIs.
Some challenges have no obvious use for NASA, such as a project to make a 3D printed model of the dark side of the moon from NASA data. Others could clearly be the starting point for technology used by the space agency. In one challenge, teams were tasked with creating a miniaturized modular research satellite for use on Mars. NASA is working to the same White House digital playbook as the FDA. How the FDA interprets the broad goals in the drug regulation arena remains to be seen.
– read the Regulatory Focus article
– here’s the openFDA page
– check out NASA’s challenges

What makes a good API?


Joshua Tauberer’s Blog: “There comes a time in every dataset’s life when it wants to become an API. That might be because of consumer demand or an executive order. How are you going to make a good one?…
Let’s take the common case where you have a relatively static, large dataset that you want to provide read-only access to. Here are 19 common attributes of good APIs for this situation. …
Granular Access. If the user wanted the whole thing they’d download it in bulk, so an API must be good at providing access to the most granular level practical for data users (h/t Ben Balter for the wording on that). When the data comes from a table, this usually means the ability to read a small slice of it using filters, sorting, and paging (limit/offset), the ability to get a single row by identifying it with a persistent, unique identifier (usually a numeric ID), and the ability to select just which fields should be included in the result output (good for optimizing bandwidth in mobile apps, h/t Eric Mill). (But see “intents” below.)
Deep Filtering. An API should be good at needle-in-haystack problems. Full text search is hard to do, so an API that can do it relieves a big burden for developers — if your API has any big text fields. Filters that can span relations or cross tables (i.e. joins) can be very helpful as well. But don’t go overboard. (Again, see “intents” below.)
Typed Values. Response data should be typed. That means that whether a field’s value is an integer, text, list, floating-point number, dictionary, null, or date should be encoded as a part of the value itself. JSON and XML with XSD are good at this. CSV and plain XML, on the other hand, are totally untyped. Types must be strictly enforced. Columns must choose a data type and stick with it, no exceptions. When encoding other sorts of data as text, the values must all absolutely be valid according to the most narrow regular expression that you can make. Provide that regular expression to the API users in documentation.
Normalize Tables, Then Denormalize. Normalization is the process of removing redundancy from tables by making multiple tables. You should do that. Have lots of primary keys that link related tables together. But… then… denormalize. The bottleneck of most APIs isn’t disk space but speed. Queries over denormalized tables are much faster than writing queries with JOINs over multiple tables. It’s faster to get data if it’s all in one response than if the user has to issue multiple API calls (across multiple tables) to get it. You still have to normalize first, though. Denormalized data is hard to understand and hard to maintain.
Be RESTful, And More. “REST” is a set of practices. There are whole books on this. Here it is in short. Every object named in the data (often that’s the rows of the table) gets its own URL. Hierarchical relationships in the data are turned into nice URL paths with slashes. Put the URLs of related resources in output too (HATEOAS, h/t Ed Summers). Use HTTP GET and normal query string processing (a=x&b=y) for filtering, sorting, and paging. The idea of REST is that these are patterns already familiar to developers, and reusing existing patterns — rather than making up entirely new ones — makes the API more understandable and reusable. Also, use HTTPS for everything (h/t Eric Mill), and provide the API’s status as an API itself possibly at the root URL of the API’s URL space (h/t Eric Mill again).
….
Never Require Registration. Don’t have authentication on your API to keep people out! In fact, having a requirement of registration may contradict other guidelines (such as the 8 Principles of Open Government Data). If you do use an API key, make it optional. A non-authenticated tier lets developers quickly test the waters, and that is really important for getting developers in the door, and, again, it may be important for policy reasons as well. You can have a carrot to incentivize voluntary authentication: raise the rate limit for authenticated queries, for instance. (h/t Ben Balter)
Interactive Documentation. An API explorer is a web page that users can visit to learn how to build API queries and see results for test queries in real time. It’s an interactive browser tool, like interactive documentation. Relatedly, an “explain mode” in queries, which instead of returning results says what the query was and how it would be processed, can help developers understand how to use the API (h/t Eric Mill).
Developer Community. Life is hard. Coding is hard. The subject matter your data is about is probably very complex. Don’t make your API users wade into your API alone. Bring the users together, bring them to you, and sometimes go to them. Let them ask questions and report issues in a public place (such as github). You may find that users will answer other users’ questions. Wouldn’t that be great? Have a mailing list for longer questions and discussion about the future of the API. Gather case studies of how people are using the API and show them off to the other users. It’s not a requirement that the API owner participates heavily in the developer community — just having a hub is very helpful — but of course the more participation the better.
Create Virtuous Cycles. Create an environment around the API that make the data and API stronger. For instance, other individuals within your organization who need the data should go through the public API to the greatest extent possible. Those users are experts and will help you make a better API, once they realize they benefit from it too. Create a feedback loop around the data, meaning find a way for API users to submit reports of data errors and have a process to carry out data updates, if applicable and possible. Do this in the public as much as possible so that others see they can also join the virtuous cycle.”

We need a new Bismarck to tame the machines


Michael Ignatieff in the Financial Times: “A question haunting democratic politics everywhere is whether elected governments can control the cyclone of technological change sweeping through their societies. Democracy comes under threat if technological disruption means that public policy no longer has any leverage on job creation. Democracy is also in danger if digital technologies give states powers of total surveillance.

If, in the words of Google chairman Eric Schmidt, there is a “race between people and computers” even he suspects people may not win, democrats everywhere should be worried. In the same vein, Lawrence Summers, former Treasury secretary, recently noted that new technology could be liberating but that the government needed to soften its negative effects and make sure the benefits were distributed fairly. The problem, he went on, was that “we don’t yet have the Gladstone, the Teddy Roosevelt or the Bismarck of the technology era”.

These Victorian giants have much to teach us. They were at the helm when their societies were transformed by the telegraph, the electric light, the telephone and the combustion engine. Each tried to soften the blow of change, and to equalise the benefits of prosperity for working people. With William Gladstone it was universal primary education and the vote for Britain’s working men. With Otto von Bismarck it was legislation that insured German workers against ill-health and old age. For Roosevelt it was the entire progressive agenda, from antitrust legislation and regulation of freight rates to the conservation of America’s public lands….

The Victorians created the modern state to tame the market in the name of democracy but they wanted a nightwatchman state, not a Leviathan. Thanks to the new digital technologies, the state they helped create now has powers of surveillance that threaten our privacy and freedom. What new technology makes possible, states will do. Keeping technology in the service of democracy will not be easy. Asking judges to guard the guards only bloats the state apparatus still further. Allowing dissident insiders to get away with leaking the state’s secrets will only result in more secretive, paranoid and controlling government.

The Victorians would have said there is a solution – representative government itself – but it requires citizens to trust their representatives to hold the government in check. The Victorians created modern, mass representative democracy so that collective public choice could control change for everyone’s benefit. They believed that representatives, if given the authority and the necessary information, could control the power that technology confers on the modern state.
This is still a viable ideal but we have plenty of rebuilding before our democratic institutions are ready for the task. Congress and parliament need to regain trust and capability; and, if they do, we can start recovering the faith of the Victorians we so sorely need: the belief that democracy can master the technologies that are transforming our lives.”