The Small World Initiative: An Innovative Crowdsourcing Platform for Antibiotics


Ana Maria Barral et al. in FASEB Journal: “The Small World Initiative™ (SWI) is an innovative program that encourages students to pursue careers in science and sets forth a unique platform to crowdsource new antibiotics. It centers around an introductory biology course through which students perform original hands-on field and laboratory research in the hunt for new antibiotics. Through a series of student-driven experiments, students collect soil samples, isolate diverse bacteria, test their bacteria against clinically relevant microorganisms, and characterize those showing inhibitory activity. This is particularly relevant since over two-thirds of antibiotics originate from soil bacteria or fungi. SWI’s approach also provides a platform to crowdsource antibiotic discovery by tapping into the intellectual power of many people concurrently addressing a global challenge and advances promising candidates into the drug development pipeline. This unique class approach harnesses the power of active learning to achieve both educational and scientific goals….We will discuss our preliminary student evaluation results, which show the compelling impact of the program in comparison to traditional introductory courses. Ultimately, the mission of the program is to provide an evidence-based approach to teaching introductory biology concepts in the context of a real-world problem. This approach has been shown to be particularly impactful on underrepresented STEM talent pools, including women and minorities….(More)”

Outstanding Challenges in Recent Open Government Data Initiatives


Paper by Usamah A. Algemili: “In recent years, we have witnessed increasing interest in government data. Many governments around the world have sensed the value of their passive data sets. These governments started their Open Data policies, yet many countries are still on the way to converting raw data into useful representations. This paper surveys previous Open Data initiatives. It discusses the various challenges that open data projects may encounter during the transformation from passive data sets towards an Open Data culture. It reaches out to project teams to acquire their practical assessments: an online form was distributed among project teams. The questionnaire was developed in alignment with the previous literature on data-integration challenges. 138 eligible professionals participated, and their responses were analyzed by the researcher. The results section identifies the most critical challenges from the project teams’ point of view, and the findings show four obstacles that stand out as critical challenges facing project teams. This paper casts light on these challenges, and it attempts to indicate the gap between current guidelines and practical experience. Accordingly, this paper presents the current infrastructure of the Open Data framework, followed by additional recommendations that may lead to successful implementation of Open Data development….(More)”

Robot Regulators Could Eliminate Human Error


From the San Francisco Chronicle and RegBlog: “Long a fixture of science fiction, artificial intelligence is now part of our daily lives, even if we do not realize it. Through the use of sophisticated machine learning algorithms, for example, computers now work to filter out spam messages automatically from our email. Algorithms also identify us by our photos on Facebook, match us with new friends on online dating sites, and suggest movies to watch on Netflix.

These uses of artificial intelligence hardly seem very troublesome. But should we worry if government agencies start to use machine learning?

Complaints abound even today about the uncaring “bureaucratic machinery” of government. Yet seeing how machine learning is starting to replace jobs in the private sector, we can easily fathom a literal machinery of government in which decisions once made by human public servants are increasingly made by machines.

Technologists warn of an impending “singularity,” when artificial intelligence surpasses human intelligence. Entrepreneur Elon Musk cautions that artificial intelligence poses one of our “biggest existential threats.” Renowned physicist Stephen Hawking eerily forecasts that artificial intelligence might even “spell the end of the human race.”

Are we ready for a world of regulation by robot? Such a world is closer than we think—and it could actually be worth welcoming.

Already government agencies rely on machine learning for a variety of routine functions. The Postal Service uses learning algorithms to sort mail, and cities such as Los Angeles use them to time their traffic lights. But while uses like these seem relatively benign, consider that machine learning could also be used to make more consequential decisions. Disability claims might one day be processed automatically with the aid of artificial intelligence. Licenses could be awarded to airplane pilots based on what kinds of safety risks complex algorithms predict each applicant poses.

Learning algorithms are already being explored by the Environmental Protection Agency to help make regulatory decisions about what toxic chemicals to control. Faced with tens of thousands of new chemicals that could potentially be harmful to human health, federal regulators have supported the development of a program to prioritize which of the many chemicals in production should undergo more in-depth testing. By some estimates, machine learning could save the EPA up to $980,000 per toxic chemical positively identified.
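
The excerpt doesn’t describe the EPA’s actual model, but the general technique is standard supervised learning: train a classifier on chemicals whose toxicity is already known, then rank untested chemicals by predicted risk. Here is a minimal Python sketch; the descriptors, labels, and data are invented for illustration.

```python
# Illustrative sketch only: ranking chemicals for in-depth testing with a
# learned classifier. Descriptors and labels are hypothetical, not the EPA's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical training set: molecular descriptors for chemicals whose
# toxicity has already been established in the lab (1 = toxic, 0 = not).
X_train = rng.normal(size=(500, 6))          # 6 made-up descriptors
y_train = (X_train[:, 0] + X_train[:, 3] > 1).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Untested chemicals in production: score each by predicted probability of
# toxicity, then send the highest-risk ones to in-depth testing first.
X_untested = rng.normal(size=(10_000, 6))
risk = model.predict_proba(X_untested)[:, 1]
priority_order = np.argsort(risk)[::-1]      # most suspicious first

print("Top 10 candidates for in-depth testing:", priority_order[:10])
```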

It’s not hard then to imagine a day in which even more regulatory decisions are automated. Researchers have shown that machine learning can lead to better outcomes when determining whether parolees ought to be released or domestic violence orders should be imposed. Could the imposition of regulatory fines one day be determined by a computer instead of a human inspector or judge? Quite possibly so, and this would be a good thing if machine learning could improve accuracy, eliminate bias and prejudice, and reduce human error, all while saving money.

But can we trust a government that bungled the initial rollout of Healthcare.gov to deploy artificial intelligence responsibly? In some circumstances we should….(More)”

Global sharing of HIV vaccine research


Springwise: “Noticing that the world of HIV vaccine research lacked a global, collaborative effort, scientists came together to make one a reality. Populated by research from the Collaboration for AIDS Vaccine Discovery — an international network of laboratories — DataSpace is a partnership between the Statistical Center for HIV/AIDS Research and Prevention, data management and software development company LabKey, and technology product development company Artefact.

Through pooled research results, scientists hope to make data more accessible and comparable. Two aspects make the platform particularly powerful. The Artefact team hand-coded a number of research points to allow results from multiple studies to be compared like-for-like. And rather than discard the findings of failed or inconclusive studies, DataSpace includes them in analysis, vastly increasing the volume of available information.
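
The article doesn’t detail Artefact’s coding scheme. As a rough sketch of the underlying idea, mapping each study’s field names onto a shared schema so that results can be pooled (with failed and inconclusive studies kept in), the step might look like this in pandas; all field names are hypothetical.

```python
# Sketch of like-for-like harmonization across studies. The column mappings
# are hypothetical; DataSpace's actual coding scheme is not described here.
import pandas as pd

# Two studies that recorded the same measurement under different names.
study_a = pd.DataFrame({"ab_titer": [120, 85], "outcome": ["protective", "inconclusive"]})
study_b = pd.DataFrame({"antibody_level": [97, 143], "result": ["failed", "protective"]})

# Hand-coded mapping of each study's fields onto one shared schema.
SCHEMA = {
    "study_a": {"ab_titer": "antibody_titer", "outcome": "result"},
    "study_b": {"antibody_level": "antibody_titer", "result": "result"},
}

pooled = pd.concat(
    [
        study_a.rename(columns=SCHEMA["study_a"]).assign(study="study_a"),
        study_b.rename(columns=SCHEMA["study_b"]).assign(study="study_b"),
    ],
    ignore_index=True,
)

# Failed and inconclusive rows stay in the pool rather than being discarded,
# which is what makes cross-study comparison more powerful.
print(pooled)
```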

Material is added as study results become available, creating a constantly developing resource. Being able to quickly test ideas online helps researchers make serendipitous connections and avoid duplicating efforts….(More)”

Workplace innovation in the public sector


Eurofound: “Innovative organisational practices in the workplace, which aim to make the best use of human capital, are traditionally associated with the private sector. The nature of public sector activities makes it more difficult to identify these types of internal innovation in publicly funded organisations.

It is widely thought that public sector organisations are neither dynamic nor creative and are typified by a high degree of inertia. Yet the necessity of innovation ought not to be dismissed. The public sector represents a quarter of total EU employment, and it is of critical importance as a provider and regulator of services. Improving how it performs has a knock-on effect not only for private sector growth but also for citizens’ satisfaction. Ultimately, this improves governance itself.

So how can innovative organisation practices help in dealing with the challenges faced by the public sector? Eurofound, as part of a project on workplace innovation in European companies, carried out case studies of both private and public sector organisations. The findings show a number of interesting practices and processes used.

Employee participation

The case studies from the public sector, some of which are described below, demonstrate the central role of employee participation in the implementation of workplace innovation and its impacts on organisation and employees. They indicate that innovative practices have resulted in enhanced organisational performance and quality of working life.

It is widely thought that changes in the public sector are initiated as a response to government policies. This is often true, but workplace innovation may also be introduced as a result of well-designed initiatives driven by external pressures (such as the need for a more competitive public service) or internal pressures (such as a need to update the skills map to better serve the public).

Case study findings

The state-owned Lithuanian energy company Lietuvos Energijos Gamyba (140 KB PDF) encourages employee participation by providing a structured framework for all employees to propose improvements. This has required a change in managerial approach and has spread a sense of ownership horizontally and vertically in the company. The Polish public transport company Jarosław City Transport (191 KB PDF), when faced with serious challenges to its financial stability, implemented operational changes and set up ways for employees’ voices to be heard, which enabled a contributory dialogue and strengthened partnerships. Consultation, development of mutual trust, and common involvement ensured an effective combination of top-down and bottom-up initiatives.

The Lithuanian Post, AB Lietuvos Pastas (136 KB PDF), experienced a major organisational transformation in 2010 to improve efficiency and quality of service. Through a programme of monthly ‘Loyalty day’ visits, both top and middle management of the central administration visit any part of the company and work with colleagues in other units. Under budgetary pressure to ‘earn their money’, the Danish Vej and Park Bornholm (142 KB PDF) construction services in roads, parks and forests had to find innovative solutions to deal with a merger and privatisation. Their intervention had the characteristics of a workplace partnership, with a new set of organisational values set from the bottom up. Self-managing teams are essential for the operation of the company.

The world of education has provided new structures that deliver better outcomes for students. The South West University of Bulgaria (214 KB PDF) also operates small self-managing teams responsible for employee scheduling. Weekly round-tables encourage participation in collectively finding solutions, creating a more effective environment in which to respond to the competitive demands of education provision.

In Poland, an initiative by the Pomeranian Library (185 KB PDF) improved employee–management dialogue and communication through increased participation. The initiative is a response to the new frameworks for open access to knowledge for users, with the library mirroring the user experience through its own work practices.

Through new dialogue, government advisory bodies have also developed employee-led improvement. Breaking away from a traditional hierarchy is considered important in achieving a more flexible work organisation. Under considerable pressure, the top-heavy management of the British Geological Survey (89 KB PDF) now operates a flexible matrix that promotes innovative and entrepreneurial ways of working. And in Germany, Niersverband (138 KB PDF), a publicly owned water-management company, innovated through training, learning, reflection partnerships and workplace partnerships. New occupational profiles were developed to meet external demands. Based on dialogue concerning workplace experiences and competences, employees acquired new qualifications that allowed the company to be more competitive.

In the Funen Village Museum in Odense, Denmark, (143 KB PDF) innovation came about at the request of staff looking for more flexibility in how they work. Formerly most of their work was maintenance tasks, but now they can engage more with visitors. Control of schedules has moved to the team rather than being the responsibility of a single manager. As a result, museum employees are now hosts as well as craftspeople. They no longer feel ‘forgotten’ and are happier in their work….(More)”

The report Workplace innovation in European companies provides a full analysis of the case studies.

The 51 case studies and the list of companies (PDF 119 KB) the case studies are based on are available for download.

Fifty Shades of Open


Jeffrey Pomerantz and Robin Peek at First Monday: “Open source. Open access. Open society. Open knowledge. Open government. Even open food. Until quite recently, the word “open” had a fairly constant meaning. The over-use of the word “open” has led to its meaning becoming increasingly ambiguous. This presents a critical problem for this important word, as ambiguity leads to misinterpretation.

“Open” has been applied to a wide variety of words to create new terms, some of which make sense, and some not so much. When we started writing this essay, we thought our working title was simply amusing. But the working title became the actual title, as we found that there are at least 50 different terms in which the word “open” is used, encompassing nearly as many different criteria for openness. In this essay we will attempt to make sense of this open season on the word “open.”

Opening the door on open

The word “open” is, perhaps unsurprisingly, a very old one in the English language, harking back to Early Old English. Unlike some words in English, the definition of “open” has changed very little in the intervening thousand-plus years: the earliest recorded uses of the word are completely consistent with its modern usage as an adjective, indicating a passage through or an access into something (Oxford English Dictionary, 2016).

This meaning leads to the development in the fifteenth century of the phrases “open house,” meaning an establishment in which all are welcome, and “open air,” meaning unenclosed outdoor spaces. One such unenclosed outdoor space that figured large in the fifteenth century, and continues to do so today, is the Commons (Hardin, 1968): land or other resources that are not privately owned, but are available for use to all members of a community. The word “open” in these phrases indicates that all have access to a shared resource. All are welcome to visit an open house, but not to move in; all are welcome to walk in the open air or graze their sheep on the Commons, but not to fence the Commons as part of their backyard. (And the moment at which Commons land ceases to be open is precisely the moment it is fenced by an owner, which is in fact what happened in Great Britain during the Enclosure movement of the sixteenth through eighteenth centuries.)

Running against the grain of this cultural movement to enclosure, the nineteenth century saw the circulating library become the norm — rather than libraries in which massive tomes were literally chained to desks. The interpretation of the word “open” to mean a shared resource to which all had access fit neatly into the philosophy of the modern library movement of the nineteenth century. The phrases “open shelves” and “open stacks” emerged at this time, referring to resources that were directly available to library users, without necessarily requiring intervention by a librarian. Naturally, however, not all library resources were made openly available, nor are they even today. Furthermore, resources are made openly available with the understanding that, like Commons land, they must be shared: library resources have a due date.

The twentieth century saw an increase in the use of the word “open,” as well as a hint of the confusion that was to come about the interpretation of the word. The term “open society” was coined prior to World War I, to indicate a society tolerant of religious diversity. The “open skies” policy enables a nation to allow other nations’ commercial aviation to fly through its airspace — though, importantly, without giving up control of its airspace. The Open University was founded in the United Kingdom in 1969, to provide a university education to all, with no formal entry requirements. The meaning of the word “open” is quite different across these three terms — or perhaps it would be more accurate to say that these terms use different shadings of the word.

But it has been the twenty-first century that has seen the most dramatic increase in the number of terms that use “open.” The story of this explosion in the use of the word “open” begins, however, with a different word entirely: the word “free.”….

The essay’s sections:

  • Introduction
  • Opening the door on open
  • Speech, beer, and puppies
  • Open means rights
  • Open means access
  • Open means use
  • Open means transparent
  • Open means participatory
  • Open means enabling openness
  • Open means philosophically aligned with open principles
  • Openwashing and its discontents
  • Conclusion

Yelp, Google Hold Pointers to Fix Governments


Christopher Mims at the Wall Street Journal: “When Kaspar Korjus was born, he was given a number before he was given a name, as are all babies in Estonia. “My name is 38712012796, which I got before my name of Kaspar,” says Mr. Korjus.

In Estonia, much of life—voting, digital signatures, prescriptions, taxes, bank transactions—is conducted with this number. The resulting services aren’t just more convenient, they are demonstrably better. It takes an Estonian three minutes to file his or her taxes.
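
The article doesn’t unpack the number itself, but the Estonian personal code has a publicly documented structure: a century/sex digit, the birth date as YYMMDD, a three-digit serial, and a check digit. Below is a sketch that validates a code using the commonly documented modulo-11 checksum.

```python
# Sketch of validating an Estonian personal code (isikukood), following the
# commonly documented format C YYMMDD SSS K, where C encodes century and sex,
# SSS is a serial number, and K is a mod-11 check digit.

def isikukood_checksum(digits: str) -> int:
    weights1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 1]
    weights2 = [3, 4, 5, 6, 7, 8, 9, 1, 2, 3]
    for weights in (weights1, weights2):
        total = sum(int(d) * w for d, w in zip(digits[:10], weights))
        if total % 11 < 10:
            return total % 11
    return 0  # if both rounds yield 10, the check digit is defined as 0

def validate(code: str) -> bool:
    return len(code) == 11 and code.isdigit() and int(code[10]) == isikukood_checksum(code)

# The number quoted in the article:
print(validate("38712012796"))  # True; leading 3 = male born in the 1900s, 871201 = 1987-12-01
```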

Americans are unlikely to accept a unified national ID system. But Estonia offers an example of the kind of innovation possible around government services, a competitive factor for modern nations.

The former Soviet republic—with a population of 1.3 million, roughly the size of San Diego—is regularly cited as a world leader in e-governance. At base, e-governance is about making government function as well as private enterprise, mostly by adopting the same information-technology infrastructure and management techniques as the world’s most technologically savvy corporations.

It isn’t that Estonia devotes more people to the problem—it took only 60 people to build the identity system. It is that the country’s leaders are willing to empower those engineers. “There is a need for politicians to not only show leadership but also there is a need to take risks,” says Estonia’s prime minister, Taavi Rõivas.

In the U.S., Matt Lira, senior adviser for House Majority Leader Kevin McCarthy, says the gap between the government’s information technology and the private sector’s has grown larger than ever. Americans want to access government services—paying property taxes or renewing a driver’s license—as easily as they look up a restaurant on Yelp or a business on Alphabet’s Google, says Neil Kleiman, a professor of policy at New York University who collaborates with cities in this subject area.

The government is unlikely to catch up soon. The Government Accountability Office last year estimated that about 25% of the federal government’s 738 major IT investments—projected to cost a total of $42 billion—were in danger of significant delays or cost overruns.

One reason for such overruns is the government’s reliance on big, monolithic projects based on proposal documents that can run to hundreds of pages. It is an approach to software development that is at least 20 years out of date. Modern development emphasizes small chunks of code accomplished in sprints and delivered to end users quickly so that problems can be identified and corrected.

Two years ago, the Obama administration devised a novel way to address these issues: assembling a crack team of coders and project managers from the likes of Google, Amazon.com and Microsoft and assigning them to big government boondoggles to help existing IT staff run more like the private sector. Known as 18F, this organization and its sister group, the U.S. Digital Service, are set to hit 500 staffers by the end of 2016….(More)”

‘Big data’ was supposed to fix education. It didn’t. It’s time for ‘small data’


Pasi Sahlberg and Jonathan Hasak in the Washington Post: “One thing that distinguishes schools in the United States from schools around the world is how data walls, which typically reflect standardized test results, decorate hallways and teacher lounges. Green, yellow, and red colors indicate levels of performance of students and classrooms. For serious reformers, this is the type of transparency that reveals more data about schools and is seen as part of the solution to how to conduct effective school improvement. These data sets, however, often don’t spark insight about teaching and learning in classrooms; they are based on analytics and statistics, not on emotions and relationships that drive learning in schools. They also report outputs and outcomes, not the impacts of learning on the lives and minds of learners….

If you are a leader of any modern education system, you probably care a lot about collecting, analyzing, storing, and communicating massive amounts of information about your schools, teachers, and students based on these data sets. This information is “big data,” a term that first appeared around 2000, which refers to data sets that are so large and complex that processing them with conventional data processing applications isn’t possible. Two decades ago, the types of data education management systems processed were input factors of the education system, such as student enrollments, teacher characteristics, or education expenditures, handled by the education department’s statistical officer. Today, however, big data covers a range of indicators about teaching and learning processes, and increasingly reports on student achievement trends over time.

With the outpouring of data, international organizations continue to build regional and global data banks. Whether it’s the United Nations, the World Bank, the European Commission, or the Organization for Economic Cooperation and Development, today’s international reformers are collecting and handling more data about human development than before. Beyond government agencies, there are global education and consulting enterprises like Pearson and McKinsey that see business opportunities in big data markets.

Among the best known today is the OECD’s Program for International Student Assessment (PISA), which measures reading, mathematical, and scientific literacy of 15-year-olds around the world. OECD now also administers an Education GPS, or a global positioning system, that aims to tell policymakers where their education systems place in a global grid and how to move to desired destinations. OECD has clearly become a world leader in the big data movement in education.

Despite all this new information and the benefits that come with it, there are clear handicaps in how big data has been used in education reforms. In fact, pundits and policymakers often forget that big data, at best, only reveals correlations between variables in education, not causality. As any introduction to statistics course will tell you, correlation does not imply causation….
We believe that it is becoming evident that big data alone won’t be able to fix education systems. Decision-makers need to gain a better understanding of what good teaching is and how it leads to better learning in schools. This is where information about details, relationships and narratives in schools become important. These are what Martin Lindstrom calls “small data”: small clues that uncover huge trends. In education, these small clues are often hidden in the invisible fabric of schools. Understanding this fabric must become a priority for improving education.

To be sure, there is not one right way to gather small data in education. Perhaps the most important next step is to realize the limitations of current big data-driven policies and practices. Too strong a reliance on externally collected data may be misleading in policy-making. Here are examples of what small data look like in practice:

  • It reduces census-based national student assessments to the necessary minimum and transfers the saved resources to enhance the quality of formative assessments in schools and teacher education on alternative assessment methods. Evidence shows that formative and other school-based assessments are much more likely to improve the quality of education than conventional standardized tests.
  • It strengthens the collective autonomy of schools by giving teachers more independence from bureaucracy and investing in teamwork in schools. This would enhance social capital, which has proved to be a critical aspect of building trust within education and enhancing student learning.
  • It empowers students by involving them in assessing and reflecting on their own learning and then incorporating that information into collective human judgment about teaching and learning (supported by national big data). Because there are different ways students can be smart in schools, no one way of measuring student achievement will reveal success. Students’ voices about their own growth may be those tiny clues that can uncover important trends of improving learning.

W. Edwards Deming once said that “without data you are another person with an opinion.” But Deming couldn’t have imagined the size and speed of data systems we have today….(More)”

OSoMe: The IUNI observatory on social media


Clayton A. Davis et al. in a PeerJ preprint: “The study of social phenomena is becoming increasingly reliant on big data from online social networks. Broad access to social media data, however, requires software development skills that not all researchers possess. Here we present the IUNI Observatory on Social Media, an open analytics platform designed to facilitate computational social science. The system leverages a historical, ongoing collection of over 70 billion public messages from Twitter. We illustrate a number of interactive open-source tools to retrieve, visualize, and analyze derived data from this collection. The Observatory, now available at osome.iuni.iu.edu, is the result of a large, six-year collaborative effort coordinated by the Indiana University Network Science Institute….(More)”
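
The excerpt names the tools but not their interfaces, so the following is only a generic illustration of the kind of derived data such an observatory computes: a hashtag co-occurrence network, built here from a handful of invented tweets.

```python
# Generic sketch of one analysis an observatory like OSoMe supports: a
# hashtag co-occurrence network. The input tweets below are invented.
from collections import Counter
from itertools import combinations

tweets = [
    {"id": 1, "hashtags": ["science", "opendata"]},
    {"id": 2, "hashtags": ["opendata", "socialmedia", "science"]},
    {"id": 3, "hashtags": ["socialmedia"]},
]

edges = Counter()
for tweet in tweets:
    # Count each unordered pair of hashtags appearing in the same tweet.
    for a, b in combinations(sorted(set(tweet["hashtags"])), 2):
        edges[(a, b)] += 1

for (a, b), weight in edges.most_common():
    print(f"#{a} -- #{b}: {weight}")
```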

A Framework for Understanding Data Risk


Sarah Telford and Stefaan G. Verhulst at Understanding Risk Forum: “….In creating the policy, OCHA partnered with the NYU Governance Lab (GovLab) and Leiden University to understand the policy and privacy landscape, best practices of partner organizations, and how to assess the data it manages in terms of potential harm to people.

We seek to share our findings with the UR community to get feedback and start a conversation around the risks of using certain types of data in humanitarian and development efforts, and around how to understand those risks.

What is High-Risk Data?

High-risk data is generally understood as data that includes attributes about individuals. This is commonly referred to as PII or personally identifiable information. Data can also create risk when it identifies communities or demographics within a group and ties them to a place (i.e., women of a certain age group in a specific location). The risk comes when this type of data is collected and shared without proper authorization from the individual or the organization acting as the data steward; or when the data is being used for purposes other than what was initially stated during collection.
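
One common way to make that group-level risk concrete is a k-anonymity check, which flags any combination of quasi-identifiers shared by fewer than k records. A minimal sketch follows; the column names and the threshold k are illustrative, not from OCHA’s policy.

```python
# Sketch of a k-anonymity check: flag any combination of quasi-identifiers
# (here age group, sex, district; hypothetical column names) shared by fewer
# than k records, since such small groups are re-identifiable.
import pandas as pd

K = 5
records = pd.DataFrame({
    "age_group": ["20-29", "20-29", "30-39", "20-29"],
    "sex":       ["F",     "F",     "M",     "F"],
    "district":  ["North", "North", "South", "North"],
})

group_sizes = records.groupby(["age_group", "sex", "district"]).size()
risky = group_sizes[group_sizes < K]

if not risky.empty:
    print(f"Groups smaller than k={K}; suppress or aggregate before sharing:")
    print(risky)
```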

The potential harms of inappropriately collecting, storing or sharing personal data can affect individuals and communities that may feel exploited or vulnerable as the result of how data is used. This became apparent during the Ebola outbreak of 2014, when a number of data projects were implemented without appropriate risk management measures. One notable example was the collection and use of aggregated call data records (CDRs) to monitor the spread of Ebola, which not only had limited success in controlling the virus, but also compromised the personal information of those in Ebola-affected countries. (See Ebola: A Big Data Disaster).

A Data-Risk Framework

Regardless of an organization’s data requirements, it is useful to think through the potential risks and harms for its collection, storage and use. Together with the Harvard Humanitarian Initiative, we have set up a four-step data risk process that includes doing an assessment and inventory, understanding risks and harms, and taking measures to counter them.

  1. Assessment – The first step is to understand the context within which the data is being generated and shared. The key questions to ask include: What is the anticipated benefit of using the data? Who has access to the data? What constitutes the actionable information for a potential perpetrator? What could trigger the threat of the data being used inappropriately?
  2. Data Inventory – The second step is to take inventory of the data and how it is being stored. Key questions include: Where is the data – is it stored locally or hosted by a third party? Where could the data be housed later? Who might gain access to the data in the future? How will we know – is data access being monitored?
  3. Risks and Harms – The next step is to identify potential ways in which risk might materialize. Thinking through various risk-producing scenarios will help prepare staff for incidents. Examples of risks include: your organization’s data being correlated with other data sources to expose individuals; your organization’s raw data being publicly released; and/or your organization’s data system being maliciously breached.
  4. Counter-Measures – The final step is to determine what measures would prevent risk from materializing. Methods and tools include developing data handling policies, implementing access controls to the data, and training staff on how to use data responsibly….(More)
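
For teams that want to operationalize the four steps, one lightweight option, purely illustrative rather than an OCHA tool, is to encode the framework’s questions as a reusable checklist:

```python
# Illustrative checklist encoding of the four-step framework above; the
# steps and questions paraphrase the text, the structure is an assumption.
FRAMEWORK = {
    "1. Assessment": [
        "What is the anticipated benefit of using the data?",
        "Who has access to the data?",
        "What constitutes actionable information for a potential perpetrator?",
        "What could trigger inappropriate use of the data?",
    ],
    "2. Data Inventory": [
        "Where is the data stored (locally or with a third party)?",
        "Where could the data be housed later?",
        "Who might gain access to the data in the future?",
        "Is data access being monitored?",
    ],
    "3. Risks and Harms": [
        "Could the data be correlated with other sources to expose individuals?",
        "What happens if the raw data is publicly released?",
        "What happens if the data system is maliciously breached?",
    ],
    "4. Counter-Measures": [
        "Are data handling policies in place?",
        "Are access controls implemented?",
        "Are staff trained to use data responsibly?",
    ],
}

for step, questions in FRAMEWORK.items():
    print(step)
    for q in questions:
        print(f"  [ ] {q}")
```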