Poor-quality news coverage is especially problematic when the political process is sharply polarized. As political scientists Tom Mann and Norman Ornstein have documented, the United States today has a Congress in which the most conservative Democrat is to the left of the most moderate Republican. [1] There are many reasons for this spike in polarization, but there is little doubt that the news media amplify and exacerbate social and political divisions.
Too often, journalists follow a “Noah’s Ark” approach to coverage in which a strong liberal is paired with a vocal conservative in an ideological food fight. The result is polarization of discourse and “false equivalence” in reporting. This lack of nuanced analysis confuses viewers and makes it difficult for them to sort out the contrasting facts and opinions. People get the sense that there are only two policy options and that there are few gradations or complexities in the positions that are reported.
In this paper, West and Stone review challenges facing the news media in an age of political polarization. These include hyper-competitiveness in news coverage, a dramatic decline in local journalism and the resulting nationalization of the news, and the personalization of coverage. After discussing these problems and how they harm current reporting, they present several ideas for nudging news producers and consumers toward more thoughtful and less polarizing responses.
"Natural Cities" Emerge from Social Media Location Data
Emerging Technology From the arXiv: “Nobody agrees on how to define a city. But the emergence of “natural cities” from social media data sets may change that, say computational geographers…
A city is a large, permanent human settlement. But try to define it more carefully and you’ll soon run into trouble. A settlement that qualifies as a city in Sweden may not qualify in China, for example. And the reasons why one settlement is classified as a town and another as a city can sometimes seem almost arbitrary.
City planners know this problem well. They tend to define cities by administrative, legal or even historical boundaries that have little logic to them. Indeed, the same city can sometimes be defined in several different ways.
That causes all kinds of problems, from counting the total population to working out who pays for the upkeep of the place. Which definition do you use?
Now help may be at hand thanks to the work of Bin Jiang and Yufan Miao at the University of Gävle in Sweden. These guys have found a way to use people’s locations, recorded by social media, to define the boundaries of so-called natural cities, which bear a close resemblance to real cities in the US.
Jiang and Miao began with a dataset from the Brightkite social network, which was active between 2008 and 2010. The site encouraged users to log in with their location details so that they could see other users nearby. So the dataset consists of almost 3 million locations in the US and the dates on which they were logged.
To start off, Jiang and Miao simply placed a dot on a map at the location of each login. They then connected these dots to their neighbours to form triangles that end up covering the entire mainland US.
Next, they calculated the size of each triangle on the map and plotted this size distribution, which turns out to follow a power law. So there are lots of tiny triangles but only a few large ones.
Finally, they calculated the average size of the triangles and then coloured in all those that were smaller than average. The coloured areas are “natural cities”, say Jiang and Miao.
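The procedure — triangulate the login points, measure each triangle, and keep those smaller than the average — is simple enough to sketch in a few lines. The fragment below is a minimal illustration using an off-the-shelf Delaunay triangulation on random stand-in data; the function name and data are ours for illustration, not the authors’ code.

```python
# A minimal sketch of the "natural cities" procedure, assuming login
# locations are (x, y) pairs in a projected coordinate system.
import numpy as np
from scipy.spatial import Delaunay

def natural_city_triangles(points: np.ndarray) -> np.ndarray:
    """Return the triangles whose area is below the mean area."""
    tri = Delaunay(points)            # connect each point to its neighbours
    corners = points[tri.simplices]   # (M, 3, 2): the vertices of each triangle
    a, b, c = corners[:, 0], corners[:, 1], corners[:, 2]
    # Shoelace formula for the area of each triangle
    areas = 0.5 * np.abs(
        (b[:, 0] - a[:, 0]) * (c[:, 1] - a[:, 1])
        - (c[:, 0] - a[:, 0]) * (b[:, 1] - a[:, 1])
    )
    # The smaller-than-average triangles, merged, form the "natural cities"
    return tri.simplices[areas < areas.mean()]

# Random stand-in for the ~3 million Brightkite login locations
rng = np.random.default_rng(0)
logins = rng.random((3000, 2))
small = natural_city_triangles(logins)
print(f"{len(small)} of the triangles are smaller than average")
```

Plotting the retained triangles (for instance with matplotlib’s triplot) and merging adjacent ones yields the coloured patches the authors call natural cities.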
It’s easy to imagine that the resulting map of triangles would be of little value. But to the evident surprise of the researchers, it produces a pretty good approximation of the cities in the US. “We know little about why the procedure works so well but the resulting patterns suggest that the natural cities effectively capture the evolution of real cities,” they say.
That’s handy because it suddenly gives city planners a way to study and compare cities on a level playing field. It allows them to see how cities evolve and change over time too. And it gives them a way to analyse how cities in different parts of the world differ.
Of course, Jiang and Miao will want to find out why this approach reveals city structures in this way. That’s still something of a puzzle but the answer itself may provide an important insight into the nature of cities (or at least into the nature of this dataset).
A few days ago, this blog wrote about how a new science of cities is emerging from the analysis of big data. This is another example, and we can expect to see more.
Ref: http://arxiv.org/abs/1401.6756 : The Evolution of Natural Cities from the Perspective of Location-Based Social Media”
Building Transparency Momentum
Aspen Baker in the Stanford Social Innovation Review: “Even engaged citizens in Oakland, Calif., didn’t know the city had a Public Ethics Commission, let alone what its purpose was, when I joined its ranks three years ago. And people who did know about it didn’t have many nice things to say: Local blogs sneered at its lack of power and few politicians feared its oversight. Created in 1996 as a watchdog organization responsible for opening up city government, the commission had become just another element of Oakland’s cumbersome, opaque bureaucracy.
It’s easy to see why. Technology and media have dramatically changed our expectations for what defines transparency and accountability. For example, in the past, walking into City Hall, making an official request for a public record, and receiving it in the mail within two weeks meant good, open government. Now, if an Internet search doesn’t instantly turn up an answer to your question about local government, the assumption often is: Government’s hiding something.
This is rarely the case. Consider that Oakland has more than 40,000 boxes full of paper documents housed in locations throughout the city, not to mention hundreds of thousands of email messages generated each year. Records management is a serious—and legal—issue, and it’s fallen way behind the times. In an age when local municipalities are financially stretched more than ever before (38 US cities have declared bankruptcy since 2010), the ability of cities to invest in the technology, systems, and staff—and to facilitate the culture change that cities often need—is a real, major challenge.
Yet, for the innovators, activists, and leaders within and outside city government, this difficult moment is also one of significant opportunity for change; and many are seizing it.
Last month, the Transparency Project of the Public Ethics Commission—a subcommittee that I initiated and have led as chair for the last year—released a report detailing just how far Oakland has come and how far we have to go to create a culture of innovation, accountability, and transparency throughout all levels of the city.
Collaboration Is Critical
What comes through the report loud and clear is the important role that collaboration between city staff, the community, nonprofits, and others played in shifting expectations and establishing new standards—including the momentum generated by the volunteer-led “City Camps,” a gathering of citizens, city government, and businesses to work on open government issues, and the recent launch of RecordTrac, an online public records request tracking system built by Code for America Fellows that departments throughout the city have successfully adopted. RecordTrac makes information available to everyone, not just the person who requested it.
Ideas and Experiments Matter
Innovators didn’t let financial circumstances get in the way of thinking about what even a cash-strapped, problem-plagued city like Oakland could do to meet the new expectations of its citizens to find information quickly and easily online. The commission’s executive director Whitney Barazoto, along with her collaborators, didn’t think “small and practical”; they chose “big and futuristic” instead. Most importantly, they sought to experiment with new ways of spreading ideas and engaging the public in discussions—far beyond the standard (and often ineffective) “three minutes at the mic” practice at public meetings….
The “Toward Collective Transparency” report details the history of the innovative efforts to increase transparency within the City of Oakland and offers a number of recommendations for what’s next. The most defining feature of this report is its acknowledgment of the significant cultural changes that are taking place within the city, and around the country, in the way we think about the role of government, citizens, and the type of engagement and collaboration that can—and should—exist between the two.
It’s easy to get caught up in what’s gone wrong, but our subcommittee made a choice early on not to get buried in the past. We capitalized on our commission’s strengths rather than our weaknesses, leaving “deficit thinking” behind so that we could think creatively about what the commission and city were uniquely positioned to do.
Why does all this matter?
Last year, John Bridgeland and Peter Orszag, former officials in the administrations of President Obama and President George W. Bush, wrote an article in The Atlantic titled “Can Government Play Moneyball?” They pointed out the need to measure the impact of government spending using the evidence-based statistical approach that the Oakland A’s general manager, Billy Beane, made famous. They argued that the scarcity Beane faced in building a competitive baseball team is not unlike the scarcity the federal government now faces, and they hoped it would push government to break some of its own traditions. Governments at all levels—city, county, state and federal—are facing revenue challenges, but we can’t let that stop progress and change.
It takes a lot more than data and technology to improve the way government operates and engages with its citizens; it demands vision and leadership. We need innovators who can break traditions and make the future come alive through collaboration, ideas, and experiments.”
Big Data’s Dangerous New Era of Discrimination
Michael Schrage in HBR blog: “Congratulations. You bought into Big Data and it’s paying off Big Time. You slice, dice, parse and process every screen-stroke, clickstream, Like, tweet and touch point that matters to your enterprise. You now know exactly who your best — and worst — customers, clients, employees and partners are. Knowledge is power. But what kind of power does all that knowledge buy?
Big Data creates Big Dilemmas. Greater knowledge of customers creates new potential and power to discriminate. Big Data — and its associated analytics — dramatically increase both the dimensionality and degrees of freedom for detailed discrimination. So where, in your corporate culture and strategy, does value-added personalization and segmentation end and harmful discrimination begin?
Let’s say, for example, that your segmentation data tells you the following:
Your most profitable customers by far are single women between the ages of 34 and 55, closely followed by “happily married” women with at least one child. Divorced women are slightly more profitable than “never marrieds.” Gay males — single and in relationships — are also disproportionately profitable. The “sweet spot” is urban and 28 to 50. These segments collectively account for roughly two-thirds of your profitability. (Unexpected factoid: Your most profitable customers are overwhelmingly Amazon Prime subscribers. What might that mean?)
Going more granular, as Big Data does, offers even sharper ethno-geographic insight into customer behavior and influence:
- Single Asian, Hispanic, and African-American women with urban post codes are most likely to complain about product and service quality to the company. Asian and Hispanic complainers happy with resolution/refund tend to be in the top quintile of profitability. African-American women do not.
- Suburban Caucasian mothers are most likely to use social media to share their complaints, followed closely by Asian and Hispanic mothers. But if resolved early, they’ll promote the firm’s responsiveness online.
- Gay urban males receiving special discounts and promotions are the most effective at driving traffic to your sites.
My point here is that these data are explicit, compelling and undeniable. But how should sophisticated marketers and merchandisers use them?
Campaigns, promotions and loyalty programs targeting women and gay males seem obvious. But should Asian, Hispanic and white females enjoy preferential treatment over African-American women when resolving complaints? After all, they tend to be both more profitable and measurably more willing to use social media effectively. Does it make more marketing sense to encourage African-American female customers to become more social media savvy? Or are resources better invested in getting more from one’s best customers? Similarly, how much effort and ingenuity should go into making more gay male customers better social media evangelists? What kinds of offers and promotions could go viral on their networks?…
Of course, the difference between price discrimination and discrimination positively correlated with gender, ethnicity, geography, class, personality and/or technological fluency is vanishingly small. Indeed, the entire epistemological underpinning of Big Data for business is that it cost-effectively makes informed segmentation and personalization possible…..
But the main source of concern won’t be privacy, per se — it will be whether and how companies and organizations like your own use Big Data analytics to justify their segmentation/personalization/discrimination strategies. The more effective Big Data analytics are in profitably segmenting and serving customers, the more likely those algorithms will be audited by regulators or litigators.
Tomorrow’s Big Data challenge isn’t technical; it’s whether managements have algorithms and analytics that are both fairly transparent and transparently fair. Big Data champions and practitioners had better be discriminating about how discriminating they want to be.”
The Age of ‘Infopolitics’
Colin Koopman in the New York Times: “We are in the midst of a flood of alarming revelations about information sweeps conducted by government agencies and private corporations concerning the activities and habits of ordinary Americans. After the initial alarm that accompanies every leak and news report, many of us retreat to the status quo, quieting ourselves with the thought that these new surveillance strategies are not all that sinister, especially if, as we like to say, we have nothing to hide.
One reason for our complacency is that we lack the intellectual framework to grasp the new kinds of political injustices characteristic of today’s information society. Everyone understands what is wrong with a government’s depriving its citizens of freedom of assembly or liberty of conscience. Everyone (or most everyone) understands the injustice of government-sanctioned racial profiling or policies that produce economic inequality along color lines. But though nearly all of us have a vague sense that something is wrong with the new regimes of data surveillance, it is difficult for us to specify exactly what is happening and why it raises serious concern, let alone what we might do about it.
Our confusion is a sign that we need a new way of thinking about our informational milieu. What we need is a concept of infopolitics that would help us understand the increasingly dense ties between politics and information. Infopolitics encompasses not only traditional state surveillance and data surveillance, but also “data analytics” (the techniques that enable marketers at companies like Target to detect, for instance, if you are pregnant), digital rights movements (promoted by organizations like the Electronic Frontier Foundation), online-only crypto-currencies (like Bitcoin or Litecoin), algorithmic finance (like automated micro-trading) and digital property disputes (from peer-to-peer file sharing to property claims in the virtual world of Second Life). These are only the tip of an enormous iceberg that is drifting we know not where.
Surveying this iceberg is crucial because atop it sits a new kind of person: the informational person. Politically and culturally, we are increasingly defined through an array of information architectures: highly designed environments of data, like our social media profiles, into which we often have to squeeze ourselves. The same is true of identity documents like your passport and individualizing dossiers like your college transcripts. Such architectures capture, code, sort, fasten and analyze a dizzying number of details about us. Our minds are represented by psychological evaluations, education records, credit scores. Our bodies are characterized via medical dossiers, fitness and nutrition tracking regimens, airport security apparatuses. We have become what the privacy theorist Daniel Solove calls “digital persons.” As such we are subject to infopolitics (or what the philosopher Grégoire Chamayou calls “datapower,” the political theorist Davide Panagia “datapolitik” and the pioneering thinker Donna Haraway “informatics of domination”).
Today’s informational person is the culmination of developments stretching back to the late 19th century. It was in those decades that a number of early technologies of informational identity were first assembled. Fingerprinting was implemented in colonial India, then imported to Britain, then exported worldwide. Anthropometry — the measurement of persons to produce identifying records — was developed in France in order to identify recidivists. The registration of births, which has since become profoundly important for initiating identification claims, became standardized in many countries, with Massachusetts pioneering the way in the United States before a census initiative in 1900 led to national standardization. In the same era, bureaucrats visiting rural districts complained that they could not identify individuals whose names changed from context to context, which led to initiatives to universalize standard names. Once fingerprints, biometrics, birth certificates and standardized names were operational, it became possible to implement an international passport system, a social security number and all other manner of paperwork that tells us who someone is. When all that paper ultimately went digital, the reams of data about us became radically more accessible and subject to manipulation, which has made us even more informational.
We like to think of ourselves as somehow apart from all this information. We are real — the information is merely about us. But what is it that is real? What would be left of you if someone took away all your numbers, cards, accounts, dossiers and other informational prostheses? Information is not just about you — it also constitutes who you are….”
Google Hangouts vs Twitter Q&As: how the US and Europe are hacking traditional diplomacy
Wired (UK): “We’re not yet sure if diplomacy is going digital or just the conversations we’re having,” Moira Whelan, Deputy Assistant Secretary for Digital Strategy, US Department of State, admitted on stage at TedxStockholm. “Sometimes you just have to dive in, and we’re going to, but we’re not really sure where we’re going.”
The US has been at the forefront of digital diplomacy for many years now. President Obama was the first leader to sign up to Twitter, and has amassed the greatest number of followers among his peers at nearly 41 million. The account is, however, mainly run by his staff. It’s understandable, but demonstrates that there still remains a diplomatic disconnect in a country Whelan says knows it’s “ready, leading the conversation and on the cutting edge”.
In Europe, Swedish Minister for Foreign Affairs Carl Bildt, on the other hand, carries out regular Q&As on the social network and is regarded as one of the most conversational leaders on Twitter and the best connected, according to the annual survey Twiplomacy. Our own William Hague is chasing Bildt with close to 200,000 followers, and is the world’s second most connected Foreign Minister, while David Cameron is active on a daily basis with more than 570,000 followers. London was in fact the first place to host a “Diplohack”, an event where ambassadors are brought together with developers and others to hack traditional diplomacy, and Whelan travelled to Sweden to take part in the third European event, the Stockholm Initiative for Digital Diplomacy, held 16-17 January in conjunction with TedxStockholm.
Nevertheless, Whelan, who has worked for the state for a decade, says the US is in the game and ready to try new things. Case in point being its digital diplomacy reaction to the crisis in Syria last year.
“In August 2013 we witnessed tragic events in Syria, and obviously the President of the United States and his security team jumped into action,” said Whelan. “We needed to bear witness and… very clearly saw the need for one thing — a Google+ Hangout.” With her tongue-in-cheek comment, Whelan was pointing out social media’s incredibly relevant role in communicating to the public what’s going on when crises hit, and in answering concerns and questions through it.
“We saw speeches and very disturbing images coming at us,” continued Whelan. “We heard leaders making impassioned speeches, and we ourselves had conversations about what we were seeing and how we needed to engage and inform; to give people the chance to engage and ask questions of us.
“We thought, clearly let’s have a Google+ Hangout. Three people joined us and Secretary John Kerry — Nicholas Kristof of the New York Times; Lara Setrakian, executive editor of Syria Deeply; and Andrew Beiter, a teacher affiliated with the Holocaust Memorial Museum who specialises in how we talk about these topics with our children.”
In the run up to the Hangout, news of the event trickled out and soon Google was calling, asking if it could advertise the session at the bottom of other Hangouts, then on YouTube ads. “Suddenly 15,000 people were watching the Secretary live — that’s by far the largest number we’d seen. We felt we’d tapped into something, we knew we’d hit success at what was a challenging time. We were engaging the public and could join with them to communicate a set of questions. People want to ask questions and get very direct answers, and we know it’s a success. We’ve talked to Google about how we can replicate that. We want to transform what we’re doing to make that the norm.”
Secretary of State John Kerry is, Whelan told Wired.co.uk later, “game for anything” when it comes to social media — and having the department leader enthused at the prospect of taking digital diplomacy forward is obviously key to its success.
“He wanted us to get on Instagram and the unselfie meme during the Philippines crisis was his idea — an assistant had seen it and he held a paper in front of him with the URL to donate funds to Typhoon Haiyan victims,” Whelan told Wired.co.uk at the Stockholm diplohack. “President Obama came in with a mandate that social media would be present and pronounced in all our departments.”
“[As] government changes and is more influenced away from old paper models and newspapers, suspenders and bow ties, and more into young innovators wanting to come in and change things,” Whelan continued, “I think it will change the way we work and help us get smarter.”
Use big data and crowdsourcing to detect nuclear proliferation, says DSB
FierceGovernmentIT: “A changing set of counter-nuclear proliferation problems requires a paradigm shift in monitoring that should include big data analytics and crowdsourcing, says a report from the Defense Science Board.
Much has changed since the Cold War when it comes to ensuring that nuclear weapons are subject to international controls, meaning that monitoring in support of treaties covering declared capabilities should be only one part of overall U.S. monitoring efforts, says the board in a January report (.pdf).
There are challenges related to covert operations, such as testing calibrated to fall below detection thresholds, and non-traditional technologies that present ambiguous threat signatures. Knowledge about how to make nuclear weapons is widespread and in the hands of actors who will give the United States or its allies limited or no access….
The report recommends using a slew of technologies, including radiation sensors, but also exploiting digital sources of information.
“Data gathered from the cyber domain establishes a rich and exploitable source for determining activities of individuals, groups and organizations needed to participate in either the procurement or development of a nuclear device,” it says.
Big data analytics could be used to take advantage of the proliferation of potential data sources including commercial satellite imaging, social media and other online sources.
The report notes that the proliferation of readily available commercial satellite imagery has created concerns about the introduction of more noise than genuine signal. “On balance, however, it is the judgment from the task force that more information from remote sensing systems, both commercial and dedicated national assets, is better than less information,” it says.
In fact, the ready availability of commercial imagery should be an impetus for government to develop the ability to find weak signals “even within the most cluttered and noisy environments.”
Crowdsourcing also holds potential, although the report again notes that nuclear proliferation analysis by non-governmental entities “will constrain the ability of the United States to keep its options open in dealing with potential violations.” The distinction between gathering information and making political judgments “will erode.”
An effort by Georgetown University students (reported in the Washington Post in 2011) to use open source data analyzing the network of tunnels used in China to hide its missile and nuclear arsenal provides a proof-of-concept on how crowdsourcing can be used to augment limited analytical capacity, the report says – despite debate on the students’ work, which concluded that China’s arsenal could be many times larger than conventionally accepted…
For more:
– download the DSB report, “Assessment of Nuclear Monitoring and Verification Technologies” (.pdf)
– read the WaPo article on the Georgetown University crowdsourcing effort”
Citizen roles in civic problem-solving and innovation
Satish Nambisan: “Can citizens be fruitfully engaged in solving civic problems? Recent initiatives in cities such as Boston (Citizens Connect), Chicago (Smart Chicago Collaborative), San Francisco (ImproveSF) and New York (NYC BigApps) indicate that citizens can be involved in not just identifying and reporting civic problems but in conceptualizing, designing and developing, and implementing solutions as well.
The availability of new technologies (e.g. social media) has radically lowered the cost of collaboration and the “distance” between government agencies and the citizens they serve. Further, involving citizens — who are often closest to and possess unique knowledge about the problems they face — makes a lot of sense given the increasing complexity of the problems that need to be addressed.
A recent research report that I wrote highlights four distinct roles that citizens can play in civic innovation and problem-solving.
As explorer, citizens can identify and report emerging and existing civic problems. For example, Boston’s Citizens Connect initiative enables citizens to use specially built smartphone apps to report minor and major civic problems (from potholes and graffiti to water/air pollution). Closer to home, both Wisconsin and Minnesota have engaged thousands of citizen volunteers in collecting data on the quality of water in their neighborhood streams, lakes and rivers (the data thus gathered are analyzed by the state pollution control agency). Citizens can also be engaged in data analysis. The N.Y.-based DataKind initiative involves citizen volunteers using their data analysis skills to mine public data in health, education, environment, etc., to identify important civic issues and problems.
As “ideator,” citizens can conceptualize novel solutions to well-defined problems in public services. For example, the federal government’s Challenge.gov initiative employs online contests and competitions to solicit innovative ideas from citizens to solve important civic problems. Such “crowdsourcing” initiatives have also been launched at the county, city and state levels (e.g. the Prize2theFuture competition in Birmingham, Ala.; ImproveSF in San Francisco).
As designer, citizens can design and/or develop implementable solutions to well-defined civic problems. For example, as part of initiatives such as NYC Big Apps and Apps for California, citizens have designed mobile apps to address specific issues such as public parking availability, public transport delays, etc. Similarly, the City Repair project in Portland, Ore., focuses on engaging citizens in co-designing and creatively transforming public places into sustainable community-oriented urban spaces.
As diffuser, citizens can play the role of change agent and directly support the widespread adoption of civic innovations and solutions. For example, in recent years, physicians interacting with peer physicians in dedicated online communities have assisted federal and state government agencies in diffusing health technology innovations such as electronic medical record (EMR) systems.
In the private sector, companies across industries have benefited much from engaging with their customers in innovation. Evidence so far suggests that the benefits from citizen engagement in civic problem-solving are equally tangible, valuable and varied. However, the challenges associated with organizing such citizen co-creation initiatives are also many and imply the need for government agencies to adopt an intentional, well-thought-out approach….”
Social Media: A Critical Introduction
New book: “Now more than ever, we need to understand social media – the good as well as the bad. We need critical knowledge that helps us to navigate the controversies and contradictions of this complex digital media landscape. Only then can we make informed judgements about what’s happening in our media world, and why.
Showing the reader how to ask the right kinds of questions about social media, Christian Fuchs takes us on a journey across social media, delving deep into case studies on Google, Facebook, Twitter, WikiLeaks and Wikipedia. The result lays bare the structures and power relations at the heart of our media landscape.
This book is the essential, critical guide for understanding social media and for all students of media studies and sociology. Readers will never look at social media the same way again.
Sample chapter:
Twitter and Democracy: A New Public Sphere?
Introduction: What is a Critical Introduction to Social Media?“
How Internet surveillance predicts disease outbreak before WHO
Kurzweil News: “Have you ever Googled for an online diagnosis before visiting a doctor? If so, you may have helped provide early warning of an infectious disease epidemic.
In a new study published in The Lancet Infectious Diseases, Internet-based surveillance has been found to detect infectious diseases such as dengue fever and influenza up to two weeks earlier than traditional surveillance methods, according to Queensland University of Technology (QUT) research fellow and senior author of the paper Wenbiao Hu.
Hu, based at the Institute for Health and Biomedical Innovation, said there was often a lag time of two weeks before traditional surveillance methods could detect an emerging infectious disease.
“This is because traditional surveillance relies on the patient recognizing the symptoms and seeking treatment before diagnosis, along with the time taken for health professionals to alert authorities through their health networks. In contrast, digital surveillance can provide real-time detection of epidemics.”
Hu said the study used search engine algorithms such as Google Trends and Google Insights. It found that detecting the 2005–06 avian influenza (“bird flu”) outbreak would have been possible one to two weeks earlier than official surveillance reports.
“In another example, a digital data collection network was found to be able to detect the SARS outbreak more than two months before the first publications by the World Health Organization (WHO),” Hu said.
[Chart: According to the CDC FluView report published Jan. 17, 2014, influenza activity in the United States remains high overall, with 3,745 laboratory-confirmed influenza-associated hospitalizations reported since October 1, 2013. Credit: CDC]
“Early detection means early warning and that can help reduce or contain an epidemic, as well as alert public health authorities to ensure risk management strategies such as the provision of adequate medication are implemented.”
Hu said the study found that social media including Twitter and Facebook and microblogs could also be effective in detecting disease outbreaks. “The next step would be to combine the approaches currently available such as social media, aggregator websites, and search engines, along with other factors such as climate and temperature, and develop a real-time infectious disease predictor.”
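The lead-time claim is easy to illustrate: if search-query volume tracks infections but appears earlier than confirmed cases, the two series should correlate best when the search series is shifted forward in time. The toy below checks this on synthetic weekly data; the series, lags and function names are invented for illustration and are not drawn from the Lancet study.

```python
# Toy lag-correlation check behind digital disease surveillance, assuming
# two aligned weekly series: search-query volume and confirmed case counts.
import numpy as np

def best_lead_weeks(search: np.ndarray, cases: np.ndarray, max_lag: int = 4) -> int:
    """Return the lag (in weeks) at which search volume best predicts cases."""
    corrs = []
    for k in range(max_lag + 1):
        if k == 0:
            corrs.append(np.corrcoef(search, cases)[0, 1])
        else:
            # search volume k weeks earlier, against case counts k weeks later
            corrs.append(np.corrcoef(search[:-k], cases[k:])[0, 1])
    return int(np.argmax(corrs))

# Synthetic example: case counts trail search interest by ~2 weeks
rng = np.random.default_rng(1)
weeks = np.linspace(0, 6 * np.pi, 104)            # two years of weekly data
search = np.sin(weeks) + 0.1 * rng.standard_normal(104)
cases = np.roll(search, 2) + 0.1 * rng.standard_normal(104)
print("estimated lead:", best_lead_weeks(search, cases), "weeks")  # expect: 2
```

A real-time predictor of the kind Hu describes would regress case counts on several such lagged signals (searches, tweets, climate variables) rather than on a single series.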
“The international nature of emerging infectious diseases combined with the globalization of travel and trade, have increased the interconnectedness of all countries and that means detecting, monitoring and controlling these diseases is a global concern.”
The other authors of the paper were Gabriel Milinovich (first author), Gail Williams and Archie Clements from the University of Queensland School of Population Health.
Supramap
Another powerful tool is Supramap, a web application that synthesizes large, diverse datasets so that researchers can better understand the spread of infectious diseases across hosts and geography by integrating genetic, evolutionary, geospatial, and temporal data. It is now open-source — create your own maps here.
Associate Professor Daniel Janies, Ph.D., an expert in computational genomics at the Wexner Medical Center at The Ohio State University (OSU), worked with software engineers at the Ohio Supercomputer Center (OSC) to allow researchers and public safety officials to develop other front-end applications that draw on the logic and computing resources of Supramap.
It was originally developed in 2007 to track the spread and evolution of pandemic (H1N1) and avian (H5N1) influenza.
“Using SUPRAMAP, we initially developed maps that illustrated the spread of drug-resistant influenza and host shifts in H1N1 and H5N1 influenza and in coronaviruses, such as SARS,” said Janies. “SUPRAMAP allows the user to track strains carrying key mutations in a geospatial browser such as Google Earth. Our software allows public health scientists to update and view maps on the evolution and spread of pathogens.”
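For a concrete sense of what “viewing strains in a geospatial browser” means: Google Earth reads KML, a simple XML format for placemarks. The snippet below writes a single placemark for a hypothetical influenza sample annotated with a resistance mutation; the fields and values are invented for illustration and are not Supramap’s actual schema or output.

```python
# Emit a minimal KML placemark for one (hypothetical) pathogen sample,
# viewable in Google Earth or any other KML-aware geospatial browser.
sample = {
    "strain": "A/Example/2009(H1N1)",
    "mutation": "H275Y",           # an oseltamivir-resistance marker
    "lat": 40.0, "lon": -83.0,     # collection site (KML lists lon first)
    "date": "2009-11-01",
}

kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{sample['strain']}</name>
    <description>Mutation: {sample['mutation']} ({sample['date']})</description>
    <Point><coordinates>{sample['lon']},{sample['lat']}</coordinates></Point>
  </Placemark>
</kml>"""

with open("strain.kml", "w") as f:
    f.write(kml)
```

Supramap’s contribution is the hard part upstream of this: computing the evolutionary tree that places each mutation, then projecting that tree onto geography and time.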
Grant funding through the U.S. Army Research Laboratory and Office supports this Innovation Group on Global Infectious Disease Research project. Support for the computational requirements of the project comes from the American Museum of Natural History (AMNH) and OSC. Ohio State’s Wexner Medical Center, Department of Biomedical Informatics and offices of Academic Affairs and Research provide additional support.”
See also
- Gabriel J Milinovich, Gail M Williams, Archie C A Clements, Wenbiao Hu, Internet-based surveillance systems for monitoring emerging infectious diseases, The Lancet Infectious Diseases, 2013, DOI: 10.1016/S1473-3099(13)70244-5
- Daniel A. Janies et al., The Supramap project: linking pathogen genomes with geography to fight emergent infectious diseases, Cladistics, 2012, DOI: 10.1111/j.1096-0031.2010.00314.x (open access)