Big Thinkers. Big Data. Big Opportunity: Announcing The LinkedIn Economic Graph Challenge


At the LinkedIn Official Blog: “LinkedIn’s vision is to create economic opportunity for every member of the global workforce. Facilitating economic empowerment is a big task that will require bold thinking by smart, passionate individuals and groups. Today, we’re kicking off an initiative that aims to encourage this type of big thinking: the LinkedIn Economic Graph Challenge.
The LinkedIn Economic Graph Challenge is an idea that emerged from the development of the Economic Graph, a digital mapping of the global economy comprising a profile for every professional, company, job opportunity, the skills required to obtain those opportunities, every higher education organization, and all the professionally relevant knowledge associated with each of these entities. With these elements in place, we can connect talent with opportunity at massive scale.
We are launching the LinkedIn Economic Graph Challenge to encourage researchers, academics, and data-driven thinkers to propose how they would use data from LinkedIn to solve some of the most challenging economic problems of our times. We invite anyone who is interested to submit their most innovative, ambitious ideas. In return, we will recognize the three strongest proposals for using data from LinkedIn to generate a positive impact on the global economy, and present the team and/or individual with a $25,000 (USD) research award and the resources to complete their proposed research, with the potential to have it published….
We look forward to your submissions! For more information, please visit the LinkedIn Economic Graph Challenge website….”
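
LinkedIn has not published the Economic Graph’s schema, so the toy sketch below only illustrates the general idea of linking members, skills, and job opportunities in one structure and matching talent to opportunity; every entity and the matching rule are invented for illustration.

```python
# Toy sketch of an "economic graph": members, skills, and jobs linked together.
# All entities and the matching rule are invented; this is not LinkedIn's data model.

member_skills = {
    "alice": {"python", "statistics"},
    "bob": {"graphic design"},
}

job_requirements = {
    "data analyst @ Acme": {"python", "statistics"},
    "illustrator @ Studio X": {"graphic design", "typography"},
}

def match_talent_to_opportunity(member_skills, job_requirements, threshold=0.5):
    """Return (member, job, coverage) tuples where the member covers at least
    `threshold` of the job's required skills, a crude stand-in for
    'connecting talent with opportunity'."""
    matches = []
    for member, skills in member_skills.items():
        for job, required in job_requirements.items():
            coverage = len(skills & required) / len(required)
            if coverage >= threshold:
                matches.append((member, job, round(coverage, 2)))
    return matches

print(match_talent_to_opportunity(member_skills, job_requirements))
# [('alice', 'data analyst @ Acme', 1.0), ('bob', 'illustrator @ Studio X', 0.5)]
```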

Atlas of Cities


New book edited by Paul Knox:  “More than half the world’s population lives in cities, and that proportion is expected to rise to three-quarters by 2050. Urbanization is a global phenomenon, but the way cities are developing, the experience of city life, and the prospects for the future of cities vary widely from region to region. The Atlas of Cities presents a unique taxonomy of cities that looks at different aspects of their physical, economic, social, and political structures; their interactions with each other and with their hinterlands; the challenges and opportunities they present; and where cities might be going in the future.
Each chapter explores a particular type of city—from the foundational cities of Greece and Rome and the networked cities of the Hanseatic League, through the nineteenth-century modernization of Paris and the industrialization of Manchester, to the green and “smart” cities of today. Expert contributors explore how the development of these cities reflects one or more of the common themes of urban development: the mobilizing function (transport, communication, and infrastructure); the generative function (innovation and technology); the decision-making capacity (governance, economics, and institutions); and the transformative capacity (society, lifestyle, and culture)….
Table of Contents and Introduction [PDF]

Data revolution: How the UN is visualizing the future


Kate Krukiel at Microsoft Government: “…world leaders met in New York for the 69th session of the United Nations (UN) General Assembly. Progress toward achieving the eight Millennium Development Goals (MDGs) by the December 2015 target date—just 454 days away—was top of mind. So was the post-2015 agenda, which will pick up where the MDGs leave off. Ahead of the meetings, the UN Millennium Campaign asked Microsoft to build real-time visualizations of the progress on each goal—based on data spanning 21 targets, 60 indicators, and about 190 member countries. With the data visualizations we created (see them at http://www.mdgleaders.org/), UN and global leaders can decide where to focus in the next 15 months and, more importantly, where change needs to happen post-2015. Their experience offers three lessons for governments:

1. Data has a shelf life.

Since the MDGs were launched in 2000, the UN has relied on annual reports to assess its progress. But in August, UN Secretary-General Ban Ki-moon called for a “data revolution for sustainable development”, which in effect makes real-time data visualization a requirement, not just for tracking the MDGs, but for everything from Ebola to climate change….

2. Governments need visualization tools.

Just as the UN is using data visualization to track its progress and plan for the future, you can use the technology to better understand the massive amounts of data you collect—on everything from water supply and food prices to child mortality and traffic jams. Data visualization technology makes it possible to pull insights from historical data, develop forecasts, and spot gaps in your data far more easily than you can with raw data. As they say, a picture is worth a thousand words. To get a better idea of what’s possible, check out the MDG visualizations Microsoft created for the UN using our Power BI tool.
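
The post points to Power BI; the same idea, plotting a historical indicator against its target and extending a simple trend, can be sketched tool-agnostically. In the Python sketch below, the indicator values, the 2015 target, and the linear-trend assumption are all hypothetical.

```python
# Minimal sketch: plot a hypothetical development indicator against its target
# and extend a simple linear trend to 2015. Values are illustrative only.
import numpy as np
import matplotlib.pyplot as plt

years = np.array([2000, 2003, 2006, 2009, 2012, 2014])
indicator = np.array([47.0, 44.5, 41.0, 38.2, 35.1, 33.4])  # made-up indicator values
target_2015 = 23.5  # hypothetical "halve by 2015" target

slope, intercept = np.polyfit(years, indicator, deg=1)  # simple linear trend
future = np.arange(2000, 2016)
trend = slope * future + intercept

plt.plot(years, indicator, "o-", label="observed (illustrative)")
plt.plot(future, trend, "--", label="linear trend")
plt.axhline(target_2015, color="red", label="2015 target")
plt.xlabel("year")
plt.ylabel("indicator value")
plt.title("Gap between projected trend and 2015 target")
plt.legend()
plt.show()
```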

3. The private sector can help.

The UN called on the private sector to assist in determining the exact MDG progress and inspire ongoing global efforts. …

Follow the UN’s lead and join the #datarevolution now, if you haven’t already. It’s an opportunity to work across silos and political boundaries to address the world’s most pressing problems. It takes citizens’ points of view into account through What People Want. And it extends to the private sector, where expertise in using technology to create a sustainable future already exists. I encourage all government leaders to engage. To follow where the UN takes its revolution, watch for updates on the Data Revolution Group website or follow them on Twitter @data_rev….”

Killer Apps in the Gigabit Age


New Pew report: “The age of gigabit connectivity is dawning and will advance in coming years. The only question is how quickly it might become widespread. A gigabit connection can deliver 1,000 megabits of information per second (Mbps). Globally, cloud service provider Akamai reports that the average global connection speed in the first quarter of 2014 was 3.9 Mbps, with South Korea reporting the highest average connection speed, 23.6 Mbps, and the US at 10.5 Mbps.1
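
To put those speeds in perspective, a back-of-the-envelope comparison helps: the sketch below computes the time to download a 1 GB file (an arbitrary example, ignoring protocol overhead) at the speeds quoted above.

```python
# Rough download-time comparison for a 1 GB (about 8,000 megabit) file,
# ignoring overhead, at the connection speeds quoted in the report.
FILE_MEGABITS = 1 * 1000 * 8  # 1 GB of data is roughly 8,000 megabits

speeds_mbps = {
    "gigabit connection": 1000.0,
    "US average (Q1 2014)": 10.5,
    "global average (Q1 2014)": 3.9,
}

for name, mbps in speeds_mbps.items():
    seconds = FILE_MEGABITS / mbps
    print(f"{name}: {seconds:,.0f} s (~{seconds / 60:.1f} min)")
# gigabit: 8 s; US average: ~762 s (~12.7 min); global average: ~2,051 s (~34.2 min)
```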
In some respects, gigabit connectivity is not a new development. The US scientific community has been using hyper-fast networks for several years, changing the pace of data sharing and enabling levels of collaboration in scientific disciplines that were unimaginable a generation ago.
Gigabit speeds for the “average Internet user” are just arriving in select areas of the world. In the US, Google ran a competition in 2010 for communities to pitch themselves for the construction of the first Google Fiber network running at 1 gigabit per second—Internet speeds 50-100 times faster than the majority of Americans now enjoy. Kansas City was chosen from among 1,100 entrants, and residents are now signing up for the service. The firm has announced plans to build a gigabit network in Austin, Texas, and perhaps 34 other communities. In response, AT&T has said it expects to begin building gigabit networks in up to 100 US cities.2 The cities of Chattanooga, Tennessee; Lafayette, Louisiana; and Bristol, Virginia, have super speedy networks, and pockets of gigabit connectivity are in use in parts of Las Vegas, Omaha, Santa Monica, and several Vermont communities.3 There are also other regional efforts: Falcon Broadband in Colorado Springs, Colorado; Brooklyn Fiber in New York; Monkey Brains in San Francisco; MINET Fiber in Oregon; Wicked Fiber in Lawrence, Kansas; and Sonic.net in California, among others.4 NewWave expects to launch gigabit connections in 2015 in Poplar Bluff, Missouri; Monroe, Rayville, Delhi, and Tallulah, Louisiana; and Suddenlink Communications has launched Operation GigaSpeed.5
In 2014, Google and Verizon were among the innovators announcing that they are testing the capabilities for currently installed fiber networks to carry data even more efficiently—at 10 gigabits per second—to businesses that handle large amounts of Internet traffic.
To explore the possibilities of the next leap in connectivity, we asked thousands of experts and Internet builders to share their thoughts about likely new Internet activities and applications that might emerge in the gigabit age. We call this a canvassing because it is not a representative, randomized survey. Its findings emerge from an “opt in” invitation to experts, many of whom play active roles in Internet evolution as technology builders, researchers, managers, policymakers, marketers, and analysts. We also invited comments from those who have made insightful predictions to our previous queries about the future of the Internet. (For more details, please see the section “About this Canvassing of Experts.”)…”

Trust: A History


New book by Geoffrey Hosking: “Today there is much talk of a ‘crisis of trust’; a crisis which is almost certainly genuine, but usually misunderstood. Trust: A History offers a new perspective on the ways in which trust and distrust have functioned in past societies, providing an empirical and historical basis against which the present crisis can be examined, and suggesting ways in which the concept of trust can be used as a tool to understand our own and other societies.
Geoffrey Hosking argues that social trust is mediated through symbolic systems, such as religion and money, and the institutions associated with them, such as churches and banks. Historically these institutions have nourished trust, but the resulting trust networks have tended to create quite tough boundaries around themselves, across which distrust is projected against outsiders. Hosking also shows how nation-states have been particularly good at absorbing symbolic systems and generating trust among large numbers of people, while also erecting distinct boundaries around themselves, despite an increasingly global economy. He asserts that in the modern world it has become common to entrust major resources to institutions we know little about, and suggests that we need to learn from historical experience and temper this with more traditional forms of trust, or become an ever more distrustful society, with potentially very destabilising consequences.”

Beyond the “Good Governance” mantra


Alan Hudson at Global Integrity: “…The invocation of “Good Governance” is something that happens a lot, including in ongoing discussions of whether and how governance – or governance-related issues – should be addressed in the post-2015 development framework. Rather than simply squirm uncomfortably every time someone invokes the “Good Governance” mantra, I thought it would be more constructive to explain – again (see here and here) – why I find the phrase problematic, and to outline why I think that “Open Governance” might be a more helpful formulation.
My primary discomfort with the “Good Governance” mantra is that it obscures and wishes away much of the complexity about governance. Few would disagree with the idea that: i) governance arrangements have distributional consequences; ii) governance arrangements play a role in shaping progress towards development outcomes; and iii) effective governance arrangements – forms of governance – will vary by context. But the “Good Governance” mantra, it seems to me, unhelpfully side-steps these key issues, avoiding, or at least postponing, a number of key questions: good from whose perspective, good for what, good for where?
Moreover, the notion of “Good Governance” risks giving the impression that “we” – which tends to mean people outside of the societies in question – know what good governance is, and further still that “we” know what needs to happen to make governance good. On both counts, the evidence is that this is seldom the case.
These are not new points. A number of commentators including Merilee Grindle, Matt Andrews, Mushtaq Khan and, most recently, Brian Levy, have pointed out the problems with a “Good Governance” agenda for many years. But, despite their best efforts, in policy discussions, including around post-2015, their warnings are too rarely heeded.
However, rather than drop the language of governance entirely, I do think that there is value in a more flexible, perhaps less normative – or differently normative, more focused on function than form – notion of governance. One that centers on transparency, participation and accountability. One that is about promoting the ability of communities in particular places to address the governance challenges relating to the specific priorities that they face, and which puts people in those places – rather than outsiders – center-stage in improving governance in ways that work for them. Indeed, the targets in the Open Working Group’s Goal 16 include important elements of this.
The “Good Governance” mantra may be hard to shake, but I remain hopeful that open governance – a more flexible framing which is about empowering people and governments with information so that they can work together to tackle problems they prioritize, in their particular places – may yet win the day. The sooner that happens, the better.”

The Web Observatory: A Middle Layer for Broad Data


New paper by Thanassis Tiropanis, Wendy Hall, James Hendler, and Christian de Larrinaga in Big Data: “The Web Observatory project1 is a global effort that is being led by the Web Science Trust,2 its network of WSTnet laboratories, and the wider Web Science community. The goal of this project is to create a global distributed infrastructure that will foster communities exchanging and using each other’s web-related datasets as well as sharing analytic applications for research and business web applications.3 It will provide the means to observe the digital planet, explore its processes, and understand their impact on different sectors of human activity.
The project is creating a network of separate web observatories, collections of datasets and tools for analyzing data about the Web and its use, each with its own use community. This allows researchers across the world to develop and share data, analytic approaches, publications related to their datasets, and tools (Fig. 1). The network of web observatories aims to bridge the gap that currently exists between big data analytics and the rapidly growing web of “broad data,”4 a gap that makes it difficult for a large number of people to engage with them….”
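
The paper describes observatories as shared collections of datasets, analytic tools, and related publications; one way to picture a single catalogue entry is sketched below. The field names and values are assumptions for illustration, not the Web Observatory project’s actual metadata schema.

```python
# Hypothetical sketch of a web-observatory catalogue entry: a dataset plus the
# tools and publications shared alongside it. Field names are illustrative,
# not the project's actual metadata schema.
from dataclasses import dataclass, field

@dataclass
class ObservatoryDataset:
    name: str
    description: str
    license: str
    access_endpoint: str                         # hypothetical access URL
    analytic_tools: list = field(default_factory=list)
    publications: list = field(default_factory=list)

entry = ObservatoryDataset(
    name="example-web-usage-2014",
    description="Toy example of a web-usage dataset listed by an observatory.",
    license="CC-BY-4.0",
    access_endpoint="https://observatory.example.org/datasets/example-web-usage-2014",
    analytic_tools=["link-graph-explorer (hypothetical)"],
    publications=["Placeholder citation for a study using this dataset"],
)
print(entry.name, "-", entry.license)
```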

Things Fall Apart: How Social Media Leads to a Less Stable World


Commentary by Curtis Hougland at Knowledge@Wharton: “James Foley. David Haines. Steven Sotloff. The list of people beheaded by followers of the Islamic State of Iraq and Syria (ISIS) keeps growing. The filming of these acts on video and distribution via social media platforms such as Twitter represent a geopolitical trend in which social media has become the new frontline for proxy wars across the globe. While social media does indeed advance connectivity and wealth among people, its proliferation at the same time results in a markedly less stable world.
That social media benefits mankind is irrefutable. I have been an evangelist for the power of new media for 20 years. However, technology in the form of globalized communication, transportation and supply chains conspires to make today’s world more complex. Events in any corner of the world now impact the rest of the globe quickly and sharply. Nations are being pulled apart along sectarian seams in Iraq, tribal divisions in Afghanistan, national interests in Ukraine and territorial fences in Gaza. These conflicts portend a quickening of global unrest, confirmed by Foreign Policy magazine’s map of civil protest. The ISIS videos are simply the exposed wire. I believe that over the next century, even great nations will Balkanize — break into smaller nations. One of the principal drivers of this Balkanization is social media.
Social media is a behavior, an expression of the innate human need to socialize and share experiences. Social media is not simply a set of technology channels and networks. Both the public and private sectors have underestimated the human imperative to behave socially. The evidence is now clear: more than 52% of the population lives in cities and approximately 2 billion people are active in social media globally. Some 96% of content emanates from individuals, not brands, media or governments — a volume that far exceeds participation in democratic elections.
Social media is not egalitarian, though. Despite the exponential growth of user-generated content, people prefer to congregate online around like-minded individuals. Rather than seek out new beliefs, people choose to reinforce their existing political opinions through their actions online. This is illustrated in Pew Internet’s 2014 study, “Mapping Twitter Topic Networks from Polarized Crowds to Community Clusters.” Individuals self-organize by affinity, and within affinity, by sensibility and personality. The ecosystem of social media is predicated on delivering more of what the user already likes. This, precisely, is the function of a Follow or Like. In this way, media coagulates rather than fragments online….”
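
The mechanism Hougland describes, a feed that delivers more of what the user already likes, can be sketched as a naive ranking rule: score each candidate post by its topical overlap with posts the user has liked before. The topics and scoring rule below are invented for illustration and are not any platform’s actual algorithm.

```python
# Naive sketch of like-driven ranking: posts that share topics with previously
# liked posts rise to the top, so the feed reinforces existing affinities.
# Topics and the scoring rule are illustrative, not any real platform's algorithm.

liked_topics = {"party A", "policy X"}   # topics of posts the user already liked

candidate_posts = [
    ("Rally recap", {"party A", "policy X"}),
    ("Opposing view on policy X", {"party B", "policy X"}),
    ("Cat video", {"cats"}),
]

def affinity_score(post_topics, liked_topics):
    """Fraction of the post's topics the user has already shown interest in."""
    return len(post_topics & liked_topics) / len(post_topics)

ranked = sorted(candidate_posts,
                key=lambda post: affinity_score(post[1], liked_topics),
                reverse=True)
for title, topics in ranked:
    print(f"{affinity_score(topics, liked_topics):.2f}  {title}")
# 1.00 Rally recap / 0.50 Opposing view on policy X / 0.00 Cat video
```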

Crowdsourcing and collaborative translation: mass phenomena or silent threat to translation studies?


Article by Alberto Fernandez Costales: “This article explores the emerging phenomenon of amateur translation and tries to shed some light on the implications this process may have both for Translation Studies as an academic discipline and for the translation industry itself. The paper comments on the main activities included within the concept of fan translation and approaches the terminological issues concerning the categorization of “non-professional translation”. In addition, the article focuses on the existing differences between collaborative translation and crowdsourcing, and posits new hypotheses regarding the development of these initiatives and the possible erosion of the boundaries which separate them. The question of who-does-what in the industry of translation is a major issue to be addressed in order to gain a clear view of the global state of translation today.”

Open Data as Universal Service. New perspectives in the Information Profession


Paper by L. Fernando Ramos Simón et al. in Procedia – Social and Behavioral Sciences: “The Internet provides a global information flow, which improves living conditions in poor countries as well as in rich ones. Owing to its abundance and quality, public information (meteorological, geographic, and transport information, as well as the content managed in libraries, archives, and museums) is an incentive for change, becoming invaluable and accessible to all citizens. However, it is clear that Open Data plays a significant role and provides a business service in the digital economy. Nevertheless, it remains unclear how this wealth of public data might be provided as a universal service, available to all citizens in matters of education, health, and culture, a role that has traditionally been assumed by libraries. In addition, information professionals will have to acquire new skills that enable them to assume a new role in information management: data management (Open Data) and content management (Open Content). Thus, this study analyzes the new roles to be assumed by information professionals, involving metadata, interoperability, access licenses, information search and retrieval tools, and applications for data queries…”