Research Note by Soichiro Takagi: “Open data generally refers to a movement in which public organizations provide data in a machine-readable format to the public, so that anyone can reuse the data. Open data is becoming an important phenomenon in Japan. At this moment, the utilization of open data in Japan is emerging through collaborative efforts among small units of production, such as individuals. These collaborations have also been observed in the Open Source Software (OSS) movement, but collaboration in open data differs somewhat in its small-scale, distributed character. The aim of this research note is to introduce readers to the phenomenon of open data as an object of economic analysis by describing the movement and providing a preliminary analysis. This note discusses how open data is associated with mass collaboration from the viewpoint of organizational economics. It also provides the results of an empirical analysis of how the regional characteristics of municipalities affect local governments’ decisions to conduct open data initiatives.”
Cities Find Rewards in Cheap Technologies
Nanette Byrnes at MIT Technology Review: “Cities around the globe, whether rich or poor, are in the midst of a technology experiment. Urban planners are pulling data from inexpensive sensors mounted on traffic lights and park benches, and from mobile apps on citizens’ smartphones, to analyze how their cities really operate. They hope the data will reveal how to run their cities better and improve urban life. City leaders and technology experts say that managing the growing challenges of cities well and affordably will be close to impossible without smart technology.
Fifty-four percent of humanity lives in urban centers, and almost all of the world’s projected population growth over the next three decades will take place in cities, including many very poor cities. Because of their density and often strained infrastructure, cities have an outsize impact on the environment, consuming two-thirds of the globe’s energy and contributing 70 percent of its greenhouse-gas emissions. Urban water systems are leaky. Pollution levels are often extreme.
But cities also contribute most of the world’s economic production. Thirty percent of the world’s economy and most of its innovation are concentrated in just 100 cities. Can technology help manage rapid population expansion while also nurturing cities’ all-important role as an economic driver? That’s the big question at the heart of this Business Report.
Selling answers to that question has become a big business. IBM, Cisco, Hitachi, Siemens, and others have taken aim at this market, publicizing successful examples of cities that have used their technology to tackle the challenges of parking, traffic, transportation, weather, energy use, water management, and policing. Cities already spend a billion dollars a year on these systems, and that’s expected to grow to $12 billion a year or more in the next 10 years.
To justify this kind of outlay, urban technologists will have to move past the test projects that dominate discussions today. Instead, they’ll have to solve some of the profound and growing problems of urban living. Cities leaning in that direction are using various technologies to ease parking, measure traffic, and save water (see “Sensing Santander”), reduce rates of violent crime (see “Data-Toting Cops”), and prepare for ever more severe weather patterns.
There are lessons to be learned, too, from cities whose grandiose technological ideas have fallen short, like the eco-city initiative of Tianjin, China (see “China’s Future City”), which has few residents despite great technology and deep government support.
The streets are similarly largely empty in the experimental high-tech cities of Songdo, South Korea; Masdar City, Abu Dhabi; and Paredes, Portugal, which are being designed to have minimal impact on the environment and offer high-tech conveniences such as solar-powered air-conditioning and pneumatic waste disposal systems instead of garbage trucks. Meanwhile, established cities are taking a much more incremental, less ambitious, and perhaps more workable approach, often benefiting from relatively inexpensive and flexible digital technologies….”
Research Handbook On Transparency
New book edited by Padideh Ala’i and Robert G. Vaughn: ‘“Transparency” has multiple, contested meanings. This broad-ranging volume accepts that complexity and thoughtfully contrasts alternative views through conceptual pieces, country cases, and assessments of policies, such as freedom of information laws, whistleblower protections, financial disclosure, and participatory policymaking procedures.’
– Susan Rose-Ackerman, Yale University Law School, US
In the last two decades transparency has become a ubiquitous and stubbornly ambiguous term. Typically understood to promote rule of law, democratic participation, anti-corruption initiatives, human rights, and economic efficiency, transparency can also legitimate bureaucratic power, advance undemocratic forms of governance, and aid in global centralization of power. This path-breaking volume, comprising original contributions on a range of countries and environments, exposes the many faces of transparency by allowing readers to see the uncertainties, inconsistencies and surprises contained within the current conceptions and applications of the term….
The expert contributors identify the goals, purposes and ramifications of transparency while presenting both its advantages and shortcomings. Through this framework, they explore transparency from a number of international and comparative perspectives. Some chapters emphasize cultural and national aspects of the issue, with country-specific examples from China, Mexico, the US and the UK, while others focus on transparency within global organizations such as the World Bank and the WTO. A number of relevant legal considerations are also discussed, including freedom of information laws, financial disclosure of public officials and whistleblower protection…”
ShareHub: at the Heart of Seoul's Sharing Movement
Cat Johnson at Shareable: “In 2012, Seoul publicly announced its commitment to becoming a sharing city. It has since emerged as a leader of the global sharing movement and serves as a model for cities around the world. Supported by the municipal government and embedded in numerous parts of everyday life in Seoul, the Sharing City project has proven to be an inspiration to city leaders, entrepreneurs, and sharing enthusiasts around the world.
At the heart of Sharing City, Seoul is ShareHub, an online platform that connects users with sharing services, educates and informs the public about sharing initiatives, and serves as the online hub for the Sharing City, Seoul project. Now a year and a half into its existence, ShareHub, which is powered by Creative Commons Korea (CC Korea), has served 1.4 million visitors since launching, hosts more than 350 articles about sharing, and has played a key role in promoting sharing policies and projects. Shareable connected with Nanshil Kwon, manager of ShareHub, to find out more about the project, its role in promoting sharing culture, and the future of the sharing movement in Seoul….”
The Role of Open Data in Choosing a Neighborhood
PlaceILive Blog: “To what extent is it important to get familiar with our environment?
If we think about how the world around us has changed over the years, it is not surprising that, while walking to work, we might come across new little shops, restaurants, or gas stations we had never noticed before. Likewise, how many times have we wandered around for hours just to find a green space for a run, only to discover that the one we found was even more polluted than other urban areas?
Citizens are not always properly informed about the evolution of the places they live in. That is why it is crucial for people to be constantly up to date with accurate information about the neighborhood they have chosen or are going to choose.
London is clear evidence of how transparency in providing data is fundamental to success as a Smart City.
The GLA’s London Datastore, for instance, is a public platform of datasets offering up-to-date figures on the city’s main services, as well as on residents’ lifestyles and environmental risks. These data are then made more easily accessible to the community through the London Dashboard.
The importance of providing free information is also demonstrated by the integration of maps, which are an efficient means of geolocation. Consulting a map on which it is easy to find all the services you need nearby can make a real difference when searching for a location.
(source: Smart London Plan)
The Open Data Index, published by the Open Knowledge Foundation in 2013, is another useful tool for data retrieval: it ranks countries around the world, with scores based on the openness and availability of data attributes such as transport timetables and national statistics.
Here it is possible to check the UK Open Data Census and the US City Open Data Census.
As noted, making open data available and easily findable online has not only been a success for US cities but has also benefited app makers and civic hackers. According to Government Technology, Lauren Reid, a spokesperson at Code for America, said: “The more data we have, the better picture we have of the open data landscape.”
That, on the whole, is what Place I Live puts its biggest effort into: fostering a new awareness of the environment by providing free information, in order to support citizens who want to choose the best place to live.
The result is easy to see: the website’s homepage lets visitors type in an address of interest and displays an overview of neighborhood indicators along with a Life Quality Index calculated for every point on the map.
Searching for the nearest medical institutions, schools, or ATMs thus becomes immediate and clear, as does looking up general information about the community. Moreover, the reliability and accessibility of the data are constantly reviewed by a strong team of professionals with expertise in data analysis, mapping, IT architecture, and global markets.
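PlaceILive does not publish its formula, but a weighted index of this kind is easy to illustrate. A minimal sketch, with hypothetical categories, weights, and scores (none of them from the company):

```python
# Hypothetical sketch of a location quality index. The categories, weights,
# and scores below are illustrative assumptions, not PlaceILive's formula.

# Normalized 0-100 scores for one map point, e.g. derived from the distance
# to the nearest amenity of each kind.
scores = {"health": 72, "schools": 64, "transport": 88, "safety": 55, "green_space": 40}
weights = {"health": 0.25, "schools": 0.20, "transport": 0.20, "safety": 0.20, "green_space": 0.15}

life_quality_index = sum(scores[k] * weights[k] for k in scores)
print(f"Life Quality Index: {life_quality_index:.0f}/100")  # -> 65/100
```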
For the moment the company’s work is focused on London, Berlin, Chicago, San Francisco, and New York, with the longer-term goal of covering more than 200 cities.
Finally, San Francisco’s top score in the US City Open Data Census is proof of the city’s work in putting technological expertise at everyone’s disposal, along with its effort to meet users’ needs through a meticulous selection of datasets. The city is building on this success with a new investment, in partnership with the University of Chicago, in a data analytics dashboard on sustainability performance statistics named the Sustainable Systems Framework, which is expected to be released in beta by the end of the first quarter of 2015.
Another remarkable contribution to the spread of open data comes from the Bartlett Centre for Advanced Spatial Analysis (CASA) at University College London (UCL); Oliver O’Brien, a researcher in the UCL Department of Geography and a software developer at CASA, is one of the contributors to this cause.
Among his projects, a notable accomplishment is London’s CityDashboard, a control panel of real-time spatial data reports. The web page also lets users visualize the data on a simplified map and view the dashboards of other UK cities.
In addition, his Bike Share Map is a live global view of bicycle-sharing systems in over a hundred cities around the world; bike sharing has recently drawn greater public attention as a novel form of transportation, above all in Europe and China….
3D printed maps could help the blind navigate their city
Springwise: “Modern technology has turned many of the things we consume from physical objects into pixels on a screen. While this has benefited the majority of us, those with sight difficulties don’t get along well with visual stimuli or touchscreen devices. In the past, we’ve seen Yahoo! Japan develop Hands On Search, a project that lets blind kids carry out web searches with 3D printed results. Now the country’s governmental department GSI is creating software that will enable those with visual impairments to print out 3D versions of online maps.
The official mapping body for Japan — much like the US Geological Survey — GSI already has paper maps for the blind, using embossed surfaces to mark out roads. It’s now developing a program that is able to do the same thing for digital maps.
The software first differentiates the highways, railway lines and walkways from the rest of the landscape. It then creates a 3D relief model that uses different textures to distinguish the features so that anyone running their finger along them will be able to determine what it is. The program also takes into account contour lines, creating accurate topographical representations of a particular area….
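As a rough illustration of that pipeline, here is a minimal sketch (our assumption of how such a program might work, not GSI’s actual software), in which classified map cells get distinct raised textures on top of a contour-derived relief:

```python
# Illustrative sketch, not GSI's actual software: turning classified map
# cells into a tactile height map for 3D printing. Each feature class gets
# a distinct raised texture so a fingertip can tell roads from railways.

# Hypothetical texture rules: (base height in mm, ridge spacing in cells)
TEXTURES = {
    "highway": (1.2, 4),  # tall, widely spaced ridges
    "railway": (1.0, 2),  # closely spaced ridges, a cross-hatch feel
    "walkway": (0.6, 3),  # low dotted texture
}

def tactile_height_map(feature_grid, elevation_m, mm_per_metre=0.05):
    """feature_grid: 2D list of feature-class strings ("" = plain terrain).
    elevation_m: 2D list of terrain elevations in metres (contour data)."""
    lowest = min(min(row) for row in elevation_m)
    heights = []
    for r, row in enumerate(feature_grid):
        out_row = []
        for c, cls in enumerate(row):
            # Base relief: exaggerated contour height above the lowest point.
            h = (elevation_m[r][c] - lowest) * mm_per_metre
            if cls in TEXTURES:
                base, spacing = TEXTURES[cls]
                # Raise the feature and add a ridge every `spacing` cells.
                h += base + (0.3 if (r + c) % spacing == 0 else 0.0)
            out_row.append(round(h, 2))
        heights.append(out_row)
    return heights  # feed into an STL mesh generator for printing

print(tactile_height_map([["", "highway"], ["walkway", ""]],
                         [[10.0, 12.0], [11.0, 10.0]]))
```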
Website: www.gsi.go.jp”
Killer Apps in the Gigabit Age
New Pew report by Lee Rainie, Janna Anderson, and Jennifer Connolly: “The age of gigabit connectivity is dawning and will advance in coming years. The only question is how quickly it might become widespread. A gigabit connection can deliver 1,000 megabits of information per second (Mbps). Cloud service provider Akamai reports that the average global connection speed in the first quarter of 2014 was 3.9 Mbps, with South Korea reporting the highest average connection speed at 23.6 Mbps and the US at 10.5 Mbps.
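To put those speeds in perspective, a quick back-of-the-envelope calculation (ours, not the report’s; the 5 GB file size is an illustrative assumption):

```python
# How long does a 5 GB download take at the connection speeds quoted above?

FILE_SIZE_GB = 5  # illustrative assumption, roughly an HD movie

speeds_mbps = {
    "Global average, Q1 2014": 3.9,
    "US average, Q1 2014": 10.5,
    "South Korea average, Q1 2014": 23.6,
    "Gigabit connection": 1000.0,
}

file_size_megabits = FILE_SIZE_GB * 8 * 1000  # 1 GB = 8,000 megabits (decimal units)

for label, mbps in speeds_mbps.items():
    minutes = file_size_megabits / mbps / 60
    print(f"{label:>30}: {minutes:7.1f} minutes")
```

At the 2014 global average the download takes almost three hours; at gigabit speed it takes about 40 seconds.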
In some respects, gigabit connectivity is not a new development. The US scientific community has been using hyper-fast networks for several years, changing the pace of data sharing and enabling levels of collaboration in scientific disciplines that were unimaginable a generation ago.
Gigabit speeds for the “average Internet user” are just arriving in select areas of the world. In the US, Google ran a competition in 2010 for communities to pitch themselves for the construction of the first Google Fiber network running at 1 gigabit per second—Internet speeds 50-100 times faster than the majority of Americans now enjoy. Kansas City was chosen among 1,100 entrants, and residents are now signing up for the service. The firm has announced plans to build a gigabit network in Austin, Texas, and perhaps 34 other communities. In response, AT&T has said it expects to begin building gigabit networks in up to 100 US cities. The cities of Chattanooga, Tennessee; Lafayette, Louisiana; and Bristol, Virginia, have super speedy networks, and pockets of gigabit connectivity are in use in parts of Las Vegas, Omaha, Santa Monica, and several Vermont communities. There are also other regional efforts: Falcon Broadband in Colorado Springs, Colorado; Brooklyn Fiber in New York; Monkey Brains in San Francisco; MINET Fiber in Oregon; Wicked Fiber in Lawrence, Kansas; and Sonic.net in California, among others. NewWave expects to launch gigabit connections in 2015 in Poplar Bluff, Missouri, and in Monroe, Rayville, Delhi, and Tallulah, Louisiana, and Suddenlink Communications has launched Operation GigaSpeed.
In 2014, Google and Verizon were among the innovators announcing that they are testing the capabilities for currently installed fiber networks to carry data even more efficiently—at 10 gigabits per second—to businesses that handle large amounts of Internet traffic.
To explore the possibilities of the next leap in connectivity we asked thousands of experts and Internet builders to share their thoughts about likely new Internet activities and applications that might emerge in the gigabit age. We call this a canvassing because it is not a representative, randomized survey. Its findings emerge from an “opt in” invitation to experts, many of whom play active roles in Internet evolution as technology builders, researchers, managers, policymakers, marketers, and analysts. We also invited comments from those who have made insightful predictions to our previous queries about the future of the Internet. (For more details, please see the section “About this Canvassing of Experts.”)…”
CityBeat: Visualizing the Social Media Pulse of the City
“CityBeat is an academic research project that set out to develop an application that sources, monitors, and analyzes hyper-local information from multiple social media platforms, such as Instagram and Twitter, in real time.
The project was led by researchers at the Jacobs Institute at Cornell Tech, in collaboration with The New York World (Columbia Journalism School), Rutgers University, NYU, and Columbia University….
If you are interested in the technical details, we have published several papers detailing the process of building CityBeat. Enjoy your read!
Xia, C., Schwartz, R., Xie, K., Krebs, A., Langdon, A., Ting, J., and Naaman, M. CityBeat: Real-time Social Media Visualization of Hyper-local City Data. In Proceedings of WWW 2014, Seoul, Korea, April 2014. [PDF]
Xie, K., Xia, C., Grinberg, N., Schwartz, R., and Naaman, M. Robust Detection of Hyper-local Events from Geotagged Social Media Data. In Proceedings of the 13th Workshop on Multimedia Data Mining at KDD, 2013. [PDF]
Schwartz, R., Naaman, M., and Matni, Z. Making Sense of Cities Using Social Media: Requirements for Hyper-local Data Aggregation Tools. In Proceedings of WCMCW at ICWSM 2013, Boston, USA, July 2013. [PDF]
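For a flavor of the kind of technique those papers describe, here is a heavily simplified sketch (not CityBeat’s actual pipeline; the grid size and threshold are assumptions): geotagged posts are bucketed into a city grid, and a cell is flagged when its activity spikes well above its historical baseline.

```python
# Simplified sketch of hyper-local event detection from geotagged posts.
# Not CityBeat's actual code; cell size and threshold are assumptions.

from collections import defaultdict
from statistics import mean, stdev

def grid_cell(lat, lon, cell_deg=0.005):
    # Roughly 500 m grid cells; the size is an illustrative choice.
    return (round(lat / cell_deg), round(lon / cell_deg))

def detect_spikes(current_posts, history, z_threshold=3.0):
    """current_posts: [(lat, lon), ...] from the latest time window.
    history: {cell: [counts in past windows at the same time of day]}."""
    counts = defaultdict(int)
    for lat, lon in current_posts:
        counts[grid_cell(lat, lon)] += 1
    events = []
    for cell, n in counts.items():
        past = history.get(cell, [0])
        baseline = mean(past)
        spread = stdev(past) if len(past) > 1 else 1.0
        # Flag a candidate hyper-local event when the count is far above normal.
        if (n - baseline) / max(spread, 1.0) >= z_threshold:
            events.append((cell, n))
    return events
```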
Data Mining Reveals How Social Coding Succeeds (And Fails)
Emerging Technology from the arXiv: “Collaborative software development can be hugely successful or fail spectacularly. An analysis of the metadata associated with these projects is teasing apart the difference….
The process of developing software has undergone huge transformation in the last decade or so. One of the key changes has been the evolution of social coding websites, such as GitHub and BitBucket.
These allow anyone to start a collaborative software project that other developers can contribute to on a voluntary basis. Millions of people have used these sites to build software, sometimes with extraordinary success.
Of course, some projects are more successful than others. And that raises an interesting question: what are the differences between successful and unsuccessful projects on these sites?
Today, we get an answer from Yuya Yoshikawa at the Nara Institute of Science and Technology in Japan and a couple of pals at the NTT Laboratories, also in Japan. These guys have analysed the characteristics of over 300,000 collaborative software projects on GitHub to tease apart the factors that contribute to success. Their results provide the first insights into social coding success from this kind of data mining.
A social coding project begins when a group of developers outlines a project and begins work on it. These are the “internal developers,” who have the power to update the software in a process known as a “commit”. The number of commits is a measure of the activity on the project.
External developers can follow the progress of the project by “starring” it, a form of bookmarking on GitHub. The number of stars is a measure of the project’s popularity. These external developers can also request changes, such as additional features and so on, in a process known as a pull request.
Yoshikawa and co begin by downloading the data associated with over 300,000 projects from the GitHub website. This includes the number of internal developers, the number of stars a project receives over time and the number of pull requests it gets.
The team then analyse the effectiveness of the project by calculating factors such as the number of commits per internal team member, the popularity of the project over time, the number of pull requests that are fulfilled and so on.
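A minimal sketch of metrics of this kind (the field names are our assumptions, not the paper’s schema):

```python
# Illustrative per-project effectiveness metrics of the kind described
# above. Field names are assumptions, not the paper's actual schema.

from dataclasses import dataclass

@dataclass
class Project:
    internal_developers: int
    commits: int
    stars_by_month: list       # cumulative star counts, one entry per month
    pull_requests: int
    pull_requests_merged: int

def effectiveness_metrics(p: Project) -> dict:
    return {
        # Activity: average contribution per internal team member.
        "commits_per_member": p.commits / max(p.internal_developers, 1),
        # Popularity trend: average stars gained per month.
        "stars_per_month": (p.stars_by_month[-1] - p.stars_by_month[0])
                           / max(len(p.stars_by_month) - 1, 1),
        # Sociality: share of pull requests that get fulfilled.
        "pr_fulfillment_rate": p.pull_requests_merged / max(p.pull_requests, 1),
    }

print(effectiveness_metrics(Project(internal_developers=4, commits=320,
                                    stars_by_month=[10, 40, 95, 150],
                                    pull_requests=25, pull_requests_merged=18)))
```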
The results provide a fascinating insight into the nature of social coding. Yoshikawa and co say the number of internal developers on a project plays a significant role in its success. “Projects with larger numbers of internal members have higher activity, popularity and sociality,” they say….
Ref: arxiv.org/abs/1408.6012: Collaboration on Social Media: Analyzing Successful Projects on Social Coding”
Rethinking Democracy
Dani Rodrik at Project Syndicate: “By many measures, the world has never been more democratic. Virtually every government at least pays lip service to democracy and human rights. Though elections may not be free and fair, massive electoral manipulation is rare and the days when only males, whites, or the rich could vote are long gone. Freedom House’s global surveys show a steady increase from the 1970s in the share of countries that are “free” – a trend that the late Harvard political scientist Samuel Huntington dubbed the “third wave” of democratization….
A true democracy, one that combines majority rule with respect for minority rights, requires two sets of institutions. First, institutions of representation, such as political parties, parliaments, and electoral systems, are needed to elicit popular preferences and turn them into policy action. Second, democracy requires institutions of restraint, such as an independent judiciary and media, to uphold fundamental rights like freedom of speech and prevent governments from abusing their power. Representation without restraint – elections without the rule of law – is a recipe for the tyranny of the majority.
Democracy in this sense – what many call “liberal democracy” – flourished only after the emergence of the nation-state and the popular upheaval and mobilization produced by the Industrial Revolution. So it should come as no surprise that the crisis of liberal democracy that many of its oldest practitioners currently are experiencing is a reflection of the stress under which the nation-state finds itself….
In developing countries, it is more often the institutions of restraint that are failing. Governments that come to power through the ballot box often become corrupt and power-hungry. They replicate the practices of the elitist regimes they replaced, clamping down on the press and civil liberties and emasculating (or capturing) the judiciary. The result has been called “illiberal democracy” or “competitive authoritarianism.” Venezuela, Turkey, Egypt, and Thailand are some of the better-known recent examples.
When democracy fails to deliver economically or politically, perhaps it is to be expected that some people will look for authoritarian solutions. And, for many economists, delegating economic policy to technocratic bodies in order to insulate them from the “folly of the masses” almost always is the preferred approach.
…
Effective institutions of restraint do not emerge overnight; and it might seem like those in power would never want to create them. But if there is some likelihood that I will be voted out of office and that the opposition will take over, such institutions will protect me from others’ abuses tomorrow as much as they protect others from my abuses today. So strong prospects for sustained political competition are a key prerequisite for illiberal democracies to turn into liberal ones over time.
Optimists believe that new technologies and modes of governance will resolve all problems and send democracies centered on the nation-state the way of the horse-drawn carriage. Pessimists fear that today’s liberal democracies will be no match for the external challenges mounted by illiberal states like China and Russia, which are guided only by hardnosed realpolitik. Either way, if democracy is to have a future, it will need to be rethought.”