The Crypto-democracy and the Trustworthy


New Paper by Sebastien Gambs, Samuel Ranellucci, and Alain Tapp: “In the current architecture of the Internet, there is a strong asymmetry in terms of power between the entities that gather and process personal data (e.g., major Internet companies, telecom operators, cloud providers, …) and the individuals from which this personal data is issued. In particular, individuals have no choice but to blindly trust that these entities will respect their privacy and protect their personal data. In this position paper, we address this issue by proposing a utopian crypto-democracy model based on existing scientific achievements from the field of cryptography. More precisely, our main objective is to show that cryptographic primitives, including in particular secure multiparty computation, offer a practical solution to protect privacy while minimizing the trust assumptions. In the crypto-democracy envisioned, individuals do not have to trust a single physical entity with their personal data but rather their data is distributed among several institutions. Together these institutions form a virtual entity called the Trustworthy that is responsible for the storage of this data but which can also compute on it (provided first that all the institutions agree on this). Finally, we also propose a realistic proof-of-concept of the Trustworthy, in which the roles of institutions are played by universities. This proof-of-concept would have an important impact in demonstrating the possibilities offered by the crypto-democracy paradigm.”

Data + Design: A simple introduction to preparing and visualizing information


Open access book by Trina Chiasson, Dyanna Gregory, and many others. From the foreword: “Data are all around us and always have been. Everything throughout history has always had the potential to be quantified: theoretically, one could count every human who has ever lived, every heartbeat that has ever beaten, every step that was ever taken, every star that has ever shone, every word that has ever been uttered or written. Each of these collective things can be represented by a number. But only recently have we had the technology to efficiently surface these hidden numbers, leading to greater insight into our human condition.
But what does this mean, exactly? What are the cultural effects of having easy access to data? It means, for one thing, that we all need to be more data literate. It also means we have to be more design literate. As the old adage goes, statistics lie. Well, data visualizations lie, too. How can we learn to, first, effectively read data visualizations and, second, author them in a way that is ethical and clearly communicates the data’s inherent story?

At the intersection of art and algorithm, data visualization schematically abstracts information to bring about a deeper understanding of the data, wrapping it in an element of awe.

Maria Popova, Stories for the Information Age, Businessweek

My favorite description of data visualization comes from the prolific blogger, Maria Popova, who said that data visualization is “at the intersection of art and algorithm.” To learn about the history of data visualization is to become an armchair cartographer, explorer, and statistician….”
Early visual explorations of data focused mostly on small snippets of data gleaned to expand humanity’s understanding of the geographical world, mainly through maps. Starting with the first recognized world maps of the 13th century, scientists, mathematicians, philosophers, and sailors used math to visualize the invisible. Stars and suns were plotted, coastlines and shipping routes charted. Data visualization, in its native essence, drew the lines, points, and coordinates that gave form to the physical world and our place in it. It answered questions like “Where am I?”, “How do I get there?”, and “How far is it?”

How Local Governments Can Use Instameets to Promote Citizen Engagement


Chris Shattuck at Arc3Communications: “With more than 200 million active monthly users, Instagram reports that it shares more than 20 million photos every day with a combined average of 1.6 billion likes.
Instagram engagement is also more than 15 times that of Facebook with a user base that is predominately young, female and affluent, according to a recent report by L2, a think tank for digital innovation.
Therefore, it’s no wonder that 92 percent of prestige brands prominently incorporate Instagram into their social media strategies, according to the same report.
However, many local governments have been slow to adopt this rapidly maturing platform, even though many of their constituents are already actively using it.
So how can local governments utilize the power of Instagram to promote citizen engagement that is still organic and social?
Creating Instameets to promote local government events, parks, civic landmarks and institutional buildings may be part of that answer.
Once an Instagram meetup community is created for a city, any user can suggest a “meet-up” where members get together at a set place, date and time to snap away at a landmark, festival, or other event of note – preferably with a unique hashtag so that photos can be easily shared.
For example, where other marketing efforts to brand the City of Atlanta failed, #weloveatl has become a popular, organic hashtag that crosses cultural and economic boundaries for photographers looking to share their favorite things about Atlanta and benefit the Atlanta Community Food Bank.
And in May, users were able to combine that energy with a worldwide Instameet campaign to photograph Streets Alive Atlanta, a major initiative by the Atlanta Bicycle Coalition.
This organic collaboration provides a unique example for local governments seeking to promote their cities and use Instameets….”

EU: GLOW (Global Legislative Openness Week)


GLOW is a celebration of open, participatory legislative processes around the world as well as an opportunity for diverse stakeholders to collaborate with one another and make progress toward adopting and implementing open-government commitments. The week is being led by the Legislative Openness Working Group of the Open Government Partnership, which is co-anchored by the National Democratic Institute and the Congress of Chile. 
The campaign kicks off with the International Day of Democracy on September 15, and throughout the 10 days you are invited to share your ideas and experiences, kickstart new transparency tools and engage members of your community in dialogue. Learn more about the global open government movement at OGP, and stay tuned into GLOW events by following this site and #OpenParl2014.
Where will GLOW be happening?
GLOW will connect a range of legislative openness activities, organized independently by civil society organizations and parliaments around the world. You can follow the action on Twitter by using the hashtag #OpenParl2014. We hope the GLOW campaign will inspire you to design and organize your own event or activity during this week. If you’d like to share your event and collaborate with others during GLOW, please send us a note.
The week’s festivities will be anchored by two Working Group meetings of civil society and parliamentary members. Beginning on the International Day of Democracy, September 15, the Working Group will host a regional meeting on expanding civic engagement through parliamentary openness in Podgorica, Montenegro, hosted in partnership with the Parliament of Montenegro. The week will conclude with the Working Group’s annual meeting in Chile, on September 25 and 26, 2014, where members will discuss progress made in the year since the Working Group’s launch. This meeting coincides with the 11th Plenary Assembly of ParlAmericas, an independent network composed of the national legislatures of the 35 independent states of the Americas, which will also consider issues of legislative openness as part of its meeting….” (More)

Participatory Budgeting: Ten Actions to Engage Citizens via Social Media


New report by Victoria Gordon for the IBM Center for the Business of Government: “Participatory budgeting is an innovation in direct citizen participation in government decision-making that began 25 years ago in a town in Brazil. It has since spread to 1,000 other cities worldwide and is gaining interest in U.S. cities as well.
Dr. Gordon’s report offers an overview of the state of participatory budgeting, and the potential value of integrating the use of social media into the participatory process design. Her report details three case studies of U.S. communities that have undertaken participatory budgeting initiatives.  While these cases are relatively small in scope, they provide insights into what potential users need to consider if they wanted to develop their own initiatives.
Based on her research and observations, Dr. Gordon recommends ten actions community leaders can take to create the right participatory budgeting infrastructure to increase citizen participation and assess its impact.  A key element in her recommendations is to proactively incorporate social media strategies”

When Big Data Maps Your Safest, Shortest Walk Home


Sarah Laskow at NextCity: “Boston University and University of Pittsburgh researchers are trying to do the same thing that got the creators of the app SketchFactor into so much trouble over the summer. They’re trying to show people how to avoid dangerous spots on city streets while walking from one place to another.
“What we are interested in is finding paths that offer trade-offs between safety and distance,” Esther Galbrun, a postdoc at Boston University, recently said in New York at the 3rd International Workshop on Urban Computing, held in conjunction with KDD2014.
She was presenting “Safe Navigation in Urban Environments,” which describes a set of algorithms that would give a person walking through a city options for getting from one place to another — the shortest path, the safest path and a number of alternatives that balance the two factors. The paper takes existing algorithms, well defined in theory — nothing new or fancy, Galbrun says — and applies them to a problem that people face every day.
Imagine, she suggests, that a person is standing at the Philadelphia Museum of Art, and he wants to walk home, to his place on Wharton Street. (Galbrun and her colleagues looked at Philadelphia and Chicago because those cities have made their crime data openly available.) The walk is about three miles away, and one option would be to take the shortest path back. But maybe he’s worried about safety. Maybe he’s willing to take a little bit of a longer walk if it means he has to worry less about crime. What route should he take then?
Services like Google Maps have excelled at finding the shortest, most direct routes from Point A to Point B. But, increasingly, urban computing is looking to capture other aspects of moving about a place. “Fast is only one option,” says co-author Konstantinos Pelechrinis. “There are noble objectives beyond the surface path that you can put inside this navigation problem.” You might look for the path that will burn the most calories; a Yahoo! lab has considered how to send people along the most scenic route.
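The distance–safety trade-off described above can be sketched as a shortest-path search whose edge weight blends the two objectives. The sketch below is an illustration only, not the paper's actual algorithms: the toy street graph, node names, and blending parameter `alpha` are all invented for the example, and it uses plain Dijkstra over a linear combination of distance and risk.

```python
import heapq

def best_path(graph, start, goal, alpha):
    """Dijkstra over a blended cost: (1 - alpha) * distance + alpha * risk.

    alpha = 0.0 returns the shortest path; alpha = 1.0 the safest.
    graph format (illustrative): {node: [(neighbor, distance, risk), ...]}
    """
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, dist, risk in graph.get(node, []):
            if nxt not in visited:
                step = (1 - alpha) * dist + alpha * risk
                heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return float("inf"), []

# Toy street grid: a short but risky cut-through vs. a longer, safer detour.
graph = {
    "museum": [("alley", 1.0, 9.0), ("avenue", 2.0, 1.0)],
    "alley": [("home", 1.0, 9.0)],
    "avenue": [("home", 2.0, 1.0)],
}

_, shortest = best_path(graph, "museum", "home", alpha=0.0)  # minimize distance
_, safest = best_path(graph, "museum", "home", alpha=1.0)    # minimize risk
```

Intermediate values of `alpha` yield the in-between alternatives the paper offers: slightly longer walks with progressively lower risk.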
But working on routes that do more than give simple directions can have its pitfalls. The SketchFactor app relies both on crime data, when it’s available, and crowdsourced comments to reveal potential trouble spots to users. When it was released this summer, tech reporters and other critics immediately started talking about how it could easily become a conduit for racism. (“Sketchy” is, after all, a very subjective measure.)
So far, though, the problem with the SketchFactor app is less that it offers racially skewed perspectives than that the information it does offer is pretty useless — if entertaining. A pinpoint marked “very sketchy” is just as likely to flag an incident like a Jewish man eating pork products or hipster kids making too much noise as it is to flag a mugging.
Here, then, is a clear example of how Big Data has an advantage over Big Anecdata. The SafePath set-up measures risk more objectively and elegantly. It pulls in openly available crime data and considers simple data like time, location and types of crime. While a crime occurs at a discrete point, the researchers wanted to estimate the risk of a crime on every street, at every point. So they use a mathematical tool that smooths out the crime data over the space of the city and allows them to measure the relative risk of witnessing a crime on every street segment in a city….”
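A standard way to turn discrete crime points into a risk value defined on every street segment, as the paragraph above describes, is kernel smoothing: each crime contributes risk that decays with distance. The sketch below is a generic illustration of that technique, not the researchers' exact tool; the coordinates, bandwidth, and function names are invented for the example.

```python
import math

def risk_at(point, crimes, bandwidth=0.5):
    """Gaussian kernel smoothing: every crime point contributes risk that
    falls off with distance, so risk is estimated at every location,
    not only where a crime was recorded."""
    x, y = point
    total = 0.0
    for cx, cy in crimes:
        d2 = (x - cx) ** 2 + (y - cy) ** 2
        total += math.exp(-d2 / (2 * bandwidth ** 2))
    return total

def segment_risk(a, b, crimes, samples=5):
    """Average the smoothed risk at evenly spaced points along a segment."""
    pts = [(a[0] + (b[0] - a[0]) * t / (samples - 1),
            a[1] + (b[1] - a[1]) * t / (samples - 1))
           for t in range(samples)]
    return sum(risk_at(p, crimes) for p in pts) / samples

# Two reported crimes cluster near the origin; one is isolated at (5, 5).
crimes = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0)]
near = segment_risk((0.0, 0.0), (1.0, 0.0), crimes)  # segment by the cluster
far = segment_risk((4.0, 4.0), (6.0, 6.0), crimes)   # segment past one crime
```

The per-segment risk values produced this way are exactly what a routing algorithm like the one above would consume as edge weights.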

What Is Big Data?


datascience@berkeley Blog: ““Big Data.” It seems like the phrase is everywhere. The term was added to the Oxford English Dictionary in 2013, appeared in Merriam-Webster’s Collegiate Dictionary by 2014, and Gartner’s just-released 2014 Hype Cycle shows “Big Data” passing the “Peak of Inflated Expectations” and on its way down into the “Trough of Disillusionment.” Big Data is all the rage. But what does it actually mean?
A commonly repeated definition cites the three Vs: volume, velocity, and variety. But others argue that it’s not the size of data that counts, but the tools being used, or the insights that can be drawn from a dataset.
To settle the question once and for all, we asked 40+ thought leaders in publishing, fashion, food, automobiles, medicine, marketing and every industry in between how exactly they would define the phrase “Big Data.” Their answers might surprise you! Take a look below to find out what big data is:

  1. John Akred, Founder and CTO, Silicon Valley Data Science
  2. Philip Ashlock, Chief Architect of Data.gov
  3. Jon Bruner, Editor-at-Large, O’Reilly Media
  4. Reid Bryant, Data Scientist, Brooks Bell
  5. Mike Cavaretta, Data Scientist and Manager, Ford Motor Company
  6. Drew Conway, Head of Data, Project Florida
  7. Rohan Deuskar, CEO and Co-Founder, Stylitics
  8. Amy Escobar, Data Scientist, 2U
  9. Josh Ferguson, Chief Technology Officer, Mode Analytics
  10. John Foreman, Chief Data Scientist, MailChimp

FULL LIST at datascience@berkeley Blog”

Data Mining Reveals How Social Coding Succeeds (And Fails)


Emerging Technology from the arXiv: “Collaborative software development can be hugely successful or fail spectacularly. An analysis of the metadata associated with these projects is teasing apart the difference….
The process of developing software has undergone huge transformation in the last decade or so. One of the key changes has been the evolution of social coding websites, such as GitHub and BitBucket.
These allow anyone to start a collaborative software project that other developers can contribute to on a voluntary basis. Millions of people have used these sites to build software, sometimes with extraordinary success.
Of course, some projects are more successful than others. And that raises an interesting question: what are the differences between successful and unsuccessful projects on these sites?
Today, we get an answer from Yuya Yoshikawa at the Nara Institute of Science and Technology in Japan and a couple of pals at the NTT Laboratories, also in Japan.  These guys have analysed the characteristics of over 300,000 collaborative software projects on GitHub to tease apart the factors that contribute to success. Their results provide the first insights into social coding success from this kind of data mining.
A social coding project begins when a group of developers outline a project and begin work on it. These are the “internal developers” and have the power to update the software in a process known as a “commit”. The number of commits is a measure of the activity on the project.
External developers can follow the progress of the project by “starring” it, a form of bookmarking on GitHub. The number of stars is a measure of the project’s popularity. These external developers can also request changes, such as additional features and so on, in a process known as a pull request.
Yoshikawa and co begin by downloading the data associated with over 300,000 projects from the GitHub website. This includes the number of internal developers, the number of stars a project receives over time and the number of pull requests it gets.
The team then analyse the effectiveness of the project by calculating factors such as the number of commits per internal team member, the popularity of the project over time, the number of pull requests that are fulfilled and so on.
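The effectiveness measures described above can be illustrated with a small computation. The project records and field names below are made up for the example, not GitHub's API or the paper's dataset; they simply show the kind of per-project metrics being calculated.

```python
# Hypothetical project records; in the study these would come from
# GitHub metadata for 300,000+ repositories.
projects = [
    {"name": "proj-a", "internal_devs": 4, "commits": 200,
     "stars": 120, "pull_requests": 30, "merged_prs": 24},
    {"name": "proj-b", "internal_devs": 1, "commits": 15,
     "stars": 3, "pull_requests": 2, "merged_prs": 0},
]

def metrics(p):
    return {
        # activity: commits per internal team member
        "activity": p["commits"] / p["internal_devs"],
        # popularity: stars as a bookmarking proxy
        "popularity": p["stars"],
        # fraction of pull requests that were fulfilled
        "pr_fulfillment": (p["merged_prs"] / p["pull_requests"]
                           if p["pull_requests"] else 0.0),
    }

results = {p["name"]: metrics(p) for p in projects}
```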
The results provide a fascinating insight into the nature of social coding. Yoshikawa and co say the number of internal developers on a project plays a significant role in its success. “Projects with larger numbers of internal members have higher activity, popularity and sociality,” they say….
Ref: arxiv.org/abs/1408.6012 : Collaboration on Social Media: Analyzing Successful Projects on Social Coding”

Roles, Trust, and Reputation in Social Media Knowledge Markets


New book edited by Elisa Bertino and Sorin Adam Matei: “This title discusses the emerging trends in defining, measuring, and operationalizing reputation as a new and essential component of the knowledge that is generated and consumed online. The book also proposes a future research agenda related to these issues—with the ultimate goal of shaping the next generation of theoretical and analytic strategies needed for understanding how knowledge markets are influenced by social interactions and reputations built around functional roles.
Roles, Trust, and Reputation in Social Media Knowledge Markets exposes issues that have not been satisfactorily dealt with in the current literature. In a broader sense, the volume aims to change the way in which knowledge generation in social media spaces is understood and utilized. The tools, theories, and methodologies proposed here offer concrete avenues for developing the next generation of research strategies and applications that will help: tomorrow’s information consumers make smarter choices, developers to create new tools, and researchers to launch new research programs….

  • Proposes new methods for understanding how opinion leaders and influential authors emerge on social media knowledge markets
  • Advances new approaches to theory-based understanding of how social media reputations emerge and shape content and public opinion
  • Highlights the most important understudied or promising areas of research regarding reputation and authorship on social media
  • Reviews existing accomplishments in the field of reputation research on social media knowledge markets
  • Features a multidisciplinary team of authors, covering several disciplines
  • Includes both senior, established authors and emerging, innovative voices”

The Decalogue of Policy Making 2.0: Results from Analysis of Case Studies on the Impact of ICT for Governance and Policy Modelling


Paper by Sotirios Koussouris, Fenareti Lampathaki, Gianluca Misuraca, Panagiotis Kokkinakos, and Dimitrios Askounis: “Despite the availability of a myriad of Information and Communication Technologies (ICT) based tools and methodologies for supporting governance and the formulation of policies, including modelling expected impacts, these have proved to be unable to cope with the dire challenges of the contemporary society. In this chapter we present the results of the analysis of a set of promising cases researched in order to understand the possible impact of what we define ‘Policy Making 2.0’, which refers to ‘a set of methodologies and technological solutions aimed at enabling better, timely and participative policy-making’. Based on the analysis of these cases we suggest a bouquet of (mostly ICT-related) practical and research recommendations that are relevant to researchers, practitioners and policy makers in order to guide the introduction and implementation of Policy Making 2.0 initiatives. We argue that this ‘decalogue’ of Policy Making 2.0 could be an operational checklist for future research and policy to further explore the potential of ICT tools for governance and policy modelling, so to make next generation policy making more ‘intelligent’ and hopefully able to solve or anticipate the societal challenges we are (and will be) confronted today and in the future.”