Chris Hughes in New Republic: “We’ve known for a long time that big companies can stalk our every digital move and customize our every Web interaction. Our movements are tracked by credit cards, Gmail, and tollbooths, and we haven’t seemed to care all that much.
That is, until this week’s news of government eavesdropping, with the help of these very same big companies—Verizon, Facebook, and Google, among others. For the first time, America is waking up to the realities of what all this information—known in the business as “big data”—enables governments and corporations to do….
We are suddenly wondering, Can the rise of enormous data systems that enable this surveillance be stopped or controlled? Is it possible to turn back the clock?
Technologists see the rise of big data as the inevitable march of history, impossible to prevent or alter. Viktor Mayer-Schönberger and Kenneth Cukier’s recent book Big Data is emblematic of this argument: They say that we must cope with the consequences of these changes, but they never really consider the role we play in creating and supporting these technologies themselves….
But these well-meaning technological advocates have forgotten that as a society, we determine our own future and set our own standards, norms, and policy. Talking about technological advancements as if they are pre-ordained science erases the role of human autonomy and decision-making in inventing our own future. Big data is not a Leviathan that must be coped with, but a technological trend that we have made possible and support through social and political policy.”
A Citizen’s Guide to Open Government, E-Government, and Government 2.0
Inside the MPA@UNC Blog: “Engaged citizens want clear, credible information from the government about how it’s carrying on its business. They don’t want to thumb through thousands of files or wait month after month or go through the rigors of filing claims through FOIA (Freedom of Information Act). They want government information, services, and communication to be forthcoming and swift. The Open Government, Government 2.0, and E-Governance movements fill the need to connect citizens with the government and each other, fostering a more open, collaborative, and efficient public sector through the use of new technology and public data.
Open Government is defined by the OECD (Organisation for Economic Cooperation and Development) as “the transparency of government actions, the accessibility of government services and information, and the responsiveness of government to new ideas, demands and needs.”
E-Government is defined by the World Bank as “the use by government agencies of information technologies that have the ability to transform relations with citizens, businesses, and other arms of government. These technologies can serve a variety of different ends: better delivery of government services to citizens, improved interactions with business and industry, citizen empowerment through access to information, or more efficient government management. The resulting benefits can be less corruption, increased transparency, greater convenience, revenue growth, and/or cost reductions.”
Government 2.0 is defined by Gartner Research as “the use of Web 2.0 technologies, both internally and externally, to increase collaboration and transparency and potentially transform the way government agencies relate to citizens and operate.”
Open Government and E-Government paved the way for Government 2.0, a collaborative movement whose mission is to improve government transparency and efficiency. How? Gov 2.0 has been called the next generation of government because it not only utilizes new technologies such as social media, cloud computing, and other apps, but also serves as a means to increase citizen participation….
We have compiled a list of organizations, blogs, guides, and tools to help citizens and public service leaders better understand the Open Government, E-Government, and Government 2.0 movements….”
Could CrowdOptic Be Used For Disaster Response?
Patrick Meier: “Crowds—rather than sole individuals—are increasingly bearing witness to disasters large and small. Instagram users, for example, snapped 800,000 #Sandy pictures during the hurricane last year. One way to make sense of this vast volume and velocity of multimedia content—Big Data—during disasters is with PhotoSynth, as blogged here. Another perhaps more sophisticated approach would be to use CrowdOptic, which automatically zeros in on the specific location that eyewitnesses are looking at when using their smartphones to take pictures or record videos….How does it work? CrowdOptic simply triangulates line-of-sight intersections using sensory metadata from pictures and videos taken using a smartphone. The basic approach is depicted in the figure below. An area of intersection is called a focal cluster. CrowdOptic automatically identifies the location of these clusters….Clearly, all this could have important applications for disaster response and information forensics.”
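The triangulation idea described above can be sketched in a few lines. This is not CrowdOptic’s actual implementation—the function and its signature are assumptions for illustration—but it shows the core geometry: given two observers’ positions and the compass bearings embedded in their photos’ sensor metadata, intersect the two lines of sight to locate what they are both looking at.

```python
import math

def intersect_bearings(p1, b1, p2, b2):
    """Intersect two lines of sight, given observer positions (x, y) in
    meters and compass bearings in degrees (0 = north, clockwise).
    Returns the intersection point, or None if the lines are parallel."""
    # Convert each bearing to a unit direction vector (east = x, north = y).
    d1 = (math.sin(math.radians(b1)), math.cos(math.radians(b1)))
    d2 = (math.sin(math.radians(b2)), math.cos(math.radians(b2)))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        return None  # parallel lines of sight never intersect
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    # Distance along observer 1's line of sight to the crossing point.
    t = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two observers 200 m apart, one aiming northeast, one northwest:
focus = intersect_bearings((0, 0), 45, (200, 0), 315)
print(focus)  # (100.0, 100.0) — their lines of sight cross here
```

With many observers, the pairwise intersection points would be clustered (e.g., by density) to find the “focal cluster” the post describes.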
The "audience" as participative, idea-generating, decision-making citizens: will they transform government?
‘Digital natives’ tap into the wisdom of the crowd
Statistics seem to bear this out. More than 84 per cent of this group, aged between 15 and 30, own a smartphone, compared with 63 per cent of the total population, according to the 2013 Consumer Connection System study of 11,000 adults in 50 countries from Carat, the media researchers. More than 80 per cent have a Facebook profile and nearly 70 per cent regularly visit blogs. The 2012 Millennial impact report, which looked at how this generation connects with non-profit organisations, found 67 per cent interacted with charities on Facebook and 70 per cent made online donations….
Ms Long says that while older generations are “search first”, millennials are “social first”. The tendency for constant online peer group consulting is most extreme at the younger end of the age group. “Millennials are the first generation that are purely about recommendations. They ‘crowd source’ everything. Even if they are walking down the street looking for a cup of coffee, they won’t go in somewhere if they see on a site that it has had a bad review,” she says.”
Is Crowdsourcing the Future for Crime Investigation?
Joe Harris in IFSEC Global: “Following April’s Boston Marathon bombings, many people around the world wanted to help in any way they could. Previously, there would have been little but financial assistance that they could have offered.
However, with the advent of high-quality cameras on smartphone devices, and services such as YouTube and Flickr, it was not long before the well-known online collectives such as Reddit and 4chan mobilized members of the public to ask them to review hundreds of thousands of photos and videos taken on the day to try and identify potential suspects….Here in the UK, we recently had the successful launch of Facewatch, and we have seen other regional attempts — such as Greater Manchester Police’s services and appeals app — to use the goodwill of members of the public to help trace, identify, or report suspected criminals and the crimes that they commit.
Does this herald a new era in transparency? Are we seeing the first steps towards a more transparent future where rapid information flow means that there really is nowhere to hide? Or are we instead falling into some Orwellian society construct where people are scared to speak out or think for themselves?”
Why Big Data Is Not Truth
Quentin Hardy in the New York Times: “Kate Crawford, a researcher at Microsoft Research, calls the problem “Big Data fundamentalism — the idea that with larger data sets, we get closer to objective truth.” Speaking at a conference in Berkeley, Calif., on Thursday, she identified what she calls “six myths of Big Data.”
Myth 1: Big Data is New
In 1997, there was a paper that discussed the difficulty of visualizing Big Data, and in 1999, a paper that discussed the problems of gaining insight from the numbers in Big Data. That indicates that two prominent issues today in Big Data, display and insight, had been around for a while….
Myth 2: Big Data Is Objective
Over 20 million Twitter messages about Hurricane Sandy were posted last year. … “These were very privileged urban stories.” And some people, privileged or otherwise, put information like their home addresses on Twitter in an effort to seek aid. That sensitive information is still out there, even though the threat is gone.
Myth 3: Big Data Doesn’t Discriminate
“Big Data is neither color blind nor gender blind,” Ms. Crawford said. “We can see how it is used in marketing to segment people.” …
Myth 4: Big Data Makes Cities Smart
…, moving cities toward digital initiatives like predictive policing, or creating systems where people are seen, whether they like it or not, can promote lots of tension between individuals and their governments.
Myth 5: Big Data Is Anonymous
A study published in Nature last March looked at 1.5 million phone records that had personally identifying information removed. It found that just four data points of when and where a call was made could identify 95 percent of individuals. …
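The surprising power of four points can be illustrated with a toy version of the study’s “unicity” measure. The function below is an assumption-laden sketch, not the paper’s method: each user’s trace is a set of (location, time) points, and we ask how often a handful of points drawn from one user’s trace matches that user alone.

```python
import random

def unicity(traces, k, trials=200, seed=7):
    """Fraction of sampled users uniquely identified by k random
    spatio-temporal points drawn from their own trace."""
    rng = random.Random(seed)
    users = list(traces)
    hits = 0
    for _ in range(trials):
        user = rng.choice(users)
        points = set(rng.sample(sorted(traces[user]), k))
        # How many users' traces contain all k sampled points?
        matches = [u for u in users if points <= traces[u]]
        hits += (len(matches) == 1)
    return hits / trials

# Toy traces: each point is (cell_tower_id, hour_of_day).
traces = {
    "a": {(1, 9), (2, 10), (3, 18), (4, 21)},
    "b": {(1, 9), (2, 10), (5, 18), (6, 21)},
    "c": {(7, 8), (2, 10), (3, 18), (8, 22)},
}
print(unicity(traces, 4))  # 1.0 — four points pin down every user here
```

Even though single points here are shared across users, combinations of a few points quickly become unique, which is the mechanism behind the 95 percent figure.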
Myth 6: You Can Opt Out
… given the ways that information can be obtained in these big systems, “what are the chances that your personal information will never be used?”
Before Big Data disappears into the background as another fact of life, Ms. Crawford said, “We need to think about how we will navigate these systems. Not just individually, but as a society.”
New Book: Digital Methods
New book by Richard Rogers, Director of the Govcom.org Foundation (Amsterdam) and the Digital Methods Initiative: “In Digital Methods, Richard Rogers proposes a methodological outlook for social and cultural scholarly research on the Web that seeks to move Internet research beyond the study of online culture. It is not a toolkit for Internet research, or operating instructions for a software package; it deals with broader questions. How can we study social media to learn something about society rather than about social media use? How can hyperlinks reveal not just the value of a Web site but the politics of association? Rogers proposes repurposing Web-native techniques for research into cultural change and societal conditions. We can learn to reapply such “methods of the medium” as crawling and crowd sourcing, PageRank and similar algorithms, tag clouds and other visualizations; we can learn how they handle hits, likes, tags, date stamps, and other Web-native objects. By “thinking along” with devices and the objects they handle, digital research methods can follow the evolving methods of the medium.
Rogers uses this new methodological outlook to examine the findings of inquiries into 9/11 search results, the recognition of climate change skeptics by climate-change-related Web sites, the events surrounding the Srebrenica massacre according to Dutch, Serbian, Bosnian, and Croatian Wikipedias, presidential candidates’ social media “friends,” and the censorship of the Iranian Web. With Digital Methods, Rogers introduces a new vision and method for Internet research and at the same time applies them to the Web’s objects of study, from tiny particles (hyperlinks) to large masses (social media).”
"A bite of me"
I spend hours every day surfing the internet. Meanwhile, companies like Facebook and Google have been using my online information (the websites I visit, the friends I have, the videos I watch) for their own benefit.
In 2012, advertising revenue in the United States was around $30 billion. That same year, I made exactly $0 from my own data. But what if I tracked everything myself? Could I at least make a couple bucks back?
I started looking at the terms of service for the websites I often use. In their privacy policies, I have found sentences like this: “You grant a worldwide, non-exclusive, royalty-free license to use, copy, reproduce, process, adapt, modify, publish, transmit, display and distribute such content in any and all media or distribution methods (now known or later developed).” I’ve basically agreed to give away a lifelong, international, sub-licensable right to use my personal data….
Check out myprivacy.info to see some of the visualizations I’ve made.
http://myprivacy.info
Life and Death of Tweets Not so Random After All
MIT Technology Review: “MIT assistant professor Tauhid Zaman and two other researchers (Emily Fox at the University of Washington and Eric Bradlow at the University of Pennsylvania’s Wharton School) have come up with a model that can predict how many times a tweet will ultimately be retweeted, minutes after it is posted. The model was created by collecting retweets on a slew of topics and looking at the time when the original tweet was posted and how fast it spread. That knowledge is then used to predict how popular a new tweet will be, based on how many times it is retweeted shortly after it is first posted.
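The core idea—early retweet counts predict eventual popularity—can be sketched very simply. The researchers’ actual model is a more sophisticated statistical one; the functions and the growth-factor approach below are illustrative assumptions only.

```python
def fit_growth_factor(history):
    """Estimate the average ratio of final retweets to retweets seen
    in the first few minutes, from historical (early, final) counts."""
    ratios = [final / early for early, final in history if early > 0]
    return sum(ratios) / len(ratios)

def predict_final(early_count, factor):
    """Project a new tweet's eventual retweet total from its early count."""
    return early_count * factor

# Historical tweets: (retweets in the first 10 minutes, final count).
history = [(5, 50), (12, 120), (3, 30), (8, 80)]
factor = fit_growth_factor(history)
print(predict_final(7, factor))  # 70.0 — projected final retweet count
```

In practice such a model would also condition on the time of day and topic, as the paragraph above suggests, rather than using a single global factor.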
The researchers’ findings were explained in a paper submitted to the Annals of Applied Statistics. In the paper, the authors note that “understanding retweet behavior could lead to a better understanding of how broader ideas spread in Twitter and in other social networks,” and such data may be helpful in a number of areas, like marketing and political campaigning.
You can check out the model here.”