Crowdsourcing voices to study Parkinson’s disease


TEDMED: “Mathematician Max Little is launching a project that aims to literally give Parkinson’s disease (PD) patients a voice in their own diagnosis and help them monitor their disease progression.
Patients Voice Analysis (PVA) is an open science project that uses phone-based voice recordings and self-reported symptoms, along with software Little designed, to track disease progression. Little, a TEDMED 2013 speaker and TED Fellow, is partnering with the online community PatientsLikeMe, co-founded by TEDMED 2009 speaker James Heywood, and Sage Bionetworks, a non-profit research organization, to conduct the research.
The new project is an extension of Little’s Parkinson’s Voice Initiative, which used speech analysis algorithms to diagnose Parkinson’s from voice recordings with the help of 17,000 volunteers. This time, he seeks not only to detect markers of PD, but also to add information reported by patients using PatientsLikeMe’s Parkinson’s Disease Rating Scale (PDRS), a tool that documents patients’ answers to questions that measure treatment effectiveness and disease progression….
As openly shared information, the collected data has potential to help vast numbers of individuals by tapping into collective ingenuity. Little has long argued that for science to progress, researchers need to democratize research and move past jostling for credit. Sage Bionetworks has designed a platform called Synapse to allow data sharing with collaborative version control, an effort led by open data advocate John Wilbanks.
“If you can’t share your data, how can you reproduce your science? One of the big problems we’re facing with this kind of medical research is the data is not open and getting access to it is a nightmare,” Little says.
With the PVA project, “Basically anyone can log on, download the anonymized data and play around with data mining techniques. We don’t really care what people are able to come up with. We just want the most accurate prediction we can get.
“In research, you’re almost always constrained by what you think is the best way to do things. Unless you open it to the community at large, you’ll never know,” he says.”
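For readers who do want to “play around with data mining techniques,” the sketch below shows the kind of baseline anyone could try once the anonymized data is downloaded: load a table of per-recording voice features and cross-validate a simple model that predicts the self-reported PDRS score. The file name and column names here are placeholders, not the actual PVA/Synapse export format.

```python
# Minimal sketch of a baseline analysis on an anonymized voice dataset.
# File name and column names are hypothetical; the real PVA export differs.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("pva_voice_features.csv")  # one row per recording (assumed file)
features = ["jitter", "shimmer", "hnr", "pitch_mean", "pitch_sd"]  # assumed columns
X, y = df[features], df["pdrs_score"]       # predict the self-reported PDRS score

model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
print("Cross-validated MAE per fold:", -scores)
```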

Selfiecity


New project aimed at investigating the style of self-portraits (selfies) in five cities across the world: “Selfiecity investigates selfies using a mix of theoretic, artistic and quantitative methods:

  • We present our findings about the demographics of people taking selfies, their poses and expressions.
  • Rich media visualizations (imageplots) assemble thousands of photos to reveal interesting patterns (see the sketch after this list).
  • The interactive selfiexploratory allows you to navigate the whole set of 3200 photos.
  • Finally, theoretical essays discuss selfies in the history of photography, the functions of images in social media, and methods and dataset.”
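As a rough illustration of what an “imageplot” is (this is not the Selfiecity team’s code), the sketch below tiles a local folder of photos into one large grid image using Pillow; the real imageplots also sort the photos by measured attributes such as pose or expression. The folder name and grid size are assumptions.

```python
# Imageplot-style montage: tile many photos into a single large image.
# Illustrative sketch only; folder name and tile size are assumptions.
from pathlib import Path
from PIL import Image

THUMB = 32    # pixel size of each tile
COLS = 60     # tiles per row
photos = sorted(Path("selfies").glob("*.jpg"))

rows = (len(photos) + COLS - 1) // COLS
canvas = Image.new("RGB", (COLS * THUMB, max(rows, 1) * THUMB), "white")

for i, path in enumerate(photos):
    tile = Image.open(path).resize((THUMB, THUMB))
    canvas.paste(tile, ((i % COLS) * THUMB, (i // COLS) * THUMB))

canvas.save("imageplot.png")
```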

A city’s intelligence: its citizens


Michel Dumais: “Tick tock, we said. The hundredth is almost here. And with the hundred and first, new challenges. A smart city, you say? I suspect the usual nod to the three-letter departments and to an archaic administrative logic. What if, instead, we called on the intelligence of those who know their city best: its citizens?

To solve a problem (and sometimes even a “non-problem”), administrations look to those mammoth software systems that, on paper, are supposed to do everything, that swallow hundreds of millions of dollars, and that end up making media headlines because even more money has to be poured into them. Systems that also let IT departments tighten their control over an administration.

In short, when people talk about a smart city, many see a jackpot. Yet what was “acceptable” yesterday no longer is today. And building a smart city is, above all, not a technological challenge, far from it.

THE WIRELESS QUESTION
Years ago, simple logic would have had the City stop thinking “big telcos” and quickly form an alliance with the community organization Île sans fil, thereby encouraging the rapid deployment of wireless technology across the island.

Such an alliance, a model of its kind, does exist.

But not in Montreal. Rather in Quebec City, where the City and the community organization Zap Québec work hand in hand for the greater benefit of Quebec City residents and tourists. And in Montreal? We talk, and we keep talking.

So, a smart city. It is a city that knows how to use technology to harness its infrastructure and put it at the service of its citizens, while saving money and promoting sustainable development.

It is also a city that knows how to listen to and mobilize its citizens, activists and entrepreneurs, while giving them tools (such as usable data) so that they too can create services for their organizations and for all of the city’s residents. Not to mention that all these tools make decision-making easier for borough mayors and the executive committee.

In short, a smart city, as Professor Rudolf Giffinger defines it, is this: “a smart economy, smart mobility, a smart environment, smart people, smart living and, finally, smart governance.”

I invite readers to watch LifeApps, an extraordinary TV series available on the Al Jazeera network’s website. Its subject: younger and not-so-young activists and tinkerers who get involved and create services for their community.”

Are bots taking over Wikipedia?


Kurzweil News: “As crowdsourced Wikipedia has grown too large — with more than 30 million articles in 287 languages — to be entirely edited and managed by volunteers, 12 Wikipedia bots have emerged to pick up the slack.

The bots use Wikidata — a free knowledge base that can be read and edited by both humans and bots — to exchange information between entries and between the 287 languages.

Which raises an interesting question: what portion of Wikipedia edits are generated by humans versus bots?

To find out (and keep track of other bot activity), Thomas Steiner of Google Germany has created an open-source application (and API): Wikipedia and Wikidata Realtime Edit Stats, described in an arXiv paper.
The percentages of bot vs. human edits shown in the application are constantly changing. A KurzweilAI snapshot on Feb. 20 at 5:19 AM EST showed an astonishing 42% of Wikipedia edits being made by bots. (The application lists the 12 bots.)


Anonymous vs. logged-in humans (credit: Thomas Steiner)
The percentages also vary by language. Only 5% of English edits were by bots, but for Serbian pages, where few Wikipedians apparently participate, 96% of edits were by bots.

The application also tracks what percentage of edits are by anonymous users. Globally, it was 25 percent in our snapshot and a surprising 34 percent for English — raising interesting questions about corporate and other interests covertly manipulating Wikipedia information.
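The underlying measurement is easy to approximate today against Wikimedia’s public EventStreams feed (a newer interface than the one Steiner’s application used). The sketch below counts bot versus human edits from the live recent-changes stream; treat it as an illustration of the statistic, not a reimplementation of the original tool.

```python
# Count bot vs. human edits from Wikimedia's public recent-changes stream.
# Illustrative sketch; Steiner's original application predates this endpoint.
import json
from collections import Counter

import requests

URL = "https://stream.wikimedia.org/v2/stream/recentchange"
counts = Counter()

with requests.get(URL, stream=True, headers={"Accept": "text/event-stream"}) as resp:
    for raw in resp.iter_lines():
        if not raw.startswith(b"data:"):
            continue                      # skip SSE comments, event and id lines
        payload = raw[len(b"data:"):].strip()
        if not payload:
            continue
        change = json.loads(payload)
        counts["bot" if change.get("bot") else "human"] += 1
        total = sum(counts.values())
        if total % 500 == 0:              # rolling snapshot of the percentages
            print({k: round(100 * v / total, 1) for k, v in counts.items()})
        if total >= 5000:                 # stop after a small sample
            break
```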

Can Twitter Predict Major Events Such As Mass Protests?


Emerging Technology From the arXiv: “The idea that social media sites such as Twitter can predict the future has a controversial history. In the last few years, various groups have claimed to be able to predict everything from the outcome of elections to the box office takings for new movies.
It’s fair to say that these claims have generated their fair share of criticism. So it’s interesting to see a new claim come to light.
Today, Nathan Kallus at the Massachusetts Institute of Technology in Cambridge says he has developed a way to predict crowd behaviour using statements made on Twitter. In particular, he has analysed the tweets associated with the 2013 coup d’état in Egypt and says that the civil unrest associated with this event was clearly predictable days in advance.
It’s not hard to imagine how the future behaviour of crowds might be embedded in the Twitter stream. People often signal their intent to meet in advance and even coordinate their behaviour using social media. So this social media activity is a leading indicator of future crowd behaviour.
That makes it seem clear that predicting future crowd behaviour is simply a matter of picking this leading indicator out of the noise.
Kallus says this is possible by mining tweets for any mention of future events and then analysing trends associated with them. “The gathering of crowds into a single action can often be seen through trends appearing in this data far in advance,” he says.
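As a toy illustration of that idea (not Kallus’s method, and not Recorded Future’s pipeline), the sketch below scans a small tweet corpus for mentions of future dates, counts the mentions per posting day, and flags target dates whose mention counts keep growing. The sample tweets and the date-matching regex are invented for the example.

```python
# Toy "leading indicator" scan: flag future dates whose mentions keep growing.
# Sample tweets and the date regex are invented for illustration.
import re
from collections import defaultdict
from datetime import date

tweets = [  # (posting date, text) pairs; hypothetical data
    (date(2013, 6, 25), "Everyone to Tahrir on June 30 #protest"),
    (date(2013, 6, 26), "June 30 we march"),
    (date(2013, 6, 27), "See you on June 30, bring friends"),
]

DATE_RE = re.compile(r"\b(June|July) (\d{1,2})\b")   # deliberately simplified
MONTHS = {"June": 6, "July": 7}

mentions = defaultdict(lambda: defaultdict(int))     # target date -> posting day -> count
for posted, text in tweets:
    for month, day in DATE_RE.findall(text):
        target = date(2013, MONTHS[month], int(day))
        if target > posted:                          # only mentions of future events
            mentions[target][posted] += 1

for target, per_day in mentions.items():
    series = [per_day[d] for d in sorted(per_day)]
    growing = all(b >= a for a, b in zip(series, series[1:]))
    if len(series) >= 2 and growing:                 # crude trend check
        print(f"{target}: daily mentions {series} -> possible gathering")
```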
It turns out that exactly this kind of analysis is available from a company called Recorded Future based in Cambridge, which scans 300,000 different web sources in seven different languages from all over the world. It then extracts mentions of future events for later analysis….
The bigger question is whether it’s possible to pick out this evidence in advance. In other words, is it possible to make predictions before the events actually occur?
That’s not so clear but there are good reasons to be cautious. First of all, while it’s possible to correlate Twitter activity to real protests, it’s also necessary to rule out false positives. There may be significant Twitter trends that do not lead to significant protests in the streets. Kallus does not adequately address the question of how to tell these things apart.
Then there is the question of whether tweets are trustworthy. It’s not hard to imagine that when it comes to issues of great national consequence, propaganda, rumor and irony may play a significant role. So how to deal with this?
There is also the question of demographics and whether tweets truly represent the intentions and activity of the population as a whole. People who tweet are overwhelmingly likely to be young, but there is another, silent majority that plays a hugely important role. So can the Twitter firehose really represent the intentions of this part of the population too?
The final challenge is in the nature of prediction. If the Twitter feed is predictive, then what’s needed is evidence that it can be used to make real predictions about the future and not just historical predictions about the past.
We’ve looked at some of these problems with the predictive power of social media before and the challenge is clear: if there is a claim to be able to predict the future, then this claim must be accompanied by convincing evidence of an actual prediction about an event before it happens.
Until then, it would surely be wise to be circumspect about the predictive powers of Twitter and other forms of social media.
Ref: arxiv.org/abs/1402.2308: Predicting Crowd Behavior with Big Public Data”

Structuring Big Data to Facilitate Democratic Participation in International Law


New paper by Roslyn Fuller: “This is an interdisciplinary article focusing on the interplay between information and communication technology (ICT) and international law (IL). Its purpose is to open up a dialogue between ICT and IL practitioners that focuses on the ways in which ICT can enhance equitable participation in international legal structures, particularly through capturing the possibilities associated with big data. This depends on the ability of individuals to access big data, for it to be structured in a manner that makes it accessible and for the individual to be able to take action based on it.”

LocalWiki turns open local data into open local knowledge


Marina Kukso at OpenGovVoices: “LocalWiki is an open knowledge project focusing on giving everyone the opportunity to collaborate to create and share all kinds of information about the place where they live.

The project started in 2004 in Davis, Calif. as the Davis Wiki, now the primary local information resource for Davis residents. One in seven residents has contributed to the project and, in a given month, almost every resident uses it.

In 2010, we received funding from the Knight Foundation to bring LocalWiki to many more communities. We created wiki software specifically designed for local collaboration and have seen adoption in more than 70 communities worldwide. People now use LocalWiki for everything from mapping out nature trails to planning a grassroots mayoral election candidate debate….

There’s a great deal of expertise within our communities, and at LocalWiki we see part of the mission of our work as providing a platform for people to contextualize and make meaning out of the information made available through open data and open gov efforts at the local level.

There are obvious limitations to the ability of programming laypeople to make use of open data to create new knowledge that drives action, most notably many people’s lack of expertise in data analysis. With LocalWiki we hope to address at least some of those limitations by making it significantly easier for people to collaborate to create meaning out of open data and to share it with others. This is why LocalWiki has a WYSIWYG editor, includes mapping as a core feature and prioritizes usability in design.

Finally, adding information about a community on LocalWiki is a way to create new open data. It’s incredibly important to make things like internal city crime statistics public, but residents’ perspectives on the relative safety of their neighborhoods are a different kind of data that provides additional insights into public safety challenges and adds complexity to the picture created by statistics.”

11 ways to rethink open data and make it relevant to the public


Miguel Paz at IJNET: “It’s time to transform open data from a trendy concept among policy wonks and news nerds into something tangible to everyday life for citizens, businesses and grassroots organizations. Here are some ideas to help us get there:
1. Improve access to data
Craig Hammer from the World Bank has tackled this issue, stating that “Open Data could be the game changer when it comes to eradicating global poverty”, but only if governments make available online data that become actionable intelligence: a launch pad for investigation, analysis, triangulation, and improved decision making at all levels.
2. Create open data for the end user
As Hammer wrote in a blog post for the Harvard Business Review, while the “opening” has generated excitement from development experts, donors, several government champions, and the increasingly mighty geek community, the hard reality is that much of the public has been left behind, or tacked on as an afterthought. Let’s get out of the building and start working for the end user.
3. Show, don’t tell
Regular folks don’t know what “open data” means. Actually, they probably don’t care what we call it and don’t know if they need it. Apple’s Steve Jobs said that a lot of times, people don’t know what they want until you show it to them. We need to stop telling them they need it and start showing them why they need it, through actionable user experience.
4. Make it relevant to people’s daily lives, not just to NGOs and policymakers’ priorities
A study of the use of open data and transparency in Chile showed the top 10 uses were for things that affect people’s lives directly, for better or for worse: data on government subsidies and support, legal certificates, information services, paperwork. If the data doesn’t speak to priorities at the household or individual level, we’ve lost the value of both the “opening” of data, and the data itself.
5. Invite the public into the sandbox
We need to give people “better tools to not only consume, but to create and manipulate data,” says my colleague Alvaro Graves, Poderopedia’s semantic web developer and researcher. This is what Code for America does, and it’s also what happened with the advent of Web 2.0, when the availability of better tools, such as blogging platforms, helped people create and share content.
6. Realize that open data are like QR codes
Everyone talks about open data the way they used to talk about QR codes: as something groundbreaking. But as with QR codes, open data only succeeds with the proper context to satisfy the needs of citizens. Context is the most important factor in driving the use and success of open data as a tool for global change.
7. Make open data sexy and pop, like Jess3.com
Geeks became popular because they made useful and cool things that could be embraced by end users. Open data geeks need to stick with that program.
8. Help journalists embrace open data
Jorge Lanata, a famous Argentinian journalist who is now being targeted by the Cristina Fernández administration due to his unfolding of government corruption scandals, once said that 50 percent of the success of a story or newspaper is assured if journalists like it.
That’s true of open data as well. If journalists understand its value for the public interest and learn how to use it, so will the public. And if they do, the winds of change will blow. Governments and the private sector will be forced to provide better, more up-to-date and standardized data. Open data will be understood not as a concept but as a public information source as relevant as any other. We need to teach Latin American journalists to be part of this.
9. News nerds can help you put your open data to good use
In order to boost the use of open data by journalists, we need news nerds: teams of lightweight, tech-heavily armored journalist-programmers who can teach colleagues how open data brings us high-impact storytelling that can change public policies and hold authorities accountable.
News nerds can also help us with “institutionalizing data literacy across societies,” as Hammer puts it. ICFJ Knight International Journalism Fellow and digital strategist Justin Arenstein calls these folks “mass mobilizers” of information. Alex Howard points to these groups because they “can help demystify data, to make it understandable by populations and not just statisticians.”
I call them News Ninja Nerds: accelerator taskforces that can foster innovations in news, data and transparency in a speedy way, saving governments and organizations time and a lot of money. Projects like ProPublica’s Dollars for Docs are great examples of what can be achieved if you mix FOIA, open data and the will to provide news in the public interest.
10. Rename open data
Part of the reason people don’t embrace concepts such as open data is that they are part of a lingo that has nothing to do with them. No empathy involved. Let’s start talking about people’s right to know and use the data generated by governments. As Tim O’Reilly puts it: “Government as a Platform for Greatness,” with examples we can relate to, instead of dead PDFs and dirty databases.
11. Don’t expect open data to substitute for thinking or reporting
Investigative reporting can benefit from it, but “there is no substitute for the kind of street-level digging, personal interviews, and detective work” that great journalism projects entail, says David Kaplan in a great post entitled Why Open Data Is Not Enough.”

Three ways digital leaders can operate successfully in local government


In The Guardian: “The landscape of digital is constantly changing and being redefined with every new development, technology breakthrough, success and failure. We need digital public sector leaders who can properly navigate this environment, and follow these three guidelines.
1. Champion open data
We need leaders who can ensure that information and data is open by default, and secure when absolutely required. Too often councils commission digital programmes only to find the data generated does not easily integrate with other systems, or that data is not council-owned and can only be accessed at further cost.
2. Don’t get distracted by flashy products
Leaders must adopt an agnostic approach to technology, and not get seduced by the ever-increasing number of digital technologies and lose sight of real user and business needs.
3. Learn from research
Tales of misplaced IT investments plague the public sector, and senior leaders are understandably hesitant when considering future investments. To avoid causing even more disruption, we should learn from research findings such as those of the New Local Government Network’s recent digital roundtables on what works.
Making the decision to properly invest in digital leadership will not just improve decision making about digital solutions and strategies. It will also bring in the knowledge needed to navigate the complex security requirements that surround public-sector IT. And it will ensure that practices honed in the digital environment become embedded in the council more generally.
In Devon, for example, we are making sure all the services we offer online are based on the experience and behaviour of users. This has led service teams to refocus on the needs of citizens rather than those of the organisation. And our experiences of future proofing, agility and responsiveness are informing service design throughout the council.
What’s holding us back?
Across local government there is still a fragmented approach to collaboration. In central government, the Government Digital Service is charged with providing the right environment for change across all government departments. However, in local government, digital leaders often work alone without a unifying strategy across the sector. It is important to understand and recognise that the Government Digital Service is more than just a team pushing and promoting digital in central government: they are the future of central government, attempting to transform everything.
Initiatives such as LocalGov Digital, O2’s Local Government Digital Fund, Forum (the DCLG’s local digital alliance) and the Guardian’s many public sector forums and networks are all helping to push forward debate, spread good practice and build a sense of urgent optimism around the local government digital agenda. But at present there is no equivalent to the unified force of the Government Digital Service.”

Canadian Organizations Join Forces to Launch Open Data Institute to Foster Open Government


Press Release: “The Canadian Digital Media Network, the University of Waterloo, Communitech, OpenText and Desire2Learn today announced the creation of the Open Data Institute.

The Open Data Institute, which received support from the Government of Canada in this week’s budget, will work with governments, academic institutions and the private sector to solve challenges facing “open government” efforts and realize the full potential of “open data.”
According to a statement, partners will work on development of common standards, the integration of data from different levels of government and the commercialization of data, “allowing Canadians to derive greater economic benefit from datasets that are made available by all levels of government.”
The Open Data Institute is a public-private partnership. Founding partners will contribute $3 million in cash and in-kind contributions over three years to establish the institute, a figure that has been matched by the Government of Canada.
“This is a strategic investment in Canada’s ability to lead the digital economy,” said Kevin Tuer, Managing Director of CDMN. “Similar to how a common system of telephone exchanges allowed world-wide communication, the Open Data Institute will help create a common platform to share and access datasets.”
“This will allow the development of new applications and products, creating new business opportunities and jobs across the country,” he added.
“The Institute will serve as a common forum for government, academia and the private sector to collaborate on Open Government initiatives with the goal of fueling Canadian tech innovation,” noted OpenText President and CEO Mark J. Barrenechea.
“The Open Data Institute has the potential to strengthen the regional economy and increase our innovative capacity,” added Feridun Hamdullahpur, president and vice-chancellor of the University of Waterloo.