Antidisciplinary


Joi Ito at LinkedIn: “One of the first words that I learned when I joined the Media Lab was “antidisciplinary.” It was listed as a requirement in an ad seeking applicants for a new faculty position. Interdisciplinary work is when people from different disciplines work together. An antidisciplinary project isn’t a sum of a bunch of disciplines but something entirely new – the word defies easy definition. But what it means to me is someone or something that doesn’t fit within a traditional academic discipline–a field of study with its own particular words, frameworks, and methods. Most academics are judged by how many times they have published in prestigious, peer-reviewed journals. Peer review usually consists of the influential members of your field reviewing your work and deciding whether it is important and unique. This architecture often leads to a dynamic where researchers focus more on impressing a small number of experts in their own field than on taking the high risk of an unconventional approach. This dynamic reinforces the cliché of academics–learning more and more about less and less. It causes a hyper-specialization where people in different areas have a very difficult time collaborating–or even communicating–with people in different fields. For me, antidisciplinary research is akin to mathematician Stanislaw Ulam’s famous observation that the study of non-linear physics is like the study of “non-elephant animals.” Antidisciplinary is all about the non-elephant animals.
The Media Lab focuses on “uniqueness, impact and magic.” What our students and faculty do should be unique. We shouldn’t be doing something that someone else is doing. If someone else starts doing it, we should stop. Everything we do should have impact. Lastly, “magic” means that we take on projects that inspire us; they should induce passion and go beyond incremental thinking. In the Lifelong Kindergarten group, researchers often describe the “Four Ps of Creative Learning” as Projects, Peers, Passion and Play. Play is extremely important for creative learning. There is a great deal of research showing that rewards and pressure can motivate people to “produce,” but creative learning and thinking requires the “space” that play creates. Pressure and rewards can often diminish that space, and thus squash creative thinking….”

Smarter video games, thanks to crowdsourcing


AAAS –Science Magazine: “Despite the stereotypes, any serious gamer knows it’s way more fun to play with real people than against the computer. Video game artificial intelligence, or AI, just isn’t very good; it’s slow, predictable, and generally stupid. All that stands to change, however, if GiantOtter, a Massachusetts-based startup, has its way, New Scientist reports. By crowdsourcing the AI’s learning, GiantOtter hopes to build systems where the computer can learn based on players’ previous behaviors, decision-making, and even voice communication—yes, the computer is listening in as you strategize. The hope is that by abandoning the traditional scripted programming models, AIs can be taught to mimic human behaviors, leading to more dynamic and challenging scenarios even in incredibly complex games like Blizzard Entertainment Inc.’s professionally played StarCraft II.”
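
GiantOtter has not published how its system works. Purely as an illustration of the general idea of learning from crowdsourced play rather than hand-written scripts, here is a minimal behavior-cloning sketch in Python; the game states, features, and actions are all invented for this sketch.

    # Minimal behavior-cloning sketch (hypothetical; not GiantOtter's code).
    # Crowdsourced gameplay logs are reduced to (game state, player action)
    # pairs, and a classifier learns to predict the action a human would take.
    from sklearn.ensemble import RandomForestClassifier

    # Each row is a simplified game state: [own_units, enemy_units,
    # resources, minutes_elapsed]; labels are the actions players chose.
    states = [
        [10, 4, 300, 5],   # strong position, early game
        [3, 12, 80, 9],    # outnumbered, low on resources
        [8, 8, 500, 14],   # even match, rich economy
    ]
    actions = ["attack", "retreat", "expand"]  # one label per state

    policy = RandomForestClassifier(n_estimators=100, random_state=0)
    policy.fit(states, actions)

    # In play, the AI queries the learned policy instead of a fixed script.
    print(policy.predict([[2, 10, 60, 11]]))  # likely "retreat"

A production system would need far richer state encodings, sequence models, and the voice data mentioned above; the point here is only that the scripted AI is replaced by a model fit to crowd behavior.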

Data-based Civic Participation


New workshop paper by C. A. Le Dantec in HCOMP 2014/Citizen + X: Workshop on Volunteer-based Crowdsourcing in Science, Public Health and Government, Pittsburgh, PA. November 2, 2014: “Within the past five years, a new form of technology-mediated public participation that experiments with crowdsourced data production in place of community discourse has emerged. Examples of this class of system include SeeClickFix, PublicStuff, and Street Bump, each of which mediates feedback about local neighborhood issues and helps communities mobilize resources to address those issues. The experiments being played out by this new class of services are derived from a form of public participation built on the ideas of smart cities, where residents and physical environments are instrumented to provide data to improve operational efficiency and sustainability (Caragliu, Del Bo, and Nijkamp 2011). Ultimately, the smart city is the application to local government of all the efficiencies that computing has always promised—efficiencies of scale, of productivity, of data—minus the messiness and contention of citizenship that play out through more traditional modes of public engagement and political discourse.
The question, then, is what might it look like to incorporate more active forms of civic participation and issue advocacy in an app- and data-driven world? To begin to explore this question, my students and I have developed a smartphone app as part of a larger regional planning partnership with the City of Atlanta and the Atlanta Regional Commission. The app, called Cycle Atlanta, enables cyclists to record their ride data—where they have gone, why they went there, what kind of cyclist they are—in an effort both to generate data for planners developing new bicycling infrastructure and to broaden public participation and input in the creation of those plans…”
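
The paper excerpt does not include Cycle Atlanta’s data schema. As a rough sketch of what one crowdsourced ride record could look like, with all field names assumed rather than taken from the app:

    # Hypothetical sketch of a single crowdsourced ride record; the field
    # names are illustrative and not Cycle Atlanta's actual schema.
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Ride:
        rider_type: str                      # e.g. "commuter", "recreational"
        purpose: str                         # why the trip was made
        trace: List[Tuple[float, float]] = field(default_factory=list)  # (lat, lon) fixes

    ride = Ride(rider_type="commuter", purpose="work")
    ride.trace.append((33.7490, -84.3880))   # downtown Atlanta
    ride.trace.append((33.7756, -84.3963))   # Georgia Tech

    # Planners can aggregate many such traces to see which corridors
    # cyclists actually use and where new infrastructure is needed.
    print(len(ride.trace), "GPS points recorded")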

How social media is reshaping news


Monica Anderson and Andrea Caumont at Pew Research Center: “The ever-growing digital native news world now boasts about 5,000 digital news sector jobs, according to our recent calculations, 3,000 of which are at 30 big digital-only news outlets. Many of these digital organizations emphasize the importance of social media in storytelling and engaging their audiences. As journalists gather for the annual Online News Association conference, here are answers to five questions about social media and the news.
1. How do social media sites stack up on news? When you take into account both the total reach of a site (the share of Americans who use it) and the proportion of users who get news on the site, Facebook is the obvious news powerhouse among the social media sites. Roughly two-thirds (64%) of U.S. adults use the site, and half of those users get news there — amounting to 30% of the general population….
2. How do social media users participate in news? Half of social network site users have shared news stories, images or videos, and nearly as many (46%) have discussed a news issue or event. In addition to sharing news on social media, a small number are also covering the news themselves, by posting photos or videos of news events. Pew Research found that in 2014, 14% of social media users posted their own photos of news events to a social networking site, while 12% had posted videos. This practice has played a role in a number of recent breaking news events, including the riots in Ferguson, Mo.
3. How do social media users discover news? Facebook is an important source of website referrals for many news outlets, but the users who arrive via Facebook spend far less time and consume far fewer pages than those who arrive directly. The same is true of users arriving by search. Our analysis of comScore data found visitors who go to a news media website directly spend roughly three times as long as those who wind up there through search or Facebook, and they view roughly five times as many pages per month. This higher level of engagement from direct visitors is evident whether a site’s traffic is driven by search or social sharing, and it has big implications for news organizations that are experimenting with digital subscriptions while endeavoring to build a loyal audience.
4. What’s the news experience like on Facebook? Our study of news consumption on Facebook found Facebook users are experiencing a relatively diverse array of news stories on the site — roughly half of Facebook users regularly see six different topic areas. The most common news people see is entertainment news: 73% of Facebook users regularly see this kind of content on the site. Unlike Twitter, where a core function is the distribution of information as news breaks, Facebook is not yet a place many turn to for learning about breaking news….
5. How does social media impact the discussion of news events? Our recent survey revealed social media doesn’t always facilitate conversation around the important issues of the day. In fact, we found people were less willing to discuss their opinion on the Snowden-NSA story on social media than they were in person. And Facebook and Twitter users were less likely to want to share their opinions in many face-to-face settings, especially if they felt their social audience disagreed with them.”

Big Talk about Big Data: Discourses of ‘Evidence’ and Data in British Civil Society


From the Digital Economy “Communities and Culture” Network: “The term ‘Big Data’ carries a great deal of currency in business and academic spheres. Data and their subsequent analysis are obviously not new. ‘Bigness’ in this context often refers to three characteristics that differentiate it from so-called ‘small’ data: volume, variety, and velocity. These three attributes of ‘bigness’, promising to open novel, macro-level perspectives on complex issues (Boyd and Crawford 2011), led enthusiasts like Chris Anderson to claim that ‘with enough data, the numbers speak for themselves’. But is this actually the case? Critical voices like Manovich (2011) argue that data never exist in ‘raw’ forms but are rather influenced by humans who—whether intentionally or not—select and construct them in certain ways.
These debates about data are relevant to wider discussions about digital change in society because they point to a more general concern about the potential of all sizes of data to selectively reveal dimensions of social phenomena on which decisions or policies are based. Crucially, if data generation and analysis are not entirely neutral but rather carry assumptions about what is ‘worthwhile’ or ‘acceptable’ to measure in the first place, then it raises the critical question of whether preferences for certain types of research—particularly work conducted under the auspices of a Big Data ‘brand’—reflect coherent sets of values and worldviews. What assumptions underpin preferences for ‘evidence-based’ research based on data? What qualities does such a phrase signify or confer on research? Which ‘sizes’ of data qualify as ‘evidence’ in the first place, or, to play on Anderson’s words, what kinds of data are allowed to speak for themselves in the realms of policy, media, and advocacy?
Hosted at the ESRC Centre on Migration, Policy, and Society (COMPAS) and The Migration Observatory at the University of Oxford, this project critically interrogates the values that inform demands by civil society organisations for research that is ‘data-driven’ or ‘evidence-based’. Specifically, it aims to document the extent to which perceived advantages of data ‘bigness’ (volume, variety, and velocity) influence these demands.
Read the report.

Forget GMOs. The Future of Food Is Data—Mountains of It


Cade Metz at Wired: “… Led by Dan Zigmond—who previously served as chief data scientist for YouTube, then Google Maps—this ambitious project aims to accelerate the work of all the biochemists, food scientists, and chefs on the first floor, providing a computer-generated shortcut to what Hampton Creek sees as the future of food. “We’re looking at the whole process,” Zigmond says of his data team, “trying to figure out what it all means and make better predictions about what is going to happen next.”

The project highlights a movement, spreading through many industries, that seeks to supercharge research and development using the kind of data analysis and manipulation pioneered in the world of computer science, particularly at places like Google and Facebook. Several projects are already using such techniques to feed the development of new industrial materials and medicines. Others hope the latest data analytics and machine learning techniques can help diagnose disease. “This kind of approach is going to allow a whole new type of scientific experimentation,” says Jeremy Howard, who as the president of Kaggle once oversaw the leading online community of data scientists and is now applying tricks of the data trade to healthcare as the founder of Enlitic.
Zigmond’s project is the first major effort to apply “big data” to the development of food, and though it’s only just getting started—with some experts questioning how effective it will be—it could spur additional research in the field. The company may license its database to others, and Hampton Creek founder and CEO Josh Tetrick says it may even open source the data, so to speak, freely sharing it with everyone. “We’ll see,” says Tetrick, a former college football linebacker who founded Hampton Creek after working on economic and social campaigns in Liberia and Kenya. “That would be in line with who we are as a company.”…
Initially, Zigmond and his team will model protein interactions on individual machines, using tools like the R programming language (a common means of crunching data) and machine learning algorithms much like those that recommend products on Amazon.com. As the database expands, they plan to arrange for much larger and more complex models that run across enormous clusters of computer servers, using the sort of sweeping data-analysis software systems employed by the likes of Google. “Even as we start to get into the tens and hundreds of thousands and millions of proteins,” Zigmond says, “it starts to be more than you can handle with traditional database techniques.”
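
Hampton Creek’s models are not public. As a loose illustration of the Amazon-style recommendation analogy drawn above, the sketch below scores invented candidate proteins by feature similarity to a target profile; the feature values and protein names are made up.

    # Illustrative sketch only: "recommend" candidate proteins by feature
    # similarity, in the spirit of the product-recommendation comparison
    # above. Feature values and protein names are invented.
    import numpy as np

    # Rows: proteins; columns: assay features (e.g. gelling strength,
    # emulsification, foaming) on an arbitrary 0-1 scale.
    features = np.array([
        [0.9, 0.2, 0.4],   # pea protein A
        [0.8, 0.3, 0.5],   # pea protein B
        [0.1, 0.9, 0.7],   # sorghum protein C
    ])
    names = ["pea A", "pea B", "sorghum C"]

    # Profile of the protein to be replaced (e.g. an egg protein).
    target = np.array([0.85, 0.25, 0.45])

    # Cosine similarity between the target and every candidate.
    sims = features @ target / (
        np.linalg.norm(features, axis=1) * np.linalg.norm(target))
    for name, s in sorted(zip(names, sims), key=lambda p: -p[1]):
        print(f"{name}: {s:.3f}")

At the scale Zigmond describes, the same idea would run over thousands or millions of proteins and many more features, which is where the cluster-scale tooling comes in.
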
In particular, Zigmond is exploring the use of deep learning, a form of artificial intelligence that goes beyond ordinary machine learning. Google is using deep learning to drive the speech recognition system in Android phones. Microsoft is using it to translate Skype calls from one language to another. Zigmond believes it can help model the creation of new foods….”
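
Again purely illustrative, and not Hampton Creek’s model: swapping the similarity lookup above for a small neural network, the simplest relative of the deep-learning models described here, lets the system learn nonlinear relationships between protein features and a measured property. All data below is synthetic.

    # Toy illustration: a small neural network learns a nonlinear mapping
    # from protein features to a measured property, standing in for the
    # deep-learning approach described above.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.random((200, 3))                  # 200 proteins, 3 assay features
    y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2]     # synthetic nonlinear property

    net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
    net.fit(X, y)
    print(net.predict([[0.9, 0.2, 0.4]]))     # predicted property for a new protein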

Redesigning that first encounter with online government


Nancy Scola in the Washington Post: “Teardowns,” Samuel Hulick calls them, and by that he means his step-by-step dissections of how some of the world’s most popular digital services — Gmail, Evernote, Instagram — welcome new users. But the term might give an overly negative sense of what Hulick is up to. The Portland, Ore., user-experience designer highlights both the good and bad in his critiques, and his annotated slideshows, under the banner of UserOnboard, have gained a following among design aficionados.

Now Hulick is partnering with two of those fans, a pair of Code for America fellows, to encourage the public to do the same for, say, the process of applying for food stamps.  It’s called CitizenOnboard.
Using the original UserOnboard is like taking a tour through some of the digital sites you know best — but with an especially design-savvy friend by your side pointing out the kinks. The “user experience,” or UX, on these sites “is often tacked on haphazardly,” says Hulick, who launched UserOnboard in December 2013 and who is also the author of the recent book “The Elements of User Onboarding.” What he’s looking for in a good UX, he says, is something non-designers can spot, too. “If you were the Web site, what tone would you take? How would you guide people through your process?”
Hulick reviews what’s working and what’s not, and adds a bit of sass: Gmail pre-populates its inbox with a few welcome messages: “Preloading some emails is a nice way to deal with the ‘cold start’ problem,” Hulick notes. Evernote nudges new users to check out its blog and other apps: “It’s like a restaurant rolling out the dessert cart while I’m still trying to decide if I even want to eat there.” Instagram’s first backdrop is a photo of someone taking a picture: “I’m learning how to Instagram by osmosis!”….
CitizenOnboard’s pitch is to get the public to do that same work. They suggest starting with state food stamp programs. Hulick tackled his own state’s: the onboarding for Oregon’s SNAP service is 118 slides long, but that’s because there is much to address. In one step, applicants must, using a drop-down menu, identify how those in their family are related to one another. “It took a while to figure out who should be the relation ‘of’ the other,” Hulick notes in his teardown. “In fact, I’m still not 100% sure I got it right.”…”

Announcing New U.S. Open Government Commitments on the Third Anniversary of the Open Government Partnership


US White House Fact Sheet: “Three years ago, President Obama joined with the leaders of seven other nations to launch the Open Government Partnership (OGP), an international partnership between governments and civil society to promote transparency, fight corruption, energize civic engagement, and leverage new technologies to open up governments worldwide.  The United States and other founding countries pledged to transform the way that governments serve their citizens in the 21st century.  Today, as heads of state of OGP participating countries gather at the UN General Assembly, this partnership has grown from 8 to 65 nations and hundreds of civil society organizations around the world. These countries are embracing the challenge by taking steps in partnership with civil society to increase the ability of citizens to engage their governments, access government data to fuel entrepreneurship and innovation, and promote accountability….
The United States is committed to continuing to lead by example in OGP.  Since assuming office, President Obama has prioritized making government more open and accountable and has taken substantial steps to increase citizen participation, collaboration with civil society, and transparency in government.  The United States will remain a global leader of international efforts to promote transparency, stem corruption and hold to account those who exploit the public’s trust for private gain.  Yesterday, President Obama announced several steps the United States is taking to deepen our support for civil society globally.
Today, to mark the third anniversary of OGP, President Obama is announcing four new and expanded open government initiatives that will advance our efforts through the end of 2015.
1.      Promote Open Education to Increase Awareness and Engagement
Open education is the open sharing of digital learning materials, tools, and practices that ensures free access to and legal adoption of learning resources.  The United States is committed to open education and will:

  • Raise open education awareness and identify new partnerships. The U.S. Department of State, the U.S. Department of Education, and the Office of Science and Technology Policy will jointly host a workshop on challenges and opportunities in open education internationally with stakeholders from academia, industry, and government.
  • Pilot new models for using open educational resources to support learning.  The State Department will conduct three pilots overseas by December 2015 that use open educational resources to support learning in formal and informal learning contexts. The pilots’ results, including best practices, will be made publicly available for interested educators.
  • Launch an online skills academy. The Department of Labor (DOL), with cooperation from the Department of Education, will award $25 million through competitive grants to launch an online skills academy in 2015 that will offer open online courses of study, using technology to create high-quality, free, or low-cost pathways to degrees, certificates, and other employer-recognized credentials.

2.      Deliver Government Services More Effectively Through Information Technology
The Administration is committed to serving the American people more effectively and efficiently through smarter IT delivery. The newly launched U.S. Digital Service will work to remove barriers to digital service delivery and remake the experience that people and businesses have with their government. To improve delivery of Federal services, information, and benefits, the Administration will:

  • Expand digital service delivery expertise in government. Throughout 2015, the Administration will continue recruiting top digital talent from the private and public sectors to expand services across the government. These individuals—who have expertise in technology, procurement, human resources, and financing—will serve as digital professionals in a number of capacities in the Federal government, including the new U.S. Digital Service and the 18F digital delivery team within the U.S. General Services Administration, as well as within Federal agencies. These teams will take best practices from the public and private sectors and scale them across agencies with a focus on the customer experience.
  • Build digital services in the open. The Administration will expand its efforts to build digital services in the open. This includes using open and transparent processes intended to better understand user needs, testing pilot digital projects, and designing and developing digital services at scale. In addition, building on the recently published Digital Services Playbook, the Administration will continue to openly publish best practices on collaborative websites that enable the public to suggest improvements.
  • Adopt an open source software policy. Using and contributing back to open source software can fuel innovation, lower costs, and benefit the public. No later than December 31, 2015, the Administration will work through the Federal agencies to develop an open source software policy that, together with the Digital Services Playbook, will support improved access to custom software code developed for the Federal government.

3.      Increase Transparency in Spending
The Administration has made an increasing amount of Federal spending data publicly available and searchable, allowing nationwide stakeholders to perform analysis of Federal spending. The Administration will build on these efforts by committing to:

  • Improve USAspending.gov. In 2015, the Administration will launch a refreshed USAspending.gov website that will improve the site’s design and user experience, including better enabling users to explore the data using interactive maps and improving the search functionality and application programming interface.
  • Improve accessibility and reusability of Federal financial data. In 2015, as part of implementation of the DATA Act, the Administration will work to improve the accessibility and reusability of Federal financial data by issuing data element definition standards and standards for exchanging financial data (a hypothetical sketch of one such data element appears after this list). The Administration, through the Office of Management and Budget, will leverage industry data exchange standards to the extent practicable to maximize the sharing and utilization of Federal financial data.
  • Explore options for visualization and publication of additional Federal financial data.  The Administration, through the Treasury Department, will use small-scale pilots to help explore options for visualizing and publishing Federal financial data from across the government as required by the DATA Act.
  • Continue to engage stakeholders. The Administration will continue to engage with a broad group of stakeholders to seek input on Federal financial transparency initiatives including DATA Act implementation, by hosting town hall meetings, conducting interactive workshops, and seeking input via open innovation collaboration tools.
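
To make the “data element definition standards” commitment above concrete, here is a hypothetical sketch of what one standardized element and a trivial check against it might look like; the element name and rules are invented for illustration and are not drawn from the actual DATA Act standards.

    # Hypothetical illustration of a "data element definition" and a check
    # against it; the element and rules are invented for this sketch and
    # are not the actual DATA Act standards.
    federal_award_amount = {
        "name": "federal_award_amount",
        "definition": "Total US dollars obligated for an award",
        "type": "decimal",
        "required": True,
    }

    def validate(record, element):
        """Return True if the record supplies the element correctly."""
        value = record.get(element["name"])
        if value is None:
            return not element["required"]
        return isinstance(value, (int, float))

    print(validate({"federal_award_amount": 1250000.00}, federal_award_amount))  # True
    print(validate({}, federal_award_amount))                                    # False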

4.      Use Big Data to Support Greater Openness and Accountability
President Obama has recognized the growing importance of “big data” technologies for our economy and the advancement of public good in areas such as education, energy conservation, and healthcare. The Administration is taking action to ensure responsible uses of big data to promote greater openness and accountability across a range of areas and sectors. As part of the work it is doing in this area, the Administration has committed to:

  • Enhance sharing of best practices on data privacy for state and local law enforcement.  Federal agencies with expertise in law enforcement, privacy, and data practices will seek to enhance collaboration and information sharing about privacy best practices among state and local law enforcement agencies receiving Federal grants.
  • Ensure privacy protection for big data analyses in health. Big data introduces new opportunities to advance medicine and science, improve health care, and support better public health. To ensure that individual privacy is protected while capitalizing on new technologies and data, the Administration, led by the Department of Health and Human Services, will: (1) consult with stakeholders to assess how Federal laws and regulations can best accommodate big data analyses that promise to advance medical science and reduce health care costs; and (2) develop recommendations for ways to promote and facilitate research through access to data while safeguarding patient privacy and autonomy.
  • Expand technical expertise in government to stop discrimination. U.S. Government departments and agencies will work to expand their technical expertise to identify outcomes facilitated by big data analytics that may have a discriminatory impact on protected classes. …”

Data Visualization: Principles and Practice (Second Edition)


Book by Alexandru C. Telea: “This book explores the study of processing and visually representing data sets. Data visualization is closely related to information graphics, information visualization, scientific visualization, and statistical graphics. This second edition presents a better treatment of the relationship between traditional scientific visualization and information visualization, a description of the emerging field of visual analytics, and updated techniques using the GPU and new generations of software tools and packages. This edition is also enhanced with exercises and downloadable code and data sets.” See also Supplemental Material.

Plenario


About Plenario: “Plenario makes it possible to rethink the way we use open data. Instead of being constrained by the data that is accessible and usable, let’s start by formulating our questions and then find the data to answer them. Plenario makes this easy by tying together all datasets on one map and one timeline—because in the real world, everything affects everything else…
The problem
Over the past few years, levels of government from the federal administration to individual municipalities like the City of Chicago have begun embracing open data, releasing datasets publicly for free. This movement has vastly increased the amount of data available, but existing platforms and technologies are designed mainly to view and access individual datasets one at a time. This restriction contradicts decades of research contending that no aspect of the urban landscape is truly isolated; in today’s cities, everything is connected to everything else.
Furthermore, researchers are often limited in the questions they can ask by the data available to answer them. It is not uncommon to spend 75% of one’s time locating, downloading, cleaning, and standardizing the relevant datasets—leaving precious few resources for the important work.
What we do
Plenario is designed to take us from “spreadsheets on the web” to truly smart open data. This rests on two fundamental breakthroughs:

1)  Allow users to assemble and download data from multiple, independent data sources, such as two different municipal data portals, or the federal government and a privately curated dataset.
2)  Unite all datasets along a single spatial and temporal index, making it possible to do complex aggregations with one query.

With these advances, Plenario allows users to study regions over specified time periods using all relevant data, regardless of original source, and represent the data as a single time series. By providing a single, centralized hub for open data, the Plenario platform enables urban scientists to ask the right questions with as few constraints as possible….
Plenario is being implemented by the Urban Center for Computation and Data and DataMade.”
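
Plenario’s actual API is not reproduced here, but the core idea, a single shared spatial and temporal index across independent datasets, can be sketched in a few lines; the schema and data below are invented.

    # Minimal sketch of Plenario's core idea (invented schema, not the real
    # Plenario API): two independent datasets share one spatial-temporal
    # index, so a single query can aggregate across both.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE potholes (cell TEXT, week TEXT, n INTEGER);
        CREATE TABLE crimes   (cell TEXT, week TEXT, n INTEGER);
        INSERT INTO potholes VALUES ('grid_41.88_-87.63', '2014-W40', 7);
        INSERT INTO crimes   VALUES ('grid_41.88_-87.63', '2014-W40', 3);
    """)

    # One query over the shared (cell, week) index unites both datasets
    # into a single time series for the same patch of the city.
    for row in db.execute("""
        SELECT p.cell, p.week, p.n AS potholes, c.n AS crimes
        FROM potholes p JOIN crimes c ON p.cell = c.cell AND p.week = c.week
    """):
        print(row)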