The false promise of the digital humanities


Adam Kirsch in the New Republic: “The humanities are in crisis again, or still. But there is one big exception: digital humanities, which is a growth industry. In 2009, the nascent field was the talk of the Modern Language Association (MLA) convention: “among all the contending subfields,” a reporter wrote about that year’s gathering, “the digital humanities seem like the first ‘next big thing’ in a long time.” Even earlier, the National Endowment for the Humanities created its Office of Digital Humanities to help fund projects. And digital humanities continues to go from strength to strength, thanks in part to the Mellon Foundation, which has seeded programs at a number of universities with large grants, most recently $1 million to the University of Rochester to create a graduate fellowship.

Despite all this enthusiasm, the question of what the digital humanities is has yet to be given a satisfactory answer. Indeed, no one asks it more often than the digital humanists themselves. The recent proliferation of books on the subject, from sourcebooks and anthologies to critical manifestos, is a sign of a field suffering an identity crisis, trying to determine what, if anything, unites the disparate activities carried on under its banner. “Nowadays,” writes Stephen Ramsay in Defining Digital Humanities, “the term can mean anything from media studies to electronic art, from data mining to edutech, from scholarly editing to anarchic blogging, while inviting code junkies, digital artists, standards wonks, transhumanists, game theorists, free culture advocates, archivists, librarians, and edupunks under its capacious canvas.”

Within this range of approaches, we can distinguish a minimalist and a maximalist understanding of digital humanities. On the one hand, it can be simply the application of computer technology to traditional scholarly functions, such as the editing of texts. An exemplary project of this kind is the Rossetti Archive created by Jerome McGann, an online repository of texts and images related to the career of Dante Gabriel Rossetti: this is essentially an open-ended, universally accessible scholarly edition. To others, however, digital humanities represents a paradigm shift in the way we think about culture itself, spurring a change not just in the medium of humanistic work but also in its very substance. At their most starry-eyed, some digital humanists, such as the authors of the jargon-laden manifesto and handbook Digital_Humanities, want to suggest that the addition of the high-powered adjective to the long-suffering noun signals nothing less than an epoch in human history: “We live in one of those rare moments of opportunity for the humanities, not unlike other great eras of cultural-historical transformation such as the shift from the scroll to the codex, the invention of movable type, the encounter with the New World, and the Industrial Revolution.”

The language here is the language of scholarship, but the spirit is the spirit of salesmanship: the very same kind of hyperbolic, hard-sell approach we are so accustomed to hearing about the Internet, or about Apple’s latest utterly revolutionary product. Fundamental to this kind of persuasion is the undertone of menace, the threat of historical illegitimacy and obsolescence. Here is the future, we are made to understand: we can either get on board or stand athwart it and get run over. The same kind of revolutionary rhetoric appears again and again in the new books on the digital humanities, from writers with very different degrees of scholarly commitment and intellectual sophistication.

In Uncharted, Erez Aiden and Jean-Baptiste Michel, the creators of the Google Ngram Viewer, an online tool that allows you to map the frequency of words in all the printed matter digitized by Google, talk up the “big data revolution”: “Its consequences will transform how we look at ourselves…. Big data is going to change the humanities, transform the social sciences, and renegotiate the relationship between the world of commerce and the ivory tower.” These breathless prophecies are just hype. But at the other end of the spectrum, even McGann, one of the pioneers of what used to be called “humanities computing,” uses the high language of inevitability: “Here is surely a truth now universally acknowledged: that the whole of our cultural inheritance has to be recurated and reedited in digital forms and institutional structures.”

If ever there were a chance to see the ideological construction of reality at work, digital humanities is it. Right before our eyes, options are foreclosed and demands enforced; a future is constructed as though it were being discovered. By now we are used to this process, since over the last twenty years the proliferation of new technologies has totally discredited the idea of opting out of “the future.”…

The promise and perils of giving the public a policy ‘nudge’


Nicholas Biddle and Katherine Curchin at the Conversation: “…These behavioural insights are more than just intellectual curiosities. They are increasingly being used by policymakers inspired by Richard Thaler and Cass Sunstein’s bestselling manifesto for libertarian paternalism, Nudge.
The British and New South Wales governments have set up behavioural insights units. Many other governments around Australia are following their lead.
Most of the attention so far has been on how behavioural insights could be employed to make people slimmer, greener, more altruistic or better savers. However, it’s time we started thinking and talking about the impact these ideas could have on social policy – programs and payments that aim to reduce disadvantage and narrow divergence in opportunity.
While applying behavioural insights can potentially improve the efficiency and effectiveness of social policy, unscrupulous or poorly thought through applications could be disturbing and damaging. It would appear behavioural insights inspired the UK government’s so-called “Nudge Unit” to force job seekers to undergo bogus personality tests – on pain of losing benefits if they refused.
The idea seemed to be that because people readily believe that any vaguely worded combination of character traits applies to them – which is why people connect with their star sign – the results of a fake psychometric test can dupe them into believing they have a go-getting personality.
In our view, this is not how behavioural insights should be applied. This UK example seems to be a particularly troubling case of the use of “nudges” in conjunction with, rather than instead of, coercion. This is the worst of both worlds: not libertarian paternalism, but authoritarian paternalism.
Ironically, this instance betrays a questionable understanding of behavioural insights, or at the very least a very short-term focus. Research tells us that co-operative behaviour depends on the perception of fairness and successful framing requires trust.
Dishonest interventions, which make the government seem both unfair and untrustworthy, should have the longer-term effect of undermining its ability to elicit cooperation and successfully frame information.
Some critics have assumed nudge is inherently conservative or neoliberal. Yet these insights could inform progressive reform in many ways.
For example, taking behavioural insights seriously would encourage a redesign of employment services. There is plenty of scope for thinking more rigorously about how job seekers’ interactions with employment services unintentionally inhibit their motivation to search for work.

Beware accidental nudges

More than just a nudge here or there, behavioural insights can be used to reflect on almost all government decisions. Too often governments accidentally nudge citizens in the opposite direction to where they want them to go.
Take the disappointing take-up of the Matched Savings Scheme, which is part of New Income Management in the Northern Territory. It matches welfare recipients’ savings dollar-for-dollar up to a maximum of A$500 and is meant to get people into the habit of saving regularly.
No doubt saving is extremely hard for people on very low incomes. But another reason so few people embraced the savings program may be a quirk in its design: people had to save money out of their non-income-managed funds, but the $500 reward they received from the government went into their income-managed account.
To some people this appears to have signalled the government’s bad faith. It said to them: even if you demonstrate your responsibility with money, we still won’t trust you.
The Matched Savings Scheme was intended to be a carrot, not a stick. It was supposed to complement the coercive element of income management by giving welfare recipients an incentive to improve their budgeting. Instead it was perceived as an invitation to welfare recipients to be complicit in their own humiliation.
The promise of an extra $500 would have been a strong lure for Homo economicus, but it wasn’t for Homo sapiens. People out of work or on income support are no more or less rational than merchant bankers or economics professors. Their circumstances and choices are different though.
The idiosyncrasies of human decision-making don’t mean that the human brain is fundamentally flawed. Most of the biases that we mentioned earlier are adaptive. But they do mean that policy makers need to appreciate how we differ from rational utility maximisers.
Real humans are not worse than economic man. We’re just different and we deserve policies made for Homo sapiens, not Homo economicus.”

Open Government Data Gains Global Momentum


Wyatt Kash in Information Week: “Governments across the globe are deepening their strategic commitments and working more closely to make government data openly available for public use, according to public and private sector leaders who met this week at the inaugural Open Government Data Forum in Abu Dhabi, hosted by the United Nations and the United Arab Emirates, April 28-29.

Data experts from Europe, the Middle East, the US, Canada, Korea, and the World Bank highlighted how one country after another has set into motion initiatives to expand the release of government data and broaden its use. Those efforts are gaining traction thanks to multinational organizations, such as the Open Government Partnership, the Open Data Institute, the World Bank, and the UN’s e-government division, which are working to share practices and standardize open data tools.
In the latest example, the French government announced April 24 that it is joining the Open Government Partnership, a group of 64 countries working jointly to make their governments more open, accountable, and responsive to citizens. The announcement caps a string of policy shifts, which began with the formal release of France’s Open Data Strategy in May 2011 and which parallel similar moves by the US.
The strategy committed France to providing “free access and reuse of public data… using machine-readable formats and open standards,” said Romain Lacombe, head of innovation for the French prime minister’s open government task force, Etalab. The French government is taking steps to end the practice of selling datasets, such as civil and case-law data, and is making them freely reusable. France launched a public data portal, Data.gouv.fr, in December 2011 and joined a G8 initiative to engage with open data innovators worldwide.
For South Korea, open data is not just about achieving greater transparency and efficiency, but is seen as digital fuel for a nation that by 2020 expects to achieve “ambient intelligence… when all humans and things are connected together,” said Dr. YoungSun Lee, who heads South Korea’s National Information Society Agency.
He foresees open data leading to a shift in the ways government will function: from an era of e-government, where information is delivered to citizens, to one where predictive analysis will foster a “creative government,” in which “government provides customized services for each individual.”
The open data movement is also propelling innovative programs in the United Arab Emirates. “The role of open data in directing economic and social decisions pertaining to investments… is of paramount importance” to the UAE, said Dr. Ali M. Al Khouri, director general of the Emirates Identity Authority. It also plays a key role in building public trust and fighting corruption, he said….”

Findings of the Big Data and Privacy Working Group Review


John Podesta at the White House Blog: “Over the past several days, severe storms have battered Arkansas, Oklahoma, Mississippi and other states. Dozens of people have been killed and entire neighborhoods turned to rubble and debris as tornadoes have touched down across the region. Natural disasters like these present a host of challenges for first responders. How many people are affected, injured, or dead? Where can they find food, shelter, and medical attention? What critical infrastructure might have been damaged?
Drawing on open government data sources, including Census demographics and NOAA weather data, along with their own demographic databases, Esri, a geospatial technology company, has created a real-time map showing where the twisters have been spotted and how the storm systems are moving. They have also used these data to show how many people live in the affected area, and summarize potential impacts from the storms. It’s a powerful tool for emergency services and communities. And it’s driven by big data technology.
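A minimal sketch of the kind of spatial overlay such a map rests on, combining a storm footprint with population data. The polygon, tract IDs, and population counts below are invented placeholders, not Esri's, NOAA's, or the Census Bureau's actual data:

```python
from shapely.geometry import Point, Polygon

# Hypothetical storm footprint (lon, lat pairs), standing in for a NOAA warning polygon.
storm_footprint = Polygon([(-94.6, 36.0), (-94.0, 36.2), (-93.8, 35.7), (-94.5, 35.6)])

# Hypothetical census tracts: (tract_id, lon, lat, population).
tracts = [
    ("05007-0101", -94.2, 35.9, 4800),
    ("05007-0102", -94.1, 36.0, 3100),
    ("05033-0201", -95.1, 35.5, 2700),  # outside the footprint
]

# Keep only tracts whose centroid falls inside the storm footprint, then sum population.
affected = [t for t in tracts if storm_footprint.contains(Point(t[1], t[2]))]
print("tracts affected:", [t[0] for t in affected])
print("estimated population affected:", sum(t[3] for t in affected))
```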
In January, President Obama asked me to lead a wide-ranging review of “big data” and privacy—to explore how these technologies are changing our economy, our government, and our society, and to consider their implications for our personal privacy. Together with Secretary of Commerce Penny Pritzker, Secretary of Energy Ernest Moniz, the President’s Science Advisor John Holdren, the President’s Economic Advisor Jeff Zients, and other senior officials, our review sought to understand what is genuinely new and different about big data and to consider how best to encourage the potential of these technologies while minimizing risks to privacy and core American values.
Over the course of 90 days, we met with academic researchers and privacy advocates, with regulators and the technology industry, with advertisers and civil rights groups. The President’s Council of Advisors for Science and Technology conducted a parallel study of the technological trends underpinning big data. The White House Office of Science and Technology Policy jointly organized three university conferences at MIT, NYU, and U.C. Berkeley. We issued a formal Request for Information seeking public comment, and hosted a survey to generate even more public input.
Today, we presented our findings to the President. We knew better than to try to answer every question about big data in three months. But we are able to draw important conclusions and make concrete recommendations for Administration attention and policy development in a few key areas.
There are a few technological trends that bear drawing out. The declining cost of collection, storage, and processing of data, combined with new sources of data like sensors, cameras, and geospatial technologies, mean that we live in a world of near-ubiquitous data collection. All this data is being crunched at a speed that is increasingly approaching real-time, meaning that big data algorithms could soon have immediate effects on decisions being made about our lives.
The big data revolution presents incredible opportunities in virtually every sector of the economy and every corner of society.
Big data is saving lives. Infections are dangerous—even deadly—for many babies born prematurely. By collecting and analyzing millions of data points from a NICU, one study was able to identify factors, like slight increases in body temperature and heart rate, that serve as early warning signs an infection may be taking root—subtle changes that even the most experienced doctors wouldn’t have noticed on their own.
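As a toy illustration of this kind of early-warning analysis (not the cited study's actual model), a score can be built from small, sustained drifts in vitals above a baseline; the baselines, weights, and readings here are invented for illustration:

```python
# Toy early-warning score from streaming vitals. Values are invented; the study
# described above used far richer models over millions of data points.
def warning_score(temps_c, heart_rates, baseline_temp=37.0, baseline_hr=140):
    """Score rises when temperature and heart rate drift above an infant's baseline."""
    temp_drift = sum(max(t - baseline_temp, 0) for t in temps_c) / len(temps_c)
    hr_drift = sum(max(hr - baseline_hr, 0) for hr in heart_rates) / len(heart_rates)
    return temp_drift + hr_drift / 10.0  # crude weighting of the two signals

# A subtle upward drift that a single glance at the monitor might miss.
recent_temps = [37.0, 37.1, 37.2, 37.3, 37.3]
recent_hr = [142, 145, 149, 152, 155]
score = warning_score(recent_temps, recent_hr)
print("early-warning score:", round(score, 2), "-> review" if score > 1.0 else "-> normal")
```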
Big data is making the economy work better. Jet engines and delivery trucks now come outfitted with sensors that continuously monitor hundreds of data points and send automatic alerts when maintenance is needed. Utility companies are starting to use big data to predict periods of peak electric demand, adjusting the grid to be more efficient and potentially averting brown-outs.
Big data is making government work better and saving taxpayer dollars. The Centers for Medicare and Medicaid Services have begun using predictive analytics—a big data technique—to flag likely instances of reimbursement fraud before claims are paid. The Fraud Prevention System helps identify the highest-risk health care providers for waste, fraud, and abuse in real time and has already stopped, prevented, or identified $115 million in fraudulent payments.
But big data raises serious questions, too, about how we protect our privacy and other values in a world where data collection is increasingly ubiquitous and where analysis is conducted at speeds approaching real time. In particular, our review raised the question of whether the “notice and consent” framework, in which a user grants permission for a service to collect and use information about them, still allows us to meaningfully control our privacy as data about us is increasingly used and reused in ways that could not have been anticipated when it was collected.
Big data raises other concerns, as well. One significant finding of our review was the potential for big data analytics to lead to discriminatory outcomes and to circumvent longstanding civil rights protections in housing, employment, credit, and the consumer marketplace.
No matter how quickly technology advances, it remains within our power to ensure that we both encourage innovation and protect our values through law, policy, and the practices we encourage in the public and private sector. To that end, we make six actionable policy recommendations in our report to the President:
Advance the Consumer Privacy Bill of Rights. Consumers deserve clear, understandable, reasonable standards for how their personal information is used in the big data era. We recommend the Department of Commerce take appropriate consultative steps to seek stakeholder and public comment on what changes, if any, are needed to the Consumer Privacy Bill of Rights, first proposed by the President in 2012, and to prepare draft legislative text for consideration by stakeholders and submission by the President to Congress.
Pass National Data Breach Legislation. Big data technologies make it possible to store significantly more data, and further derive intimate insights into a person’s character, habits, preferences, and activities. That makes the potential impacts of data breaches at businesses or other organizations even more serious. A patchwork of state laws currently governs requirements for reporting data breaches. Congress should pass legislation that provides for a single national data breach standard, along the lines of the Administration’s 2011 Cybersecurity legislative proposal.
Extend Privacy Protections to non-U.S. Persons. Privacy is a worldwide value that should be reflected in how the federal government handles personally identifiable information about non-U.S. citizens. The Office of Management and Budget should work with departments and agencies to apply the Privacy Act of 1974 to non-U.S. persons where practicable, or to establish alternative privacy policies that apply appropriate and meaningful protections to personal information regardless of a person’s nationality.
Ensure Data Collected on Students in School is used for Educational Purposes. Big data and other technological innovations, including new online course platforms that provide students real time feedback, promise to transform education by personalizing learning. At the same time, the federal government must ensure educational data linked to individual students gathered in school is used for educational purposes, and protect students against their data being shared or used inappropriately.
Expand Technical Expertise to Stop Discrimination. The detailed personal profiles held about many consumers, combined with automated, algorithm-driven decision-making, could lead—intentionally or inadvertently—to discriminatory outcomes, or what some are already calling “digital redlining.” The federal government’s lead civil rights and consumer protection agencies should expand their technical expertise to be able to identify practices and outcomes facilitated by big data analytics that have a discriminatory impact on protected classes, and develop a plan for investigating and resolving violations of law.
Amend the Electronic Communications Privacy Act. The laws that govern protections afforded to our communications were written before email, the internet, and cloud computing came into wide use. Congress should amend ECPA to ensure the standard of protection for online, digital content is consistent with that afforded in the physical world—including by removing archaic distinctions between email left unread or over a certain age.
We also identify several broader areas ripe for further study, debate, and public engagement that, collectively, we hope will spark a national conversation about how to harness big data for the public good. We conclude that we must find a way to preserve our privacy values in both the domestic and international marketplace. We urgently need to build capacity in the federal government to identify and prevent new modes of discrimination that could be enabled by big data. We must ensure that law enforcement agencies using big data technologies do so responsibly, and that our fundamental privacy rights remain protected. Finally, we recognize that data is a valuable public resource, and call for continuing the Administration’s efforts to open more government data sources and make investments in research and technology.
While big data presents new challenges, it also presents immense opportunities to improve lives, and the United States is perhaps better suited to lead this conversation than any other nation on earth. Our innovative spirit, technological know-how, and deep commitment to values of privacy, fairness, non-discrimination, and self-determination will help us harness the benefits of the big data revolution and encourage the free flow of information while working with our international partners to protect personal privacy. This review is but one piece of that effort, and we hope it spurs a conversation about big data across the country and around the world.
Read the Big Data Report.
See the fact sheet from today’s announcement.

A New Map Gives New Yorkers the Power to Report Traffic Hazards


Sarah Goodyear in the Atlantic/Cities: “Ask any New Yorker about unsafe conditions on the city’s streets. Go ahead, ask.
You might want to sit down. This is going to take a while.
New York City’s streets are some of the most heavily used public spaces in the nation. A lot of the time, the swirling mass of users share space remarkably well. Every second in New York, it sometimes seems, a thousand people just barely miss colliding, thanks to a finely honed sense of self-preservation and spatial awareness.
The dark side is that sometimes, they do collide. These famously chaotic and contested streets are often life-threatening. Drivers routinely drive well over the 30 mph speed limit, run red lights, and fail to yield to pedestrians in crosswalks. Pedestrians step out into traffic, sometimes without looking at what’s coming their way. Bicyclists ride the wrong way up one-way streets.
In recent years, the city has begun to address the problem, mainly through design solutions like better bike infrastructure, pedestrian refuges, and crosswalk countdown clocks. Still, last year, 286 New Yorkers died in traffic crashes.
Mayor Bill de Blasio vowed almost as soon as he was sworn into office in January to pursue an initiative called Vision Zero, which aims to eliminate traffic fatalities through a combination of design, enforcement, and education.
A new tool in the Vision Zero effort was unveiled earlier this week: a map of the city on which people can log their observations and complaints about chronically unsafe conditions. The map offers a menu of icons including red-light running, double-parking, failure to yield, and speeding, and allows users to plot them on a map of the city’s streets. Sites where pedestrian fatalities have occurred since 2009 are marked, and the most dangerous streets in each borough for people on foot are colored red.
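A rough sketch of how such crowdsourced hazard reports might be stored and rolled up into hot spots. The field names, categories, and grid trick are assumptions for illustration, not the actual OpenPlans/DOT schema:

```python
# Hypothetical hazard-report records and a coarse spatial tally.
from collections import Counter

CATEGORIES = {"red_light_running", "double_parking", "failure_to_yield", "speeding"}

reports = [
    {"category": "speeding", "lat": 40.733, "lon": -73.989},
    {"category": "failure_to_yield", "lat": 40.733, "lon": -73.989},
    {"category": "speeding", "lat": 40.701, "lon": -73.944},
]

# Snap reports onto a coarse grid (~100 m) so repeat complaints at the same
# corner stack up into a visible hot spot on the map.
def grid_cell(lat, lon, precision=3):
    return (round(lat, precision), round(lon, precision))

hot_spots = Counter(grid_cell(r["lat"], r["lon"]) for r in reports if r["category"] in CATEGORIES)
print(hot_spots.most_common(1))  # the location with the most complaints
```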

The map, a joint project of DOT, the NYPD, and the Taxi and Limousine Commission, has only been live for a couple of days. Already, it is speckled with dozens of multicolored dots indicating problem areas. (Full disclosure: The map was designed by OpenPlans, a nonprofit affiliated with Streetsblog, where I worked several years ago.)…”

Saving Big Data from Big Mouths


Cesar A. Hidalgo in Scientific American: “It has become fashionable to bad-mouth big data. In recent weeks the New York Times, Financial Times, Wired and other outlets have all run pieces bashing this new technological movement. To be fair, many of the critiques have a point: There has been a lot of hype about big data and it is important not to inflate our expectations about what it can do.
But little of this hype has come from the actual people working with large data sets. Instead, it has come from people who see “big data” as a buzzword and a marketing opportunity—consultants, event organizers and opportunistic academics looking for their 15 minutes of fame.
Most of the recent criticism, however, has been weak and misguided. Naysayers have been attacking straw men, focusing on worst practices, post hoc failures and secondary sources. The common theme has been, to a great extent, obvious: “Correlation does not imply causation,” and “data has biases.”
Critics of big data have been making three important mistakes:
First, they have misunderstood big data, framing it narrowly as a failed revolution in social science hypothesis testing. In doing so they ignore areas where big data has made substantial progress, such as data-rich Web sites, information visualization and machine learning. If there is one group of big-data practitioners that the critics should worship, they are the big-data engineers building the social media sites where their platitudes spread. Engineering a site rich in data, like Facebook, YouTube, Vimeo or Twitter, is extremely challenging. These sites are possible because of advances made quietly over the past five years, including improvements in database technologies and Web development frameworks.
Big data has also contributed to machine learning and computer vision. Thanks to big data, Facebook algorithms can now match faces almost as accurately as humans do.
And detractors have overlooked big data’s role in the proliferation of computational design, data journalism and new forms of artistic expression. Computational artists, journalists and designers—the kinds of people who congregate at meetings like Eyeo—are using huge sets of data to give us online experiences that are unlike anything we experienced in paper. If we step away from hypothesis testing, we find that big data has made big contributions.
The second mistake critics often make is to confuse the limitations of prototypes with fatal flaws. This is something I have experienced often. For example, in Place Pulse—a project I created with my team at the M.I.T. Media Lab—we used Google Street View images and crowdsourced visual surveys to map people’s perception of a city’s safety and wealth. The original method was rife with limitations that we dutifully acknowledged in our paper. Google Street View images are taken at arbitrary times of the day and showed cities from the perspective of a car. City boundaries were also arbitrary. To overcome these limitations, however, we needed a first data set. Producing that first limited version of Place Pulse was a necessary part of the process of making a working prototype.
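One simple way such crowdsourced visual surveys can be scored, assuming a pairwise "which looks safer?" format and a plain win-rate metric (a simplification, not Place Pulse's exact methodology):

```python
# Score street-view images from pairwise votes; the vote data are invented.
from collections import defaultdict

# Each vote: (winning_image_id, losing_image_id)
votes = [
    ("img_a", "img_b"),
    ("img_a", "img_c"),
    ("img_b", "img_c"),
    ("img_c", "img_a"),
]

wins = defaultdict(int)
appearances = defaultdict(int)
for winner, loser in votes:
    wins[winner] += 1
    appearances[winner] += 1
    appearances[loser] += 1

# Win rate as a crude perceived-safety score per image.
scores = {img: wins[img] / appearances[img] for img in appearances}
for img, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(img, round(s, 2))
```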
A year has passed since we published Place Pulse’s first data set. Now, thanks to our focus on “making,” we have computer vision and machine-learning algorithms that we can use to correct for some of these easy-to-spot distortions. Making is allowing us to correct for time of the day and dynamically define urban boundaries. Also, we are collecting new data to extend the method to new geographical boundaries.
Those who fail to understand that the process of making is iterative are in danger of being too quick to condemn promising technologies. In 1920 the New York Times published a prediction that a rocket would never be able to leave the atmosphere. Similarly erroneous predictions were made about the car or, more recently, about the iPhone’s market share. In 1969 the Times had to publish a retraction of their 1920 claim. What similar retractions will need to be published in the year 2069?
Finally, the doubters have relied too heavily on secondary sources. For instance, they made a piñata out of the 2008 Wired piece by Chris Anderson framing big data as “the end of theory.” Others have criticized projects for claims that their creators never made. A couple of weeks ago, for example, Gary Marcus and Ernest Davis published a piece on big data in the Times. There they wrote about another of my group’s projects, Pantheon, which is an effort to collect, visualize and analyze data on historical cultural production. Marcus and Davis wrote that Pantheon “suggests a misleading degree of scientific precision.” As an author of the project, I have been unable to find where I made such a claim. Pantheon’s method section clearly states that: “Pantheon will always be—by construction—an incomplete resource.” That same section contains a long list of limitations and caveats as well as the statement that “we interpret this data set narrowly, as the view of global cultural production that emerges from the multilingual expression of historical figures in Wikipedia as of May 2013.”
Bickering is easy, but it is not of much help. So I invite the critics of big data to lead by example. Stop writing op-eds and start developing tools that improve on the state of the art. They are much appreciated. What we need are projects that are worth imitating and that we can build on, not obvious advice such as “correlation does not imply causation.” After all, true progress is not something that is written, but made.”

Can technology end homelessness?


Geekwire: “At the heart of Seattle’s Pioneer Square neighborhood exists a unique juxtaposition.
Inside a two-story brick building is the Impact Hub co-working space and business incubator, a place where entrepreneurs are busily working on ideas to improve the world we live in.
But walk outside the Impact Hub’s doors, and you’ll enter an entirely different world.
Homelessness. Drugs. Violence.
Now, those two contrasting scenes are coming together.
This weekend, more than 100 developers, designers, entrepreneurs and do-gooders will team up at the Impact Hub for the first-ever Hack to End Homelessness, a four-day event that encourages participants to envision and create ideas to alleviate the homelessness problem in Seattle.
The Washington Low Income Housing Alliance, Real Change and several other local homeless services and advocacy groups have already submitted project proposals, which range from an e-commerce site showcasing artwork of homeless youth to a social network focusing on low-end mobile phones for people who are homeless.
Seattle has certainly made an effort to fix its homelessness problem. Back in 2005, the Committee to End Homelessness established a 10-year plan to dramatically reduce the number of people without homes in the region. By the end of 2014, the goal was to “virtually end” homelessness in King County.
But fast-forward to today and that hasn’t exactly come to fruition. There are more than 2,300 people in Seattle sleeping in the streets — up 16 percent from 2013 — and city data shows nearly 10,000 households checking into shelters or transitional housing last year. Thousands of others may not be on the streets or in shelters, yet still live without a permanent place to sleep at night.
While some efforts of the committee have helped curb homelessness, it’s clear that there is still a problem — one that has likely been affected by rising rent prices in the area.
Candace Faber, one of the event organizers, said that her team has been shocked by the growth of homelessness in the Seattle metropolitan area. They’re worried not only about how many people do not have a permanent home, but what kind of impact the problem is having on the city as a whole.
“With Seattle experiencing the highest rent hikes in the nation, we’re concerned that, without action, our city will not be able to remain the dynamic, affordable place it is now,” Faber said. “We don’t want to lose our entrepreneurial spirit or wind up with a situation like San Francisco, where you can’t afford to innovate without serious VC backing and there’s serious tension between the housing community and tech workers.”
That raises the question: How, exactly, can technology fix the homeless problem? The stories of these Seattle entrepreneurs help to provide the answer.

FROM SHELTERS TO STARTUPS

Kyle Kesterson knows a thing or two about being homeless.
That’s because the Freak’n Genius co-founder and CEO spent his childhood living in 14 different homes and apartments, in addition to a bevy of shelters and transitional houses. The moving around and lack of permanent housing made going to school difficult, and finding acceptance anywhere was nearly impossible.
“I was always the new kid, the poor kid, and the smallest kid,” Kesterson says now. “You just become the target of getting picked on.”
By the time he was 15, Kesterson realized that school wasn’t a place that fit his learning style. So, he dropped out to help run his parents’ house-cleaning business in Seattle.
That’s when Kesterson, now a talented artist and designer, further developed his creative skills. The Yuba City, Calif. native would spend hours locked in a room perusing deviantART.com, a new Internet community where other artists from around the world were sharing their own work and receiving feedback.

So now Kesterson, who plans on attending the final presentations at the Hack to End Homelessness event on Sunday, is using his own experiences to teach youth about finding solutions to problems with an entrepreneurial lens. When it comes to helping at-risk youth, or those that are homeless, Kesterson says it’s about finding a thriving and supportive environment — the same one he surrounded himself with while surfing through deviantART 14 years ago.
“Externally, our environment plays a significant role in either setting people up for growth and success, or weighing people down, sucking the life out of them, and eventually leaving them at or near rock bottom,” he said.
For Kesterson, it’s entrepreneurs who can help create these environments for people, and show them that they have the ability and power to solve problems and truly make a difference.
“Entrepreneurs need to first focus on the external and internal environments of those that are homeless,” he said. “Support, help, and inspire. Become a part of their network to mentor and make connections with the challenges they are faced with, the way we lean on our own mentor networks.”

FIXING THE ROOT

Lindsay Caron Epstein has always, in some shape or form, been an entrepreneur at heart.
She figured out a way to survive after moving to Arizona from New Jersey with only $100. She figured out how to land a few minimum wage jobs and eventually start a non-profit community center for at-risk youth at just 22 years old.
And now, Caron is using her entrepreneurial spirit to help figure out ways to fix social challenges like homelessness.
The 36-year-old is CEO and founder of ActivateHub, a startup working alongside other socially-conscious companies in Seattle’s Fledge Accelerator. ActivateHub is a “community building social action network,” or a place where people can find local events put on by NGOs and other organizations working on a wide variety of issues.
Caron found the inspiration to start the company after organizing programs for troubled youth in Arizona and studying the homelessness problem while in school. She became fascinated with how communities were built in a way that could help people and pull them out of tough situations, but there didn’t appear to be an easy way for people to get involved.
“If you do a Google search for poverty, homelessness, climate change — any issue you care about — you’ll just find news articles and blogs,” Caron explained. “You don’t find who in your community is working on those problems and you don’t find out how you can get involved.”
Caron says her company can help those that may not have a home or have anything to do. ActivateHub, she said, might give them a reason to become engaged in something and create a sense of value in the community.
“It gives people a reason to clean up and enables them to make connections,” said Caron, who will also be attending this weekend’s event. “Some people need that inspiration and purpose to change their situation, and a lot of times that motivation isn’t there.”
Of course, ActivateHub alone isn’t going to solve the homelessness problem by itself. Caron knows this and thinks that entrepreneurs can help by focusing on more preventative measures. Sure, technology can be used to help connect homeless people to certain resources, but there’s a deeper issue at hand for Caron…”

House passes bill to eliminate wasteful reports


Federal Times: “Agencies would stop producing a variety of unnecessary reports, under legislation passed by the House April 28.
The Government Reports Elimination Act would cut reports from across government and save agencies about $1 million over the next five years. The legislation is sponsored by House Oversight and Government Reform Committee chairman Darrell Issa, R-Calif., and by Reps. Gerry Connolly, D-Va., and Rob Woodall, R-Ga. Sens. Mark Warner, D-Va., and Kelly Ayotte, R-N.H., have introduced a companion bill in the Senate.
“Congress relies on accurate, timely reports to inform its spending and policy decisions, but outdated or duplicative reports are simply a waste of government resources,” Issa said in a press release.
Connolly said it is important that Congress leverage every opportunity to streamline or eliminate antiquated reporting requirements in a bipartisan way.
“Enacting our bipartisan legislation will free up precious agency resources, allowing taxpayer dollars to be devoted to operations that are truly mission-critical, high priority functions,” Connolly said.”
Bill at: http://www.cbo.gov/publication/45303

Crowdsourcing the future: predictions made with a social network


New Paper by Clifton Forlines et al: “Researchers have long known that aggregate estimations built from the collected opinions of a large group of people often outperform the estimations of individual experts. This phenomenon is generally described as the “Wisdom of Crowds”. This approach has shown promise with respect to the task of accurately forecasting future events. Previous research has demonstrated the value of utilizing meta-forecasts (forecasts about what others in the group will predict) when aggregating group predictions. In this paper, we describe an extension to meta-forecasting and demonstrate the value of modeling the familiarity among a population’s members (its social network) and applying this model to forecast aggregation. A pair of studies demonstrates the value of taking this model into account, and the described technique produces aggregate forecasts for future events that are significantly better than the standard Wisdom of Crowds approach as well as previous meta-forecasting techniques.”
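The abstract does not spell out the aggregation model, but the general idea of folding a social network into forecast aggregation can be sketched as down-weighting forecasters who are closely connected and therefore likely to share information. The numbers and the weighting rule below are invented stand-ins, not the paper's method:

```python
# Illustrative "wisdom of crowds" aggregation that down-weights closely
# connected forecasters. All data and the weighting rule are invented.
forecasts = {"ann": 0.70, "bob": 0.72, "cara": 0.40, "dev": 0.68}  # P(event happens)

# Hypothetical familiarity graph: who knows whom.
friends = {
    "ann": {"bob", "dev"},
    "bob": {"ann", "dev"},
    "cara": set(),
    "dev": {"ann", "bob"},
}

# Plain mean vs. a mean that weights each person by 1 / (1 + number of
# connections), so isolated viewpoints count for more.
plain_mean = sum(forecasts.values()) / len(forecasts)
weights = {p: 1.0 / (1 + len(friends[p])) for p in forecasts}
weighted_mean = sum(forecasts[p] * weights[p] for p in forecasts) / sum(weights.values())

print(f"plain mean:    {plain_mean:.3f}")
print(f"network-aware: {weighted_mean:.3f}")
```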

Mapping the Intersection Between Social Media and Open Spaces in California


Stamen Design: “Last month, Stamen launched parks.stamen.com, a project we created in partnership with the Electric Roadrunner Lab, with the goal of revealing the diversity of social media activity that happens inside parks and other open spaces in California. If you haven’t already looked at the site, please go visit it now! Find your favorite park, or the parks that are nearest to you, or just stroll between random parks using the wander button. For more background about the goals of the project, read Eric’s blog post: A Conversation About California Parks.
In this post I’d like to describe some of the algorithms we use to collect the social media data that feeds the park pages. Currently we collect data from four social media platforms: Twitter, Foursquare, Flickr, and Instagram. We chose these because they all have public APIs (Application Programming Interfaces) that are easy to work with, and we expect they will provide a view into the different facets of each park, and the diverse communities who enjoy these parks. Each social media service creates its own unique geographies, and its own way of representing these parks. For example, the kinds of photos you upload to Instagram might be different from the photos you upload to Flickr. The way you describe experiences using Twitter might be different from the moments you document by checking into Foursquare. In the future we may add more feeds, but for now there’s a lot we can learn from these four.
Through the course of collecting data from these social network services, I also found that each service’s public API imposes certain constraints on our queries, producing its own intricate patterns. Thus, the quirks of how each API was written result in distinct and fascinating geometries. Also, since we are only interested in parks for this project, the process of culling non-park-related content further produces unusual and interesting patterns. Rural areas have large parks that cover huge areas, while cities have lots of (relatively) tiny parks, which creates its own challenges for how we query the APIs.
Broadly, we followed a similar approach for all the social media services. First, we grab the geocoded data from the APIs. This ignores any media that don’t have a latitude and longitude associated with them. In Foursquare, almost all checkins have a latitude and longitude, and for Flickr and Instagram most photos have a location associated with them. However, for Twitter, only around 1% of all tweets have geographic coordinates. But as we will see, even 1% still results in a whole lot of tweets!
After grabbing the social media data, we intersect it with the outlines of parks and open spaces in California, using polygons from the California Protected Areas Database maintained by GreenInfo Network. Everything that doesn’t intersect one of these parks, we throw away. The following maps represent the data as it looks before the filtering process.
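A minimal sketch of that intersect-and-discard step, with a made-up rectangle standing in for a California Protected Areas Database polygon and invented records standing in for the geocoded social media posts:

```python
from shapely.geometry import Point, Polygon

# Made-up park boundary; the real pipeline uses CPAD polygons from GreenInfo Network.
park = Polygon([(-122.51, 37.76), (-122.45, 37.76), (-122.45, 37.78), (-122.51, 37.78)])

# Invented social media items; only those with coordinates can be tested at all.
posts = [
    {"service": "instagram", "lon": -122.48, "lat": 37.77},  # inside the park
    {"service": "twitter", "lon": -122.41, "lat": 37.79},    # outside -> thrown away
    {"service": "flickr", "lon": None, "lat": None},          # no coordinates -> ignored
]

# Keep only geocoded posts whose point falls inside the park polygon.
in_park = [
    p for p in posts
    if p["lon"] is not None and park.contains(Point(p["lon"], p["lat"]))
]
print(in_park)
```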
But enough talking, let’s look at some maps!”