Aatsinki: The Story of Arctic Cowboys
Despite the conclusion of the “official” hackathon workday on Saturday, the Aatsinki: The Story of Arctic Cowboys team left POV’s offices intent on continuing their efforts.
Having completed enough work to have a rough prototype, the team members turned to their network of peers over the evening to solicit feedback, an effort that helped them assess their priorities for Day 2. Members had hoped to have working prototypes of two project elements ready for the screening scheduled for the end of the day. But with time in shrinking supply, the team decided to narrow their focus and polish only one of the elements.
“With projects like these, you can either go deep, or go wide,” said developer Mike Knowlton. “We decided to go deep on the issue of land-use rights.”
The decision appeared to pay off. At 5:15 PM, the team announced via Twitter that they had achieved “code freeze,” the point at which the product was deemed to have reached the end of its current iteration. The members had the luxury of using the next 45 minutes to prepare for their presentation.
At the demo, filmmaker Jessica Oreck explained that the idea behind the project was to provide content related to her film that could more pointedly address some of the issues faced by her subjects.
“We came up with the idea of an online debate,” said Oreck. “But it’s not a debate between people online. It’s a debate between you, the viewer and the Aatsinki family that’s moderated by Mike, [designer Hal Siegel] and I.”
In order to achieve that goal, the newly created web app first shows the user footage from the year-long shoot to introduce viewers to the film’s subject. Users then choose one of five subjects for the dialogue. In the case of the demo, the subject was reindeer herding: the app asked the audience how the reindeer’s grazing lands should be used, then presented a counterargument to their choice.
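The choose-a-stance, get-a-counterargument flow could be sketched as a simple lookup. Everything below — the topic names, stances, and counter-argument text — is invented for illustration and is not taken from the actual Aatsinki app.

```python
# Hypothetical sketch of the debate flow: a viewer picks a stance on a
# topic, and the app replies with the moderated counter-argument.
# All topics, stances, and text here are illustrative assumptions.

COUNTER_ARGUMENTS = {
    "reindeer-herding": {
        # stance chosen by the viewer -> response "from" the family/moderators
        "open-grazing": "Unrestricted grazing can degrade fragile lichen beds "
                        "that take decades to recover.",
        "fenced-pasture": "Fencing fragments migration routes the herd has "
                          "followed for generations.",
    },
}

def respond(topic: str, stance: str) -> str:
    """Return the counter-argument for a viewer's chosen stance."""
    try:
        return COUNTER_ARGUMENTS[topic][stance]
    except KeyError:
        return "No counter-argument recorded for that position yet."
```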
Siegel said his goal was to create something simple and direct that mirrored both the feeling of the film and the complexity of the situation faced by the herders.
“We were more intrigued by asking questions rather than providing answers,” he said.
Op-Video
At the end of Day 1, the Op-Video team had cobbled together a working framework for their project. Their development approach allowed different pieces of the project to be taken home and worked on independently by different members. By the morning of Day 2, the team had decided to focus on two content modules: one element that pairs audio with interactive graphics, and another that links the audio with animation. The technique allowed the team to play with several different approaches to the project.
At the team’s demonstration, Posner explained that while he had applied to the hackathon under the Op-Video moniker, the team’s work had changed so substantially that they’d rechristened the project The Numbers. The demo product, which team members described as a “data doc,” starts with an animated video that provides an overview of unemployment numbers, using audio interviews conducted with experts the team had pulled together over the previous few days.
The hope is that the video can contain evergreen content that is married with recent data pulled from online sources by code. For example, if the “seasonal adjustment” module is chosen, viewers will hear an explanation of the subject with the audio synced to interactive graphs built using Google’s visualization tools. In another module, the code pulls the latest data from the Bureau of Labor Statistics and converts it into an animated font designed by Posner that seamlessly blends into the video.
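The “latest data” step described above could look something like the sketch below. It assumes the general shape of the BLS public API’s JSON responses (a `Results`/`series`/`data` structure, newest observation first); the sample payload and figures are invented for illustration, and a real module would fetch the payload over HTTP rather than hardcode it.

```python
import json

# Invented sample payload mimicking the shape of a BLS public API response.
# LNS14000000 is the civilian unemployment rate series.
SAMPLE_RESPONSE = json.dumps({
    "Results": {"series": [{
        "seriesID": "LNS14000000",
        "data": [
            {"year": "2012", "period": "M06", "value": "8.2"},
            {"year": "2012", "period": "M05", "value": "8.2"},
            {"year": "2012", "period": "M04", "value": "8.1"},
        ],
    }]}
})

def latest_rate(payload: str) -> float:
    """Pull the most recent observation out of a BLS-style response."""
    series = json.loads(payload)["Results"]["series"][0]
    return float(series["data"][0]["value"])  # BLS lists newest first
```

The extracted number would then feed the animated font rather than being baked into the video, keeping the evergreen narration in sync with current figures.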
Living Los Sures
The Living Los Sures team got a fair amount of work done on Day 1, after narrowing their goals, discussing possible objectives, and dispensing with ideas that seemed too ambitious. But on the morning of Day 2 the finish line was still far off on the horizon. Instead of focusing on a back-end user interface through which content could be introduced to their project, the team decided to focus on the front end, which would still require significant work given the product’s complexity.
For software engineer Kyle Warren, one of the main objectives in working on the project was writing code that could have a life beyond the demo. “We set out to make code that we might be able to use later, that was something that the team really emphasized early on,” he said.
At their presentation, UnionDocs artistic director Christopher Allen explained that their goal was to take the 1984 documentary Los Sures, about the Williamsburg neighborhood of Brooklyn, New York, and update and annotate it with newly gathered content. As the original film plays, the viewer is provided with a visual cue when the ancillary content is available. If clicked, the film’s window shrinks, and is joined by several others containing a new video, photo or text content to explore.
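The cue behavior described above amounts to a lookup keyed on film time: each ancillary item carries the window during which its visual cue appears. The timestamps, media kinds, and titles below are invented for illustration.

```python
# Minimal sketch of the annotation-cue logic: each item is tagged with
# the span of film time during which its cue is visible. All values
# here are illustrative assumptions, not actual Los Sures content.

ANNOTATIONS = [
    {"start": 30.0, "end": 55.0, "kind": "photo", "title": "South 2nd St, 2012"},
    {"start": 120.0, "end": 150.0, "kind": "video", "title": "Resident interview"},
]

def cues_at(t: float) -> list:
    """Return the ancillary items whose cue should be visible at time t."""
    return [a for a in ANNOTATIONS if a["start"] <= t < a["end"]]
```

When the viewer clicks a visible cue, the player would shrink the film’s window and open the returned items alongside it.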
“What you’re looking at is more of a production tool than a presentation tool,” said Allen. “To make interactive experiences, I think we need more tools where we can test out content. Right now we’re doing it in a kind of backward way, where we’ve got content and we’re trying to plug in stuff that makes sense.” Allen sees some of the largest challenges for the project as creative ones — essentially in trying to figure out what content will add to the user experience.
StoryCorps Audio Slideshows
Day 2 for the StoryCorps Audio Slideshow team meant continuing with their goal of taking the show’s archival material and representing it in a captivating way.
“We wanted to somehow tie our stories to the news cycle,” said StoryCorps producer Michael Garofalo.
In order to do so, developer Antonio Kaplan used a news API to surface the stories most relevant to current public discourse. On the visual front end, the stories were represented by photos — the larger the photo, the greater the story’s relevance to today’s headlines.
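The larger-photo-equals-more-relevant rule could be as simple as a linear mapping from a relevance score to a photo size. The score scale and pixel range below are illustrative assumptions, not details from the team’s actual code.

```python
# Sketch of the sizing rule: photo size scales linearly with a story's
# relevance score (e.g., derived from matches against current headlines
# returned by a news API). MIN_PX/MAX_PX are assumed bounds.

MIN_PX, MAX_PX = 80, 320  # smallest and largest photo edge, in pixels

def photo_size(score: float, max_score: float) -> int:
    """Map a relevance score onto a photo edge length, linearly."""
    if max_score <= 0:
        return MIN_PX
    frac = min(score / max_score, 1.0)
    return round(MIN_PX + frac * (MAX_PX - MIN_PX))
```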
The second element of the project was intended to add something dynamic to the listening experience. In order to do that, the team pulled keywords from transcripts and presented them as text that synched with the audio piece.
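Syncing the keywords with the audio comes down to tagging each extracted word with the moment it is spoken and revealing it as playback passes that moment. The words and timestamps below are invented for illustration.

```python
# Sketch of the keyword-sync idea: transcript keywords, each tagged with
# the time (in seconds) it is spoken, appear once playback reaches that
# moment. All words and timestamps here are illustrative assumptions.

KEYWORDS = [
    (2.0, "alone"),
    (14.5, "struggle"),
    (31.0, "hope"),
    (47.5, "family"),
]

def visible_keywords(t: float) -> list:
    """Return the keywords that should be on screen at playback time t."""
    return [word for ts, word in KEYWORDS if ts <= t]
```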
“We noticed the story had an emotional arc, just from the words,” said Garofalo. “It went from negative to positive, and we thought, how can we represent that without it being distracting?”
Feed Me A Story
For the Feed Me A Story team, Day 2 meant a personnel change. Sumin Chou, who helped the team define the interactions of the video cookbook app, was replaced by visual designer Sonna Kim, who could take Day 1’s wireframes and give them visual life.
The team started Day 2 draping visuals over the iPad app framework coded by iOS developer Lauren Hasson, then tying in video and text already created by filmmakers Laura Nova and Theresa Loong. Over the course of the day, the team’s progress was so significant that they flirted with the idea of going beyond their goal of the “minimum viable product.”
The prototype iPad app consisted of a series of video screens, navigated by swipe. The app is a catalog of stories exploring documentary subjects’ relationships to specific dishes. The viewer could hit a button at any time to flip the video, revealing the recipe for the dish being discussed.
At their presentation, filmmaker Laura Nova explained that they had always described their project as a documentary-style video cookbook, “but we had no idea what that would look like,” she said.
In the future, the team wants users to mark their favorite recipes and then collect them to create, in effect, a video e-cookbook.
Of all the teams, the Feed Me A Story group’s project may have changed the most from its original vision. Nova and Loong had initially wanted to create a platform that would allow user-generated content to be shared. “I think Lauren made us realize that we had generated material that we had filmed that we wanted to share,” said Nova. “The user generated stuff should happen later, when people have seen lots of examples.”