Fitbit Time

Reading: Time Frames by Scott McCloud, New Media Reader.

This reading was a great finale to Virginia Tech’s New Media Seminar.  McCloud’s dissection of time and space in comics is fantastic!  Take a look at the image below – is it one point in time, or is it a sequence of points?  Are you aware of this when you are reading the comic?

From this blog.

This week we were asked to think about how time is encoded in some other medium or technology. I happened to be staring at my Fitbit log, so let’s think about how Fitbit considers time.  Here is a screenshot of my Fitbit Dashboard from the weekend.

Screenshot of my Fitbit Dashboard (daily view).

You can click through the days using the arrow tabs on the left and right – in this way, time is depicted as pages in a book.  Each page is one day (12:00 AM to 11:59 PM), divided into 15-minute increments.  However, there’s another aspect of time: clicking on the “Daily” dropdown, you can get a 7-day panel:

Screenshot of my Fitbit Dashboard (7-day view).

Time looks different now. Each bar represents 24 hours.  Further, you can select different types of information to view: steps, heart rate, calories, etc.

Screenshot of the Fitbit data-type selector.

Fitbit (and other activity-, calorie-, and diet-logging systems) tends to think of time in a similar way:

  1. A day runs from 12:00 AM to 11:59 PM – not from when you wake up to when you go to bed, and not from noon to noon.
  2. When you consider a different length of time (e.g., a week instead of a day), you tend to get a different representation of the data. You can only see one panel at a time, one “page” in your time book.
  3. Time is associated with wins and failures.  You see smiley faces and gold stars during time frames where you met your goals, so you can visually tell your “bad” days from your “good” ones.  I played in an ultimate tournament on Saturday, so it was a decidedly “good” Fitbit day.
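To make scheme (1) and the 15-minute increments concrete, here is a minimal sketch in Python of how such binning could work.  The function name and the exact scheme are my own illustration, not Fitbit’s actual API: each timestamp maps to a calendar-day “page” and one of 96 fifteen-minute slots within it.

```python
from datetime import datetime

def fitbit_style_buckets(ts):
    """Map a timestamp to two bins: the calendar day (midnight to
    midnight, the "page" of the time book) and the 15-minute slot
    within that day (0 through 95)."""
    day = ts.date().isoformat()
    slot = (ts.hour * 60 + ts.minute) // 15
    return day, slot

day, slot = fitbit_style_buckets(datetime(2015, 4, 25, 14, 37))
# 14:37 falls in the 15-minute slot starting at 14:30 (slot 58)
```

Note how midnight-to-midnight days fall out of this choice: a workout that straddles midnight gets split across two “pages,” which is exactly the behavior described in point 1.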

Junk Yard Education

Reading: Chapter 7 (“Learning Webs”) of Deschooling Society by Ivan Illich, 1971.

This week we were challenged with the task of considering a “deschooled society.”  Illich writes,

We must conceive of new relational structures which are deliberately set up to facilitate access to [educational] resources for the use of anybody who is motivated to seek them for his education.

The first of these networks provides “reference services to educational objects.”  This section, specifically the notion of understanding machines by dismantling them, reminded me of one out-of-school childhood experience.  If an appliance broke in my house, my dad would give it to my sister and me instead of tossing it out.  Armed with a handy set of screwdrivers and pliers, we proceeded to destroy the thing, learning a bit about how the item worked in the process.  In particular, I remember doing this with a hair dryer (which, in fairness, is a pretty boring thing to tear apart).  With each new piece of junk, my sister and I began to understand “how stuff works,”  or at least “the pieces you needed to get stuff to work.”  We never bothered to put the appliances back together – destruction was much more fun.

Broken appliances that can be opened, prodded, and dissected with a screwdriver are potential teaching tools for elementary-school-aged kids.  To my knowledge, there is no classroom unit on “junk.”  Further, we would need to use “old junk.”  As Illich points out, modern junk tends to be impenetrable – would you rather take apart a rotary phone or an iPhone?  A typewriter or a laptop?

And on the other end of the spectrum — putting things together — there are some great “build a bike” programs (including one in Providence, RI) where you learn how to assemble your own bike and get to take home the final product.  Maybe as a kid you learn how to take junk apart, and as an adult you learn how to put it back together?


Reading: Two Selections by Brenda Laurel, available from the New Media Reader.

  1. “The Six Elements and the Causal Relations Among Them.” Computers as Theatre, 49-65. 2nd ed., 1993.
  2. “Star Raiders: Dramatic Interaction in a Small World,” Ph.D. Thesis, Ohio State University, pp. 81-86, 1986.

I am going to tackle the task of identifying a form of human-computer interaction (HCI) that has some/most/all of Aristotle’s six qualitative elements of drama:

  • Enactment: All that is seen
  • Melody (Pattern): All that is heard
  • Language: Selection/arrangement of words
  • Thought: Inferred processes leading to choice
  • Character: Groups of traits, inferred from agents’ patterns of choice
  • Action: The whole action being represented.

This is a tall order, in part because we must keep in mind that “the whole action must have a beginning, a middle, and an end” for it to be a satisfying plot.  Video games, TV, and movies all have this notion, as sovink77 has written about in her post.

Here’s one technology that, if it becomes less pricey, may bring a new dimension to human-computer entertainment.  The Cave Automatic Virtual Environment (or CAVE) looks like a very boring room – a “box” with white walls.  However, when you add a bunch of projectors along with head- and hand-tracking capabilities, the CAVE becomes a 3D interactive world.  In grad school my friends modeled bat flight, wrote 3D dynamic poetry, and developed virtual painting techniques using the CAVE.  Researchers at UC Davis have also pioneered this work in virtual reality, for example with their augmented sandbox.

However, none of these examples of the CAVE follow the notion of a storyline.  If we can interact with a virtual world in this way, we’re getting closer to an interactive video game.

Once we have this environment, I believe we will have all six elements Laurel described in human-computer activity.  It is still way too expensive to build your own CAVE in your living room – but don’t bother: Microsoft has already filed a patent for it.

“Transmission of intelligence”

Reading: “Will There Be Condominiums in Data Space?” by Bill Viola.  Video 80(5):36-41. 1982.

The notion of “data space” has changed drastically with the advancement of technology. I enjoyed Viola’s examples of data space in history as elaborate memory systems, “mnemo-technics.”  My example of data space highlights the brain’s amazing ability to adapt to new information.

In the 1980s, with the possibility of merging computers and video, Viola writes about the potential shift from models of the eye (video) and ear (music) to models of “thought processes and conceptual structures in the brain.”

It was Nikola Tesla, the original uncredited inventor of the radio, who called it “transmission of intelligence.”  He saw something there that others didn’t.  After all these years, video is finally getting “intelligence,” the eye is being reattached to the brain.

In fact, there is a literal example of “the eye being reattached to the brain” in neuroscience.  Instead of connecting videos to computers, researchers are connecting brains to computers.  For example, neurophysiologist Andrew Schwartz at the University of Pittsburgh has made headway for brain-controlled prosthetics in a long-term study.

The Modular Prosthetic Limb was designed by Johns Hopkins University’s Applied Physics Laboratory and funded by DARPA.  From The Thought Experiment.

“Neuroprosthetics” have huge implications for people paralyzed by injury or disease (such as ALS).  While brain-computer interfaces may still sound like science fiction, the area of research has been around for decades.  In the early 2000s I learned about Johnny Ray, who suffered a stroke in 1997 that left him with locked-in syndrome, unable to move or speak.  In a study at Emory University, a relatively simple surgery allowed him to move a computer cursor with his thoughts, giving him the power to spell using a visual keyboard and communicate for the first time in years.  Just like the mnemo-technics that allowed people to manage pieces of information, these brain-computer interfaces have the potential to restore physical control by remapping movement in the brain.

Logo for 3D printing is (almost) here

Reading: “Personal Dynamic Media” by Alan Kay and Adele Goldberg.  Computer 10(3):31-41. March 1977.  Available as a sample chapter from the New Media Reader.

Here, Kay and Goldberg describe the potential of their Dynabook to influence the lives of any person as a creative and educational outlet.  I am impressed that they have users from a range of ages and expertise levels, from children to professional animators and musicians.  In the late 60s, they began to study how children interact with technology:

At that time we became interested in focusing on children as our “user community.”  We were greatly encouraged by the Bolt Beranek and Newman/MIT Logo work that uses a robot turtle that draws on paper, a CRT version of the turtle, and a single music generator to get kids to program.

Logo instructions to make a square.

If you don’t know what Logo is, you are missing out.  It is a programmable turtle icon that moves around a plane and draws lines according to the instructions you give it.  I loved this guy as a kid – in fact, I didn’t realize until college that I had ever programmed before, since Logo felt more like a “game” than a “computer program.”  Ed Price from Microsoft has a nice history of the Logo turtle.
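To show what those turtle instructions actually do, here is a minimal sketch in Python (not actual Logo) of a tiny turtle interpreter.  The instruction list mirrors Logo’s square-drawing program, REPEAT 4 [FORWARD 100 RIGHT 90], unrolled; the interpreter itself is my own illustration.

```python
import math

def run_turtle(instructions):
    """Interpret Logo-style (command, amount) pairs and return the
    points the turtle visits.  The turtle starts at the origin facing
    "up" (a heading of 90 degrees), as in Logo."""
    x, y, heading = 0.0, 0.0, 90.0
    points = [(x, y)]
    for op, arg in instructions:
        if op == "forward":
            x += arg * math.cos(math.radians(heading))
            y += arg * math.sin(math.radians(heading))
            points.append((round(x, 6), round(y, 6)))
        elif op == "right":
            heading -= arg  # turning right decreases the heading
    return points

# REPEAT 4 [FORWARD 100 RIGHT 90], unrolled into four pairs:
square = run_turtle([("forward", 100), ("right", 90)] * 4)
# visits (0,0) -> (0,100) -> (100,100) -> (100,0) -> back to (0,0)
```

Four forward-and-turn steps bring the turtle back to its starting point, tracing the square – exactly the kind of “game” that turns out to be programming.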

Our task for the seminar was to share a nugget and/or app that fulfills Kay and Goldberg’s predictions.  I am amazed by how relevant many themes from this seminar remain.  Just this week I read a blog post about Madeup, developed by Chris Johnson at the University of Wisconsin–Eau Claire.  It is basically Logo for 3D printing.  With a few simple commands, we can build blocks, tubes, and other shapes that can be printed on a 3D printer.  Well, it’s not completely done yet – there are still 34 days to go for his Kickstarter campaign.

Screenshot from Kickstarter video.