Open Source, Portable Usability Testing Lab: Part 1

Note: The pipeline in this post is horribly inefficient and lacks audio. Please see Ray Strode’s blog post on usability video pipelines for a much-improved version of it.

Thanks to all the folks who responded so quickly to help out on my last blog post! Because of your help, a bug got filed and fixed upstream in gstreamer, and a new build of gstreamer containing the fix for Fedora is on the way.

So I bet you were wondering why I was so interested in getting AVF videos into gstreamer, right? Well, I’m going to tell you anyway.

I’m in the process of putting together a portable usability testing lab. The key component of this lab is a quad-input video mixer / DVR unit. It can be hooked up to three cameras and one scan converter, so you can have three panels showing the user and the testing environment and one panel showing the screen of the system they’re using. The particular unit I decided to get is the AVer Media AVerDiGi EB1304NET SATA+.

Now, there are a couple quirks to the EB1304. The first I noticed was that its audio input/output jacks are, well, a little unique:

Yes. It takes bare wires with little metal clips. I haven’t tested it yet, but the fine technical support folks at CCTV Wholesalers (who, by the way, I highly recommend for price, shipping speed, and speedy & helpful support) assure me it’ll work, so that’s something I’ll be testing out soon.

The second quirk, of course, is more crucial to the usefulness of the unit: the video files it outputs. While this little unit is an embedded Linux product (woo!), the file formats it produces are unfortunately not so open and standard. As I mentioned in my previous blog post about AVF files, it provides you with both an AVF file (which is really just an AVI file with a slightly tainted header) and a TBL file for each of the four video inputs. These play fine in mplayer, vlc, and (very soon) gstreamer-backed players. But one of the reasons I want to capture four videos at once is to produce usability testing videos that show both what the user is working on and their reaction, ideally a single video with all four inputs in a quad-split screen. AVer Media provides a CD-ROM with a bunch of Windows-only programs to do this in software, and they don’t even run in Wine. So I wanted to get the AVFs working in gstreamer so that I could use gstreamer pipelines to achieve this quad-split video.
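If you’re curious how close that “slightly tainted” header is to a plain AVI, a quick way to peek is to read the file’s 12-byte RIFF header. This is just a generic RIFF sketch, not anything AVF-specific; a standard AVI reports the magic b'RIFF' and the form type b'AVI ':

```python
import struct

def riff_info(path):
    """Read the 12-byte RIFF header: magic, chunk size, and form type.
    A standard AVI file returns (b'RIFF', <size>, b'AVI ')."""
    with open(path, "rb") as f:
        magic, size, form = struct.unpack("<4sI4s", f.read(12))
    return magic, size, form

# e.g. riff_info("2009_08_25_21_07_59_ch1.avf")
```

Comparing the output for an .avf against a known-good .avi shows exactly where the header diverges.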

Well, since the updated gstreamer for Fedora wasn’t yet available this morning, I took Nicu’s advice and used mencoder to convert the AVF files to AVI:


mencoder -oac copy -ovc copy -o ch1.avi 2009_08_25_21_07_59_ch1.avf
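Since each recording produces four channel files, a tiny shell loop saves some typing. The timestamp prefix below is taken from the example filename above; substitute your own recording’s. The loop just prints each mencoder command so you can sanity-check the filenames first; swap the echo for eval to actually convert:

```shell
#!/bin/sh
# Convert all four channels in one go (stream copy, no re-encode).
prefix="2009_08_25_21_07_59"
for n in 1 2 3 4; do
  src="${prefix}_ch${n}.avf"
  dst="ch${n}.avi"
  cmd="mencoder -oac copy -ovc copy -o $dst $src"
  echo "$cmd"   # replace with: eval "$cmd" to run the conversion
done
```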

Next, the ever-amazing Ray Strode hacked on putting together the gstreamer pipeline necessary to stitch them together into one video:


gst-launch -v \
  filesrc location=ch2.avi ! decodebin ! videoscale \
    ! video/x-raw-yuv,width=720,height=480 \
    ! videobox left=-720 top=-480 border-alpha=0 \
    ! videomixer name=right ! videomixer name=three ! videomixer name=all \
    ! alpha ! ffmpegcolorspace ! theoraenc ! oggmux ! filesink location=all.ogv \
  filesrc location=ch1.avi ! decodebin ! videoscale \
    ! video/x-raw-yuv,width=720,height=480 \
    ! videobox border-alpha=0.0 left=-720 ! alpha ! ffmpegcolorspace ! all. \
  filesrc location=ch4.avi ! decodebin ! videoscale \
    ! video/x-raw-yuv,width=720,height=480 \
    ! videobox border-alpha=0.0 top=-480 ! alpha ! ffmpegcolorspace ! three. \
  filesrc location=ch3.avi ! decodebin ! videoscale \
    ! video/x-raw-yuv,width=720,height=480 ! alpha ! ffmpegcolorspace ! right.
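If that one long command makes your eyes water, the same pipeline description is easier to audit when it’s assembled from per-channel pieces. Here’s a rough Python sketch that does nothing but build the string (the quadrant comments are my reading of the videobox offsets); the result can then be handed to gst-launch, or to gst.parse_launch() in gst-python:

```python
# Assemble the quad-split pipeline description from per-channel pieces.
SCALE = "decodebin ! videoscale ! video/x-raw-yuv,width=720,height=480"
OUT = "alpha ! ffmpegcolorspace"

branches = [
    # ch2: pushed right and down (bottom-right quadrant); this branch also
    # hosts the mixer chain and the Theora/Ogg encoder writing the output
    f"filesrc location=ch2.avi ! {SCALE} "
    "! videobox left=-720 top=-480 border-alpha=0 "
    "! videomixer name=right ! videomixer name=three ! videomixer name=all "
    f"! {OUT} ! theoraenc ! oggmux ! filesink location=all.ogv",
    # ch1: pushed right (top-right quadrant), into the 'all' mixer
    f"filesrc location=ch1.avi ! {SCALE} "
    f"! videobox border-alpha=0.0 left=-720 ! {OUT} ! all.",
    # ch4: pushed down (bottom-left quadrant), into the 'three' mixer
    f"filesrc location=ch4.avi ! {SCALE} "
    f"! videobox border-alpha=0.0 top=-480 ! {OUT} ! three.",
    # ch3: stays top-left, into the 'right' mixer
    f"filesrc location=ch3.avi ! {SCALE} ! {OUT} ! right.",
]

pipeline = " ".join(branches)
print(pipeline)
```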

AWESOME, right? :) Here’s the result (although I took the video of me out and replaced it with another copy of the screen video, because I’m shy :) ):



Download in OGV format

As we go through this process, I’m making sure to document everything, so anybody else who wants to put together a similar kit without having to run proprietary software can learn from this experience. One of my next blog posts is going to be a rundown of all of the equipment I ordered for our kit, with photos and writeups of how to use it all, so if you are interested in this, you can look forward to that.

Next up is figuring out that little beastie of an audio input and then updating the pipeline to handle the audio. Wish me luck! :)

About Máirín Duffy

Máirín is a principal interaction designer at Red Hat. She is passionate about software freedom and free & open source tools, particularly in the creative domain: her favorite application is Inkscape. You can read more from Máirín on her blog at blog.linuxgrrl.com.

Discussion

15 thoughts on “Open Source, Portable Usability Testing Lab: Part 1”

  1. Nice! Although I can’t say I’ve ever found a use for anything more than two video sources in a usability test myself…

    Posted by Lab Rat | August 26, 2009, 2:59 pm
• Yeah, I don’t know that it would be useful to have all three cameras all the time, but I’m thinking in particular scenarios it could be useful, so I figured I’d get the 3rd camera just in case. (Worst-case scenario, it serves as a spare if one breaks.)

      Posted by mairin | August 26, 2009, 3:01 pm
  2. Do you know of any plans, via something like Telepathy, to take multiple laptops’ embedded webcams over an ad-hoc wireless network and produce a set of video feeds… as a poor man’s approach to this?

    -jef

    Posted by jef spaleta | August 26, 2009, 3:18 pm
    • I don’t, but that sounds like an awesome idea. :)

      This whole setup cost ~$800, btw. Cheaper than a single laptop.

      Posted by mairin | August 26, 2009, 3:20 pm
      • That’s good to see it’s not too expensive. I’m sure that puts it into a price range where some people will duplicate your setup. But I was thinking more along the lines of how to cheaply arm LUGs or Ambassadors so they can be your minions using technology they’d already have on hand. I think with the right Telepathy-based tools we could put 3 or 4 people with laptops on a local ad-hoc wireless network anywhere in the world, give them a set of instructions, and they should be able to produce usability studies worth reviewing… I think.

        Posted by jef spaleta | August 26, 2009, 3:31 pm
      • @Jef yep good point. Well, at the very least, if they have 4 videos they can now stitch them :)

        Posted by mairin | August 26, 2009, 3:37 pm
  3. This is a fantastic step forward! I was just looking at videobox the other day while considering how to get a PiP function together for doing on the spot captures of presentations (i.e. the slide deck + the speaker in a PiP + audio feed). GStreamer is really powerful and I don’t think anyone’s really scratched the surface of what can be done with really good basic tools put together in clever ways.

    I did find through my PulseCaster project that it’s a *lot* easier to read and understand gstreamer pipelines if you do them in Python. That’s a real bear of a command line you got there!

    Posted by Paul Frields | August 26, 2009, 5:16 pm
  4. Almost a year ago I read a post on Planet GStreamer where a new element, ‘multifilesrc’, was discussed. The author had made a nice demo video showing outputs from three webcams (acting as CCTV) and a still frame being mixed into a single video on the fly. The main difference there was that the webcams were outputting JPEG frames instead of continuous video. But the important point to consider is ‘on the fly’.
     I remember the author used the setup in some library or a lab. If I find the link to the post by any chance, I will let you know.

    People often underestimate the power of gstreamer.

    Posted by Onkar | August 27, 2009, 6:34 am
  5. Indeed, Ray is amazing, I bow to his geekiness for being able to put together that scary GStreamer pipeline (how useful would a GUI be that allowed ordinary people to generate something like that…)

    …and your signature, the lovely couple on the bottom-left, is definitely a nice touch.

    Posted by Nicu | August 28, 2009, 3:24 am
  6. You could also use a variable to select the newer decodebin2 instead of decodebin: use $DECODEBIN in the script, say, and set it beforehand to either one. To boost the audio, there is also the plain volume element, which can boost up to a factor of 10.0; it should be a tad faster. But with all the transcoding it won’t matter much :)

     I’ll take a look at your comments to update the man pages a bit. This kind of feedback is useful. Thanks!

     Posted by Stefan Kost | October 16, 2009, 4:43 am
  7. Ray and Máirín,
     I have been using AVerMedia/AVerDiGi products for several years and have always been looking for a Linux solution (they used to have the DX5000 cards for Linux, but those still use an ActiveX client; now all that’s left is ZoneMinder for the PC). I was thrilled to see they were using Linux, and I have been requesting the source code for their embedded surveillance DVR with no success. Did you accomplish this, and if not, could you help me in any way to do so? I have tried with a Toshiba DVR as well; Toshiba says they are GPL’d, but they still will not give out the source code.

     Chris

     Posted by chris | March 20, 2010, 8:03 pm

Trackbacks/Pingbacks

  1. Pingback: Donna Benjamin (kattekrab) 's status on Wednesday, 26-Aug-09 20:44:05 UTC - Identi.ca - August 26, 2009

  2. Pingback: Open Source, Portable Usability Testing Lab: Part 2 – The Parts « mairin - October 17, 2009
