Video Capture and Analysis: 5 Ways You’re Hurting Your Video Analysis

This blog post is part two of a two-part series on using video records in contextual inquiry.

In part one, "Video Capture and Analysis: 5 Reasons to Film Your Research," we discussed five reasons to film your research. One of the biggest advantages of filming is that you can analyze the video after the research has concluded.

In video analysis, you codify behaviors or events to put quantitative values to qualitative observations. These quantitative values can be a useful way to quickly and simply communicate your findings. Video analysis has been a staple of behavioral research methods for a long time, but there’s surprisingly little information about how to do it effectively.
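As a minimal sketch of what codifying behaviors into quantitative values can look like, coded events are often reduced to simple tallies. All event labels and timestamps below are invented for illustration:

```python
from collections import Counter

# Each coded event: (timestamp in seconds, code label).
# The codes and timestamps here are hypothetical.
coded_events = [
    (12.4, "left_gaze"),
    (18.0, "right_gaze"),
    (25.7, "left_gaze"),
    (31.2, "off_screen"),
    (44.9, "right_gaze"),
]

# Tally each behavior to turn qualitative observations into counts.
tallies = Counter(code for _, code in coded_events)
print(tallies)  # Counter({'left_gaze': 2, 'right_gaze': 2, 'off_screen': 1})
```

Counts like these are what make it quick to communicate findings ("participants looked left twice as often as right"), provided the underlying coding is consistent.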

This could be because video analysis plans are dependent upon the needs of a specific study or project. The results are usually not intended to have external validity (relevance or meaning outside of the context of the study). The parameters of analysis are defined by the researcher and these may have little value outside the context of that study.

Video analysis is, by nature, subjective: the researcher decides which variables to code for and how to code them. As long as the method of analysis has internal validity (the results actually demonstrate the effect you say they demonstrate), this subjectivity isn't a problem. But it does make internal validity difficult to maintain, since there are no standard approaches against which to compare your plan.

However, there are ways to set up your analysis so that you maintain as much internal validity as possible. Here are five ways you can hurt the internal validity of your video analysis, and a solution for each.

5. You’re not getting multiple camera views.

How this hurts internal validity: When you're defining the variables to code for, you make judgment calls about how to handle variations: whether to code them separately or code them as the same. With only a single view of a participant or event, you're not getting the full picture, which means you could be missing crucial nuances that would better define the parameters of that variable.

Example: Imagine you have a participant looking at a screen. The screen displays a right stimulus and a left stimulus. You might use a single camera view, focused on the participant’s face, to count how many times they looked at the left or right stimulus. But an additional view could tell you what you wouldn’t know otherwise: that every time the participant looked left, they were actually looking at a distraction off-screen that had nothing to do with the test, and never once looked at the left stimulus.

How to fix it: Using at least two complementary views will give you a more comprehensive understanding of behaviors, so that you're actually counting what you think you're counting.

4. You’re not skimming the footage first.

How this hurts internal validity: Video footage gives a limited understanding of a participant’s session or site visit, and things might not look exactly the same in every recording. Without a contextual understanding of the entire session, you might not correctly code for events or behaviors.

Example: In the right/left stimulus example above, imagine that the participant appears to be looking left for the entire session. You code for a left gaze until, later in the video, they actually do look left and you realize they had been sitting at an angle that only made it appear they were looking left.

How to fix it: Build in time to quickly skim the footage without analyzing it. It will give you a chance to become acclimated to the video and code the footage more consistently.

3. You’re not limiting the scope of the analysis.

How this hurts internal validity: Too many focal points make it hard to reliably track the most important variables. It's tempting to make your analysis comprehensive, but if you try to capture everything, you risk not capturing anything in a meaningful way. A too-detailed analysis might also not be applicable across all sites or participants, leaving you with an overly ambitious coding scheme that doesn't fit the data you collect.

Example: In the right/left stimulus study, you have decided to track participant gaze (the most meaningful, central variable). You add variables like heart rate variability, saccade measurements, body posture, head tilt, and number of blinks. This adds significant effort to your video analysis, and puts the quality of your main data at risk. At the end of the project, you realize that only your original variable had any relevance to your findings.

How to fix it: Choose limited focal points that are likely to be common to all participants or cases, and build in time for multiple passes of video. For focal points that seem worth further investigation, note where they occur in the footage so you can easily go back to key timestamps later.
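Noting where interesting moments occur can be as simple as a running list of flags, sketched here in Python (the structure and helper name are hypothetical):

```python
# A minimal sketch for flagging moments worth a second pass,
# so you can jump back to key timestamps on a later viewing.
follow_ups = []

def flag(timestamp_s, note):
    """Record a timestamp (in seconds) and a short note for later review."""
    follow_ups.append({"t": timestamp_s, "note": note})

flag(95.0, "possible off-screen distraction")
flag(210.5, "gaze pattern changes after task switch")

# Review flags in chronological order on the next pass.
for item in sorted(follow_ups, key=lambda x: x["t"]):
    print(f"{item['t']:>7.1f}s  {item['note']}")
```

Keeping these notes separate from your coding scheme lets you investigate tangents later without bloating the set of variables you track on every pass.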

2. You’re not letting your analysis plan evolve along with the research.

How this hurts internal validity: Research goals naturally evolve over the course of a project, and your view of the events you're studying may change as well. You might be reluctant to adjust your video analysis plan, since that often means redoing work, and revisiting your coding based on new information can feel like it undermines internal validity. But avoiding that adjustment could leave you with data that's too vague, or with variables that don't ultimately align with the findings of the project.

Example: In the right/left stimulus study, you realize that the right stimulus actually has two different points of salience. You could re-code for a “right look” and a “far right look.” This is something that isn’t immediately obvious from the footage and therefore isn’t something you originally coded for, but it has implications for your results.
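A re-coding pass like the "right look" versus "far right look" split can often be applied mechanically to events you've already coded. In this sketch, the gaze-angle field and threshold are assumptions invented for illustration:

```python
# Hypothetical re-coding pass: split the original "right" code into
# "right" and "far_right" using an assumed salience boundary.
FAR_RIGHT_THRESHOLD_DEG = 20.0  # invented gaze-angle cutoff

def recode(event):
    """Return the refined code for one previously coded gaze event."""
    if event["code"] == "right" and event["gaze_deg"] > FAR_RIGHT_THRESHOLD_DEG:
        return "far_right"
    return event["code"]

events = [
    {"code": "right", "gaze_deg": 12.0},
    {"code": "right", "gaze_deg": 27.5},
    {"code": "left", "gaze_deg": -15.0},
]
refined = [recode(e) for e in events]
print(refined)  # ['right', 'far_right', 'left']
```

If you recorded enough detail on the first pass, an evolution of the coding scheme becomes a cheap transformation rather than a full re-watch.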

How to fix it: Have regular check-ins with your data and think about how the analysis will ultimately be used. Do test runs of putting your data into its final presentation format (e.g., summary spreadsheets, graphic maps with data, etc.). Depending on your sample size, the output should align with and support your primary findings, and it should not surprise you. It should also support any nuances you identify.
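One way to do such a test run is to render your coded tallies into the same table layout as the final spreadsheet, using an in-memory file so nothing is written to disk. The participant IDs, codes, and column names below are invented for illustration:

```python
import csv
import io
from collections import Counter

# Hypothetical coded data: participant -> list of gaze codes.
coded = {
    "P1": ["left", "right", "right"],
    "P2": ["right", "right", "left", "left"],
}

# Dry run: write the summary to an in-memory CSV, exactly as it would
# appear in the final spreadsheet, to check that the output makes sense.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["participant", "left_count", "right_count"])
for pid, codes in sorted(coded.items()):
    counts = Counter(codes)
    writer.writerow([pid, counts["left"], counts["right"]])

print(buf.getvalue())
```

Seeing the data in its final shape early makes it obvious when a variable is too vague or a code never occurs, while there is still time to adjust the plan.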

1. You’re trying to get your primary findings from video analysis.

How this hurts internal validity: Video analysis is meant to support your primary findings, not constitute them. Relying on it to generate findings makes you less likely to choose an effective, consistent, and valuable coding scheme.

Example: Imagine that you did not capture enough information from the participants in your right/left stimulus study during the post-session interview, so you attempt to gather qualitative information from your video analysis instead. But qualitative insights can rarely be pulled from footage alone, and the attempt leads you to code right and left gazes less objectively and less consistently.

How to fix it: Know the purpose and limits of your video capture plan. If the plan is to capture the process, focus on making qualitative inferences in the field rather than trying to extract them later from a limited set of views.

Video analysis is a valuable tool for adding weight to your primary insights and giving your entire team a more nuanced view of your sessions. While this method must be utilized carefully, it adds a new dimension to your results when used well. Coding your footage with internal validity results in findings that are clearer and more compelling to your audience.

This post was edited by Lindsey Stefan.
