Caught in the act!
Accident reconstruction from captured video footage can prove more than you might think
Your client has suffered serious injury in a vehicular accident. Liability is not clear, and there is little physical evidence left at the scene – no skid marks, no solid witnesses and very little on which to build a foundation for reconstructing the event. Without physical evidence such as skid marks, it is difficult to determine critical factors like speed, point of impact, acceleration and the positions of people and vehicles leading up to the incident.
The chances of determining what happened look bleak, and then you discover that video footage of the accident exists, captured on a nearby security camera. Things would be looking up, if only you could make use of the footage.
Most often, these events are captured by chance, when cameras intended for another purpose happen to record some portion of the incident. Typical sources of this footage include: video surveillance cameras outside gas stations, fast-food restaurants and other late-night businesses; pedestrians and tourists carrying video cameras for personal use; onboard video cameras mounted on municipal buses and taxi cabs; and now, even governmental agency cameras mounted on the street to capture vehicle violations at select intersections or areas of roadway.
Was anything recorded?
The first step in using this footage is simply discovering its existence. Knowing that such footage may exist, an investigator can begin by surveying the areas and establishments likely to have cameras. Once a potential camera and footage source are found, the next step is to determine whether the camera was recording during the time in question. Cameras are often mounted “for show” and may not have been active or functional at the relevant time. It is also important to note that many video surveillance systems record in a “loop sequence” – the camera overwrites older footage at a set interval, sometimes as short as six hours. Once the footage has been overwritten, it is gone forever and unavailable for analysis. For this reason, it is critical to determine whether footage exists as quickly as possible after the incident.
Once the footage has been obtained, the next step is to determine whether it has any value in reconstructing the event. Was it recording during the correct timeframe? Was it aimed toward the event area? Did it capture the actual event? Although I have yet to see a case where the actual event was captured in its entirety, even peripheral information can be exceptionally powerful in helping determine what happened or, perhaps more importantly, what did not happen.
How to view the video
In order to see what the footage holds, it is of course necessary to be able to view it, and there are a few technical considerations involved in this step. Video of this type is typically recorded in one of two ways: onto tape or digitally. If the video was recorded to tape, then the physical tape must be obtained so a copy can be made for your viewing. In that case, the major consideration is the type of tape – VHS, S-VHS, BetaCam, Hi-8 or Digital Video – and finding a player that can play that style of tape.
If the footage was recorded digitally, which is the most common scenario we see, then additional considerations come into play. Typically, digitally recorded footage is not written to a physical tape but rather to a hard drive like those found in desktop and laptop computers. Rather than obtaining a physical copy of a tape, the investigator will obtain a computer file or series of files. These can be copied to a computer storage device such as a floppy, CD or DVD. In addition to the computer files of the footage, it may be necessary to obtain a viewer as well. Viewers are software programs, often customized to the video system used, that allow display of the captured footage. Some systems are advanced and offer an array of tools to assist in the viewing and analysis of the footage. In any event, both the footage itself and the software must be obtained.
Video processing overview
First, the video is viewed to locate the footage captured near the time of the event. Typically, the video expert will review the video footage multiple times in order to get a sense of the quality of the footage and the type of information it holds. Where is the camera looking? Is the view clear or grainy? What portion of the event or event area does the camera see?
Once the initial viewing is completed and we have determined the portions that are of interest, the footage is “digitized.” The video is separated into individual frames or pictures, which are enhanced digitally to improve clarity and picture quality. These still frames each depict the viewed area at a single point in time – a series of sequential “snapshots” of the area. The rate at which the camera records the video must be determined in order to use the sequence of frames as a timer. The recording rate, or “refresh rate,” determines the smallest time interval that can be analyzed for reconstruction. This is one of the most powerful aspects of the footage, as each snapshot or frame records the scene at a given time, with a consistent interval between frames.
This quality of the captured footage provides a critical piece of information for analyzing factors such as speed and acceleration. Typically, today’s security cameras record at a rate of around 1.5Hz, or one new picture every 2/3 of a second. Handheld video cameras typically record at a rate of 30Hz, or one new picture every 1/30 of a second (30 snapshots every second). Using a 1.5Hz security camera as an example, events that are seen by this camera and are visible in the footage two frames apart occurred 2/3 x 2 = 1.33 seconds apart. This method can be used to determine the time between any two events or objects recorded by the video, such as the passing of pedestrians or traffic signals in the case of a camera mounted on a city bus.
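For readers who want to see the arithmetic spelled out, here is a minimal sketch of the frame-interval calculation described above. The recording rates and frame numbers are illustrative assumptions, not values from any particular recorder.

```python
# Minimal sketch of the frame-interval arithmetic described above.
# Recording rates and frame numbers here are illustrative assumptions.

def elapsed_time(frame_a, frame_b, frames_per_second):
    """Time, in seconds, between two frames for a camera recording at the given rate."""
    seconds_per_frame = 1.0 / frames_per_second
    return abs(frame_b - frame_a) * seconds_per_frame

# Security camera at 1.5 Hz (one new picture every 2/3 second):
print(round(elapsed_time(100, 102, 1.5), 2))    # 1.33 seconds between frames two apart

# Handheld camcorder at 30 Hz:
print(round(elapsed_time(100, 102, 30.0), 3))   # 0.067 seconds
```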
Scene measurements
After the individual frames are digitized and enhanced and the recording rate is determined, the event scene is surveyed with a Total Station or, preferably, a 3D Scanning Laser. The measurements are used to create a 3D working model of the scene, providing distances between objects seen in the video. Coupling these distances with the elapsed times determined from the refresh rate, it is now possible to calculate the speed, position and acceleration of any object seen in the footage, as well as that of the vehicle carrying the camera in the case of a vehicle-mounted system. The working model can then be used as the basis for a compelling 3D animation of the event, showing the relative positions and motions of all objects seen in the footage.
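To make the calculation concrete, the sketch below shows how frame-to-frame distances taken from a surveyed scene model combine with the camera’s frame interval to yield speed and acceleration. All of the numbers are hypothetical, chosen purely for illustration.

```python
# Illustrative only: speed and acceleration from surveyed distances and the
# camera's frame interval. The frame rate and distances below are hypothetical.

FRAME_RATE_HZ = 1.5            # assumed security-camera recording rate
DT = 1.0 / FRAME_RATE_HZ       # time between consecutive frames, in seconds

# Distance (feet) a vehicle travels between consecutive frames, measured
# between landmarks identified in both the footage and the 3D working model.
distances_ft = [14.0, 16.0, 18.0]

speeds_fps = [d / DT for d in distances_ft]            # feet per second
speeds_mph = [v * 3600 / 5280 for v in speeds_fps]     # converted to mph

# Average acceleration over each successive frame interval (ft/s^2).
accelerations = [(v2 - v1) / DT for v1, v2 in zip(speeds_fps, speeds_fps[1:])]

print([round(v, 1) for v in speeds_mph])      # [14.3, 16.4, 18.4]
print([round(a, 1) for a in accelerations])   # [4.5, 4.5] (the vehicle is speeding up)
```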
Case example
The following example from a recent case illustrates how an accident can be reconstructed accurately and with proper foundation using the video from a bus-mounted video camera.
A Los Angeles city bus collides with a bicycle and rider while making a right turn in an intersection. The bike rider is crushed under the bus and suffers major injuries. The defense alleges that the bike hit the bus and that the bus was acting in accordance with traffic and safety rules. An accurate reconstruction is needed to determine who is at fault.
• The Tools – 3D Laser Scanner
The 3D Laser Scanner is an advanced survey instrument that is used to remotely measure the surface geometry of sites and objects with extraordinary completeness, accuracy and speed. Unlike traditional surveying tools, which record only selected points within a scene, a 3D laser scanner automatically blankets the scene with millions of closely spaced point measurements. The resultant “point cloud” is used to create extremely accurate 3D models of everything in the scene that the laser “sees,” including colors, so that every road stripe, sidewalk crack and object, down to the leaves on every tree, is captured and added to the 3D model. No other method of scene or object measurement comes close to the level of accuracy demonstrated by the 3D laser scanner. A typical scan may take five to 20 minutes; scans are usually done from several different vantage points in order to capture geometry for the entire scene or object.
• Laser-assisted Photogrammetry
In our case example, the velocities and trajectories of both bicycle and bus must be determined and synchronized in order to reconstruct the accident. The video camera onboard the bus had captured several pictures of stationary objects located on the sidewalk. Using the 3D Laser Scanner, the entire scene was scanned and recreated “virtually” in the 3D working model. The working model provided the exact locations of the objects seen by the onboard video camera, and photogrammetry techniques were used to determine the velocity and acceleration of the bus at each frame. This determination would not have been possible were it not for the video footage, and the accuracy of the analysis was maximized by using the 3D laser scanner. Reconstruction parameters for the bicycle were derived by the expert.
As is often the case, the camera did not film the actual impact. However, since the velocity and trajectory of the bus were determined for the three seconds prior to impact through analysis of the video, the point of impact could be derived. The derived motion parameters for bicycle and bus were imported into the 3D working model and used to determine that the bus did not stop in the intersection before turning into the crosswalk, as was required by law and testified to by the bus driver. For the plaintiff, the working model demonstrated that, had the bus stopped at the intersection before it made the turn, the accident could have been avoided.
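As a purely hypothetical illustration of this kind of check, the sketch below tests whether a stop appears anywhere in a set of per-frame positions like those derived from onboard footage; none of the numbers come from the actual case.

```python
# Hypothetical illustration of testing whether a vehicle stopped, using
# per-frame positions of the kind derived from onboard footage. The frame
# rate and positions below are invented for illustration only.

DT = 1.0 / 1.5   # seconds between frames for an assumed 1.5 Hz camera

# Distance traveled along the approach (feet), measured in the 3D working
# model at each analyzed frame; negative values are before the stop line.
positions_ft = [-40.0, -26.0, -13.0, -1.0, 10.0, 21.0]

# Frame-to-frame speeds in feet per second.
speeds_fps = [(b - a) / DT for a, b in zip(positions_ft, positions_ft[1:])]

# A true stop would show at least one near-zero frame-to-frame speed.
stopped = any(v < 1.0 for v in speeds_fps)   # treat under 1 ft/s as "stopped"

print([round(v, 1) for v in speeds_fps])           # [21.0, 19.5, 18.0, 16.5, 16.5]
print("Stop detected before the turn:", stopped)   # False in this illustration
```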
Animation analysis
The results of the analysis are illustrated graphically in a real-time computer animation depicting the motions of the bus and bicycle up to and including impact. In this case, and in keeping with the reconstruction methodology, the attorneys used a videotaped deposition to record the testimony of the bus driver as she recounted her actions leading up to the accident. In her testimony, the bus driver claimed that she had in fact stopped in the intersection prior to passing through it and had properly checked both mirrors and cleared the way before entering the intersection.
Split-screen impeachment
From the laser-based analyses performed with the onboard video camera footage, it is apparent that the driver did not act as she testified. The discrepancy between her testimony and the demonstrated facts of the reconstruction was highlighted using a split screen. On one side of the screen, the videotaped deposition testimony of the bus driver recounting her actions was displayed. As she recounted her actions step by step, the other side showed the animated results of the analyses and highlighted each discrepancy between her recorded testimony and the facts as determined by the working model. The effect was a very powerful and compelling impeachment of the bus driver. The case settled in favor of the plaintiff.
Conclusion
Reconstructing accidents that leave little in the way of physical evidence is always a challenge for adjusters, attorneys and experts alike. Laying a solid foundation for the liability argument requires data related to the event. Illustrating the validity and fidelity of the reconstruction requires corroborating evidence. Both can be obtained through the analysis of video footage. In today’s increasingly watched and recorded society, such footage has become more common and provides the attorney with a powerful new tool in the field of reconstruction.
Craig Fries
Bio as of August 2007:
Craig Fries founded Precision Simulations, Inc. (PSI) in 1997 after working in forensic animation and conducting scientific visual-performance research and NASA-sponsored studies. PSI specializes in computer-based reconstruction and condemnation litigation graphics. Hallmarks include the first two forensic animations created from laser scanning data admitted into court trials (2002, HI and CA). www.precisionsim.com
Copyright © 2025 by the author.
For reprint permission, contact the publisher: www.plaintiffmagazine.com