Training for Trouble: Accident Response Group Serves Video at Sandia


BEST PRACTICES SERIES

When we think of content management for video assets, we tend to focus on obvious applications such as entertainment, news, and advertising. But as events of the last couple of months have reminded us, there is more to a secure society than simply commerce and media; governmental institutions are crucial as well, and they must be as prepared as possible for any eventuality. A digitized knowledge base can be a valuable tool in support of overall governmental training and response efforts, and in many situations there can be an important role for video, which takes advantage of our visual memory to reinforce the retention of information.

It's early yet to find training initiatives that have arisen directly from "homeland security" issues. But many areas of government have long been "tasked" with assessing and responding to threats in the national security realm, and video is often used in documenting activities and procedures. Historically, technology allowing search and delivery of such video content has generally lagged behind the capabilities available for text-based information. In the last few years, however, the capacity of networks to deal with video has improved, and content management for video has come of age.

In a security context, one example of digitized video's integration into a networked knowledge base is found in the Accident Response Group (ARG) at Sandia National Labs. A "national security laboratory" headquartered in Albuquerque, New Mexico, Sandia is operated by Lockheed Martin and primarily funded by the U.S. Department of Energy. The organization handles research, design and development of all non-nuclear components used in U.S. nuclear weapons programs, and is involved as well in programs related to energy, critical infrastructure, non-proliferation, materials control, and emerging threats.

ARG's searchable video database has been implemented using the Screening Room package of applications from Convera in Vienna, Virginia. Formed in December 2000 from Excalibur Technologies and Intel's Interactive Media Services Division, Convera targets corporate and institutional markets with products for securely accessing, indexing, and searching rich media content—text, images, audio, and video—across interconnected computer networks. Among its public-sector clients are the FBI, NASA, the Nuclear Regulatory Commission, U.S. military services, the Departments of Justice and State, and various domestic and foreign intelligence agencies.

Ready to Deploy
"The Accident Response Group training program was started back in 1968," says Michael Krawczyk, a systems analyst involved in Sandia's video database project. "The primary motivation is to have Subject Matter Experts [SMEs] ready to deploy anywhere in the world in response to a nuclear weapon accident."

Along with U.S. forces support staff, the SMEs are the "target audience" for ARG's video documentation. The video archives—all raw footage with little additional post-production—cover several types of material, including disassembly videos, render-safe procedures, safety courses, and interviews with people who were on the scene at accidents.

"The availability of videotaped materials from past accidents and exercises allows the SMEs to review how similar situations were handled previously," Krawczyk says. "The procedures videos allow the SMEs to see the designated Standard Operating Procedures (SOP) for handling weapons."

Historically, ARG's video materials were accessible through a physical tape library. Users would go to the library, search a text database for footage on the topic they wanted to see, and request the tape or tapes from the librarian. With the tape in a VCR, they would then look for the section pertaining to the subject of their search. Less than efficient in the best of times, this approach was deemed to be especially unsatisfactory as a way of getting information during an actual accident, where decisions with profound consequences might need to be made under extreme time pressure.

Digitizing the video content seemed like an obvious solution, allowing users to search and view footage from their desktops over the facility's Ethernet LAN. A system to handle the job would involve several distinct but interrelated areas: cataloging the available content so that users can search effectively for all footage related to a given topic, ingesting the video content itself efficiently, and delivering that content rapidly to end-users at acceptable image quality.

Krawczyk describes the major requirements for such a system as "electronic capture of the video and support for searching a transcription and seeking playback directly to matching words. Once agreed upon, our original expectations and requirements really never wavered."

While the requirements remained consistent, the approach to meeting them did not. Before his arrival, Krawczyk says, "Sandia was well into the development of an in-house Digital Video Library [DVL]. But it was never 'prime time' software, just demo kind of stuff. It required the software programmer there to make it work."

The in-house approach was eventually abandoned in favor of a commercial off-the-shelf (COTS) solution. "The funding required to develop a full-featured system exceeded the project scope," Krawczyk says. "Buying the best software on the market was the way for us to go."

ARG's needs ran parallel to those of another project at Sandia, a Knowledge Preservation Project in which conversations with retired and soon-to-be-retired nuclear weapons engineers are videotaped in order to pass along knowledge and experience to those responsible for maintaining the weapons in the future. A team of three represented the two projects in evaluating solutions to meet the requirements of both.

"The team attended no less than six different product demonstrations," Krawczyk says. "The major factor was the ability to integrate our own transcription into the vendor's system, and to have an accuracy rate within one minute. That is, after a search is performed, we wanted to be able to click on the link and hear the words said on the video within one minute."

The decision was ultimately made to go with Screening Room. "The first of three systems was installed in December 2000," Krawczyk says. "Currently, there is one unclassified system and one classified system in production mode, and a second classified system is being installed."

Capture and Serve
According to Daniel Agan, Convera's senior vice president for marketing and corporate development, Screening Room works by automatically capturing existing analog or digital video, including live feeds, while simultaneously extracting and cataloging metadata such as closed-caption text and voice soundtracks. It also creates a low-resolution copy of the original video as well as a visual storyboard.

"We don't provide any hardware," Agan says, "although we can and often do make recommendations on the type of hardware that will optimize performance based on the requirements of the user."

The Screening Room software package is made up of four principal components. At the start of the process is SR Capture, a real-time video logger that "ingests" video from a multitude of sources—tape, live broadcast, satellite feed, DVD, etc.—creates a streaming media file, and creates a visual storyboard of key frames based on time-based events and major event changes such as cuts, fades, and dissolves. It also supports enhanced logging in the storyboard with additional frame selection, demarcation of in and out points, and manual text annotation of individual frames and clips.
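
The cut detection that drives this kind of storyboarding can be illustrated in miniature. The sketch below is not Convera's algorithm, just a common baseline technique: compare successive luminance frames by mean absolute pixel difference and flag a hard cut when the difference spikes past a threshold (the frame data and threshold here are invented for the example; fades and dissolves need a gentler, windowed test).

```python
def mean_abs_diff(frame_a, frame_b):
    """Mean absolute difference between two equal-length luminance frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def detect_cuts(frames, threshold=50.0):
    """Return indices where a hard cut is likely: the frame-to-frame
    difference jumps past the threshold."""
    cuts = []
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i - 1], frames[i]) > threshold:
            cuts.append(i)
    return cuts

# Toy footage: three identical dark frames, then a cut to a bright shot.
dark = [10] * 16
bright = [200] * 16
print(detect_cuts([dark, dark, dark, bright, bright]))  # [3]
```

A real logger would decode frames from the video stream and pick a representative key frame from each detected shot to populate the storyboard.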

Agan says capture stations require a fairly high-end PC, "such as a 750MHz dual-Pentium with two video capture boards—usually Osprey 100s. If MPEG-1 or MPEG-2 video encoding is required, then an Optibase MPEG capture board is also needed. At least 512MB of memory is required and a reasonably large hard drive—say 20GB—is recommended."

At Sandia, the intake of video material involves two cases: footage on Betacam videotapes, and footage from existing MPEG-1 files that were converted under the old in-house DVL system. "The majority of the videotapes destined to go into the system already exist," Krawczyk says. "New exercises—and hopefully no new accidents—are taped when they happen. Betacam recorders are typical for taping exercises."

The first intake step for the material on tape is to have the soundtracks manually transcribed by support personnel, which Krawczyk estimates yields 99% accuracy. "Secretarial staff and contractors perform the transcriptions," he says. "The end result is a Word document with minute-markers in it."
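
Those minute-markers are what make the "within one minute" search accuracy described earlier achievable: a text hit only needs to map back to the nearest marker to seek playback. As an illustration only—the actual convention in Sandia's Word documents is not described—an index from words to minute offsets might be built like this, assuming markers of the hypothetical form [5] embedded in the transcript text:

```python
import re

# Hypothetical marker format: "[N]" where N is the minute offset.
MARKER = re.compile(r"\[(\d+)\]")

def index_transcript(text):
    """Map each word (lowercased, punctuation-stripped) to the set of
    minute offsets at which it is spoken, so a search hit can seek
    playback to within a minute of the matching words."""
    index = {}
    minute = 0
    for token in text.split():
        m = MARKER.fullmatch(token)
        if m:
            minute = int(m.group(1))  # advance to the new minute marker
            continue
        word = token.strip(".,;:\"'()").lower()
        if word:
            index.setdefault(word, set()).add(minute)
    return index

transcript = "[0] begin disassembly of the unit [1] verify the render-safe procedure"
idx = index_transcript(transcript)
print(sorted(idx["disassembly"]))  # [0]
print(sorted(idx["procedure"]))    # [1]
```

In a production system the minute offsets would be converted to video timecodes so the player can jump straight to the matching segment.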

The video material from the tapes is digitized in Screening Room with the system's Osprey card, and encoded on-the-fly to a 300Kbps Windows Media file (.asf) at 30fps, 320 x 240 resolution. For the source material that is already in MPEG-1 format, recapture is not required; Screening Room will ingest the file directly—without making an .asf file—and will perform its analysis, storyboarding, and indexing steps on that file.
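
A quick back-of-the-envelope calculation shows why 300Kbps is a workable delivery and storage rate for a LAN-based library (treating "Kbps" as 1,000 bits per second):

```python
def megabytes_per_hour(kbps):
    """Approximate storage for an hour of streaming video at a given
    bitrate, using decimal megabytes (1,000,000 bytes)."""
    bits_per_hour = kbps * 1_000 * 3600
    return bits_per_hour / 8 / 1_000_000

# An hour of the 300Kbps Windows Media encode:
print(round(megabytes_per_hour(300)))  # 135
```

At roughly 135MB per hour of footage, even a 20GB drive holds on the order of 150 hours of the 300Kbps encodes, so disk capacity scales comfortably with the size of the tape library.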

Another Screening Room component, SR Edit, may be used to annotate and organize video assets after capture. Users can assemble-edit video; add, modify, and delete metadata and user annotations; review and delete individual storyboard clips or frames; and perform searches based on visual or text clues.

The digital video assets, metadata, and indexes produced during the capture and editing process are stored in an XML-compliant, low-resolution media file in the SR Video Asset Server component. "All metadata is tied back to the original video via timecodes," Agan says, "allowing full searches to precise segments at the asset, clip, or frame level—without having to view the entire content."

Hardware requirements for the Server start with a single 750MHz Pentium with 512MB of RAM, but ultimately depend on the number of users and the frequency of use. The size of the disk drive depends on the volume of content; RAID storage is recommended for large-scale operations.

Typically integrated with the Server is another component, SR Browse, which allows end-users to search for, preview, and view video content streamed to a desktop or laptop computer running either Internet Explorer (version 5.0 or later) or Netscape Navigator. No Screening Room components need be installed on end-user machines.

At Sandia, Krawczyk says, the video is delivered to users in three ways: unclassified network, classified network, and field deployable laptop. "Training is performed in an office desktop environment," he says. "But for accidents and exercises, which take place in the field, the material can be viewed on a laptop. No connections back to the classified server are available, so a snapshot of the data is taken from the server before deployment."

Work in Progress
So far, only a small part of ARG's total video library has been brought into the video database. "The digitizing of the tapes and integration of the transcription happen when time permits," Krawczyk says. "Only 5% of the library is in the system currently."

Krawczyk says that setting up the Screening Room system was challenging, but mainly because work that would typically be done by Convera had to be handled in-house. "Convera offered to send field engineers to help us. But we would have had to kill them if they saw the content of the videos," he jokes. "So we had to do the installs ourselves. Getting three NT server machines to work together through DCOM modules is challenging, but once the software is running correctly in the domain model, it will run without any interaction from the administrator, except for precautionary backups of the SQL database."

Krawczyk says the main benefits of moving to a digital video library have been reduced labor and improved access to video-based information, making the job of digging through hundreds of hours of video less fatiguing. "Information is provided much faster, with data delivery in seconds. We are also able to use the system in the field and view the segments on laptops for exercises, and we have 100% reliability with the text search." He adds that he is looking forward to implementing Screening Room 2.3 because of user-security features that will allow categorized access based on ARG's 'need-to-know' strategies.

Overall, Krawczyk says, the new system has been "very productive in helping with our training schedules. Since a lot of the persons involved in ARG are reaching retirement age, we need to educate new persons entering into this program. Persons in the program can now find information that would have taken them days of research to find under the old tape library system."