
Post Production

"Fix it in post" has never been easier now that editors are armed with a variety of digital tools. However, as the industry moves towards digital post production, editors must remember that digital means both tape (linear) and disk (nonlinear). While digital tape offers relief from generational loss and faster-than-realtime transfers, it is disk-based nonlinear editing that has transformed post production.

Moving from analog tape machine editing to nonlinear computer editing raises a variety of questions and concerns for an editor--not just "how does it work?" Only you can answer these questions based on your unique situation.

Nonlinear Editing: Going Tapeless

By Mark J. Pescatore and Michael Silbergleid

The move from a linear, tape-based system to a nonlinear, disk-based system is like trading in your old reliable car for a new model. Both vehicles will still get you from point A to point B, but your new car doesn't handle the same as your old one. Once you've adjusted to your new vehicle, though, you'll see all its advantages--and you'll appreciate the new engine that has a lot more kick.

Put simply, nonlinear editing (NLE) is the future. It's a sneak preview of the tapeless environment of tomorrow. And it's safe to predict that eventually, acquiring footage on disk, rather than tape, will be the norm.

Like it or not, fellow tape editors, change is coming; in fact, for many, it's already here. In a recent survey of news directors by Television Broadcast magazine, slightly more than half of all news departments (51 percent) reported having at least one NLE (with the number increasing every year since the introduction of NLE systems). In this case, though, change is good. Nonlinear editing is more than the latest gadget; it's a time-saving tool that makes your options as an editor significantly more flexible without sacrificing quality.

Remember, nonlinear editing systems are a tool to help you edit more efficiently. The basic principles of editing remain the same--except now you can think in a nonlinear fashion, free of concern if something has to be moved or changed in the middle of the program in the final edit, since it's just a few mouse clicks away. You're still trying to tell a story. You can still edit on the fly or select precise editing points. And you can still edit video and audio separately or together. In fact, you often have more tracks/channels at your disposal than even the best tape-based linear editing systems. It's not the task that's changing--only the equipment. As an editor in a nonlinear world, you'll use a keyboard and mouse as your edit controller--some of the same tools you use in the tape world. The flexibility of the computer, however, will allow you to make changes instantly--changes that are non-destructive to the original digitized footage in the computer, as what you are really editing is a set of directions for how the audio and video are manipulated.
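
To make the idea of editing "a set of directions" concrete, here is a minimal sketch in Python (the class and field names are hypothetical, not drawn from any particular editing system) of a non-destructive timeline: the program is just an ordered list of references into the digitized source clips, so rearranging shots never touches the media itself.

    from dataclasses import dataclass

    @dataclass
    class Event:
        # One edit decision: a pointer into a digitized source clip, not a copy of it.
        source: str    # name of the source clip on the media drive
        in_f: int      # in-point, in frames from the start of the source
        out_f: int     # out-point (exclusive), in frames

        def duration(self):
            return self.out_f - self.in_f

    # The "program" is just an ordered list of directions.
    timeline = [
        Event("interview_01", in_f=120, out_f=300),
        Event("broll_city", in_f=0, out_f=90),
        Event("interview_01", in_f=450, out_f=600),
    ]

    # Moving or adding a shot in the middle of the program is a list operation;
    # the digitized footage itself is never altered.
    timeline.insert(1, Event("broll_harbor", in_f=30, out_f=75))

    print("program length (frames):", sum(e.duration() for e in timeline))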

How many times have you had to run the gamut of wipes for a client who is just not sure what is going to look best? With nonlinear editing, preroll is eliminated. Video, audio and effects are accessed instantly in realtime systems, although some systems must render effects and transitions. When you don't need to preroll or postroll, you save time, as well as wear and tear on tapes, machine transports and heads. Maybe a savings of 10 seconds doesn't seem like a great deal of time by itself, but when you save that much time with every cut and every effect, it adds up quickly.

Of course, a significant time disadvantage to nonlinear editing is the process of digitizing media. There is no high speed method of transferring analog footage to the computer's hard drive, although four-times normal play speed transfer systems exist for digital tape formats. For most of us, digitizing takes place in realtime.
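
Two quick back-of-the-envelope figures put these time factors in perspective. Every specific number here (the edit count, the amount of source footage) is an assumption chosen only for illustration; the 10-second and four-times figures come from the paragraphs above.

    # Cumulative savings from eliminating preroll/postroll.
    seconds_saved_per_edit = 10     # the 10-second figure mentioned above
    edits_in_program = 300          # assumed edit count for a program
    saved = seconds_saved_per_edit * edits_in_program
    print(f"Preroll time saved: {saved} s (~{saved / 60:.0f} minutes)")

    # Ingest time for one hour of source footage.
    source_minutes = 60
    print(f"Realtime digitizing: {source_minutes} minutes")
    print(f"Four-times digital transfer: {source_minutes / 4:.0f} minutes")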

Comparisons will naturally be made for years about the positive and negative aspects of each method of editing, but the new technology is very promising. Put simply, nonlinear editing is a new way to do your job. It's a better way to do your job. And, provided that you receive plenty of training, it's an easier, faster and more creative way to do your job.

NLE Practical Considerations

The following makes a good checklist for systems under consideration:

  • If the system compresses video, does the system offer compression rates that are considered "online" quality?

  • What types of compression does the system offer, and how does that compare with compression types used in acquisition and delivery of final product?

  • What types of video inputs and outputs does the system have--analog (component, composite, s-video), digital, faster-than-realtime?

  • What types of audio inputs and outputs does the system have--analog (RCA or quarter-inch), AES/EBU (XLR or BNC), fiber-optic digital?

  • Does the system include software updates and how are they delivered?

  • What is the cost of a maintenance agreement and what does it specifically cover (hardware repair, technical support, upgrades; when and where--24/7 and toll-free, fax, email)?

  • Who is the first line of technical support--the manufacturer, the dealer, or someone else?

  • What is the storage capacity of the base system and maximum expansion possibilities? (A rough storage estimate appears in the sketch after this list.)

  • How many video formats and aspect ratios can the system handle and will there be upgrades (including high definition)?

  • Is the system composed of dedicated components or are there limited manufacturer-approved configurations regarding third party hardware and software?

  • How effective is the system in a networking environment?

  • If the system renders, is the render time acceptable to you and your potential clients, especially if you have to re-render to fix something?

  • Does the rendering software recognize multi-processor configurations?

  • How many video layers is the system capable of handling without rendering to a single layer?

  • Does the system support EDL importing/exporting?

  • Does the system have a good built-in character generator?

  • Does the system offer audio editing capabilities?

  • How do you back up files?

  • During editing, can audio files reside on a separate drive from video files, such as another hard drive or a Jaz drive?

  • Can the system export QuickTime files for use on the web, in CD-ROMs or in third party software? Does it support different frame sizes, compression rates, color depth and frame rates?

  • What is the minimum size for a clip on the timeline--one frame, three frames?

  • What is the maximum size for a clip on the timeline and/or in a bin?

  • Does the system offer audio VU meters and/or audio waveforms that are accurate and functional?

  • How many audio channels are there? Are they discrete channels or locked into stereo pairs? How can they be panned or faded?

  • What is the resolution of the system, is it changeable or media-dependent, and does it include an alpha key channel?

  • How will the system's compression engine handle video noise?

  • Does the system offer multiple compression types (such as DV and MPEG)?

  • Can the system do both compressed and uncompressed video? Are projects restricted to compressed or uncompressed, or can the system mix compressed and uncompressed in the same project?
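
As a rough aid to the compression-rate and storage-capacity questions above, the sketch below converts a video data rate into hours of storage. The data rates shown are commonly cited ballpark figures, and the helper function and the 72 GB array are assumptions for illustration; the math ignores file-system overhead and indexing.

    def storage_hours(capacity_gb, video_mbps, audio_mbps=1.5):
        # Rough hours of footage that fit on capacity_gb of disk (1 GB ~ 8,000 Mb).
        total_mbps = video_mbps + audio_mbps
        seconds = capacity_gb * 8000 / total_mbps
        return seconds / 3600

    # Commonly cited ballpark data rates.
    data_rates = {
        "DV / DVCAM / DVCPRO (25 Mbps)": 25,
        "DVCPRO50 (50 Mbps)": 50,
        "Uncompressed 8-bit 4:2:2 (~170 Mbps active video)": 170,
    }

    for name, mbps in data_rates.items():
        print(f"{name}: about {storage_hours(72, mbps):.1f} hours on a 72 GB array")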

    Digital Post: The Paradigm Wars

    By Bob Turner

    Digital is changing all the rules in post production. But how can you play the game if you don't know the rules? Convergence, Paltex, Ampex Ace and CMX are all gone. Editware is now supporting the former GVG SuperEdit system and Sony still offers a high-end linear system, but both Sony and Editware are "pushing" their systems' hybrid capabilities.

    Processes are changing. The models on which we base our planning decisions may not be valid any longer. I am not sure what video will look like in the next three years. How will I know which editing system makes sense? (No wonder NAB is always in Las Vegas.)

    When the first edition of this book was written, the dilemma was between linear editing and nonlinear editing. Today, it seems that there is a larger conflict between traditional nonlinear techniques and system-wide workgroup models. Do you want traditional two-video-stream nonlinear editing, or a more modern system that will allow many layers of audio and video without the need to layer two streams at a time? Or should you go with a "state-of-the-art," metadata-based post production facility where all creative workstations are integrated with servers, library and/or your scheduling and newsroom software?

    Conflicts Abound

    Time versus cost? Quality versus cost? Innovation versus a wrong choice? System-wide networked solutions or starting with application-specific post production solutions? Workgroup post production with application-specific workstations versus an all-in-one workstation? Open systems versus closed systems? Peer-to-peer networking versus "essence media" (what we old-timers used to know as audio, video and graphics) accessed from a server? Or does it make more sense to continue with independent workstation storage (and "sneaker-net" until the networking decision becomes more obvious)?

    "Click and drag" mouse, trackball or pen and tablet versus traditional keyboard editing? How many audio and video tracks are needed on a timeline? Should there be interaction between journalist workstations, the edit system and the newsroom and scheduling systems and servers? Will you have union or job description problems? Will employee skills match the tools they are given, or will they be frustrated with either the lack of editing tools or with post production tools that are too complex (which may cause those employees' subsequent departure from the company)? How much will the training period cost, and how long will it be before the operators can at least achieve the performance output that existed prior to the change in technology? Are there economic benefits to certain systems--will they require fewer operators/technicians? Will there be maintenance and/or supply expense reductions, or are you committing to high annual maintenance fees? And with all this to decide, there is the most important question: what will my competitors be offering to my clients or prospective clients? Let's take a look at some of these conflicts and what they will mean for post production.

    Time Versus Cost

    Can you save money by choosing a more expensive system that is more efficient? Do you have enough work to maximize the benefits of those efficiencies? Can you deal with increased rendering time for a system that costs less? Will "essence media" networking be more efficient and stable--ending days of "sneakernet?" Are video and audio digitization/transfer delays a thing of the past? Who will be making the creative decisions, the logs and the rough-cuts? Are there faster processes that will save time and money? Today, there are both optical disk and hard disk-based video camcorders, so you can plug in the disks or hard disk storage units and start editing without any transfer delays. There are digital tape cameras that log clips with picture-icons or thumbnails of those clips that can later be fed into the editing system or logging package (with designations for good takes). There are PIM (personal information manager) devices that can do the logging and clip evaluations, with in-points and out-points that greatly decrease the time needed for digitization or digital video transfer/transcoding. There are digital tape formats that allow transfers at four-times normal play speed. There are journalist workstations that let a journalist do logging and rough cuts, so the editor can work more efficiently and finish more programs or segments in a given work shift.

    When the editor finishes a segment, that segment can be zapped to the news server or even accessed by master control from the edit workstation, if necessary. With workgroup models, you can have several editors, graphic/effects artists and audio specialists working together from a central server to get the job done faster and better. The typical problem with all this is that generally, the more efficient you make the operation, the greater the initial cost. Another of the "time versus cost" variations is the "greater-than-realtime" concept. These technologies appear very attractive. Time is money, and if you can stream video around at four-times realtime speed, why not? Well, perhaps there are reasons why not. Will edit points remain accurate when streaming the edited program at four-times speed or greater? In addition to this concern, you will probably have to limit operations to a specific compression algorithm or a more expensive transport technology. Perhaps this speed may only be available from a very limited selection of hardware, which requires compromises in equipment selection.

    Quality Versus Cost

    You can spend $700,000 and up for an uncompressed video editing system or under $40,000 for a slightly compressed video editing system that may have better editing tools. The worst part about this fact is that both the client paying the bills and the video program viewer may not notice the difference. Will the uncompressed video editing system prevent client and viewer complaints? If you are a post production facility, will the client be willing to pay for the higher capitalization costs if it keeps them from worrying about any compression difficulties? Do most clients even know the right things to worry about with regard to compression, or is it just a bad word?

    Most in our industry believe that DTV will bring image quality issues to a new high point in the decision-making process. Some industry researchers theorize that consumers will begin to notice image quality differences as soon as they purchase digital TV sets, in a manner similar to the way audio CDs changed the recording industry. Will the system you are considering protect your investment from future quality improvements or even the possible transition to HDTV? Today, you may see some cascaded compression errors on graphics-oriented or heavily composited programming. Experts appear to agree that if you cannot work with uncompressed video, the next best solution is to remain in the same compression format (e.g. DVCAM/DVCPRO or 4:2:2 MPEG-2, 422P@ML) throughout the production/post production process from camcorder to newsroom/program server. There is no loss when doing newsroom-style cuts-only editing and remaining consistent in one of these digitally compressed formats. But even when remaining consistent with such a format, you can suffer digital generational loss with each composite, key, transition and effect. The compromise that some editors and compositors suggest is to uncompress and do all your compositing, graphics and effects before recompressing. What will you choose: HDTV-capable post production, uncompressed video post production, compressed digital post production, or to put off a decision? And if you decide to go the uncompressed route, there remains the question: will the high-cost system you select be competing against a low-cost uncompressed system in six months to a year, and if so, will that put you at an economic disadvantage? And will the system work with higher-resolution video when the need arises?

    Innovation Versus Making a Poor Choice

    There have been several exciting innovations in the field of video post production within the last few years: workgroup integration, working with multiple aspect ratios and image qualities, new recording/storage solutions, and even new profit models including interactive broadcasting.

    Will standard-definition digital television production get you through the life of the equipment you are purchasing? If you select a format, will this lock you into a long-term commitment that goes further than the life of the system you are purchasing? Will it allow you the flexibility to do interactive programming? Will the physical and electronic infrastructure need to be rebuilt, and will that lock you into something more permanent than you intend? For example, will you be able to work in both of today's aspect ratios? How do you "up-rez" when needs demand? Will you be able to switch to a progressive scan video image if and when the need arises? Is there flexibility inherent in your decisions?

    Another concern is the "Rule of Three Versions." Some call this rule an old wives' tale. It states that no post production software is truly usable before its third version. The problem is that new paradigms mean all new processes and new software. Can you wait five years until version 3 of the software you are considering is available? If not, is the alternative a gamble that puts your personal security at risk?

    MPEG-2 post production development may be behind the DV production alternatives: DVCPRO and DVCAM. But these 25 Mbps formats appear to be evolving, and 50 Mbps (interlace and progressive) and 100 Mbps format choices for this digital video standard set are emerging this year. Which do you choose? Or would a more expensive uncompressed video method be the best post production solution? If you choose incorrectly, what do you do about the choice you have committed to? (Will you be able to find another job?)

    Open Systems Versus Closed Systems

    "Open Systems," "OpenDTV" and "Open Studio" are attractive concepts, but do they represent reality? Are these viable solutions? If you purchase an "Open System" will it offer compatibility with other "Open System" components? Will it offer the "economies of scale" that the concept implies? Will the platform or computer operating system that the "Open System" is based upon stand the test of time? Today, leaders in "Open System" editing software packages frequently mandate specific hardware requirements. Different configurations could provide different levels of storage, accessibility and performance while allowing the user to scale the system to better fit their requirements and budget.

    One of the most popular "Open System" manufacturers will not allow you to put other "non-approved" software on their high-end workstation or upgrade the workstation's operating system software (when such upgrades become available) without their approval. If you do, you will invalidate their technical support commitment and warranty. This results in a facility that may have two or more versions of an operating system on identical computer systems--a maintenance headache to be sure. And requiring a facility to limit the software that can be installed on an expensive computer system prevents the facility from maximizing this hardware investment.

    Competitive "Closed System" manufacturers may be more expensive, but may offer performance benefits due to the closed nature of the design. In addition, "Closed System" manufacturers may offer programming possibilities such as Java Application Program Interfaces or scripting languages, which may provide advantages you hope to find with an "Open System."

    Traditional Purchase and Maintenance Versus Software Maintenance

    In the traditional linear realm, every year you would budget for the purchase of "black boxes" that would keep your customers happy. These customers demanded the latest "wizbangs," and "wizbangs" meant expensive hardware. This hardware was maintained by a team of hardware engineers and kept working only with constant cleaning and routine maintenance. It usually became obsolete by someone else's wizbang box within a year of purchase (which kept the antacid companies in business). The boxes were proprietary but the software was provided for free.

    With our paradigm shift, the annual hardware purchasing is--for the most part--being replaced by software acquisition. Both the software and the hardware platform it runs on can usually be upgraded rather than replaced, which generally means a long-term commitment to the hardware decision you make. Furthermore, this "new way" generally means much less expensive hardware and less hardware maintenance. It also means purchasing things that you are not used to budgeting for: software and software maintenance/technical support.

    Once again, when the first edition of this book went to press, the norm for this new technology was the purchase of maintenance contracts. Now there is a conflicting trend that may or may not be to your liking. Many large purchasers (who often have software support personnel on staff) objected to the high fee for software maintenance--without any guarantee of the number or quality of new versions in a given year--and began pushing a "pay-as-you-go" alternative. You see what the new version of software offers and decide whether or not it is worth purchasing.

    While this sounds reasonable, the people who budget for the next year no longer have the ability to plan for this expense--a major disadvantage for institutional purchasers. In addition, this new model means that the manufacturer will need to provide technical support for several different versions at a given point in time, and someone will have to pay for that additional expense (guess who?). On top of that, the manufacturer can no longer use the expected maintenance contract revenues to pay for research and development, so the manufacturers lose their budget-planning abilities. Generally, the smaller facilities, while objecting to the high cost of software maintenance and technical support contracts, were better off under those contracts than in the uncertain world they now find themselves in without them.

    Technical support is also getting much more complex, both in how it is purchased and in how it is delivered. Some vendors limit technical support to telephone support at specific times. Some offer the ability to remotely diagnose problems via a modem or even provide routine remote maintenance. Some offer Web technical support forums. What is the board replacement policy? Is "on-site" support available? Does whatever is available work for you? Another change may be the discovery that supplies that had been routinely budgeted for are no longer necessary, while others, such as digital storage devices, are always in urgent demand. Is digital storage a supply item or a capital purchase? Lines blur.

    Workgroup Post Production Versus All-In-One Workstation

    Two conflicting trends continue at the National Association of Broadcasters convention:

    1) Workgroup editing networks where audio workstations, editing systems, graphics workstations, digitizing stations, assistant editor/logging stations, character generation systems, compositing/effects systems and 3D animation workstations all interconnect and frequently access the same "raw" images simultaneously; and

    2) All-in-one workstations that combine all of those capabilities in one system.

    The question then arises: Is this actually a conflict? Does it make sense to purchase several identical systems and make one an animation station, one a compositing/effects station, one an audio station, one a video editing station, etc., and have them all networked together instead of tying together different types of workstations? Are there maintenance and purchasing advantages with this concept? Would it not be better to be able to select the best workstation and software for each area? If you go the latter route, are the systems metadata compatible, and is the network able to transfer the metadata that the workgroup members need to access? Would transcoding of essence media be required? Do operators exist with high levels of skill and expertise in all these categories, so that they can expertly edit, design graphics/effects composites, create 3D animations and sweeten audio on a single workstation? Does it make economic sense for that operator to have an all-in-one system? (What would keep him, once he masters the system, from going off on his own with his own "project studio?") Could this combination of talent and technology not promote a more focused "auteur" type of production that has the potential for exciting stylistic programming? Will the communications and resource sharing problems found with workgroups be eliminated? Wouldn't this be a much less expensive option from both a staffing and a capital expense point of view? Again, do such "prodigies" exist?

    Essence Media Storage

    Central storage versus peer-to-peer networking? These may be two very different models, but are they completely independent of one another? Can't workstations have local storage at the workstation as well as access to the audio/video data stored on the central server? Of course they can. And it gets much more complicated as you add multiple servers to serve different purposes, such as programming, news and commercial servers; daily, weekly and archive server solutions; or servers for multiple formats and resolutions. This complicated storage dilemma is discussed elsewhere in this volume, but here are a few of the issues:

    Central Storage Versus Peer-to-Peer

    The central storage model allows multiple users to access the same footage simultaneously and, with wide enough bandwidth, can allow realtime, high-quality editing/compositing at multiple workstations from moving images stored on this server. This model allows engineering specialists to supervise sound/image input into the server system from a central VTR/equipment room with proper test and monitoring gear. It also keeps equipment and fan noise problems out of the post production suites. It may prove to be extremely efficient and cost-effective. On the other hand, it may present a single point of failure--inauspicious at best.

    Peer-to-peer models usually allow sound and images to be transferred from one system to another--frequently at high transfer speeds. Each workstation has its own storage to be manipulated as the operator wishes without affecting the operators at other workstations, unless the operator has to pause while their sound or images are being accessed by another workstation. This may be a more expensive solution in the short term, since no one wants to run out of storage, but the cost of additional capacity continues to fall. There are no hassles fighting for space on a central server. There are no communication problems with controlling the storage available or accidentally deleting something an operator at another workstation needs. This redundancy does increase cost, however, and it is less efficient when multiple workstations need access to the same essence media.

    For either model, is it easy to find and access the sound or images you need? If you are sharing images or sequences, does the required metadata follow? Just how easy is it to move a composite from an effects/compositing workstation to the editing system? How long does it take? Is it in a truly compatible form, and is there any image loss caused by the transfer? Can the audio engineer easily access the video for mix-to-pix? Are there security access controls, communications capabilities and other administration issues that must be addressed, and does this "control data" flow over the same network as the video, audio and metadata?

    And how much bandwidth do you need? There are now realtime, broadcast-quality, long-haul 270 Mbps video transport networks that have proven track records. A few of these systems allow you to double throughput by doubling the cable connections and offer the hardware/software capabilities to marry the signals flawlessly at the workstation. Today, Gigabit Ethernet and other high-bandwidth solutions are being installed in facilities.

    How many systems require access through the same network simultaneously, and what does this do to bandwidth capabilities? Can the network data bandwidth be "throttled back" on specific systems based upon specific workstation needs? Can this "throttling back" be a dynamic process, based upon the temporal needs of the "networked workstation community?" Does the administration software allow you to grow and work with any type of network (SCSI, Fibre Channel, SSA, ATM, etc.)? What are the risks of an individual workstation bringing down all the other workstations connected to it? Can the network interface with other systems such as scheduling, library management or newsroom software?
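
    As a crude illustration of the simultaneous-access question, the sketch below estimates how many streams of a given data rate fit through one Gigabit Ethernet link. The usable-throughput fraction is an assumption chosen for illustration; real networks vary with protocol and switch design.

        wire_speed_mbps = 1000    # Gigabit Ethernet
        usable_fraction = 0.6     # assumed allowance for protocol/switching overhead
        usable_mbps = wire_speed_mbps * usable_fraction

        streams = {
            "DV25 stream (25 Mbps)": 25,
            "50 Mbps MPEG-2 stream": 50,
            "270 Mbps uncompressed stream": 270,
        }

        for name, mbps in streams.items():
            print(f"{name}: about {int(usable_mbps // mbps)} simultaneous streams per link")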

    Tape Libraries Versus Archiving Systems

    Traditionally, everything could be stored in a tape library for later access. With digital, however, there may not be source tapes. Today, images go to digital storage from satellite, telecine, networking or directly from a camera. When and how can you archive the original sound and images now that this "tape recording" process is gone? Do you need to save this information for future use, and if so, in what format? Should you save it as uncompressed video, compressed video or digital data?

    What metadata (labeling data, logs, EDLs, color correction, enhancement processes, audio level settings, DVE programming, source media identification, etc.) gets stored with this essence media? What type of database/media asset management is required? Is this metadata compatible with all the various workstations that need access to the information? Can all systems (library database, creative workstations, billing systems, scheduling, etc.) access this metadata? We are now at a point, technologically, when this digital information can be practically stored on data tape, hard disk or optical media for ease of archiving. Which will you choose? Will the format chosen for archival purposes have a life as long as the need for the images stored on it? How fast can this archived data be accessed? What degradation problems will arise from long-term archiving, how will they differ from format to format and can they be corrected--can the damaged digital data be reconstructed? Does it make sense to invest in a robotic "nearline" archive system or levels of archiving? Or does it make more sense to simply output everything to videotape and store it in a tape library? (See Appendix B: Storage & Archiving/Asset Management.)
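
    One way to picture the metadata question is as a record that travels with each piece of essence media. A minimal sketch follows; the field names and values are hypothetical, chosen only to mirror the items listed above, and are not drawn from any standard or product.

        # Hypothetical asset record; field names are illustrative, not a standard.
        asset = {
            "asset_id": "NEWS-0457-117",
            "label": "Harbor fire, aerials",
            "log_notes": "Good takes: 2 and 5; take 3 has wind noise",
            "edl_reference": "harbor_fire_pkg.edl",
            "color_correction": {"lift": 0.02, "gamma": 1.00, "gain": 0.98},
            "audio_levels_db": [-12, -14],
            "dve_programming": "move_03.dve",
            "source_media_id": "Tape 0457",
            "archive_format": "data tape",  # versus hard disk, optical media or videotape
        }

        # The asset-management question is whether the library database, the creative
        # workstations and the billing/scheduling systems can all read this record.
        for field, value in asset.items():
            print(f"{field}: {value}")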

    Is the Hybrid Editing Concept Dying?

    Today, one digital post model is called "hybrid post production." Rather than scrap your traditional equipment (DVEs, VCRs, DDRs, switchers, mixers, character generators, graphic systems, etc.) and convert to an entirely new model that is difficult for some to even comprehend, a number of manufacturers have tried to create hybrid technologies incorporating the best of both models. They do this by modifying processes to allow you access to the greatest benefits of the new technology while continuing to utilize some of the expensive, previously-purchased technology, adapting it to a random-access style of operations.

    This concept appears to be rapidly withering on the vine. The phrase "a hybrid system cannot do either linear or nonlinear well" may be partly responsible, but it is based upon the experience of several such systems. The Sony ES-3 edit system was a hybrid, but is no longer. Fast Electronics manufactured one of the more popular hybrid systems, but now focuses all R&D on nonlinear-only technology. Other leading hybrids have bitten the dust, or their manufacturers have eliminated the linear capabilities. The Sony BE-9100, Accom Axial (note: Accom recently purchased Scitex Digital Video, owners of the Sphere NLE line), ETC Multilinear Ensemble Gold and SuperEdit (now owned by Editware) still offer hybrid capabilities, but all come from a linear background.

    As storage technology solutions continue to expand in capacity while the price plummets, the trend away from "hybrid" and toward "total" nonlinear editing will probably continue. But this does not rule out all the advantages of the hybrid editor. One of the highest-end nonlinear editing systems available today is clearly nonlinear in concept and execution, but it offers I/O ports for external machines such as a DVE or a downstream character generator, and it can even control external VCR-to-VCR recordings. Most owners do not use these capabilities, because the system offers excellent DVE, titling and hard disk-based editing capabilities, but they remain available for "hybrid use."

    One of the biggest shortcomings of "hybrid editing" technology, as well as of first-generation nonlinear editing systems, is the inability to edit "vertically" (creatively arranging layers of clips on multiple video or audio tracks on a nonlinear timeline) in addition to editing "horizontally" (ordering clips in time).

    Vertical Linearity

    The concept of "vertical linearity"--the ability to edit multiple video tracks (layers)--is an important new model that warrants strong consideration. It allows close integration and modification at any point of the multiple-layer building process. Early nonlinear and hybrid systems allowed complete random access and clip manipulation of two video layers (and a limited number of audio layers). But adding further video (or audio) tracks/layers required rendering the existing tracks into a new combined track and then adding an additional layer to that. For a composited video segment, you might have to render and combine several times. Unfortunately, if you wanted to change one of the earlier layers, you had to start the process all over. This becomes a clearly linear process that does not allow for creative manipulation or organization of the various layers. Many of today's nonlinear systems allow multiple tracks, but often only two can be combined in realtime--the rest require rendering. Other systems may offer more tracks, transitions or processes that can be combined in realtime. The number of layers/tracks can vary dramatically from system to system.
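
    A simplified model (not any particular product's pipeline) shows why each layer beyond the system's realtime capacity costs a render pass, and why changing an early layer is so expensive:

        def render_passes(layers, realtime_streams=2):
            # Each pass collapses one realtime mix into a single new track, which
            # then counts as one live stream in the next mix.
            return max(0, layers - realtime_streams)

        for n in (2, 3, 5, 8):
            print(f"{n} layers -> {render_passes(n)} render pass(es) on a two-stream system")

        # Changing layer 1 of an 8-layer composite invalidates every pass built on
        # top of it, so the full rendering cost is paid again.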

    Some multiprocessor systems allow you to assign one or more of the available processors to dedicated rendering. This may allow you to play back a composited sequence soon after you complete it, especially if the more intense layering was built early in the sequence. And rendering times for multiple tracks or layers can vary widely, depending upon the workstation, the software, the resolution required and the complexity of the composite.

    Traditional Operations Versus The New Way

    Can the editors, compositors, graphics designers and other operators or staff members in traditional post production environments adapt to the new digital models? How easy is it for a keyboard editor to adapt to the "click and drag" of a mouse, trackball, or pen and tablet? How easy is it for these people to think in a nonlinear fashion? Can those who have depended upon engineers to maintain their equipment--and who are unused to computer crashes, or even to the basic operation of the operating system their new workstations run--adapt? In the transition from film to tape, there were many talented and skilled personnel who could not adapt and were lost to our industry.

    Assuming that those remaining can adapt, how long will it take, and how much re-training is required? After they learn the basics, how long will it be until they are as productive as they were before the switch? This can be an expensive and time-consuming jump from the traditional to the new. After this "jump," how soon will it be before new versions or technological improvements require additional training and downtime? (The technology is evolving rapidly.) How do you estimate the time and expense? With all the rules thrown out, how do you predict the chaos? How do you budget for ramp-up time, training expenses, staff loss, morale problems, etc.? Do your job classifications need to be changed? Do you have union concerns? Will the "new way" improve productivity, resulting in staff reductions or re-assignments? Would it be better to start anew with new staff for the new technology, and continue to use the old investments until they are no longer demanded or viable, and then dump both the equipment and the staff that operates it?

    What Are The Benefits?

    By now you may be screaming, "Why are we doing this?" Actually, there are several reasons--improved quality and federally mandated digital broadcast transmission are two, just for starters. Post production has been evolving with new digital tools since the beginning of electronic video editing. The TBC, the DVE and the character generator are obvious examples. But the time and expense required for post production, the capitalization of post facilities and the clear limitations of traditional methodologies have driven this revolution. The continuing evolution of editing, audio mixing, graphics, compositing, and animation systems and techniques, all pressing against the edge of the technological envelope, can also be "blamed." Viewers, and those who provide the programs for those viewers, crave the new and exciting, and the artists who create for those viewers have demanded new, more efficient and powerful tools. Viewers are now demanding more from both the post production crafts (the middleman in the production/distribution process) and the delivery mechanisms. You have little choice but to adapt to these changes. The problem is, there are few rules to provide guidance. In fact, the problem may be that you have too many choices. The game is changing. And if you want one additional reason to enter the paradigm wars, check out what the competition is doing.
