Engineering & Transmission

Digital technologies have brought about a fundamental change in the "nuts and bolts" of broadcasting and teleproduction. Signals that could once be viewed on an analog waveform monitor or oscilloscope have evolved into more complex representations requiring more sophisticated equipment. While our broadcast signals still use carrier waves, what they carry is now completely different. And just because a signal is "digital" does not mean it is perfect.

This section looks at digital engineering (the importance of test and measurement) and the transmission of signals within and outside the facility, and offers primers on 8-VSB (the U.S. ATSC standard), COFDM (Europe's DVB-T) and QAM (digital cable).

DTV Test, Measurement And Monitoring

By William C. Miller

Test, measurement and monitoring are necessary evils. Collectively, they're one of those below-the-line expenses that'll never make you any money, but ignore them and they can cost you plenty. If you've been in the television business for any length of time, you know this. You also know how hard it is to find the capital funding to add to your arsenal of test equipment, or to find the manpower to regularly check out your systems.

Where DTV is concerned, I've got bad news and good news. The bad news is that DTV is an entirely new species of television, and it's going to require lots of new equipment and techniques. The good news is that all of this stuff is digital; that means there's an opportunity to automate it to a great degree. If done properly (and that includes test equipment manufacturers providing the proper tools), you could actually save money in the long run over the total cost of monitoring an analog plant.

Why do we need so much new test equipment?

Because you're adding a lot more layers to your system, and each layer has to be verified differently. Also, if you've decided to transmit anything other than 480-line interlaced video, you're dealing with another set of image formats operating at much higher frequencies, on a different interface.

What are layers?

A layer is a part of the signal that helps it hook to the next level. For instance, the NTSC video signal has a couple of layers; the signal comes out of the imagers as RGB, but at the camera output the three components are encoded into NTSC. You could look at the NTSC coding as a layer whose purpose is to combine the three color components into a single signal. At the monitor, a decoder undoes the NTSC coding and presents the RGB signals to the CRT for display.

In the digital world, there are many layers. The analog RGB layer is still there, of course, but there's also a digital component layer, a compression layer, a wrapper layer, a transport layer and an interface layer. Each of these has to be examined with different tools. There are also new bits of information about the signal, called metadata, which have to be wrapped and transported as well. PSIP is an example. The good thing about digital systems is that everything eventually gets converted to bits, and they all get moved around on the same transport. However, from the troubleshooting point of view, the bad part about digital systems is that everything gets converted to bits and gets moved around on the same transport. If something goes wrong, you have to know how to dig it out of the transport and look at it.

Isn't this digital stuff supposed to be perfect?

Nothing yet devised by the hand of man is perfect. When it works properly, a digital transmission system can indeed exactly reproduce at its output the information fed into its input. The key here is when it works properly. As those who have worked with digital systems know all too well, when digital equipment fails it fails big time. If you want to be able to locate faults quickly, you have to design in that capability.

On the other hand, the serial digital interfaces, SMPTE 259 and 292, have one overwhelming advantage over their analog counterparts: automatic level adjustment and equalization. No more DA tweaks, no more adjusting proc amps to compensate for transmission losses. Leave your tweaker at the door. That's why nobody who's converted to digital ever wants to go back. Besides, once you master how the ATSC system works, you'll be able to analyze what you're transmitting to see how many bits you can afford to allocate to other services. This is where the new business opportunities will come from. After all, in the age of digital broadcasting, bits equal bucks.

This is getting too complicated

Not really. NTSC is complicated, but we've all had lots of time to get used to its quirks and peculiarities. Remember the stories about blue bananas? Sure you do. Seen one lately? Of course not. The same thing will happen with DTV. To be comfortable with the system, however, you'll have to learn a new language. That's where books like this one can help.

So what is all this stuff, anyway?

Well, let's start by separating the system into layers, as mentioned before. Then we'll examine each layer to determine how to measure it and troubleshoot it.

We'll begin with the analog layer. All of your audio and video starts as analog. Most modern cameras do the analog-to-digital conversion somewhere internally; what comes out the back of the CCU is a serial digital signal. Nonetheless, that serial digital signal contains a digital representation of the original analog, and you can't shade the camera without being able to see it. This was recognized years ago, and many manufacturers offer digital-input waveform monitors to let you see what your video looks like.

Many of those same waveform monitors can also diagnose problems in the baseband digital transport. By transport here I mean the serial digital signal; if it's standard definition you're probably dealing with SMPTE 259M; if it's high definition you're using SMPTE 292M. A blatant commercial plug: you need to get your hands on the SMPTE standards. The entire set of television standards is available on a single CD-ROM; details are on the SMPTE Web site, www.smpte.org.

In standard-definition video (SMPTE 259M) there's a tool available to check the integrity of the bitstream. SMPTE RP165 defines a method of embedding cyclic redundancy check (CRC) codes in the serial digital signal. There are two codes: one for the active picture area and another for the full field. At the receiver, these are checked against CRCs computed from the signal itself; if the two disagree, an error has crept in. RP165 also defines a method of transmitting the fact that errors have been detected; the entire system is referred to as EDH, error detection and handling. Not all equipment out there supports EDH, but it can be a very useful tool for troubleshooting SDI problems, particularly finding paths that are too long.
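
To make the mechanism concrete, here is a minimal sketch of the EDH idea in Python: compute a CRC over the received words and compare it with the CRC embedded in the stream. It assumes the CRC-CCITT generator polynomial (x^16 + x^12 + x^5 + 1) applied MSB-first to 10-bit words; consult RP165 itself for the exact word ranges, bit ordering and flag formatting.

```python
def crc16_ccitt(words, poly=0x1021, crc=0x0000):
    """Bitwise CRC-16 over a sequence of 10-bit video words."""
    for word in words:
        for bit in range(9, -1, -1):          # feed each word MSB first
            in_bit = (word >> bit) & 1
            top = (crc >> 15) & 1
            crc = (crc << 1) & 0xFFFF
            if top ^ in_bit:
                crc ^= poly
    return crc

def edh_check(active_picture_words, embedded_crc):
    """True if the computed CRC matches the one carried in the signal."""
    return crc16_ccitt(active_picture_words) == embedded_crc
```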

In the high definition serial interface, a checksum is computed for each line and is carried in the EAV packet. This makes error detection much simpler; no separate encoder is required.

As discussed elsewhere in this book, to fit a 1.5 Gbps video signal into a 6 MHz channel, we have to use bit rate reduction, or compression as it's more commonly called. The ATSC system uses MPEG-2 compression, as defined in ISO 13818-2. Analyzing MPEG encoders to see if they're performing efficiently is a job for a laboratory; I expect most users will want to look at the decoded image with their eyes. However, it's extremely useful to have a set of measurements that corresponds to the subjective impairments you see. A considerable amount of work has gone into making tools for such measurements; they usually involve a set of test images and an analyzer. The analyzer measures the difference between the original image and the decoded image after compression, taking into account the fact that the eye sees some impairments more readily than others. You need one of these only if you're evaluating encoders for purchase or if you're considering upgrading the compression software that runs in the encoder. An individual station probably doesn't need one; a group might consider purchasing one.
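
The simplest objective difference measure, and the starting point for the perceptually weighted ones, is the peak signal-to-noise ratio. A minimal NumPy sketch follows; note that plain PSNR treats every pixel error equally, whereas the analyzers described above weight errors by how visible they are.

```python
import numpy as np

def psnr(original, decoded, peak=255.0):
    """Peak signal-to-noise ratio between two frames, in dB (higher is better)."""
    mse = np.mean((original.astype(float) - decoded.astype(float)) ** 2)
    if mse == 0:
        return float("inf")       # the frames are bit-for-bit identical
    return 10.0 * np.log10(peak ** 2 / mse)

# Typical use: psnr(frame_before_encode, frame_after_decode)
```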

Compression is also used elsewhere in the studio. All of the popular tape formats for high definition utilize compression, as do several for standard definition. Not all use MPEG compression; in fact, most use other types, such as M-JPEG or DV. We're now starting to see ways of connecting these machines together that keep the signal in compressed form. One of the more popular of these is SDTI (SMPTE 305M). At the moment I'm not aware of any SDTI analyzers, but I suspect they'll be developed. EDH also works on SDTI, and it's extremely important here. With regular SDI signals, EDH will report errors long before they're so numerous that they affect signal quality. SDTI signals, because they're compressed, are more fragile; each bit represents more of the original video. If you see problems, EDH will help you determine whether the codecs are broken or whether the path is at fault.

Now let's go back to the studio output. We've compressed the video to fit into an ATSC signal and added closed captioning; now it has to go into a transport stream. This transport stream will also have to carry the compressed audio, timing signals and metadata, including things like PMTs, PIDs and other directory information, plus PSIP. You'll need a way of looking at the structure of the transport stream multiplex and verifying that it's correct or receivers won't function properly. Transport stream analyzers are available; they're extremely useful. To my mind they're mandatory. Just as you now need a way to decode your VITS, you'll need a way to verify the transport stream.
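
As a small illustration of the first job a transport stream analyzer performs, the Python sketch below locates the 188-byte packet boundaries by their 0x47 sync byte and tallies which PIDs are present. A real analyzer goes on to parse the PAT, the PMTs and the PSIP tables and cross-check them against each other; this sketch shows only the packet layer.

```python
TS_PACKET_SIZE = 188      # MPEG-2 transport packets are fixed length
SYNC_BYTE = 0x47          # first byte of every packet

def tally_pids(data):
    """Count transport packets per PID in a captured byte string."""
    pids = {}
    for off in range(0, len(data) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        pkt = data[off:off + TS_PACKET_SIZE]
        if pkt[0] != SYNC_BYTE:
            raise ValueError("lost packet sync at offset %d" % off)
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]   # 13-bit packet identifier
        pids[pid] = pids.get(pid, 0) + 1
    return pids
```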

Let's talk a bit about audio. Two issues have emerged that could become real showstoppers. The first arises from the migration to a multichannel world. Remember the grief we all had when TV went stereo? Multichannel is even more complicated. How do you know that the audio signal you're putting out is correct and complete? One common error is routing the Lf and Rf signals to the audio coder, assuming that they are Lt and Rt or Lo and Ro. The result? No dialogue! The music and effects are mixed to the Lf and Rf channels, so the opening credits sound fine. However, dialogue goes in the C channel, so as soon as the first actor starts speaking you're going to have a very obvious problem. (See Table 1.)

Table 1

I know of only one way to resolve this issue. First, you need to have the ability to monitor multichannel sound in your master control room. Second, when you check slates and levels, roll past the opening credits and into the body of the show to where you can verify that you've got dialogue. Unfortunately, there's no way to automate this one.

The second killer issue is audio/video timing. It appears that no two types of receiver have equal audio/video delay, and the magnitude of the error can be far worse than the usual couple of frames of lip-sync error we have become used to handling. How we will deal with this is an open question, but we do need a simple, standard way of testing and measuring it. At the moment there is at least one piece of test gear that can analyze lip sync in an encoded stream. There are also test bitstreams that can be used to certify a decoder. One thing you cannot do is just assume that your ATSC encoder is correctly set; you have to verify it. Both encoder and receiver manufacturers are aware of the problem, and both the ATSC Implementation Subcommittee (IS) and the SMPTE Television Systems Technology Committee (S22) are working diligently to resolve it.

Now let's talk about getting this stuff on the air. The output of the transmission multiplexer will be carried on either the 270 Mbps asynchronous serial interface (ASI) as defined in the DVB standards, or on the SMPTE 310 synchronous serial interface (SSI). ASI is the same rate as the SMPTE 259M C level, but uses a different channel coding. SSI is a 40 Mbps interface using a simple channel code. It's designed for short connections between the transmission mux and the transmitter input. It has no forward error correction built in, so STLs will have to add their own. Whichever interface your transmitter uses, you'll have to be able to monitor it.

The modulator adds several layers of forward error correction and equalizer training signals before coding the transport stream into 8-VSB. You'll need to verify that this is being done properly. You'll also want to be able to look at the 8-VSB constellation to verify that it's correct. It should be possible to do both of these with a test demodulator; manufacturers should design these functions in. The test demodulator should provide you with a transport stream at one of its outputs, which you can feed into the transport stream analyzer to verify that it's correct (or determine where it's broken). Similarly, you'll need a reference MPEG decoder for the video, a reference AC-3 decoder for the audio, and a PSIP analyzer to make sure that's getting out OK. If you're also sending data, you'll need a way to look at that as well. At the beginning, at least, it's going to be quite a pile of equipment.

If it's all digital, can't it be smart?

Yes. In fact, that's where having all that software can be a real help. Now the equipment can tell you when it's in trouble. That is, it can if the manufacturer has had the foresight to design in that capability. However, self-diagnosis is only half the battle. It's not much help if your super-duper whizzo box discovers it's in trouble, but only lights a small red LED on its front panel. In addition to error detection, you need error reporting. As a recent advertising campaign states, "Knowledge isn't power. Sharing it is." There are lots of ways to share diagnostic information; let's look at a few of them.

I've already mentioned EDH. Its error detection mechanism works quite well, but its error reporting leaves a lot to be desired. EDH works by setting flags in the signal; downstream equipment can read these. However, we really need errors to be reported out to a central monitoring system. SMPTE has standardized two ways to do this.

The simple method is SMPTE 273M, the simple fault-reporting interface. 273M specifies an isolated contact closure that is open if everything's OK, closed if there's a hard fault and closed once per field in the presence of EDH errors (one closure per errored field). Because the closure is isolated, you can connect a number of devices together in parallel to create a summary alarm for a subsystem. These in turn can be connected together, until at the end you have a system which works very much the same way that telco systems have for years; you follow the red lights to the source of the problem. A smarter way to do this would be to connect all devices to a central monitoring system; you could also use the signals to control automatic failover switches where you have redundant paths. 273M is designed to be cheap to implement. It's also designed to show a fault if it loses power. It is not suited to reporting status from complex devices such as servers and VTRs.

For complex devices there is SMPTE 269M, the status monitoring and diagnostic protocol (SMDP). SMDP is just a protocol, not a defined language; there's a lot of room for implementation flexibility. In this it resembles IEEE-488 (GPIB), which inspired it. At least one manufacturer implements SMDP (under its own trade name) across a wide range of its products; it began doing so when it got into the systems business and saw how such a reporting system could save it time and money in installing and servicing large systems. 269M is defined using a short, point-to-point serial interface, but nothing prevents it from being carried on a LAN; in fact, it was expected that LANs would be the most common implementation of such systems.

In practice SMPTE 269M has been overtaken by technology from the data networking world. Many of the newer monitoring devices on the market now sport embedded Web servers that allow them to be monitored by any computer with a browser. You simply bookmark the appropriate URLs on a PC on your technical intranet and you have a customized monitoring system with close to zero programming effort.

However, error reporting requires a push technology; you don't want to wait until a device is polled to know that it's in trouble. My colleague, George Berger, proposes using e-mail, the original push technology, for this. Most of the devices we need to monitor either have embedded controllers or are controlled by an external processor. Most of these run under operating systems that support mail and have a mail API. All we need is for the application software to send a mail message on change of status, particularly when that change of status calls the device's continued serviceability into question. Once notified, you can get the details by using your Web browser to interrogate whatever information the monitoring or reporting devices have published.
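
A minimal sketch of that idea follows, assuming a device controller that runs Python and can reach an SMTP server on the diagnostic LAN; the host names and addresses here are hypothetical.

```python
import smtplib
from email.message import EmailMessage

def report_status_change(device, old_status, new_status,
                         server="mail.diag.example.net",
                         to_addr="engineering@example.net"):
    """Send a mail message when a monitored device changes status."""
    msg = EmailMessage()
    msg["From"] = "%s@diag.example.net" % device
    msg["To"] = to_addr
    msg["Subject"] = "%s: %s -> %s" % (device, old_status, new_status)
    msg.set_content("Status change detected; interrogate the device's "
                    "Web page for details.")
    with smtplib.SMTP(server) as smtp:
        smtp.send_message(msg)

# e.g. report_status_change("VTR-A", "OK", "WARNING: error rate rising")
```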

You should be able to define where you want the mail sent. I recommend a separate mail server for your diagnostic LAN. It would route messages to the appropriate people or departments, depending on what sent the message and what the contents were (status, warning or failure messages). You could link this system to your business LAN (using a router for isolation) to make communicating with departments outside Engineering simpler. Once your diagnostic messages are in e-mail form, you can use standard IT products for paging, voice notification, etc.

The importance of this concept is that it lets you leverage existing IT technology to extend the reach of your TV-specific diagnostic systems. It also makes it fairly simple for vendors to include this capability into their products. Logging troubles becomes a snap; e-mail includes dates and times. Just archive the message file. Later, you can sort it by the from: field to find out, for instance, how many messages you got from VTR A over the last month. It's one of those "why didn't I think of it" ideas, but it can only work if manufacturers are told to include it in their equipment.

A few years ago, I wrote a paper for a SMPTE conference titled "Monitoring and Diagnostics in Digital Television Systems."[1] In it, I described SMPTE 269M and SMPTE 273M, as well as the concept of the diagnostic LAN. I suggested that the real value of these diagnostic systems was not just that they would free up operators from having to constantly scrutinize the equipment, but that they could diagnose problems well before they became apparent on air. Consider, for example, the case of a server with error-correcting RAID drives. Clearly the RAID system must be aware of the occurrence of errors in order to correct them, but is this information reported out? If so, is it reported in a manner that makes it easy for a centralized monitoring system to receive, analyze and route it?

Knowing where a problem has occurred is helpful when sweeping up after a failure. It's also helpful after the fact in determining whether something should be replaced. However, in the environment in which we work, this information is of greatest value in averting problems before they affect air. In the analog world, we do the best we can with the information we have, but air failures do occur and they cost us money, both directly in the form of make-goods and indirectly in terms of our stations' reputation. In a digital world, the opportunity exists to make our diagnostic systems proactive rather than reactive, to fix or work around failures before they affect air. That's why I say there's good news; if our suppliers give us the tools, over the long run we can save more than we spend.

References

[1] Miller, William C., "Monitoring and Diagnostics in Digital Television Systems," SMPTE Journal, September 1994, page 614.

The opinions presented here are solely those of the author, and do not necessarily represent the positions of his employer or of any professional society with which he is affiliated.

Transmission: Digital Within The Facility

By Sheldon Liebman

The movement of digital video and audio information in and around a facility depends on three distinct pieces. If these pieces do not speak to each other correctly, the result will be a signal that either won't move from point A to point B or won't be usable once it gets there.

The first piece of the puzzle is the physical connection between the various pieces of equipment. Not only must all the equipment be in place to create a physical link, but all of the equipment must also recognize and be able to use those links. For example, you can take a standard phone cord and plug the two ends directly into two different phones. This will create a physical connection, but does not allow a conversation to take place between them.

The second piece that must be dealt with is the communications interface between the equipment. This communications layer makes sure that the devices that are linked together can actually speak to each other using a common language. The simplest example of this may be connecting an ordinary VCR output into the antenna input of a television. If the VCR is set to output on channel 3 and the TV is tuned to channel 4, the physical connection exists but the communications interface is faulty.

Once the devices are properly connected to each other and communicating, the final issue to address is the data format of the information being sent between the devices. Every user of Microsoft Windows is familiar with the problem of trying to look at a file that is incompatible with the software loaded onto their machine. If you don't have a way to decode the information you've received, it's just a bunch of bits. Using the data requires that both the sending and receiving equipment can understand and process the digital information.

Before you can decide among the various types of networks that are available, it's important to look at the connection methods and file format issues to ensure that the decision you make can actually be implemented.

Line Speeds

Just as there are speed issues within a computer and across a bus, there are limitations introduced by the physical methods used to connect devices together. For example, ordinary telephone-grade copper wire can't be used for gigabit-per-second connections. This is one of the reasons that Plain Old Telephone Service (POTS) is incompatible with high-speed networking. To move beyond a certain level, different types of wiring must be used. For the highest performance today, fiber optic cable is used.

When multiple locations are being connected, the right size "pipe" must be purchased or rented between them to ensure that the network can operate. Beyond POTS, which tops out at about 56 kilobits per second (kbps), there are alternatives like ISDN (128 kbps) to be considered. For very high-speed networks, there is an alphabet soup of connection descriptions.

These descriptions usually start with T1 or DS1, which is rated at 1.544 megabits per second (Mbps). This number is the equivalent of 24 voice grade connections linked together. T3 or DS3 brings this up to approximately 45 Mbps. From this point, the designation changes to OC1, which carries the equivalent of a DS3, and goes all the way up to OC192, which offers a speed of nearly 10 gigabits per second (Gbps).
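
The arithmetic behind these rates is simple enough to verify; here is a short sketch, with rates in bits per second.

```python
VOICE = 64_000                    # one digitized voice-grade channel
T1  = 24 * VOICE + 8_000          # 24 channels plus 8 kbps framing = 1.544 Mbps
T3  = 44_736_000                  # 28 T1s plus multiplexing overhead, ~45 Mbps
OC1 = 51_840_000                  # SONET base rate; carries one DS3

for n in (1, 3, 12, 48, 192):
    print("OC%-3d = %8.2f Mbps" % (n, n * OC1 / 1e6))   # OC192 ~ 9,953 Mbps
```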

The higher speeds are accomplished using a physical layer called SONET, which stands for Synchronous Optical Network. SONET is an international standard that has been adopted in the United States, Europe and Japan. As described above, it has the ability both to satisfy much of today's bandwidth requirements and to grow in the future as the need for more bandwidth arises.

Whether a network uses SONET or standard copper wire, it can only operate effectively if the wiring and equipment requirements are understood and met. This includes not only the type of cable that is used but also how far it can travel between pieces of equipment and how it must be physically connected to the network nodes. If the wrong choice is made, it can be a very expensive error to correct.

File Formats

In the early days of video compression, almost every equipment supplier utilized Motion JPEG as the method for compressing video information and storing it on a computer disk. What they didn't tell us, however, was that the information was being stored differently by every company. This created a huge headache for companies that purchased equipment from multiple sources, only to find that the video captured by device "A" could only be viewed by device "A." If device "B" was used, the result was garbage.

Similar problems exist between videotape formats. A half-inch tape containing s-video (S-VHS) information cannot be played on a standard half-inch VCR (VHS). It is also true, for the most part, that the different DV format tapes from different manufacturers cannot be played on each other's equipment (note that certain Panasonic DVCPRO machines can play Sony DVCAM and consumer DV formats, Sony DVCAM machines can play consumer DV, and Sony consumer DV decks can play Sony DVCAM).

When digital video networks are being planned, it's crucial to consider the formats that will be used between the nodes on the network. Just moving the bit stream from one place to another isn't enough. If a 30-second stream of uncompressed digital video is placed in a file, that file will be approximately one gigabyte in size. Moving the file from point A to point B, depending on the network used, can happen in less than 10 seconds or may take many hours. If the file can't be opened and viewed when it gets to the other end, why bother sending it at all? As we'll see, this can be a serious issue depending on the type of network that is being used.
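
The arithmetic is worth making explicit. Thirty seconds of 270 Mbps uncompressed video comes to roughly a gigabyte, and the transfer time depends entirely on the effective (not rated) speed of the network. The throughput figures in this sketch are representative numbers discussed elsewhere in this section.

```python
clip_bits = 270e6 * 30            # 30 s of 270 Mbps video, ~8.1 gigabits
clip_bytes = clip_bits / 8        # ~1.0 gigabyte, as stated above

for name, mbytes_per_sec in [("Fast Ethernet (effective)",     6),
                             ("Gigabit Ethernet (effective)", 35),
                             ("ATM at 622 Mbps (effective)",  60),
                             ("HIPPI at 800 Mbps",           100)]:
    seconds = clip_bytes / (mbytes_per_sec * 1e6)
    print("%-30s %7.1f seconds" % (name, seconds))
```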

ATM

Asynchronous Transfer Mode (ATM) is a very high bandwidth transmission, switching and multiplexing technology. Although ATM is usually described as a networking protocol for Wide Area Networks (WANs), it can be used quite successfully in a Local Area Network (LAN) environment. One of the best advantages offered by ATM is that it can move very large streams of data very quickly.

ATM utilizes "cells" to move data from one point to another. It is designed to handle a wide variety of data types including audio, video, graphics, data and voice, all through a single pipeline. Basically, the technology takes all of these different types of data and converts them into fixed length cells that are 53 bytes long. Within the cell, five bytes contain control information and the other 48 bytes contain data. The cells are moved through very large data pipes at a high rate of speed and are then reassembled into their original form at the receiving end of the connection.

Since the information is sent in small packages, ATM offers a highly reliable way of transmitting data from one point to another. If a cell does not get through, higher-layer protocols can easily resend it. This process does not take a lot of time due to the high speeds at which ATM networks are typically configured. Today, these networks usually run at either 155 Mbps or 622 Mbps. The higher speed translates into over 60 megabytes per second (MBps), more than enough bandwidth to handle full resolution, uncompressed video streams. In the future, speeds of up to 10 Gbps are planned.

An ATM network can be configured with dedicated channels between two points, virtually guaranteeing that a large data pipe will always be available. Alternatively, it can allocate bandwidth on demand. In this type of environment, the network will constantly monitor the usage requirements of existing connections and will not allow new users to join the network if that will deteriorate the performance needed by existing connections.

Because ATM is used heavily in the WAN environment, using an ATM network (or at least providing a bridge to an ATM network) allows a facility to easily connect to other locations around the world. On the downside, the cost of ATM equipment is still very high. In many cases, only very large installations can justify the deployment of this type of network.

Ethernet

Ethernet is probably the most widely used form of networking in the world. When it comes to moving standard data files, email and general-purpose documents, nothing can beat the ease of use and low cost of a standard Ethernet connection.

The original Ethernet specification moves data at a rate of 10 Mbps, which is definitely too slow for video applications. A few years ago, Fast Ethernet was introduced. With a tenfold increase in the pipeline, Fast Ethernet supports connections at up to 100 Mbps. In a compressed video environment, this type of network can be used effectively, although the number of users accessing data at the same time must be very small.

The newest member of the Ethernet family is Gigabit Ethernet, which can theoretically process data at 10 times the rate of Fast Ethernet or 100 times faster than standard Ethernet.

The reality of this standard is that it doesn't accomplish this goal when used in a video environment. All forms of Ethernet use very small data packets and have a high overhead associated with the handshaking and interrupt requirements at both ends. Because of this, the transmission of large files such as video and audio data ends up getting bogged down in the handshaking.

The result is that the speed increase as you move up the ladder is less than 10 times the previous level. Fast Ethernet usually achieves only six to seven MBps and Gigabit Ethernet appears to top out at 30 to 40 MBps. While this is still fast enough to move around a single stream of uncompressed video, it doesn't provide the kind of performance that's available from other types of networks.

Ethernet can also exist in a switched format, where hubs make direct connections between the two devices communicating. The difference between a switched network and one that uses standard hubs is a concept that confuses many people. Basically, a switch has intelligence and communicates with every device connected to it. This results in a significant performance increase, although at a much higher price. A standard hub, on the other hand, just connects all of the devices to a single pipe and lets them determine on their own how to move data between them.

In effect, a switch makes every connection a point-to-point connection so that the maximum bandwidth possible between two devices is achieved. Standard Ethernet's bandwidth is shared between all the users on the network. Switched Ethernet allows the two communicating devices to share the entire bandwidth in a direct connection through a switched hub that all equipment is connected to. This is similar to how the telephone network functions--connecting two phones through a switched network. A benefit of Switched Ethernet is that all that is required is to replace the existing standard hub with a Switched Ethernet hub--current network interface cards (NICs) do not need to be replaced.

Fibre Channel

The original goal for Fibre Channel was to create a new standard for host-to-host connectivity with very high speed and over very long distances. During its development, however, additional functionality was added to address the need for connecting high-speed storage devices directly to the network. In this respect, Fibre Channel and Serial Storage Architecture (SSA) are very similar.

Fibre Channel is unique, however, in that it is the only connection standard that can be used in a point-to-point, loop or switched environment. This flexibility gives a facility the ability to start small and grow without having to change the basic networking infrastructure.

Fibre Channel's point-to-point configuration is the easiest way to start creating a Fibre Channel network. It is also the least flexible. Compared with traditional methods of connecting storage and computers, however, it offers advantages in both speed and distance. Fibre Channel supports up to 30 meters between devices using standard cabling and can be stretched to 10 kilometers using long wavelength optical connections. The speed of the connections varies from 30 MBps up to at least 70 to 80 MBps. The newest Fibre Channel hardware promises speeds of up to 200 MBps.

As a company grows and wants to connect multiple computers and storage devices, it can move to the Fibre Channel Arbitrated Loop (FC-AL) topology. FC-AL is usually built around a hub that is connected to all the nodes. As the network grows, more or larger hubs can be added so that all of the devices are connected together in a single system.

The advantage of FC-AL is that it allows a Fibre Channel network to be configured at a lower cost than if a full blown switch-based fabric is created. The disadvantage of FC-AL, however, is that there is a limit to just how large you'll want to make the network. Since all of the devices share the network bandwidth, performance degrades after four to six devices are connected.

Fibre Channel also supports a switched configuration, which represents the original intention of the standard. In the Fibre Channel standard, a switch knows where to find every asset in the network and can control them from a single location.

To gain maximum results for minimum costs, the latest generation of Fibre Channel switches now contain a "Fabric to Loop" feature, or FL. This means that you can take a hub and plug it into a switch. The entire group of devices on the hub then appears as a single device on the switch. While this does not provide the full benefits of a switched environment, it does increase the flexibility of growing a Fibre Channel network while reducing the cost.

IEEE 1394 (FireWire)

Since 1987, the search has been on to find a replacement for SCSI (see below) and to bring peripheral connectivity to a new level. One of the contenders for this title is the IEEE 1394 standard (also known by the Apple trademark "FireWire"). This interface first appeared on commercial products in 1996 and continues to make inroads in video applications.

Although it was not exclusively designed for video, 1394 is seeing its largest application in this area. This is being driven by the inclusion of a 1394 connector on a number of digital camcorders and nonlinear editors. The decision to use 1394 was influenced by the number of advantages it offers.

First of all, the connectors used in 1394 are based on those used in video games. This makes them very inexpensive to produce and also ensures that they can put up with a lot of abuse. The cable used to connect devices is also thin and easy to manipulate.

The 1394 specification supports up to 63 devices on a single bus. Perhaps more importantly, it allows buses to be bridged together so the theoretical maximum is thousands of devices. It also uses no terminators and supports hot plug-in and dynamic configuration. When you add 64-bit addressing with automatic address selection, it's clear that 1394 has truly been designed to be a plug-and-play interface.

1394 also works well with video streams. Standard computer data is sent in small packets of information with lots of handshaking from both sides. 1394 supports large packets of information that don't need the constant handshaking of more volatile types of data. As a result, the interface can handle at least 10 MBps of continuous data (not just burst). Improvements to the design promise continuous throughput of 100 MBps or more.

Since fully uncompressed video requires throughput of approximately 30 MBps, 1394 is not yet ready to play in this area. However, very high quality results can still be achieved with a compression rate of 4:1, which brings the throughput requirement down under eight MBps. The current generation of 1394 can easily handle this load.

Because 1394 is a digital serial bus, multiple devices can access the data on the bus without affecting the quality received by any of the other devices. This means that there is no loss of data or quality when multiple devices process the video information. For example, a digital video camera can send data to a digital monitor and to a computer at the same time--with no loss between devices, no termination problems and no need for a distribution amplifier.

The one big negative with 1394 is that it was not developed to function as a full-fledged network. The protocol does not include functions that are necessary to handle the demanding requirements of accessing and sharing information between more than one user. However, it can be very beneficial as an entry point or exit point for digital video information that is moved around within a facility. As the speed of the interface increases, the format of the video data can move from compressed to uncompressed and from standard resolution up to high definition formats.

HIPPI

The High Performance Parallel Interface (HIPPI) was originally developed as a way for supercomputers to communicate with each other at very high speeds. It was also one of the first standards designed to allow direct connection of storage devices. HIPPI operates at 800 Mbps, which provides enough bandwidth for multiple streams of uncompressed video to move around simultaneously. For this reason, at least one very high-end graphics and special effects supplier has been touting this network architecture to its customers.

The two biggest drawbacks to HIPPI are its cost and wiring requirements. Since the original form of HIPPI is a parallel technology, it requires a very large connector and thick, bulky wiring that contains 50 pairs of wires and can be run up to 50 meters between network nodes.

A newer alternative is called Serial HIPPI. Serial HIPPI runs at 1.6 Gbps and supports distances between nodes of up to 10 kilometers if fiber connections are used. An even newer version of HIPPI is being planned called HIPPI 6400. This proposed standard is designed to support 6.4 Gbps (over 600 MBps) connections between devices.

HIPPI is also much more costly to install than many of the alternatives. Since HIPPI has never achieved the large installation volumes of other solutions, the switches, routers and interface cards are typically priced at least two to three times higher than other gigabit alternatives.

For installations that have already installed HIPPI networks, the benefits to this technology are obvious. To extend their reach and to expand their systems, however, it is probably a better idea to link the existing HIPPI network to another standard using a switch that handles both.

SCSI

The Small Computer System Interface (SCSI) is the standard by which direct connection of peripherals to personal computers is measured. Over the years, this standard has been expanded and tweaked in an attempt to stay current with advances in other types of computer technology.

The original SCSI specification is an 8-bit wide bus that supports up to eight devices including the host adapter. The maximum bandwidth available is five MBps and only two devices can exchange data at one time. The first improvement, Wide SCSI, increased the bus to 16-bits, effectively doubling the transfer rate to 10 MBps.

Next came Fast SCSI, which doubled the throughput for both 8-bit and 16-bit devices to a maximum of 20 MBps. Another doubling of the bus clock was introduced with UltraSCSI, bringing the maximum bandwidth to 40 MBps. In addition, UltraSCSI increased the number of devices that could be connected to a maximum of 15.

The evolution of SCSI is not over. Further developments have been announced (and are starting to appear) that raise the transfer rate to 80 MBps and even to 160 MBps. At these speeds, the movement of data across SCSI rivals the speed of other storage connection alternatives, but the limit on the number of devices is still a serious drawback.

SCSI is also defined primarily as a method for connecting storage and other peripheral devices, rather than connecting multiple computers. At least one company currently markets a networking solution based on SCSI technology, but there are limitations that make it unattractive to many users.

SDI/SDTI

The Serial Digital Interface (SDI) is the video world's answer to the transmission of digital video (with embedded audio). As defined in SMPTE specification 259M, SDI provides a method for transmitting uncompressed digital video, audio and other data between video devices that can be separated by as much as 300 meters.

In order for digital video to be transmitted inside a facility, all of the equipment that is used must be able to recognize and use the digital video format. In the video world, the central device to this process is a router, which corresponds to the hub or switch used in a standard computer style network.

The original CCIR-601 specification on which SDI is based defined an 8-bit signal. SDI has since been upgraded to include a 10-bit version and supports both composite and component formats. Other, more proprietary enhancements to SDI communications have been implemented as well, creating something of a problem.

Sony introduced SDDI (Serial Digital Data Interface). The enhancement specified a method by which data streams could be seamlessly transmitted along with video data. Panasonic countered with CSDI (Compressed Serial Digital Interface), which provides a method for sending compressed data across a serial digital connection. Because they incorporated proprietary components, equipment that was compatible with one of these formats could not be used with another.

In an effort to bridge the gap and provide a common solution for all video professionals, there is now another standard. The new format, called SDTI (Serial Digital Transport Interface), is an open standard so that all manufacturers can make equipment that is compatible with it. The goal is to allow all of the existing forms of SDI (including CSDI and SDDI) to be bridged across a single environment with support for video, audio, data and compression.

The advantage to using digital video is obvious--there is no generation loss as the streams are processed. When multiple layers of video are used or required, the ability to stay in a digital format allows more complex imagery to be used.

Today, the number of all-digital facilities is very small due to the high cost of the equipment and the incompatibility between the formats. This cost is driven higher by the need to support multiple pieces of equipment to deal with the multiple, incompatible versions of SDI. If an open standard can be agreed upon, it will be easier for all video professionals to shift to this format.

SSA

Serial Storage Architecture (SSA) was originally developed as a high-speed storage interface. During its development, the definition expanded to include high speed networking capabilities. Today, SSA is primarily sold as SSA-80, which supports a total communications bandwidth between devices of 80 MBps. The next generation of SSA, which is SSA-160, has been designed to double that bandwidth and should be commercialized during 1998.

SSA was designed to bring important pieces of mainframe technology to storage and networking connections, including no single point of failure, data protection and host connectivity. Since it was expected to be connected with storage subsystems in most instances, the cost of the interface was also very important. As a result, SSA is a single chip, CMOS integrated solution that is very inexpensive to implement.

The SSA interface provides enough bandwidth to move multiple video streams around, with or without compression. The SSA-80 interface runs at 200 MHz, with 20 MBps of read and write to any device from two directions. Every node has two ports, and each port handles 20 MBps in and out simultaneously, so there is a total of 80 MBps for every device. SSA-160 calls for twice as much speed, up to 40 MBps in and out for a total of 160 MBps for every device.

With up to 128 devices on a loop, there is room for multiple machines and multiple storage devices to be connected at the same time. SSA also offers enough distance between devices to allow an entire facility to exist on a single loop. The connections can use four-wire (two-pair) twisted pair cables and there can be up to 20 meters between each point-to-point link with pricing comparable to SCSI. If this distance is too small, fiber optic connections can be used to increase the distance to almost a kilometer per node without performance loss.

Because SSA was designed with mainframe technology in mind, it is highly reliable. An SSA loop has no single point of failure and supports auto-configuration and hot swapping of components. This is especially important if SSA is used to create video subsystems. Finally, SSA is a simple and cost-effective solution compared to some other alternatives.

USB

After a number of years of promises and false starts, the Universal Serial Bus (USB) is finally a reality. This is a plug-and-play interface that allows up to 127 peripherals to be connected to a single computer using only one port address and a single interrupt. The downside, at least in the initial version of the specification, is speed. USB supports a maximum bandwidth of only 12 Mbps (1.5 MBps), which makes it the slowest interface in this list.

Of course, USB wasn't designed to be a full-fledged network. Rather, it was designed to allow a large number of devices to plug into a PC with a single interface. These devices can include video cameras, but it is more likely they would include multiple VTRs. If USB ports start to show up on these and other video devices, machine control for an entire facility might be handled using a single computer.

Conclusion

There are a large number of ways that digital video can be captured, compressed, processed and transmitted within a facility today, and more are coming in the near future. To get started, it's important to be sure that the bandwidth required can be met by the network that's desired. As more equipment is added, most of these networks can be expanded. Whichever method is used, the results will be high quality video that doesn't get lost from one generation to the next.

Intra-Facility And Inter-Facility Transmission: Copper Versus Fiber

By George Maier

The movement of digital video from point to point, or from point to multipoint, falls into one of two areas: transport within a facility or transport between facilities. The fundamental technologies are similar in either case, but the methods, protocols and hardware needs vary considerably depending on the task.

Intra-Facility Cable

The advent of SMPTE 259M (ITU-R 601) as the dominant serial digital standard at 270 Mbps has forced facility owners to take a second look at their infrastructure--as it exists now and as it will exist five years from now. The coming of HDTV has forced an even closer examination of facilities and will have a lasting effect on the infrastructure planning process.

Coaxial cables are still the practical solution for many digital applications, but their limitations are becoming apparent. Table 1 is a summary of five types of cables rated for digital video transmission rates. This data was supplied by Belden; however, similar products may be supplied by Com/Scope and others. The pricing shown is suggested retail and may vary at the distributor level.

It can be seen that the best case is 151 meters at 1.5 Gbps with the largest available cable. Newer coaxial designs provide respectable reach at 1.5 Gbps and will work well in a large majority of facilities, but distance limitations at the HDTV end of the performance curve suggest that coaxial cables can no longer support all of the video needs in a facility.

For transmission distances beyond 100 meters (328 feet) only the larger RG-11/U type will meet the requirement. Some facility planners will avoid going to the absolute distance limit of a cable and will build in a 10 to 15 percent safety margin. (Editors' note: Some cable manufacturers already put a 10 percent safety margin on their recommended distances--see Better Cables, Better Distances following this section.) The safety margin is necessary to allow for cable aging, connector losses, physical compression and excessive bending. Once a digital signal drops out, it's gone and there's no way to recover it. The solution for long cable runs is the addition of reclocking distribution amplifiers with auto equalization at the far end of the cable. Such devices extend the range of a run beyond the specified limits of the cable itself. The equalization range of chips used in SDTV and HDTV distribution amplifiers is constantly being improved and we can expect to see these limits increase over time.
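
The derating arithmetic is simple and worth automating when planning a large plant. This sketch applies a safety margin to a cable's rated distance and flags the runs that need a reclocking distribution amplifier; the 151-meter rating is the 1.5 Gbps best case quoted above.

```python
def max_safe_run(rated_meters, margin=0.15):
    """Rated cable distance derated by a 10-15 percent safety margin."""
    return rated_meters * (1.0 - margin)

rated = 151.0                     # best case at 1.5 Gbps, largest cable
for run in (100, 130, 151, 200):
    verdict = "add reclocking DA" if run > max_safe_run(rated) else "cable alone"
    print("%3d m run: %s" % (run, verdict))
```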

In rebuilding facilities to support digital video, many took comfort in the fact that they could use much of the existing copper cable plant in conjunction with SDI capture, switching, edit and playback operations. Many facilities have been upgraded to a 270/360 Mbps infrastructure, but stopped there. One of the major reasons was the promise of mezzanine compression, which has not become a practical reality because mezzanine compression systems are expensive. Currently, the lowest cost MPEG-2 encoder and decoder combinations are priced between $20,000 and $30,000--actually more expensive than a small HDTV router--and the compression would add frame delays at each pass. In addition, most router manufacturers have engineered their HDTV routers to operate from the same control systems that their SDI routers respond to.

A hidden issue in older facilities with 8281 style cabling is that their original connectors were not optimized for 75 ohms and are producing significant reflections and VSWR anomalies at high bit rates, which seriously limit their digital range. Many broadcast and production folks are finding they must overbuild their plants once again to support video data rates above the 270/360 Mbps levels.

Intra-Facility Fiber

Fiber optics have proven especially important in the area of field production, sports, news and entertainment events, because fiber systems can provide far greater reach and fiber cables are light and easy to work with. Side benefits include EMI and noise immunity, ground loop elimination and, most obvious of all, virtually unlimited bandwidth.

The need for low cost fiber components to support DTV/HDTV within the facility has been answered by a growing number of manufacturers. Familiar names like Axon, CSI Math, Force, Leitch, NVISION, Telecast Fiber Systems and others have introduced digital products that are optimized for in-building or campus video transmission.

There are two general categories of optical fiber: multimode and single mode. While both fibers are 125 micrometers (or "microns") in diameter and identical in their outer appearance, upon examination one finds that the inner light-carrying "core" glass layer of the single mode is much smaller than the multimode fiber's core. This 9 micron single mode core, compared to 50 micron or 62.5 micron multimode core, is partially responsible for its virtually unlimited bandwidth. Single mode is the fiber type that is routinely installed outside the plant in telephone and cable TV companies and it is the fiber that is required for high data rates, such as uncompressed 1.5 Gbps HDTV, over long distances.

Multimode fiber is more commonly used for limited distance runs, say up to five or 10 kilometers, of analog video and audio, or just a couple of kilometers for SDTV at 270 Mbps. This is because the multimode fiber permits several modes of light to travel through the core at the same time. While the multimode fiber can send more optical power, it suffers greater dispersion of the high frequency modulated pulses, such as the optical representation of the bits of digital video, and the resultant output pulses can become smeared together, producing an indistinguishable signal. This problem becomes more pronounced as data rates increase and as distances become greater.

Single mode fiber cable is approximately the same price as multimode cable, but transceiver electronics tend to be more expensive, since single mode systems are generally laser based, whereas multimode systems may be LED based. In addition, more care in handling and use is required, since single mode cables are more sensitive to attenuation losses due to improper bending and dirt on connector faces.

Fiber cable construction varies depending on the job at hand. Because fiber strands are so small, they are usually bundled with anywhere from four to over one hundred fibers in the same sheath. Within a facility, standard, flame retardant, PVC jacketed cables are normally used, while Teflon jacketed, plenum rated (high temperature) cables are optional. For outdoor use in campus or metro area applications, gel filled loose tube fiber is recommended. In mobile production applications, an extremely rugged "tactical" fiber is usually the choice. As the name implies, tactical fiber was developed for military applications, but lends itself well to the pounding that field production cables are subjected to.

For certain applications, SMPTE is supporting hybrid cables, which include both fiber and copper strands. Table 2 provides a few reference points for six and eight fiber bundles of single mode in various jacketing schemes. Single mode fiber was chosen over multi-mode due to the bandwidth limitations as explained earlier. This data was gathered from Mohawk Cable, but similar cables are available from Com/Scope and others.

As the chart suggests, fiber is significantly less expensive, lighter and smaller than virtually any coaxial cable available for digital video, but that is only half of the story. To be useful, electrical-to-optical and optical-to-electrical conversion is required, and that is where the majority of the expense lies. The issue for many is how the total expense compares with copper.

As discussed previously, optical hardware is now available from a number of manufacturers that allows you to rack-mount optical interfaces for serial digital transport within a facility. Rapid component development for fiber optic systems has made possible a myriad of low-cost products that were not possible previously.

Figure 1 is a composite of typical fiber and copper applications in an SDTV/HDTV digital facility. In a typical facility, some of the connections will be made by coaxial cable alone, longer runs will need line amplifiers with automatic equalization and reclocking, and others will be fiber optic. Figure 2 shows the relative range of 1694A coaxial cable versus fiber optic cables. We chose an RG-59/U style cable like 1694A as the most popular of the newer designs from both a cost and size standpoint. Of course one could choose to implement an RG-11/U size cable, such as 7731A, but the 0.405 inch diameter would fill a conduit rather quickly.

Figure 3 is a basic view of fiber in the field. Systems like the one outlined in the diagram have been deployed in both 270 Mbps and 1.5 Gbps configurations. Field units like the Viper, offered by Telecast Fiber Systems, include such amenities as intercom and data circuits as well as the capability of being used for any bit rate in the SMPTE 259M and SMPTE 292M recommendations.

Although private fiber is still fairly rare, a growing number of television stations and post production facilities are finding dark (unused) fiber that they can utilize. Figure 4 is representative of several different approaches to fiber optic ATSC STL systems now in operation. In one case, the network downlink is located at the transmitter; about 25 percent of all network affiliates are in a similar situation. If the link from the studio should totally fail, the output of the network decoder could be routed locally to the ATSC encoder, which feeds the DTV transmitter.

In the second case, the reverse is true and the ATSC data stream is generated at the studio, but still needs a path to the transmitter. Again fiber optic transmission systems can be of use, particularly in light of overcrowding in the broadcast auxiliary microwave bands and the constant threat of encroachment by non-broadcast interests.

Copper or Fiber?

There is a constant debate over this issue and the answer is not as clear as it once was. In a situation where a digital facility manager must cover distances that are well within the range of coaxial cables, as defined earlier, the issue is simple. Straight copper coax cable is an easy winner. Once the need for equalizers and reclocking has been established, the choice becomes much more difficult. To provide some guidance, we looked at a hypothetical but realistic situation and drew a comparison.

In a situation where four bi-directional 270 Mbps contribution circuits are to be run 200 meters (656 feet), all that is required is a good quality cable, like 1694A. At some point, these same eight circuits must carry 1.5 Gbps, so a shelf of reclocking distribution amplifiers is needed at each end. The total cost breaks down as follows:

These numbers were arrived at using Belden 1694A cable, Amphenol 75 ohm BNC connectors and Tektronix M9602HD reclocking distribution amplifiers in a MAX-900 shelf for the coaxial cable analysis. The fiber optic analysis used Mohawk 8 fiber PVC cable, AMP duplex SC connectors and the Telecast Python multiple fiber I/O. Interestingly, there is a major difference in price for even this modest scenario. The final decision is left to the user, but the cost of fiber can no longer be an objection in studio infrastructure.

Inter-Facility

The only feasible way to move digital video between facilities is to use either microwave radio or fiber optic systems, and both offer very specific advantages and disadvantages.

Microwave: For many years, microwave has been the best choice for studio to transmitter links and for news gathering, but has fallen out of favor for long haul intercity networks due to the availability of fiber and satellites. As the nation and the world move to digital, microwave will continue to provide a large percentage of the all-important final links to the transmitter and may find new uses as extensions of digital fiber networks.

Familiar names like Microwave Radio Communications, Nucomm and RF Technology have been joined by Alcatel, Itelco and Moseley with new digital products. A number of products appear to be emerging to support any one of several situations a broadcaster might be in.

Baseband Modulators And Demodulators: If you have an existing FM microwave system and would like to use it to carry an ATSC video stream, these devices will allow it by generating a 4FSK signal that is friendly to a directly modulated FM baseband radio. The advantages are simplicity and not having to buy a new radio. The disadvantages are some loss of fade margin and having to dedicate a radio for this purpose.

Digital IF Modulators And Demodulators: In some cases, existing 70 MHz analog FMT and FMR shelves may be replaced by new digital modulators and demodulators. The linearity of the existing radio will play a major role in determining whether it can still be used, needs upgrading or must be replaced by a completely new digital radio. Most companies have been selling "digital ready" radios for some time now and those will require only minor adjustments.

Digital Radios: Digital radios are essentially new, IF or directly modulated digital transmission systems specifically designed for the job at hand. The key factor, as mentioned above, is linearity in the microwave and IF amplifier stages, plus error correction and automatic countermeasures against multipath.

The digital modems that are being offered for video applications can be configured to run in QPSK or QAM modes, depending on the spectral efficiency requirement, as dictated by the data speed and channel width. At 7 GHz, for example, QPSK modulation would be a good choice for 45 Mbps transmission. With a spectral efficiency of 1.66 bits/s/Hz, QPSK provides a robust transmission medium within the allotted 25 MHz channels. At 2 GHz, where the channels are 17 MHz wide and may soon be 12 MHz, a higher order of modulation, like 16-QAM, will be required to pass the same 45 Mbps signal.
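
To put rough numbers on the choice, the Python sketch below picks the lowest-order modulation whose ideal capacity covers a given payload. The bits-per-symbol figures are idealized and ignore filter roll-off and FEC overhead, so treat it as arithmetic, not a path design tool.

    # Rough modulation-order check for a microwave path (illustrative only).
    # Ideal spectral efficiencies are assumed; real systems lose some
    # capacity to filter roll-off and error correction overhead.

    MODES = [("QPSK", 2.0), ("16-QAM", 4.0), ("64-QAM", 6.0)]

    def pick_mode(payload_mbps, channel_mhz):
        """Return the lowest-order mode whose ideal capacity covers the payload."""
        for name, bits_per_hz in MODES:
            if bits_per_hz * channel_mhz >= payload_mbps:
                return name, payload_mbps / channel_mhz  # required efficiency
        return None, payload_mbps / channel_mhz

    # 45 Mbps in a 25 MHz channel at 7 GHz: 1.8 bits/s/Hz needed -> QPSK fits.
    print(pick_mode(45.0, 25.0))
    # 45 Mbps in a 17 MHz channel at 2 GHz: 2.65 bits/s/Hz needed -> 16-QAM.
    print(pick_mode(45.0, 17.0))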

Thanks to years of development for the utility and cellular markets, sophisticated digital radios are readily available with spectral efficiency high enough to allow transmission of 19.4 Mbps in a 5 or 7 MHz wide RF channel and transmission of 45 Mbps in a 10 or 12 MHz RF channel. Although new to the broadcast world, similar technology has been used in common carrier microwave systems for nearly two decades.

Dual Video: With reference to figure 6, several of the companies mentioned earlier are offering a 45 Mbps microwave radio that will simultaneously transport the ATSC data stream at 19.4 Mbps and an NTSC signal that has been digitized and compressed using MPEG-2 encoding equipment. The major advantage of this approach is that both the ATSC data and the NTSC video may be transported in a single RF channel--a must in areas where additional microwave frequencies are not available at 7 or 13 GHz and the STL path is too long for an 18 or 23 GHz radio.

Figure 7 shows an alternate approach to spectrum congestion by Alcatel, which includes adding a new digital radio for ATSC next to the old analog radio in the same band and possibly in the same channel, by using a dual polarized antenna system and added filtering.

Another new variation, shown in figure 8, is called the Twin Stream by Microwave Radio Communications, which combines both digital and analog elements in the same radio. It allows a non-compressed NTSC carrier to be combined with a 16-QAM modulated ATSC carrier in the same channel and on the same polarization. The advantage is in not having to use MPEG-2 compression on the NTSC video.

Fiber Optics: With few exceptions, the broadcaster rarely owns inter-facility fiber. In metro areas, a large percentage of new video circuits are being supplied by local exchange carriers, metro area carriers and, increasingly, cable television companies. Because the local phone company is a regulated carrier, new types of services are slow to materialize. Their unregulated counterparts are quicker to look at new digital opportunities and may decide to push ahead with services if the economics are sound. A substantial number of telco- and CATV-based SDI networks are now providing compressed and non-compressed point-to-point and switched SDI service in most major metro areas.

Companies like ADC, Artel Video Systems, Video Products Group and others have been very active with regulated and non-regulated carriers in providing uncompressed 270 Mbps equipment for broadcast and post production local loop networks. Compression equipment to support 270 Mbps SDI has been supplied by ADC, Alcatel, Barco/RE, Nortel, Synctrix and others. At this point, none of the carriers are discussing 19.4 Mbps ATSC or 1.5 Gbps HDTV transmission at native rates, despite the fact that fiber equipment is readily available for either.

Should you find that leased services are the only option, it is vitally important to negotiate a deal based on video service, not on a data rate. Even though most compressed digital circuits are carried via DS-3 facilities, do not approach a carrier and ask for a DS-3, or you'll pay the DS-3 rate, which is considerably higher than a video tariff that uses DS-3 as its transmission medium. Discuss the nature of the requirement in video terms first. For uncompressed 270 Mbps SDI circuits, tariffs similar to the old analog TV-1 services are now available in major metro areas.

Better Cables, Better Distance for HDTV

By Steve Lampen

Uncompressed, high definition video signals run at a data rate of 1.485 Gbps and a bandwidth of 750 MHz. It is no surprise, therefore, that cables designed to operate at 4.2 MHz for analog video have a much harder time at 750 MHz. These high frequencies require greater precision and lower loss than analog. Where effective cable distances were thousands of feet for analog, the distance limitations are greatly reduced for HD.

When SMPTE first addressed this problem, they looked at the bit error rate at the output of various cables. Their purpose was to identify the "digital cliff", the point where the signal on a cable goes from "zero" bit errors to unacceptable bit errors. This can occur in as little as 50 feet.

The SMPTE 292M committee cut cables until they established the location of this cliff, cut that distance in half, and measured the level on the cable. From there they came up with the standard: where the signal level has fallen 20 dB, that is as far as your cable can go for HD video. It should be apparent, therefore, that these cables can go up to twice as far as their 'recommended' distance, especially if your receiving device is good at resolving bit errors. Of course, you could look at bit errors yourself, and that would determine whether a particular cable, or series of cables, would work or not.
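
Turned into arithmetic, the 20 dB rule gives a simple length budget. In the Python sketch below, the attenuation figure is an assumed round number for an RG-59 style HD cable, not a published specification; substitute the value from your cable's data sheet.

    # Maximum HD-SDI run under the SMPTE "20 dB of loss" rule.
    # The attenuation value is assumed for illustration; use the figure
    # from the manufacturer's data sheet for your actual cable.

    def max_run_feet(atten_db_per_100ft, budget_db=20.0):
        """Distance at which cable loss reaches the allowed budget."""
        return 100.0 * budget_db / atten_db_per_100ft

    # A hypothetical RG-59 style HD cable with 6 dB/100 ft of loss at 750 MHz:
    print(round(max_run_feet(6.0)), "feet")   # -> 333 feet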

There is one other way to test HD cable and that is by measuring return loss. Return loss shows a number of cable faults with a single measurement, such as flaws in the design, flaws in the manufacturing, or even errors or mishandling during installation of a cable. Ultimately, return loss shows the variations in impedance in a cable, which lead to signal reflection, which is the "return" in return loss.

A return loss graph can show things as varied as the wrong impedance plugs attached to the cable, or wrong jacks or plugs in a patch panel. It can also reveal abuse during installation, such as stepping on a cable or bending a cable too tightly, or exceeding the pull strength of the cable. Return loss can even reveal manufacturing errors.

Broadcasters are familiar with VSWR--Voltage Standing Wave Ratio--which is a cousin to return loss. For instance, SMPTE recommends a return loss of 15 dB up to the third harmonic of 750 MHz (2.25 GHz); this is equivalent to a VSWR of 1.43:1. If you know VSWR, you will recognize this as a very large amount of reflection. Others have suggested that 15 dB return loss is insufficient to reveal many circuit flaws.

It is suggested that a two-band approach be taken, since return loss becomes progressively more difficult to achieve as frequencies increase. In the band of 5 to 850 MHz, a minimum of 23 dB would be acceptable (equivalent to a VSWR of 1.15:1), and from 850 MHz to 2.25 GHz a minimum of 21 dB (equivalent to a VSWR of 1.2:1). Some manufacturers are sweeping cables and showing 21 dB return loss out to 3 GHz, which is even better.
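
The conversion between return loss and VSWR is simple enough to compute directly; the short Python sketch below reproduces the figures quoted above.

    # Convert return loss (in dB) to VSWR via the reflection coefficient.

    def vswr_from_return_loss(rl_db):
        gamma = 10 ** (-rl_db / 20.0)    # magnitude of the reflection coefficient
        return (1 + gamma) / (1 - gamma)

    for rl in (15, 21, 23):
        print(f"{rl} dB return loss -> VSWR {vswr_from_return_loss(rl):.2f}:1")
    # 15 dB -> 1.43:1, 21 dB -> 1.20:1, 23 dB -> 1.15:1, matching the text.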

So what cables should you use and what cables should you avoid? Certainly, the standard video RG-59 cables, with solid insulations and single braid shields, fail to meet a number of requirements. First, their center conductors are often tin-plated to help prevent oxidation and corrosion. While admirable at analog video frequencies, tin plating can cause severe loss at HD frequencies. Above 50 MHz, the majority of the signal runs along the surface of the conductor, an effect called "skin effect." What you need is a bare copper conductor, since a tinned wire will have that tin right where the high-frequency signal wants to flow--and tin is a poor conductor compared to copper.

Around the conductor is the insulation, called the "dielectric." The performance of the dielectric is indicated by the "velocity of propagation" listed in manufacturers' catalogs. Older cables use solid polyethylene, with a velocity of propagation of 66 percent. This can easily be surpassed by newer gas-injected foam polyethylene, with velocities in the 80-plus percent range. The higher velocity provides lower high-frequency attenuation.

However, foam is inherently softer than a solid dielectric, so foam dielectrics allow the center conductor to "migrate" when the cable is bent or otherwise deformed. This can lead to greater impedance variations, with a resultant degradation in return loss. Therefore, it is essential that these foam cables use high-density hard-cell foam. The best of these cables exhibit about double the impedance variation of solid cables (±3 ohms foamed versus ±1.5 ohms solid), but with much better high frequency response.

This is truly cutting-edge technology for cables, and the difference can easily be determined by stripping the jacket and removing the braid and foil from short samples of the cables you are considering. Just squeeze the dielectric of each sample. The high-density hard-cell one should be immediately apparent.

Over the dielectric is the shield. Where a single braid was sufficient coverage for analog video, it is not for HD. Older double braid cables have improved shielding, but the ideal is a combination of foil and braid. Foil is superior at high frequencies, since it offers 100 percent coverage at "skin effect" frequencies. Braid is superior at lower frequencies, so a combination is ideal. Braid coverage should be as high as possible. Maximum braid coverage is around 95 percent for a single braid.

The jacket has little effect on the performance of a cable, but a choice of color, and consistency and appearance, will be of concern. There are no standards for color codes (other than red/green/blue indicating RGB-analog video), so you can have any color indicate whatever you want.

Digital Television Broadcasting

By Ron Merrell

Major equipment manufacturers laboring to deliver educational seminars and short courses on the industry's transition to DTV openly admit that answers which worked just a year ago have been revised many times by now, with even more revisions coming.

While it's tempting to jump into the RF plant and start thinking about what brand of new transmitter will surface as your best choice, it's a good idea to back off and start at a much more elementary point in the chain of RF plant: the tower.

The Tower

It's not nearly as exciting, because once you have one, you have it for so many years everyone forgets just how long it's been there. In fact, some stations have towers that are icons of their city's skyline, such as KCMO in Kansas City, MO. Lights strung up each leg to the top of the tower let you know you're in Kansas City.

The reason we suggest starting with the tower is that many of them have been around so long that they've outlived their chief engineers many times over.

With towers that old, who's still around to point out the old gal is already overloaded with antennas for other services? Overloading can mean two things: weight and windloading. Except in very rare cases where a tower was first supporting a minimal antenna and has long since been converted to a very high gain version, basic tower/transmission line/antenna weight is seldom an overlooked problem.

All station engineers fear the day when freezing rain collects on the tower, antenna(s), transmission line(s) and perhaps on the guys. For a short period of time, there can be an incredible weight overload, especially if the tower lives in a climate where freezing rains are rare. In that case, it probably wasn't a consideration in the original design.

What station engineers should be concerned about in the transition to DTV has more to do with what additional services added to the tower will mean, and a little less to do with mother nature playing nasty tricks. What does catch the eye of tower manufacturers and consultants is how much is on the tower. They'll expect to see the transmission line, the antenna and obstruction lights, but antennas for other services will get their attention. The problem is windloading.

Wind pushes against the tower and anything on the tower that the wind hits. In general, towers are rated for the wind speed they can handle. Look at the specs and you'll see it expressed other ways, but that's what it means. So, after various chiefs and administrations over the years have leased space on their towers for additional income or new services of their own, it's possible that many towers are seriously endangered by the extra windloading, or the extra force that will be exerted against the structure as a whole.

Should any of these over-windloaded towers also be subjected to freezing rain, the problem is exacerbated, because the ice adds extra surface area that, in turn, multiplies the windloading on the whole structure and everything tied to it.

The fact is, there are stations on the air today that are so far removed from their original specs that no one on staff knows for sure how much leeway their tower has for making it through the next storm season, let alone how it would fare when a second antenna and transmission line is run up its legs.

The first step in the transition process is to bring in experts who understand how to assess the health of your tower and its ability to carry any added load. Their assessment could drastically alter your plans.

If the inspection reveals an overloaded tower, don't throw in the towel. There are options, as you'll see in the next section of this chapter.

Tower Survey

In some cases, tower consultants such as ITI and companies like Dielectric, Landmark, LeBLANC and Micro Communications can suggest ways to strengthen a tower, or determine how many service antennas and feed lines need to come down. Landmark is known for its tall towers and towers that require lots of ingenuity to construct. Still, the company is well-known for its tower strengthening strategies. According to the company, most towers can be saved.

What could be overlooked is checking into the proposed changes with an eye toward how a new antenna will interact with the old one(s). The first stop in the transition to DTV belongs here, because if the antenna won't stay in the air, you're worse off than finding yourself at the end of the transmitter waiting line.

Over the past four years, attention has been focused on whether or not the transmitter manufacturers can produce transmitters fast enough to meet the demand expected across the industry. A recent manufacturers' survey in Television Broadcast indicated that this shouldn't be a major problem for stations.

However, in some markets, and among the networks, engineers decided that while they contemplated brand buys and partnerships, they should have their towers inspected and assessed for current and future performance. For example, tower and antenna studies have long been concluded on the Mt. Sutro complex in San Francisco and on those atop the World Trade Center in New York.

Sutro has been in operation for decades now, and its design may hold the answer for many broadcasters today. The candelabra-type design, with many stations sharing a common structure, is one way to ensure that no one station needs to bear the sole brunt of new tower expenses. DTV Utah and other cooperative, shared sites are becoming a trend in the industry. While they come loaded with legal baggage, they are gaining popularity, because once the legalities are worked out, the expenses are shared. What's more, in this arrangement, no one station lays sole claim to the very best site.

Whatever approach you choose, the problem becomes all the more focused when you realize that there isn't exactly a glut of tower installers lined up for the big push. Some have suggested that this is just a perceived problem that could be overcome by attracting installers from the cellular industry.

While that's an interesting proposal, most chiefs and directors of engineering get white-knuckled at such talk.

Once again, the reason for making the tower the first priority is that you might be okay with additional loading. Then again, maybe not. So if any changes are needed, or if a second tower is appropriate, and you aren't looking into it now, you will be waiting in a line that's a lot longer than the one originally predicted for DTV transmitters. Depending on how deep a station can afford to go into financing, having a thorough understanding of your tower's condition and capabilities could affect how you enter the DTV transition.

For example, if money isn't available for a full, near-term commitment because of your tower's capabilities, you might consider putting up a shorter, interim tower and running lower power for the time being. There are several scenarios along this line that could be played out sensibly, but only when the tower's actual condition is known.

Checks And Balances

While technical questions abound concerning adjacent channel separation and interference, cable head-end reception, area coverage versus fall-off, efficiencies, pattern changes, STL linking and the headaches of sorting out who's affecting what on multi-antenna structures, finding some relief isn't as daunting as it might at first seem.

Among these considerations, cable system problems should get a leg up, based on the findings of experimental DTV stations such as WHD (Washington, DC). They've already run head-on into that experience, but it's something to keep on your agenda of subjects to be considered. Also, there's a whole other area to investigate, as it concerns the connection of the station to its remote transmitter site. You'll have to check out the potential for fiber optics, additional microwave STL capabilities, as well as ENG and STL links.

In fact, even the Canon CanoBeam lightwave link could become a player, as it is another linking alternative.

Everybody Grab A Partner

Partnering is a term that surfaced in the industry just a few years ago, and now it's almost a byword for the transition to DTV. The term is self-defining, because partners are brought together for their common goals. Long before it became a popular expression, engineers had grown comfortable with a partnership normally (but not always) put together by the transmitter manufacturer of choice. This would include the antenna manufacturer or supplier (who also dealt, if necessary, with a transmission line and RF components company) and the tower company. Even if a new tower wasn't needed, their reps should always have been involved, if for nothing more than to assure that the tower's integrity wasn't compromised. But partnering in the late 1990s has come to mean something broader: the old team just mentioned, joined first by a cast of integrators/installers, has now also been joined by production and post production players who will supply new equipment that is compatible, so that everything works for the benefit of all.

For perhaps the first time in the history of television broadcasting, a multitude of chief engineers and directors of engineering are asking for and getting help to make certain that all video chain possibilities are explored and that all plant knowledge is shared toward the common end of developing a very sophisticated video and RF chain. Manufacturers holding DTV seminars with these station engineers are quick to admit that many very elementary questions are being asked--questions that in the future will probably be answered in a variety of ways by some form of partnering.

And don't think that because the transmitter manufacturer you prefer won't be making forays into other product areas, it can't play a major role for your station. Its ability to cross product lines to meet your needs is waiting for your test questions.

One of the major misconceptions is that only the megabuck RF manufacturers are capable of putting together a team deep enough to satisfy your RF plant and studio needs. This just isn't true.

Harris can offer (and will offer even more) studio-type products of their own along with RF plant equipment. Comark has struck agreements with a number of well-known studio box manufacturers. And in some cases, it's likely that both Harris and Comark will be involved in equipment designs on units that are not now in existence.

But when you check out the alliances and partnerships being forged by the other manufacturers, you'll discover they, too, have the ability to bring a team together for a turnkey system. It would be a huge mistake, for example, to assume that companies such as Acrodyne, Automated Broadcast Systems (ABS), Continental Electronics, EMCEE, Itelco, Larcan and others can't or aren't interested in working out turnkey contracts.

As this handbook went to press, EMCEE had acquired ABS and its high power UHF transmitter lineup.

These companies, working with systems integrators who have vast broadcast experience, can help you set up a partnership and also make arrangements on the financial end to make certain that no stations will be left behind for lack of financing.

As everyone knows, there have been many bottom line transition estimates. It isn't cheap. But most forms of partnerships can offer capital financing on some or all of the equipment required for the transition to DTV. If you find this turn of events a bit unusual, consider that if partnerships were left to the black box studio manufacturers, the opportunity for incompatibility would be maximized. And there already is some history here of successful turnkey operations that were forged by RF manufacturers.

Enter The Transmitter

You won't get very far into transmitter discussions before you run head-on into the 8-VSB exciter, and this is a very technical subject. So for a detailed technical description of 8-VSB, see What Exactly Is 8-VSB Anyway? later in this chapter.

Depending on how well financed they are, stations are considering everything from putting their names on the list for one of the first transmitters all the way to asking about retrofitting. Retrofitting in most cases is like grasping at straws. After all, if the main transmitter is retrofitted, the alternate main transmitter would be put back on the air for NTSC, where the majority of the viewers will be. And there are so many old warhorse main transmitters still chugging along where, in many cases, the alternate is either a newer but lower power rig, or it's good only for emergencies.

However, don't dismiss retrofitting altogether. At some point, stations will want to consider it for the transmitter forced into the alternate role because of a DTV transmitter purchase. It'll be a consideration at the point where the station is mandated to be DTV all the way.

Financing, discussed earlier under partnering, plays its first role here, because if it can't be arranged by the group or the individual station to cover the big transition numbers, RF can be an area of relief. A number of transmitter manufacturers offer lower power transmitters, and they can put you in a partnership for antennas with high gain and the appropriate transmission lines, along with installers. The savings against a full power RF installation could be substantial, perhaps allowing some money in the budget to be moved into the studio chain.

Along these same lines, solid state becomes a cost/operational question. If you're thinking in this direction, the operating costs for solid state (at least currently) versus IOTs cross over at 20 kW. Above that point, the choice falls to the IOTs, and you may recall that EEV introduced its digital IOT tube recently, so even these devices are changing. Watch for more developments along those lines at future NABs. It'll surprise everyone if Silicon Carbide can change the crossover point, but that's how NAB news is made. While Silicon Carbide awaits its turn, LDMOS is now the solid-state device of choice.

Since our last edition, Acrodyne has built its first IOT transmitter and brought in Rohde & Schwarz to shore up its solid state medium and low power lineup. The Rohde & Schwarz 8-VSB modulator will sit at the heart of the new Acrodyne IOT transmitter. New to the American television market, Continental Electronics is offering a cutting-edge transmitter designed by Telefunken, a company that traces its television roots to the very beginning of television. The choices are compounded by Litton's entry, the Constant Efficiency Amplifier, a device that combines a klystron with multistage depressed collector (MSDC) technology.

Links

Again, a warning: don't get sidetracked with production equipment until you can at least pass along a DTV program through the RF chain...all the way through it.

The STL is the most important link, as both an ATSC program stream and an NTSC program must go to the transmitter site (assuming the ATSC and NTSC transmitters are co-located). The problem is that, typically, there is only one link, with insufficient bandwidth to carry both signals. One way around this is to use an STL based on a single DS-3 (45 Mbps) or equivalent channel and multiplex the ATSC and NTSC streams together for STL transport. At the transmitter site, the DS-3 signal is demultiplexed into separate ATSC and NTSC program streams and fed to the appropriate exciter/transmitter.
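
A quick budget check shows why one DS-3 can do the job. In the Python sketch below, the multiplexing overhead allowance is our assumption for illustration; the DS-3 line rate and ATSC transport rate are standard figures.

    # Rough payload budget for carrying ATSC plus compressed NTSC on one DS-3.

    DS3_RATE_MBPS  = 44.736   # standard DS-3 line rate
    ATSC_RATE_MBPS = 19.39    # ATSC transport stream
    MUX_OVERHEAD   = 1.0      # assumed allowance for multiplex framing

    ntsc_budget = DS3_RATE_MBPS - ATSC_RATE_MBPS - MUX_OVERHEAD
    print(f"Left over for compressed NTSC: ~{ntsc_budget:.1f} Mbps")
    # ~24.3 Mbps -- ample for a high-quality MPEG-2 encoding of NTSC.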

History has shown us that, left to the production box manufacturers, incompatibility could stop this revolution as surely as a major stock market crash. Transmitter manufacturers, sometimes acting as catalysts, sometimes as entrepreneurs, turned this into a bull market technology through partnering and sheer will.

Acting in this role, whatever their individual goals, all RF manufacturers have helped form the nucleus of partnerships that won't tolerate incompatibilities. And, as you'll begin to see at upcoming NAB conventions, where some RF equipment manufacturers are hesitant to join the team, others will introduce equipment you'd have expected from other areas of the equipment exhibition halls.

But, lest you get lost again in the whirlwind of announcements that will be forthcoming, take another look at that tower. If DTV won't work here, it makes little difference what else you have in mind.

Repacking: So Much Fun, Let's Do It Again

The primary reason that broadcasters are being "loaned" a second channel is so television transmission can be converted from analog to digital. The theory is that at some time in the future (tentatively 2006), analog broadcasting will be shut off and the spectrum used by analog broadcasting will be "recaptured" for auction by the federal government.

Currently, the television broadcast spectrum is filled with mostly analog stations. In a few years, we'll have both analog and digital stations throughout the spectrum. In addition, we have taboo channels and must avoid co-channel and adjacent-channel interference issues.

When the analog channels are returned, the television broadcast spectrum will be marked with holes where those stations used to be. This is not very efficient. So the plan is to take all the digital channels and "repack" them into a smaller slice of the spectrum. While "virtual" channel numbers will remain the same (just as a digital channel number is based on an analog channel number--see DTV Transmission Realities), the actual frequency will most likely change.

Does that mean that broadcasters will have to buy a new transmitter again? Possibly. Or at least some components of their transmission system.

Repacking will mean spectrum efficiency for the federal government, but it will mean a second DTV transmission transition for broadcasters.

DTV Interference: New Channels, New Problems

By Mark J. Pescatore

When WFAA, the ABC affiliate in the Dallas/Ft. Worth area, turned on its DTV transmitter in February 1998, their signal interfered with heart monitors in nearby hospitals. This was not a life-threatening development, but it served as a wake-up call for the broadcast industry.

Many broadcast industry insiders weren't surprised by the interference, since so many secondary and unlicensed pieces of equipment use parts of the broadcast spectrum for transmission. In fact, these experts promise many more of the same types of interference problems in the coming years. The problems aren't restricted to telemetry devices--your own wireless microphones could also fall victim to the new DTV channels (better check those frequencies). Plus, a select few markets with new DTV channels 3 or 4 might experience interference with cable set-top boxes and other devices (like video games and VCRs) that use an RF output modulated on channels 3 or 4. (The FCC believes such interference is unlikely, especially if the equipment has channel-switchable capabilities.)

There are more than 4,000 hospitals across the country, and each one has the potential to be hindered by new DTV transmissions. How many health care facilities will be affected in your area when DTV comes to town?

Since there were no DTV sets on the market when WFAA ran into interference problems, staying off the air was a relatively painless way for the station to help solve the problem. By the time your local stations are ready to make the jump to DTV, though, there will probably be an audience out there. Then, the thought of turning off a signal for a week might seem less like charity and more like financial suicide--especially if those stations have already sold air time.

And testing a DTV signal doesn't guarantee that all interference will be detected. Not all of the interference with WFAA's signal was reported (or discovered) when the station first signed on with its digital signal. In fact, WFAA had to turn off its signal three different times to solve separate interference problems with three health care facilities.

Solving this problem isn't about assessing blame. After all, broadcasters have done nothing wrong. They've been allocated a portion of the spectrum for digital television transmission, and they are legally broadcasting their signal. In this case, the television station is the primary service. That means if the hospitals want to continue to use their monitors, they are going to have to change their frequencies or change their equipment, neither of which is a "quick fix" option.

In a joint statement on March 25, 1998 the FCC and the Food and Drug Administration (FDA) outlined courses of action to help prevent this interference in the future. For its part, the FCC will be working to ensure that TV broadcasters communicate with area health care facilities about the potential problem, and will ask manufacturers to help their customers determine if they may be affected by new DTV channels assignments. The FCC will also provide information about spectrum sharing and an area-specific DTV channel listing on their Web site, www.fcc.gov.

Meanwhile, the FDA has sent a public health advisory to all U.S. hospitals and nursing homes, and will work with manufacturers to have equipment labeled to alert users about potential interference. Together, both agencies will explore long-term spectrum needs for medical devices in an effort to avoid future interference problems, and, according to the statement, will "work with equipment manufacturers and the health care community to consider various long term technology improvements that might ameliorate the interference problem."

Despite this effort by the FCC and FDA, broadcasters need to take the initiative to work with the hospitals and caregivers in their area before their DTV signal hits the airwaves. This means taking the extra time to send a member of the engineering staff to the local hospitals to explain the potential interference in detail. Not only does this promote a caring image in the community, it helps stations avoid negative publicity (imagine all the accusatory headlines like "TV Station Is Killing Patients" you won't have to endure). A little preventative engineering assessment today helps keep the doctor(s) away.

DTV Transmission Realities

By Michael Silbergleid

Quality. Is that why we're doing DTV? Or is it other non-traditional television services like data broadcasting and digital coupons? Is it multicasting or HDTV? Or is it all of these things? Or none, since television broadcasters, to stay in television broadcasting, must do this by law?

As an industry, broadcasters will be able to do a great many things with the 19.39 Mbps that can go into the 6 MHz of bandwidth the FCC has allocated for digital television. We will have crystal clear CD-like sound and, if we so choose, high definition film-like television. But there is one thing that can potentially cause havoc with all of this--compression.

We've been living with compression since the beginning of television--interlace itself is a form of compression. And now we have the ability to trade interlace for progressive, but to transmit that signal to the home, we'll still have to use MPEG-2 compression.

Through the years, there has been research that shows interlace compresses better than progressive. Not to be outdone, there has also been research that shows that progressive compresses better than interlace. So now what? Do we produce the best picture we possibly can and hope that transmission doesn't muck it all up? Basically, yes. And it will take some trial and error as the encoders get better. If you've ever seen a DBS signal, you've probably seen macroblocks--a compression artifact.

In 1997 and 1998, ATSC encoders used in experimental and volunteer DTV/HDTV stations did not encode signals perfectly. Critical viewers (experts--not consumers) could see compression artifacts as macroblocks within the picture. This was especially evident in fast-action programming like basketball.

The big question is: What can the encoder handle, and specifically what will it choke on?

Which leads the purchaser of an ATSC encoder (or encoders if multicasting will occur) to wonder if it will come with a "no choke" guarantee, or if viewers will even see the same picture that we as professionals will see.

Display Technology

The Consumer Electronics Association (CEA) has stated that the digital television sets made by their members will display all of the formats in the ATSC's Table 3 on whatever display type the television actually has (this means that HDTV signals will be down-converted to SDTV if an SDTV display is what is in the television).

If the TV set is a 4:3 display, how will 16:9 images be displayed? Decoders are capable of sending to the display: letterbox, center protected area, and pan and scan. In theory, the choice is up to the consumer; in reality, the choice is made by the broadcaster.

Today's displays are interlace and 30 frames per second. What if the original signal is something different? Well, whatever is sent will have to be converted to whatever the display is. So although new digital television sets may be capable of displaying 24, 30 or 60 frames in progressive or interlace, today's televisions with a digital television set-top box will still only display 30 frames per second interlace. What this means is that 3:2 pulldown would move from the telecine to the ATSC decoder (see the sketch below). And if you believe that 720 progressive looks better than 1080 interlace, but the public believes that 1080 interlace is a better picture than 720 progressive because 1080 is a larger number than 720, then we'll see a great deal of interlace digital television displays in a great many houses.
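
The 3:2 pulldown cadence mentioned above is easy to express in code: alternate film frames are held for two fields, then three, so four 24 fps film frames become ten 60 Hz fields. A minimal Python sketch:

    # 3:2 pulldown: map 24 frame/sec film to 60 field/sec interlaced video.

    def pulldown_32(frames):
        fields = []
        for i, frame in enumerate(frames):
            repeat = 2 if i % 2 == 0 else 3    # the alternating 2-3 cadence
            fields.extend([frame] * repeat)
        return fields

    print(pulldown_32(["A", "B", "C", "D"]))
    # -> ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
    # 4 film frames become 10 fields: 24 fps x 10/4 = 60 fields/sec.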

Channel Numbers

The way we talk about channel numbers in the DTV world has also changed--we now have virtual channels. The ATSC, in document A/65, specifies how the new numbering system works. There are two types of channel numbers: the major channel number and the minor channel number (or sub-channel number). A "virtual channel number" must be associated with a major and a minor channel number, with the minor channel number on the right-hand part of the "virtual channel number," such as 4-2. To state that another way, the major channel number, along with the minor channel number, acts as the user's reference number for the virtual channel.

Major channel number: Numbered between 1 and 99. In the U.S., numbers 2 to 69 are for ATSC television purposes (using the same channel number as the station's NTSC analog license, or the ATSC channel number if the broadcaster does not have an NTSC license), and numbers 70 to 99 are for other services (such as data) and for repositioning other services within the ATSC multiplex (for example, a local broadcaster transmitting community college lectures in its bit stream may want to use a major channel number different from its own for the virtual channel carrying the lectures).

Minor channel number: Numbered between 0 and 999 (one thousand channels). In the U.S., the zero minor channel number is the NTSC analog channel (ATSC channel 5 would be ATSC channel 5-0). Numbers 1 to 99 are used for ATSC television and ATSC audio-only services while numbers 1 to 999 can be used for data services. (Note that receivers can go up to minor channel number 1,024.)
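
The rules above are easy to capture in code. The Python helper below is a hypothetical illustration of the U.S. constraints as just described--it is not the A/65 table syntax, and the function name and structure are ours.

    # Hypothetical sanity check of a U.S. virtual channel number, following
    # the rules described above (not the actual A/65 bit-level syntax).

    def check_virtual_channel(major, minor, is_data_service=False):
        if not 1 <= major <= 99:
            raise ValueError("major channel number must be 1-99")
        if major > 69 and not is_data_service:
            raise ValueError("70-99 are reserved for data and other services")
        if minor == 0:
            return f"{major}-0 (the NTSC analog channel)"
        limit = 999 if is_data_service else 99
        if not 1 <= minor <= limit:
            raise ValueError(f"minor channel number must be 1-{limit}")
        return f"{major}-{minor}"

    print(check_virtual_channel(4, 2))   # -> "4-2", a second program service
    print(check_virtual_channel(5, 0))   # -> "5-0 (the NTSC analog channel)"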

Monitoring What People See

There are a large number of television stations that keep inexpensive color and black and white television sets in their studio and master control rooms so that staff can monitor more closely what a viewer is seeing at home. In the digital television world, with the varying quality of decoders that will be available to consumers in digital television sets and set-top boxes, what is the station to do--monitor every possible display? No.

What a station should do is look at off-air signals with three levels of decoders--the best, the mid-range, and the least expensive. And looking at the station's signal from your cable company or companies (and whose decoder will they have if they convert to analog?) will be just as important.

If the cable company feed is less than adequate, will the broadcast station have to "loan" them a better decoder so the signal looks as good as it can? Stations might want to budget for that as well. And if stations multicast, how will that decoder know what to pass on to your single cable channel (with the future of multicasting Must Carry unknown)?

Most people will not notice what a professional can see, so the real question is: What can stations get away with, and does the station truly want to get away with it?

(For a discussion on the relationship between 4:3 and 16:9 screen sizes, see the Measuring Screen Size section in DTV in the Real World.)

What Exactly Is 8-VSB Anyway?

By David Sparano

The U.S. invented it, the ATTC tested it, the FCC accepted it, everyone is talking about it, and soon we'll all get it in our homes--but what is 8-VSB anyway? Simply put, 8-VSB is the RF modulation format utilized by the recently approved ATSC digital television standard to transmit digital bits over the airwaves to the home consumer. Since any terrestrial TV system must overcome numerous channel impairments such as ghosts, noise bursts, signal fades, and interference in order to reach the home viewer, the selection of the right RF modulation format is critical. Being one of the most crucial aspects of the DTV system, the 8-VSB format has received a great deal of attention and scrutiny recently.

In the alphabet soup world of digital communications, there are two big names to remember when thinking about the complete DTV system: 8-VSB and MPEG-2. 8-VSB is the RF modulation format and MPEG-2 is the video compression/packetization format. To convert high definition studio video into a form suitable for over-the-air broadcast, according to DTV standards, two stages of processing are needed: MPEG-2 encoding and 8-VSB modulation. Accordingly, two major pieces of equipment are required: an MPEG-2 encoder and an 8-VSB exciter.

The MPEG-2 encoder takes baseband digital video and performs bit rate compression using the techniques of discrete cosine transform, run length coding and bi-directional motion prediction--all of which are discussed elsewhere in this book. The MPEG-2 encoder then multiplexes this compressed video information together with pre-coded Dolby Digital (AC-3) audio and any ancillary data that will be transmitted. The result is a stream of highly compressed MPEG-2 data packets with a data rate of only 19.39 Mbps. This is by no means a trivial task since the high resolution digital video (or multiple standard resolution video) input to the MPEG-2 encoder could easily have a data rate of one Gbps or more. This 19.39 Mbps data stream is known as the DTV Transport Layer. It is output from the MPEG-2 encoder and input to the 8-VSB exciter.

Although MPEG-2 compression techniques can achieve stunning bit-rate reduction results, still more tricks must be employed to squeeze the 19.39 Mbps DTV Transport Layer signal into a slender six MHz RF channel and transmit it (hopefully without errors) to the eager consumer waiting at home in front of the TV set. This is the job of the 8-VSB exciter.

Figure 1 is a block diagram of a typical 8-VSB exciter. In this section, we will walk through the major processes that occur in the 8-VSB exciter--identifying the major components of the 8-VSB signal and explaining how this signal is generated.

Data Synchronization

The first thing that the 8-VSB exciter does upon receiving the MPEG-2 data packets is to synchronize its own internal circuits to the incoming signal. Before any signal processing can occur, the 8-VSB exciter must correctly identify the start and end points of each MPEG-2 data packet. This is accomplished using the MPEG-2 sync byte. MPEG-2 packets are 188 bytes in length with the first byte in each packet always being the sync byte. The MPEG-2 sync byte is then discarded; it will ultimately be replaced by the ATSC segment sync in a later stage of processing.
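
The sync byte has the fixed value 0x47, so a receiver (or a stream analyzer) can find the packet grid simply by looking for 0x47 recurring every 188 bytes. A minimal Python sketch of that hunt:

    # Lock to the MPEG-2 packet grid by finding the 0x47 sync byte
    # recurring at 188-byte intervals.

    PACKET_LEN = 188
    SYNC_BYTE  = 0x47

    def find_packet_offset(data, packets_to_check=5):
        """Return the offset of the first aligned packet, or -1 if no lock."""
        needed = packets_to_check * PACKET_LEN
        for offset in range(min(PACKET_LEN, max(0, len(data) - needed + 1))):
            if all(data[offset + i * PACKET_LEN] == SYNC_BYTE
                   for i in range(packets_to_check)):
                return offset
        return -1

    stream = b"\x00" * 100 + (b"\x47" + bytes(187)) * 6  # junk, then packets
    print(find_packet_offset(stream))                    # -> 100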

Data Randomizer

With the exception of the segment and field syncs (to be discussed later), the 8-VSB bit stream must have a completely random, noise-like nature. This is because our transmitted signal frequency response must have a flat noise-like spectrum in order to use the allotted channel space with maximum efficiency. If our data contained repetitious patterns, the recurring rhythm of these patterns would cause the RF energy content of our signal to "lump" together at certain discrete points of our frequency spectrum--thereby leaving holes in other parts. This implies that certain parts of our six MHz channel would be over-used while other parts would be under-used. Plus, the large concentrations of RF energy at certain modulating frequencies would be more likely to create discernible beat patterns in an NTSC television set when DTV-to-NTSC interference is experienced.

In the data randomizer, each byte value is changed according to a known pattern of pseudo-random number generation. This process is reversed in the receiver in order to recover the proper data values.
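
The principle can be shown with a byte-wise XOR against a pseudo-random keystream from a linear-feedback shift register. The tap polynomial and seed below are arbitrary textbook choices, not the 16-bit generator actually specified by ATSC; the point is that XORing twice with the same keystream restores the data.

    # Illustrative data randomizer: XOR the payload with an LFSR keystream.
    # (The taps and seed here are arbitrary; ATSC defines its own generator.)

    def prbs_bytes(n, state=0xACE1):
        out = bytearray()
        for _ in range(n):
            byte = 0
            for _ in range(8):
                bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
                state = (state >> 1) | (bit << 15)
                byte = (byte << 1) | bit
            out.append(byte)
        return bytes(out)

    def randomize(payload):
        keystream = prbs_bytes(len(payload))
        return bytes(b ^ k for b, k in zip(payload, keystream))

    packet = bytes(187)                    # an all-zero payload...
    scrambled = randomize(packet)          # ...now looks noise-like
    assert randomize(scrambled) == packet  # XOR twice restores the original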

Reed-Solomon Encoding

Reed-Solomon encoding is a Forward Error Correction (FEC) scheme applied to the incoming data stream. Forward Error Correction is a general term used to describe a variety of techniques that can be used to correct bit errors that occur during transmission. Atmospheric noise, multipath propagation, signal fades, and transmitter non-linearities may all create received bit errors. Forward Error Correction can detect and correct these errors, up to a reasonable limit.

The Reed-Solomon encoder takes all 187 bytes of an incoming MPEG-2 data packet (the packet sync byte has been removed) and mathematically manipulates them as a block to create a sort of "digital thumbnail sketch" of the block contents. This "sketch" occupies 20 additional bytes which are then tacked onto the tail end of the original 187 byte packet. These 20 bytes are known as Reed-Solomon parity bytes.

The receiver will compare the received 187 byte block to the 20 parity bytes in order to determine the validity of the recovered data. If errors are detected, the receiver can use the parity "thumbnail sketch" to locate the exact location of the errors, modify the disrupted bytes, and reconstruct the original information. Up to 10 byte errors per packet can be corrected this way. If too many byte errors are present in a given packet, the parity "thumbnail sketch" no longer resembles the received data block, the validity of the data can no longer be confirmed, and the entire MPEG-2 packet must be discarded.
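
The underlying math is involved, but libraries make the behavior easy to demonstrate. The sketch below uses the third-party Python package reedsolo (pip install reedsolo); with 20 parity bytes it corrects up to 10 byte errors, the same t=10 capability as the ATSC (207,187) code, though the library's default field and generator polynomials are not necessarily those of the ATSC specification.

    # Conceptual Reed-Solomon demo with the "reedsolo" package.

    from reedsolo import RSCodec

    rs = RSCodec(20)                   # append 20 parity bytes per block
    packet = bytes(range(187))         # a stand-in 187-byte MPEG-2 payload
    coded = rs.encode(packet)
    print(len(coded))                  # -> 207 bytes on the wire

    corrupted = bytearray(coded)
    for i in (3, 50, 99, 120, 180):    # inflict 5 byte errors in "transit"
        corrupted[i] ^= 0xFF
    # Recent reedsolo versions return (message, codeword, error positions).
    decoded = rs.decode(bytes(corrupted))[0]
    assert bytes(decoded) == packet    # all five errors repaired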

Data Interleaver

The data interleaver disturbs the sequential order of the data stream and disperses the data throughout time (over a range of about 4.5 msec through the use of memory buffers) in order to minimize the transmitted signal's sensitivity to burst-type interference.

This is the equivalent of spreading all of your eggs (bytes) over many different baskets (time). If a noise burst punches a hole in the signal during propagation and "one basket" (i.e., several milliseconds) is lost, many different segments lose one egg instead of one data segment losing all of its eggs. This is known as time diversity.

Data interleaving is done according to a known pattern; the process is reversed in the receiver in order to recover the proper data order.
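
A simple block interleaver shows the egg-spreading idea in a few lines of Python. ATSC itself uses a 52-segment convolutional interleaver, so this sketch illustrates only the time-diversity principle, not the actual structure.

    # Conceptual block interleaver: write bytes in rows, transmit in columns,
    # so a burst of consecutive channel errors lands on bytes that were far
    # apart in the original stream.

    def interleave(data, rows, cols):
        assert len(data) == rows * cols
        return bytes(data[r * cols + c] for c in range(cols) for r in range(rows))

    def deinterleave(data, rows, cols):
        assert len(data) == rows * cols
        return bytes(data[c * rows + r] for r in range(rows) for c in range(cols))

    block = bytes(range(24))
    sent = interleave(block, rows=4, cols=6)
    assert deinterleave(sent, rows=4, cols=6) == block
    # A 4-byte burst in "sent" now touches four widely spaced bytes of "block".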

Trellis Encoder

Trellis coding is yet another form of Forward Error Correction. Unlike Reed-Solomon coding, which treats the entire MPEG-2 packet simultaneously as a block, trellis coding is an evolving code that tracks the progressing stream of bits as it develops through time. Accordingly, Reed-Solomon coding is known as a form of block code, while trellis coding is a convolutional code.

For trellis coding, each 8-bit byte is split up into a stream of four 2-bit words. In the trellis coder, each 2-bit word that arrives is compared to the past history of previous 2-bit words. A 3-bit binary code is mathematically generated to describe the transition from the previous 2-bit word to the current one. These 3-bit codes are substituted for the original 2-bit words and transmitted over-the-air as the eight level symbols of 8-VSB (3 bits = 2³ = 8 combinations or levels). For every two bits that go into the trellis coder, three bits come out. For this reason, the trellis coder in the 8-VSB system is said to be a 2/3 rate coder.

The trellis decoder in the receiver uses the received 3-bit transition codes to reconstruct the evolution of the data stream from one 2-bit word to the next. In this way, the trellis coder follows a "trail" as the signal moves from one word to the next through time. The power of trellis coding lies in its ability to track a signal's history through time and discard potentially faulty information (errors) based on a signal's past and future behavior.

This is somewhat like following one person's footsteps through the snow on a busy sidewalk. When the trail becomes confused with other tracks (i.e., errors are received), the trellis decoder has the ability to follow several possible "trails" for a few footprints and make a decision as to which prints are the correct ones. (Note: change this analogy to "footprints in the sand on a crowded beach" if you are reading this in a warm climate.)
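
The flavor of a 2/3 rate coder can be captured with a toy encoder: one bit of each pair passes through untouched while the other is expanded into two bits that depend on the encoder's running state. The state machine and symbol mapping below are deliberately simplified inventions; the real ATSC precoder, encoder and mapping are defined in the A/53 standard.

    # Toy 2/3 rate trellis-style encoder (illustrative only; the actual
    # ATSC precoder/encoder and symbol mapping are specified in A/53).

    def trellis_encode(two_bit_words):
        state = 0
        symbols = []
        for b1, b0 in two_bit_words:
            z2 = b1                    # uncoded bit passes straight through
            z1 = b0 ^ state            # coded bit, mixed with history
            z0 = state                 # coded bit, pure history
            state = b0                 # advance the (toy) state machine
            level = (z2 << 2) | (z1 << 1) | z0   # 3 bits -> one of 8 levels
            symbols.append(2 * level - 7)        # map 0..7 onto -7,-5,...,+7
        return symbols

    # Two input bits in, one 8-level symbol (three bits) out, every time:
    print(trellis_encode([(1, 0), (0, 1), (1, 1)]))   # -> [1, -3, 3]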

Sync and Pilot

The next step in the signal processing chain is the insertion of the various "helper" signals that aid the 8-VSB receiver in accurately locating and demodulating the transmitted RF signal. These are the ATSC pilot, segment sync, and frame sync. The pilot and sync signals are inserted after the data randomization, Reed-Solomon coding, data interleaving and trellis coding stages so as not to destroy the fixed time and amplitude relationships that these signals must possess in order to be effective.

Recovering a clock signal in order to decode a received waveform has always been a tricky proposition in digital RF communications. If we derive the receiver clock from the recovered data, we have a sort of "chicken and egg" dilemma. The data must be sampled by the receiver clock in order to be accurately recovered. The receiver clock itself must be generated from accurately recovered data. The resulting clocking system quickly "crashes" when the noise or interference level rises to a point that significant data errors are received.

When NTSC was invented, the need was recognized to have a powerful sync pulse that rose above the rest of the RF modulation envelope. In this way, the receiver synchronization circuits could still "home in" on the sync pulses and maintain the correct picture framing--even if the contents of the picture were a bit snowy. (Everyone saw the need for this except the French; sync there is the weakest part of the signal--vive la différence). NTSC also benefited from a large residual visual carrier (caused by the DC component of the modulating video) that helped TV receiver tuners zero in on the transmitted carrier center frequency.

8-VSB employs a similar strategy of sync pulses and residual carriers that allows the receiver to "lock" onto the incoming signal and begin decoding, even in the presence of heavy ghosting and high noise levels.

The first "helper" signal is the ATSC pilot. Just before modulation, a small DC shift is applied to the 8-VSB baseband signal (which was previously centered about zero volts with no DC component). This causes a small residual carrier to appear at the zero frequency point of the resulting modulated spectrum. This is the ATSC pilot. This gives the RF PLL circuits in the 8-VSB receiver something to lock onto that is independent of the data being transmitted.

Although similar in nature, the ATSC pilot is much smaller than the NTSC visual carrier, consuming only 0.3 dB or 7 percent of the transmitted power.
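
In code, pilot insertion is nothing more than a constant added to every baseband symbol before modulation. The 1.25 offset below is the commonly cited ATSC figure, and the power arithmetic reproduces the numbers just quoted.

    import math

    # Pilot insertion: shift the 8-level baseband symbols up by a small DC
    # offset; after modulation the offset appears as a residual carrier.

    LEVELS = (-7, -5, -3, -1, 1, 3, 5, 7)
    PILOT_OFFSET = 1.25                # commonly cited ATSC pilot offset

    def add_pilot(symbols):
        return [s + PILOT_OFFSET for s in symbols]

    data_power  = sum(s * s for s in LEVELS) / len(LEVELS)   # = 21.0
    pilot_power = PILOT_OFFSET ** 2                          # = 1.5625
    print(f"pilot adds {pilot_power / data_power:.1%} of the data power, "
          f"or {10 * math.log10(1 + pilot_power / data_power):.2f} dB")
    # -> about 7.4% and 0.31 dB, consistent with the figures quoted above.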

The other "helper" signals are the ATSC segment and frame syncs. An ATSC data segment is comprised of the 187 bytes of the original MPEG-2 data packet plus the 20 parity bytes added by the Reed-Solomon encoder. After trellis coding, our 207 byte segment has been stretched out into a stream of 828 8-level symbols. The ATSC segment sync is a repetitive four symbol (one byte) pulse that is added to the front of the data segment and replaces the missing first byte (packet sync byte) of the original MPEG-2 data packet. Correlation circuits in the 8-VSB receiver home in on the repetitive nature of the segment sync, which is easily contrasted against the background of completely random data. The recovered sync signal is used to generate the receiver clock and recover the data. Because of their repetitive nature and extended duration, the segment syncs are easy for the receiver to spot. The result is that accurate clock recovery can be had at noise and interference levels well above those where accurate data recovery is impossible (up to 0 dB SNR--data recovery requires at least 15 dB SNR). This allows for quick data recovery during channel changes and other transient conditions. Figure 2 shows the make-up of the ATSC data segment and the position of the ATSC segment sync.

An ATSC data segment is roughly analogous to an NTSC line; ATSC segment sync is somewhat like NTSC horizontal sync. Their duration and frequencies of repetition are, of course, completely different. Each ATSC segment sync lasts 0.37 µsec; NTSC sync lasts 4.7 µsec. An ATSC data segment lasts 77.3 µsec; an NTSC line, 63.6 µsec. A careful inspection of the numbers involved reveals that the ATSC segment sync is somewhat more "slender" when compared to its NTSC counterpart. This is done to maximize the active data payload and minimize the time committed to sync "overhead."

Three hundred and thirteen consecutive data segments are combined to make a data frame. Figure 3 shows the make-up of an ATSC data frame. The ATSC frame sync is an entire data segment that is repeated once per frame (24.2 msec.) and is roughly analogous to the NTSC vertical interval. (FYI: The NTSC vertical interval occurs once every 16.7 msec.) The ATSC frame sync has a known data symbol pattern and is used to "train" the adaptive ghost-canceling equalizer in the receiver. This is done by comparing the received signal with errors against the known reference of the frame sync. The resulting error vectors are used to adjust the taps of the receiver ghost-canceling equalizer. Like the segment sync, the repetitive nature of the frame sync, and the correlation techniques used in the 8-VSB receiver, allow frame sync recovery at very high noise and interference levels (down to 0 dB SNR).

The robustness of the segment and frame sync signals permits accurate clock recovery and ghost-canceling operation in the 8-VSB receiver--even when the active data is completely corrupted by the presence of strong multipath distortion. This allows the adaptive ghost-canceling equalizer "to keep its head" and "hunt around in the mud" in order to recover a useable signal--even during the presence of strong signal echoes.

AM Modulation

Our eight-level baseband signal, with syncs and DC pilot shift added, is then amplitude modulated on an intermediate frequency (IF) carrier. With traditional amplitude modulation, we generate a double sideband RF spectrum about our carrier frequency, with each RF sideband being the mirror image of the other. This represents redundant information and one sideband can be discarded without any net information loss. This strategy was employed to some degree in creating the vestigial lower sideband in traditional NTSC analog television. In 8-VSB, this concept is taken to greater extremes with the lower RF sideband being almost completely removed. (Note: 8-VSB = 8 level--Vestigial Side Band.)

(There are several different ways to implement the AM modulation, VSB filtering, and pilot insertion stages of the 8-VSB exciter, some of which are completely digital and involve direct digital synthesis of the required waveforms. All methods aim to achieve the same results at the exciter output. This particular arrangement was chosen in the interest of providing a clear, easily understandable, signal flow diagram.)

Nyquist Filter

As a result of the data overhead added to the signal in the form of forward error correction coding and sync insertion, our data rate has gone from 19.39 Mbps at the exciter input to 32.28 Mbps at the output of the trellis coder. Since 3 bits are transmitted in each symbol of the 8-level 8-VSB constellation, the resulting symbol rate is 32.28 Mbps / 3 = 10.76 million symbols/sec. By virtue of the Nyquist Theorem, we know that 10.76 million symbols/sec can be transmitted in a VSB signal with a minimum frequency bandwidth of 1/2 × 10.76 MHz = 5.38 MHz. Since we are allotted a channel bandwidth of 6 MHz, we can relax the steepness of our VSB filter skirts slightly and still fall within the 6 MHz channel. This permissible excess bandwidth (represented by α, the Greek letter alpha) is 11.5 percent for the ATSC 8-VSB system. That is, 5.38 MHz (minimum bandwidth per Nyquist) + 620 kHz (11.5 percent excess bandwidth) = 6.00 MHz (channel bandwidth used). The higher the alpha factor used, the easier the hardware implementation is, both in terms of filter requirements and clock precision for sampling.
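
The arithmetic in this paragraph can be traced end to end in a few lines of Python. The accounting below is simplified--sync bytes and sync segments are folded in as overhead ratios rather than modeled at their exact insertion points--but it lands on the same numbers.

    # Trace the ATSC payload from transport rate to occupied bandwidth.
    # (Simplified: sync overhead is applied as ratios, not at the exact
    # points in the chain where the syncs are inserted.)

    rate = 19.39                # Mbps, MPEG-2 transport layer in
    rate *= 207 / 188           # drop the sync byte, add 20 R-S parity bytes
    rate *= 3 / 2               # 2/3 rate trellis coding: 3 bits out per 2 in
    rate *= (832 / 828) * (313 / 312)   # segment sync and frame sync overhead
    print(f"channel bit rate: {rate:.2f} Mbps")       # -> ~32.28 Mbps

    symbol_rate = rate / 3      # 3 bits per 8-level symbol
    print(f"symbol rate: {symbol_rate:.2f} Msym/s")   # -> ~10.76 Msym/s

    nyquist_bw = symbol_rate / 2                      # VSB minimum bandwidth
    print(f"{nyquist_bw:.2f} MHz + 11.5% excess = "
          f"{nyquist_bw * 1.115:.2f} MHz occupied")   # -> ~6.00 MHz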

The resulting frequency response after the Nyquist VSB filter is shown in figure 4.

This virtual elimination of the lower sideband, along with the narrowband filtering of the upper sideband, creates very significant changes in the RF waveform that is ultimately transmitted. For the NTSC-hardened veteran, there is a great temptation to imagine the 8-VSB RF waveform as being a sort of "8-step luminance stairstep" signal transmitting the eight levels of 8-VSB. Unfortunately, there is a fundamental flaw with this notion. As figure 5 illustrates, such a crisp stairstep signal with "squared off" abrupt transitions would generate a frequency spectrum that is far too wide for our single 6 MHz channel. A "square symbol pulse"-type signal generates a rich spectrum of frequency sidelobes that would interfere with adjacent channels.

We know that this type of RF waveform is incorrect since our Nyquist VSB filter has already pared our RF spectrum down to a slender 6 MHz channel.

As any video or transmitter engineer knows, when a square pulse is frequency bandlimited, it will lose its square edges and "ring" (oscillate) in time before and after the initial pulse. For our digital 8-level signal, this would spell disaster as the pre- and post-ringing from one symbol pulse would interfere with the preceding and following pulses, thereby distorting their levels and disrupting their information content.

Fortunately, there is still a way to transmit our 8-VSB symbol pulses if we observe that the 8-level information is only recognized during the precise instant of sampling in the receiver. At all other times, the symbol pulse amplitude is unimportant and can be modified in any way we please--so long as the amplitude at the precise instant of sampling still assumes one of the required eight amplitude levels.

If the narrowband frequency filtering is done correctly according to the Nyquist Theorem, the resulting train of symbol pulses will be orthogonal. This means that at each precise instant of sampling, only one symbol pulse will contribute to the final RF envelope waveform; all preceding and following symbol pulses will be experiencing a zero crossing in their amplitude. This is shown in figure 6a. In this way, when the RF waveform is sampled by the receiver clock, the recovered voltage will represent only the current symbol's amplitude (one of the eight possible levels).

At all times in-between the instants of sampling, the total RF envelope waveform reflects the addition of the "ringing" of hundreds of previous and future symbols (since all symbols have non-zero amplitudes between sampling times). Note that, in the interest of simplicity, figure 6a shows our narrowband symbol pulses as ringing for only 10 sampling periods; in reality they ring for a much longer time. These non-zero values (between sampling times) from hundreds of symbols can add up to very large signal voltages. The result is a very "peaky" signal that most closely resembles white noise. This is shown in figure 6b. The peak-to-average ratio of this signal can be as high as 12 dB, although RF peak clipping in the transmitter can limit this value to 6 to 7 dB with minimal consequences.
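
Both behaviors (zero ISI at the sampling instants, non-zero "ringing" everywhere in between) can be checked numerically. Here is a minimal sketch using the ideal sinc pulse, the simplest pulse satisfying the Nyquist criterion; the real end-to-end 8-VSB response is a raised cosine with α = 0.115, which shares the same zero crossings:

import numpy as np

# Nyquist pulse behavior: zero at every other sampling instant, but
# non-zero in between (illustrated with the ideal sinc pulse).
T = 1.0                                    # symbol period (normalized)
instants = np.arange(-5, 6) * T            # receiver sampling instants
between = instants + T / 2                 # points midway between samples

print(np.round(np.sinc(instants / T), 3))  # 1 at t = 0, 0 at all others
print(np.round(np.sinc(between / T), 3))   # non-zero: these tails add up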

8-VSB Signal Constellation

In 8-VSB, the digital information is transmitted exclusively in the amplitude of the RF envelope and not in the phase. This is unlike other digital modulation formats such as QAM, where each point in the signal constellation is a certain vector combination of carrier amplitude and phase. This is not possible in 8-VSB since the carrier phase is no longer an independent variable under our control, but is rather "consumed" in suppressing the vestigial lower sideband.

The resulting 8-VSB signal constellation, as compared to 64-QAM, is shown in figure 7. Our eight levels are recovered by sampling an in-phase (I channel) synchronous detector. Nothing would be gained by sampling a quadrature channel detector, since no useful information is contained in that channel. Our signal constellation diagram is therefore a series of eight vertical lines that correspond to our eight transmitted amplitude levels. By eliminating any dependence on the Q channel, the 8-VSB receiver need only process the I channel, thereby cutting in half the number of DSP circuits required in certain stages. The result is greater simplicity, and ultimately cost savings, in the receiver design.

The Rest Of The 8-VSB Chain

After the Nyquist VSB filter, the 8-VSB IF signal is then double up-converted in the exciter, by traditional oscillator-mixer-filter circuits, to the assigned channel frequency in the UHF or VHF band. The on-channel RF output of the 8-VSB exciter is then supplied to the DTV transmitter. The transmitter is essentially a traditional RF power amplifier, be it solid state or tube-type. A high-power RF output system filters the transmitter output signal and suppresses any spurious out-of-band signals caused by transmitter non-linearities. The last link in the transmitting chain is the antenna that broadcasts the full-power, on-channel 8-VSB DTV signal.

In the home receiver, the over-the-air signal is demodulated by applying, essentially in reverse, the same principles that we have already discussed. The incoming RF signal is received, downconverted, filtered, then detected. Then the segment and frame syncs are recovered. Segment syncs aid in receiver clock recovery, and frame syncs are used to train the adaptive ghost-canceling equalizer. Once the proper data stream has been recovered, it is trellis decoded, deinterleaved, Reed-Solomon decoded, and derandomized. The end result is the recovery of the original MPEG-2 data packets. MPEG-2 decoding circuits reconstruct the video image for display on the TV screen, and Dolby Digital (AC-3) circuits decode the sound information and drive the receiver loudspeakers. The home viewer "receives his DTV" and the signal chain is complete.

Conclusion

The goal of this section has been to provide some insight into the inner workings of the 8-VSB transmission system. Like many things in life, 8-VSB can appear formidable at first, but is really quite simple "once you get to know it." Hopefully the knowledge conveyed in this section will dispel some of the fear factor that many NTSC engineers experience when faced with the unknown world of digital TV broadcasting.

So what then is 8-VSB? Simply put, 8-VSB is the future of American television. And the future doesn't have to be such a scary thing.

References

[1] Davis, Robert and Twitchell, Edwin, "The Harris VSB Exciter for Digital ATV" NAB 1996 Engineering Conference. April 15-18, 1996.

[2] Citta, Richard and Sgrignoli, Gary, "ATSC Transmission System: 8-VSB Tutorial" ITVS 1997 Montreux Symposium. June 12-17, 1997.

[3] Totty, Ron, Davis, Robert and Weirather, Robert, "The Fundamentals of Digital ATV Transmission" ATV Seminar in Print. Harris Corporation Broadcast Division, 1995.

Acknowledgements

Special thanks go to Joe Seccia, Bob Plonka and Ed Twitchell of Harris Corporation Broadcast Division for their contributions of time, material and assistance in writing this section.

The How And Why Of COFDM

By J. H. Stott

Copyright 1998 by BBC Research and Development, United Kingdom

Coded Orthogonal Frequency Division Multiplexing (COFDM) is a form of modulation which is particularly well-suited to the needs of the terrestrial channel. COFDM can cope with high levels of multipath propagation with wide delay spreads. This leads to the concept of single-frequency networks in which many transmitters send the same signal on the same frequency, generating 'artificial multipath'. COFDM also copes well with co-channel narrowband interference, as may be caused by the carriers of existing analog services.

COFDM has therefore been chosen for two recent new standards for broadcasting, for sound (Digital Audio Broadcasting, DAB) and television (Digital Video Broadcasting-Terrestrial, DVB-T), optimized for their respective applications and with options to fit particular needs.

The special performance of COFDM in respect of multipath and interference is only achieved by careful choice of parameters and attention to detail in the way in which the forward error-correction coding is applied.

Introduction

Digital techniques have been used for many years by broadcasters in the production, distribution and storage of their program material. They have also been used in 'supporting roles' in broadcasting itself, with the introduction of Teletext and digital sound (NICAM) for television, and Radio Data (RDS) to accompany FM sound broadcasts. These have all used relatively conventional forms of digital modulation.

Sound and television terrestrial broadcasting is now entering a new age in which the main audio and video signals will themselves be broadcast in digital form. Systems have been standardized [ETSI 1 and 2], for use in Europe and elsewhere in the world for Digital Audio Broadcasting (DAB) and Digital Video Broadcasting-Terrestrial (DVB-T).

These systems have been designed in recognition of the circumstances in which they will be used. DAB (unlike its AM and FM predecessors) was especially designed to cope with the rigors of reception in moving cars--especially multipath, in this case time-varying. For DVB-T, less emphasis was placed on true mobility, but reception via the often-used set-top antennas still implied the need to cope with multipath reception--and higher data capacity than DAB was also required. A new form of modulation--COFDM--was chosen for both cases, albeit with detail differences and appropriate changes of parameters to suit the different requirements. Both include a degree of flexibility.

COFDM involves modulating the data onto a large number of carriers ("FDM"). The key features which make it work, in a way so well suited to terrestrial channels, include orthogonality (the "O" of COFDM), addition of a guard interval, and the use of error coding (the "C"), interleaving and channel-state information (CSI).

This section is a brief attempt to explain these features and their significance.

Why Use Multiple Carriers?

The use of multiple carriers follows from the presence of significant levels of multipath.

Suppose we modulate a carrier with digital information. During each symbol, we transmit the carrier with a particular phase and amplitude which is chosen from the constellation in use. Each symbol conveys a number of bits of information, equal to the logarithm (to the base 2) of the number of different states in the constellation.

Now imagine that this signal is received via two paths, with a relative delay between them, and that the receiver attempts to demodulate what was sent in, say, symbol n by examining the corresponding symbol's-worth of received information.

When the relative delay is more than one symbol period, figure 1 (left), the signal received via the second path acts purely as interference, since it only carries information belonging to a previous symbol or symbols. Such inter-symbol interference (ISI) implies that only very small levels of the delayed signal could be tolerated (the exact level depending on the constellation in use and the acceptable loss of noise margin).

When the relative delay is less than one symbol period, figure 1 (right), part of the signal received via the second path acts purely as interference, since it only carries information belonging to the previous symbol. The rest of it carries the information from the wanted symbol--but may add constructively or destructively to the main-path information.

This tells us that if we are to cope with any appreciable level of delayed signals, the symbol rate must be reduced sufficiently that the total delay spread (between first- and last-received paths) is only a modest fraction of the symbol period. The information that can be carried by a single carrier is thus limited in the presence of multipath. If one carrier cannot then carry the information rate we require, this leads naturally to the idea of dividing the high-rate data into many low-rate parallel streams, each conveyed by its own carrier--of which there are a large number. This is a form of FDM--the first step towards COFDM.
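
To attach some illustrative numbers to this argument (the 50 µs delay spread and the 7.61 MHz bandwidth below are assumptions chosen for the example, not system requirements):

# Illustrative numbers only: the 50 microsecond delay spread and the
# 7.61 MHz bandwidth are assumptions chosen for this example.
delay_spread = 50e-6                 # worst-case echo delay, in seconds
max_fraction = 0.25                  # keep the delay to 1/4 of a symbol
symbol_period = delay_spread / max_fraction      # 200 microseconds minimum
carrier_spacing = 1.0 / symbol_period            # 5 kHz (orthogonal spacing)
n_carriers = int(7.61e6 / carrier_spacing)       # ~1522 carriers needed

print(symbol_period, carrier_spacing, n_carriers)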

Even when the delay spread is less than one symbol period, a degree of ISI from the previous symbol remains. This could be eliminated if the period for which each symbol is transmitted were made longer than the period over which the receiver integrates the signal--a first indication that adding a guard interval may be a good thing. (We shall return to this idea shortly).

Orthogonality And The Use Of The DFT/FFT

Orthogonality: The use of a very large number of carriers is a prospect which is practically daunting (surely, we would need many modulators/demodulators, and filters to accompany them?) and would appear to require an increase of bandwidth to accommodate them. Both these worries can fortunately be dispelled if we do one simple thing: we specify that the carriers are evenly spaced by precisely fu = 1/Tu, where Tu is the period (the 'useful' or 'active' symbol period) over which the receiver integrates the demodulated signal. When we do this, the carriers form what mathematicians call an orthogonal set:

The kth carrier (at baseband) can be written as ψk(t) = e^(jkωut), where ωu = 2π/Tu, and the orthogonality condition that the carriers satisfy is:

(1/Tu) ∫ ψk(t) ψl*(t) dt = 1 if k = l, and 0 otherwise (the integral being taken over one symbol period Tu, with * denoting the complex conjugate).

More intuitively, what this represents is the common procedure of demodulating a carrier by multiplying it by a carrier¹ of the same frequency ('beating it down to zero frequency') and integrating the result. Any other carriers will give rise to 'beat tones' which are at integer multiples of ωu. All of these unwanted 'beat tones' therefore have an integer number of cycles during the integration period Tu and thus integrate to zero.

Thus without any 'explicit' filtering², we can separately demodulate all the carriers without any mutual cross-talk, just by our particular choice for the carrier spacing. Furthermore, we have not wasted any spectrum either--the carriers are closely packed so that they occupy the same spectrum in total as if all the data modulated a single carrier, with an ideal very sharp-cut filter.
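
This is easy to demonstrate numerically. In the sketch below, sampled complex carriers spaced by exactly 1/Tu are each 'demodulated' against another; the product averages to zero over Tu unless the two carriers are the same one:

import numpy as np

# Numerical check of orthogonality: carriers spaced by exactly 1/Tu
# average to zero against each other over the period Tu.
Tu = 1.0
N = 1024                                   # samples across one period
t = np.arange(N) / N * Tu
wu = 2 * np.pi / Tu

def carrier(k):
    return np.exp(1j * k * wu * t)

for k, l in [(3, 3), (3, 4), (3, 10)]:
    # multiply carrier l by the conjugate of carrier k and integrate
    val = np.mean(carrier(l) * np.conj(carrier(k)))
    print(k, l, round(abs(val), 12))       # 1.0 when k == l, 0.0 otherwise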

Preserving Orthogonality: In practice, our carriers are modulated by complex numbers which change from symbol to symbol. If the integration period spans two symbols (as for the delayed paths in figure 1), not only will there be same-carrier ISI, but in addition there will be inter-carrier interference (ICI) as well. This happens because the beat tones from other carriers may no longer integrate to zero if they change in phase and/or amplitude during the period. We avoid this by adding a guard interval, which ensures that all information integrated comes from the same symbol and appears constant during it.

Figure 2 shows this addition of a guard interval. The symbol period is extended so it exceeds the receiver integration period Tu. Since all the carriers are cyclic within Tu, so too is the whole modulated signal, so that the segment added at the beginning of the symbol to form the guard interval is identical to the segment of the same length at the end of the symbol. As long as the delay of any path with respect to the main (shortest) one is less than the guard interval, all the signal components within the integration period come from the same symbol and the orthogonality criterion is satisfied. ICI and ISI will only occur when the relative delay exceeds the guard interval.

The guard interval length is chosen to match the level of multipath expected. It should not form too large a fraction of Tu, otherwise too much data capacity (and spectral efficiency) will be sacrificed. DAB uses approximately³ Tu/4; DVB-T has more options, of which Tu/4 is the largest. To tolerate very long delays (as in the 'artificial multipath' of a single-frequency network, SFN) Tu must therefore be made large, implying a large number of carriers--from hundreds to thousands.

The paths of figure 2 may still add constructively or destructively. In fact it is possible to show that the signal demodulated from a particular carrier equals that transmitted, but simply multiplied by the effective frequency response of the (multipath) channel at the carrier frequency⁴.

Many other things can cause a loss of orthogonality and hence also cause ICI. They include errors in the local-oscillator or sampling frequencies of the receiver, and phase-noise in the local oscillator [Stott 3 and 4]. However, the effects of all these can, with care, be held within acceptable limits in practice.

Use of FFT: We've avoided thousands of filters, thanks to orthogonality--but what about implementing all the demodulating carriers, multipliers and integrators?

In practice, we work with the received signal in sampled form (sampled above the Nyquist limit, of course). The process of integration then becomes one of summation, and the whole demodulation process takes on a form which is identical to the Discrete Fourier Transform (DFT). Fortunately there exist efficient, so-called Fast Fourier Transform (FFT) implementations of this, and integrated circuits are already available, so that we are able to build COFDM equipment reasonably easily. Common versions of the FFT operate on a group of 2^M time samples (corresponding to the samples taken in the integration period) and deliver the same number of frequency coefficients. These correspond to the data demodulated from the many carriers. In practice, because we sample above the Nyquist limit, not all of the coefficients obtained correspond to active carriers that we have used⁵.

The inverse FFT is similarly used in the transmitter to generate the OFDM signal from the input data.
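
Putting the last few sections together, the sketch below (a toy model, not a DAB or DVB-T implementation) builds one OFDM symbol with an inverse FFT, adds a cyclic-prefix guard interval, passes it through a two-path channel whose echo delay is shorter than the guard interval, and then recovers the data exactly with an FFT and one complex multiplication per carrier:

import numpy as np

# One OFDM symbol, end to end: IFFT, cyclic-prefix guard interval,
# two-path channel, FFT, one-tap equalizer per carrier.
rng = np.random.default_rng(0)
N, G = 64, 16                                   # carriers, guard samples
bits = rng.integers(0, 2, (2, N)) * 2 - 1
data = (bits[0] + 1j * bits[1]) / np.sqrt(2)    # one QPSK point per carrier

tx = np.fft.ifft(data) * np.sqrt(N)             # time-domain symbol
tx = np.concatenate([tx[-G:], tx])              # guard = copy of the tail

h = np.array([1.0] + [0.0] * 8 + [0.5])         # main path + echo, delay 9 < G
rx = np.convolve(tx, h)[G:G + N]                # integrate over Tu only

Y = np.fft.fft(rx) / np.sqrt(N)                 # demodulate every carrier
H = np.fft.fft(h, N)                            # channel response per carrier
equalized = Y / H                               # one complex multiply each

print(np.max(np.abs(equalized - data)))         # ~1e-15: recovered exactly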

Choice Of Basic Modulation

In each symbol, each carrier is modulated (multiplied) by a complex number taken from a constellation set. The more states there are in the constellation, the more bits can be conveyed by each carrier during one symbol--but the closer the constellation points become, assuming constant transmitted power. Thus there is a well-known trade-off of ruggedness versus capacity.

At the receiver, the corresponding demodulated value (the frequency coefficient from the receiver FFT) has been multiplied by an arbitrary complex number (the response of the channel at the carrier frequency). The constellation is thus rotated and changed in size. How can we then determine which constellation point was sent?

One simple way is to use differential demodulation, such as the DQPSK used in DAB. Information is carried by the change of phase from one symbol to the next. As long as the channel changes slowly enough, its response does not matter. Using such a differential (rather than coherent) demodulation causes some loss in thermal noise performance--but DAB is nevertheless a very rugged system.

When higher capacity is needed (as in DVB-T) there are advantages in coherent demodulation. In this, the response of the channel for each carrier is somehow determined, and the received constellation is appropriately equalized before determining which constellation point, and hence which bits, were transmitted. To do this in DVB-T, some pilot information is transmitted (so-called scattered pilots⁶), so that in some symbols, on some carriers, known information is transmitted (figure 3) from which a sub-sampled⁷ version of the frequency response is measured. This is then interpolated, using a 1-D or 2-D filter, to fill in the unknown gaps, and used to equalize all the constellations carrying data.
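
A sketch of the estimation step follows (the two-path channel shape and the carrier count are invented for the example; the pilot spacing of three carriers is what DVB-T effectively achieves once the staggered scattered-pilot patterns of four successive symbols are combined):

import numpy as np

# Pilot-based channel estimation: measure the response where known
# data was sent, then interpolate to the data carriers in between.
N = 49                                           # carriers in this toy symbol
k = np.arange(N)
H = 1 + 0.5 * np.exp(-1j * 2 * np.pi * k / N)    # 'true' two-path response

pilots = np.arange(0, N, 3)                      # effective pilot spacing of 3
H_meas = H[pilots]                               # response measured at pilots

# interpolate the sub-sampled response to every carrier (1-D here;
# a real receiver also filters across successive symbols, i.e. in time)
H_est = (np.interp(k, pilots, H_meas.real)
         + 1j * np.interp(k, pilots, H_meas.imag))

print(np.max(np.abs(H_est - H)))                 # ~0.01: estimate tracks H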

Use Of Error Coding

Why Do We Need Error Coding? In fact, we would expect to use forward error-correction coding in almost any practical digital communication system, in order to be able to deliver an acceptable bit-error ratio (BER) at a reasonably low signal-to-noise ratio (SNR). At a high SNR it might not be necessary--and this is also true for uncoded OFDM, but only when the channel is relatively flat. Uncoded OFDM does not perform very well in a selective channel. Its performance could be evaluated for any selective channel and any modulation scheme, by: noting the SNR for each carrier, deducing the corresponding BER for each carrier's data and finally obtaining the BER for the whole data signal by averaging the BERs of all the carriers used.

Very simple examples will show the point. Clearly, if there is a 0 dB echo of delay such that every mth carrier is completely extinguished, then the 'symbol' error ratio, SER, (where 'symbol' denotes the group of bits carried by one carrier within one OFDM symbol) will be of the order of 1 in m, even at infinite SNR. An echo delay of say Tu/4, the maximum for which loss of orthogonality is avoided when the guard-interval fraction is 1/4, (as in DAB and some modes of DVB-T) would thus cause the SER to be 1/4. Similarly, if there is one carrier, amongst N carriers in all, which is badly affected by interference, then the SER will be of the order of 1 in N, even with infinite SNR.

This tells us two things: uncoded OFDM is not satisfactory for use in such extremely selective channels, and, for any reasonable number of carriers, CW interference affecting one carrier is less of a problem than a 0 dB echo.

However, just adding hard-decision-based coding to this uncoded system is not enough, either--it would take a remarkably powerful hard-decision code to cope with an SER of 1 in 4! The solution is the use of convolutional coding with soft-decision decoding, properly integrated with the OFDM system.

Soft Decisions And Channel-State Information: First let us review, for simplicity, 2-level modulation of a single carrier: one bit is transmitted per symbol, with say a '0' being sent by a modulating signal of -1V and a '1' by +1V. At a receiver, assuming that the gain is correct, we should expect to demodulate a signal always in the vicinity of either -1V or +1V, depending on whether a '0' or a '1' was transmitted, the departure from the exact values ±1V being caused by the inevitable noise added in transmission.

A hard-decision receiver would operate according to the rule that negative signals should be decoded as '0' and positive ones as '1,' 0V being the decision boundary. If the instantaneous amplitude of the noise were never to exceed ±1V then this simple receiver would make no mistakes. But noise may occasionally have a large amplitude, although with lower probability than for smaller values. Thus if say +0.5V is received, it most probably means that a '1' was transmitted, but there is a smaller yet still finite probability that actually '0' was sent. Common sense suggests that when a large-amplitude signal is received we can be more confident in the hard decision than if the amplitude is small.

This view of a degree of confidence is exploited in soft-decision Viterbi decoders. These maintain a history of many possible transmitted sequences, building up a view of their relative likelihoods and finally selecting the value '0' or '1' for each bit according to which has the maximum likelihood. For convenience, a Viterbi decoder adds log-likelihoods (rather than multiplying probabilities) to accumulate the likelihood of each possible sequence. It can be shown that in the case of BPSK or QPSK the appropriate log-likelihood measure or metric of the certainty of each decision is indeed simply proportional to the distance from the decision boundary. The slope of this linear relationship itself also depends directly on the signal-to-noise ratio. Thus the Viterbi decoder is fed with a soft decision comprising both the hard decision (the sign of the signal) together with a measure of the amplitude of the received signal.
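
For this 2-level example the metric can be written down explicitly: with levels of ±1V and Gaussian noise of variance σ², the log-likelihood ratio is 2y/σ². The sign is the hard decision, the magnitude is the confidence, and the slope depends on the SNR, exactly as described above. A minimal sketch:

# Soft decision for 2-level signaling: LLR = 2*y/sigma^2. Sign = hard
# decision; magnitude = confidence; slope set by the noise power.
def bpsk_llr(y, sigma2):
    return 2.0 * y / sigma2

for y in (-1.1, -0.2, 0.5, 1.0):
    print(f"received {y:+.1f} V -> LLR {bpsk_llr(y, 0.25):+.1f}")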

With other rectangular-constellation modulation systems, such as 16-QAM or 64-QAM (see QAM In Cable Transmission in this chapter), each axis carries more than one bit, usually with Gray coding. At the receiver, a soft decision can be made separately for each received bit. The metric functions are now more complicated than for QPSK, being different for each bit, but the principle of the decoder exploiting knowledge of the expected reliability of each bit remains.

Metrics for COFDM are slightly more complicated. We start from the understanding that the soft-decision information is a measure of the confidence to be placed in the accompanying hard decision.

When data are modulated onto a single carrier in a time-invariant system then a priori all data symbols suffer from the same noise power on average; the soft-decision information simply needs to take note of the random symbol-by-symbol variations that this noise causes.

When data are modulated onto the multiple COFDM carriers, the metrics become slightly more complicated as the various carriers will have different signal-to-noise ratios. For example, a carrier which falls into a notch in the frequency response will comprise mostly noise; one in a peak will suffer much less. Thus in addition to the symbol-by-symbol variations there is another factor to take account of in the soft decisions: data conveyed by carriers having a high SNR are a priori more reliable than those conveyed by carriers having low SNR. This extra a priori information is usually known as channel-state information (CSI).

The CSI concept can be extended to embrace interference which affects carriers selectively.

Including channel-state information in the generation of soft decisions is the key to the unique performance of COFDM in the presence of frequency-selective fading and interference.

We return to the simple example in which there is a 0 dB echo of such a delay (and phase) as to cause a complete null on one carrier in 4. Figure 4 illustrates the effect of this selective channel: 1 carrier in 4 is nulled out, while 1 carrier in 4 is actually boosted, and the remaining 2 are unaffected. Note that received power is shown, to which the SNRs of the carriers will be proportional if the receiver noise is itself flat, as is usual. The 'mean power' marked is the mean of all carriers. It is equal to the total received power (via both paths) shared equally between all carriers.

In COFDM, the Viterbi metrics for each bit should be weighted according to the SNR of the carrier by which it traveled. Clearly, the bits from the nulled carriers are effectively flagged as having 'no confidence.' This is essentially the same thing as an erasure--the Viterbi decoder in effect just records that it has no information about these bits.
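
One common way to fold channel-state information into the metric (a sketch; real receiver implementations vary in detail) is to scale each carrier's soft decision by |H|², the measured power response of that carrier. The one-tap equalizer divides by H, amplifying the noise on a weak carrier by the same factor, so this weighting restores an honest confidence figure; on a nulled carrier the weight is zero: an erasure.

import numpy as np

# Weighting soft decisions by channel state: scale each carrier's LLR
# by |H|^2, the carrier's measured power response (a proxy for its SNR).
def cofdm_llr(y_eq, H, sigma2):
    csi = np.abs(H) ** 2                    # per-carrier weight
    return 2.0 * np.real(y_eq) * csi / sigma2

H = np.array([1.4, 1.0, 1.0, 0.0])          # one boosted, two flat, one nulled
y_eq = np.array([1.0, -1.0, 1.0, 0.7])      # equalizer outputs (last is junk)
print(cofdm_llr(y_eq, H, 0.25))             # last LLR is 0.0: an erasure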

There is another well-known case of regularly-occurring erasures, namely punctured codes. Typically, convolutional codes intrinsically have code rates expressed as simple fractions like 1/2 or 1/3. When a code having higher rate (less redundancy) is needed then one of these lower-rate 'mother' codes is punctured, that is to say certain of the coded bits are just not transmitted, according to a regular pattern known to the receiver. At the receiver 'dummy bits' are re-inserted to replace the omitted ones, but are marked as erasures--bits having zero confidence--so that the Viterbi decoder treats them accordingly. Punctured codes obviously are less powerful than the mother code, but there is an acceptable steady trade-off between performance and code rate as the degree of puncturing is increased.

Suppose we take a rate-1/2 code and puncture it by removing 1 bit in 4. The rate-1/2 code produces 2 coded bits for every 1 uncoded bit, and thus 4 coded bits for every 2 uncoded bits. If we puncture 1 in 4 of these coded bits then we clearly finish by transmitting 3 coded bits for every 2 uncoded bits. In other words we have generated a rate-2/3 code. Indeed, this is exactly how the rate-2/3 option of DVB-T is made.
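
The pattern itself is trivial to sketch (exactly which of each four coded bits is dropped differs between standards; it is the one-in-four deletion that sets the rate):

# Rate-1/2 to rate-2/3 by puncturing: drop one coded bit in four; the
# receiver re-inserts a zero-confidence dummy in its place.
def puncture(coded_bits):                   # 4 bits in -> 3 bits sent
    return [b for i, b in enumerate(coded_bits) if i % 4 != 3]

def depuncture(llrs):                       # 3 LLRs in -> 4 LLRs out
    out = []
    for i in range(0, len(llrs), 3):
        out.extend(llrs[i:i + 3] + [0.0])   # 0.0 = erasure for the dummy
    return out

coded = [1, 0, 1, 1, 0, 0, 1, 0]            # 8 coded bits = 4 data bits
print(puncture(coded))                      # 6 bits sent: rate 4/6 = 2/3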

Now return to our simple COFDM example in which 1 carrier in 4 is nulled out by the channel--but the corresponding bits are effectively flagged as erasures thanks to the application of channel-state information. 2 out of 3 of the remaining carriers are received at the same SNR as that of the overall channel, while 1 is actually boosted, having an improved SNR. Suppose that rate-1/2 coding is used for the COFDM signal. It follows that the SNR performance of COFDM with this selective channel should be very slightly better (because 1 carrier in 4 is boosted) than that for a single-carrier (SC) system using the corresponding punctured rate-2/3 code in a flat channel. In other words, the effect of this very selective channel on COFDM can be directly estimated from knowledge of the behavior of puncturing the same code when used in a SC system through a flat channel.

This explains how the penalty in required CNR for a COFDM system subject to 0 dB echoes may be quite small, provided a relatively powerful convolutional code is used together with the application of channel-state information.

Interleaving: So far we have considered a very special example so as to make it easy to explain by invoking the close analogy with the use of code puncturing. But what of other delay values?

If the relative delay of the echo is rather shorter than we just considered, then the notches in the channel's frequency response will be broader, affecting many adjacent carriers. This means that the coded data we transmit should not simply be assigned to the OFDM carriers in order, since at the receiver this would cause the Viterbi soft-decision decoder to be fed with clusters of unreliable bits. This is known to cause serious loss of performance, which we avoid by interleaving the coded data before assigning them to OFDM carriers at the modulator. A corresponding de-interleaver is used at the receiver before decoding. In this way the cluster of errors occurring when adjacent carriers fail simultaneously (as when there is a broad notch in the frequency response of the channel) is broken up, enabling the Viterbi decoder to perform better.
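
A toy row/column block interleaver shows the principle (DAB and DVB-T each define their own specific permutations):

import numpy as np

# A toy block interleaver: write by rows, read by columns. A burst of
# adjacent failures at the receiver is dispersed before Viterbi decoding.
def interleave(x, rows=4, cols=8):
    return np.asarray(x).reshape(rows, cols).T.flatten()

def deinterleave(x, rows=4, cols=8):
    return np.asarray(x).reshape(cols, rows).T.flatten()

coded = np.arange(32)                 # stand-in for coded bits
rx = interleave(coded)
rx[8:12] = -1                         # four adjacent carriers wiped out
print(deinterleave(rx))               # the -1s now land 8 positions apart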

This re-ordering of the coded data across carriers could be called frequency interleaving, and it is the form used in DVB-T. It is all that is needed if the channel only varies slowly with time. In mobile operation (a key application for DAB) we may expect the various paths to be subjected to different and significant Doppler shifts, making the frequency response vary with time, figure 5. Furthermore, a vehicle may drive into shaded areas like underpasses so that all signals are severely attenuated for a period (not shown in figure 5). For this reason, in DAB the coded data are in addition re-distributed over time, providing time interleaving.

More coding: DAB conveys audio data, which, despite being compressed in source coding, is relatively robust to the effects of transmission errors⁸. The BER remaining after correction by the Viterbi decoder is adequate. On the other hand, the compressed video data of DVB-T is more susceptible to errors, so that the residual BER at the output of the Viterbi decoder is too high.

Thus DVB-T includes a second stage of error coding, called the 'outer' coding, since in an overall block diagram it sandwiches the ('inner') convolutional coding. Data to be transmitted are first coded with a Reed-Solomon code, interleaved with an additional 'outer' interleaver, then passed to the 'inner' convolutional coder. At the receiver, the Viterbi decoder is followed by the 'outer' de-interleaver and the 'outer' R-S decoder. The R-S decoder uses hard decisions, but is able to reduce the BER substantially despite the very modest extra redundancy added at the transmitter.

Single-Frequency Networks

Our simple example of a 0 dB echo often crops up when considering SFNs. If two synchronized COFDM transmitters operate on a common frequency there will somewhere be locations where the two signals will be received at equal strength (and with a relative delay, depending on the geometry of the situation, which we assume to be within the system limits). An obvious question is: does reception suffer or benefit from this situation?

Clearly, compared with receiving either transmitter alone, the total received carrier-to-noise power ratio (CNR) is doubled, i.e. increased by 3 dB in familiar decibel notation. However, the presence of the two transmissions makes reception selective rather than flat (as we might hope to have with a single transmission, without 'natural' echoes). This increases the CNR required to achieve the same BER, in a way which depends on the error-correcting code in use.

We have already seen a qualitative argument how this increase in CNR requirement may be closely related to the performance of punctured codes. Simulation shows that the increase in CNR requirement between flat and 0 dB-echo channels is just below 3 dB for a rate-1/2 code, while it is greater for higher-rate codes which have already been punctured. Practical experience supports the order of 3 dB for a rate-1/2 code, while for rate-2/3 the increase is of the order of 6 dB.

It follows that with rate-1/2 coding, receiving two signals of equal strength, in place of either one alone, increases the received CNR by 3 dB while also increasing the CNR required for satisfactory reception (in the now highly-selective channel) by about the same amount. The performance is thus unchanged by adding the second path.

Since for most practical purposes the case of the 0 dB echo appears to be more or less the worst one, this is very encouraging for planning and developing SFNs.

Summary Of Key DAB And DVB-T Features

Both DAB and DVB-T have flexibility built-in to cope with a range of circumstances and uses.

DAB has four modes with 192, 384, 768 or 1536 carriers and corresponding guard intervals from 31 to 246 µs. All occupy 1.536 MHz, use DQPSK and use both time- and frequency-interleaving.

DVB-T has two modes with either 1705 or 6817 carriers in 7.61 MHz bandwidth, with a wide range of guard intervals from 7 to 224 µs. Coherent demodulation is used, with QPSK/16-QAM/64-QAM constellations. Together with options for inner-code rate this provides extensive trade-off between ruggedness and capacity (from 5 to 31.7 Mbps). No time interleaving is used. The convolutional inner code is supplemented by a Reed-Solomon outer code. (The figures quoted above relate to the use of nominally 8 MHz channels. The DVB-T specification can be adapted to 6 or 7 MHz channels by simply scaling the clock rate; the capacity and bandwidth scale in proportion.)
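
These figures are internally consistent and can be checked from first principles: the carrier spacing is 1/Tu, and the occupied bandwidth is (carriers - 1) times that spacing. A quick sketch, using the standard 2K and 8K useful symbol periods for nominally 8 MHz channels (224 µs and 896 µs):

# Checking the DVB-T mode figures from first principles. The useful
# symbol periods (224 and 896 microseconds) are the standard's 2K and
# 8K values for nominally 8 MHz channels.
for mode, carriers, Tu in [("2K", 1705, 224e-6), ("8K", 6817, 896e-6)]:
    spacing = 1.0 / Tu                         # orthogonal carrier spacing
    bandwidth = (carriers - 1) * spacing       # occupied bandwidth, ~7.61 MHz
    guards = [Tu * g for g in (1/32, 1/16, 1/8, 1/4)]
    print(mode, f"{spacing:.0f} Hz", f"{bandwidth / 1e6:.2f} MHz",
          [f"{g * 1e6:.0f} us" for g in guards])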

Conclusions

COFDM, as used in DAB and DVB-T is very well matched to the terrestrial channel, being able to cope with severe multipath and the presence of co-channel narrowband interference. It also makes single-frequency networks possible.

COFDM is also adaptable to various uses by appropriate choice of parameters, both DAB and DVB-T having a range of options to facilitate this.

COFDM only works because all the key elements are correctly integrated: many orthogonal carriers, added guard intervals, interleaving, soft-decision Viterbi decoding and the use of channel-state information.

Footnotes

1: Actually a complex conjugate, corresponding to the standard I-Q quadrature demodulation process.

2: In fact the 'integrate-and-dump' process can itself be shown to be equivalent to a filter with a sinc(ω/ωu) characteristic, with nulls on all the carriers except the wanted one.

3: Actually it is precisely 63Tu/256 ≈ 0.246Tu.

4: For the mathematically inclined, the addition of the guard interval has in effect turned the normal process of convolution of the signal with the impulse response of the channel into a circular convolution, which corresponds to multiplication of DFT frequency coefficients.

5: Note that this does not lead to any loss of capacity or inefficient use of bandwidth. It merely corresponds to 'headroom' for the analog filtering in the system.

6: Some carriers always carry further continual-pilot information which is used for synchronization.

7: Sub-sampled in both frequency and time.

8: Some more-susceptible data have special treatment.

References

[1]: ETS 300 401 (1994): Radio broadcast systems; Digital Audio Broadcasting (DAB) to mobile, portable and fixed receivers. www.etsi.fr.

[2]: ETS 300 744 (1997): Digital broadcasting systems for television, sound and data services; framing structure, channel coding and modulation for digital terrestrial television. www.etsi.fr.

[3]: Stott, J.H., 1995. The effects of frequency errors in OFDM. BBC Research and Development Report No. RD 1995/15. www.bbc.co.uk/rd/pubs/reports/1995_15.html.

[4]: Stott, J.H., 1998. The effects of phase noise in COFDM. EBU Technical Review, No 276 (Summer 1998), pp. 12 to 25. www.bbc.co.uk/rd/pubs/papers/pdffiles/jsebu276.pdf.

The following further reading is recommended:

European Broadcasting Union (EBU), 1988. Advanced digital techniques for UHF satellite sound broadcasting. Collected papers on concepts for sound broadcasting into the 21st century.

Maddocks, M.C.D., 1993. An introduction to digital modulation and OFDM techniques. BBC Research Department Report No. RD 1993/10.

Stott, J.H., 1996. The DVB-Terrestrial (DVB-T) specification and its implementation in a practical modem. Proceedings of 1996 International Broadcasting Convention, IEE Conference Publication No. 428, pp. 255 to 260.

Oliphant, A., Marsden, R.P., Poole, R.H.M., and Tanton, N.E., 1996. The design of a network for digital terrestrial TV trials. Proceedings of 1996 International Broadcasting Convention, IEE Conference Publication No. 428, pp 242 to 247.

Møller, L.G., 1997. COFDM and the choice of parameters for DVB-T. Proceedings of 20th International Television Symposium, Montreux. www.bbc.co.uk/validate/paper_17.htm.

Stott, J.H., 1997. Explaining some of the magic of COFDM. Proceedings of 20th International Television Symposium, Montreux. www.bbc.co.uk/rd/pubs/papers/paper_15/paper_15.html.

Oliphant, A., 1997. VALIDATE--verifying the European specification for digital terrestrial TV and preparing for the launch of services. Proceedings of 20th International Television Symposium, Montreux. www.bbc.co.uk/rd/pubs/papers/paper_16/paper_16.html.

Morello, A., Blanchietti, G., Benzi, C., Sacco, B., and Tabone, M., 1997. Performance assessment of a DVB-T television system. Proceedings of 20th International Television Symposium, Montreux.

Mitchell, J., and Sadot, P., 1997. The development of a digital terrestrial front end. Proceedings of 1997 International Broadcasting Convention, IEE Conference Publication No. 447, pp. 519-524. www.bbc.co.uk/rd/pubs/papers/paper_12/paper_12.html.

Nokes, C.R., Pullen, I.R., and Salter, J.E., 1997. Evaluation of a DVB-T compliant digital terrestrial television system. Proceedings of 1997 International Broadcasting Convention, IEE Conference Publication No. 447, pp. 331-336. www.bbc.co.uk/rd/pubs/papers/paper_08/paper_08.html.

Oliphant, A., 1998. VALIDATE--a virtual laboratory to accelerate the launch of digital terrestrial television. ECMAST Conference, May 1998. Berlin, Germany. www.bbc.co.uk/rd/pubs/papers/ecmast22/ecmast22.html.

Nokes, C.R., 1998. Results of tests with domestic receiver ICs for DVB-T. Proceedings of 1998 International Broadcasting Convention, pp. 294-299.

Acknowledgements

The author wishes to thank the many colleagues, within the BBC and collaborating organizations throughout Europe, who have helped him to develop his understanding of COFDM. This section is based on a paper the author gave in July 1997 to an IEE Summer School in the U.K.

QAM In Cable Transmission

By John Watkinson

Digital transmission consists of converting data into a waveform suitable for the path along which it is to be sent. The generic term for the path down which the information is sent is the channel, in this case a cable.

In real cables, the digital signal may originate with discrete states which change at discrete times, but the cable will treat it as an analog waveform and so it will not be received in the same form. Various loss mechanisms will reduce the amplitude of the signal. These attenuations will not be the same at all frequencies. Noise will be picked up in the cable as a result of stray electric fields or magnetic induction. As a result the voltage received at the end of the cable will have an infinitely varying state along with a degree of uncertainty due to the noise. Different frequencies can propagate at different speeds in the channel; this is the phenomenon of group delay. An alternative way of considering group delay is that there will be frequency-dependent phase shifts in the signal and these will result in uncertainty in the timing of pulses.

In a digital transmission, it is not the cable which is digital; instead the term describes the way in which the received signals are interpreted. When the receiver makes discrete decisions from the input waveform, it attempts to reject the uncertainties in voltage and time. The technique of channel coding is one where the transmitted waveforms are restricted to a set which allow the receiver to make discrete decisions despite the degradations caused by the analog nature of the cable.

Cables have the characteristic that as frequency rises, the current flows only in the outside layer of the conductor, effectively causing the resistance to rise. This is the skin effect, and it is due to the energy starting to leave the conductors. As frequency rises still further, the energy travels less in the conductors and more in the insulation between them; the composition of the insulation then becomes important, and it has to be treated as a dielectric.

The conductor spacing and the nature of the dielectric determine the characteristic impedance of the cable. A change of impedance causes reflections in the energy flow and some of it heads back towards the source. Constant impedance cables with fixed conductor spacing are necessary, and these must be suitably terminated to prevent reflections.

At high frequencies, the time taken for the signal to pass down the cable is significantly more than the bit period. There are thus many bits in the cable which have been sent but which have yet to arrive. The voltage at the input of the cable can be quite different from that at the output because the cable has become a transmission line.

In a transmission line, the spectrum of the input signal is effectively split into different frequencies. Low frequencies travel as long wavelength energy packets and high frequencies travel as short wavelengths. The shorter the wavelength, the more times the energy has to pass in and out of the dielectric as it propagates. Unfortunately dielectrics are not ideal and not all of the energy stored per cycle is released. Some of it is lost as heat. High frequencies thus suffer more dielectric loss than low frequencies.

This frequency-dependent behavior is the most important factor in deciding how best to send data down a cable. As a flat frequency response is elusive, the best results will be obtained using a coding scheme that creates a narrow band of frequencies. Then the response can be made reasonably constant with the help of equalization. The decoder might adapt the equalization to optimize the error rate.

In digital circuitry, the signals are generally accompanied by a separate clock signal. It is generally not feasible to provide a separate clock in transmission applications. Cable transmission requires a self-clocking waveform. Clearly if data bits are simply clocked serially from a shift register, in so-called direct transmission, this characteristic will not be obtained. If all the data bits are the same, for example all zeros, there is no clock when they are serialized. This illustrates that raw data, when serialized, have an unconstrained spectrum. Runs of identical bits can produce frequencies much lower than the bit rate would suggest. One of the essential steps in a cable coding system is to narrow the spectrum of the data and ensure that a sufficient clock content is available.

The important step of information recovery at the receiver is known as data separation. The data separator is rather like an analog-to-digital converter because the two processes of sampling and quantizing are both present. In the time domain, the sampling clock is derived from the clock content of the channel waveform. The sampler makes discrete decisions along the time axis in order to reject jitter due to group delay variation.

In the voltage domain, the process of slicing converts the analog waveform from the channel back into a binary representation. The slicer is thus a form of quantizer which has a resolution of only a few bits. The slicing process makes a discrete decision about the voltage of the incoming signal in order to reject noise. Clearly the less noise there is in the channel, the more discrete levels can be distinguished by the quantizer and so the more bits it can output per sample. This is the principle of multi-level signaling.

Multi-level codes need less bandwidth because the more bits are carried in each symbol, the fewer symbols per second are needed for a given bit rate. The bandwidth efficiency of such codes is measured in bits/second/Hz. Cables have the advantage of low noise compared to radio broadcasting, and so they can use more levels. This compensates for the reduced bandwidth available in cables due to frequency dependent loss. As a result cable codes tend to use more signaling levels than radio transmission codes, which in turn use more levels than transmissions from satellites, which have plenty of bandwidth but poor noise characteristics because of the limited power available.

Quadrature Amplitude Modulation (QAM)

Quadrature Amplitude Modulation (QAM) is an ideal code for cable transmission. Figure 1 shows the example of 64-QAM. Incoming 6-bit data words are split into two 3-bit words, and each is used to amplitude modulate one of a pair of sinusoidal carriers which are generated in quadrature. The modulators are four-quadrant devices, such that eight amplitudes are available: four which are in phase with the carrier and four which are antiphase. The two AM carriers are linearly added, and the result is a signal which has 64 combinations of amplitude and phase. There is a great deal of similarity between QAM and the color subcarrier used in analog television, in which the two color difference signals are encoded on in-phase and quadrature carriers. To visualize how QAM works, simply replace the analog R-Y and B-Y signals of NTSC with a pair of eight-level signals. The result will be 64 possible vectors.
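
The mapping itself takes only a few lines. The sketch below omits the Gray coding of the levels and the pulse shaping a real modulator would add; it simply splits each 6-bit word and maps each half onto one of eight amplitudes, four in phase and four antiphase:

import numpy as np

# A minimal 64-QAM mapper: each 6-bit word splits into two 3-bit words,
# each selecting one of eight amplitudes (four in phase, four antiphase).
LEVELS = np.array([-7, -5, -3, -1, 1, 3, 5, 7]) / 7.0

def qam64(word):                        # word: an integer 0..63
    i_bits, q_bits = word >> 3, word & 0b111
    return LEVELS[i_bits] + 1j * LEVELS[q_bits]

points = [qam64(w) for w in range(64)]
print(len(set(points)))                 # 64 distinct amplitude/phase vectors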

Like analog chroma, the QAM signal can be viewed on a vectorscope. Each bit pattern produces a different vector, resulting in a different point on the vectorscope screen. The set of 64 points is called a constellation. At the encoder, the constellation should be ideal, with regular spaces between the points. At the receiver, cable noise and group delay effects will disturb the ideal constellation and each point will become a "vector ball." The size of these balls gives an indication of the likely error rate of the cable. Clearly if they overlap, the decoder will be unable to distinguish discrete levels and phases and bit errors become inevitable.

The error correction coding of the system is designed to overcome reasonable bit error rates so that the transmitted data are essentially error free. This is important for digital television because MPEG compressed video data are sensitive to bit errors.

In a typical application, the data are randomized by addition to a pseudo-random sequence before being fed to the modulator. The use of randomizing and error correction is not part of QAM which is only a modulation technique. Practical systems need these additional processes.

The sampling clock is recovered at the decoder using a phase-locked loop (PLL) to regenerate it from the clock content of the QAM waveform. In phase-locked loops, the voltage-controlled oscillator is driven by a phase error measured between the output and some reference, as shown in figure 2, such that the output eventually has the same frequency as the reference. If a divider is placed between the VCO and the phase comparator, the VCO frequency can be made to be a multiple of the reference. This also has the effect of making the loop more heavily damped. Clearly data cannot be separated if the PLL is not locked, but it cannot be locked until it has seen transitions for a reasonable period. There will inevitably be a short delay on first applying the signal before the receiver locks to it.

The QAM decoder can easily lock to the symbol rate, so that it can set its phase-locked oscillator to the right frequency, but it also needs to be able to sample in the right phase. This can be done by sending a periodic synchronizing signal in a reference phase, much as analog chroma sends a burst at the beginning of every line.

Quadrature modulation appears to achieve the impossible by combining two signals into the space of one. Figure 3 shows how it works. At a) is shown the in-phase signal, which is a sine wave. At two points per cycle the sine wave passes through zero, which it will do irrespective of the amplitude of the wave. At b) is shown the quadrature signal, which is a cosine wave. This also passes through zero twice per cycle, but the points at which this happens are exactly half way between the points for the sine wave.

When the cosine wave is at zero, the sine wave is at a peak. Thus if the waveform is sampled at that point, only the amplitude of the sine wave will be measured. One quarter of a cycle later, the sinewave will be at zero and the cosine wave will be at a peak. If the waveform is sampled at this point, only the amplitude of the cosine wave will be measured. The signals are independent and are said to be orthogonal.

If the receiver is synchronized so that the quadrature modulated signal is sampled at exactly 90-degree increments starting at zero degrees, the samples will represent alternate components which are effectively interleaved. In a sense, quadrature modulation is an analog version of data multiplexing, in which two data streams are sent down the same channel.

In a QAM receiver, the waveform is sampled twice per cycle in phase with the two original carriers, so that each sample will represent the amplitude of one of the carriers alone. A sampler of this kind is effectively a phase-sensitive rectifier; it simultaneously demodulates and demultiplexes the QAM signal into a pair of eight-level signals.
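
The sampling argument of the last few paragraphs can be verified in a couple of lines (the two amplitudes are arbitrary choices for the example):

import numpy as np

# Sampling the sum of quadrature carriers at the peak of one carrier
# measures that carrier's amplitude alone; the other is at zero there.
A, B = 0.6, -0.4                        # in-phase and quadrature amplitudes
w = 2 * np.pi                           # one cycle per unit of time

def qam_wave(t):
    return A * np.sin(w * t) + B * np.cos(w * t)

print(qam_wave(0.25))    # sine peak, cosine zero: recovers A (0.6)
print(qam_wave(0.0))     # cosine peak, sine zero: recovers B (-0.4)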

The cable will have a certain amplitude loss which is a function of its length. The decoder will need an automatic gain control (AGC) system which produces a signal having a stable amplitude as an input for the slicing process. The synchronizing signal can also be used as an amplitude reference. The decoder gain is adjusted until the synchronizing signal has the correct amplitude. This will place the discrete levels of the signal in the center of quantizing intervals of the slicer.

In a digital decoder, the ADCs can be multi-bit devices so that a high resolution version of each sample is available. In the digital domain the AGC can be performed by multiplying the sample values by constants. The slicing process can be performed by comparing the values from the ADC with reference values. The digital decoder has the advantage that it can easily be integrated into an LSI chip at low cost.
