The ability of ATM networks to combine voice, video, and data communications in one network is expected to make ATM the networking method of choice for video delivery in the future. The ATM Forum is currently developing standards to address the issues associated with video delivery. The Audiovisual Multimedia Services Technical Committee is addressing issues relating to numerous video applications, including broadcast video, video conferencing, desktop multimedia, video on demand, interactive video, near video on demand, distance learning, and interactive gaming. This paper provides a survey of the current issues relating to video delivery over ATM networks.
The bandwidth requirements of uncompressed video far exceed the available resources for the typical end user. Typical uncompressed video streams can require 100 to 240 Mbps to be delivered without distortions or delays at the receiving end. Uncompressed high definition television streams require around 1 Gbps for proper delivery. Several compression methods have been developed which can reduce the bandwidth requirements for video streams to levels acceptable for existing networks.
Compression is achieved by removing redundancy. In video streams, that redundancy exists both within a video frame and between frames in close proximity. Compression can be either lossy or lossless. Data compressed with a lossless method can be recovered exactly; therefore, lossless methods are often used to compress data on disks. Lossless methods typically achieve compression ratios of around 2:1 to 4:1, which does not provide adequate compression for video in most cases. In contrast, lossy compression methods can provide much higher compression ratios; ratios as high as 200:1 are typical with these methods. Data compressed with a lossy method cannot be recovered exactly. The high compression ratios make lossy methods the methods of choice for video compression.
Compression methods can be symmetric or asymmetric. For symmetric compression methods, the compression operation requires the same computational effort as the decompression operation. Motion JPEG, described below, is an example of a symmetric compression method, while MPEG-1 and MPEG-2 are asymmetric. Video compression methods are typically asymmetric. Since many video on demand applications will involve one source with many recipients, it is generally desirable that the compression method place most of the required computational complexity on the source side while limiting the complexity, and therefore the cost, of the equipment on the destination or end user side. ( Minoli book)
The MPEG-2 (Moving Picture Experts Group) (ISO/IEC 13818) standard is an extension of the MPEG-1 standard described below. MPEG-2 was designed to provide high quality video encoding suitable for transmission over computer networks. It is believed that MPEG-2 will be the primary compression protocol used in transmitting video over ATM networks.
MPEG-2 (and MPEG-1) video compression makes use of the Discrete Cosine Transform (DCT) algorithm to transform 8x8 blocks of pixels into variable length codes (VLC). These VLC's are the representation of the quantized coefficients from the DCT. MPEG-2 encoders produce three types of frames: Intra (I) frames, Predictive (P) frames, and Bidirectional (B) frames. The relationship between these three frame types is depicted in Figure 1 ( Riley book). As the name suggests, I frames use only intra-frame compression and because of this they are much larger than P or B frames. P frames use motion compensated prediction from the previous I or P frame in the video sequence. This forward prediction is indicated by the upper arrows in Figure 1. B frames use motion compensated prediction, either forward prediction from previous I or P frames, backward prediction from future I or P frames, or an interpolation of both. This bidirectional prediction is indicated by the lower arrows in Figure 1. B frames achieve the highest degree of compression and are therefore the smallest frames.
The frames from one I frame to the next form a group of pictures (GOP). The components of a GOP are depicted in Figure 2 ( Riley book). Frames are generated by the MPEG-2 encoder by first generating the 8x8 blocks. Four of these blocks are combined to form a macroblock, a 16x16 region of pixels. The macroblocks are then combined to form a slice, and a series of slices makes up a frame. The address and motion vectors of the first macroblock in a slice are coded absolutely. The remaining macroblock parameters are differentially coded with respect to the first macroblock in the slice. In the event of errors in transmission of MPEG-2 video, image decoding can continue correctly only at the first macroblock of the next slice. This will be discussed further in the section on error correction and concealment later in this document. ( Zhang paper)
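Because B frames are predicted from a future I or P frame, an encoder must transmit that reference before the B frames that depend on it, so coded (transmission) order differs from display order. The sketch below illustrates this standard reordering idea; it is a simplified illustration, not an excerpt from the MPEG-2 specification:

```python
def coded_order(display: str) -> str:
    """Reorder a display-order GOP string (e.g. 'IBBPBBP') into coded
    order: each reference frame (I or P) is emitted before the B frames
    that bidirectionally depend on it. Trailing B frames (which in a real
    stream reference the next GOP's I frame) are simply appended here."""
    out, pending_b = [], []
    for f in display:
        if f in "IP":
            out.append(f)
            out.extend(pending_b)  # B frames follow their future reference
            pending_b = []
        else:
            pending_b.append(f)
    out.extend(pending_b)
    return "".join(out)
```

For a display sequence IBBPBBP, the coded sequence is IPBBPBB: each pair of B frames is sent after the P frame it forward-predicts from.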
The MPEG-2 systems layer (ISO/IEC 13818-1) provides features necessary for multiplexing and synchronization of video, audio, and data streams. Video streams are broken into units called video access units. A video access unit corresponds to one of the image frames, I, P, or B, described above. A collection of video access units is a video elementary stream, and several elementary streams can be combined and packetized to form packetized elementary streams (PES). PES streams can be stored or transmitted as they are but are more commonly converted into either program streams or transport streams. Program streams (PS) resemble the original MPEG-1 streams. They consist of variable length packets and are intended for use in media where there is a very low probability of bit errors or data loss. Transport stream (TS) packets are fixed length: each is 188 bytes long with 4 bytes of header information. TS packets are intended for transport over media where bit errors or loss of information is more likely. PES packets are loaded into TS packets such that the first byte of a PES packet is the first byte of a TS payload, and a single TS packet can only carry data from one PES. ( Riley book)
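The PES-to-TS mapping above can be sketched as follows. This is a simplified illustration: a real TS header also carries the payload unit start indicator, continuity counter, and adaptation-field stuffing, which are only schematic here (the header bytes and 0xFF padding are assumptions for illustration):

```python
def packetize_pes(pes: bytes, pid: int = 0x20) -> list[bytes]:
    """Split one PES packet into 188-byte TS packets (sketch).
    The first byte of the PES lands on the first payload byte of a TS
    packet, and each TS packet carries data from only this one PES."""
    PAYLOAD = 184  # 188-byte TS packet minus 4-byte header
    packets = []
    for i in range(0, len(pes), PAYLOAD):
        chunk = pes[i:i + PAYLOAD]
        # Schematic 4-byte header: sync byte 0x47 plus a 13-bit PID
        header = bytes([0x47, (pid >> 8) & 0x1F, pid & 0xFF, 0x10])
        chunk = chunk + b"\xff" * (PAYLOAD - len(chunk))  # schematic padding
        packets.append(header + chunk)
    return packets
```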
Synchronization information is built into the MPEG-2 systems layer. This is accomplished through the use of time stamps. Two time stamps, the presentation time stamp (PTS) and the decoder time stamp (DTS), are included in the PES packet header. These tell the decoder, respectively, when to display decoded information to the end user and when to decode the information in the decoder buffers. The clocks of the encoder and the decoder must also be synchronized. This task is accomplished through the use of program clock references (PCR). A PCR can be inserted into a TS packet in a field just after the TS header. PCR's are inserted at regular intervals to maintain synchronization between the encoder and the decoder. The use of these time stamps assumes that the transmission medium offers a constant transmission delay. In an ATM network, cell delay variation (CDV), or jitter, is always present. Issues relating to CDV are discussed in a later section.
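The PCR itself is a sample of the encoder's 27 MHz system clock, carried as a 33-bit base counted in 90 kHz units plus a 9-bit extension counting the remaining 27 MHz ticks (0-299). A small sketch of the field split:

```python
def pcr_fields(ticks_27mhz: int) -> tuple[int, int]:
    """Split a 27 MHz system clock sample into the PCR base (33-bit,
    90 kHz units) and PCR extension (9-bit, 0..299) fields."""
    base = (ticks_27mhz // 300) % (1 << 33)
    ext = ticks_27mhz % 300
    return base, ext

def pcr_value(base: int, ext: int) -> int:
    """Reconstruct the 27 MHz tick count the decoder uses to discipline
    its local system clock."""
    return base * 300 + ext
```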
The MPEG-1 (ISO/IEC 11172) video encoding standard was designed to support video encoding at bit rates of approximately 1.5 Mbps. The video encoding methods employ I, P and B frames described above. The quality of the video achieved with this standard is roughly similar to that of a VHS VCR. This level of quality is generally not acceptable for broadcast quality video. It is expected that most video over ATM applications will utilize MPEG-2 rather than MPEG-1.
The ITU-T Recommendation H.261 describes a video encoding standard for two-way audio and video transmission. It has traditionally utilized 64 kbps or 128 kbps ISDN links. The H.261 method uses buffering to smooth out short term variations in bit rate from the video encoder. A near constant bit rate is achieved by feeding back the status of the buffer to the encoder. When the buffer is nearly full, the encoder can adjust the bit rate by increasing the quantization step size. This reduces the bit rate from the encoder at the expense of video quality. H.261 defines two fixed resolutions: CIF (352x288) and QCIF (176x144). The latter is often used due to the low bit rate of the ISDN connections. Like MPEG-2, this encoding method employs motion compensated prediction. The output bit structure is similar to the MPEG-2 structure shown above in Figure 2. H.261 employs VLC's at the base level; blocks form macroblocks, macroblocks form groups of blocks (GOB), and GOB's form the picture. Video-conferencing using H.261 encoding can be accommodated over ATM networks via circuit emulation utilizing the features of AAL 1. ( Riley book)
Motion JPEG (Joint Photographic Experts Group) is an extension of the joint ITU and ISO JPEG standard for still images. Motion JPEG is a symmetric compression method that typically achieves compression ratios of 10:1 to 50:1. As an extension of the JPEG still image standard, Motion JPEG removes only intra-frame redundancy, not inter-frame redundancy. This results in significantly less compression than a method which removes both. Another drawback of Motion JPEG is that audio is not integrated into the compression method. There are four modes of JPEG operation defined:
1) Sequential - The compression method proceeds left to right and top to bottom.
2) Progressive - Compression can take place in multiple scans of an image. This allows an image to be displayed in stages. The image quality improves with each stage. This is particularly useful on a low bandwidth link.
3) Lossless Encoding - This method allows exact recovery of the image at the expense of less compression.
4) Hierarchical Encoding - An image encoded with this method can be displayed at multiple resolutions without first being decompressed at the highest resolution.
The lack of inter-frame coding can be viewed as a feature for some video applications. If direct access to a random video frame is desired, Motion JPEG will allow faster access than MPEG-1 or MPEG-2. With inter-frame coding, only a fraction of the transmitted frames are encoded independently of previous or future frames. Thus, it may be necessary to wait for multiple frames to arrive before a specific frame can be decoded. With Motion JPEG, any frame received can be decoded immediately.
There are two main options for mapping MPEG-2 bit streams into ATM cells: constant bit rate (CBR) transmission over ATM adaptation layer 1 (AAL 1), and transmission over AAL 5. Originally, AAL 2 was envisioned as the adaptation layer that would provide the necessary support for video services over ATM; currently, AAL 2 remains undefined. The merits of AAL 5 and AAL 1 are discussed below.
Figure 3 ( VoD Specification) shows the mapping of TS packets into AAL 5 Protocol Data Units (PDU). Two TS packets map exactly into 8 ATM cells. This mapping has been adopted in the Video on Demand Specification 1.1 by the ATM Forum. One major drawback of using AAL 5 is that it lacks a built-in mechanism for timing recovery. ( Minoli book) Also, AAL 5 does not have a built-in capacity for supporting forward error correction (FEC). One major advantage of using AAL 5 may be financial: since video applications will require a signaling capability, AAL 5 will already be implemented in the ATM equipment. ( Minoli book) Another advantage of using AAL 5 is that by adopting a NULL convergence sublayer (CS), no additional network functionality needs to be defined ( Dixit paper). There are two major categories of video that would likely be transmitted over ATM using AAL 5. Video sent over heterogeneous networks would likely use AAL 5; this video would probably be carried as IP packets over ATM and would be encoded in proprietary formats such as Quicktime or AVI. For this class of video, the network would provide no quality of service guarantees. The second class would be variable bit rate (VBR) traffic native to the ATM network; this video would be able to benefit from quality of service guarantees. ( Riley book)
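The arithmetic behind the "two TS packets into 8 cells" mapping can be checked directly: two 188-byte TS packets plus the 8-byte AAL 5 CPCS trailer come to 384 bytes, exactly eight 48-byte cell payloads. A small sketch:

```python
def aal5_cells(n_ts_packets: int, ts_size: int = 188, trailer: int = 8,
               cell_payload: int = 48) -> int:
    """Number of ATM cells needed to carry n TS packets in one AAL 5
    CPCS-PDU. The PDU is padded so that payload plus the 8-byte trailer
    fills an integral number of 48-byte cell payloads."""
    total = n_ts_packets * ts_size + trailer
    return -(-total // cell_payload)  # ceiling division
```

Note that a single TS packet would need 5 cells (196 bytes rounded up to 240), wasting 44 bytes of padding; the two-packet grouping wastes none, which is why Specification 1.1 adopted it.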
As shown in Figure 4, a TS packet maps neatly into 4 ATM AAL 1 cells. One major advantage of AAL 1 over AAL 5 is that it was designed for real time applications. The major disadvantage of AAL 1 is that it only supports constant bit rate applications, and future video applications will probably want to take advantage of variable bit rate transmission options. AAL 1 would also need to be supported in end equipment in addition to the AAL 5 functionality. AAL 1 does provide for forward error correction (FEC). This may be important for some video applications, especially over media prone to errors, such as wireless ATM. AAL 1 is expected to be the adaptation layer of choice to support video from H.261 or H.263 encoders. H.261 video has traditionally been transported over lines which are multiples of 64 kbps or ISDN lines. ( Riley Book)
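The AAL 1 mapping is also exact arithmetic: AAL 1 uses one byte of each 48-byte cell payload for its SAR header, leaving 47 bytes, and 188 = 4 x 47. A one-line check:

```python
AAL1_PAYLOAD = 47  # 48-byte cell payload minus the 1-byte AAL 1 SAR header

def aal1_cells(ts_size: int = 188) -> int:
    """Cells needed to carry one TS packet over AAL 1."""
    return -(-ts_size // AAL1_PAYLOAD)  # ceiling division
```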
To provide video of acceptable quality to the user, the network must provide a certain level of service. Cell delay variation, bit errors, and cell loss can all have severe effects on the quality of the video stream received. A transmission link with a bit error rate of 10^-5 would be acceptable for non real time data transmission with some form of error correction. In a video stream, however, this error rate would cause serious degradation in the quality of the received video. Similarly, cell delay, cell loss, and rate control issues also have a significant impact on the quality of the video received. This section examines these issues.
Cell delay variation or jitter can have a significant impact on the quality of a video stream. MPEG-2 video systems use a 27 MHz system clock in the encoder and the decoder. This clock is used to synchronize the operations at the decoder with those at the encoder. This enables video and audio streams to be correctly synchronized and also regulates the retrieval of frames from the decoder buffer to prevent overflow or underflow. To keep the encoder and decoder in synchronization with each other the encoder places program clock references (PCR) periodically in the TS. These are used to adjust the system clock at the decoder as necessary. If there is jitter in the ATM cells the PCR's will also experience jitter. Jitter in the PCR's will propagate to the system clock which is used to synchronize the other timing functions of the decoder. This will result in picture quality degradation. ( Minoli book)
One proposed solution for traffic over AAL 1 is to use synchronous residual time stamps (SRTS). In this method, both ends of the transmission would need access to the same standard network clock. This reference clock could then be used to determine and counter the effects of the CDV. Whether this clock would be readily available is unknown. Also, there is some question whether AAL 1 would provide enough bits for SRTS to be effective. ( Minoli book)
A lengthy discussion of sources of jitter and ways to estimate jitter in ATM networks is provided in Appendix A of the ATM Forum Video on Demand Specification 1.1. ( VoD Specification)
Encoded video streams are highly susceptible to loss of quality due to bit errors. Bit error rates are media dependent, with the lowest error rates expected from optical fiber. For a 5 Mbps video stream, the average interval between bit errors at several bit error rates is given in Table 1 below. ( Riley book)
| Bit Error Rate | Average Interval between Errors |
|----------------|---------------------------------|
| 10^-5          | 20 ms                           |
| 10^-6          | 200 ms                          |
| 10^-7          | 2 sec                           |
| 10^-8          | 20 sec                          |
| 10^-9          | 3 min, 20 sec                   |
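The intervals in Table 1 follow directly from the bit rate: the mean time between errors is the reciprocal of (bit rate x bit error rate). A quick sketch reproducing the table's values:

```python
def error_interval_s(bit_rate_bps: float, ber: float) -> float:
    """Mean time in seconds between bit errors for a stream at the
    given bit rate and bit error rate."""
    return 1.0 / (bit_rate_bps * ber)
```

At 5 Mbps, a BER of 10^-5 yields one error every 20 ms, while 10^-9 stretches the interval to 200 seconds (3 min, 20 sec), matching Table 1.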
The encoding method of MPEG-2 video makes it susceptible to picture quality loss due to bit errors. The error that occurs in one cell can propagate both spatially and temporally through the video sequence. Spatial errors occur because the variable length codes (VLC) that make up the blocks and slices are coded differentially and utilize motion vectors from the previous VLC. If a VLC is lost, the error will propagate to the next point of absolute coding, which in an MPEG-2 stream is the start of the next video slice. A single bit error can therefore degrade the picture quality of a larger strip in the frame. Temporal error propagation occurs due to the forward and bidirectional prediction in P and B frames. An error that occurs in an I frame will propagate through a previous B frame and all subsequent P and B frames until the next I frame occurs. Figure 5 ( Riley book) illustrates this situation: a strip is in error in the original frame due to the loss of VLC synchronization, and the error is propagated temporally through the group of pictures. In a typical video sequence, a GOP can last for 12 to 15 frames. At 25 to 30 frames per second, the error could persist for about 0.5 seconds, long enough to make the video quality objectionable in many cases. Bit errors that occur in P frames will be propagated in a similar manner to surrounding B frames, generating a similar but more limited effect. Bit errors in B frames affect only that frame.
For the reasons described in the previous section, the cell loss rate also plays a critical role in the quality of the decoded video stream. The cell loss rate can depend on a number of factors, including the physical media in use, the switching technique, the switch buffer size, the number of switches traversed in a connection, the QoS class used for the service, and whether the video stream is CBR or VBR ( Minoli book). Loss of cells in ATM networks is often a result of congestion in the switches. Providing appropriate rate control is one way to limit cell loss.
A traffic contract is negotiated between the user and an ATM network at connection setup time. This contract is policed by the usage parameter control (UPC), typically using the generic cell rate algorithm (GCRA), to ensure that the source does not violate the contract. It may be difficult to determine at connection setup time exactly what bit rate a particular video stream will require, since video bit rates vary with changes in scene content. A scene with little motion and limited detail requires a low bit rate, but if the motion suddenly increases, the required bit rate rises sharply, the traffic contract may be violated, and cells may be lost. It would be inefficient to allocate bandwidth at the peak cell rate and maximum burst size, yet allocating too little bandwidth can lead to cell loss.
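The GCRA itself is compact. The sketch below is the virtual-scheduling form: a cell conforms if it arrives no earlier than the theoretical arrival time (TAT) minus the tolerance tau; conforming cells advance the TAT by the increment T (the reciprocal of the contracted cell rate). The specific parameter values in the test are illustrative, not from any real contract:

```python
class GCRA:
    """Virtual-scheduling form of the generic cell rate algorithm.
    T is the cell inter-arrival increment, tau the tolerance (CDVT)."""

    def __init__(self, T: float, tau: float):
        self.T, self.tau = T, tau
        self.tat = 0.0  # theoretical arrival time of the next cell

    def conforming(self, t: float) -> bool:
        if t < self.tat - self.tau:
            return False  # cell arrived too early: non-conforming
        # Conforming: schedule the next theoretical arrival
        self.tat = max(t, self.tat) + self.T
        return True
```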
If the user exceeds the negotiated contract, the UPC can tag those cells which violate the contract so they can be dropped in the event of network congestion. Studies have shown that when video streams violate their ATM traffic contracts it is most often the larger I frames which are at fault and subject to being dropped rather than P or B frames. ( Ohta paper) Lost I frames also lead to the greatest degradation in picture quality of the three frame types.
Various rate control methods have been proposed for both CBR and VBR video. With CBR video a buffer can be employed to smooth out slight variations in frame sizes. If the bit rate from the encoder rises sharply the buffer can be exceeded and cells lost. In order to prevent large changes in bit rate, one method is to use a closed loop encoder. With a closed loop encoder the status of the buffer is fed back to the encoder. If the buffer is close to full, the encoder can lower the bit rate of the frames it is encoding by increasing the DCT quantization step size. This is done at the expense of video quality. ( Riley book)
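The closed-loop control described above can be sketched as a toy feedback rule; the thresholds and step sizes here are illustrative assumptions, not values from any standard (MPEG-2 quantizer scales run roughly 1 to 31):

```python
def next_quant_step(q: int, buf_fill: int, buf_size: int,
                    q_min: int = 1, q_max: int = 31) -> int:
    """Toy closed-loop rate control: coarsen the DCT quantizer as the
    smoothing buffer fills (cutting bit rate at the expense of quality),
    and refine it as the buffer drains."""
    if buf_fill > 0.8 * buf_size:
        q = min(q_max, q + 2)   # buffer near full: reduce bit rate
    elif buf_fill < 0.2 * buf_size:
        q = max(q_min, q - 1)   # buffer near empty: spend bits on quality
    return q
```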
Video can be transmitted at a constant bit rate with constant quality by employing a buffer in the end user equipment and a delay before beginning video playback. This method is useful for VoD, which will tolerate a delay before playback. The delay and buffer allow a number of frames to be transmitted to the decoder ahead of time, so that the bursty MPEG frames can be sent at a constant transmission rate. The relationship between the initial delay, buffer size, and transmission rate has also been studied. ( McManus paper) ( Ni paper)
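The required startup delay can be computed from the frame sizes: the channel's cumulative delivery must stay ahead of the decoder's cumulative consumption. A simplified sketch of that calculation, assuming an idealized channel with no cell or header overhead:

```python
import itertools

def min_startup_delay(frame_bits: list[int], rate_bps: float,
                      fps: float) -> float:
    """Smallest startup delay (seconds) such that a CBR channel of
    rate_bps never underflows a decoder consuming frame_bits at fps.
    Frame k (1-based) is consumed at time delay + k/fps, so its bits
    must have fully arrived by then."""
    delay = 0.0
    for k, cum in enumerate(itertools.accumulate(frame_bits), start=1):
        need = cum / rate_bps - k / fps
        delay = max(delay, need)
    return delay
```

Uniform frames at exactly the channel rate need no delay, while a large leading I frame forces the playback to wait until its extra bits have arrived.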
Rate control methods have also been proposed for VBR traffic. One proposed method uses rate based flow control. This method is a modified form of explicit backward congestion notification that was first proposed for available bit rate (ABR) service. With this scheme, the queue occupancy of each switching node is monitored as a measure of congestion. The users are notified of the congestion status of the switches and are instructed to adjust their transmission rates accordingly. The signaling information is transmitted to the user through operation and maintenance (OAM) cells called resource management (RM) cells. Studies of the trade off between congestion levels and picture quality degradation have been reported. ( Dagiuklas paper) ( Karademir paper)
Another method for rate control replaces the leaky bucket UPC with a control based on fuzzy logic. In simulations the fuzzy policer was able to perform the functions of minimizing the cell loss rate and minimizing the effects of policing on picture quality. Work continues in this area as well. ( Andronico paper)
Bit errors and cell loss in video transmissions tend to cause noticeable picture quality degradation. Error correction and concealment techniques provide methods for the decoder to deal with errors in a way that minimizes the quality loss. Error correction techniques remove the errors and restore the original information. Error concealment techniques do not remove the errors, but manage them in a way that makes them less noticeable to the viewer. Encoding parameter adjustments can also be made that reduce the effects of errors and cell loss.
Error correction is more difficult for real time data than it is for non-real time data. The real time nature of video streams means that they cannot tolerate the delay that would be associated with a traditional retransmission based error correction technique such as automatic repeat request (ARQ). Delay is introduced in the acknowledgment of receipt of frames as well as in waiting for the timeout to expire before a frame is retransmitted. For this reason ARQ is not useful for error correction of video streams.
Forward error correction (FEC) is another error correcting technique. This is supported in ATM by AAL 1. FEC takes a set of input symbols representing data and adds redundancy, producing a different and larger set of output symbols. ( Riley book) FEC methods that can be used include Hamming, Bose-Chaudhuri-Hocquenghem (BCH), and Reed-Solomon codes. FEC presents a trade-off to the user. On the positive side, FEC allows lost information to be recovered. On the negative side, this ability is paid for in the form of a higher bandwidth requirement for transmission. This added traffic can introduce additional congestion to the network, leading to a greater number of lost cells, which may or may not themselves be recoverable with FEC. The role of FEC in video is still a topic of discussion. ( Rasheed paper) ( Ayanoglu paper)
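The simplest illustration of the FEC principle is an XOR parity cell: one redundant cell per group lets the receiver reconstruct any single lost cell. This is a deliberately minimal sketch of erasure recovery, not the Reed-Solomon interleaving actually used by AAL 1:

```python
def xor_parity(cells: list[bytes]) -> bytes:
    """Compute a parity cell as the byte-wise XOR of equal-length cells."""
    parity = bytearray(len(cells[0]))
    for cell in cells:
        for i, b in enumerate(cell):
            parity[i] ^= b
    return bytes(parity)

def recover(cells: list, parity: bytes) -> bytes:
    """Reconstruct the single missing cell (marked None) by XOR-ing the
    parity with every surviving cell."""
    missing = cells.index(None)
    rec = bytearray(parity)
    for j, cell in enumerate(cells):
        if j != missing:
            for i, b in enumerate(cell):
                rec[i] ^= b
    return bytes(rec)
```

The bandwidth cost is visible directly: one extra cell per group, i.e. 25% overhead for groups of four.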
Error concealment is a method of reducing the magnitude of errors and cell loss in the video stream. These methods include temporal concealment, spatial concealment, and motion compensated concealment. With temporal concealment, the errored data in the current frame is replaced by the unerrored data from the previous frame. In video sequences where there is little motion in the scene, this method will be quite effective. Another method of concealing errors is spatial concealment. Spatial concealment involves interpolating the data that surrounds an errored block in a frame. This method is most useful if the data does not contain a high level of detail. Motion compensated concealment involves estimating the motion vectors from neighboring error free blocks. This method could be used to enhance spatial or temporal concealment techniques. I frames cannot be used with this technique since they have no motion vectors. ( Riley book)
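Spatial concealment by interpolation can be sketched on a toy luminance grid. This illustrative version fills an errored block by linearly blending the pixel rows just above and below it; real decoders use more elaborate schemes, and the frame layout here is an assumption for illustration:

```python
def conceal_block(frame: list, top: int, left: int, size: int) -> None:
    """Replace an errored size x size block in a 2-D pixel grid by
    interpolating between the row above and the row below it (simple
    spatial concealment sketch; modifies the frame in place)."""
    above = frame[top - 1]
    below = frame[top + size]
    for r in range(size):
        w = (r + 1) / (size + 1)  # weight shifts toward the lower row
        for c in range(left, left + size):
            frame[top + r][c] = round((1 - w) * above[c] + w * below[c])
```

As the text notes, this works best on smooth regions; on highly detailed content the interpolated block will visibly blur.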
The encoding parameters for a video stream can be adjusted to make a stream more resistant to bit errors and cell loss. Scalable coding is supported by MPEG-2 (as well as Motion JPEG). Scalable coding allows multiple qualities of service to be encoded in the same video stream. When congestion is not present in the network, all the cells arrive at the decoder and the quality is optimal. When congestion is present, the coding can be arranged so that the cells providing a base layer of quality reach the decoder while the enhancement cells are lost. Temporal localization is another method that can improve the quality of the video in the presence of cell loss. This involves adding additional I frames to the video stream. Additional I frames prevent long error propagation strings when a cell is lost, since errors are rarely propagated beyond the next I frame encountered. However, the additional I frames are larger than the P or B frames they replace, so compression efficiency is reduced, and the greater bit rate required for these added I frames can contribute to network congestion. A third technique that can be performed at the encoder is to decrease the slice size. Since re-synchronization after an error occurs at the start of the next slice, decreasing the slice size allows this re-synchronization to occur sooner. ( Riley book)
The Audiovisual Multimedia Services Technical Committee of the ATM Forum released the Video on Demand Specification 1.1, in March 1997. This document represents the first phase of a study of multimedia issues relating to ATM. Specification 1.1 only addresses issues relating to the transport of constant packet rate MPEG-2 Single Program Transport Streams (ISO/IEC 13818-1) over ATM networks. While the scope of the document is very limited, many believe it will serve as a guide for carriage of a wide range of video over ATM networks.
Video on Demand (VoD) Specification 1.1 provides a reference configuration for the network supplying VoD services (Figure 6). The configuration consists of a server, client, and a separate session/connection control unit. The client could be either a set-top-terminal (STT) or inter working unit (IWU). The reference depicts five communications links which would be served by five separate virtual connections (VC). If the server and client both support signaling (ATM Forum Signaling Specification 4.0), then the user-to-network signaling VC's would be as shown. In the event either the server or the client or both did not support signaling, proxy signaling could be employed as described in a later section. The MPEG-2 Single Program Transport Stream traffic would be accommodated on a separate VC. This VC would be the last VC connection established. The User-to-User Control VC would be used for implementation specific information flows. VoD Specification 1.1 indicates that one of the main purposes for this VC would be to exchange program selection information between the client and the server. This would allow the end user to select a specific item (e.g. a movie) for viewing and inform the server of that selection. The VoD Session Control VC would be used for session control information. This link would be utilized to facilitate connection set up between the server and the client in the event that proxy signaling was required.
In Figure 7, the protocol reference model has been combined with the reference configuration for VoD. The network adaptation uses AAL 5 in the manner described in the previous section on packing MPEG-2 TS packets into AAL 5 cells. Specification 1.1 allows for the following mapping:
1) Every AAL5-SDU shall contain N MPEG-2 SPTS packets, unless there are fewer than N packets left in the SPTS. (Remaining packets are placed in the final CPCS-SDU)
2) The value of N is established via ATM signaling using N = the AAL5 CPCS-SDU size divided by 188. The default AAL5 CPCS-SDU size is 376 octets, which is two TS packets (N=2)
3) In order to ensure a base level of interoperability, all equipment shall support the value N=2 (AAL5 CPCS-SDU size of 376 octets)
A NULL service specific convergence sublayer is indicated by Specification 1.1.
Proxy signaling procedures are defined by the VoD Specification. Proxy signaling is supported when either the server, the client, or both do not support signaling. The basic procedure is as follows: the client contacts the session controller, which provides the client with a list of servers from which to choose. When the client selects a server, the session controller informs the server that the client wishes to establish a connection. If the server agrees, the session controller instructs the ATM connection controller to establish a VC for user-to-user control information. It is over the user-to-user control VC that the client makes a specific program selection (e.g. what movie to receive). A VC is then established for the transfer of MPEG-2 SPTS video from the server to the client.
There has been a great deal of interest recently in the area of wireless networking. Issues such as bit error rates and cell loss rates are even more important when transmitting video over a wireless network. A very high performance wireless local area network (VHP-WLAN) operating in the 60 GHz millimeter wave band can experience cell loss rates of 10^-4 to 10^-2. ( Zhang paper) To provide adequate picture quality to the user, some form of error correction or concealment must be employed. One option is to use the MPEG-2 error resilience techniques described previously; ARQ will probably not work, for the reasons discussed previously. One proposed solution is to modify the MPEG-2 standard slightly when it is used over wireless ATM networks. This technique is known as macroblock re-synchronization. ( Zhang paper) In macroblock re-synchronization, the first macroblock in every ATM cell is coded absolutely rather than differentially. This allows re-synchronization of the video stream much more often than would be possible if re-synchronization could only take place at the slice level. The authors of this proposal indicate that it would be relatively simple to incorporate this method with the existing MPEG-2 coding standard by adding an inter-working adapter at the boundary between the fixed and wireless networks. A second proposal for improving error resilience in wireless networks is to use FEC methods. In addition, improved performance can be achieved by using a two layer scalable MPEG-2 coding scheme rather than one layer. ( Ayanoglu paper)
This section contains a list of some of the video over ATM products that are currently on the market. Most of these products are video-conferencing coder/decoders that use the motion JPEG compression method.
Fore Systems StreamRunner AVA/ATV-300 ATM video encoder/decoder ( Web Reference)
ATM Interface: OC-3c (SONET) 155 Mbps multimode fiber
UNI 3.0 signaling
Point to multipoint or point to point
AAL 5 used for video and audio encapsulation
Uncompressed and Motion JPEG video support
SVC and PVC support
K-Net ATM Video Products: CellStack Family ( Web Reference)
SVC (UNI 3.0/3.1)
Traffic streams transported within separate VCI's using AAL 5
720 * 576 pixel resolution, 50 fps full motion video (PAL)
First Virtual Corporation: V-Gate ( Web Reference)
384 kbps transmission rate
Uses H.320 video-conferencing standard
STS Technologies Page: MMX Explorer ( Web Reference)
Full motion JPEG
640x480 pixel/frame resolution
Connects via SONET or DS3
AHERN Communications Corporation: ARMANDA Cruiser 100 System ( Web Reference)
Desktop video-conferencing for ISDN, BRI, LAN's, ATM, and the Internet
H.261 video compression
Many issues relating to video delivery over ATM have been discussed in this paper, including video compression, ATM adaptation layer selection, quality of service, error correction and concealment, video on demand, and wireless ATM. Most of these areas are still the subject of debate. Given the potential of ATM networks for the delivery of video services, it is clear that this topic will continue to be of great interest in the near future.
AAL : ATM Adaptation Layer
ADSL : Asymmetric Digital Subscriber Loop
ARQ : Automatic Repeat Request
ATM : Asynchronous Transfer Mode
CBR : Constant Bit Rate
CPR : Constant Packet Rate
CS : Convergence Sublayer
DCT : Discrete Cosine Transform
FEC : Forward Error Correction
FTTC : Fiber to the Curb
FTTH : Fiber to the Home
GOP : Group of Pictures
HFC : Hybrid Fiber/Coax
IWU : Inter Working Unit
MPEG-1 : Moving Picture Experts Group Phase 1
MPEG-2 : Moving Picture Experts Group Phase 2
MPEG2-PCR : MPEG-2 Program Clock Reference
PC : Personal Computer
PDU : Protocol Data Unit
PES : Packetized Elementary Stream
PVC : Permanent Virtual Circuit
QoS : Quality of Service
SPTS : Single Program Transport Stream
SRTS : Synchronous Residual Time Stamp
STT : Set Top Terminal
SVC : Switched Virtual Circuit
UPC : Usage Parameter Control
VBR : Variable Bit Rate
VC : Virtual Connection
VHP-WLAN : Very High Performance Wireless Local Area Network
VIP : Video Information Provider
VLC : Variable Length Code
VoD : Video on Demand
WLAN : Wireless Local Area Network
1. M. J. Riley, I. E. G. Richardson, Digital Video Communications, Artech House, Boston, MA, 1997.
A good survey of topics relating to digital video communications with special emphasis on transmission issues.
2. D. Minoli, Video Dialtone Technology, McGraw-Hill, New York, NY, 1995.
A good survey of video networks with emphasis on different hardware configurations.
3. The ATM Forum, "Audiovisual Multimedia Services: Video on Demand Specification 1.1," af-saa-0049.001, March, 1997.
The ATM Forum approved specification for Video on Demand.
4. J. Zhang, M. R. Frater, J. F. Arnold, T. M. Percival, "MPEG 2 Video Services for Wireless ATM Networks," IEEE Journal on Selected Areas in Communications, 15(1), 1997, pp. 119-128.
Study of error handling methods for MPEG-2 video over high-performance wireless ATM LANs.
5. Y. Rasheed, A. Leon-Garcia, "AAL 1 with FEC for the Transport of CBR MPEG2 Video over ATM Networks," IEEE Infocom, V2, 1996, pp. 529-536.
Presents a method for implementing MPEG-2 video transmission over ATM networks incorporating forward error correction.
6. S. Dixit, P. Skelly, "MPEG-2 over ATM for Video Dial Tone Networks: Issues and Strategies," IEEE Network, 9(5), 1995, pp. 30-40.
A good general survey of the issues for video dial tone networks.
7. C. Ohta, K. Shinagawa, Y. Onozato, "Cell Loss Properties for Multiplexing of MPEG Video Sources Considering Picture Coding Types in ATM Networks," IEEE International Conference on Communications, (3), 1996, pp. 1396-1400.
Cell loss rates are studied for the specific MPEG picture types in multiplexing algorithms.
8. J. M. McManus, K. W. Ross, "Video on Demand over ATM: Constant-Rate Transmission and Transport," IEEE Infocom, v. 3, 1996, pp. 1357-1362.
A method for implementing VoD over CBR transmission links using buffering at the receiver.
9. J. Ni, T. Yang, D. H. K. Tsang, "CBR Transportation of VBR MPEG-2 Video Traffic for Video-On-Demand in ATM Networks," IEEE International Conference on Communications, (3), 1996, pp. 1391-1395.
Describes a method for performing CBR transportation of VBR video traffic using buffers and delays. This method does not adjust quantization scales at the encoder.
10. A. Dagiuklas, M. Ghanbari, "Rate-Based Flow Control of Video Services in ATM Networks," Globecom, (1), 1996, pp. 284-288.
Proposes a method for rate based flow control of video services based on monitoring queue occupancy of each switching node and advising the users to adjust transmission rates accordingly.
11. S. Karademir, I. Lambadaris, M. Devetsikiolis, A. R. Kaye, "Dynamic Rate Control of VBR MPEG Video Transmission over ATM Networks," Globecom, (3), 1996, pp. 1509-1515.
A rate control method for video traffic over CBR is specified by adapting an ABR rate control scheme.
12. M. Andronico, V. Catania, G. Ficili, S. Palazzo, D. Panno, "Performance Evaluation of a Fuzzy Policer for MPEG Video Traffic Control," IEEE International Conference on Communications, (1), 1996, pp. 439-443.
This paper proposes a fuzzy logic alternative to the leaky bucket algorithm for traffic control of MPEG video.
13. E. Ayanoglu, P. Pancha, A. R. Reibman, S. Talwar, "Forward Error Control for MPEG-2 Video Transport in a Wireless ATM LAN," IEEE International Conference on Image Processing, v. 2, 1996, pp. 833-836.
A method for performing forward error correction on a wireless ATM LAN using two phase encoding.
14. C. Gao, J. S. Meditch, "An Adaptive Rate Control Scheme for VBR Video over ATM Networks," Globecom, (1), 1996, pp. 463-467.
A rate control scheme based on multiplexing buffer occupancy.
15. C.-F. Chang, H.-C. Lin, J.-S. Wang, "Two-State Video Source Modeling for Admission Control on ATM Networks," Globecom, (1), 1996, pp. 497-501.
Proposes an admission control method based on a two-state Markov Fluid Model.
16. H. V. Todd, J. S. Meditch, "Encapsulation Protocols for MPEG Video in ATM Networks," IEEE Infocom, v. 3, 1996, pp. 1072-1079.
This paper proposes methods for packing MPEG video into ATM cells in efficient and error resilient ways.
17. J. Mata, G. Pagan, S. Sallent, "Multiplexing and Resource Allocation of VBR MPEG Video Traffic Over ATM Networks," IEEE International Conference on Communications, (3), 1996, pp. 1401-1405.
A study of VBR video multiplexing issues.
18. D. Wilson, M. Ghanbari, "An Efficient Loss Priority Scheme for MPEG-2 Variable Bit Rate Video for ATM Networks," Globecom, (3), 1996, pp. 1954-1958.
Describes a frame sequence partitioning method for two layer loss priority.
19. J. Feng, K.-T. Lo, H. Mehrpour, A. E. Karbowiak, "Loss Recovery Techniques For Transmission of MPEG Video Over ATM Networks," IEEE International Conference on Communications, (3), 1996, pp. 1406-1410.
A cell loss recovery technique based on cell packing, using the macroblock as the base unit, is proposed, along with a modified boundary matching algorithm.
20. C.-S. Wu, G.-K. Ma, B.-S. P. Lin, "On Scalable Design of an ATM-based Video Server," IEEE International Conference on Communications, (3), 1996, pp. 1335-1340.
Scalable video server design issues are discussed.
Fore Systems Corporation
First Virtual Corporation
STS Technologies Corporation
Ahern Communications Corporation