Archive-name: cell-relay-faq/part4
Last-modified: 1997/10/06
URL: http://cell-relay.indiana.edu/cell-relay/FAQ/ATM-FAQ/FAQ.html

NOTE!!!! If you are reading this FAQ as stored on some automated FAQ
archive site, you would be better off following the above http link to
the most recent official version of this FAQ.  Not only may it be more
current, but it will be better formatted than what you are viewing now!

-----------------------------------------------------------------------
comp.dcom.cell-relay FAQ: ATM and related technologies (Rev 1997/10/06)
Part 4 - Introduction and Topic D of FAQ
-----------------------------------------------------------------------
Copyright © 1992, 1993, 1994, 1995, 1996, 1997 Carl Symborski

Cell Relay FAQ - Introduction

The Cell Relay FAQ is posted periodically in multiple parts as a Usenet
News FAQ under the title comp.dcom.cell-relay FAQ: ATM, SMDS, and related
technologies. This FAQ is also maintained as a collection of WEB pages
(http://cell-relay.indiana.edu/cell-relay/FAQ/ATM-FAQ/FAQ.html). The WEB
pages will generally be more current than the posted FAQ. In fact, this
FAQ is maintained as WEB pages and then posted as a traditional Usenet News
FAQ every few months.

This article is the fourth of eight articles which contain general
information and also answers to some Frequently Asked Questions (FAQ)
which are related to or have been seen in comp.dcom.cell-relay. This FAQ
provides information of general interest to both new and experienced
readers. It is posted to the Usenet comp.dcom.cell-relay, comp.answers,
and news.answers news groups every few months.

This FAQ reflects cell-relay traffic through August 1997.

If you have any additions, corrections, or suggestions for improvement to
this FAQ, please send them to carl@umd5.umd.edu.

I will accept suggestions for questions to be added to the FAQ, but please
be aware that I will be more receptive to questions that are accompanied by
answers. :-)

Enjoy!

Carl Symborski
Vice President - Engineering
SALIX Technology, Inc.

carl@umd5.umd.edu
cws@salix.com

Carl's home page is at
http://cell-relay.indiana.edu/cell-relay/FAQ/ATM-FAQ/carl/home.html

---------------------------------------------------------------------------

Cell Relay FAQ - Copyright Notice and Disclaimer

The Cell Relay FAQ is posted periodically in multiple parts as a Usenet
News FAQ under the title comp.dcom.cell-relay FAQ: ATM, SMDS, and related
technologies. This FAQ is also maintained as a collection of WEB pages.

Both versions are Copyright © 1992-1997 Carl Symborski and may be freely
redistributed in their entirety provided that this copyright notice is not
removed. They may not be sold for profit or incorporated in commercial
documents or CD-ROMs without the written permission of Carl Symborski.
Permission is expressly granted for this document to be made available for
file transfer from installations offering unrestricted anonymous file
transfer on the Internet. This article is provided as is without any
express or implied warranty. Nothing in this article represents the views
of the University Of Maryland.

---------------------------------------------------------------------------

TOPIC D

ATM TECHNOLOGY QUESTIONS

---------------------------------------------------------------------------

  D1.  What are the various ATM Adaptation layers?
  D2.  Are ATM cells delivered in order?
  D3.  What do people mean by the term "traffic shaping"?
  D4.  What is happening with signalling standards for ATM?
  D5.  What is VPI and VCI?
  D6.  Why both VPI *and* VCI?
  D7.  How come an ATM cell is 53 bytes anyway?
  D8.  How does AAL5 work?
  D9.  What are the differences between Q.93B, Q.931, and Q.2931?
  D10. What is a DXI?
  D11. What is Goodput?
  D12. Questions about LAN Emulation (LANE).
  D13. Questions about the Classical IP over ATM approach.
  D14. What is the difference between a PVC, Soft PVC, and SVC?
  D15. ATM Physical Level Questions.
  D16. What is ABR?
  D17. Questions about VPI/VCI assignment?
  D18. Specs on how Frame Relay frames get mapped to ATM cells.
  D19. What are the meanings of CBR, VBR, ABR, and UBR?
  D20. Are VP and VC unidirectional?
  D21. M4 ATM Mgmt Interface Questions?
  D22. Questions about QOS.
  D23. Questions about ATM Cell Headers.
  D24. What is MPOA?
  D25. Partial/Early Packet Discard (PPD/EPD) Questions
  D26. Questions about ATM addressing schemes
  D27. What are DBR and SBR?
  D28. What is CLP=0+1 all about?
  D29. Connection establishment in the ATM layer
  D30. Information about B-ISDN and B-ICI

---------------------------------------------------------------------------
SUBJECT D1)

                What are the various ATM Adaptation layers?

In order for ATM to support many kinds of services with different traffic
characteristics and system requirements, it is necessary to adapt the
different classes of applications to the ATM layer. This function is
performed by the AAL, which is service-dependent. Four types of AAL were
originally recommended by CCITT. Two of these have now been merged into
one.

Briefly, the four ATM adaptation layers (AALs) are defined as follows:

   * AAL1 - Supports connection-oriented services that require constant bit
     rates and have specific timing and delay requirements. Examples are
     constant bit rate services such as DS1 or DS3 transport.
   * AAL2 - This adaptation is a method for carrying voice over ATM. It
     consists of variable size packets (max : 64 bytes) encapsulated within
     the ATM payload. This was previously known as Composite ATM or AAL-CU.
     The ITU spec which describes this is called ITU-T I.363.2.
   * AAL3/4 - This AAL is intended for both connectionless and
     connection-oriented variable bit rate services. Originally two
     distinct adaptation layers, AAL3 and AAL4, they have been merged into
     a single AAL named AAL3/4 for historical reasons.
   * AAL5 - Supports connection-oriented variable bit rate data services.
     It is a substantially leaner AAL compared with AAL3/4, at the expense
     of error recovery and built-in retransmission. This tradeoff provides
     a smaller bandwidth overhead, simpler processing requirements, and
     reduced implementation complexity. Some organizations have proposed
     AAL5 for use with both connection-oriented and connectionless
     services.

Note that some folks talk about an "AAL0" which normally refers to a 'null'
AAL, i.e. the case where the payload is directly inserted into a cell. This
typically requires that the payload always fit into a single cell, since
without an AAL there is no way to delineate an upper layer PDU that spans
several cells.
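
As a rough, purely illustrative summary of the above (in Python; the
helper function and its arguments are invented for this FAQ and are not
part of any standard), the choice among AALs might be sketched like this:

# Informal summary of the AALs described above (illustrative, not normative).
AAL_SUMMARY = {
    "AAL1":   "connection-oriented, constant bit rate, timing required",
    "AAL2":   "connection-oriented, variable bit rate voice, timing required",
    "AAL3/4": "connectionless or connection-oriented, variable bit rate data",
    "AAL5":   "connection-oriented, variable bit rate data, lean overhead",
}

def suggest_aal(constant_rate, needs_timing):
    """Very crude AAL choice based only on the descriptions in this section."""
    if constant_rate and needs_timing:
        return "AAL1"        # e.g. DS1/DS3 circuit emulation
    if needs_timing:
        return "AAL2"        # e.g. variable-rate voice
    return "AAL5"            # general data traffic

for aal, summary in AAL_SUMMARY.items():
    print(f"{aal:7} {summary}")
print(suggest_aal(constant_rate=True, needs_timing=True))    # AAL1
print(suggest_aal(constant_rate=False, needs_timing=False))  # AAL5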

---------------------------------------------------------------------------
SUBJECT D2)

                     Are ATM cells delivered in order?

Yes. The ATM standards specify that all ATM cells will be delivered in
order. Any switch and adaptation equipment design must take this into
consideration.

---------------------------------------------------------------------------
SUBJECT D3)

             What do people mean by the term "traffic shaping"?

Here is an explicit definition of traffic shaping followed by a brief
tutorial. Note that a variety of techniques have been investigated to
implement traffic shaping. Reference the literature for keywords such as
"leaky bucket", "congestion", "rate control", and "policing".

Definition:
     Traffic shaping is forcing your traffic to conform to a certain
     specified behavior. Usually the specified behavior is a worst case or
     a worst case plus average case (i.e., at worst, this application will
     generate 100 Mbits/s of data for a maximum burst of 2 seconds and its
     average over any 10 second interval will be no more than 50 Mbit/s).

Of course, understand that the specified behavior may closely match the way
the traffic was going to behave anyway. But by knowing precisely how the
traffic is going to behave, it is possible to allocate resources inside the
network such that guarantees about availability of bandwidth and maximum
delays can be given.

Brief Tutorial

Assume some switches connected together which are carrying traffic. The
problem is to actually deliver the grade of service that has been promised,
and that people are paying good money for. This requires some kind of
resource management strategy, since congestion will be by far the greatest
factor in data loss. You also need to charge enough to cover your costs and
make a profit, but in such a way that you attract customers. There are a
number of parameters and functions that need to be considered:

PARAMETERS

There are lots of traffic parameters that have been proposed for resource
management. The more important ones are:

   * mean bitrate
   * peak bitrate
   * variance of bitrate
   * burst length
   * burst frequency
   * cell-loss rate
   * cell-loss priority
   * etc. etc.

These parameters exist in three forms:

   * actual
   * measured, or estimated
   * declared (by the customer)

FUNCTIONS

(a) Acceptance Function
     Each switch has the option of accepting a virtual circuit request
     based on the declared traffic parameters as given by the customer.
     Acceptance is given if the resulting traffic mix will not prevent the
     switch from achieving its quality of service goals.

     The acceptance process is gone through by every switch in a virtual
     circuit. If a downstream switch refuses to accept a connection, an
     alternate route might be tried.

(b) Policing Function
     Given that a switch at the edge of the network has accepted a virtual
     circuit request, it has to make sure the customer equipment keeps its
     promises. The policing function in some way estimates the
     parameters of the incoming traffic and takes some action if the
     measured traffic exceeds the agreed parameters. This action could be to
     drop the cells, mark them as being low cell-loss priority, etc.

(c) Charging Function
     The function most ignored by traffic researchers, but perhaps the most
     important for the success of any service! Basically, this function
     computes a charge from the estimated and agreed traffic parameters.

(d) Traffic Shaping Function
     Traffic shaping is something that happens in the customer premise
     equipment. If the Policing function is the policeman, and the charging
     function is the judge, then the traffic shaper is the lawyer. The
     traffic shaper uses information about the policing and charging
     functions in order to change the traffic characteristics of the
     customer's stream to get the lowest charge or the smallest cell-loss,
     etc.

     For example, an IP router attached to an ATM network might delay some
     cells slightly in order to reduce the peak rate and rate variance
     without affecting throughput. An MPEG codec that was operating in a
     situation where delay wasn't a problem might operate in a CBR mode.
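
For concreteness, here is a minimal token-bucket shaper sketch in Python.
It is only an illustration of the shaping idea; the GCRA used in the ATM
specs is defined differently (in terms of cell arrival times), and the
rate and bucket-depth parameters below are invented:

import time

class TokenBucketShaper:
    """Minimal token-bucket shaper: release a cell only when a token is free.

    rate  - tokens (cells) added per second
    depth - maximum burst size in cells
    """
    def __init__(self, rate, depth):
        self.rate = float(rate)
        self.depth = float(depth)
        self.tokens = float(depth)
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def send(self, cells=1):
        """Block until 'cells' cells conform, then consume their tokens."""
        while True:
            self._refill()
            if self.tokens >= cells:
                self.tokens -= cells
                return
            time.sleep((cells - self.tokens) / self.rate)

# Example: shape to roughly 1000 cells/s with bursts of up to 50 cells.
if __name__ == "__main__":
    shaper = TokenBucketShaper(rate=1000, depth=50)
    for _ in range(200):
        shaper.send()   # each call returns only once the cell conforms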

---------------------------------------------------------------------------
SUBJECT D4)

             What is happening with signalling standards for ATM?

NOTE: An authoritative account of the ATM Forum's work on signalling and
other implementation agreements can be found by surfing their WEB site at
http://www.atmforum.com/. Check in their library for back issues of their
"53 Bytes" newsletter (September 1994 for starters). Also check their
approved recommendations.

From a historical perspective, some of the ATM Forum's work in this area is
as follows.

The Signaling Sub-Working Group (SWG) of the ATM Forum's Technical
Committee completed its implementation agreement on signaling at the ATM
UNI during the summer of 1993. The protocol is based on Q93B with
extensions to support point-to-multipoint connections. Agreements on
addressing specify the use of GOSIP-style NSAPs for the (SNPA) address of
an ATM end-point at the Private UNI, and the use of either or both
GOSIP-style NSAPs and/or E.164 addresses at the Public UNI. The agreements
have been documented as part of the UNI 3.0 specification.

Additionally, the ANSI T1S1 as well as the ITU-T Study Group XI are
concerned with ATM signalling. In the latter half of 1993 a couple of
things happened:

  1. The ITU finally agreed to modify its version of Q93B to align it
     more closely with that specified in the ATM Forum's UNI 3.0
     specification. The remaining variations included some typos which the
     ITU Study Group found in the Forum's specification. Also, some
     problems were solved differently. Aligned yes, but the changes could
     still cause incompatibilities with UNI 3.0.

  2. Given the above, the ATM Forum's signalling SWG decided to modify the
     Forum's specification to close the remaining gap and align it with the
     ITU.

The biggest change was with SSCOP. UNI 3.0 references the draft ITU-T SSCOP
documents (Q.SAAL). However UNI 3.1 references the final ITU Q.21X0
specifications. These two specifications are not interoperable, so there is
no backwards compatibility between UNI 3.0 and UNI 3.1. The ATM Forum UNI
3.1 specification was approved in Fall 1994 and has been distributed to ATM
Forum members and is also available online. See section C4.

UNI 4.0 came next; it included not only switched VPs but also many
advances in QOS from the Traffic Management sub-working group.

Question: Signalling messages defined in Q.2931 and ATM Forum UNI v3.1
seem to establish VCCs only. How can VPCs be established by signalling?

Answer: ATM Forum UNI 4.0 provides for switched VPs. This is done by:

   * adding a new bearer class codepoint in bearCap IE for "VP service",
     and
   * adding a new pref/exc codepoint in connId IE for "exclusive VPCI, no
     VCI"

The ATM Forum also has a Private-NNI SWG. They have worked on a protocol
(called PNNI) for distributing link and node state information, and on a
call setup procedure, to support intra-network routing and switching. The
spec itself was completed in 1996.

Overall, the protocol is designed for source routing, where the first
switch in the network has enough information about the topology of the
network to determine a route, and then the path information is added to the
signaling message (SETUP) and routed along the path. The overall protocol
is considerably more complex than this, as it's necessary to minimise the
view of the topology of a network from the source's point of view (a
topological hierarchy is used, among other things), but that's basically
the approach.

---------------------------------------------------------------------------
SUBJECT D5)

                            What is VPI and VCI?

ATM is a connection-oriented protocol and as such there is a connection
identifier in every cell header which explicitly associates a cell with a
given virtual channel on a physical link. The connection identifier
consists of two sub-fields, the Virtual Channel Identifier (VCI) and the
Virtual Path Identifier (VPI). Together they are used in multiplexing,
demultiplexing and switching a cell through the network. VCIs and VPIs are
not addresses. They are explicitly assigned at each segment (link between
ATM nodes) of a connection when a connection is established, and remain for
the duration of the connection. Using the VCI/VPI the ATM layer can
asynchronously interleave (multiplex) cells from multiple connections.

---------------------------------------------------------------------------
SUBJECT D6)

                          Why both VPI *and* VCI?

The Virtual Path concept originated with concerns over the cost of
controlling BISDN networks. The idea was to group connections sharing
common paths through the network into identifiable units (the Paths).
Network management actions would then be applied to the smaller number of
groups of connections (paths) instead of a larger number of individual
connections (VCI). Management here includes call setup, routing, failure
management, bandwidth allocation, etc. For example, use of Virtual Paths in
an ATM network reduces the load on the control mechanisms because the
functions needed to set up a path through the network are performed only
once for all subsequent Virtual Channels using that path. Changing the
trunk mapping of a single Virtual Path can effect a route change for every
Virtual Channel using that path.

Now the basic operation of an ATM switch will be the same, whether it is
handling a virtual path or a virtual circuit. The switch must identify, on
the basis of the incoming cell's VPI, VCI, or both, the output port to
which a cell received on a given input port should be forwarded. It must
also determine what the new VPI/VCI values are on this output link,
substituting these new values in the cell.

The selection of which switch output port a given input VPI/VCI should be
mapped to is done at the time the call is set up, and is part of the
overall call routing algorithm. The port to be used depends on what other
switches that port is connected to. Call routing is addressed by protocols
like P-NNI (private network-network interface), recently completed by the
ATM Forum.

The choice of an outbound VPI/VCI value, on the other hand, is partially a
function of the switch architecture, and partially a function of the
interface. The UNI spec dictates which side of a link, user or network,
selects values. The PNNI spec also has rules for this. Within the switch
designated as the one selecting the values, the choice depends on switch
internals (what space does it support, are VPI/VCI spaces on all ports
fully independent, what is the switch software's policy for value reuse,
etc).
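
The per-link translation described above amounts to a table lookup. The
following Python sketch is purely illustrative (the port numbers and
VPI/VCI values are made up, and everything else a real switch does is
ignored), but it shows the (input port, VPI, VCI) to (output port, new
VPI, new VCI) substitution:

# Hypothetical switching table built at call setup time.
# Key:   (input port, VPI, VCI)
# Value: (output port, new VPI, new VCI)
switch_table = {
    (1, 0, 100): (3, 5, 42),
    (2, 7, 33):  (1, 0, 101),
}

def switch_cell(in_port, vpi, vci, payload):
    """Forward one cell: look up the connection and rewrite its VPI/VCI."""
    try:
        out_port, new_vpi, new_vci = switch_table[(in_port, vpi, vci)]
    except KeyError:
        return None   # unknown connection: a real switch would drop and count it
    return out_port, new_vpi, new_vci, payload

print(switch_cell(1, 0, 100, b"48 bytes of payload..."))
# -> (3, 5, 42, b'48 bytes of payload...')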

---------------------------------------------------------------------------
SUBJECT D7)

                  How come an ATM cell is 53 bytes anyway?

ATM cells are standardized at 53 bytes because it seemed like a good idea
at the time! As it turns out, during the standardization process a conflict
arose within the CCITT as to the payload size within an ATM cell. The US
wanted 64 byte payloads because that was felt to be optimal for US networks.
The Europeans and Japanese wanted 32 byte payloads because that was optimal
for them.
In the end 48 bytes was chosen as a compromise. So 48 bytes payload plus 5
bytes header is 53 bytes total.

The two positions were not chosen for similar applications, however. The US
proposed 64 bytes, taking into consideration bandwidth utilization for data
networks and efficient memory transfer (the length of the payload should be
a power of 2 or at least a multiple of 4). 64 bytes fit both requirements.

Europe proposed 32 bytes, taking voice applications into consideration. At
cell sizes >= 152, there is a talker echo problem. Cell sizes between 32
and 152 result in listener echo. Cell sizes <= 32 overcome both problems,
under ideal conditions.

For several years the *near* consensus was 64 octets. France wanted 32
because they figured with 4 ms. cell fill time, they could *just* scrape by
from one end of the country to the other without echo cancellers, while in
the US we need them anyway. So France held its breath, took a few smaller
European countries with them, and demanded that 64 be lowered. Hence the
"split the difference" 48 size. This was at a CCITT SG XVIII meeting ca.
1989.

CCITT chose 48 bytes as a compromise. As far as the header goes, 10% of
payload was perceived as an upper bound on the acceptable overhead, so 5
bytes was chosen.
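
The overhead argument is easy to check with a little arithmetic; the
Python below simply restates the numbers in this section:

header = 5            # bytes of cell header
payload = 48          # bytes of cell payload
cell = header + payload

print(cell)                              # 53 bytes per cell
print(round(header / payload * 100, 1))  # ~10.4% overhead relative to payload
print(round(header / cell * 100, 1))     # ~9.4% of the total cell is header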

---------------------------------------------------------------------------
SUBJECT D8)

                            How does AAL5 work?

Here is a very simplified view of AAL5 and AALs in general. AAL5 is a
mechanism for segmentation and reassembly of packets. That is, it is a
rulebook which sender and receiver agree upon for taking a long packet and
dividing it up into cells. The sender's job is to segment the packet and
build the set of cells to be sent. The receiver's job is to verify that the
packet has been received intact without errors and to put it back together
again.

AAL5 (like any other AAL) is composed of a common part (CPCS) and a service
specific part (SSCS). The common part is further composed of a convergence
sublayer (CS) and a segmentation and reassembly (SAR) sublayer.

+--------------------+
|                    | SSCS
+--------------------+
|        CS          |
| ------------------ | CPCS
|       SAR          |
+--------------------+

SAR segments a higher layer PDU into 48-byte chunks that are fed into the
ATM layer to generate 53 byte cells (carried on the same VCI). The payload
type in the last cell (i.e., wherever the AAL5 trailer is) is marked to
indicate that this is the last cell in a packet. (The receiver may assume
that the next cell received on that VCI is the beginning of a new packet.)

CS provides services such as padding and CRC checking. It takes an SSCS
PDU, adds padding if needed, and then adds an 8-byte trailer such that the
total length of the resultant PDU is a multiple of 48. The trailer consists
of 2 reserved bytes, 2 bytes of packet length, and 4 bytes of CRC.

SSCS is service dependent and may provide services such as assured data
transmission based on retransmissions. One example is the SAAL developed
for signalling. This consists of the following:

+--------------------+
|       SSCF         |
| ------------------ | SSCS
|       SSCOP        |
+--------------------+
|        CS          |
| ------------------ | CPCS
|       SAR          |
+--------------------+

SSCOP is a general purpose data transfer layer providing, among other
things, assured data transfer.

SSCF is a coordination function that maps SSCOP services into those
primitives needed specifically for signalling (by Q.2931). Different SSCFs
may be prescribed for different services using the same SSCOP.

The SSCS may be null as well (e.g. IP-over-ATM or LAN Emulation).

There are two problems that can happen during transit. First, a cell could
be lost. In that case, the receiver can detect the problem either because
the length does not correspond with the number of cells received, or
because the CRC does not match what is calculated. Second, a bit error can
occur within the payload. Since cells do not have any explicit error
correction/detection mechanism, this cannot be detected except through the
CRC mismatch.

Note that it is up to higher layer protocols to deal with lost and
corrupted packets. This can be done by using an SSCS which supports assured
data transfer, as discussed above.
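
Below is a simplified Python sketch of the CPCS padding and SAR
segmentation described above. It is illustrative only: the trailer layout
follows the rough description given here, and the CRC-32 routine is a
generic bit-by-bit implementation whose exact conventions should be checked
against ITU-T I.363.5 before being relied upon.

import struct

def crc32_generic(data, poly=0x04C11DB7):
    """Generic MSB-first CRC-32 (illustrative; verify conventions vs. I.363.5)."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte << 24
        for _ in range(8):
            crc = ((crc << 1) ^ poly) if crc & 0x80000000 else (crc << 1)
            crc &= 0xFFFFFFFF
    return crc ^ 0xFFFFFFFF

def aal5_segment(sdu):
    """Pad the SDU, append the 8-byte CPCS trailer, cut into 48-byte payloads."""
    pad_len = (-(len(sdu) + 8)) % 48           # total CPCS-PDU a multiple of 48
    padded = sdu + b"\x00" * pad_len
    trailer = struct.pack(">HH", 0, len(sdu))  # 2 reserved bytes + 2-byte length
    trailer += struct.pack(">I", crc32_generic(padded + trailer))
    pdu = padded + trailer
    # The ATM layer would mark the PTI of the last cell to delineate the packet.
    return [pdu[i:i + 48] for i in range(0, len(pdu), 48)]

cells = aal5_segment(b"hello, ATM world")
print(len(cells), [len(c) for c in cells])     # -> 1 [48]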

---------------------------------------------------------------------------
SUBJECT D9)

         What are the differences between Q.93B, Q.931, and Q.2931?

Essentially, Q.93B is an enhanced signalling protocol for call control at
the Broadband-ISDN user-network interface, using the ATM transfer mode. The
most important difference is that unlike Q.931 which manages fixed
bandwidth circuit switched channels, Q.93B has to manage variable bandwidth
virtual channels. So, it has to deal with new parameters such as ATM cell
rate, AAL parameters (for layer 2), broadband bearer capability, etc. In
addition, the ATM Forum has defined new functionality such as
point-to-multipoint calls. The ITU-T Recommendation will specify
interworking procedures for narrowband ISDN.

Note that as of Spring 1994, Q.93B has reached a state of maturity
sufficient to justify a new name, Q.2931, for its published official
designation.

---------------------------------------------------------------------------
SUBJECT D10)

                               What is a DXI?

The ATM DXI (Data Exchange Interface) is basically the functional equivalent
of the SMDS DXI. Routers will handle frames and packets but not typically
fragment them into cells; DSUs will fragment frames into cells as the
information is mapped to the digital transmission facility.

The DXI, then, provides the standard interface between routers and DSUs
without requiring a bunch of proprietary agreements. The SMDS DXI is simple
because the router does the frame (SMDS level 3) and the DSU does the cells
(SMDS level 2). The ATM DXI is a little more complicated since it has to
accommodate AAL3/4 and/or AAL5 (possibly concurrently).

---------------------------------------------------------------------------
SUBJECT D11)

                              What is Goodput?

When ATM is used to transport cells originating from higher-level protocols
(HLP), an important consideration is the impact of ATM cell loss on that
protocol or at least the segmentation process. ATM cell loss can cause the
effective throughput of some HLPs to be arbitrarily poor depending on ATM
switch buffer size, HLP congestion control mechanisms, and packet size.

This occurs because, during congestion for example, an ATM switch buffer
can overflow, which will cause cells to be dropped from multiple packets,
ruining each such packet. The preceding and the remaining cells from such
packets, which are ultimately discarded by the frame reassembly process in
the receiver, are nevertheless transmitted on an already congested link,
thus wasting valuable link bandwidth.

The traffic represented by these "bad" cells may be termed as BADPUT.
Correspondingly, the effective throughput, as determined by those cells
which are successfully recombined at the receiver, can be termed as
GOODPUT.

One method of increasing the efficiency of AAL5 traffic carried over ATM
is to drop all remaining cells of a given packet if one of its cells is
lost. This functionality is sometimes referred to as "early packet drop."
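
A toy calculation along these lines (Python, with made-up numbers) shows
how a few lost cells can waste far more than their own bandwidth once
whole packets must be discarded:

cells_per_packet = 200        # e.g. roughly a 9 KB packet over AAL5 (made up)
packets_sent = 1000
packets_hit_by_loss = 50      # packets that lost at least one cell

total_cells = packets_sent * cells_per_packet
good_cells = (packets_sent - packets_hit_by_loss) * cells_per_packet
bad_cells = total_cells - good_cells   # transmitted but discarded at reassembly

print("goodput fraction:", good_cells / total_cells)   # 0.95
print("badput  fraction:", bad_cells / total_cells)    # 0.05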

---------------------------------------------------------------------------
SUBJECT D12)

                       Questions about LAN Emulation

Question: What is the ATM Forum's LAN Emulation all about?

Answer: The ATM Forum has published their LAN Emulation (LANE) V1.0
specification. Reference that spec for complete details. Here's the basics
on the requirements and general approach.

The organizations who worked on it thought LANE would be needed for two key
reasons:

  1. Allow an ATM network to be used as a LAN backbone for hubs, bridges,
     switching hubs (also sometimes called Ethernet switches or Token Ring
     switches) and the bridging feature in routers.

  2. Allow endstations connected to "legacy" LANs to communicate through a
     LAN-to-ATM hub/bridge/switch with an ATM-attached device (a file
     server, for example) without requiring the traffic to pass through a
     more complex device such as a router. Note that the LAN-attached
     device has a conventional, unchanged protocol stack, complete with MAC
     address, etc.

LANE does not replace routers or routing, but provides a complementary
MAC-level service which matches the trend to MAC-layer switching in the
hubs and wire closets of large LANs.

LANE defines the three main areas required to emulate 802 LANs
(connectionless, broadcast/multicast, 802 hardwired MAC addresses) over ATM
networks (connection-oriented, point-to-point, network-defined
telephone-like addresses).

LANE specifies:

  1. The address resolution procedures and protocols used to first discover
     the ATM address that corresponds to a given MAC station address
     (whether the station is directly ATM-attached, or sitting behind an
     Ethernet/ATM device) and then to set up a virtual circuit between the
     end points (or to the Ethernet/ATM device in front of the Ethernet end
     station).
  2. The protocols and procedures to send broadcast and multicast 802
     packets over the network, using a LANE server with point-to-point
     circuits inbound and point-to-multipoint circuits back out to the
     clients.
  3. Same for how to "flood" (bridging term) packets across ATM, through
     Ethernet/ATM devices to reach Ethernet end stations, even those which
     have not sent a packet yet (thus making the Ethernet switch aware of
     them).
  4. The packet formats/encapsulations.

LANE also works for Token Ring so substitute Token Ring for Ethernet in the
above.

LANE also defines how an ATM adapter in a host can present an Ethernet or
Token Ring logical interface to the protocol stack above. This enables
applications and LAN protocols which were implemented to run above the
aforesaid Ethernet or TR LANs to operate without change over an ATM
network.

Surf the ATM Forum's WEB site http://www.atmforum.com for the January 1995
back issue of their "53 Bytes" publication. That issue contains a helpful
LANE tutorial.

Question: How does LANE work?

Answer: Here is a brief spew on how LANE works with ATM:

   * LANE Client (LEC) Software resides on End System
   * LANE Server (LES) Software resides on the Switch

At boot time the ATM adapter registers with the local switch and exchanges
management information. The switch provides a prefix to the ATM adapter
which, in combination with the MAC address of the adapter, becomes the ATM
address of the adapter. The switch also provides its own ATM address.

At this point the two ATM addresses are known, so the LEC establishes a
virtual circuit connection (VCC) with the LES.

The LEC registers its ATM/IP/MAC address with the LES and joins the
Emulated LAN. The LES adds the new LEC to the ARP distribution tree.

The LEC now queries the LES for the Broadcast/Unknown Server (BUS) used
for multicast. The LES provides the BUS address. The LEC establishes a VCC
with the BUS and registers its ATM/IP/MAC address with the multicast
distribution tree.

Now we can talk to other end systems by sending an ARP request for the ATM
address to the LES. The LES does a lookup and upon a hit returns the
address. On a miss the LES broadcasts the ARP in hopes that some LEC will
answer. The response is returned by the LES to the originating LEC.

A VCC can now be established between the two LECs and data can then flow.
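
A very rough sketch, in Python, of the LES-side address resolution step
described above (the class, addresses, and callback interface are all
hypothetical; real LANE uses LE_ARP messages over the control VCCs):

class LaneServerSketch:
    """Toy LES: keeps MAC -> ATM address registrations and answers lookups."""
    def __init__(self):
        self.registrations = {}   # MAC address -> ATM address
        self.clients = []         # callbacks standing in for the ARP dist. tree

    def register(self, mac, atm_addr, client_cb):
        self.registrations[mac] = atm_addr
        self.clients.append(client_cb)

    def le_arp(self, target_mac):
        """Return the ATM address on a hit; otherwise ask every client."""
        if target_mac in self.registrations:
            return self.registrations[target_mac]
        answers = [cb(target_mac) for cb in self.clients]
        return next((a for a in answers if a), None)

les = LaneServerSketch()
les.register("00:aa:bb:cc:dd:01", "47.0005...01", lambda mac: None)
print(les.le_arp("00:aa:bb:cc:dd:01"))   # direct hit
print(les.le_arp("00:aa:bb:cc:dd:02"))   # miss, no proxy answered -> None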

---------------------------------------------------------------------------
SUBJECT D13)

            Questions about the Classical IP over ATM approach.

Question: Where can I find out about Classical IP over ATM?

Answer: RFC1483 defines the encapsulation of IP datagrams (or other
protocols) directly in AAL5.

Classical IP and ARP over ATM, defined in RFC1577, is targeted towards
making IP run over ATM in the most efficient manner, utilizing as many of
the facilities of ATM as possible. It considers the application of ATM as a
direct replacement for the "wires" and local LAN segments connecting IP
end-stations and routers operating in the "classical" LAN-based paradigm. A
comprehensive document, RFC1577 defines the ATMARP protocol for logical IP
subnets (LISs). Within an LIS, IP addresses map directly into ATM Forum UNI
3.0 addresses. For communicating outside a LIS, an IP router must be used -
following the classical IP routing model. Reference RFC1577 for a full
description of this approach.

For a tutorial/reference, a set of slides by Grenville Armitage presented
at Interop 95 on the rfc1577 model is available online. The URL is:
HTTP://gump.bellcore.com:8000/~gja/interop95/interop95.html

Question: What is a Logical IP Subnet (LIS) and how does it differ from any
other subnet?

Answer: RFC1577 is the document which defines LIS, but it doesn't make the
concept as obvious as one might wish, although the info is in there in
section 3.

The short answer is that Logical IP subnets are identical, in all
"protocol" aspects, to conventional LAN etc media subnets. The key aspects
that matter in this context are that ATM-attached systems in the same LIS
have the same network numbers and subnet masks, just as on an Ethernet or
other conventional media. Also, two ATM-attached systems not in the same
LIS cannot communicate via RFC1577 except through a router, even though
they are both attached to the same ATM physical network, with ATM-level
connectivity available (PVC or SVC) between them.
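
The "same LIS" test is just the usual IP subnet comparison. A small Python
check using only the standard library (the addresses below are examples,
not taken from the RFC):

import ipaddress

def same_lis(ip_a, ip_b, prefix_len):
    """Two ATM-attached hosts are in the same LIS if they share the IP subnet."""
    net_a = ipaddress.ip_network(f"{ip_a}/{prefix_len}", strict=False)
    net_b = ipaddress.ip_network(f"{ip_b}/{prefix_len}", strict=False)
    return net_a == net_b

print(same_lis("192.0.2.10", "192.0.2.200", 24))   # True: direct ATMARP + VC
print(same_lis("192.0.2.10", "198.51.100.7", 24))  # False: must go via a router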

This second limitation was a significant factor in the creation of RFC1577.
The issues of "cut-through routing", or communications between two systems
in different IP subnets on a common ATM network (as well as other
connection-oriented networks) were found to be complex, and there was a
desire to define at least the standard or "Classical" means of running IP
over ATM before all those issues were resolved.

RFC 1932, the IP over ATM: A Framework Document, has more overview info on
these basic issues.

---------------------------------------------------------------------------
SUBJECT D14)

          What is the difference between a PVC, Soft PVC, and SVC?

First let's define the three terms: PVC, Soft PVC, and SVC.

A PVC in the usual meaning is a VC that is not signaled by the end points.
Both of the endpoint (user) VC values are manually provisioned. The
link-by-link route through the network is also manually provisioned. If any
equipment fails, the PVC is down, unless the underlying physical network
(sonet, for example) can re-route below ATM. So a PVC is a VC which is
statically mapped at every point in the ATM network. A failure of any link
that a PVC crosses results in the failure of the PVC.

A Soft PVC also has manually provisioned endpoint (user) VC values (which
as defined above do not change), but the route through the network can be
automatically revised if there is a failure. Historically this feature
pretty much required a single-vendor network. A vendor may employ signaling
(invisibly to the endpoints) within the network, or may just have a
workstation somewhere sending proprietary configuration commands when it
detects a failure. However, the PNNI 1.0 spec defines a standard way of
doing this which does not require a vendor proprietary solution. So a Soft
PVC is a VC that is programmed to be present at all times (like a PVC), but
does not use static routes to determine its path through the ATM network.
Failure of a link causes a Soft PVC to route around the outage and remain
available.

A SVC is established by UNI signalling methods. So an SVC is a demand
connection initiated by the user. If a switch in the path fails, the SVC is
broken and would have to be reconnected.

Summarizing, the difference between a PVC and a Soft PVC is that a Soft PVC
will be automatically rerouted if a switch or link in the path fails. From
that perspective a Soft PVC is considered more robust than a simple PVC.

The difference between a SVC and a Soft PVC is that a SVC is established on
an "as needed" basis through user signalling. With a Soft PVC the called
party cannot drop the connection.

---------------------------------------------------------------------------
SUBJECT D15)

                       ATM Physical Level Questions.

Question: What's the difference between SONET and SDH?

Answer: SONET and SDH are very close, but with just enough differences that
they don't really interoperate. Probably the major difference between them
is that SONET is based on the STS-1 at 51.84 Mb/s (for efficient carrying
of T3 signals), and SDH is based on the STM-1 at 155.52 Mb/s (for efficient
carrying of E4 signals). As such, the way payloads are mapped into these
respective building blocks differ (which makes sense, given how the
European and North American PDHs differ). Check the September 1993 issue of
IEEE Communications Magazine for an overview article on SONET/SDH.

The following table shows how the US STS and the European STM levels
compare:

US        Europe       Bit Rate (total)

STS-1      --            51.84 Mb/s
STS-3     STM-1         155.52 Mb/s
STS-12    STM-4         622.08 Mb/s
STS-24    STM-8        1244.16 Mb/s
STS-48    STM-16       2488.32 Mb/s
STS-192   STM-64       9953.28 Mb/s

From a formatting perspective, however, OC-3/STS-3 != STM-1 even though the
rate is the same. SONET STS-3c (i.e., STS-3 concatenated) is the same as
SDH STM-1, followed by STS-9c = STM-3c, etc.

There are other minor differences in overhead bytes (different places,
slightly different functionality, etc), but these shouldn't provide many
problems. By the way, most physical interface chips that support SONET also
include a STM operation mode. Switch vendors which use these devices could
then potentially support STS-3 and STM-1 for example. For anyone
interested, there is an ANSI T1 document which reports on all the
differences between SONET and SDH, and proposals to overcome them.
(Document T1X1.2/93-024R2). It's available at ftp.tele.fi in the directory
/atm/ansi, files sonet-sdh-1.ps and sonet-sdh-2.ps
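
The rates in the table are all integer multiples of the 51.84 Mb/s STS-1
building block, which is easy to verify (Python):

STS1_MBPS = 51.84

for n in (1, 3, 12, 24, 48, 192):
    print(f"STS-{n:<3} = {n * STS1_MBPS:9.2f} Mb/s")
# STS-3 corresponds to STM-1, STS-12 to STM-4, and so on (STM-n = STS-3n).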

Question: How does a receiver know where the boundaries between cells are?

Answer: On finding boundaries between cells, called "cell delineation" in
the stds docs: in addition to a Header Error Check scan to search for valid
CRCs, in some physical layers cells have a known relationship to the PHY
structure. With some PHYs, the cells are byte-aligned with the underlying
structure; with others, the alignment may be nibble or even bit (i.e., no
alignment at all). The so-called TAXI phy, now fading towards the sunset,
does use special codes in a 4B/5B encoding to mark the beginning of a cell,
etc., but it's the exception.

In any case, since with most PHYs, cells are continuously arriving back to
back (idle or unassigned cells are filled in by the transmitter if there is
no data-carrying cell in the slot), it only takes a few cell times to sync
up, and it's not too hard to maintain "cell sync" at the receiver.

Most of the PHY specs are online at the ATM Forum's web site. The first few
PHY (SONET/SDH, DS-3, TAXI) specs were included in the UNI 3.0/3.1 spec;
later ones (and there's a lot of them!) are in their own docs.

---------------------------------------------------------------------------
SUBJECT D16)

                                What is ABR?

The ATM Forum Traffic Management (TM) subworking group has defined an ATM
service type called ABR which stands for Available Bit Rate. With ABR,
traffic is not characterized using peak cell rate, burst tolerance, and so
on, and bandwidth reservations are not made. Instead traffic is allowed
into the network, throttled by a flow-control type mechanism. The idea is to
provide fair sharing of network bandwidth resources.

Competing approaches were intensely studied for quite some time. The debate
included many top folks from industry. Extensive simulation work was done
by (among others) Bellcore, Sandia Labs, NIST and Hughes Network Systems.
Some simulations were done explicitly with TCP/IP traffic sources, although
most used a more generic stochastic model.

The result of all this was the adoption in principle of a "rate-based"
approach known as Enhanced Proportional Rate Control Algorithm (EPRCA). The
term "rate based" means that the paradigm used involves adjustment by the
network of the 'sending rate' of each VC. This is as opposed to a "credit
based" or "windowing" approach, where the network communicates to each
source (VC) the amount of buffer-space available for its use, and the
source refrains from sending unless it knows in advance that the network
has room to buffer the data.

ABR has a Peak Cell Rate, a guaranteed Minimum Cell Rate (per VC), and will
do a fair share of the remaining available bandwidth (the specific
mechanism for determining fair share is left for vendor latitude and
experimentation). So you don't have explicit leaky bucket parameters for
ABR.
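
As a caricature of the rate-based idea (this is not the actual EPRCA or
TM 4.0 source behaviour; the increase/decrease factors below are invented),
a source's allowed cell rate might be adjusted on feedback like this:

def adjust_acr(acr, pcr, mcr, congested, aif=0.0625, rdf=0.0625):
    """Toy rate-based adjustment: gradual increase, multiplicative decrease.

    acr: current allowed cell rate; pcr/mcr: peak/minimum cell rates.
    aif/rdf: invented increase/decrease factors, NOT the TM 4.0 parameters.
    """
    if congested:
        acr = acr * (1.0 - rdf)        # back off on a congestion indication
    else:
        acr = acr + aif * pcr          # ramp up toward the peak otherwise
    return max(mcr, min(pcr, acr))     # never below MCR, never above PCR

acr = 10_000.0
for congested in [False, False, True, False, True]:
    acr = adjust_acr(acr, pcr=100_000.0, mcr=1_000.0, congested=congested)
    print(round(acr))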

Check the ATM Forum "Traffic Management 4.0" specification as well as the
"ABR Addendum" for the complete specification of the ABR service type. The
ATM Forum also had a high level discussion on ABR in the October 1995 issue
of their 53 Bytes publication. Surf their WEB site at:
http://www.atmforum.com/ to access these publications.

There are also several rate-control and flow-control papers in the
March-April 1995 issue of IEEE Network, and in the May 1995 issue of IEEE
Journal on Selected Areas in Communication. Most of the issues were covered
very well.

The essential {CBR, VBR, ABR, UBR} service model itself dates back to Sept
1993 (although those names were not yet attached to the categories, and the
definitions were not explicit):

        Natalie Giroux,
        "Categorization of the ATM Layer QoS and Structure of
        the Traffic Management Work"
        ATM Forum contribution 93-0837, Sept 1993.

Another source of compare/contrast information on ABR and the rate-based
vs. credit-based debate is IEEE Network vol. 9, March/April 1995. There are
three articles concerning the rate-based approach, the credit-based
approach, and finally a merge of the two.

There was also a special issue of Computer Communications Review (April
1995) that covered a lot of the ATM forum work. It contained an excellent
description of the various ABR services as well as an analysis of the ABR
rates at steady state.

---------------------------------------------------------------------------
SUBJECT D17)

                    Questions about VPI/VCI assignment?

Question: With respect to the assignment of VPI/VCIs for an ATM Forum 3.1
or Q.2931 SVC service request, consider two users A and B which will
communicate across a network. Are there really four VPI/VCIs that must be
assigned by the call setup process:

  1. The VPI/VCI A uses to send to B
  2. The VPI/VCI that B will receive from A
  3. The VPI/VCI B uses to send to A
  4. The VPI/VCI that A will receive from B?

Answer: According to the ATM Forum UNI 3.1 specification, User A will
request a VCC via a SETUP message. The Network will either respond with (if
there are no problems) a CALL PROCEEDING message or a CONNECT message. In
either case, it must respond with a Connection Identifier (VPI/VCI) in the
first response to the User (see the section labeled "Connection Identifier
Allocation/Selection - Origination" in the ATM Forum UNI specification).

At the Called User side (B), the Network will allocate a Connection
Identifier (VPI/VCI) for the Called user and indicate it in the SETUP
message sent to the Called User.

In both cases (according to UNI 3.0/3.1) the Network allocates the VPI/VCI.
Also, the VCC can be bidirectional or unidirectional based on how the VCC
was established.

The rationale is simple: it is always the "network" side of the UNI that
allocates all VCCs for communication on that UNI. It is the master and the
"user" is the slave. Hence, the switch always knows which VCCs are
available for use at the UNI. The range of valid VCCs is set up using ILMI.

Q.2931 allows more flexibility. The initiator of the connection over a UNI
(be it "user" or "network") can effectively specify one of the following:

  1. exclusive VPI, exclusive VCI
  2. exclusive VPI, any VCI
  3. any VPI, any VCI

The other side of the UNI must satisfy the desired choice, i.e. for choice
1 it must use the specified VPI/VCI; for choice 2 it may use any VCI within
the specified VPI; for choice 3 it may use any VPI/VCI.

Due to this flexibility, there is the possibility that the initiator of the
connection over a UNI chooses a VPI/VCI value that is not available at the
other side. Q.2931 does not allow negotiation so the other side has no
choice but to release the VCC.

---------------------------------------------------------------------------
SUBJECT D18)

          Specs on how Frame Relay frames get mapped to ATM cells.

There are at least four. One is the mapping defined for Frame Relay/ATM
network interworking as defined in Version 1.1 of the ATM Forum's B-ICI
spec (network interworking allows Frame Relay end users to communicate with
each other over an ATM network). In this case frames are mapped using AAL 5
and the FR-SSCS (Frame Relay specific service-specific convergence
sublayer). Despite the long-winded name, the essentials of the mapping are
quite simple to describe: remove the flags and FCS from a Frame Relay
frame, add the AAL-5 CPCS trailer, and segment the result into ATM cells
using AAL 5 SAR rules. The spec defines additional details such as the
mapping between FECN/BECN/DE in the Frame Relay header and EFCI/CLP bits in
the ATM cell headers.
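
As an illustration of that last point, here is a hedged Python sketch of
how the congestion and discard-eligibility bits might be carried across;
the real mapping rules and their directionality are spelled out in the
B-ICI specification, and the functions below are deliberately simplified:

def fr_to_atm_bits(fecn, de):
    """Simplified forward-direction mapping: FR FECN -> ATM EFCI, FR DE -> CLP."""
    return {"EFCI": fecn, "CLP": de}

def atm_to_fr_bits(efci, clp):
    """Simplified reverse-direction mapping back into the Frame Relay header."""
    return {"FECN": efci, "DE": clp}

print(fr_to_atm_bits(fecn=1, de=0))   # {'EFCI': 1, 'CLP': 0}
print(atm_to_fr_bits(efci=0, clp=1))  # {'FECN': 0, 'DE': 1}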

A second mapping is ATM DXI (data exchange interface) mode 1a. This is not
strictly a Frame Relay to ATM mapping but rather uses an HDLC frame
structure identical to that of Frame Relay frames with a two-byte address
field (i.e. a 10-bit DLCI). The HDLC DXI frame address (called DFA in the
spec) gets stripped off and the 10 bits of the "DLCI" get mapped in a funny
way to the VPI and VCI of the ATM cells. The remainder of the DXI frame
gets an AAL 5 CPCS trailer and is chopped up into cells by standard AAL 5
rules.

A third mapping is used for ATM/Frame Relay service interworking. This
version allows for conversion between the RFC 1490 multiprotocol
encapsulation and the RFC 1483 multiprotocol encapsulation. It uses AAL5
with the RFC 1483 encapsulation within the network. It allows a Frame Relay
user to communicate with a user of a different service (e.g. SMDS/CBDS)
across the ATM network.

A fourth mapping is the FUNI, which is a completely separate standard
ratified by the ATM Forum. It is an extension of the ATM-DXI standard. However
instead of being a local serial interface, it is extended across the wide
area. For more information reference "From Frames to Cells: Low Speed
Access to ATM" in the May 1995 issue of Data Communications.

---------------------------------------------------------------------------
SUBJECT D19)

              What are the meanings of CBR, VBR, ABR, and UBR?

They are service classes defined by the ATM Forum Traffic Management group.
Each class is defined as follows:

  1. CBR (constant bit rate)
     The CBR service class is intended for real-time applications, i.e.
     those requiring tightly constrained delay and delay variation, as
     would be appropriate for voice and video applications. The consistent
     availability of a fixed quantity of bandwidth is considered
     appropriate for CBR service. Cells which are delayed beyond the value
     specified by CTD (cell transfer delay) are assumed to be of
     significantly less value to the application.

     For CBR, the following ATM attributes are specified:
          PCR/CDVT(peak cell rate/cell delay variation tolerance)
          Cell Loss Rate
          CTD/CDV
          CLR may be unspecified for CLP=1.

  2. Real time VBR
     The real time VBR service class is intended for real-time
     applications, i.e., those requiring tightly constrained delay and delay
     variation, as would be appropriate for voice and video applications.
     Sources are expected to transmit at a rate which varies with time.
     Equivalently the source can be described as "bursty". Cells which are
     delayed beyond the value specified by CTD are assumed to be of
     significantly less value to the application. Real-time VBR service may
     support statistical multiplexing of real-time sources, or may provide
     a consistently guaranteed QoS.

     For real time VBR, the following ATM attributes are specified:
          PCR/CDVT
          CLR
          CTD/CDV
          SCR and BT(sustainable cell rate and burst tolerance)

  3. Non-real time VBR
     The non-real time VBR service class is intended for non-real time
     applications which have 'bursty' traffic characteristics and which can
     be characterized in terms of a GCRA. For those cells which are
     transferred, it expects a bound on the cell transfer delay. Non-real
     time VBR service supports statistical multiplexing of connections.

     For non-real time VBR, the following attributes are supported:
          PCR/CDVT
          CLR
          CTD
          SCR and BT

  4. UBR (unspecified bit rate)
     The UBR service class is intended for delay-tolerant or non-real-time
     applications, i.e., those which do not require tightly constrained
     delay and delay variation, such as traditional computer communications
     applications. Sources are expected to transmit non-continuous bursts
     of cells. UBR service supports a high degree of statistical
     multiplexing among sources. UBR service includes no notion of a per-VC
     allocated bandwidth resource. Transport of cells in UBR service is not
     necessarily guaranteed by mechanisms operating at the cell level.
     However it is expected that resources will be provisioned for UBR
     service in such a way as to make it usable for some set of
     applications. UBR service may be considered as an interpretation of the
     common term "best effort service".

     For UBR, the following ATM attributes are specified:
          PCR/CDVT

  5. ABR (available bit rate)
     Many applications have the ability to reduce their information
     transfer rate if the network requires them to do so. Likewise, they
     may wish to increase their information transfer rate if there is extra
     bandwidth available within the network. There may not be deterministic
     parameters because the users are willing to live with unreserved
     bandwidth. Supporting traffic from such sources in an ATM network
     requires facilities different from those for Peak Cell Rate or
     Sustainable Cell Rate traffic. The ABR service is designed to fill
     this need. See section D16 for more ABR information.

See also ATM and Related Acronyms.

Note that the ITU specs have different names for similar service
classes. Here is a mapping as I understand it:

   * Class A is CBR with accurate timing (eg phone calls)
   * Class B is VBR with timing (eg packetised phone calls)
   * Class C is VBR without accurate timing
   * Class D is connectionless VBR without accurate timing
   * Class X is UBR
   * Class Y is ABR
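
For quick reference, the attributes listed above can be collected into a
small table (a Python dictionary here; it simply restates this section and
is not a normative list):

# Traffic attributes specified for each ATM Forum service category,
# as summarized in this section (informal, not normative).
SERVICE_ATTRIBUTES = {
    "CBR":     ["PCR/CDVT", "CLR", "CTD/CDV"],
    "rt-VBR":  ["PCR/CDVT", "CLR", "CTD/CDV", "SCR/BT"],
    "nrt-VBR": ["PCR/CDVT", "CLR", "CTD", "SCR/BT"],
    "UBR":     ["PCR/CDVT"],
    "ABR":     ["PCR", "MCR"],   # plus the feedback mechanism; see section D16
}

for category, attrs in SERVICE_ATTRIBUTES.items():
    print(f"{category:8} {', '.join(attrs)}")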

---------------------------------------------------------------------------
SUBJECT D20)

                       Are VP and VC unidirectional?

This question has been discussed at some length in the past in this group.
Here is one way to look at the situation: each link in the ATM network can
be split into two parts, one in each direction. Each directional sub link
has the entire range of VCCs (pt-pt links can distinguish between
directional data streams). In this context, VCs and VPs can be considered
unidirectional.

However, one always allocates the same VPI/VCI in both directions for a
connection. This may be considered a limitation of the signalling spec or a
simplification.

Nevertheless, there is no constraint that the same bandwidth must be
allocated in both directions. In fact, each direction is an independent
traffic stream and has its own traffic parameters and qos. Some connections
may assign the same parameters to both directions if the traffic flows are
symmetrical but this is certainly no requirement.

Irrespective of all the above, implementation-wise, VPs and VCs must be
bidirectional and some bandwidth must be allocated in both directions in
order to support OAM flows. Maybe this is hidden from the user but it needs
to be done just the same.

---------------------------------------------------------------------------
SUBJECT D21)

                      M4 ATM Mgmt Interface Questions?

Question: With regard to a carrier ATM network, I recently heard mention
of an "M4" management interface.

Answer: The ATM Forum Management WG defines "management information flows"
M1 to M5. A management information flow exchanges information between an
ATM management system and a part of a prototypical ATM network. For
instance, the M2 interface defines the information flow between a private
ATM switch and the local private network management system. The management
information flow includes a conceptual view (requirements) and a MIB.
Ideally the MIB can be used by SNMP or CMIP.

The prototypical ATM network looks something like this:

ATM Device----Private ATM Net----Public ATM Net----Public ATM Net

Note: it may be more clear to mentally replace the word "public" with
"carrier" in all of this discussion.

The prototypical ATM management system is made up of local private
management systems and public management systems. This combination of
management systems, management flows and MIB's is the start of end to end
ATM network management.

                              M3                M5
            _ Private Mgt Sys<-->Public Mgt Sys<-->Public Mgt Sys
           /          ^                ^                 ^
        M1/         M2|              M4|               M4|
         /            v                v                 v
ATM Device----Private ATM Net----Public ATM Net----Public ATM Net

The management information flows relate to the above network:
     M1 = flow between the private management system and the end ATM device
     M2 = flow between the private management system and the switches
          making up the local private ATM net
     M3 = flow between the private management system and the public
          management system
     M4 = flow between the switches in the public ATM network and the
          public management system
     M5 = flow between two public management systems

So the MIB's and information flows of M4 allow a management system within
your ATM carrier to manage the central office and other carrier ATM
switches of their ATM network.

If you are using their services, you wouldn't have direct access to this
information. You would have indirect access to parts of it (read only) via
the M3 interface. For instance, your private management system could query
their public management system to read circuit/path status or counters for
your paths traversing their public network service.

If you were a developer of public-type ATM switches, you would implement
the MIB's associated with M4; plus private MIB extensions. If you were a
management system vendor you might implement M1-3 if you were only
interested in private network management; M3-5 if you were interested in the
management of public networks; M1-5 if you managed both.

---------------------------------------------------------------------------
SUBJECT D22)

                            Questions about QOS.

Question: BISUP does not define an IE or parameter corresponding to the QoS
IE.
For systems adopting only ITU-T series standards there is no problem.
However, for systems adopting other implementation specs., like ATM Forum
UNI v3.1, problems can arise. ATM Forum UNI v3.1 defines 5 kinds of QoS
classes (0~4). When SETUP messages (UNI) are translated into IAM messages
(NNI), the information will be lost.

Answer: When interworking between two types of networks (ATM Forum UNI 3.x
based and ITU based), some information is usually lost. In this case, the
loss is not as significant because there are no universal semantics to QoS
class 1-4. Only QoS class 0 is universally defined as "unspecified" which
basically implies that no qos is associated with the connection. The
specified qos classes 1-4 are network specific, i.e. each network provider
can assign its own semantics to each class. In this situation, interworking
even between two ATMF UNI 3.x networks that use different semantics for
specified qos classes, will require proprietary translation techniques.
Therefore, the use of qos classes 1-4 is not widespread.

Question: Different sources of the same type, like VBR, may have distinct
QoS. Are 5 kinds of QoS class enough to classify all QoS?

Answer: The use of qos classes is being deprecated. Unfortunately, the
parameterized qos did not make it to UNI 4.0, but it will appear in an
addendum soon.

Question: If a user claims the QoS class is one of VBR services but it
provides the PCR parameter only, does CAC treat it as a CBR service or not?

Answer: Currently, qos classes 1-4 are not specified. Not only that, but
the bearer capability is seldom used to determine traffic type. It is the
ATM traffic descriptor IE that generally determines traffic type.
Nevertheless, the UNI spec specifies some allowable combinations of bearer
capability and traffic descriptor (see table F-1, UNI 3.1). For example,
the user may specify bearer class X with traffic type VBR and timing
indication set to none (this would specify non-real time VBR) and may only
specify PCR for CLP=0 and CLP=0+1. This is a legal combination. How the
switch CAC allocates resources for such a connection is not specified.

Question: Do we need fairness between CBR/VBR and the ABR service classes?
I have the impression that the guaranteed QoS traffic classes, i.e. CBR
and VBR, are serviced first, and only if no cells belonging to these
classes are found is ABR class traffic serviced. But if this is the case,
then the ABR class may be starved of service, which can lead to excessive
delays, degradation in QoS, and excessive traffic submission because of
retransmission of packets at higher layers. I don't know whether my
assumptions are right or wrong; please clarify.

Answer: There are in fact two mechanisms that relate to this scenario: the
Call Admission Control (CAC) policy that established the connections, be
they CBR, VBR, or whatever, in the first place, and the policing algorithm
at the network (or switch) ingress.

The cells traveling in the CBR QoS class were designated as CBR at
connection setup time because either the application would not operate
satisfactorily otherwise (e.g., high quality voice traffic, circuit
emulation, ...) or because the user is willing to pay for the consistently
low latency and low cell loss, even for his IP traffic. The resources
(bandwidth, or link cell slots, if you like) are allocated at call setup.
The "owner" of the link has the responsibility to ensure that new CBR calls
are not set up if they would impact the performance of other equally high
priority calls. To make this work, CBR calls must always run at the
designated, agreed-upon rate, otherwise, they are not CBR! The second
assumption, policing, may be used to check that no source is exceeding its
contract, although within a given network this may not be necessary,
practically speaking.
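
To make the CAC idea concrete, here is a toy admission check as a Python
sketch. It is not taken from any standard: real CAC algorithms account for
CDV, buffer sizes, and multiplexing gain, but the simplest CBR case just
checks that the sum of peak rates fits the link. The link rate figure and
helper names are assumptions made for illustration.

    # Toy CBR admission check (illustrative only; not a real CAC algorithm).
    LINK_RATE = 353207   # cells/s, roughly an OC-3c payload (assumed figure)

    def admit_cbr_call(existing_pcrs, new_pcr, link_rate=LINK_RATE):
        """Accept a new CBR call only if all peak rates still fit the link."""
        return sum(existing_pcrs) + new_pcr <= link_rate

    # admit_cbr_call([100000, 150000], 80000)   -> True  (330000 cells/s)
    # admit_cbr_call([100000, 150000], 120000)  -> False (370000 cells/s)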

VBR calls are set up about the same way, with the same CAC policies
governing whether to accept new calls, except that a certain tolerance
around the nominal cell rate is accepted to accommodate somewhat bursty
sources. Again, either the application won't work if the bandwidth contract
is not met, or the user will not be getting the service he paid for.

So, the answer is no, we don't really want to promote ABR cells up into the
CBR/VBR queues, because the goal cannot be fairness across traffic classes
if anyone is to get what they paid for in the higher classes. Consider a
sort-of real-world example: if you are using voice-over-ATM across some
future carrier ATM network, and you actually paid a premium (the usual
voice rates) for the call, you don't really care how many people on the
carrier's Internet service (which by the way runs over the same ATM
switches) are trying to reach the WWW hot site of the week, or how much
delay they suffer. If we used this "promote delayed ABR cells to higher
queues" scheme, then the quality of the voice call goes south in proportion
to the popularity of that hot site. [Check out Peter Newman's paper on
Capitalist and Socialist switching (
http://www.ipsilon.com/~pn/papers/datacomm94.html) for a fun treatment of
this concept.]

The key concept is that trying to deal with fairness only at the cell
scheduling level, without considering CAC and policing, leads to
undesirable network behaviours.

Note, however, that fairness among multiple VC's running ABR is of
considerable interest. Weighted Fair Queuing is one scheme proposed to
offer some minimum level of service even to lower priorities among a group
of different traffic classes, but the weights are likely to be still a
function of CAC so that the service levels can be guaranteed to the top
priorities.
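
As a rough illustration of the scheduling idea described above (strict
priority for the guaranteed classes, with ABR VC's sharing only the
leftover capacity according to CAC-assigned weights), here is a Python
sketch. The queue names, the weights, and the crude weighted round-robin
used in place of a real Weighted Fair Queuing implementation are all
assumptions made for the example.

    from collections import deque

    cbr_q, vbr_q = deque(), deque()                 # guaranteed-class queues
    abr_queues = {"vc1": deque(), "vc2": deque()}   # hypothetical ABR VC's
    abr_weights = {"vc1": 3, "vc2": 1}              # assumed to come from CAC
    abr_order = deque(abr_queues)                   # round-robin cursor

    def schedule():
        """Return the cells to transmit in this scheduling pass."""
        if cbr_q:                       # CBR always served first
            return [cbr_q.popleft()]
        if vbr_q:                       # then VBR
            return [vbr_q.popleft()]
        # Only when CBR/VBR are empty do ABR VC's get the link, each VC
        # sending up to 'weight' cells per pass (crude stand-in for WFQ).
        sent = []
        for _ in range(len(abr_order)):
            vc = abr_order[0]
            abr_order.rotate(-1)
            q = abr_queues[vc]
            for _ in range(abr_weights[vc]):
                if not q:
                    break
                sent.append(q.popleft())
        return sent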

---------------------------------------------------------------------------
SUBJECT D23)

                     Questions about ATM Cell Headers.

Question: Where in the world is the EFCI bit?

Answer: The EFCI bit is in the cell header. Check out the definition of the
PTI field. In essence, the 2nd bit of the PTI is the EFCI bit when the 1st
bit indicates that this is a user cell. PTI mappings:


     PTI                     Meaning

     000  User cell, no congestion encountered, user-to-user indication = 0
     001  User cell, no congestion encountered, user-to-user indication = 1
     010  User cell, congestion encountered, user-to-user indication = 0
     011  User cell, congestion encountered, user-to-user indication = 1
     100  OAM segment associated cell
     101  OAM end-to-end associated cell
     110  Resource management cell
     111  Reserved for future use
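
For the bit-level view, here is a small Python sketch of pulling the PTI
(and hence the EFCI indication) out of a 5-byte UNI cell header. The header
layout assumed is the standard GFC(4)/VPI(8)/VCI(16)/PTI(3)/CLP(1)/HEC(8);
only the PTI meanings come from the table above.

    def parse_pti(header):
        """header: the 5-byte ATM UNI cell header as bytes."""
        octet4 = header[3]               # low nibble of VCI, then PTI, CLP
        pti = (octet4 >> 1) & 0x7        # 3-bit Payload Type Indicator
        clp = octet4 & 0x1               # Cell Loss Priority bit
        user_cell = (pti & 0x4) == 0     # 1st PTI bit 0 => user data cell
        efci = bool(pti & 0x2) if user_cell else None  # 2nd bit = EFCI
        return pti, clp, user_cell, efci

    # Example: PTI = 010 (user cell, congestion encountered)
    # parse_pti(bytes([0x00, 0x00, 0x00, 0x04, 0x00]))
    #   -> (2, 0, True, True)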

---------------------------------------------------------------------------
SUBJECT D24)

                               What is MPOA?

The ATM Forum's Multiprotocol Over ATM (MPOA) subworking group is
developing an approach to support seamless transport of layer 3 protocols
(meaning things like IP and IPX) across ATM networks.
MPOA, operating at layer 2 and 3, will use the ATM Forum LAN Emulation
(LANE) for its layer 2 forwarding. As such, MPOA can be seen as an
evolution beyond LANE.

LANE basically connects together a single legacy LAN subnet across ATM.
MPOA will take this further by allowing direct ATM connectivity between
hosts in different subnets.

The proposed architecture consists of edge devices and route servers. An
edge device (not necessarily user equipment) would forward packets between
the LAN and ATM networks, establishing ATM connections when needed, but
would not be involved directly in routing. Edge devices would query a Route
Server when an unknown host address is encountered. Route Servers would be
able to map a host address into the information needed by the edge device
to establish a connection across the ATM network. That would be the layer 3
address of the optimal exit point from the ATM network as well as the ATM
address of that exit point. Route servers would also be able to forward
packets on to the exit point on behalf of the edge device while they are
establishing their own ATM virtual circuits. (This last part is LANE.)

Some folks will notice that the Route Server address mapping function is
basically the same problem that the Next Hop Resolution Protocol (NHRP) is
addressing.
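
A toy Python sketch of the edge-device behaviour just described: cache
resolved shortcuts, and fall back to default (LANE) forwarding plus a
route-server query on a cache miss. All object and function names here are
invented for illustration; MPOA defines its own message formats and
procedures.

    shortcut_cache = {}    # destination address -> ATM address of exit point

    def forward(packet, dest, route_server, lane_path, svc_table):
        atm_addr = shortcut_cache.get(dest)
        if atm_addr is None:
            lane_path.send(packet)       # default LANE forwarding meanwhile
            # Ask the route server for the exit point's layer 3 and ATM
            # addresses (in reality this exchange is asynchronous).
            exit_l3, exit_atm = route_server.resolve(dest)
            shortcut_cache[dest] = exit_atm
        else:
            # Shortcut known: send over a direct ATM connection to the exit.
            svc_table.connection_to(atm_addr).send(packet)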

---------------------------------------------------------------------------
SUBJECT D25)

              Partial/Early Packet Discard (PPD/EPD) Questions

Question: What is PPD and EPD?

Answer: PPD stands for Partial Packet Discard and EPD stands for Early
Packet Discard. These two are actually ATM cell discard techniques which
maximize "goodput" by taking advantage of the notion that some types of ATM
traffic are made up of large packets that are segmented into a series (or
burst) of ATM/AAL5 cells. This notion holds true for classic IP over ATM
and for LAN emulation (LANE).

These mechanisms work in concert with traffic policing. In a way they are
cleaning up after QoS decisions have been made. If some cells that are part
of a larger packet are dropped for some reason, then why bother sending the
other cells that were part of the same fragmented packet, since that entire
packet will have to be retransmitted anyway? The act of discarding the
remaining cells under these circumstances is called PPD. Now, if all the
cells that result from fragmenting a large packet will not fit into the
available buffer space (and some will be dropped), then why send any of
them at all? Just drop the entire packet (burst of cells), which is called
EPD.

So EPD acts *before* cells belonging to an AAL5 frame are admitted to the
output buffers. If a switch buffer occupancy threshold is exceeded, then
frames are discarded by EPD without even being queued in the output
buffers. On the other hand, PPD acts *after* cells of an AAL5 frame have
been admitted to a buffer. If any one cell of a particular frame is
discarded, then the rest of the cells are also discarded, since the frame
is now errored and will require retransmission anyway.
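
Here is one way the per-VC logic might look, as a Python sketch. The
threshold value, the helper names, and the choice to still queue the last
cell of a discarded frame (so the reassembler can find the frame boundary)
are assumptions; they are not taken from any particular switch or
specification.

    EPD_THRESHOLD = 8000                # assumed buffer occupancy, in cells

    class VcState:
        def __init__(self):
            self.in_frame = False       # inside an AAL5 frame on this VC
            self.discarding = False     # dropping the rest of a frame

    def admit_cell(cell, vc, occupancy, enqueue):
        """enqueue(cell) tries to queue a cell; returns False on overflow."""
        start_of_frame = not vc.in_frame
        end_of_frame = cell.aal5_end_of_frame    # signalled via the PTI
        vc.in_frame = not end_of_frame

        if vc.discarding:                        # EPD or PPD in progress
            if end_of_frame:
                vc.discarding = False
                enqueue(cell)                    # keep the frame boundary
            return

        if start_of_frame and occupancy > EPD_THRESHOLD:
            vc.discarding = not end_of_frame     # EPD: refuse the whole frame
            return

        if not enqueue(cell):                    # overflow mid-frame
            vc.discarding = not end_of_frame     # PPD: drop the rest of it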

Question: PPD/EPD interaction with Traffic Policing?

Answer: One action of traffic policing is to mark (TAG) with CLP=1 those
cells which exceed a VC's specific traffic parameters. As these cells
traverse an ATM network they will be discarded IF congestion occurs at some
place in the network. Implicitly this gives CLP=0 (not TAGged) cells
priority, in that the CLP=1 cells will be dropped first.

It is the result of traffic policing and the operation of CLP tagging that
causes cells to be discarded, which can then trigger EPD/PPD. However, it
is also possible for policing to be doing the right thing and, for example,
not tagging any cells, yet the output queues still become congested and the
need for EPD emerges.

---------------------------------------------------------------------------
SUBJECT D26)

                   Questions about ATM addressing schemes

Question: Why are there multiple ATM addressing schemes?

Answer: According to ATM Forum UNI 3.x and RFC 1577, there are three
structures of ATM address that can identify an end station:

   * 1) E.164
   * 2) NSAP
   * 3) Both

The multiple addressing schemes exist because the various companies
representing switch and service providers could not reach an agreement on
one format, split, more or less, along public network vs. ATM LAN lines. The
way to tell what format to use is to ask your vendor (whether network
service or equipment vendor). Assumptions are risky...

During the ISDN meetings of 1984-1988 there was much discussion in ITU and
ISO regarding NSAPs and E.164. As near as I recall it came down to the idea
that E.164 does not (by itself) constitute an NSAP, but can be part of the
NSAP.

So, if you are just operating on a LAN you would use NSAP but probably not
E.164. If you are operating on an ATM network and only addressing
end-stations (and couldn't care less about OSI) you would be OK with E.164
addressing. Finally, if you are dealing with OSI-based end stations on an
ATM network you would use both: the E.164 part gets you to the end-station
and the NSAP add-on finds the SAP at the Transport Layer.

Question: Where to find info on the encoding of E.164 addresses in NSAP
address?

Answer: In general, the best place to look for answers is ISO 8348, which
is the defining standard for NSAP addresses. Annex A contains the relevant
information, section A.5.3 especially. Some information can also be found
in section 3.1.1.3 of UNI 4.0.
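
As a concrete illustration (to be checked against ISO 8348 Annex A and UNI
3.1/4.0 rather than trusted), here is a Python sketch of packing an E.164
number into the 20-octet E.164 ATM End System Address format: AFI 0x45, an
8-octet IDI holding the number as 15 BCD digits padded with leading zeros
plus a 0xF filler semi-octet, followed by HO-DSP, ESI and SEL. The default
field values below are placeholders.

    def e164_to_aesa(number, ho_dsp=b'\x00' * 4, esi=b'\x00' * 6, sel=0):
        """Pack an E.164 number into a 20-octet E.164-format AESA (sketch)."""
        digits = number.lstrip('+').rjust(15, '0') + 'F'   # 16 semi-octets
        idi = bytes(int(digits[i], 16) << 4 | int(digits[i + 1], 16)
                    for i in range(0, 16, 2))              # BCD encode
        return bytes([0x45]) + idi + ho_dsp + esi + bytes([sel])

    # e164_to_aesa('12025551234')[:9].hex() -> '45000012025551234f'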

---------------------------------------------------------------------------
SUBJECT D27)

                           What are DBR and SBR?

What are the following classes of service:

   * DBR - Deterministic Bit Rate
   * SBR - Statistical Bit Rate

One viewpoint.... DBR and SBR are a serious case of ITU 'Not invented
here'. DBR is a renamed CBR (Constant Bit Rate) class and SBR a renamed VBR
(Variable Bit Rate) class. Now don't ask me why the ITU did this. Granted,
the new names are perhaps 'better' in the sense that they more precisely
describe the characteristics of the class, but still..

...another viewpoint... I don't think there was any 'not invented here'
involved. CBR and VBR refer to the source (cell stream) characteristics,
and DBR and SBR relate to the concept of "ATM Transfer Capabilities"
(ITU-speak) or "service categories" (ATM Forum terminology). As there is
*not* a one-to-one relationship between cell stream characteristics and the
transfer capability used to transport the cells, it would have spawned
(even more) confusion if the same names would have been used for these
different things. DBR and SBR are included in the new version of ITU I.371.
I.371 also includes a traffic class not supported by the ATM Forum, called
ABT (ATM Block Transfer).

---------------------------------------------------------------------------
SUBJECT D28)

                          What is CLP=0+1 all about?

The cell flow in a connection can be logically split into various cell
flows depending on the CLP value of the cell, whether it is 0 or 1.

The following are the cell flows:

   * - CLP=0 cell flow
   * - CLP=1 cell flow
   * - CLP=0+1 cell flow (also called the aggregate cell flow)

The CLP=0+1 cell flow carries both CLP=0 cells and CLP=1 cells. So
logically, a CLP=0 cell travels in the 'CLP=0 cell flow' and the 'CLP=0+1
cell flow', while a CLP=1 cell travels in the 'CLP=1 cell flow' and the
'CLP=0+1 cell flow'.

The connection and cell flows may be represented as follows:

        Connection
            |
            V

    ---------------------------
    ---------------     |
    CLP=0 Cell Flow     |
    ---------------     CLP=0+1 Cell Flow
    ---------------     |
    CLP=1 Cell Flow     |
    ---------------     |
    ---------------------------

To establish a connection we have to specify the Peak Cell Rate (PCR),
Sustainable Cell Rate (SCR), and Maximum Burst Size (MBS), in the forward
and backward directions, for each cell flow. So PCR, SCR, etc. are not
single values for a connection! We must specify these values for the CLP=0,
CLP=1 and CLP=0+1 cell flows. Usually the CLP=0+1 values will be equal to
or more than the sum of the PCR, etc. values of the CLP=0 and CLP=1 cell
flows.

Depending on the type of the connection we need to specify some (not all)
values, specific to certain cell flows only. TM 4.0 clearly specifies which
combinations are valid (in chapter 4). For example, tagging can be selected
only in the VBR.3 conformance definition, in which we specify values for
the CLP=0 and CLP=0+1 cell flows only.

Right now CDVT is not signalled, even in UNI 4.0. Let us say it is picked
from a standard table for a given PCR or SCR value. The cell conformance
test is done for every cell flow separately. Consider a hypothetical
conformance type with the tagging option, in which we must specify values
for the CLP=0 and CLP=0+1 cell flows only, and cell conformance has to be
checked against the PCRs of these cell flows. A CLP=0 cell will be tested
with GCRA(1/PCR0, CDVT0). If it is non-conforming, the cell is
deprioritized by tagging it as CLP=1. The cell is then tested with
GCRA(1/PCR01, CDVT01) to check whether it conforms. Note that at any
further checkpoint this cell will be checked only with GCRA(1/PCR01,
CDVT01), because it is no longer in the CLP=0 cell flow. A cell sent by the
source with CLP=1 is checked only with GCRA(1/PCR01, CDVT01) at any place.

Note: PCR0 and CDVT0 are the PCR and CDVT of the CLP=0 cell flow, and
PCR01 and CDVT01 are the PCR and CDVT of the CLP=0+1 cell flow.
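
A Python sketch of that tagging flow, using the virtual-scheduling form of
the GCRA. The rate and CDVT numbers are invented, and the control flow
follows the hypothetical conformance type in the text (a conforming CLP=0
cell is not re-checked against the CLP=0+1 bucket here), so treat it as an
illustration rather than a TM 4.0 conformance definition.

    class Gcra:
        """GCRA(I, L), virtual scheduling form: I = increment, L = limit."""
        def __init__(self, increment, limit):
            self.I, self.L = increment, limit
            self.tat = 0.0                     # Theoretical Arrival Time

        def conforming(self, t):
            if t < self.tat - self.L:
                return False                   # cell arrived too early
            self.tat = max(t, self.tat) + self.I
            return True

    gcra0  = Gcra(increment=1 / 1000.0, limit=0.002)  # GCRA(1/PCR0,  CDVT0)
    gcra01 = Gcra(increment=1 / 4000.0, limit=0.002)  # GCRA(1/PCR01, CDVT01)

    def police(arrival_time, clp):
        if clp == 0 and gcra0.conforming(arrival_time):
            return 'pass, CLP=0'
        # Non-conforming CLP=0 cells are tagged; CLP=1 cells start here.
        if gcra01.conforming(arrival_time):
            return 'pass, tagged CLP=1' if clp == 0 else 'pass, CLP=1'
        return 'discard'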

---------------------------------------------------------------------------
SUBJECT D29)

                  Connection establishing in the ATM layer

Question: I have not been able to find information about how connections in
the ATM layer of ATM are set up. Since ATM is connection oriented the AAL
somehow must signal to the ATM layer that it wants to have a connection
open to another host. How is this signalling done?

Answer: Actually, it's not the AAL layer that originates the request for a
connection (although if one were a strict believer in network layering, one
might assume so :-). AAL just defines how information of a given type is
packaged for transporting over the ATM network. There is a signalling
protocol (which, by the way, uses AAL5) that involves the end stations plus
any relevant ATM switches along the path.

There are various entities above AAL that could determine a connection is
needed, including the LAN Emulation Client, an IP-ATM end station, a direct
video-over-ATM application, or a human network operator. If the connection
is set up via Switched Virtual Circuits (SVC's), then the protocol used is
most likely Q.2931, previously called Q93B, most commonly referenced via
the ATM Forum's specs:

   * UNI 3.0 (most commonly in use for ATM/data interoperability today),
   * UNI 3.1 (the update for Q.2931 compatibility, no functional changes)
   * UNI 4.0 (approved in 1996)

If the connection is set up by manual means, then the management interface
of your nearby switch is most relevant.

---------------------------------------------------------------------------
SUBJECT D30)

                      Information about B-ISDN and B-ICI

B-ISUP provides the signalling requirements to support basic bearer
services and supplementary services (for Capability Set 1 and Capability
Set 2 B-ISDN) for B-ISDN applications. In the ATM scenario, the
introduction of this protocol meets the needs to support the Switched
Virtual Connections (SVCs), whereas initial ATM service supported only the
Permanent Virtual Connections (PVCs). This protocol is conceptually the
natural evolution of the ISDN User Part (ISUP) in the Broadband field, but
many important changes have been introduced:

   * the substitution of the concept of circuit (identified by the CIC)
     with that of Virtual Path/Virtual Circuit (VP/VC)
   * the substitution of the concept of connection with that of Virtual
     Path Connection (VPC)
   * a new structure of the protocol, which is now modular and, therefore,
     open for future enhancements, in terms of Supplementary Services.
   * the ability to manage point-to-multipoint connections/calls
     (Q.2722).
   * the ability to handle both E.164 and AESA addresses.

B-ISUP runs over this protocol stack:

                SS7 MTP-Level 3
                Q.2140
                Q.SAAL
                ATM

and contains a specific module, called "Compatibility Process", for
managing both unrecognized signalling information and interworking issues
with N-ISUP (Narrowband ISUP, i.e. ISUP).

B-ICI stands for Broadband Inter Carrier Interface and is the broad term
for the interface and B-ISUP stack as described and documented by ATM
Forum.

This is a standard interface (based on the ITU-T B-ISUP) which has been
chosen by both ITU-T and ATM Forum for interconnecting *public* ATM
networks (whereas P-NNI is the standard non-SS7 non-ITU-T based interface
for interconnecting *private* ATM networks).

This protocol takes many features from ANSI B-ISUP (T1.648.1-4), especially
those needed for routing signalling messages through different vendor
networks (like the Exit Message and the Carrier Identification Code, Charge
Number, Carrier Selection Information, Outgoing Facility Identifier,
Originating Line Information parameters).

For an introduction to both ISUP and B-ISUP see

"Signalling System #7",
Travis Russel,
McGraw-Hill

For more references surf the Trillium WEB site at http://www.trillium.com

---------------------------------------------------------------------------



Send corrections/additions to the FAQ Maintainer:
carl@umd5.umd.edu (Carl Symborski)




