Cell-relay FAQ


   
   Subject: comp.dcom.cell-relay FAQ: ATM, SMDS, and related
   technologies (part 2/2)
   Newsgroups: comp.dcom.cell-relay, comp.answers, news.answers
   

Archive-name: cell-relay-faq/part2
Last-modified: 1995/01/29

------------------------------------------------------------------------------
comp.dcom.cell-relay FAQ: ATM, SMDS, and related technologies (Rev 1995/01/29)
Part 2 - Introduction and second half of FAQ
------------------------------------------------------------------------------

Copyright 1995 Carl Symborski

This article is the second of two articles which contain general information
and also answers to some Frequently Asked Questions (FAQ) which are related
to or have been seen in comp.dcom.cell-relay.  This FAQ provides
information of general interest to both new and experienced readers.  It is
posted to the Usenet comp.dcom.cell-relay, comp.answers, and news.answers
news groups every few months.

This FAQ reflects cell-relay traffic through January 1995.


DISCLAIMER - PLEASE READ.

This article is Copyright 1995 by Carl Symborski.  It may be freely
redistributed in its entirety provided that this copyright notice is not
removed.  It may not be sold for profit or incorporated in commercial
documents or CD-ROMs without the written permission of the copyright holder.
Permission is expressly granted for this document to be made available
for file transfer from installations offering unrestricted anonymous file
transfer on the Internet.  This article is provided as is without any express
or implied warranty.  Nothing in this article represents the views of
the University Of Maryland.

If you have any additions, corrections, or suggestions for improvement to
this FAQ, please send them to [email protected].

I will accept suggestions for questions to be added to the FAQ, but
please be aware that I will be more receptive to questions that are
accompanied by answers.  :-)


-----------------------------------------------------------------------------
TOPIC:     D)   ATM TECHNOLOGY QUESTIONS
-----------------------------------------------------------------------------
SUBJECT:  D1)   What are the various ATM Adaptation layers?

        In order for ATM to support many kinds of services with different
traffic characteristics and system requirements, it is necessary to adapt
the different classes of applications to the ATM layer.  This function is
performed by the AAL, which is service-dependent.  Four types of AAL were
originally recommended by CCITT.  Two of these have now been merged
into one.  Also, within the past year a fifth type of AAL has been proposed.

        Briefly, the ATM adaptation layers (AALs) are defined as follows:

AAL1 - Supports connection-oriented services that require constant bit rates
       and have specific timing and delay requirements.  Examples are constant
       bit rate services like DS1 or DS3 transport.

AAL2 - Supports connection-oriented services that do not require constant
       bit rates.  In other words, variable bit rate applications like
       some video schemes.

AAL3/4 - This AAL is intended for both connectionless and connection-oriented
       variable bit rate services.  Originally two distinct adaptation layers,
       AAL3 and AAL4, they have been merged into a single AAL, which is named
       AAL3/4 for historical reasons.

AAL5 - Supports connection-oriented variable bit rate data services.  It is
       a substantially leaner AAL compared with AAL3/4, at the expense of
       error recovery and built-in retransmission.  This tradeoff provides
       a smaller bandwidth overhead, simpler processing requirements, and
       reduced implementation complexity.  Some organizations have proposed
       AAL5 for use with both connection-oriented and connectionless services.

A recent document which describes these (except AAL2) with frame formats is:
"Asynchronous Transfer Mode (ATM) and ATM Adaptation Layer (AAL) Protocols
Generic Requirements",  Bellcore Technical Advisory, TA-NWT-001113, Issue 1,
August 1992.  This can be obtained by writing to:

        Bellcore
        Document Registrar
        445 South Street - Rm. 2J125
        P.O. Box 1910
        Morristown, NJ  07962-1910

SUBJECT:  D2)   Are ATM cells delivered in order?

        Yes.  The ATM standards specify that all ATM cells will be delivered
in order.  Any switch and adaptation equipment design must take this into
consideration.


SUBJECT:  D3)   What do people mean by the term "traffic shaping"?

        Here is an explicit definition of traffic shaping followed by a brief
tutorial.  Note that a variety of techniques have been investigated to
implement traffic shaping.  Reference the literature for keywords such as
"leaky bucket", "congestion", "rate control", and "policing".

Definition:
Traffic shaping is forcing your traffic to conform to a certain
specified behavior.  Usually the specified behavior is a worst case or a
worst case plus average case (i.e., at worst, this application will generate
100 Mbits/s of data for a maximum burst of 2 seconds and its average over
any 10 second interval will be no more than 50 Mbit/s).

Of course, understand that the specified behavior may closely match the
way the traffic was going to behave anyway.  But by knowing precisely
how the traffic is going to behave, it is possible to allocate resources
inside the network such that guarantees about availability of bandwidth
and maximum delays can be given.


Brief Tutorial:
Assume some switches connected together which are carrying traffic.
The problem is to actually deliver the grade of service that has been promised,
and that people are paying good money for. This requires some kind of resource
management strategy, since congestion will be by far the greatest factor
in data loss. You also need to charge enough to cover your costs and make a
profit, but in such a way that you attract customers. There are a number
of parameters and functions that need to be considered:

PARAMETERS
----------
There are lots of traffic parameters that have been proposed for resource
management. The more important ones are:
    mean bitrate
    peak bitrate
    variance of bitrate
    burst length
    burst frequency
    cell-loss rate
    cell-loss priority
    etc. etc.

These parameters exist in three forms:
    (a) actual
    (b) measured, or estimated
    (c) declared (by the customer)

FUNCTIONS
---------
(a) Acceptance Function
-----------------------
Each switch has the option of accepting a virtual circuit request based on
the declared traffic parameters as given by the customer. Acceptance is
given if the resulting traffic mix will not prevent the switch from
achieving its quality of service goals.

The acceptance process is gone through by every switch in a virtual
circuit. If a downstream switch refuses to accept a connection, an
alternate route might be tried.

(b) Policing Function
---------------------
Given that a switch at the edge of the network has accepted a virtual
circuit request, it has to make sure the customer equipment keeps its
promises. The policing function in some way estimates the parameters
of the incoming traffic and takes some action if it measures traffic
exceeding the agreed parameters. This action could be to drop the cells, mark
them as being low cell-loss priority, etc.
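
As a concrete illustration of the "leaky bucket" idea mentioned above, here
is a minimal per-VC policer sketch.  It is only an illustration: the
structure, field names, and the drop-versus-tag decision are assumptions made
for this example, not parameters taken from any particular standard.

/* Hedged sketch of a continuous-state leaky bucket policer for one VC.
 * The bucket drains at one unit per unit of time; each conforming cell
 * adds "increment" (1 / contracted cell rate).  A cell that would
 * overflow the bucket is non-conforming and may be dropped or tagged
 * (e.g. by setting CLP=1). */

typedef struct {
    double level;       /* current bucket fill                       */
    double last_time;   /* arrival time of the previous cell         */
    double increment;   /* added per cell: 1 / contracted cell rate  */
    double limit;       /* burst tolerance                           */
} leaky_bucket;

/* Returns 1 if the cell arriving at time "now" conforms, 0 otherwise. */
int police_cell(leaky_bucket *b, double now)
{
    double drained = b->level - (now - b->last_time);

    if (drained < 0.0)
        drained = 0.0;
    b->last_time = now;

    if (drained + b->increment > b->limit) {
        b->level = drained;             /* non-conforming cell is not counted */
        return 0;                       /* caller drops or tags the cell      */
    }
    b->level = drained + b->increment;  /* conforming */
    return 1;
}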

(c) Charging Function
---------------------
The function most ignored by traffic researchers, but perhaps the most
important for the success of any service! Basically, this function
computes a charge from the estimated and agreed traffic parameters.

(d) Traffic Shaping Function
----------------------------
Traffic shaping is something that happens in the customer premises equipment.
If the Policing function is the policeman, and the charging function is the
judge, then the traffic shaper is the lawyer. The traffic shaper uses
information about the policing and charging functions in order to change
the traffic characteristics of the customer's stream to get the lowest
charge or the smallest cell-loss, etc.

For example, an IP router attached to an ATM network might delay some
cells slightly in order to reduce the peak rate and rate variance without
affecting throughput. An MPEG codec that was operating in a situation
where delay wasn't a problem might operate in a CBR mode.



SUBJECT:  D4) * What is happening with signalling standards for ATM?

        The Signaling Sub-Working Group (SWG) of the ATM Forum's Technical
Committee has completed its implementation agreement on signaling at the
ATM UNI (summer 1993).  The protocol is based on Q93B with extensions
to support point-to-multipoint connections.  Agreements on addressing specify
the use of GOSIP-style NSAPs for the (SNPA) address of an ATM end-point
at the Private UNI, and the use of either or both GOSIP-style NSAPs and/or
E.164 addresses at the Public UNI.  The agreements have been documented
as part of the UNI 3.0 specification.

Additionally, the ANSI T1S1 as well as the ITU-T Study Group XI are concerned
with ATM signalling.  In the latter half of 1993 a couple of things happened:
 1) The ITU finally agreed to modify its version of Q93B to align it more
    closely with that specified in the ATM Forum's UNI 3.0 specification.
    The remaining variations included some typos which the ITU Study Group
    found in the Forum's specification.  Also, some problems were solved
    differently.  Aligned yes, but the changes could still cause
    incompatibilities with UNI 3.0.
 2) Given the above, the ATM Forum's signalling SWG decided to modify the
    Forum's specification to close the remaining gap and align it with the
    ITU.  The end result may be declared as errata to UNI 3.0 or defined
    as a UNI 3.1 specification.

The biggest change is with SSCOP.  UNI 3.0 references the draft ITU-T SSCOP
documents (Q.SAAL).  However UNI 3.1 will reference the final ITU Q.21X0
specifications.  These two specifications are *not* interoperable, so there
will be no backwards compatibility between UNI 3.0 and UNI 3.1.  The ATM
Forum UNI 3.1 specification was approved in Fall 1994 and has been distributed
to ATM Forum members.  I suppose it will be available publicly via Prentice
Hall as a companion to the UNI 3.0 book they sell.  Don't know when this
will happen.  (See section C4.)

The ATM Forum also has a Private-NNI SWG.  Their objective is to define an
interface between one Switching System (SS) and another, where each SS is a
group of one or more switches, such that the specification can be applied to
both the switch-to-switch case and the network-to-network cases.  Currently
they are working on "all the world's problems" and thus a P-NNI specification
is still a ways off.  For the interim, the Forum has developed an "Interim
Inter-switch Signalling Protocol" which is now up for final vote.


SUBJECT:  D5)   What are VPI and VCI?

        ATM is a connection-oriented protocol and as such there is a
connection identifier in every cell header which explicitly associates a cell
with a given virtual channel on a physical link.  The connection identifier
consists of two sub-fields, the Virtual Channel Identifier (VCI) and the
Virtual Path Identifier (VPI).  Together they are used in multiplexing,
demultiplexing and switching a cell through the network.  VCIs and VPIs are
not addresses.  They are explicitly assigned at each segment (link between ATM
nodes) of a connection when a connection is established, and remain for the
duration of the connection.  Using the VCI/VPI the ATM layer can
asynchronously interleave (multiplex) cells from multiple connections.
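
To make the header layout concrete, the sketch below decodes the VPI and VCI
from the 5-octet cell header as formatted at the UNI (4-bit GFC, 8-bit VPI,
16-bit VCI, 3-bit PT, 1-bit CLP, 8-bit HEC); at the NNI the GFC bits instead
extend the VPI to 12 bits.  The code and the sample header are illustrative
only.

#include <stdio.h>

struct atm_header {
    unsigned gfc, vpi, vci, pt, clp, hec;
};

/* Decode a UNI-format cell header into its fields. */
static void parse_uni_header(const unsigned char h[5], struct atm_header *out)
{
    out->gfc = h[0] >> 4;
    out->vpi = ((h[0] & 0x0F) << 4) | (h[1] >> 4);
    out->vci = ((h[1] & 0x0F) << 12) | (h[2] << 4) | (h[3] >> 4);
    out->pt  = (h[3] >> 1) & 0x07;
    out->clp = h[3] & 0x01;
    out->hec = h[4];
}

int main(void)
{
    unsigned char hdr[5] = {0x00, 0x00, 0x00, 0x50, 0x00}; /* VPI=0, VCI=5 */
    struct atm_header h;

    parse_uni_header(hdr, &h);
    printf("VPI=%u VCI=%u PT=%u CLP=%u\n", h.vpi, h.vci, h.pt, h.clp);
    return 0;
}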


SUBJECT:  D6)   Why both VPI *and* VCI?

        The Virtual Path concept originated with concerns over the cost of
controlling BISDN networks.  The idea was to group connections
sharing common paths through the network into identifiable units (the Paths).
Network management actions would then be applied to the smaller number of
groups of connections (paths) instead of a larger number of individual
connections (VCIs).  Management here includes call setup, routing, failure
management, bandwidth allocation, etc.  For example, use of Virtual Paths in
an ATM network reduces the load on the control mechanisms because the functions
needed to set up a path through the network are performed only once for all
subsequent Virtual Channels using that path.  Changing the trunk mapping
of a single Virtual Path can effect a route change for every Virtual Channel
using that path.

Now the basic operation of an ATM switch will be the same whether it is
handling a virtual path or a virtual circuit.  On the basis of the incoming
cell's VPI, VCI, or both, the switch must identify the output port to which
a cell received on a given input port should be forwarded.  It must also
determine the new VPI/VCI values for that output link and substitute them
into the cell.
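
A hedged sketch of this per-link translation follows.  A pure VP switch would
match on the VPI alone and carry the VCI through unchanged; the table below
matches the full VPI/VCI pair.  The table layout, names, and linear search are
illustrative only.

#include <stddef.h>

struct xlate_entry {
    unsigned in_vpi, in_vci;        /* identifiers on the incoming link      */
    unsigned out_port;              /* output port to forward the cell to    */
    unsigned out_vpi, out_vci;      /* new identifiers for the outgoing link */
};

/* Look up the cell's VPI/VCI in the input port's table, rewrite the
 * identifiers in place, and return the output port (-1 if no entry). */
int switch_cell(const struct xlate_entry *table, size_t n,
                unsigned *vpi, unsigned *vci)
{
    size_t i;

    for (i = 0; i < n; i++) {
        if (table[i].in_vpi == *vpi && table[i].in_vci == *vci) {
            *vpi = table[i].out_vpi;
            *vci = table[i].out_vci;
            return (int)table[i].out_port;
        }
    }
    return -1;                      /* unknown connection: discard the cell */
}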


SUBJECT:  D7)   How come an ATM cell is 53 bytes anyway?

        ATM cells are standardized at 53 bytes because it seemed like a
good idea at the time!  As it turns out, during the standardization process
a conflict arose within the CCITT as to the payload size within an ATM
cell.  The US wanted 64 byte payloads because it was felt optimal for
US networks.  The Europeans and Japanese wanted 32 byte payloads because it was
optimal for them.  In the end 48 bytes was chosen as a compromise.  So
48 bytes payload plus 5 bytes header is 53 bytes total.

The two positions were not chosen with the same applications in mind, however.
US proposed 64 bytes taking into consideration bandwidth utilization for
data networks and efficient memory transfer (length of payload should be
a power of 2 or at least a multiple of 4). 64 bytes fit both requirements.

Europe proposed 32 bytes taking voice applications into consideration. At
cell sizes >= 152, there is a talker echo problem. Cell sizes between 32-152
result in listener echo. Cell sizes <= 32 overcome both problems, under ideal
conditions.

CCITT chose 48 bytes as a compromise. As far as the header goes, 10% of
payload was perceived as an upper bound on the acceptable overhead, so 5 bytes
was chosen.


SUBJECT:  D8)   How does AAL5 work?

        Here is a very simplified view of AAL5 and AALs in general.
AAL5 is a mechanism for segmentation and reassembly of packets.  That is,
it is a rulebook which sender and receiver agree upon for taking a long
packet and dividing it up into cells.  The sender's job is to segment the
packet and build the set of cells to be sent.  The receiver's job is to
verify that the packet has been received intact without errors and to
put it back together again.

AAL5 (like any other AAL) is composed of a common part (CPCS) and a service
specific part (SSCS). The common part is further composed of a convergence
sublayer (CS) and a segmentation and reassembly (SAR) sublayer.

         +--------------------+
         |                    | SSCS
         +--------------------+
         |        CS          |
         | ------------------ | CPCS
         |       SAR          |
         +--------------------+

SAR segments a higher layer PDU into 48 byte chunks that are fed into
the ATM layer to generate 53 byte cells (carried on the same VCI).  The
payload type in the last cell (i.e., wherever the AAL5 trailer is) is marked
to indicate that this is the last cell in a packet.  (The receiver may
assume that the next cell received on that VCI is the beginning of a
new packet.)

CS provides services such as padding and CRC checking. It takes an SSCS
PDU, adds padding if needed, and then adds an 8-byte trailer such that
the total length of the resultant PDU is a multiple of 48. The trailer
consists of 2 reserved bytes, 2 bytes of packet length, and 4 bytes of CRC.
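
To make the padding rule concrete, here is a small sketch that computes the
pad length and resulting cell count for a given payload length.  The trailer
layout follows the description above; the CRC computation is omitted here
(see D18 for a CRC sketch).

#include <stdio.h>

/* Pad so that payload + pad + 8-byte trailer is a multiple of 48 bytes. */
static unsigned aal5_pad_bytes(unsigned payload_len)
{
    unsigned rem = (payload_len + 8) % 48;
    return rem ? 48 - rem : 0;
}

int main(void)
{
    unsigned len   = 1500;                       /* e.g. an Ethernet-sized packet */
    unsigned pad   = aal5_pad_bytes(len);
    unsigned cells = (len + pad + 8) / 48;       /* one SAR payload per cell      */

    printf("payload %u -> pad %u, %u cells\n", len, pad, cells);
    return 0;
}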

SSCS is service dependent and may provide services such as assured
data transmission based on retransmissions. One example is the SAAL
developed for signalling. This consists of the following:

         +--------------------+
         |       SSCF         |
         | ------------------ | SSCS
         |       SSCOP        |
         +--------------------+
         |        CS          |
         | ------------------ | CPCS
         |       SAR          |
         +--------------------+

SSCOP is a general purpose data transfer layer providing, among other
things, assured data transfer.

SSCF is a coordination function that maps SSCOP services into those
primitives needed specifically for signalling (by Q.2931). Different
SSCFs may be prescribed for different services using the same SSCOP.

The SSCS may be null as well (e.g. IP-over-ATM or LAN Emulation).

There are two problems that can happen during transit.  First, a
cell could be lost.  In that case, the receiver can detect the problem
either because the length does not correspond with the number of cells
received, or because the CRC does not match what is calculated.  Second,
a bit error can occur within the payload.  Since cells do not have any
explicit error correction/detection mechanism, this cannot be detected
except through the CRC mismatch.

Note that it is up to higher layer protocols to deal with lost and
corrupted packets.  This can be done by using a SSCS which supports
assured data transfer, as discussed above.


SUBJECT:  D9)   What are the differences between Q.93B, Q.931, and Q.2931?

        Essentially, Q.93B is an enhanced signalling protocol for call
control at the Broadband-ISDN user-network interface, using the ATM
transfer mode.  The most important difference is that unlike Q.931
which manages fixed bandwidth circuit switched channels, Q.93B has
to manage variable bandwidth virtual channels.  So, it has to deal
with new parameters such as ATM cell rate, AAL parameters (for
layer 2), broadband bearer capability, etc.  In addition, the ATM
Forum has defined new functionality such as point-to-multipoint
calls.  The ITU-T Recommendation will specify interworking
procedures for narrowband ISDN.

Note that as of Spring 1994, Q.93B has reached a state of maturity
sufficient to justify a new name, Q.2931, as its published official
designation.



SUBJECT:  D10)   What is a DXI?

        The ATM DXI (Data Exchange Interface) is basically the functional
equivalent of the SMDS DXI.  Routers will handle frames and packets but not
typically fragment them into cells; DSUs will fragment frames into cells as
the information is mapped to the digital transmission facility.

The DXI, then, provides the standard interface between routers and DSUs
without requiring a bunch of proprietary agreements.  The SMDS DXI is
simple 'cause the router does the frame (SMDS level 3) and the DSU does
the cells (SMDS level 2).  The ATM DXI is a little more complicated
since it has to accommodate AAL3/4 and/or AAL5 (possibly concurrently).



SUBJECT:  D11)   What is Goodput?

        When ATM is used to transport cells originating from higher-level
protocols (HLP), an important consideration is the impact of ATM cell loss
on that protocol, or at least on the segmentation process.  ATM cell loss can
cause the effective throughput of some HLPs to be arbitrarily poor depending
on ATM switch buffer size, HLP congestion control mechanisms, and packet size.

This occurs because, during congestion for example, an ATM switch buffer can
overflow, which will cause cells to be dropped from multiple packets, ruining
each such packet.  The preceding and the remaining cells from such packets,
which are ultimately discarded by the frame reassembly process in the receiver,
are nevertheless transmitted on an already congested link, thus wasting
valuable link bandwidth.

The traffic represented by these "bad" cells may be termed BADPUT.
Correspondingly, the effective throughput, as determined by those cells which
are successfully recombined at the receiver, can be termed GOODPUT.
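
As a rough numeric illustration (with made-up numbers) of goodput versus
badput: if even one cell of a packet is dropped, every other cell of that
packet that crosses the congested link is wasted.

#include <stdio.h>

int main(void)
{
    unsigned cells_per_packet = 32;     /* e.g. ~1500-byte packets over AAL5 */
    unsigned packets_sent     = 100;
    unsigned packets_ruined   = 10;     /* each lost at least one cell       */

    unsigned sent    = cells_per_packet * packets_sent;
    unsigned goodput = cells_per_packet * (packets_sent - packets_ruined);
    unsigned badput  = sent - goodput;

    printf("goodput %u cells, badput %u cells (%.0f%% of link wasted)\n",
           goodput, badput, 100.0 * badput / sent);
    return 0;
}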


SUBJECT:  D12) * What is LAN Emulation all about?

"LAN Emulation" is a work in progress in the ATM Forum.  Their specification
"LAN Emulation over ATM Specification" is currently up for final vote at
this time.  Here's the basics of the requirements and general approach:

The organizations working on it say LAN Emulation is needed for two key reasons:
1) Allow an ATM network to be used as a LAN backbone for hubs, bridges,
switching hubs (also sometimes called Ethernet switches or Token Ring switches)
and the bridging feature in routers.

2) Allow endstations connected to "legacy" LANs to communicate though a
LAN-to-ATM hub/bridge/switch with an ATM-attached device (a file server, for
example) without requiring the traffic to pass through a more complex device
such as a router.  Note that the LAN-attached device has a conventional,
unchanged protocol stack, complete with MAC address, etc.

LAN Emulation does not replace routers or routing, but provides a complementary
MAC-level service which matches the trend to MAC-layer switching in the hubs
and wire closets of large LANs.

The technical approach being discussed in the Forum among companies with
interest and expertise in this area includes the following elements:

1) Multicast/broadcast support
Since almost all LAN protocols depend on broadcast or multicast packet
delivery, an ATM LAN must provide the same service. Ideally, this would use
some sort of multipoint virtual circuit facility.

2) MAC address to ATM address resolution
There are two basic variations being discussed:
a) use an ARP-like protocol to ask for a mapping from MAC address to ATM address
b) send packets to some sort of directory or cache server that sends the
destination ATM address back to the source as a sort of side effect of
delivering the packet.

3) Switched Virtual Circuit Management
It is generally desirable (for scalability, quality of service, etc.) to
set up point-to-point virtual circuits between endpoints that want to
communicate with each other (client to file server, for example) once
the two atm addresses are known.  To make this work in the existing legacy LAN
environment, we don't have the freedom to push knowledge or management of
these virtual circuits up above the MAC level (no protocol changes, remember?)
so the logic to resolve an ATM address and set up a virtual circuit on
demand must be in the LAN Emulation layer.  This would include recognising
when an SVC to another ATM endpoint already existed, so that the same circuit
could be used for other traffic.

4) MAC definition
The actual packet format would be some variant of the packets used on existing
networks.  For example, a packet leaving an Ethernet to go over a virtual
circuit to an ATM-attached file server would probably be carried directly
over AAL5, with some additional control information.


SUBJECT:  D13)   Information about the Classical IP over ATM approach.

        RFC1483 defines the encapsulation of IP datagrams directly in AAL5.
Classical IP and ARP over ATM, defined in RFC1577, is targeted towards making
IP run over ATM in the most efficient manner utilizing as many of the
facilities of ATM as possible.  It considers the application of ATM as a
direct replacement for the "wires" and local LAN segments connecting IP
end-stations and routers operating in the "classical" LAN-based paradigm.
A comprehensive document, RFC1577 defines the ATMARP protocol for logical
IP subnets (LISs). Within an LIS, IP addresses map directly into ATM Forum UNI
3.0 addresses.  For communicating outside a LIS, an IP router must be used -
following the classical IP routing mode.  Reference the RFCs for a full
description of this approach.


SUBJECT:  D14)   Classical IP and LAN/MAC Emulation approaches compared.

        The IETF scheme defines an encapsulation and an address resolution
mechanism.  The encapsulation could be applied to a lot of LAN protocols
but the address resolution mechanism is specifically defined (only) for IP.
Further, the IETF has not (yet) defined a multicast capability.  So, those
protocols which require multicast definitely cannot adapt the IETF scheme for
their own use.

The purpose behind the ATM Forum's LAN-Emulation effort is to allow
existing applications (i.e., layer-3 and above protocol stacks) to
run *with no changes* over ATM.  Thus, the mapping for all protocols
is already defined.  In a PC environment, such applications tend to
run over an NDIS/ODI/etc. interface.  The LAN-Emulation effort aims
to be able to be implementable underneath the NDIS/ODI-type interface.

In contrast to LAN-Emulation, the IETF's scheme will allow IP to make
better use of ATM capabilities (e.g., the larger MTU sizes), and for
unicast traffic will be more efficient than having the additional
LAN-Emulation layer.  However, the Classical draft suggests that IP
multicast (e.g., the MBONE) will have to be tunnelled over ATM; I
suspect this will be less efficient than LAN-Emulation.

For better or worse, I think both are going to be used.  So, vendors
may have to do both.  The worse part is extra drivers (or extra
code in one driver that does both).  The better part is that all existing
LAN applications can use one (LAN Emulation), and over time (as their mapping
to ATM is fully defined) can transition to use the other (IETF Scheme).

I would summarize LAN-Emulation as follows:

The advantage of LAN-Emulation is that the applications don't know
they're running over ATM.  The disadvantage of LAN-Emulation is also
that the applications don't know they're running over ATM.


SUBJECT: D15) * What's the difference between SONET and SDH?

        SONET and SDH are very close, but with just enough differences that
they don't really interoperate. Probably the major difference between them
is that SONET is based on the STS-1 at 51.84 Mb/s (for efficient carrying
of T3 signals), and SDH is based on the STM-1 at 155.52 Mb/s (for efficient
carrying of E4 signals).  As such, the way payloads are mapped into these
respective building blocks differ (which makes sense, given how the European
and North American PDHs differ).  Check the September 1993 issue of IEEE
Communications Magazine for an overview article on SONET/SDH.

The following table shows how the US STS and the European STM levels
compare:

US        Europe       Bit Rate (total)

STS-1      --            51.84 Mb/s
STS-3     STM-1         155.52 Mb/s
STS-12    STM-4         622.08 Mb/s
STS-24    STM-8        1244.16 Mb/s
STS-48    STM-16       2488.32 Mb/s
STS-192   STM-64       9953.28 Mb/s

From a formatting perspective, however, OC-3/STS-3 != STM-1 even though
the rate is the same.  SONET STS-3c (i.e., STS-3 concatenated) is the
same as SDH STM-1, followed by STS-9c = STM-3c, etc.
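
The line rates in the table follow directly from the base rates mentioned
above: STS-n is n times 51.84 Mb/s, and STM-n is n times 155.52 Mb/s (i.e.
STS-3n).  A trivial sketch that reproduces the rate column:

#include <stdio.h>

int main(void)
{
    int sts_levels[] = {1, 3, 12, 24, 48, 192};
    int i;

    for (i = 0; i < 6; i++) {
        int n = sts_levels[i];
        printf("STS-%-4d %8.2f Mb/s", n, n * 51.84);
        if (n % 3 == 0)
            printf("   (= STM-%d)", n / 3);
        printf("\n");
    }
    return 0;
}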

There are other minor differences in overhead bytes (different places,
slightly different functionality, etc), but these shouldn't provide
many problems.  By the way, most physical interface chips that support SONET
also include an STM operation mode.  Switch vendors which use these devices
could then potentially support STS-3 and STM-1 for example.

For anyone interested, there is an ANSI T1 document which reports on all the
differences between SONET and SDH, and proposals to overcome them. (Document
T1X1.2/93-024R2).  Jim Burkitt, the T1X1 Chair, ([email protected]),
noted in September 1994 that folks can get a copy of T1 Technical Report #36
"A Technical Report on A Comparison of SONET (Synchronous Optical NETwork)
and SDH (Synchronous Digital Hierarchy) from:

    ftp   test.t1bbs.org
    in    /pub/techrpts/tr.0/tr-36.zip

It's also available at datanet.tele.fi in the directory /atm/ansi, file
T1X1.2/93-024R2


SUBJECT: D16) * What is ABR?

The ATM Forum Traffic Management (TM) subworking group is working on
the definition of a new ATM service type called ABR which stands for
Available Bit Rate.  Using ABR, traffic is not characterized using
peak cell rate, burst tolerance, etc., and bandwidth reservations are not
made.  Instead traffic is allowed into the network throttled by a flow
control type mechanism.  The idea is to provide fair sharing of network
bandwidth resources.

Competing approaches were intensely studied for quite some time.  The debate
included such figures as H. T. Kung, Raj Jain, and many top folks from
industry.  Extensive simulation work was done by (among others) Bellcore,
Sandia Labs, NIST and Hughes Network Systems.  Some simulations were done
explicitly with TCP/IP traffic sources, although most used a more generic
stochastic model.

The result of all this was the adoption in principle of a "rate-based"
approach known as Enhanced Proportional Rate Control Algorithm (EPRCA).
The term "rate based" means that the paradigm used involves adjustment
by the network of the 'sending rate' of each VC.  This is as opposed
to a "credit based" or "windowing" approach, where the network communicates
to each source (VC) the amount of buffer-space available for its use,
and the source refrains from sending unless it knows in advance that
the network has room to buffer the data.

ABR will have a Peak Cell Rate, a guaranteed Minimum Cell Rate (per
VC), and will do a fair share of the remaining available bandwidth
(the specific mechanism for determining fair share is left for vendor
latitude and experimentation).  So you don't have explicit leaky
bucket parameters for ABR.

There isn't yet a published document which discusses all the traffic
management work that the Forum is working on these days (including
ABR).  There *will* be one, called

        "ATM Forum Traffic Management Specification Version 4.0"

which exists in draft form, and will be finalized in April and issued
in July (which probably means September to be realistic).
In the mean time you can reference:

        M. W. Garrett,
        "ATM Service Architecture: From Applications to Scheduling"
        ATM Forum contribution 94-0846, Sept 1994.

which is "publically", albeit not permanently, available at

        ftp://thumper.bellcore.com/pub/mwg/ATMF.qos.mwg

The essential {CBR, VBR, ABR, UBR} service model itself dates back
to Sept 1993 (although those names were not yet attached to the
categories, and the definitions were not explicit):

        Natalie Giroux,
        "Categorization of the ATM Layer QoS and Structure of
        the Traffic Management Work"
        ATM Forum contribution 93-0837, Sept 1993.



SUBJECT:  D17) + Questions about VPI/VCI assignment?

Q: With respect to the assignment of VPI/VCIs for an ATM Forum 3.1
or Q.2931 SVC service request, consider two users A and B which will
communicate across a network.  Are there really four VPI/VCIs that must be
assigned by the call setup process:
    a) The VPI/VCI A uses to send to B
    b) The VPI/VCI that B will receive from A
    c) The VPI/VCI B uses to send to A
    d) The VPI/VCI that A will receive from B?

A: According to the ATM Forum UNI 3.1 specification, User A will request
a VCC via a SETUP message. The Network will either respond with (if
there are no problems) a CALL PROCEEDING message or a CONNECT
message. In either case, it must respond with a Connection Identifier
(VPI/VCI) in the first response to the User (see the section labeled
"Connection Identifier Allocation/Selection -Origination in the ATM
Forum UNI specification).

At the Called User side (B), the Network will allocate a Connection
Identifier (VPI/VCI) for the Called user and will include it in the SETUP
message sent to the Called User.

In both cases, the Network allocates the VPI/VCI. Also, the VCC
can be bidirectional or unidirectional based on how the VCC was
established.


SUBJECT:  D18) + AAL5 CRC32 Questions

Q: I want to implement functions to encode and decode the CRC for AAL5.  Is
there any written code available from FTP sites?  I am aware of Vince.
Are there any others? Thanks.

A: The AAL5 CRC is the same as the Ethernet CRC.  You can find example
code in many PD or freeware packages such as XModem or Kermit; or by
asking archie about CRC.  You should be aware that while these
examples all use the "look up 1 byte at a time" mode to speed things
up, the programs to build the lookup tables seem to feed the bits in
the opposite order from that which the Ethernet and AAL5 CRCs expect.

Additionally, C. M. Heard ([email protected]) has provided some sample
code including a set of high-endian routines for computing the AAL 5 CRC-32.
This is available, along with a tutorial and associated code concerning the
correction of single-bit header errors, in the cell-relay mailing list
archives.  Check this using the new Cell Relay Retreat web page, on the ATM
Software page as "AAL5 CRC Calculation (C Code)".  This software is also
stored on cell-relay.indiana.edu:/pub/cell-relay/SoftwareSources/crc32h.c
for ftp access.

Q: Does anyone have some test cases for confirming that AAL5 CRC-32
software works on a given machine?

A: There are three examples of valid AAL-5 CS-PDUs in I.363:

/* 40 Octets filled with "0" */
/* CPCS-UU = 0, CPI = 0, Length = 40, CRC-32 = 864d7f99 */
char pkt_data[48]={0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
                   0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
                   0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
                   0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
                   0x00,0x00,0x00,0x28,0x86,0x4d,0x7f,0x99};

/* 40 Octets filled with "1" */
/* CPCS-UU = 0, CPI = 0, Length = 40, CRC-32 = c55e457a */
char pkt_data[48]={0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,
                   0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,
                   0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,
                   0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,
                   0x00,0x00,0x00,0x28,0xc5,0x5e,0x45,0x7a};

/* 40 Octets counting: 1 to 40 */
/* CPCS-UU = 0, CPI = 0, Length = 40, CRC-32 = bf671ed0 */
char pkt_data[48]={0x01,0x02,0x03,0x04,0x05,0x06,0x07,0x08,0x09,0x0a,
                   0x0b,0x0c,0x0d,0x0e,0x0f,0x10,0x11,0x12,0x13,0x14,
                   0x15,0x16,0x17,0x18,0x19,0x1a,0x1b,0x1c,0x1d,0x1e,
                   0x1f,0x20,0x21,0x22,0x23,0x24,0x25,0x26,0x27,0x28,
                   0x00,0x00,0x00,0x28,0xbf,0x67,0x1e,0xd0};
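
For anyone wanting to check an implementation against the I.363 examples
above, here is a minimal, unoptimized sketch.  It is the straightforward
bit-at-a-time form of the CRC-32 (generator polynomial 0x04C11DB7, initial
value all ones, no bit reflection, final complement) computed over the CS-PDU
excluding the 4 CRC octets; this is my understanding of the AAL5 convention
noted in the answer above, and table-driven versions are of course faster.

#include <stdio.h>

static unsigned long aal5_crc32(const unsigned char *buf, int len)
{
    unsigned long crc = 0xFFFFFFFFUL;
    int i, bit;

    for (i = 0; i < len; i++) {
        crc ^= ((unsigned long)buf[i]) << 24;
        for (bit = 0; bit < 8; bit++)
            crc = (crc & 0x80000000UL) ? ((crc << 1) ^ 0x04C11DB7UL)
                                       : (crc << 1);
        crc &= 0xFFFFFFFFUL;            /* keep to 32 bits if long is wider */
    }
    return (crc ^ 0xFFFFFFFFUL) & 0xFFFFFFFFUL;
}

int main(void)
{
    /* First I.363 example: 40 zero octets plus UU=0, CPI=0, Length=0x0028;
     * the expected CRC-32 is 864d7f99. */
    unsigned char pdu[44] = {0};

    pdu[43] = 0x28;
    printf("crc = %08lx (expect 864d7f99)\n", aal5_crc32(pdu, 44));
    return 0;
}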


SUBJECT:  D19) + Specs on how Frame Relay frames get mapped to ATM cells

There are at least three such mappings.  One is the mapping defined
for Frame Relay/ATM network interworking as defined in Version 1.1 of the
ATM Forum's B-ICI spec (network interworking allows Frame Relay end users
to communicate with each other over an ATM network).  In this case frames
are mapped using AAL 5 and the FR-SSCS (Frame Relay specific service-specific
convergence sublayer).  Despite the long-winded name, the essentials of the
mapping are quite simple to describe: remove the flags and FCS from a Frame
Relay frame, add the AAL-5 CPCS trailer, and segment the result into ATM
cells using AAL 5 SAR rules. The spec defines additional details such as
the mapping between FECN/BECN/DE in the Frame Relay header and EFCI/CLP
bits in the ATM cell headers.

A second mapping is ATM DXI (data exchange interface) mode 1a. This is not
strictly a Frame Relay to ATM mapping but rather uses an HDLC frame structure
identical to that of Frame Relay frames with a two-byte address field (i.e.
a 10-bit DLCI).   The HDLC DXI frame address (called DFA in the spec) gets
stripped off and the 10 bits of the "DLCI" get mapped in a funny way to the
VPI and VCI of the ATM cells.  The remainder of the DXI frame gets an AAL 5
CPCS trailer and is chopped up into cells by standard AAL 5 rules.

A third mapping is used for ATM/Frame Relay service interworking.  This
version allows for conversion between the RFC 1490 multiprotocol encapsulation
and the RFC 1483 multiprotocol encapsulation.  However work on this mapping is
not yet finished.  When all is said and done it will use AAL 5 with the
RFC 1483 encapsulation within the network.  It will allow a Frame Relay user
to communicate with a user of a different service (e.g. SMDS/CBDS) across
the ATM network.


-----------------------------------------------------------------------------
TOPIC:     E)   TOPIC: ATM VS. XYZ TECHNOLOGY
-----------------------------------------------------------------------------
SUBJECT:  E1) * How does ATM differ from SMDS?

        SMDS is the Switched Multi-megabit Data Service, a *service* offering
defined by Bellcore.  SMDS provides a datagram service, where a packet has
about a 40-octet header plus up to 9188 octets of data. The packets themselves
may or may not be transported within the network on top of a connection-
oriented ATM service.  SMDS uses E.164 (ISDN) addresses.  Therefore SMDS is
a connectionless packet switched *service*, not a cell-relay service.

However, the SMDS Subscriber Network Interface is currently defined to use
IEEE 802.6 Distributed Queue Dual Bus (DQDB) access across the SMDS
user-network interface.  DQDB itself *is* a form of cell relay.  The lower
layers of SMDS fragment the packets into cells with a 5-octet header and
48-octet payload.  The payload itself has a 2-octet header, 44-octets of data,
plus a 2-octet trailer.  An SMDS cell therefore is nearly identical in form
to an AAL3/4 cell.  Note that while DQDB is used as the access protocol,
either DQDB or AAL3/4 may be used for the switch-to-switch interface.

Several have noted that the point to stress is that SMDS is a *service* rather
than a technology.  As such it can be accessed by multiple protocols, as long
as those protocols support the features of SMDS.  SIP based on 802.6 is one
such protocol.  However, others have been defined and are being used,
including:
  - DXI based (HDLC based)
  - Frame Relay based
  - ATM based

Furthermore, different physical access facilities can be used, including
DS1, E1, DS3, E3, Nx64kbps, Nx56kbps, and SONET/SDH.

Another way to look at SMDS is as an ATM application.  A common approach is to
have an SMDS server in an ATM network, thus creating a connectionless datagram
service over ATM that provides all SMDS service features while utilizing the
benefits of ATM.

One source of (readable) information on SMDS is probably the SMDS
Interest Group (SIG), 480 San Antonio Road, Suite 100, Mountain View,
California 94040, USA; Tel +1 415 962 2590; Fax +1 415 941 0849.
This SIG is in many ways similar to the ATM Forum, and cooperates with
it. Also, there is a European branch known as ESIG which is concerned
with adapting the American SIG documents to fit European network
architectures and regulations. SIG work is mostly based on Bellcore
SMDS TAs and such like, while ESIG aligns with ITU and ETSI standards.

Obviously, Bellcore documentation will be an authoritative SMDS reference.
(Contact Bellcore at (908) 699-5800 or 1-800-521-CORE.)  Additionally there
are SMDS references in section C1 of this FAQ.


SUBJECT:  E2) + What is MTP3/SS7 and how does it relate to ATM?

MTP3 (Message Transfer _Part_ level 3) is the network layer of the SS7
signalling transport system. It routes SS7 signalling messages to public
network nodes by means of Destination Point Codes, and to the appropriate
signalling entity within a node by means of a Service Info Octet.  MTP3 is
specified as part of the Signalling System 7 protocol and is also referred
to as part of the B-ICI interface for ATM.  MTP3 sits between MTP2 and the
user parts (ISUP, TUP, SCCP and TCAP) of the SS7 protocol stack.
B-ISUP is an Application Layer protocol run over MTP3.

MTP3 includes a number of link-protection features, to allow automatic
rerouting of signalling messages around broken signalling transfer
points. It includes certain management functions for congestion control
on signalling links.

The protocol is defined in Q.704, available from ITU.

MTP3 is widely deployed for existing narrowband SS7 networks.  It will
be used for the transport of B-ISUP, but don't expect the document to
mention ATM!  MTP3 assumes it is running over MTP2 (data link protocol),
and the ATM SAAL is specifically designed to mimic this and leave MTP3
unchanged.


-----------------------------------------------------------------------------
TOPIC:     F)   TOPIC: FREELY AVAILABLE REFERENCE IMPLEMENTATIONS
-----------------------------------------------------------------------------
SUBJECT:  F1)   What and where is VINCE?

         Vince is now on record as the first "publicly available" software
source code in the ATM technology area.  This work was carried out by the
Research Networks section of the Center for Computational Science at the
Naval Research Laboratory, with support from the Advanced Research Projects
Agency and NAVSEA.  In the Grand Internet Tradition, these fine folks have
contributed their efforts to the community in support of further research.

VINCE RELEASE 0.6 ALPHA

Vince, the Vendor Independent Network Control Entity, is
publicly available (in source code form) as an
alpha release. Its primary function is to perform ATM
signalling and VC management tasks. It currently includes
a hardware module that allows it to run on Fore ASX-100(tm)
switches.  Other hardware modules are under development.

Vince consists of a core which provides basic ATM network
semantics, and modules to perform specific functions. Modules
included in this release are:

  spans  - module which interoperates signalling and routing
           with the Fore Systems ASX switch and various host interfaces.
           SPANS is (tm) Fore Systems, Inc.

  q93b   - an implementation of signalling as specified in the ATM
           Forum UNI 3.0 document.  The vince core includes sscop
           and aal5 in its protocol library.

  sim    - a software ATM switch or host that can be used to test
           signalling and routing implementations without ATM
           hardware.

  sroute - an address independent VC routing module.

The Vince distribution also contains a driver that uses spans for
signalling and supports the Fore Systems SBA-100 under SunOS(tm).

Work has already begun on a kernel version of Vince, which will
allow ATM Forum UNI signalling for hosts.  Also in development
are SNMP/ILMI support, interdomain routing, and support for other
switches.

The intent is to provide a redistributable framework which
allows for code sharing among ATM protocol developers.

Vince and its architecture document are available for
anonymous ftp at hsdndev.harvard.edu:pub/mankin

A mailing list for Vince developers and users can be joined
by sending mail to [email protected].


-----------------------------------------------------------------------------
TOPIC:     G)   TOPIC: FLAMES AND RECURRING HOLY WARS
-----------------------------------------------------------------------------

         As with any News and/or email list, topics will be raised which
elicit a broad range of viewpoints.  Often these are quite polarized and yield
a chain of replies and counter topics which can span weeks and even months.
Typically without resolution or group consensus.  This section lists some
memorable (lengthy?) topic threads.

PLEASE NOTE that the idea here is not to re-kindle old flames, and not to
somehow pronounce some conclusion.  Rather, recorded here are a few
pieces of the dialogue containing information which might be of some use.


SUBJECT:  G1)   Are big buffers in ATM switches needed to support TCP/IP?

         A recurring theme in 1993 concerned the suitability of ATM to
transport TCP/IP based traffic.  The arguments generally centered around the
possible need for ATM WAN switches to support very large buffers such that
TCP's reactive congestion control mechanism will work.  Points of contention
include: are big buffers needed, if so then where, and what exactly is the
TCP congestion control mechanism.

Undoubtedly, many of these discussions have been fueled by some 1993 studies
which reported that TCP works poorly over ATM because of the cell-discard
phenomenon coupled with TCP's congestion control scheme.

The longest thread on this subject started in the October 1993 timeframe and
ended in December under the subject of "Fairness on ATM Networks".
Generally folks expressed opinions in one of the following postures:

1) Big buffers are not needed at all....

  A few argued that if ATM VC's are provisioned and treated as fixed leased
  lines then ATM will be able to support TCP/IP just fine.  This means that
  you would need to subscribe to the maximum possible burst rate which would
  be very inefficient use of bandwidth since TCP is usually very bursty.

2) Put big buffers in routers and not ATM switches....

  If you are using wide-area links over ATM, then use a router smart enough
  not to violate the Call-Acceptance contract.  The call acceptance function
  should be such that it doesn't let you negotiate a contract that causes
  congestion.  Congestion should only occur when there is a fault in the
  network.  A router is quite capable of smoothing out bursts.  That is what
  they do right now when they operate over leased lines.  The advantage of
  an ATM connection replacing a leased line is that the connection parameters
  can be renegotiated on the fly, so if your IP network (as
  opposed to the ATM network) is experiencing congestion, then it can request
  more bandwidth.

  Supporting this thinking is the notion that for most data networks using ATM
  as their wide-area medium, a router would likely be the access point with
  many TCP connections being concentrated on a given ATM connection.

3) Still others suggest that ATM switches should implement priorities and
   that there should be different buffer sizes allocated per priority.
   The different priorities and associated buffer sizes would support
   traffic separation, trading off cell loss for delay. So for example,
   "voice" traffic could have small buffer sizes and "data" traffic could
   have big buffer sizes.  The switches would then provide the buffering
   necessary to support TCP's reactive congestion control algorithms.

   Some folks argued that this would be "expensive" to implement.  Regardless,
   many new switches being announced in 1993/4 claim to have such priorities
   and buffer size capabilities.

Finally many folks were not clear on the differing TCP reactive congestion
control mechanisms. A quick summary follows:

In the original algorithm, TCP goes into slow-start when a packet loss
is detected.  In slow-start, the window is set to one packet and increased
by one for every acknowledgement received until the window size reaches half
of what it was before the packet was dropped.  You get a series of larger and
larger bursts, but the largest causes only half as many packets to be buffered
as there were before the packet drop occurred.  Once the window size reaches
half of its old value, it is then increased at a much lower rate, 1/(window
size) for each acknowledgement.  This window control algorithm ensures that
the only bursts generated are probably small enough to be no problem.

In the Reno algorithm, the window is halved, so that packets start being sent
in response to acknowledgements again only after half of the old window's worth
of acknowledgements has been received.  Hence there is no "burst" of packets
generated.  The only packets generated are in response to acknowledgements,
and only after half an old window of acknowledgements has been received.
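
As a rough illustration only (not taken from any particular TCP
implementation), the per-acknowledgement window growth described above can be
sketched as follows; "cwnd" and "ssthresh" are counted in packets here for
simplicity, whereas real TCPs keep them in bytes and differ in many details.

static double cwnd     = 1.0;   /* congestion window, in packets              */
static double ssthresh = 16.0;  /* slow-start threshold (half the old window) */

void on_packet_loss(void)
{
    ssthresh = cwnd / 2.0;      /* remember half of the window at the loss  */
    cwnd = 1.0;                 /* original (slow-start-from-one) behaviour */
    /* Reno instead resumes from cwnd = ssthresh, sending new packets only  */
    /* as acknowledgements arrive, so no burst is generated.                */
}

void on_ack_received(void)
{
    if (cwnd < ssthresh)
        cwnd += 1.0;            /* slow-start: roughly doubles each RTT     */
    else
        cwnd += 1.0 / cwnd;     /* congestion avoidance: ~1 packet per RTT  */
}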

For more information check out Van Jacobson's algorithms published in
ACM SIGCOMM 1988.


SUBJECT:  G2)   Can AAL5 be used for connection-less protocols?

         This thread started with questions about whether AAL5 supports
connection oriented or connection-less protocols.  Check the November
and December 1993 archives for the subject: "AAL Type 5 question".

First some background
---------------------
Officially, AAL 5 provides support for adaptation of higher layer connection-
oriented protocols to the connection-oriented ATM protocol.
There was, however, a debate going on claiming that AAL 5 could also be
used to adapt higher layer connectionless protocols to the connection-oriented
ATM protocol.

The whole debate is grounded in a systematic approach of the ITU-T, which
states that all AALs should be classified into four different classes to
minimise the number of AALs required to support any imaginable service.

The classification of the ITU-T is as follows:

+------------------------+-----------+-----------+-----------+-----------+
|                        |  Class A  |  Class B  |  Class C  |  Class D  |
+------------------------+-----------+-----------+-----------------------+
|Timing relation between |        Required       |   Not Required        |
|source and destination  |                       |                       |
+------------------------+-----------+-----------+-----------------------+
| Bit Rate               |  Constant |          Variable                 |
+------------------------+-----------+-----------------------+-----------+
| Connection Mode        |     Connection-Oriented           |Connection-|
|                        |                                   | less      |
+------------------------+-----------------------------------+-----------+

AAL 5 is currently agreed to be in Class C. Some parties at the
standardisation bodies claim that it could be as well in Class D.

At the moment the following mapping between AALs and classes applies:
Class A: AAL 1
Class B: AAL 2
Class C: AAL 3/4, AAL 5
Class D: AAL 3/4

The reason for AAL3/4 in classes C and D is the following:
The ITU-T started to define AAL3 for Class C and AAL 4 for Class D. They
turned out to be identical after long debates.

Reality Check
-------------
The real issue is how to run a connection-less service over ATM which is
inherently connection-oriented.  AALs themselves merely transport higher
layer packets across an ATM virtual circuit.  Connection-less services
are actually provided by higher layer protocols such as CLNAP.  Given
that, there is nothing to prevent folks from using AAL5 to implement
a connection-less communication mode.  This is exactly what the IETF is
doing with IP over ATM, and what the ATM Forum is also doing with
LAN Emulation.

The reality is that these folks expect that AAL5 will be largely used for
connection-less upper layer protocols such as CLNP and IP.  So some
find it strange to have AAL5 classified as an AAL for connection-
oriented services only.

However, from an ITU-T service Class perspective, you must stick strictly to
the view that to call an AAL "Class D" it must support each and every
possible connection-less protocol.  The current agreement in the ITU-T
is that AAL5 can not claim this and so is officially considered a
"Class C" AAL.


SUBJECT:  G3) + How do the ATM layers map to the OSI reference model?

Most people agree that the ATM standards cover 3 distinct layers -- Physical
Layer, ATM Layer, and ATM Adaptation Layer (AAL).

The Physical Layer (corresponding to OSI Physical) is usually taken to be
SONET/SDH (which itself has 4 layers...) but can be other things as well.
The PHY deals with medium-related issues.

The ATM Layer is responsible for creating cells and formatting the cell header
(5 octets).  Some argue that it also corresponds to OSI Physical (it deals
with bit transport) and others say it's OSI Data Link (formatting,
addressing, flow control, etc.).

The AAL is responsible for adapting ATM's cell switching capabilities to the
needs of specific higher layer protocols.  The AAL is responsible for
formatting the cell payload (48 bytes).  Some argue that this layer
corresponds to OSI data link (data error control, above Physical),
others OSI transport (it's end-to-end).

I think that this all proves that the OSI model is an excellent basis for
discussion and comparison but is becoming hopelessly inadequate for
discussing many new services.
