Article 6563 of comp.protocols.tcp-ip:
From: [email protected] (Joachim Carlo Santos Martillo)
Subject: TCP/IP versus OSI
Message-ID: <[email protected]>
Date: 15 Mar 89 12:37:56 GMT
Reply-To: [email protected] (Joachim Carlo Santos Martillo)
Organization: Clearpoint Research Corp., Hopkinton Mass.

The following is an article which I am going to submit to Data
Communications in reply to a column which William Stallings
did on me a few months ago.  I think people in this forum might
be interested, and I would not mind some comments.

                   Round 2 in the great TCP/IP versus OSI Debate

            I. INTRODUCTION

            When ISO  published the first proposal for the ISO reference
            model in  1978, DARPA-sponsored research in packet switching
            for data  communications had  already been  progressing  for
            over 10  years.  The NCP protocol suite, from which the X.25
            packet-switching protocol suite originated, had already been
            rejected as unsuitable for genuine resource-sharing computer
            networks.   The major architectural and protocol development
            for internetting  over the  ARPANET was completed during the
            1978-79 period.  The complete  conversion of DARPA-sponsored
            networks to  internetting occurred  in January,  1983,  when
            DARPA required  all ARPANET  computers to  use TCP/IP. Since
            then, with an effective architecture, with working protocols
            on real networks, researchers and developers within the ARPA
            Internet community  have been  refining computer  networking
            and providing  continually more  resource sharing  at  lower
            costs.  At the same time, with no obvious architecture, with
            theoretical  or   idealized  networks   and  while  actively
            ignoring the  work being  done in the ARPA Internet context,
            the ISO  OSI  standards  committees  were  developing  basic
            remote terminal  and file  transfer protocols.   The ISO OSI
            protocol suite  generally provides  potentially much less at
            much  more   cost  than  the  ARPA  Internet  suite  already
            provides.   No one  should be  surprised that  many computer
            networking system  architects wish  to debate  the merits of
            the OSI  reference model  and that  many relatively  pleased
            business, technical  and academic users of the ARPA Internet
            protocol suite  would like  such a  debate  to  be  actively
            pursued in the media.

           |                                                            |
           |                         Background                         |
           |                                                            |
           |Since June,  1988 William Stallings and I have been engaging|
           |in a guerrilla debate  in the  reader's forum  and  the  EOT|
           |feature on  the technical  and economic merits of OSI versus|
           |ARPANET-style networking.  Enough issues have been raised to|
           |require a  complete article  to continue the discussion. The|
           |debate is  of major interest because managers are now making|
           |strategic decisions  which will affect the development, cost|
           |and functionality  of  corporate  networks  over  the  whole|
           |world.   A valid  approach to  the  debate  deals  with  the|
           |technical,  economic  and  logistic  issues  but  avoids  ad|
           |hominem attacks.  I apologize for those comments in my forum|
           |letter which  might be  construed  as  personal  attacks  on|
           |William Stallings.                                          |
           |                                                            |
           |Since I  have not  yet published  many papers and my book is|
           |only 3/4s  finished, I  should  introduce  myself  before  I|
           |refute the  ideas which Stallings presented in the September|
           |EOT feature.   I am a system designer and implementer who is|
           |a founder and Project Director at Constellation Technologies|
           |which   is    a   Boston-based   start-up   consulting   and|
           |manufacturing  company   specializing  in   increasing   the|
           |performance, reliability  and security of standard low-level|
           |communications technologies   for  any of  the  plethora  of|
           |computer networking environments currently available.       |
           |                                                            |
           |I  am   not  an  "Arpanet  Old  Network  Boy."  My  original|
           |experience is   in  telephony.  I have implemented Signaling|
           |System 6, X.25, Q.921 and Q.931.  During a one-year research|
           |position at  MIT, I  worked on TFTP and helped develop the X|
           |network transparent  windowing protocol.   Later I developed|
           |PC/NTS which  uses IEEE  802.2 Type  2 to  provide  PC-Prime|
           |Series 50  connectivity over IEEE 802.3 (Ethernet) networks.|
           |My partner  Tony Bono  and I  have attended various IEEE and|
           |CCITT  standards-related   committees  in  various  official|
           |capacities.                                                 |

            II. THE DEBATE

            Part of  the problem with debating is the lack of a mutually
            agreeable and  understood set  of concepts in which to frame
            the debate.   I  have yet  to meet a communications engineer
            who had  a sense  of what  a process might be. Having taught
            working  software   and  hardware   engineers   at   Harvard
            University and  AT&T and  having attended  the international
            standards  committees   with  many  hardware,  software  and
            communications  engineers,  I  have  observed  that  overall
            system design  concepts in  computer networking  need a  lot
            more  attention   and  understanding  than  they  have  been
            getting.  Normally in the standardization process, this lack
            of attention would not be serious because official standards
            bodies usually  simply make  official  already  existing  de
            facto standards  like Ethernet  2.0 which had already proven
            themselves.   In the  case of OSI, the ISO committee, for no
            obvious reasons, chose to ignore the proven ARPA Internet de
            facto standard.

           |                                                            |
           |                       Architecture,                        |
           |                 Functional Specification,                  |
           |                    Design Specification                    |
           |                                                            |
           |Nowadays, we read a lot of hype about CASE,  object-oriented|
           |programming techniques and languages designed to facilitate |
           |or ease the development of large software projects.    These|
           |tools generally duck the hardest and most interesting system|
           |design and  development problem  which is  the design  under|
           |constraint of  major systems  which somebody  might actually|
           |want to  buy.   The hype  avoids the real issue that student|
           |engineers are  either simply  not taught  or  do  not  learn|
           |system  design  in  university  engineering  programs.    If|
           |software engineers  generally knew how to produce acceptable|
           |architectures,   functional    specifications   and   design|
           |specifications, the  push for  automatic tools would be much|
           |less. In  fact, the  development of CASE tools for automatic|
           |creation of systems architectures, functional specifications|
           |and design specifications requires understanding exactly how|
           |to produce  proper architectures and specifications.  But if|
           |engineers  knew   how  to  produce  good  architectures  and|
           |specifications for  software, presumably  student  engineers|
           |would   receive    reasonable   instruction   in   producing|
           |architectures and  specifications, and  then there  would be|
           |much less  need for  automatic CASE  tools to produce system|
           |architectures,   functional    specifications   or    design|
           |specifications.                                             |
           |                                                            |
           |Just as  an architectural  description of  a building  would|
           |point  out  that  a  building  is  Gothic  or  Georgian,  an|
           |operating system  architecture  might  point  out  that  the|
           |operating system  is multitasking, pre-emptively time-sliced|
           |with kernel  privileged routines running at interrupt level.|
           |A  system   architecture  would   describe  statically   and|
           |abstractly the  fundamental operating  system entities.   In|
           |Unix, the  fundamental operating system entities on the user|
           |side would  be the  process and  the file.   The  functional|
           |specification  would   describe  the   functionality  to  be|
           |provided  to   the  user   within  the  constraints  of  the|
           |architecture. A functional specification should not list the|
           |function calls used in the system.  The design specification|
           |should specify  the model by which the architecture is to be|
           |implemented to  provide the desired functionality.  A little|
           |pseudocode can  be useful depending on the particular design|
           |specification detail  level.   Data  structures,  which  are|
           |likely to  change many  times during implementations, should|
           |not appear in the design specification.                     |
           |                                                            |
           |Ancillary  documents   which  treat  financial  and  project|
           |management issues  should be  available to  the  development|
           |team.   In all  cases documents  must be  short.  Otherwise,|
           |there is no assurance that all members of the development or|
           |product management  teams will  read  and  fully  comprehend|
           |their documents.   Detail  and verbiage  can be the enemy of|
           |clarity.   Good architectures  and functional specifications|
           |for moderately  large systems  like Unix  generally  require|
           |about 10-20  pages.   A good high-level design specification|
           |for such  a system  would take  about  25  pages.    If  the|
           |documents are  longer, something  may be  wrong.  The key is|
           |understanding what should not be included in such documents.|
           |The  ISO   OSI  documents   generally  violate   all   these|
           |principles.                                                 |

            As a  consequence, the  ISO OSI  committee and  OSI boosters
            have an  obligation to justify their viewpoint in debate and
            technical discussion  with computer  networking experts  and
            system designers.  Unfortunately, the debate over the use of
            OSI versus TCP/IP has so far suffered from three problems:

                 o    a lack of systems level viewpoint,

                 o    a lack of developer insight and

                 o    a hostility toward critical appraisal, either
                      technical or economic, of the proposed ISO
                      OSI standards.

            The following material is an attempt to engage in a critical
            analysis  of  OSI  on  the  basis  of  system  architecture,
            development principles and business economics.  Note that in
            the following article unattributed quotations are taken from
            the itemized  list which Stallings used in EOT to attempt to
            summarize my position.


            The most  powerful system level architectural design concept
            in   modern    computer   networking   is   internetworking.
            Internetworking is practically absent from the OSI reference
            model  which   concentrates  on   layering,  which   is   an
            implementation technique,  and on  the  virtual  connection,
            which  would   be  a   feature  of  a  proper  architecture.
            Internetworking   is good  for the same reason Unix is good.
            The Unix  architects and the ARPA Internet architects, after
            several missteps, concluded that the most useful designs are
            achieved by  first choosing  an effective  computational  or
            application model  for the user and then figuring out how to
            implement this  model  on  a  particular  set  of  hardware.
            Without taking  a position on success or failure, I have the
            impression that  the  SNA  and  VMS  architects  by  way  of
            contrast set  out to  make the  most effective  use of their
            hardware.   As a  consequence both  SNA and  VMS are  rather
            inflexible systems  which are  often rather inconvenient for
            users even  though the  hardware is  often quite effectively
            used.   Of course,  starting from  the user computational or
            application model  does not  preclude eventually  making the
            most  effective   use  of  the  hardware  once  the  desired
            computational or application model has been implemented.

           |                                                            |
           |                      Internetworking                       |
           |                                                            |
           |The internetworking  approach enables  system designers  and|
           |implementers to  provide network users with a single, highly|
           |available,  highly   reliable,   easily   enlarged,   easily|
           |modifiable, virtual network.  The user does not need to know|
           |that this single virtual network is  composed of a multitude|
           |of technologically  heterogeneous wide  area and  local area|
           |networks    with     multiple    domains    of    authority.|
           |Internetworking is  achieved by  means of  a coherent system|
           |level  view  through  the  use  of  an  obligatory  internet|
           |protocol  with   ancillary  monitoring  protocol,  gateways,|
           |exterior/internal gateway  protocols and hierarchical domain|
           |name service.                                               |
           |                                                            |
           |In the  internetworking (not  interworking) approach, if two|
           |hosts are  attached to  the same  physical subnetwork  of an|
           |internetwork,  the  hosts  communicate  directly  with  each|
           |other.   If the  hosts are  attached to  different  physical|
           |subnetworks, the  hosts communicate  via gateways  local  to|
           |each host.   Gateways  understand and learn the internetwork|
           |topology dynamically  at a  subnetwork (not  host) level and|
           |route  data   from  the  source  subnetwork  to  destination|
           |subnetwork on a subnetwork hop by subnetwork hop basis.  The|
           |detail of information required for routing and configuration|
           |is reduced  by orders  of magnitude.   In the ARPA Internet,|
           |gateways  learn   topological  information  dynamically  and|
           |provide reliability  as well  as availability  by performing|
           |alternate routing  of  IP  datagrams  in  cases  of  network|
           |congestion or network failures.                             |
           |                                                            |
           |An authoritative  domain,  within  the  ARPA  Internet,  can|
           |conceal from  the rest of the internetwork a lot of internal|
           |structural detail  because gateways  in other  domains  need|
           |only  know  about  gateways  within  their  own  domain  and|
           |gateways  between  authoritative  domains.    Thus,  logical|
           |subnetworks  of  an  internetwork  may  also  themselves  be|
           |catenets  (concatenated  networks)  with  internal  gateways|
           |connecting  different   physical  subnetworks   within  each|
           |catenet.   For example, to send traffic to MIT, a gateway at|
           |U.C. Berkeley  only need know about gateways between MIT and|
           |other domains  and need  know  nothing  about  the  internal|
           |structure of the MIT domain's catenet.                      |
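            The hop-by-hop, subnetwork-level routing decision described
            above can be sketched roughly as follows. The network
            numbers, routing-table entries and function names are all
            invented for illustration; this is not actual ARPA gateway
            code.

```python
# Sketch of the internetworking routing decision described above.
# All addresses and table entries are invented for illustration.

def same_subnetwork(addr_a, addr_b):
    """Two hosts share a physical subnetwork if their network numbers
    match (here, simply the first octet of the address)."""
    return addr_a.split(".")[0] == addr_b.split(".")[0]

# A gateway's table maps destination *subnetworks* (not hosts) to a
# next-hop gateway -- this is what keeps routing detail small.
ROUTES = {
    "10": "direct",       # our own subnetwork
    "18": "10.0.0.77",    # reach network 18 via this gateway
    "128": "10.0.0.99",   # reach network 128 via this gateway
}

def next_hop(src, dst):
    """Return where to send a datagram for dst, one subnetwork hop at a time."""
    if same_subnetwork(src, dst):
        return dst                      # deliver directly, no gateway
    hop = ROUTES.get(dst.split(".")[0])
    if hop is None:
        raise ValueError("no route to subnetwork of " + dst)
    return dst if hop == "direct" else hop

print(next_hop("10.0.0.5", "10.0.0.9"))   # same subnetwork: direct
print(next_hop("10.0.0.5", "18.4.0.1"))   # different subnetwork: via gateway
```

            The table maps destination subnetworks rather than individual
            hosts, which is what reduces the routing and configuration
            detail by orders of magnitude.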

	    The ARPA  Internet is one realization of the internetworking
            model.   While I am not particularly enamored of some of the
            ARPA protocol  features (nor  of Unix features by the way),1
            the ARPA  Internet works  well with  capacity for expansion.
            SINet  (described   in  "How  to  grow  a  world-class  X.25
            network," Data  Communications, May  1988) is  based on  the
            CSNet subnetwork within the ARPA Internet.

            1 The  use of  local-IP-address, local-TCP-port,  remote-IP-
            address, remote-TCP-port  quadruples to  uniquely identify a
            given TCP  virtual circuit  is an  impediment  to  providing
            greater  reliability  and  availability  for  a  non-gateway
            multihomed host.   An even larger  problem with TCP/IP could
            lie   in    the   possibly   non-optimal   partitioning   of
            functionality between TCP, IP and ICMP.
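            The footnote's point about the quadruple can be illustrated
            with a small sketch (all addresses and ports are invented):
            because the local IP address is part of a connection's
            identity, a non-gateway multihomed host cannot shift an
            established virtual circuit to its other interface.

```python
# Sketch: a TCP virtual circuit is identified by the quadruple
# (local-IP, local-port, remote-IP, remote-port).  All values invented.

connections = {}

def open_connection(local_ip, local_port, remote_ip, remote_port):
    key = (local_ip, local_port, remote_ip, remote_port)
    connections[key] = "ESTABLISHED"
    return key

key = open_connection("10.1.0.2", 1025, "18.26.0.36", 23)

# If the multihomed host's first interface (10.1.0.2) fails, traffic
# sent via its second interface (10.2.0.2) carries a *different*
# quadruple, so it does not match the established circuit:
failover_key = ("10.2.0.2", 1025, "18.26.0.36", 23)
print(failover_key in connections)   # the circuit cannot survive the move
```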

           |                                                            |
           |                       WANs and LANs                        |
           |                                                            |
           |OSI actually  has an  architecture.   Like the  ARPANET, OSI|
           |predicates  the   existence  of   a  communications   subnet|
           |consisting of  communications subnet processors  (or  subnet|
           |switches) and  communications subnet  access processors  (or|
           |access switches).   Access  switches are  also known as IMPs|
           |(Interface Message Processors) or PSNs (Packet Switch Nodes)|
           |in the  ARPANET context.  PSPDN (Packet-Switched Public Data|
           |Network)  terminology  usually  designates  access  switches|
           |simply as  packet switches.  The communication subnet may be|
           |hierarchical and  may contain  adjunct processors other than|
           |subnet and  access switches.   The  internal architecture of|
           |the  communications   subnet  is  quite  distinct  from  the|
           |architecture   presented    to   end-point   hosts.      The|
           |communications subnet may use protocols completely different|
           |from the  protocols used  for communication between two end-|
           |point hosts.   An end-point host receives and transmits data|
           |to its  attached access switch via a subnet access protocol.|
           |The communications subnet is responsible for taking a packet|
           |received at  an access switch and transporting the packet to|
           |the access  switch attached  to  the  destination  end-point|
           |host.   The existence  of such a well-defined communications|
           |subnet is the hallmark of a Wide-Area Network (WAN).        |
           |Unfortunately,  from   the  standpoint  of  making  computer|
           |networking generally and inexpensively available, access and|
           |subnet switches  are expensive  devices to  build which need|
           |fairly complicated  control software.   DECNET  gets  around|
           |some of  these problems  by incorporating the communications|
           |subnet logic  into  end-point  hosts.    As  a  consequence,|
           |customers who  wish to run DECNET typically have to purchase|
           |much more  powerful machines  than they might otherwise use.|
           |For the  situation of  a communications  subnet  which  need|
           |support connectivity  for only  a small number of hosts, LAN|
           |developers  found   a  more   cost  effective   solution  by|
           |developing a  degenerate form  of packet  switches based  on|
           |hardware-logic  packet   filtering  rather   than   software|
           |controlled  packet   switching.    These  degenerate  packet|
           |switches are  installed in the end-point hosts, are accessed|
           |often via  DMA2 as  LAN  controllers  and  are  attached  to|
           |extremely simplified  communications  subnets  like  coaxial|
           |cables.     Direct   host-to-switch   (controller)   access,|
           |degenerate    packet-switching     (packet-filtering)    and|
           |simplified communications  subnets  are  the  distinguishing|
           |features of LANs.                                           |
           |                                                            |
           |While ISO  was ignoring  the whole  internetworking issue of|
           |providing universal  connectivity  between  end-point  hosts|
           |attached to different physical networks within internetworks|
           |composed of  many  WANs  and  even  more  LANs  concatenated|
           |together, and while the IEEE was confusing all the issues by|
           |presenting as an end-to-end protocol a communications subnet|
           |protocol (IEEE  802.2)  based  on  a  communications  subnet|
           |access protocol  (X.25 level 2), the ARPA Internet community|
           |developed an  internet architecture capable of providing the|
           |universal connectivity  and resource sharing which business,|
           |technical and academic users really want and need.          |
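            The "degenerate packet switch" idea above, a controller that
            merely filters frames by address rather than routing them,
            can be sketched as follows (station addresses are invented):

```python
# Sketch of hardware-logic packet filtering in a LAN controller:
# accept a frame only if it is addressed to this station or to the
# broadcast address.  Station addresses are invented.

BROADCAST = "ff:ff:ff:ff:ff:ff"

def accept_frame(my_address, frame_destination):
    """A degenerate packet switch does no routing; it only filters."""
    return frame_destination in (my_address, BROADCAST)

print(accept_frame("08:00:2b:01:02:03", "08:00:2b:01:02:03"))  # ours
print(accept_frame("08:00:2b:01:02:03", "08:00:2b:aa:bb:cc"))  # discarded
print(accept_frame("08:00:2b:01:02:03", BROADCAST))            # broadcast
```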


            2 Some  machines like the Prime 50 Series do not use genuine
            DMA  but  instead  use  inefficient  microcoded  I/O.    IBM
            machines generally  use more  efficient  and  somewhat  more
            expensive internal switching.

            The backbone  of the  ARPA Internet  is the  ARPANET.    The
            ARPANET is  a packet  switched subnetwork  within  the  ARPA
            Internet.  The ARPANET communications subnet access protocol
            is 1822.   CSNet  was set up as an experiment to demonstrate
            that the  ARPA Internet  architecture and suite of protocols
            would function  on a  packet  network  whose  communications
            subnet access  protocol is  X.25.   Using  an  X.25-accessed
            packet network  instead of  an 1822-accessed  packet network
            makes sense  despite  the  glaring  deficiencies  of  X.25,3
            because X.25 controllers are available for many more systems
            than  1822   controllers  and   because   many   proprietary
            networking schemes like SNA and DECNET can use X.25-accessed
            packet networks  but cannot use a packet network accessed by
            1822.

            Yet,  calling  SINet  a  world  class  X.25  network  is  as
            reasonable  as  calling  the  ARPANET  a  world  class  1822
            network.4   Schlumberger has  produced a  world class TCP/IP
            network whose wires can be shared with SNA and DECNET hosts.
            Schlumberger  has   shown  enthusiasm   for  the   flexible,
            effective ARPANET  suite  of  protocols  but  has  given  no
            support in  the  development  of  SINet  to  the  idea  that
            business should prepare to migrate to OSI based networks.

            I  would   be  an   OSI-enthusiast  if  ISO  had  reinvented
            internetworking  correctly.    Unfortunately,  the  ISO  OSI
            reference model which first appeared in 1978 clearly ignored
            all the  ARPA community work on intercomputer networking and
            resource  sharing   which  was   easily  accessible  in  the
            literature of the time.  Instead of building the OSI network
            on an  internetworking foundation,  ISO standardized  on the
            older less  effective  host-to-packet-switch-to-packet-data-
            subnet-to-packet-switch-to-host (NCP)  model  which   DARPA


            3 For  example, X.25 does flow control on the host to packet
            switch connection on the basis of packets transmitted rather
            than on  the  basis  of  consumption  of  advertised  memory
            window.   The exchange  of lots of little packets on an X.25
            connection can  cause continual transmission throttling even
            though the receiver has lots of space for incoming data.
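            The footnote's contrast can be simulated in a sketch (the
            window and packet sizes are invented): a packet-count window
            stalls after a handful of small packets even though buffer
            space remains, while a byte-count window does not.

```python
# Sketch comparing the two flow-control disciplines in the footnote.
# Window sizes and packet sizes are invented for illustration.

PACKET_WINDOW = 8          # X.25-style: at most 8 unacknowledged packets
BYTE_WINDOW = 4096         # memory-window style: advertised buffer in bytes

def packets_sendable_packet_window(packet_sizes):
    """Under a packet-count window, only the first 8 packets may be
    sent, regardless of how little data they carry."""
    return min(len(packet_sizes), PACKET_WINDOW)

def packets_sendable_byte_window(packet_sizes):
    """Under a byte window, send until the advertised buffer is consumed."""
    sent, used = 0, 0
    for size in packet_sizes:
        if used + size > BYTE_WINDOW:
            break
        used += size
        sent += 1
    return sent

tiny_packets = [16] * 100   # 100 packets of 16 bytes each (1600 bytes total)
print(packets_sendable_packet_window(tiny_packets))  # throttled after 8
print(packets_sendable_byte_window(tiny_packets))    # all 100 fit in 4096
```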

            4 Or  as much  sense  as  calling  Ethernet  LANs  DMA-based
            networks because the packet switches (an Ethernet controller
            is a  degenerate case  of a  packet switch)  on the  LAN are
            typically accessed by DMA.

            had abandoned 5 years earlier because of lack of flexibility
            and other problems.

           |                                                            |
           |           Pieces of the ARPA Internet Conceptually         |
           |                                                            |
           |                       (No Graphics)                        |
           |                                                            |

            Nowadays, mostly in response to US vendors and DARPA, pieces
            of the ARPA Internet architecture have resurfaced in the OSI
            reference  model   quite  incoherently   rather  than  as  a
            consequence   of   an   integrated   correct   architectural
            viewpoint.  Connectionless-mode transmission is described in
            ISO/7498/DAD1 which  is an  addendum to  ISO 7498  and not a
            core document.   Because connectionless-mode transmission is
            defined in an addendum, the procedure apparently need not be
            implemented, and  UK GOSIP,  for example, explicitly rejects
            the use  of the  connectionless transmission  mode.      The
            introduction to the 1986 ISO 7498/DAD1 explicitly states, as
            follows, that  ISO was  extremely reluctant to incorporate a
            genuine datagram  based protocol  which could  be  used  for
            internetworking:

                ISO 7498 describes the Reference Model of Open
                Systems Interconnection.  It is the intention of
                that International standard that the Reference
                model should establish a framework for coordinating
                the development of existing and future standards
                for the interconnection of systems.  The assumption
                that connection is a fundamental prerequisite for
                communication in the OSI environment permeates the
                Reference Model and is one of the most useful and
                important unifying concepts of the architecture
                which it describes.  However, since the
                International Standard was produced it has been
                realized that this deeply-rooted connection
                orientation unnecessarily limits the power and
                scope of the Reference Model, since it excludes
                important classes of applications and important
                classes of communication network technology which
                have a fundamentally connectionless nature.

            An  OSI  connectionless-mode  protocol  packet  may  undergo
            something like  fragmentation, but from the literature, this
            form of  segmentation as  used in  OSI  networks  is  hardly
            equivalent to ARPA Internet fragmentation.  Stallings states
            the  following   in  Handbook   of   Computer-Communications
            Standards, the  Open Systems Interconnection (OSI) Model and
            OSI-Related Standards,  on p.  18  (the  only  reference  to
            anything resembling fragmentation in the book).

                Whether the application entity sends data in
                messages or in a continuous stream, lower level
                protocols may need to break up the data into blocks
                of some smaller bounded size.  This process is
                called segmentation.

            Such  a   process  is   not  equivalent   to  ARPA  Internet
            fragmentation.   In the  ARPA Internet  fragmentation is the
            process whereby  the gateway  software operating  at the  IP
            layer converts  a single  IP packet into several separate IP
            packets and  then routes the packets.  Each ARPA IP fragment
            has a  full IP  header.   It is  not obvious  that each  OSI
            segment has a complete packet header. The ARPA fragmentation
            procedure is not carried out by lower protocol layers.   An
            N-layer packet in OSI is segmented at layer N-1  while  the
            packet is routed (relayed) at layer N+1.
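            The ARPA-style fragmentation described above, in which each
            fragment carries a full IP header, can be sketched as
            follows (the header fields are simplified and the addresses
            invented):

```python
# Sketch of ARPA Internet fragmentation as described above: the gateway
# splits one IP packet into several, and *each* fragment carries a full
# copy of the IP header.  Fields are simplified for illustration.

def fragment(packet, mtu):
    """Split packet['data'] into fragments of at most mtu bytes each."""
    header, data = packet["header"], packet["data"]
    fragments = []
    offset = 0
    while offset < len(data):
        chunk = data[offset:offset + mtu]
        fragments.append({
            "header": dict(header,               # full header, copied
                           offset=offset,
                           more_fragments=(offset + mtu < len(data))),
            "data": chunk,
        })
        offset += mtu
    return fragments

packet = {"header": {"src": "10.0.0.5", "dst": "18.4.0.1"},
          "data": b"x" * 1000}
frags = fragment(packet, 400)
print(len(frags))                                      # 400 + 400 + 200
print([f["header"]["more_fragments"] for f in frags])
```

            Because every fragment is itself a complete IP packet, each
            one can be routed independently by subsequent gateways.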

            This partitioning of basic internetworking procedures across
            layer 2  (N-1), layer  3 (N)  and layer 4 (N+1) violates the
            following principles described in ISO/DIS 7498:  Information
            Processing Systems  -- Open Systems Interconnection -- Basic
            Reference Model.

                 P1:  do not create so many layers as to make the system
                      engineering task of describing and integrating the
                      layers more difficult than necessary [ISO uses
                      three layers where one could be used];

                 P2:  create a boundary at a point where the description
                      of services can be small and the number of
                      interactions across the boundary are minimized [by
                      putting per-packet relaying in layer 4 at least
                      two interactions across the boundary are required
                      per packet];

                 P5:  select boundaries at a point which past experience
                      has demonstrated to be successful [the ARPA
                      Internet layering boundaries which combine the
                      addressing, fragmentation and routing in one layer
                      have proven successful];

                 P6:  create a layer where there is a need for a
                      different level of abstraction in the handling of
                      data, e.g. morphology, syntax, semantics
                      [fragmentation, routing, and network addressing
                      all seem quite naturally to be part of network
                      layer semantics, as the ARPA Internet example
                      demonstrates];

                 P9:  allow changes of functions or protocols to be made
                      within a layer without affecting other layers [I
                      would think changing the manner of addressing at
                      layer 3 would affect relaying at layer 4].

            Even if  OSI N-1 segmentation and N+1 relaying could be used
            in the  same way  as fragmentation  and routing  in the ARPA
            Internet,  it   takes  a  lot  more  apparatus  than  simply
            permitting the  use of  the  ISO  connectionless  "internet"
            protocol to achieve internetworking.

            The OSI  documents almost  concede this  point  because  ISO
            7498/DAD 1,  ISO/DIS 8473 (Information Processing Systems --
            Data    Communications    --    Protocol    for    Providing
            Connectionless-Mode Network Service) actually provide for N-
            layer  segmentation  (actually  fragmentation)  and  N-layer
            routing right  in the  network layer  in addition to the OSI
            standard N-1  segmentation and N+1 relaying.  Providing such
            functionality directly  in the  network layer actually seems
            in greater accordance with OSI design principles, but if ISO
            is really  conceding this  point, ISO  should  go  back  and
            redesign the system rather than leaving this mishmash of N-1
            segmentation, N  segmentation, N  routing and  N+1 relaying.
            The current  connectionless-mode network  service  is  still
            insufficient  for   internetworking  because   the   gateway
            protocols are  not present and the connectionless-mode error
            PDUs (Protocol Data Units) do not provide the necessary ICMP
            functionality.     The  documents   also  indicate  a  major
            confusion between  an internetwork  gateway, which  connects
            different subnetworks of one catenet (concatenated network),
            and  a   simple  bridge,  which  connects  several  separate
            physical networks  into a  single network at the link layer,
            or an interworking unit, which is a subnet switch connecting
            two different  communications subnets either under different
            administrative  authorities   or  using  different  internal
            protocols.5    Tanenbaum  writes  the  following  about  the


            5  This  confusion  is  most  distressing  from  a  security
            standpoint.   The November  2 ARPA  Internet (Cornell) virus
            attack shows  that one  of  the  major  threats  to  network
            security is insider attack, which is a problem even with
            the most isolated corporate network.  Because many ARPA Internet
            network authorities  were assuming  insider  good  behavior,
            ARPA Internet  network administrators  often did  not  erect
            security  barriers   or  close   trapdoors.    Nevertheless,
            gateways  have   far  more   potential   than   bridges   or
            interworking units to provide reasonable firewalls to hinder
            and frustrate insider attack.  MIT/Project Athena, which
            makes judicious use of gateways and which does not assume
            insider good behavior, was relatively unaffected by the
            virus.  Any document which confuses gateways, bridges and
            interworking units is encouraging security laxity.

            connectionless-mode network service in Computer Networks, p.

                In the OSI model, internetworking is done in the
                network layer.  In all honesty, this is not one of
                the areas in which ISO has devised a model that has
                met with universal acclaim (network security is
                another one).6  From looking at the documents, one
                gets the feeling that internetworking was hastily
                grafted onto the main structure at the last minute.
                In particular, the objections from the ARPA
                Internet community did not carry as much weight as
                they perhaps should have, inasmuch as DARPA had 10
                years experience running an internet with hundreds
                of interconnected networks, and had a good idea of
                what worked in practice and what did not.

            Internetworking, the key concept of modern computer
            networking, exists within the OSI reference model as a
            conceptual wart which violates even the OSI principles.
            ISO was afraid that, had it not tacked internetworking
            onto the OSI model, DARPA and that part of the US computer
            industry with experience in modern computer networking
            would have rejected the OSI reference model outright.

            6 Actually,  I find ISO 7498/2 (Security Architecture) to be
            one of  the more  reasonable ISO documents. I would disagree
            that simple  encryption is  the only  form of security which
            should be  performed at  the link  layer  because  it  seems
            sensible that  if a  multilevel secure mini is replaced by a
            cluster of  PCs on  a  LAN,  multilevel  security  might  be
            desirable at  the link layer.  Providing multilevel security
            at the link layer would require more than simple encryption.
            Still, ISO  7498/2 has the virtue of not pretending to solve
            completely the network security problem.  The document gives
            instead a framework identifying fundamental concepts and
            building blocks  for  developing  a  security  system  in  a
            networked environment.


            In view of this major conceptual flaw which OSI has with
            respect to internetworking, no one should be surprised
            that instead of tight technical discussion and
            reasoning,  implementers   and   designers   like   me   are
            continually  subjected   to  vague  assertions  of  "greater
            richness" of  the  OSI  protocols  over  the  ARPA  Internet
            protocols.   In ARPA  Internet  RFCs,  real-world  practical
            discussion is  common.   I  would not mind similar developer
            insight or  even hints  about the  integration of  these OSI
            protocol  interpreters   into  genuine   operating   systems
            participating in an OSI interoperable environment.

            The customers  should realize "greater richness" costs a lot
            of extra  money even  if a  lot of  the added  features  are
            useless  to   the   customer.   "Greater   richness"   might
            necessitate the  use of  a much  more powerful  processor if
            "greater  richness"   forced  much   more   obligatory   but
            purposeless protocol processing overhead. "Greater richness"
            might also represent a bad or less than optimal partitioning
            of the problem.


            Netview has  so much  "greater richness"  than  the  network
            management protocols  and systems  under development  in the
            ARPA Internet  context that  I have  real problems  with the
            standardization of  Netview into  OSI network  management as
            the obligatory  user interface  and  data  analysis  system.
            Netview is  big, costly,  hard to  implement, and  extremely
            demanding on  the rest of the network management system.  As
            OSI network  management  apparently  subsumes  most  of  the
            capabilities of Arpanet ICMP (Internet Control Message
            Protocol), which is a sine qua non for internetworking, I am
            as a developer rather distressed that full blown OSI network
            management (possibly  including  a  full  implementation  of
            FTAM) might have to run on a poor little laser printer with
            a dumb ethernet interface card and not much processing
            power.
                                B. FTAM IS DANGEROUS

            The "greater  richness" of  FTAM seems to lie in the ability
            to transmit  single records  and in  the ability  to restart
            aborted file  transfer sessions.    Transmission  of  single
            records seems  fairly useless  in  the  general  case  since
            operating systems  like Unix  and DOS do not base their file
            systems on  records while  the records  of file systems like
            those of  Primos and VMS  have no relationship whatsoever to
            one another.    Including  single  record  or  partial  file
            transfer in the remote transfer utility seems a good
            example of bad partitioning of the problem.  This capability
            really belongs in a separate network file system.  A network
            file system should be separate from the remote file transfer
            system because  the major  issues in  security, performance,
            data  encoding   translation  and  locating  objects  to  be
            transferred are different in major ways for the two systems.

            The ability  to  restart  aborted  file  transfers  is  more
            dangerous than  helpful.  If the transfer were aborted in an
            OSI network,  it could have been aborted because one or both
            of the  end hosts  died or because some piece of the network
            died.  If the network died, a checkpointed file transfer can
            probably be restarted.  If a host died on the other hand, it
            may have  gradually gone  insane and  the checkpoints may be
            useless.   The checkpoints  could only  be guaranteed if end
            hosts  have   special  self-diagnosing  hardware  (which  is
            expensive).   In the absence of special hardware and ways of
            determining exactly  why a  file transfer  aborted, the file
            transfer must  be restarted from the beginning.  By the way,
            even with  the greater  richness of FTAM, it is not clear to
            me that a file could be transferred by FTAM from IBM PC A to
            a Prime Series 50 to IBM PC B in such a way that the file on
            PC A and on PC B could be guaranteed to be identical.
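The end-to-end check argued for above can be made concrete. The following is a minimal sketch in Python (all names are hypothetical, not part of FTAM or any real transfer protocol): a restart is accepted only if a checksum over the bytes already on disk matches a digest the sender computes independently; otherwise the transfer restarts from byte zero.

```python
import hashlib

def verify_checkpoint(local_path, bytes_received, remote_digest):
    """Accept a transfer checkpoint only if an end-to-end checksum over
    the bytes already received matches a digest the sender computed
    independently.  If the digests differ, the partial file is suspect
    (a host may have gone insane before dying) and the transfer must
    restart from byte zero.  All names here are hypothetical."""
    h = hashlib.md5()
    with open(local_path, "rb") as f:
        h.update(f.read(bytes_received))
    return h.hexdigest() == remote_digest
```

Note that this only moves the trust problem to the digest exchange; without it, the checkpoints are taken entirely on faith.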

                  C. X.400:  E-MAIL AS GOOD AS THE POSTAL SERVICE

            As currently  used and  envisioned, the X.400 family message
            handling also  has  "greater  richness."    X.400  seems  to
            include   binary-encoded   arbitrary   message-transmission,
            simple  mail   exchange  and   notification  provided  by  a
            Submission and  Delivery Entity  (SDE).   In comparison with
            ARPA SMTP  (Simple Mail  Transfer Protocol), X.400 is overly
            complicated with  hordes  of  User  Agent  Entities  (UAEs),
            Message Transfer  Agent Entities  (MTAEs) and SDEs scurrying
            around potentially eating up -- especially during periods of
            high traffic  -- lots  of  computer  cycles  on  originator,
            target and  intermediate host systems because the source UAE
            has to transfer mail through the local MTAE and intermediate
            MTAEs on  a hop-by-hop  basis to get to the target machine.7


            7 I have to admit that if I were implementing X.400, I would
            probably implement  the local  UAE and  MTAE in one process.
            The  CCITT  specification  does  not  strictly  forbid  this
            design,  but  the  specification  does  seem  to  discourage
            strongly such  a design.   I consider it a major flaw with a
            protocol  specification  when  the  simplest  design  is  so
            strongly contraindicated.  It does seem to be obligatory
            that mail  traffic  which  passes  through  an  Intermediate
            System (IS) must pass through an MTAE running on that IS.

            The design is particularly obnoxious because X.400 increases
            the number  of ways  of getting mail transmission failure by
            using so  many intermediate  entities  above  the  transport
            layer. The  SMTP architecture  is, by  contrast, simple  and
            direct.  The user mail program connects to the target system
            SMTP daemon  by a  reliable byte  stream (like a TCP virtual
            circuit) and  transfers  the  mail.    Hop-by-hop  transfers
            through intermediate  systems are possible when needed.  One
            SMTP daemon  simply connects  to another the same way a user
            mail program connects to an SMTP daemon.
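The simplicity of the SMTP dialogue is easy to show concretely. Below is a hedged sketch in Python of the client half of a minimal exchange (the client hostname and the function name are illustrative; a real implementation must also read and check the server's numeric reply after each command).

```python
def smtp_commands(sender, recipients, message):
    """Build the client half of a minimal SMTP dialogue.  Each command
    is answered by a numeric server reply (not modeled here); the whole
    exchange runs over one reliable byte stream such as a TCP virtual
    circuit to port 25.  The client hostname is illustrative."""
    cmds = ["HELO client.example.com"]
    cmds.append("MAIL FROM:<%s>" % sender)
    for rcpt in recipients:
        cmds.append("RCPT TO:<%s>" % rcpt)
    cmds.append("DATA")
    cmds.append(message + "\r\n.")  # a line with a lone dot ends the message
    cmds.append("QUIT")
    return "\r\n".join(cmds) + "\r\n"
```

An intermediate system needs nothing more: one SMTP daemon connects to another and speaks exactly the same dialogue.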

            The relatively  greater complexity  and obscurity  of  X.400
            arises because  a major  purpose of  X.400 seems  to  be  to
            intermingle  intercomputer   mail  service   and   telephony
            services  like   telex  or   teletex  to  fit  the  computer
            networking  into   the  PTT  (Post,  Telegraph  &  Telephone
            administration)  model   of  data   communications  (not  an
            unreasonable goal  for a  CCITT protocol  specification  but
            probably not the best technical or cost-effective design for
            the  typical   customer).    Mail  gateways  are  apparently
            supposed to  handle  document  interchange  and  conversion.
            Document interchange and conversion is a really hard problem
            requiring detailed knowledge at least of word processor file
            formats, operating  system architecture,  data encoding, and
            machine architecture.

            It may  be impossible  to   develop a  satisfactory  network
            representation  which   can  handle  all  possible  document
            content, language and source/target hardware combinations as
            well as provide interconversion with traditional telephonic
            data transmission encodings. The cost of development of such
            a system might be hard to justify, and a customer might have
            a hard time justifying paying the price a manufacturer would
            probably have  to charge  for this  product. A  network file
            system  or   remote  file  transfer  provides  a  much  more
            reasonable means  of document  sharing or  interchange  than
            tacking an  e-mail address  into a  file with  a complicated
            internal structure,  sending  this  file  through  the  mail
            system and  then removing  the addressing information before
            putting the  document through  the appropriate  document  or
            graphics handler.

            A NETASCII-based  e-mail system  corresponds exactly  to the
            obvious mapping  of the  typical physical letter, which does
            not usually  contain complicated  pictorial or tabular data,
            to an  electronic letter  and is  sufficient for practically
            all electronic mail traffic.  Special hybrid systems can be
            developed for that extremely tiny fraction of traffic for
            which NETASCII representations may be insufficient and for
            which a network file system or FTP may be insufficient.  In
            a correct partitioning of the problem, electronic mail
            should be kept completely separate from telephony services,
            document interchange and document conversion.
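The NETASCII mapping itself is trivial, which is part of the argument. A sketch of the conversion in Python (assuming Unix-style input whose lines end in bare LF; the function name is mine):

```python
def to_netascii(text):
    """Convert local text to NETASCII wire form: CR LF line endings,
    with a bare CR carried as CR NUL.  Assumes Unix-style input whose
    lines end in bare LF; the function name is illustrative."""
    out = []
    for ch in text:
        if ch == "\n":
            out.append("\r\n")   # bare LF becomes CR LF on the wire
        elif ch == "\r":
            out.append("\r\0")   # bare CR becomes CR NUL
        else:
            out.append(ch)
    return "".join(out)
```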

            [Figure: X.400 Mail Connections (graphics not available)]

            The MIT environment at Project Athena, where IBM and DEC are
            conducting a  major  experiment  in  the  productization  of
            academic software,  provides an  instructive example  of the
            differences between e-mail, messaging and notification.  The
            mail system  used at  MIT is  an implementation of the basic
            SMTP-based ARPA  Internet mail  system. More than four years
            ago the  ARPA Internet  mail system  was  extremely powerful
            and world-spanning.   It  enabled  then  and  still  enables
            electronic mail  to reach  users on any of well over 100,000
            hosts in  N. America,  Europe, large portions of E. Asia and
            Israel.   The Citicorp  network (described  in "How one firm
            created  its  own  global  electronic  mail  network,"  Data
            Communications,  June   1988,  p.   167),   while   probably
            sufficient  for   Citicorp's  current   needs,  connects  an
            insignificant number of CPUs (47), provides no potential for
            connectivity outside  the Citicorp  domain of authority  and
            will probably  not scale  well with  respect to  routing  or
            configuration as it grows.

            The MIT  environment is complex and purposely (apparently in
            the strategies  of DEC  and IBM)  anticipates  the  sort  of
            environment which  should become typical within the business
            world within  the next  few years.   MIT is an authoritative
            domain within  the ARPA  Internet.   The gateways out of the
            MIT domain  communicate with  gateways in  other domains via
            the Exterior  Gateway Protocol (EGP).  Internally, currently
            used internal gateway protocols are GGP, RIP and HELLO.  The
            MIT domain  is composed of a multitude of Ethernet and other
            types of  local area  networks connected  by  a  fiber-optic
            backbone physically  and by gateway machines logically. This
            use of  gateways provides  firewalls between  the  different
            physical networks  so that  little sins  (temporary  network
            meltdowns caused  by Chernobyl  packets) do  not become  big
            sins propagating  themselves throughout  the network.    The
            gatewayed architecture  of the  MIT network  also permits  a
            necessary traffic engineering by putting file system, paging
            and boot  servers on  the same  physical network  with their
            most likely clients so that this sort of traffic need not
            propagate throughout the complete MIT domain.
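The direct-versus-gateway decision that makes this traffic engineering work can be sketched in a few lines of Python (the function and parameter names are mine, and the /16 network below is illustrative, not MIT's actual numbering):

```python
import ipaddress

def next_hop(interface_net, destination, gateway):
    """Decide whether a datagram can be delivered directly on the local
    physical network or must be handed to a gateway -- the decision that
    keeps paging and file-system traffic off the rest of the catenet.
    Names and the example network are illustrative."""
    if ipaddress.ip_address(destination) in ipaddress.ip_network(interface_net):
        return destination        # same subnet: deliver directly
    return gateway                # elsewhere: forward through the gateway
```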

            Difficult to  reach locations  achieve connectivity by means
            of non-switched  telephone links.   Since  MIT has  its  own
            5ESS, these  links may  be converted  to ISDN at some point.
            While there  are some  minis and  mainframes in the network,
            the vast  majority of  hosts  within  the  MIT  network  are
            personal workstations with high resolution graphics displays
            of the  Vaxstation and  RT/PC type and personal computers of
            the IBM  PC, PC/XT  and PC/AT  type.   A few  Apollos, Suns,
            Sonys and  various workstations of the 80386 type as well as
            Lisp Machines  and PCs  from other  manufacturers like Apple
            are also  on the  air.  Most of the workstations are public.
            When a user logs in to such a workstation, after appropriate
            Kerberos (MIT  security system)  authentication, he has full
            access to  his own  network files  and directory  as well as
            access to  those resources  within the  network which he has
            the right to use.

            To assist  the administration  of the  MIT domain within the
            ARPA  Internet,   several   network   processes   might   be
            continually sending (possibly non-ASCII) event messages to a
            network  management  server  which  might  every  few  hours
            perform some  data analysis  on received  messages and  then
            format  a   summary  mail  message  to  send  to  a  network
            administrator.   This mail  message would  be placed in that
            network administrator's  mailbox by  his  mail  home's  SMTP
            daemon  which   then  might   check  whether   this  network
            administrator is reachable somewhere within the local domain
            (maybe on  a PC  with a network interface which was recently
            turned on and then was dynamically assigned an IP address by
            a  local  authoritative  dynamic  IP  address  server  after
            appropriate  authentication).    If  this  administrator  is
            available,  the   SMTP  daemon  might  notify  him  via  the
            notification service  (maybe by  popping up  a window on the
            administrator's display)  that he has received mail which he
            could read from his remote location via a post office
            protocol.

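The event summarization step described above might look like the following sketch in Python (the tuple layout and all names are my assumptions for illustration, not any actual MIT tool):

```python
from collections import Counter

def summarize_events(events):
    """Condense raw event messages from network processes into a short
    summary suitable for mailing to an administrator every few hours.
    Each event is a (host, severity, text) tuple; the layout is an
    assumption for illustration."""
    by_severity = Counter(severity for _, severity, _ in events)
    lines = ["Network summary: %d events" % len(events)]
    for severity, count in sorted(by_severity.items()):
        lines.append("  %-8s %d" % (severity, count))
    return "\n".join(lines)
```

The summary text would then be handed to the ordinary SMTP mail system; the analysis server needs no special delivery machinery of its own.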
            I have  seen the  above system being developed on top of the
            basic "static"  TCP/IP protocol suite by researchers at MIT,
            DEC and IBM over the last 4 years.  X.400 contains a lot of
            this MIT network functionality mishmashed together, but I as
            a customer or designer prefer the much more modular MIT mail
            system.   It is  an   extensible,  dynamically  configurable
            TCP/IP-based architecture from which a customer could choose
            those pieces  of the  system which he needs.  The MIT system
            requires relatively  little static  configuration.   Yet  by
            properly choosing  the system  pieces, coding an appropriate
            filter program  and setting  up a tiny amount of appropriate
            configuration data, a customer could even set up a portal to
            send e-mail  to a fax machine. In comparison, X.400 requires
            complicated directory  services and  an  immense  amount  of
            static configuration about the end user and end user machine
            to   compensate   for   the   internetworking-deficient   or
            internetworking-incompatible addressing scheme. The need for
            such a  level of  static configuration  is  unfortunate  for
            system users  because in  the real world a PC or workstation
            might easily  be moved  from one  LAN to another or might be
            easily replaced by a workstation or PC of another type.

            An MIT-style  mail system  could also  be  much  cheaper  to
            develop and  consequently  could  be  much  less  costly  to
            purchase  than  an  X.400  mail  system  simply  because  it
            represents a  much better  partitioning of the problem.  One
            or two engineers produced each module of the MIT mail system
            in approximately  6  months.    Because  of  complexity  and
            obscurity, the  development of  X.400  products  (I  saw  an
            example at Prime) is measured in staff years.  The executive
            who chooses  X.400 will  cost his  firm an immense amount of
            money which  will look  utterly wasted  when his  firm joins
            with another  firm in some venture and the top executives of
            both firms  try  to  exchange  mail  via  their  X.400  mail
            systems.   Simple mail  exchange between  such systems would
            likely be  very hard  to impossible  because  the  different
            corporations  could   easily  have   made  permissible   but
            incompatible choices in their initial system set-up.  At the
            very least, complete reconfiguration of both systems could
            be necessary.  Had the firms chosen an ARPA Internet mail
            system like  the  MIT  system,  once  both  firms  had  ARPA
            Internet connectivity  or set up a domain-to-domain gateway,
            mail would simply work.

            [Figure: SMTP Mail Connections (graphics not available)]

            Because of  the mail  system development in progress at MIT,
            DEC and  IBM, the X development which I and others have done
            and which is still continuing, SUN NFS (Network File System)
            development,  IBM  AFS  (Andrew  File  System)  development,
            Xenix-Net development, Kerberos development, and the
            plethora of other protocol systems being developed within
            the ARPA
            Internet context  (including the VMTP transaction processing
            system and  commercial  distributed  database  systems  like
            network Ingres), I am at the very least puzzled by Mr.
            Stallings' assertion   that  "[it] is the military standards
            that appear  on procurement  specifications  and  that  have
            driven  the   development  of   interoperable   commercially
            available TCP/IP products."
           |                                                            |
           |                  Partitioning the Problem           	|
           |           							|
           |The X  window system  is an  example of  a clearly  and well|
           |partitioned system.   In  windowing, the  first piece of the|
           |problem is  virtualizing the high-resolution raster graphics|
           |device.  Individual applications do not want or need to know|
           |about the  details  of  the  hardware.    Thus,  to  provide|
           |hardware independence,  applications should  only deal  with|
           |virtual high-resolution  raster-graphics devices  and should|
           |only know about their own  virtual  high-resolution raster-|
           |graphics devices  (windows).   The next piece of the problem|
           |is  to  translate  between  virtual  high-resolution  raster|
           |graphics devices  and the  physical  high-resolution  raster|
           |graphics device  (display).   The final  part of the problem|
           |lies in  managing the windows on the display.  This problem,|
           |with a  little consideration  clearly differentiates  itself|
           |from  translating   between  virtual   and  physical   high-|
           |resolution raster-graphics devices.                         |
           |                                                            |
           |In  the   X  window   system,  communication   between   the|
           |application and  its windows is handled by the X library and|
           |those libraries  built  on  top  of  the  basic  X  library.|
           |Virtual to  physical and  physical to virtual translation is|
           |handled by the X server.  X display management is handled by|
           |the X window manager.                                       |
           |                                                            |
           |After partitioning  the problem,  careful  consideration  of|
           |display management  leads to  the  conclusion  that  if  all|
           |windows on  a display  are treated as "children" of a single|
           |"root" window,  all of  which "belong"  in some sense to the|
           |window manager,  then the X window manager itself becomes an|
           |ordinary application  which talks  to the X server via the X|
           |library.   As a consequence, developers can easily implement|
           |different  display   management   strategies   as   ordinary|
           |applications without  having to "hack" the operating system.|
           |The  server  itself  may  be  partitioned  (under  operating|
           |systems which support the concept) into a privileged portion|
           |which directly  accesses the  display hardware  and  a  non-|
           |privileged  portion   which  requests   services  from   the|
           |privileged part  of the  server.  Under Unix, the privileged|
           |part of the server goes into the display, mouse and keyboard|
           |drivers while  the non-privileged  part becomes  an ordinary|
           |application.  In common parlance, X server usually refers to|
           |the non-privileged part of the X server which is implemented|
           |as an ordinary application.                                 |
           |                                                            |
           |The last  step in  realizing the X window system is choosing|
           |the  communications  mechanism  between  the  X  server  and|
           |ordinary applications  or the  display manager.  Because the|
           |problem was  nicely partitioned,  the communications problem|
           |is completely extrinsic to the windowing problem and lives as|
           |an easily  replaceable interface module.  The initial choice|
           |at MIT  was to  use TCP/IP  virtual circuits, which provided|
           |immediate network transparency; but in fact X requires only |
           |sequenced reliable byte-streams, so DECNET VCs or shared-   |
           |memory communications mechanisms can easily replace TCP/IP  |
           |virtual circuits according to the requirements of the       |
           |target environment.  Systems built on                       |
           |well-partitioned approaches to solving problems often show  |
           |such flexibility because of the modularity of the approach  |
           |and because a successful partitioning of the problem will   |
           |often, in its solution, so increase the understanding of    |
           |the original problem that developers can perceive greater   |
           |tractability and simplicity in the original and related     |
           |problems than they might have originally seen.              |
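The sidebar's point that X needs only a sequenced reliable byte stream can be illustrated with a short sketch in Python: the sender below works over any file-like stream, whether it wraps a TCP socket, a DECNET VC, or shared memory (the framing is illustrative, not the actual X wire protocol).

```python
import io
import struct

def send_request(stream, opcode, payload):
    """Write a tiny length-prefixed request to any sequenced reliable
    byte stream -- a TCP socket file, a DECNET VC wrapper, a pipe, or
    an in-memory buffer.  The framing here is illustrative, not the
    actual X wire protocol."""
    stream.write(struct.pack(">BH", opcode, len(payload)))
    stream.write(payload)

# Any file-like object will do; here an in-memory buffer stands in
# for the transport -- exactly the substitution the sidebar describes.
buffer = io.BytesIO()
send_request(buffer, 1, b"hello")
```

Swapping transports means swapping only the object passed in; the protocol code above it never changes.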

            It seems  somewhat  propagandistic    to  label  the  TCP/IP
            protocol  suite   static  and   military.     New  RFCs  are
            continually being  generated as Paul Strauss has pointed out
            in his  September article.  Such new  protocols only  become
            military   standards    slowly    because    the    military
            standardization of  new protocols  and  systems  is  a  long
            tedious political  process which  once completed may require
            expensive conformance  and verification  procedures.   After
            all, neither  the   obligatory ICMP nor the immensely useful
            UDP (User Datagram Protocol) has associated military
            standards. Often,  after reviewing  those products generated
            by market  forces, the  US military  specifies and  acquires
            products which go beyond existing military standards. By the
            way, hierarchical domain name servers and X are used on
            military networks.


            The military  are not  the only  users "more  interested  in
            sophisticated  applications  than  in  a  slightly  enhanced
            version of  Kermit."   The whole  DEC enterprise  networking
            strategy  is   postulated  on  this  observation.  Stallings
            ignored  my   reference  to   network  file   systems  as  a
            sophisticated  networking   application.  Yet,   in  several
            consulting jobs,  I have seen brokers and investment bankers
            make extensive  use of network file systems.  I also believe
            network transparent graphics will be popular in the business
            world.  At Salomon Brothers both IBM PCs and SUN
            workstations are  extensively used.   With X, it is possible
            for a  PC user  to run a SUN application remotely which uses
            the PC  as the  output device.  This capability seems highly
            desirable in the Salomon Brothers environment.

            Unfortunately "OSI  is unlikely  ever to  provide for [such]
            resource sharing because it is industry-driven."  Wayne Rash
            Jr.,  a   member  of  the  professional  staff  of  American
            Management Systems,  Inc.  (Arlington, Virginia) who acts as
            a US federal government microcomputer consultant, writes the
            following in  "Is More Always Better," Byte, September 1988,
            p. 131.

                You've probably seen the AT&T television ads about
                this trend [toward downsizing and the development
                of LAN-based resource-sharing systems].  They
                feature two executives, one of whom is equipping
                his office with stand-alone microcomputers.  He's
                being intimidated by another executive, who tells
                him in a very nasty scene, "Stop blowing your
                budget" on personal computers and hook all your
                users to a central system.  This is one view of
                workgroup computing, although AT&T has the perverse
                idea that the intimidator is the forward thinker in
                the scene.

            AT&T, and to an even greater extent the similarly inclined
            European PTTs, have major input into the OSI specification
            process.


            The inclinations  of AT&T  and the  PTTs are  not  the  only
            constraints  under   which  the   OSI  reference  model  was
            developed.   A proprietary  computer networking system, sold
            to a customer, becomes a cow which the manufacturer can milk
            for years. Complete and effective official standards make it
            difficult  for   a  company   to  lock  a  customer  into  a
            proprietary system.   A customer could shop for the cheapest
            standard system, or could choose the offering of the
            manufacturer considered  most reliable.   It  is  proverbial
            that no  MIS executive  gets fired  for choosing IBM.  Small
            players have  genuine reason  to fear that a big player like
            Unisys, which no longer has a major proprietary computer
            networking installed base [8], or AT&T, which never had a
            major proprietary computer networking installed base [9],
            might try
            to establish  themselves in  the minds  of customers  as the
            ultimate authority  for the supply of true OSI connectivity.
            Thus, small  players fear  that  a  complete  and  effective
            official  standard  might  only  benefit  the  big  players.
            Players like AT&T or Unisys fear IBM might hijack the
            standard.   IBM would prefer to preserve its own proprietary
            base  and   avoid  competing  with  the  little  guys  on  a
            cost/performance basis in what could turn into a commodity
            market.

            No such  considerations were operative in the development of
            the ARPA  Internet suite of protocols.  DARPA had a specific
            need for  intercomputer networking,  was willing  to pay top
            dollar  to   get  the   top  experts  in  the  intercomputer
            networking field  to design  the system  right and  was less
            concerned by  issues of competition (except perhaps for turf
            battles within  the U.S.  government).   By contrast, almost
            all players  who have  input into  the  ISO  standardization
            process have  had reasons and have apparently worked hard to
            limit the effectiveness of OSI systems.

            With all  the limitations, which have been incorporated into
            the OSI  design and  suite of  protocols, the  small players
            have no reason to fear being overwhelmed by big players like
            Unisys or  AT&T.  The big players have the dilemma of either
            being non-standard or of providing ineffective, incomplete
            but genuine international standards.  Small
            vendors have lots of room to offer enhanced versions perhaps
            drawing from more sophisticated internetworking concepts. In
            any case,  most small  vendors, as  well as DEC and IBM, are
            hedging their  bets by  offering both  OSI and  TCP/IP based
            products.   IBM seems well positioned with on-going projects
            at the  University of Michigan, CMU, MIT, Brown and Stanford
            and with IBM's credibility in the business world to set
            the standard for the business use of TCP/IP style
            networking.  By contrast, no major manufacturer really
            seems to want to build OSI products, and with the current
            state of OSI, there is really no reason to buy OSI
            products.

            [8] BNA and DCA seem hardly to count even to the Unisys

            [9] Connecting computer systems to the telephone network is
            not computer networking in any real sense.


            MAP shows perfectly the result of following the OSI model to
            produce a computer networking  system.  GM analysts sold MAP
            to GM's  top management  on the  basis of the predicted cost
            savings.   Since GM  engineers designed,  sponsored and gave
            birth to  MAP, I  am not surprised that an internal GM study
            has found MAP products less expensive than non-MAP compliant
            products.   If the internal study found anything else, heads
            would have  to roll.  Yet, as far as I know, neither IBM nor
            DEC have  bought into  the concept  although both  companies
            would probably  supply MAP  products for  sufficient profit.
            Ungermann-Bass and other similar vendors have also announced
            a disinclination  to  produce  IEEE  802.4  based  products.
            Allen-Bradley has chosen DECNET in preference to a MAP-based
            manufacturing and  materials handling system. This defection
            of major  manufacturers, vendors  and customers from the MAP
            market has to limit the amount of MAP products available for
            customers to purchase.

            Nowadays, GM  can purchase  equipment for  its manufacturing
            floor from  a limited  selection of  products, which are the
            computer networking  equivalent of  bows and arrows, whereas
            in the  past GM  was stuck  with rocks and knives.  Bows and
            arrows might  be sufficient for the current GM applications;
            however, if my firm had designed MAP, GM would have the
            networking equivalent of nuclear weapons, for the MAP
            network would have been built around a genuine internet: a
            multi-medium, gatewayed, easily modifiable environment in
            which fiber media could be used wherever token-bus noise
            resistance is insufficient or higher bandwidth is needed.
            With the imminent deluge of fiber-based products, MAP looks
            excessively limited.  (Actually, the MAP standards
            committees have shown some belated awareness that fiber
            might be useful in factories.)


            Interestingly enough,  even when OSI systems try to overcome
            OSI limitations via protocol conversion to provide access to
            some of  the sophisticated  resource sharing  to which  ARPA
            Internet users  have long  been accustomed,  the service  is
            specified in  such a  way as  to place  major limitations on
            performance of  more sophisticated  applications. Just  like
            IBM and other system manufacturers, I have no problem with
            providing to the customer, at sufficient profit, exactly
            those products which the customer specifies.  Yet, if
            contracted for advice on a system like the NBS TCP/IP-to-OSI
            protocol converter  IS (Intermediate  System), described  in
            "Getting there from here," Data Communications, August 1988,
            I might  point out  that such  a system  could easily double
            packet  traffic   on  a   single   LAN,   decrease   network
            availability and reliability, prevent alternate routing, and
            harm throughput  by creating  a bottleneck  at the  IS which
            must perform both TCP/IP and OSI protocol termination.
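            The traffic-doubling point is simple arithmetic: on a
            single LAN, every packet relayed through the converting IS
            crosses the wire twice, once from the source host to the
            IS and once from the IS to the destination.  A minimal
            sketch of that accounting (Python, purely illustrative):

```python
def lan_crossings(app_packets, relayed_through_is):
    # Wire transmissions on a single LAN segment.  A packet relayed
    # through a protocol-converting Intermediate System is sent twice:
    # source host -> IS, then IS -> destination host.
    return app_packets * (2 if relayed_through_is else 1)

# 10,000 application packets between two hosts on the same LAN:
direct = lan_crossings(10_000, relayed_through_is=False)   # 10,000 transmissions
via_is = lan_crossings(10_000, relayed_through_is=True)    # 20,000 transmissions
```

            The bandwidth cost comes on top of the latency and
            reliability cost of terminating both protocol stacks at a
            single box.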

            X. CONCLUSION

            Official standardization  simply by  itself does  not make a
            proposal good.   Good  standards generally were already good
            before they  became official  standards. The  IEEE and other
            standards bodies  generate lots  of  standards  for  systems
            which quickly  pass into  oblivion.   OSI was  generated  de
            novo, apparently  with a  conscious decision  to ignore  the
            already functioning  ARPA Internet  example. Unless  a major
            rethinking  of  OSI  (like  redesigning  OSI  on  the  solid
            foundation of  the internetworking  concept) takes  place in
            the near  future, I  must conclude  that the  ARPA  Internet
            suite of  protocols will  be around for a long time and that
            users of OSI will be immensely disappointed by the cost,
            performance, flexibility and manageability of their OSI
            systems.

            I. Introduction
            II. The Debate
            III. Internetworking:  The Key System Level Start Point
            IV. "Greater Richness" Versus Developer Insight
                A. OSI Network Management and Netview
                B. FTAM is Dangerous
                C. X.400:  E-Mail as Good as the Postal Service
                D. ARPA SMTP:  Designing Mail and Messaging Right
            V. Is the TCP/IP Protocol Suite "Static?"
            VI. Enterprise Networking and Sophisticated Applications:
                    Selling Intercomputer Networking
            VII. Big and Small Players Constrain OSI
            VIII. MAP:  Following the OSI Model
            IX. Extending OSI Via Protocol Converters:  Quo vadit?
            X. Conclusion