By Paul Gowans & Val Wilson
- 1 Introduction
- 2 Bandwidth All the Time Versus Bandwidth on Demand
- 3 Bandwidth Control
- 4 Connection Control
- 5 Data Control
- 5.1 Data Compression
- 5.2 Why Use Compression?
- 5.3 Compression Techniques And Standards
- 5.4 Spoofing
- 6 Conclusion
- 7 See Also
The networking and data communications marketplace is undergoing a revolution. More and more users are demanding efficient, secure, high-performance remote information access. They want the ability to connect to their networks from any location, including branch offices, home offices or hotel rooms. They also want access to online services such as the Internet.
This increased demand for connectivity has created a far more complex networking environment. New communications services such as ISDN are becoming widely available and are fueling this demand. In addition, the rapid pace of industry change and network deployment provide network managers with little time for assessing the true costs of network solutions.
Regardless of the burdens they face, network managers must find and implement tools and solutions that can be used to create cost-effective networks for both the near and far term.
Bandwidth All the Time Versus Bandwidth on Demand
Until recently, most corporations based their Wide Area Networks (WANs) on leased lines. Leased lines provide "bandwidth all the time," meaning the company pays for the lines regardless of the time they are in use. While this technology is sufficient for enterprises that need consistent connectivity, it does not meet the performance and cost control needs of companies that have multiple users and offices accessing client/server applications remotely.
Branch offices or remote users that only need access for a few hours a day are better served by WAN switched services such as ISDN. In the ISDN environment, fast call set-up times and other attributes of switched services enable "bandwidth on demand," meaning bandwidth is available when it is needed and charges are only incurred when data is actually being transmitted over the line. With switched services such as ISDN, it is cost-effective to connect even the smallest remote or home office.
Despite the many advantages of switched services, however, they must be managed properly in order to realize the maximum benefits.
While many remote access vendors focus solely on connectivity solutions, Shiva is taking the next step to anticipate customer needs by helping companies manage their skyrocketing telecommunications bills. In order to accomplish this challenging task, Shiva has created Tariff Management, a unique set of technologies designed to help companies minimize the cost of using switched services.
Tariff Management is based on three technology areas:
- Bandwidth Control
- Connection Control
- Data Control
Bandwidth Control reaps maximum network efficiency at minimum cost by deploying flexible and dynamic bandwidth-on-demand techniques.
Connection Control provides the most efficient way of connecting remote locations. This is based on taking advantage of different tariffs and on prioritizing connections. Connection control also provides fast and efficient recovery from failure.
Data Control makes the most efficient use of available bandwidth by using Spoofing and Triggered Routing update techniques. It ensures that usage-sensitive LAN-to-WAN services such as ISDN are not left "on" when there is no data to send. Data control also utilizes data compression to squeeze as much data as possible into the available bandwidth.
When these three innovative concepts are integrated under Shiva's Tariff Management umbrella, network managers are able to gain the greatest possible monetary and competitive value from remote network access.
LAN-to-LAN traffic is inherently sporadic. Bandwidth Control ensures that WAN services are only used when required and closed down when there is no user data transmission. This is critically important when services are being paid for, regardless of the amount of traffic being transmitted across the network. It also ensures that optimal services are used for particular applications and/or particular remote sites, and that extra bandwidth can be made available when there are unexpected bursts of traffic.
Only by combining these Bandwidth Control features can network managers be confident that WAN costs are minimized and the most flexible service is available.
There are three key areas of Bandwidth Control:
- Bandwidth on Demand
- Minimum Call Duration Timer
- Bandwidth Aggregation and Augmentation
Bandwidth on Demand
In the ISDN environment, fast call-set-up times and other attributes of switched services enable "bandwidth on demand," meaning bandwidth is available when it is needed and charges are only incurred when data is actually being transmitted over the line.
With bandwidth on demand, a call is only opened when there is data to send and then closed as soon as the data is sent. This is totally transparent to users on the network.
For example, when users are running a Web browser to access a remote Web server via ISDN, they cause an ISDN connection to be opened at the point of first access to the Web. While they are reading the data they have received, the connection times out because no data is being sent or received. As soon as they access the next page of information, the connection is re-opened. Since the time to make the ISDN call is so rapid, the users appear to have been connected all the time.
The time-out parameters are usually configurable on the ISDN access devices and the most suitable values will depend on carrier tariff policy and the applications being used.
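The idle time-out behavior described above can be sketched in a few lines. This is a simplified illustration with hypothetical names; real ISDN access devices implement this logic in firmware, and the right time-out value depends on the carrier's tariff.

```python
class OnDemandLink:
    """Sketch of bandwidth-on-demand idle-timeout logic."""

    def __init__(self, idle_timeout=30):
        self.idle_timeout = idle_timeout  # seconds of silence before hangup
        self.connected = False
        self.last_activity = 0.0

    def send(self, data, now):
        # Opening the call is transparent to the user: fast ISDN call
        # set-up makes the link appear permanently connected.
        if not self.connected:
            self.connected = True  # dial the remote end
        self.last_activity = now

    def tick(self, now):
        # Close the call once no data has flowed for idle_timeout seconds.
        if self.connected and now - self.last_activity >= self.idle_timeout:
            self.connected = False  # hang up; no further charges accrue


link = OnDemandLink(idle_timeout=30)
link.send(b"GET /page1", now=0)
link.tick(now=10)        # only 10s idle: call stays up
assert link.connected
link.tick(now=45)        # 45s of silence exceeds the 30s timeout
assert not link.connected
```

A Web-browsing session maps directly onto this: each page fetch calls `send`, and the pauses while the user reads let `tick` close the call.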
Minimum Call Duration Timer
The minimum call duration timer is an extension of the bandwidth-on-demand time-out. Many carriers have a minimum call time that is different in length (and possibly in tariff rate) from subsequent call times. For example, the minimum call time may be three minutes and thereafter, the tariff is per minute. Having a separate configurable timer to handle this creates the flexibility needed to successfully manage costs.
Bandwidth Aggregation and Augmentation
With the availability of combined bandwidth, extra data channels -- an ISDN B channel, a leased line, an X.25 virtual circuit or a dial-up circuit -- are only used when the existing channel capacity is saturated. Channels are shut down when the extra bandwidth is not required. Bandwidth can be increased by combining channels of the same network type, such as ISDN B channels, or by combining channels of different types, such as an ISDN B channel and a leased line.
Combining the bandwidth of two or more channels of the same type, on the same interface or across interfaces, is termed aggregation. In this scenario, when a router receives the first packet for transmission, a channel is opened to the remote router. A further channel is then dynamically opened when the number of packets or bytes queued exceeds a certain value, which is normally user-defined. After each new channel is opened, there is a short delay before a subsequent channel is opened, allowing the existing queue to be emptied.
When the measured data throughput indicates that fewer channels are needed, data is no longer transmitted on the channel that was opened last. If both ends stop sending data, the channel is closed after a user-specified time-out. This latency is used to accommodate bursty traffic patterns.
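The open-and-close dynamics above can be sketched as a small state machine. The threshold and delay values are the user-definable parameters the text mentions; the names are hypothetical.

```python
class Aggregator:
    """Sketch of queue-driven channel aggregation."""

    def __init__(self, open_threshold=50, settle_delay=5):
        self.open_threshold = open_threshold  # queued packets before adding a channel
        self.settle_delay = settle_delay      # delay before opening yet another channel
        self.channels = 0
        self.last_open = -999.0

    def on_queue_depth(self, queued, now):
        if self.channels == 0 and queued > 0:
            self.channels = 1                 # first packet opens the first channel
            self.last_open = now
        elif queued > self.open_threshold and now - self.last_open >= self.settle_delay:
            self.channels += 1                # queue still growing: add bandwidth
            self.last_open = now

    def on_low_throughput(self):
        # When measured throughput falls, the last-opened channel stops
        # carrying data and is closed after its idle time-out.
        if self.channels > 1:
            self.channels -= 1


agg = Aggregator()
agg.on_queue_depth(queued=10, now=0)   # first packet: one channel opened
agg.on_queue_depth(queued=80, now=6)   # backlog over threshold: second channel
assert agg.channels == 2
agg.on_low_throughput()
assert agg.channels == 1
```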
Channels from different interfaces can also be combined. For instance, one channel on an interface is specified as primary while another is specified as secondary. Channels on the primary interface are used before channels from the secondary interface. This technique is used to combine bandwidth from interfaces of similar speed.
Adding bandwidth from a different type of interface is known as augmentation. Using an ISDN B channel as on-demand bandwidth for a leased line is a common application of combined bandwidth. This allows a 64Kbps leased line to be used for average load, while an ISDN B channel is added when the leased line is saturated.
Switchover enables traffic to be moved from one circuit to another, depending upon the traffic rate. A slow-speed leased line running at 19.2Kbps can be linked to a 64Kbps ISDN B channel. When the traffic rate on the leased line reaches saturation, the ISDN link is opened and traffic moved to it. Once the traffic rate drops below that of the leased line, the ISDN link is closed down and traffic diverted back to the leased line. The threshold at which traffic switches can be defined by the user. Switchover ensures that the most cost-effective circuit is always used, and provides a very cost-effective solution for networks with changing bandwidth needs throughout the day.
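The switchover decision reduces to a threshold comparison. A minimal sketch, assuming a user-defined saturation threshold of 90% of the leased line's capacity:

```python
def choose_circuit(traffic_bps, leased_bps=19_200, threshold=0.9):
    # Move traffic to the 64Kbps ISDN channel once the 19.2Kbps leased
    # line nears saturation; divert it back once the rate drops below
    # the leased line's capacity.
    if traffic_bps >= leased_bps * threshold:
        return "isdn"
    return "leased"

assert choose_circuit(5_000) == "leased"    # light load: cheap circuit
assert choose_circuit(18_000) == "isdn"     # near saturation: faster circuit
```

A production implementation would add hysteresis (separate up and down thresholds) so traffic hovering near the threshold does not flap between circuits.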
Shiva's Support of Bandwidth Control
The ShivaIntegrator line of products supports all the Bandwidth Control features described. Dynamic bandwidth aggregation and augmentation is achieved through our own dynamic multilink mechanism.
Methods of Aggregating Data Channels
When data channels are aggregated together they provide a wider "pipe" down which to send the data. This is analogous to adding additional lanes to a single-lane highway. More traffic bandwidth is made available. There are various techniques which can be used to manage the data down these combined channels. These range from the simplest 'round robin' way of sending one data packet down each channel in turn, to splitting the packets of data into fragments and sending each down a different channel. Both PPP Multilink and bonding use the more complex fragment approach.
The PPP Multilink Protocol (RFC 1717) is a standardized extension of PPP (the Point-to-Point Protocol, RFC 1661). It describes a standard method of combining channels that ensures packet ordering and compatibility between manufacturers of internetworking equipment. It employs a method known as "packet chopping," wherein individual packets are chopped into smaller fragments of a uniform size. These fragments are then distributed among all the channels in use. Because it is a software solution, PPP Multilink is limited in the number of channels that can be combined at any given time. However, it does allow internetworking products to combine channels of any type, not just ISDN.
Unfortunately, there are currently no standards for dynamically aggregating additional data channels using PPP, for ISDN or any other service.
Bonding allows channels to be combined at the physical framing level. It gets its name from the Bandwidth On Demand Interoperability Group. It is independent of the framing protocol used. Bonding is a very efficient method of combining channels because it is normally performed in hardware without any software packet handling overhead. Bonding is particularly effective when used with Primary Rate ISDN (PRI). With PRI, up to 63 B channels can theoretically be combined to provide a very high-speed link. This bandwidth can be used to provide the primary method of connection between sites, or as a convenient backup to high-speed leased lines.
Bonding is based on an open standard so it provides interoperability between vendors. It is independent of any higher-layer protocols. Bonding can only be used with ISDN. Also, there are various modes of operation for Bonding and the most common mode does not support the dynamic addition and removal of circuits on demand.
Connection control provides the most efficient way of linking remote locations. This is based on tariff parameters or prioritization. Connection control also provides fast and efficient failure recovery.
Time of Day Tariffs
WAN services are subject to different tariffs at different times of day. Usually, these tariffs are lower during the evening than at peak usage times in the day. Since networks operate continuously and often replicate data at night, it is important to be able to take advantage of these lower tariffs.
Network managers can take advantage in several ways. For example they could use X.25 or frame relay during the day when interactive traffic is high, and employ ISDN at night for data replication and backup. Or, they could make sure that ISDN was not being used at peak times at all, by preventing any remote access during certain periods.
In some circumstances, it is useful to switch off the ISDN link for a period of time, to make sure that applications are not stuck in a loop, using valuable bandwidth.
Callback is another tariff-based technique. Tariffs between two remote locations frequently vary, depending on which site initiates the call. This is particularly applicable to domestic and international long distance calls. With callback, when a remote site calls a second site, the second site closes the call and dials the first site back. When this is done via ISDN, CLI (Calling Line Identification) can be used for the callback number. This is very efficient as the initial call will be refused, incurring no charge at all in this direction. In the case of ISDN and dial-up, the PPP-based PAP or CHAP can be used by network managers to identify which remote site should be called back. Callback can also be used for security purposes or for centralized or decentralized billing, by, for example, an Internet provider.
Circuits can be prioritized to ensure that low- or high-priority calls are opened or closed down depending on their importance. For example, consider router 1, which has one Basic Rate ISDN interface. Its two B channels can be aggregated to provide 128Kbps of bandwidth to router 2. If router 1 needs to call a third router while that call is in progress, circuit prioritization can give precedence to the new call: router 1 closes down one channel of the call to router 2 and dials router 3, while the call to router 2 continues with reduced bandwidth.
Failure Recovery: Transparent Backup and Multiple Static Routes
Transparent backup and multiple static routes are two techniques that offer cost-effective resilience against communication failure. Communication failure can either be caused by an internetworking device failure or a network failure. Both of these potential problems -- and the right internetworking product to solve them -- must be considered when designing a resilient network.
Transparent backup is applied when two WAN interfaces on the router provide different paths to the same destination. One interface may be used as the primary circuit, while the second acts as the secondary or backup circuit. If the primary circuit fails, the secondary circuit is automatically activated. When the primary circuit comes up again, traffic is transferred from the secondary circuit back to the primary circuit. If, for example, the primary circuit is a leased line service with the backup circuit running across ISDN, the user would only pay for the ISDN backup when it was needed.
It is vital that routers be capable of switching from one circuit to another without requiring routing protocols such as the Routing Information Protocol (RIP).
Routing protocols often take up to three minutes before implementing the desired topology change. A delay of this length can cause serious problems in a network running critical applications. Data can be lost if there is no known route. With transparent backup, switching to a backup link is done transparently when a link failure is detected. As a result, backup is provided immediately and with no data loss.
Multiple static routes is another cost-saving feature that avoids running routing protocols such as RIP over WAN links while still providing a back-up service. When a failed static route is detected, a router employs the next best route. If the original best route returns to service, it replaces the alternative route.
As the router determines which route to use, there is no need to run RIP over the WAN links. This leads to significant WAN savings as running RIP can be expensive.
It is important to understand the difference between transparent backup and multiple static routes. Transparent backup is used to provide two alternate circuits to the same remote router. Only one route will ever be advertised via the routing protocols, regardless of which circuit is in use. This mechanism is transparent to all devices -- except the two communicating routers -- whether a primary or backup circuit is providing the connectivity. Multiple static routes can be used when connecting multiple routers. The routers will advertise the new route locally, an action that will be detected. Multiple static routes are therefore not transparent.
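Route selection with multiple static routes amounts to picking the best live route and falling back automatically. A minimal sketch, with hypothetical route tuples of (preference, interface, up):

```python
def best_route(routes):
    # Lower preference wins. When the best route's link fails, the next
    # one takes over; when the link is restored, the original route is
    # used again, all without running RIP over the WAN.
    live = [r for r in routes if r[2]]
    return min(live)[1] if live else None

routes = [(1, "leased-line", True), (2, "isdn-backup", True)]
assert best_route(routes) == "leased-line"
routes[0] = (1, "leased-line", False)      # primary circuit fails
assert best_route(routes) == "isdn-backup"
routes[0] = (1, "leased-line", True)       # primary restored
assert best_route(routes) == "leased-line"
```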
Support for Connection Control
The ShivaIntegrator is unique in supporting all these Connection Control features. Services can be dynamically changed depending on the time of day. Callback can save considerable money as well as providing flexibility. Circuit prioritization means that key remote locations are always guaranteed bandwidth in any circumstances. Multiple static routes and transparent backup give added levels of backup and security that are vital for an efficient network.
The third element of Tariff Management is data control. It involves controlling the data sent over usage-sensitive services such as ISDN. There are three primary data control components:
- Data compression
- Triggered routing protocol updates
- Spoofing responses to service or "housekeeping" messages
The issue of compression in a routed network is a complex one, and there is no single solution. It is clear, however, that in many networking environments data compression will improve user response times and increase the volume of data that can be transmitted in a given time. As a result, network costs will be reduced.
The right choice of tools and techniques for any particular network will depend on:
- Traffic bandwidth requirements
- Traffic protocols
- Nature of traffic
- Network topology
- Application latency requirements
Why Use Compression?
Here are some reasons why using the right form of compression can improve your network. It provides:
- Continued use of a legacy low-speed line despite increasing bandwidth requirements. Getting an average 2:1 compression ratio across a 9.6Kbps leased line gives it the effective bandwidth of a 19.2Kbps line.
- Improved latency across a low-speed line. If the routers at either end of a slow-speed leased line connection can compress the data sufficiently, an improvement in latency may be observed.
- Reduced network costs on a time-based tariffed service such as ISDN.
There are three forms of compression: header compression, body compression and link compression. Each has its advantages and disadvantages, and should be employed in different circumstances. It is worth noting that in internetworking, all compression algorithms must be "lossless" (the packets must look the same following compression/decompression as they did initially). Different algorithms are used than those employed in the fields of voice or video compression, where "lossy" algorithms are used with the expectation of a drop in signal quality. This would be unacceptable in data communications.
In any protocol, a packet consists of header information which defines where the packet is to go, and what type of information it contains. This is followed by the information itself. In a link dedicated to one type of traffic between the same two hosts, this header information does not change, yet is duplicated in each message. Where the body content is small, the header information forms the larger percentage of the bandwidth generated, even though it serves no purpose.
Header Compression removes these duplicated headers before the packet is sent over the link, and regenerates them at the remote end. This technique can be employed to great advantage for interactive protocols such as Telnet and X-Windows, where typically the packet content may be only one byte.
The most common form of header compression in the Internet world is Van Jacobson Header Compression, defined in RFC 1144.
Where applications are communicating using protocols with large body contents, compressing the body will achieve a greater effect than compressing the headers. Although many forms of compression algorithms may be employed, the choice should be made based on the memory or speed requirements. Both sides of the link must of course agree on the algorithm to be used.
A good example of successful body compression is the transportation of Microsoft LAN Manager packets across a WAN. These packets typically contain a large percentage of repeated characters and empty space, and are therefore ideal candidates for compression.
On point-to-point links, the entire data stream may be compressed and regenerated at the remote end. This is a protocol-independent mechanism and may be implemented in devices separate from the internetworking equipment used for the transmission.
Compression ratios quoted by manufacturers vary considerably. Many quote best-case ratios for pre-configured data, which is often a poor reflection of real-world transmission. The best way to test the compression achievable for a particular application is to try it.
The ShivaIntegrator line of products employs the PPP Predictor compression algorithm and a custom algorithm based on PPP Predictor. This can achieve compression ratios up to 6:1.
Compression Techniques And Standards
Apart from Van Jacobson's header compression, there are currently no RFCs in this area. There are, however, a multitude of Internet Drafts. One Internet Draft defines the compression control protocol used to negotiate, at link initiation, which, if any, compression protocol should be used. The others define the use of each of the compression protocols within the PPP framework.
The PPP Compression Control Protocol
This protocol is a mechanism that allows two PPP peers to negotiate which, if any, of the compression protocols they will use. Technically, this draft has been ready to progress down the Internet standards track for some time.
The PPP BSD Compression Protocol
This uses the widely implemented UNIX Compress compression algorithm, the source for which is widely and freely available. The PPP BSD algorithm has the following features:
- Dynamic table clearing when compression becomes less effective.
- Automatic compression shutdown when the overall result is not smaller than the input.
- Dynamic choice of code width within pre-determined limits.
- Many years of heavy network use on modem and other point-to-point links to transfer netnews.
- Effective code width that requires less than 64K bytes of memory on both send and receive.
The PPP V.42bis Compression Protocol
V.42bis was proposed by the CCITT (now the ITU-T) as a compression standard to work in association with the V.42 error-correction protocol for modems. It uses a variant of the Lempel-Ziv-Welch (LZW) compression algorithm and can be implemented in hardware or software. It has the benefit of automatically turning compression on and off depending on the compressibility of the data stream.
The PPP Predictor Compression Protocol
Predictor is the algorithm intended to be the vendor standard. According to the Internet Draft which defines it:
"Predictor is a high-speed compression algorithm, available without license fees. The compression ratio obtained using predictor is not as good as other compression algorithms, but it remains one of the fastest algorithms available."
Triggered Routing Protocol Updates
In an IPX environment, routers use RIP to broadcast their current routing tables every 60 seconds, and servers use SAP to broadcast their currently available services on the same 60-second cycle. When these broadcasts are allowed to traverse usage-sensitive WANs, costs soar.
However, there is a problem with stopping these broadcasts: if they are eliminated, the routers and servers cannot communicate with each other and do not have a complete view of the network. There are several ways -- some of them flawed -- to minimize these broadcasts.
Timed updates is a flawed method of transmitting network changes at predefined intervals. It is a simple method that unfortunately sends messages and opens ISDN calls when there are no changes, thereby increasing traffic with no accompanying benefit. If the timer is set to a small interval, the network load can be increased by about 15%, but if the timer is set to a high interval, updates are often transmitted too late to be useful.
Piggybacking is another flawed technique. It allows some traffic types to be designated as transmittable across the WAN if the link is already open, but it does not allow the link to be opened solely for the purpose of transmitting this traffic. When the link is opened for some other user traffic, the routing update information is allowed across. The exception to this behavior comes when a routing link fails, and the routing updates are forced across. With piggybacking, updates are not acknowledged.
Triggered RIP (IP and IPX) and SAP (IPX only) routing updates are more efficient. They only broadcast across the WAN when the available services or network topology changes -- an infrequent occurrence in a well-behaved, stable network. In addition, these updates must be acknowledged--meaning that the remote routers have successfully received the updates. By running Triggered RIP and SAP, network managers can realize the benefit of the protocols without the network overhead costs.
The term "spoofing" is used differently by WAN internetworking vendors. For the purposes of this white paper, spoofing is defined as, "a set of techniques to keep service packets or network housekeeping information off the WAN link while fooling the network into thinking the frames have been sent."
Spoofing is most applicable to Novell NetWare networks. The Novell protocols were written assuming that devices were connected to LANs where bandwidth was not an issue. Many types of service or housekeeping packets are sent between devices. These include:
- IPX keep-alive ("watchdog") packets
- SPX keep-alive ("probe") packets
- NetBIOS over IPX keep-alive packets
NetWare is a client-server operating system. When using IPX, each client (user) logs on to a server on the network, and for the duration of the login session, the server PC sends keep-alive packets to reassure itself that the other side of the session is still live. The period of the keep-alive packets is configurable but is typically measured in minutes. Keep-alive packets are also known as watchdog packets.
Even though a user may not be doing any live work, the session will be considered as live until the user logs out from the server. A typical NetWare user may choose to be permanently logged into one server, rather than logging in and out every day.
SPX is similar to IPX, but is a connection-oriented protocol which sits above IPX and is used by applications to provide guaranteed delivery of packets in the correct sequence. The SPX header includes the IPX header of 30 bytes, and then adds another 12 bytes for sequencing, flow control, connection and acknowledgment information. The SPX protocol also uses keep-alive packets, known as probe packets, but in this situation they are sent from both the client and the server. Lotus Notes is one application which uses SPX/IPX.
NETBIOS Over IPX
Corporate internetworks are increasingly built using NetBIOS over IPX as the common transport protocol; this is the standard configuration in Microsoft Windows 95. Microsoft previously standardized on the LAN-oriented NetBEUI protocol, which relies on frequent broadcasts to communicate between hosts, much as IPX does.
NetBIOS over IPX carries on this practice of communication via broadcasts, so in order to spoof a NETBIOS/IPX network successfully, the WAN router must be able to handle spoofing at both levels simultaneously.
Spoofing is often extended to cover additional ways of keeping service packets or network-related housekeeping information off the WAN link. One specific area is worth mentioning.
Novell NetWare servers broadcast a serialization packet to protect against duplicate servers using the same serial number. The ShivaIntegrator filters out these packets rather than spoofing a reply.
Spoofing of IPX and SPX keep-alives and NetBIOS over IPX is essentially the same. Consider the following SPX example.
In a typical client-server SPX interaction, one station (the client) requests services from another station (the server). Before submitting any requests to the server, the client must establish a connection. When the client has completed its interaction with the server, the connection is closed down. Once a connection is established between client and server, there may be periods of inactivity when no data is sent by either workstation. During this idle period the server will send keep-alive or probe packets to the client to ensure that the workstation is still up and running. When both the client and the server are on the same LAN this presents no bandwidth or cost issues.
If the client and server are at either end of a switched WAN service such as ISDN, the ideal solution would be to keep the connection closed during the periods of inactivity. However, the keep-alive or probe packets keep the connection open. This is where spoofing comes in.
Spoofing reduces network costs significantly because keep-alive packets are responded to locally by the router and not sent across the WAN link. This means the WAN link remains closed during the periods of inactivity. The technique is called spoofing because, by responding locally to keep-alive packets, the router "spoofs" the client and server, making them act as though the WAN connection is still active.
The TCP/IP protocol stack was designed 20 years ago with WANs in mind. It employs few broadcasts, and those it does use are normally restricted to the LAN on which they originate. The only broadcast common to TCP/IP is the RIP broadcast, used by routers to advertise the routes known to them. Since the release by Microsoft of a TCP/IP stack for Windows, internetworks built on NetBIOS/IP are frequently used. Spoofing at the IP level is unnecessary because IP is more WAN-friendly than IPX. However, spoofing is still required for the NetBIOS protocol.
Shiva has the most comprehensive set of spoofing features for LAN-to-LAN connectivity in the ShivaIntegrator product line. All the features described above are implemented. Data Communications International (November 1994) quoted Shiva as the top performer in an evaluation with other industry leaders.
Example Cost Savings With Spoofing and Triggered Updates
Both spoofing and triggered updates prevent calls being made to send data that is not necessary. These features can save enormous amounts of money. Consider the following simple example which shows the high costs of sending unnecessary data.
You have a long-distance, ISDN-based Novell Network between two locations. You probably would not have considered using ISDN if these locations required access for more than two hours per day, so assume that this is the period over which real data is actually being sent. Assume the average call is in progress for five minutes, so there are 24 calls per day for real data.
Assume a call charge of $.03 for the first minute and $.01 for each additional minute. (These are the costs from California -- substitute your own tariff rates).
The Costs Over One Day:
- Assume 24 calls of five minutes each. The first minute costs $.03 and each of the four subsequent minutes costs $.01.
- 24 * ($.03 + (4 * $.01)) = $1.68 per day

However, if you do not have spoofing and/or you are using normal rather than triggered RIP, there are many additional calls to send service frames or housekeeping data. This could easily result in one call per minute to send this unnecessary data -- one call every minute of every hour that no real data is being sent, or 22 hours in this example.
- 22 * 60 calls of a few seconds each, at $.03 per call.
- 22 * 60 * $.03 = $39.60 per day
In this example, since the initial call charge is more expensive than the per-minute charge, it would actually be cheaper to keep the call open all day.
With spoofing and triggered RIP/SAP, these calls will not be made. This is a simple example, but it does illustrate the enormous savings that these features can produce.
Before network managers can truly optimize their networks, they must have a firm command of the many tools and techniques associated with bandwidth control, connection control and data control.
The concept of bandwidth control is initially simple to grasp, but complex to implement. It is based on the notion that WAN services should only be used when they are needed, and only paid for when they are used. Through the use of techniques such as aggregation, augmentation, switchover and minimum call duration timing, bandwidth control is well within reach.
Connection control is part art and part science. The art involves having the imagination to devise strategies that optimize the same WAN elements that are available to all users. The science involves taking the time to thoroughly understand WAN elements so they can be employed to maximum user advantage.
Data control requires an in-depth knowledge of network protocols and how spoofing can be used to optimize them. Primarily applicable to the widely installed base of Novell NetWare networks, spoofing can eliminate wasteful, expensive network misuse. At a time when every expense is scrutinized, this kind of data control can make a significant corporate contribution.
There are many costs to consider when implementing a network solution. The cost of transferring data across the WAN is quantifiable and is often the largest cost associated with managing a network.
Shiva and Tariff Management
With the ShivaIntegrator, cost is controlled through Tariff Management, a series of features designed to minimize WAN costs. Tariff Management gives network managers maximum flexibility with minimum WAN costs without any extra complexity. Networks can be optimized to provide the best user service at the best price. The growing use of switched WAN services is making the use of Tariff Management increasingly critical to successful networks. Because it was designed with an eye toward the future, Tariff Management will expand and adapt to future changes in network protocols and services.
About the Authors
Paul Gowans is Product Manager for the ShivaIntegrator range of products. Gowans has previously worked in Development and Technical Support. He graduated with honors from Edinburgh University with a B.S. in Computer Science and Management Science. Gowans is an industry spokesperson on networking.
Val Wilson is a Product Marketing Manager for Shiva Corporation. Wilson originally joined Spider Systems (now merged with Shiva) in 1983, and is responsible for the marketing of the ShivaIntegrator product in the U.S. Wilson graduated from St. Andrews University with a B.S. in Computational Science.