
SGI Onyx2


=Introduction=

What defines an Onyx2 as a workstation is a screen, keyboard and mouse. Without video hardware (see InfiniteReality below), an Onyx2 is an SGI Origin 2000 server. Even the SGI documentation describes the Onyx2 as a workstation, despite the fact that it can be configured into 5-rack "reality monsters". That's some workstation, and a lot of noise!


The authoritative source of all information is SGI's techpubs.sgi.com.

==Related techpubs.sgi.com documentation==

(Rack) document number: 007-3457-005

(Deskside) document number: 007-3454-005

[Images: rack system; conceptual view of the rack system; 24 CPU meters in gr_osview; deskside models]

=Architecture=

An Onyx2 system comprises nodes linked together by an interconnection network. It uses the distributed shared memory S2MP (Scalable Shared-Memory Multiprocessing) architecture. The Onyx2 uses NUMAlink (originally named CrayLink) for its system interconnect. The nodes are connected to router boards, which use NUMAlink cables to connect to other nodes through their routers. The NUMAlink network topology is a bristled fat hypercube. In configurations with more than 64 processors, a hierarchical fat hypercube network topology is used instead. Additional NUMAlink cables, called Xpress links, can be installed between unused Standard Router ports to reduce latency and increase bandwidth. Xpress links can only be used in systems that have 16 or 32 processors, as these are the only configurations with a network topology that leaves router ports unused for this purpose.
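To make the hypercube wiring concrete: in a d-dimensional hypercube, two routers are linked exactly when their binary IDs differ in a single bit, and "bristling" hangs two node boards off each router. The following C sketch is purely illustrative (not SGI code); it enumerates the links for an 8-router cube, which with two two-processor nodes per router corresponds to a 32-processor system.

 /* Illustrative sketch, not SGI code: enumerate the links of a
  * d-dimensional hypercube of routers.  Routers i and j are connected
  * exactly when i XOR j has a single bit set.  In the "bristled"
  * hypercube each router additionally hosts two node boards. */
 #include <stdio.h>

 int main(void)
 {
     int d = 3;                       /* 2^3 = 8 routers */
     int routers = 1 << d;

     for (int i = 0; i < routers; i++)
         for (int bit = 0; bit < d; bit++) {
             int j = i ^ (1 << bit);  /* neighbor differs in one bit */
             if (i < j)               /* print each link once */
                 printf("router %d <-> router %d\n", i, j);
         }
     printf("%d routers, %d links\n", routers, routers * d / 2);
     return 0;
 }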

=Router boards=

There are four different router boards used by the Onyx2. Each successive router board allows a larger number of nodes to be connected.

==Null Router==

The Null Router connects two nodes in the same module. A system using the Null Router cannot be expanded as there are no external connectors.

==Star Router==

The Star Router can connect up to four nodes. It must be used in conjunction with a Standard Router to function correctly.

==Standard Router (Rack Router)==

The Standard Router can connect up to 32 nodes. It contains the SPIDER ASIC, which serves as a router for the NUMAlink network. The SPIDER ASIC has six ports, each with a pair of unidirectional links, connected to a crossbar which enables the ports to communicate with each other.

==Meta Router (Cray Router)==

The Meta Router is used in conjunction with Standard Routers to connect more than 32 nodes. It can connect up to 64 nodes.

=Onyx2 nodes=

An Onyx2 node fits on a single 16" by 11" printed circuit board that contains one or two processors, the main memory, the directory memory and the Hub ASIC. The node board plugs into the backplane through a 300-pad CPOP (Compression Pad-on-Pad) connector. The connector actually combines two connections, one to the NUMAlink router network and another to the XIO I/O subsystem.

See also the Onyx2/Origin2000_Node_boards topic.

==Processor==

Each processor and its secondary cache is contained on a HIMM (Horizontal Inline Memory Module) daughter card that plugs into the node board. At the time of introduction, the Onyx2 used the IP27 board, featuring one or two R10000 processors clocked at 180 MHz with 1 MB secondary caches. A high-end model with two 195 MHz R10000 processors with 4 MB secondary caches was also available. In February 1998, the IP31 board was introduced with two 250 MHz R10000 processors with 4 MB secondary caches. Later, the IP31 board was upgraded to support two 300, 350 or 400 MHz R12000 processors. The 300 and 400 MHz models had 8 MB L2 caches, while the 350 MHz model had 4 MB L2 caches. Near the end of its life, a variant of the IP31 board that could use the 500 MHz R14000 with 8 MB L2 caches was made available.

==Main memory and directory memory==

Each node board can support a maximum of 4 GB of memory through 16 DIMM slots by using proprietary ECC SDRAM DIMMs with capacities of 16, 32, 64 and 256 MB. Because the memory bus is 144 bits wide (128 bits for data and 16 bits for ECC), memory modules are inserted in pairs. Directory memory, which contains information on the contents of remote caches for maintaining cache coherency, must be used in configurations with more than 32 processors as the Onyx2 uses a distributed shared memory model. The directory memory is contained on proprietary DIMMs that are inserted into eight DIMM slots set aside for its use. In configurations where there are fewer than 32 processors, the directory memory is contained within the main memory.
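A minimal sketch of the pairing rule, using only the figures above (16 slots per node, DIMMs installed in pairs to span the 144-bit bus); it shows how the largest DIMM size yields the 4 GB per-node maximum.

 /* Illustrative arithmetic only: node-board memory capacity per DIMM
  * size.  DIMMs are installed in pairs because the 144-bit bus
  * (128 data + 16 ECC) spans two 72-bit DIMMs. */
 #include <stdio.h>

 int main(void)
 {
     int dimm_mb[] = { 16, 32, 64, 256 };  /* supported DIMM sizes (MB) */
     int slots = 16;                       /* DIMM slots per node board */
     int banks = slots / 2;                /* one bank = one DIMM pair  */

     for (int i = 0; i < 4; i++)
         printf("%3d MB DIMMs: %d banks x %3d MB = %4d MB max\n",
                dimm_mb[i], banks, 2 * dimm_mb[i], banks * 2 * dimm_mb[i]);
     return 0;
 }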

==Hub ASIC==

The Hub ASIC interfaces the processors, memory and XIO to the NUMAlink 2 system interconnect. The ASIC contains five major sections: the crossbar (referred to as the "XB"), the I/O interface (referred to as the "II"), the network interface (referred to as the "NI"), the processor interface (referred to as the "PI") and the memory and directory interface (referred to as the "DM"), which also serves as the memory controller. The interfaces communicate with each other via FIFO buffers that are connected to the crossbar. When two processors are connected to the Hub ASIC, the node does not behave in an SMP fashion. Instead, the two processors operate separately and their buses are multiplexed over the single processor interface. This was done to save pins on the Hub ASIC. The Hub ASIC is clocked at 100 MHz and contains 900,000 gates fabricated in a five-layer metal process.

=I/O subsystem=

The I/O subsystem is based around the Crossbow (Xbow) ASIC, which shares many similarities with the SPIDER ASIC. Since the Xbow ASIC is intended for use with the simpler XIO protocol, its hardware is also simpler, allowing the ASIC to feature eight ports, compared with the SPIDER ASIC's six ports. Two of the ports connect to the node boards, and the remaining six to XIO cards. While the I/O subsystem's native bus is XIO, PCI-X and VME64 buses can also be used, provided by XIO bridges.

An IO6 base I/O board is present in every system. It is an XIO card that provides:

  • 1 10/100BASE-TX port
  • 2 serial ports provided by dual UARTs
  • 1 internal Fast-20 UltraSCSI single-ended port
  • 1 external wide UltraSCSI single-ended port
  • 1 real-time interrupt output for frame sync
  • 1 real-time interrupt input (edge triggered)
  • Flash PROM, NVRAM and real-time clock


=InfiniteReality=

The difference between an SGI Origin 2000 and an Onyx2 is the InfiniteReality. In fact, the Onyx2 rack system pictured above was built from two Onyx2 racks, with the InfiniteReality taken out of the second rack and, in its place as the top compute module, an Origin 2000 deskside with the plastics removed. The InfiniteReality was introduced in early 1996. It succeeded the RealityEngine, although the RealityEngine coexisted with the InfiniteReality for some time as an entry-level option for deskside Onyx "workstation" configurations.

The InfiniteReality architecture was a third-generation design and is categorized as a sort-middle architecture. It was designed to render complex scenes in high quality at 60 frames per second, roughly two to four times the performance of the RealityEngine it replaced. It was designed explicitly for use with the OpenGL graphics library and implements most of the OpenGL pipeline in hardware.

The implementation is partitioned into the Geometry (also known as the Geometry Engine), Raster Memory (also known as the Raster Manager) and Display Generator boards, with each board corresponding to one of the three major stages of the architecture's pipeline. The board set partitioning scheme is the same as the RealityEngine's, a result of Silicon Graphics wanting the RealityEngine to be easily upgradable to the InfiniteReality. Each pipeline consists of one Geometry Engine board; one, two or four Raster Manager boards; and one Display Generator board.

The implementation comprises twelve application-specific integrated circuit (ASIC) designs fabricated in 0.5 and 0.35 micrometre processes with three layers of metal interconnect. These ASICs require a 3.3 V power supply. An InfiniteReality pipeline in a maximal configuration contains 251 million transistors. The InfiniteReality was developed by 55 engineers.

Given a sufficiently capable system, such as certain models of the Onyx2 and Onyx 3000, up to 16 InfiniteReality pipelines can be hosted. The pipelines can be operated in three modes: multi-seat, multi-display and multi-pipe. In multi-seat mode, each pipeline can serve up to eight simultaneous users, each with their own separate displays, keyboards and mice. In multi-display mode, multiple outputs drive multiple displays, which is useful for virtual reality. The multi-pipe mode has two methods of operation. The first method requires a digital multiplexer (DPLEX) daughterboard to be installed in every pipeline, which combines the output of multiple pipelines. The second method uses MonsterMode software to distribute the data used to render a frame across multiple pipelines.

To interface the pipeline to the system, a Flat Cable Interface (FCI) cable is used to connect the Host Interface Processor ASIC on the Geometry Board to the Ibus on the IO4 board, a part of the host system.

==Geometry board==

The Geometry board is responsible for geometry and image processing and is divided into four stages, each implemented by separate devices. The first stage is the Host Interface. Because the InfiniteReality was designed for two very different platforms (the traditional shared-memory, bus-based Onyx using the POWERpath-2 bus, and the distributed shared memory, network-based Onyx2 using the NUMAlink2 interconnect), it needed a host interface that could provide similar performance on both, despite a large difference in incoming bandwidth (200 MB/s versus 400 MB/s respectively).

To this end, a Host Interface Processor, an embedded RISC core, is used to fetch display list objects using direct memory access (DMA). The Host Interface Processor is accompanied by 16 MB of synchronous dynamic random access memory (SDRAM), of which 15 MB is used to cache display leaf objects. The cache can deliver data to the next stage at over 300 MB/s. The next stage is the Geometry Distributor, which transfers data and instructions from the Host Interface Processor to individual Geometry Engines.

The next stage performs geometry and image processing, using the Geometry Engine; each Geometry board contains up to four of them working in a multiple instruction, multiple data (MIMD) fashion. The Geometry Engine is a semi-custom ASIC with a single instruction, multiple data (SIMD) pipeline containing three floating-point cores, each containing an arithmetic logic unit (ALU), a multiplier and a 32-bit by 32-entry register file with two read and two write ports. These cores are provided with a 32-bit by 2,560-entry memory that holds elements of OpenGL state and provides scratchpad RAM storage. Each core also has a float-to-fix converter to convert floating-point values into integer form. The Geometry Engine is capable of completing three instructions per cycle, so each Geometry board, with four such devices, can complete 12 instructions per cycle. The Geometry Engine uses a 195-bit microinstruction, which is compressed to reduce size and bandwidth usage at the cost of slightly lower performance.

The Geometry Engine processor operates at 90 MHz, achieving a maximum theoretical performance of 540 MFLOPS. As there are four such processors on a GE12-4 or GE14-4 board, the maximum theoretical performance is 2.16 GFLOPS. A 16-pipeline system therefore achieves a maximum theoretical performance of 34.56 GFLOPS.
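These peak figures can be reproduced with straightforward arithmetic, assuming the quoted 540 MFLOPS counts one ALU operation and one multiply per cycle in each of the three floating-point cores (6 FLOPs per cycle at 90 MHz); that assumption is inferred from the numbers, not stated in the source:

 /* Recomputing the quoted peak-FLOPS figures.  The 6 FLOPs/cycle
  * assumption (ALU + multiplier in each of three cores) is inferred
  * from the 540 MFLOPS figure, not stated explicitly in the source. */
 #include <stdio.h>

 int main(void)
 {
     double clock_mhz       = 90.0;
     double flops_per_cycle = 3 * 2;   /* 3 cores x (ALU + multiplier) */

     double ge_mflops     = clock_mhz * flops_per_cycle;  /* 540   */
     double board_gflops  = 4 * ge_mflops / 1000.0;       /* 2.16  */
     double system_gflops = 16 * board_gflops;            /* 34.56 */

     printf("Geometry Engine: %.0f MFLOPS\n", ge_mflops);
     printf("GE board (x4):   %.2f GFLOPS\n", board_gflops);
     printf("16 pipelines:    %.2f GFLOPS\n", system_gflops);
     return 0;
 }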

The fourth stage is the Geometry-Raster FIFO, a first in first out (FIFO) buffer that merges the outputs of the four Geometry Engines into one, reassembling the outputs in the order they were issued. The FIFO is built from SDRAM and has a capacity of 4 MB, large enough to store 65,536 vertexes. The transformed vertexes are moved from this FIFO to the Raster Manager boards for triangle reassembly and setup by the Triangle Bus (also known as the Vertex Bus), which has a bandwidth of 400 MB/s.

==Raster Memory board==

The function of the Raster Memory board is to perform rasterization. It also contains the texture memory and raster memory, which is more commonly known as the framebuffer. Rasterization is performed in the Fragment Generator and the eighty Image Engines. The Fragment Generator comprises four ASIC designs: the Scan Converter (SC) ASIC, the Texel Address Calculator (TA) ASIC, the Texture Memory Controller (TM) ASIC and the Texture Fragment (TF) ASIC.

The SC ASIC and the TA ASIC perform scan conversion, color and depth interpolation, perspective-correct texture coordinate interpolation and level of detail computation on incoming data, and the results are passed to the eight TM ASICs, which are specialized memory controllers optimized for texel access. Each TM ASIC controls four SDRAMs that make up one-eighth of the texture memory. The SDRAMs used are 16 bits wide and have separate address and data buses. SDRAMs with a capacity of 4 Mb are used by Raster Manager boards with 16 MB of texture memory, while 16 Mb SDRAMs are used by Raster Manager boards with 64 MB of texture memory. The TM ASICs perform texel lookups in their SDRAMs according to the texel addresses issued by the TA ASIC. Texels from the TM ASICs are forwarded to the appropriate TF ASIC, where texture filtering, texture environment combination with interpolated color, and fog application are performed. As each SDRAM holds part of the texture memory, all 32 SDRAMs must be connected to all 80 Image Engines. To achieve this, the TM and TF ASICs implement a two-rank omega network, which reduces the number of individual paths required for the 32-to-80 sort while maintaining the same functionality.
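The texture-memory capacities follow from the chip counts above (eight TM ASICs with four SDRAMs each, 32 chips in total); the sketch below just checks the arithmetic:

 /* 32 texture SDRAM chips total; 4 Mbit and 16 Mbit parts give the
  * 16 MB and 64 MB board capacities quoted above. */
 #include <stdio.h>

 int main(void)
 {
     int chips = 8 * 4;               /* TM ASICs x SDRAMs per ASIC */
     int sizes_mbit[] = { 4, 16 };    /* per-chip capacity in Mbit  */

     for (int i = 0; i < 2; i++)
         printf("%2d Mbit chips: %d x %2d Mbit = %2d MB texture memory\n",
                sizes_mbit[i], chips, sizes_mbit[i],
                chips * sizes_mbit[i] / 8);
     return 0;
 }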

The eighty Image Engines have multiple functions. Firstly, each Image Engine controls a portion of the raster memory, which, in the case of the InfiniteReality, is a 1 MB SGRAM organized as 262,144 32-bit words. Secondly, the following OpenGL per-fragment operations are performed by the Image Engines: pixel ownership test, stencil test, depth buffer test, blending, dithering and logical operation. Lastly, the Image Engines perform anti-aliasing and accumulation buffer operations. To deliver pixel data for display, each Image Engine has a 2-bit serial bus to the Display Generator board. If one Raster Manager board is present in the pipeline, the Image Engine uses the entire width of the bus, whereas if two or more Raster Manager boards are present, the Image Engine uses half the bus. Each serial bus is actually a part of the Video Bus, which has a bandwidth of 1.2 GB/s. Four Image Engine "cores" are contained on an Image Engine ASIC, which contains nearly 488,000 logic gates, comprising 1.95 million transistors, on a 42 mm² (6.5 by 6.5 mm) die that was fabricated in a 0.35 micrometre process by VLSI Technology.

The InfiniteReality uses the RM6-16 or RM6-64 Raster Managers. Each pipeline is capable of display resolutions of 2.62, 5.24 or 10.48 million pixels, provided that one, two or four Raster Manager boards respectively are present. The raster memory can be configured to use 256, 512 or 1,024 bits per pixel. A 320 MB configuration supports a resolution of 2560 by 2048 pixels with each pixel containing 512 bits of information. In a configuration with four Raster Managers, the texture memory has a bandwidth of 15.36 GB/s and the raster memory has a bandwidth of 72.8 GB/s.
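These figures are mutually consistent: 80 Image Engines with 1 MB of SGRAM each give 80 MB per Raster Manager board, and a 2560 by 2048 framebuffer at 512 bits per pixel needs exactly the 320 MB that four boards provide. A short check:

 /* Raster-memory arithmetic from the figures above. */
 #include <stdio.h>

 int main(void)
 {
     long per_rm_bytes = 80L * 1024 * 1024;  /* 80 IEs x 1 MB SGRAM */
     long total_bytes  = 4 * per_rm_bytes;   /* four RM boards      */

     long pixels         = 2560L * 2048;
     long bits_per_pixel = 512;
     long needed_bytes   = pixels * (bits_per_pixel / 8);

     printf("raster memory: %ld MB\n", total_bytes >> 20);        /* 320 */
     printf("2560x2048 @ 512 bpp needs: %ld MB\n", needed_bytes >> 20);
     return 0;
 }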

==Display Generator board==

The DG5-2 Display Generator board contains hardware to drive up to two video outputs, which may be expanded to eight video outputs with an optional daughterboard, a configuration known as the DG5-8. The outputs are independent and each output has hardware for generating video timing, video resizing, gamma correction and digital-to-analog conversion. Digital-to-analog conversion is provided by 8-bit digital-to-analog converters that support a pixel clock frequency up to 220 MHz.
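To give a feel for what a 220 MHz pixel clock allows, the sketch below divides it by the active pixel count of a few display modes. The modes are illustrative examples (not from the source), and a real video timing needs roughly 20-25% extra clock for blanking, so achievable limits are lower:

 /* Rough upper bound on refresh rate per resolution for a 220 MHz
  * pixel clock, ignoring blanking.  Resolutions are examples only. */
 #include <stdio.h>

 int main(void)
 {
     double clock_hz = 220e6;
     long modes[][2] = { {1280, 1024}, {1920, 1200}, {2560, 2048} };

     for (int i = 0; i < 3; i++)
         printf("%ldx%ld: at most ~%.0f Hz\n", modes[i][0], modes[i][1],
                clock_hz / (modes[i][0] * modes[i][1]));
     return 0;
 }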

Data for the video outputs is provided by four ASICs that de-serialize and de-interleave the 160-bit streams into 10-bit component RGBA, 12-bit component RGBA, L16, Stereo Field Sequential (FS) or color indexes. The hardware also incorporates the cursor at this stage. A color index map with 32,768 entries is available.

==Capabilities and performance==

The InfiniteReality had several advanced capabilities:

  • 8 by 8 multi-sampled anti-aliasing
  • A maximum color depth of 48-bit RGBA
  • 16 overlay planes
  • A 24-bit floating-point Z-buffer
  • 256 to 1,024 bits of data per pixel
  • Quad-buffered stereo viewing

The InfiniteReality's performance was:

  • 11 million non-lighted, depth-buffered, anti-aliased triangles per second (40-pixel triangles in strips)
  • 8.3 million textured, depth-buffered, anti-aliased triangles per second (50-pixel triangles in strips)
  • 7+ million lighted, textured and anti-aliased triangles per second
  • 800 million trilinear mip-mapped, textured, 16-bit texel, depth buffered pixels per second
  • 750 million trilinear mip-mapped, textured, 16-bit texel, four by four sub-sample anti-aliased, depth buffered pixels per second
  • 710+ million textured and anti-aliased pixels per second
  • 300 million displayed pixels per second, distributed over one to eight outputs

==InfiniteReality2==

InfiniteReality2 is what hinv (an IRIX utility that lists the hardware present in a system) calls an InfiniteReality used in an Onyx2. The InfiniteReality2, however, was still marketed as the InfiniteReality. It was the second implementation of the InfiniteReality architecture and was introduced in late 1996. It is architecturally identical to the InfiniteReality but differs mechanically, as the Onyx2's Origin 2000-based card cage is different from the Onyx's Challenge-based card cage.

The InfiniteReality2 introduced an interface scheme that is used in rackmount Onyx2 and later systems. Instead of being connected to the host system via an FCI cable, the board set is plugged into the rear of a midplane, which can support two pipelines. The midplane has eleven slots. Slots six through eleven are for the first pipeline, which may contain one to four Raster Manager boards. Slots one through four are for the second pipeline, which may contain only one or two Raster Manager boards due to the number of slots available. Because of this, maximally configured systems use one midplane for each pipeline, to avoid restricting half of the 16 pipelines to a maximum of two Raster Manager boards. Slot five contains a Ktown board if the midplane is used in an Origin 2000-based system (Onyx2) or a Ktown2 board if the midplane is used in an Origin 3000-based system (Onyx 3000). The purpose of these boards is to interface the host system's XIO link to the Host Interface Processor ASIC on the Geometry board. These boards have two XIO ports for this purpose, with the top XIO port connected to the right pipeline and the bottom XIO port connected to the left pipeline.

==Reality==

The Reality is a cost-reduced version of the InfiniteReality2 intended to provide similar performance. Instead of using the GE14-4 Geometry Engine board and the RM7-16 or RM7-64 Raster Manager boards, the Reality used the GE14-2 Geometry Engine board and the RM8-16 or RM8-64 Raster Manager boards. The GE14-2 has two Geometry Engine Processors instead of four like the other models. The RM8-16 and RM8-64 have 16 or 64 MB of texture memory respectively and 40 MB of raster memory. The Reality was also limited in the number of Raster Manager boards it could support: one or two. When maximally configured with two RM8-64 Raster Manager boards, the Reality pipeline has 80 MB of raster memory.

==InfiniteReality2E==

The InfiniteReality2E was an upgrade of the InfiniteReality, marketed as the InfiniteReality2, introduced in 1998. It succeeded the InfiniteReality2 board set and was itself succeeded by the InfiniteReality3 in 2000, but was not discontinued until 10 April 2001.

It improves upon the InfiniteReality by replacing the GE14-4 Geometry Engine board with the GE16-4 Geometry Engine board and the RM7-16 or RM7-64 Raster Manager boards with the RM9-64 Raster Manager board. The new Geometry Engine board operated at 112 MHz, improving geometry and image processing performance. The new Raster Manager board operated at 72 MHz, improving anti-aliased pixel fill performance.

==InfiniteReality3==

InfiniteReality3 was introduced in 2000 along with the Onyx 3000 to supersede the InfiniteReality2. It was used in the Onyx2 and Onyx 3000 visualization systems. The only improvement over the previous implementation was the replacement of the RM9-64 Raster Manager with the RM10-256 Raster Manager, which has 256 MB of texture memory, four times that of the previous raster manager. When maximally configured with four Raster Managers, the InfiniteReality3 pipeline provides 320 MB of raster memory.

==InfiniteReality4==

InfiniteReality4 was introduced in 2002 to succeed the InfiniteReality3. It was used in the Onyx2, SGI Onyx 3000 and SGI Onyx 350. It is the last member of the InfiniteReality family, itself succeeded by the ATI FireGL-based UltimateVision, which was used in the Onyx4. The only improvement over the previous implementation was the replacement of the RM10-256 Raster Manager by the RM11-1024 Raster Manager, which has improved performance, 1 GB of texture memory and 2.5 GB of raster memory, four and thirty-two times that of the previous raster manager, respectively. When maximally configured with four Raster Managers, the InfiniteReality4 pipeline has 10 GB of raster memory. In a maximum configuration with 16 pipelines, the InfiniteReality4 contained 16 GB of texture memory and 160 GB of raster memory.
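These totals follow directly from the per-board figures quoted above; recomputing them:

 /* InfiniteReality4 capacity totals from the per-board figures. */
 #include <stdio.h>

 int main(void)
 {
     double texture_gb_per_pipe = 1.0;   /* RM11-1024 texture memory */
     double raster_gb_per_rm    = 2.5;   /* raster memory per board  */
     int    rms_per_pipe        = 4;
     int    pipes               = 16;

     printf("raster per pipeline: %.0f GB\n",
            raster_gb_per_rm * rms_per_pipe);                  /* 10  */
     printf("system texture:      %.0f GB\n",
            texture_gb_per_pipe * pipes);                      /* 16  */
     printf("system raster:       %.0f GB\n",
            raster_gb_per_rm * rms_per_pipe * pipes);          /* 160 */
     return 0;
 }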

==Comparison==

The figures presented in the tables are for a minimal 1-pipeline and a maximal 16-pipeline configuration, except for the Reality, which was restricted to single-pipe operation.

===Hardware===

{| class="wikitable"
! Model !! Geometry Engine board !! Raster Manager board !! Display Generator board !! Texture memory (MB) !! Raster memory (MB) !! Introduced !! Discontinued
|-
| InfiniteReality || GE12-4 || RM6-16 or RM6-64 || DG4-2 or DG4-8 || 16 to 1,024 || 80 to 5,120 || ? || 1999-09-30
|-
| InfiniteReality2 || GE14-4 || RM7-16 or RM7-64 || DG5-2 or DG5-8 || 16 to 1,024 || 80 to 5,120 || ? || ?
|-
| Reality || GE14-2 || RM8-16 or RM8-64 || DG5-2 or DG5-8 || 64 || 40 to 80 || ? || ?
|-
| InfiniteReality2E || GE16-4 || RM9-64 || DG5-2 or DG5-8 || 64 to 1,024 || 80 to 5,120 || ? || ?
|-
| InfiniteReality3 || GE16-4 || RM10-256 || DG5-2 or DG5-8 || 256 to 4,096 || 80 to 5,120 || ? || 2003-06-27
|-
| InfiniteReality4 || GE16-4 || RM11-1024 || DG5-2 or DG5-8 || 1,024 to 16,384 || 2,560 to 163,840 || ? || ?
|}

===Performance===

{| class="wikitable"
! Model !! Polygons (millions per second) !! Pixel fill (millions of pixels per second) !! Volume rendering (millions of voxels per second)
|-
| InfiniteReality || 10.9 || ? || ?
|-
| InfiniteReality2 || 10.9 || ? || ?
|-
| Reality || 5.5 || 94 to 188 (1) || 100 to 200
|-
| InfiniteReality2E || 13.1 to 210 || 192 to 6,100 || 200 to 6,400
|-
| InfiniteReality3 || 13.1 to 210 || 5,600 || 6,400
|-
| InfiniteReality4 || 13.1 to 210 || 10,200 (2) || 6,400
|}

(1) Anti-aliased, Z-buffered, textured.
(2) 8 by 8 sub-sampled anti-aliased, Z-buffered, textured, lit, 40-bit color pixels.


=Hardware aggregator=

Node boards have 2 CPUs per board.

==Known node board CPU speeds==

IP27: CPUs are mounted directly to the node board individually.

  • 180 MHz R10000 (cannot be mixed with node boards of other speeds)
  • 195 MHz R10000

IP31: CPUs are mounted in pairs (along with their respective caches) on a PIMM, a pluggable module which then mounts to the node board.

  • 250 MHz R10000
  • 300 MHz R12000
  • 350 MHz R12000 (cannot be used in configurations with more than 8 CPUs)
  • 400 MHz R12000
  • 500 MHz R14000

=PCI cards=

The PCI card cage and compatible PCI cards are very similar to the Octane's, except that the screws for the cage have a different orientation from the Octane one.

PCI cards in the card cage run at 33 MHz. They must be 5V-compatible and may be either 32- or 64-bit; the card cage has three 64-bit slots. What follows is a list of known working cards.

{| class="wikitable"
! Type of device !! Vendor name !! Model !! Description !! PCI Vendor ID !! PCI Device ID !! Notes
|-
| SCSI || Qlogic || qla1040b || Fast/Wide SCSI controller || 1077 || 1020 || This is the SCSI controller on the BASEIO board. Works "out of the box" on IRIX 6.4 and 6.5.
|-
| Fibre Channel / SCSI || Qlogic || qla2342 || Dual-port 2 Gb FC controller || 1077 || 2312 || Force a kernel recompile if it doesn't show up in hinv. Works "out of the box" on IRIX 6.5.17 and above.
|}

=Memory capacities=

Onyx2 uses the same proprietary memory as the rest of the Origin 200/2000 series of computers. To distinguish between the different capacities, they were color-coded across the top edge of each DIMM:

  • Red: 256 MB
  • White/Silver: 128 MB
  • Green: 64 MB

=Sample hinv (from the rack system pictured above)=

Location: /hw/module/1/slot/n1/node
        MODULEID Board: barcode K0027261   part              rev   
    IP31PIMMR14K Board: barcode MJG000     part 030-1547-002 rev  E
       8P12_MPLN Board: barcode HXP697     part 030-1535-001 rev  B
            IP31 Board: barcode MHZ690     part 030-1523-001 rev  C
Location: /hw/module/1/slot/n2/node
            IP31 Board: barcode MJA682     part 030-1523-001 rev  C
    IP31PIMMR14K Board: barcode MJV829     part 030-1547-002 rev  E
Location: /hw/module/1/slot/n3/node
            IP31 Board: barcode MHZ231     part 030-1523-001 rev  C
    IP31PIMMR14K Board: barcode MJJ983     part 030-1547-002 rev  E
Location: /hw/module/1/slot/n4/node
            IP31 Board: barcode JRP729     part 030-1255-003 rev  D
    IP31PIMMR14K Board: barcode DPD869     part 030-1547-002 rev  D
Location: /hw/module/1/slot/r1/router
      ROUTER_IR1 Board: barcode KLC273     part 030-0841-003 rev  C
Location: /hw/module/1/slot/r2/router
      ROUTER_IR1 Board: barcode KDK226     part 030-0841-003 rev  B
Location: /hw/module/1/slot/io2/pci_xio
         PCI_XIO Board: barcode KDG223     part 030-1062-002 rev  E
Location: /hw/module/1/slot/io8/mscsi
           MSCSI Board: barcode KCP460     part 030-1243-001 rev  M
Location: /hw/module/1/slot/io7/divo
            DIVO Board: barcode KAH156     part 030-1305-001 rev  E
Location: /hw/module/1/slot/io1/baseio
          BASEIO Board: barcode DYZ782     part 030-0734-002 rev  N
             MIO Board: barcode EYZ131     part 030-0880-003 rev  E
Location: /hw/module/1/slot/io9/fibre_channel
   FIBRE_CHANNEL Board: barcode JHT635     part 030-0927-003 rev  E
Location: /hw/module/1/slot/io3/kona
          GE16-4 Board: barcode KVZ553     part 030-1398-001 rev  E
           KTOWN Board: barcode KFR848     part 030-1067-001 rev  F
Location: /hw/module/2/slot/n1/node
        MODULEID Board: barcode K0019167   part              rev   
   IP31PIMMR12KS Board: barcode LAT691     part 030-1423-002 rev  G
            IP31 Board: barcode LAX165     part 030-1523-001 rev  C
       8P12_MPLN Board: barcode FXZ861     part 030-0762-006 rev  K
Location: /hw/module/2/slot/n2/node
   IP31PIMMR12KS Board: barcode HGL932     part 030-1423-002 rev  G
            IP31 Board: barcode KSB603     part 030-1523-001 rev  C
Location: /hw/module/2/slot/n3/node
            IP31 Board: barcode KRT958     part 030-1523-001 rev  C
   IP31PIMMR12KS Board: barcode KRK403     part 030-1423-002 rev  F
Location: /hw/module/2/slot/n4/node
   IP31PIMMR12KS Board: barcode KRH708     part 030-1423-002 rev  G
            IP31 Board: barcode KSB450     part 030-1523-001 rev  C
Location: /hw/module/2/slot/r1/router
      ROUTER_IR1 Board: barcode GWR211     part 030-0841-003 rev  B
Location: /hw/module/2/slot/r2/router
      ROUTER_IR1 Board: barcode GTL148     part 030-0841-003 rev  B
Location: /hw/module/2/slot/io1/baseio
          BASEIO Board: barcode HSG629     part 030-1124-002 rev  M
Location: /hw/module/2/slot/io3/mscsi
           MSCSI Board: barcode HSK846     part 030-1243-001 rev  M
Location: /hw/module/3/slot/n1/node
        MODULEID Board: barcode K0009218   part              rev   
            IP31 Board: barcode KRS981     part 030-1523-001 rev  C
   IP31PIMMR12KS Board: barcode KRH713     part 030-1423-002 rev  G
       8P12_MPLN Board: barcode DWZ328     part 013-1547-003 rev  D
Location: /hw/module/3/slot/n2/node
   IP31PIMMR12KS Board: barcode LAJ899     part 030-1423-002 rev  G
            IP31 Board: barcode LAJ448     part 030-1523-001 rev  C
Location: /hw/module/3/slot/n3/node
   IP31PIMMR12KS Board: barcode LAT486     part 030-1423-002 rev  G
            IP31 Board: barcode JZP191     part 030-1523-001 rev  C
Location: /hw/module/3/slot/n4/node
   IP31PIMMR12KS Board: barcode LAK445     part 030-1423-002 rev  G
            IP31 Board: barcode LAK857     part 030-1523-001 rev  C
Location: /hw/module/3/slot/r1/router
      ROUTER_IR1 Board: barcode MDS015     part 030-0841-003 rev  D
Location: /hw/module/3/slot/r2/router
      ROUTER_IR1 Board: barcode MDM991     part 030-0841-003 rev  D
Location: /hw/module/3/slot/io1/baseio
          BASEIO Board: barcode FSN491     part 030-0734-002 rev  K
             MIO Board: barcode GWN986     part 030-0880-003 rev  F
Location: /hw/module/3/slot/io3/mscsi
           MSCSI Board: barcode GSR276     part 030-1243-001 rev  G
Processor 0: 500 MHZ IP27
CPU: MIPS R14000 Processor Chip Revision: 1.4
FPU: MIPS R14010 Floating Point Chip Revision: 1.4
Processor 1: 500 MHZ IP27
CPU: MIPS R14000 Processor Chip Revision: 1.4
FPU: MIPS R14010 Floating Point Chip Revision: 1.4
Processor 2: 500 MHZ IP27
CPU: MIPS R14000 Processor Chip Revision: 1.4
FPU: MIPS R14010 Floating Point Chip Revision: 1.4
Processor 3: 500 MHZ IP27
CPU: MIPS R14000 Processor Chip Revision: 1.4
FPU: MIPS R14010 Floating Point Chip Revision: 1.4
Processor 4: 500 MHZ IP27
CPU: MIPS R14000 Processor Chip Revision: 1.4
FPU: MIPS R14010 Floating Point Chip Revision: 1.4
Processor 5: 500 MHZ IP27
CPU: MIPS R14000 Processor Chip Revision: 1.4
FPU: MIPS R14010 Floating Point Chip Revision: 1.4
Processor 6: 500 MHZ IP27
CPU: MIPS R14000 Processor Chip Revision: 1.4
FPU: MIPS R14010 Floating Point Chip Revision: 1.4
Processor 7: 500 MHZ IP27
CPU: MIPS R14000 Processor Chip Revision: 1.4
FPU: MIPS R14010 Floating Point Chip Revision: 1.4
Processor 8: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 9: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 10: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 11: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 12: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 13: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 14: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 15: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 16: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 17: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 18: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 19: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 20: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 21: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 22: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 23: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
CPU 0 at Module 1/Slot 1/Slice A: 500 Mhz MIPS R14000 Processor Chip (enabled)
  Processor revision: 1.4. Scache: Size 8 MB Speed 250 Mhz  Tap 0xa
CPU 1 at Module 1/Slot 1/Slice B: 500 Mhz MIPS R14000 Processor Chip (enabled)
  Processor revision: 1.4. Scache: Size 8 MB Speed 250 Mhz  Tap 0xa
CPU 2 at Module 1/Slot 2/Slice A: 500 Mhz MIPS R14000 Processor Chip (enabled)
  Processor revision: 1.4. Scache: Size 8 MB Speed 250 Mhz  Tap 0xa
CPU 3 at Module 1/Slot 2/Slice B: 500 Mhz MIPS R14000 Processor Chip (enabled)
  Processor revision: 1.4. Scache: Size 8 MB Speed 250 Mhz  Tap 0xa
CPU 4 at Module 1/Slot 3/Slice A: 500 Mhz MIPS R14000 Processor Chip (enabled)
  Processor revision: 1.4. Scache: Size 8 MB Speed 250 Mhz  Tap 0xa
CPU 5 at Module 1/Slot 3/Slice B: 500 Mhz MIPS R14000 Processor Chip (enabled)
  Processor revision: 1.4. Scache: Size 8 MB Speed 250 Mhz  Tap 0xa
CPU 6 at Module 1/Slot 4/Slice A: 500 Mhz MIPS R14000 Processor Chip (enabled)
  Processor revision: 1.4. Scache: Size 8 MB Speed 250 Mhz  Tap 0xa
CPU 7 at Module 1/Slot 4/Slice B: 500 Mhz MIPS R14000 Processor Chip (enabled)
  Processor revision: 1.4. Scache: Size 8 MB Speed 250 Mhz  Tap 0xa
CPU 8 at Module 2/Slot 1/Slice A: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 9 at Module 2/Slot 1/Slice B: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 10 at Module 2/Slot 2/Slice A: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 11 at Module 2/Slot 2/Slice B: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 12 at Module 2/Slot 3/Slice A: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 13 at Module 2/Slot 3/Slice B: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 14 at Module 2/Slot 4/Slice A: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 15 at Module 2/Slot 4/Slice B: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 16 at Module 3/Slot 1/Slice A: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 17 at Module 3/Slot 1/Slice B: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 18 at Module 3/Slot 2/Slice A: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 19 at Module 3/Slot 2/Slice B: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 20 at Module 3/Slot 3/Slice A: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 21 at Module 3/Slot 3/Slice B: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 22 at Module 3/Slot 4/Slice A: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 23 at Module 3/Slot 4/Slice B: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
Main memory size: 38400 Mbytes
Instruction cache size: 32 Kbytes
Data cache size: 32 Kbytes
Secondary unified instruction/data cache size: 8 Mbytes
Memory at Module 1/Slot 1: 4096 MB (enabled)
  Bank 0 contains 512 MB (Standard) DIMMS (enabled)
  Bank 1 contains 512 MB (Standard) DIMMS (enabled)
  Bank 2 contains 512 MB (Standard) DIMMS (enabled)
  Bank 3 contains 512 MB (Standard) DIMMS (enabled)
  Bank 4 contains 512 MB (Standard) DIMMS (enabled)
  Bank 5 contains 512 MB (Standard) DIMMS (enabled)
  Bank 6 contains 512 MB (Standard) DIMMS (enabled)
  Bank 7 contains 512 MB (Standard) DIMMS (enabled)
Memory at Module 1/Slot 2: 4096 MB (enabled)
  Bank 0 contains 512 MB (Standard) DIMMS (enabled)
  Bank 1 contains 512 MB (Standard) DIMMS (enabled)
  Bank 2 contains 512 MB (Standard) DIMMS (enabled)
  Bank 3 contains 512 MB (Standard) DIMMS (enabled)
  Bank 4 contains 512 MB (Standard) DIMMS (enabled)
  Bank 5 contains 512 MB (Standard) DIMMS (enabled)
  Bank 6 contains 512 MB (Standard) DIMMS (enabled)
  Bank 7 contains 512 MB (Standard) DIMMS (enabled)
Memory at Module 1/Slot 3: 4096 MB (enabled)
  Bank 0 contains 512 MB (Standard) DIMMS (enabled)
  Bank 1 contains 512 MB (Standard) DIMMS (enabled)
  Bank 2 contains 512 MB (Standard) DIMMS (enabled)
  Bank 3 contains 512 MB (Standard) DIMMS (enabled)
  Bank 4 contains 512 MB (Standard) DIMMS (enabled)
  Bank 5 contains 512 MB (Standard) DIMMS (enabled)
  Bank 6 contains 512 MB (Standard) DIMMS (enabled)
  Bank 7 contains 512 MB (Standard) DIMMS (enabled)
Memory at Module 1/Slot 4: 4096 MB (enabled)
  Bank 0 contains 512 MB (Standard) DIMMS (enabled)
  Bank 1 contains 512 MB (Standard) DIMMS (enabled)
  Bank 2 contains 512 MB (Standard) DIMMS (enabled)
  Bank 3 contains 512 MB (Standard) DIMMS (enabled)
  Bank 4 contains 512 MB (Standard) DIMMS (enabled)
  Bank 5 contains 512 MB (Standard) DIMMS (enabled)
  Bank 6 contains 512 MB (Standard) DIMMS (enabled)
  Bank 7 contains 512 MB (Standard) DIMMS (enabled)
Memory at Module 2/Slot 1: 4096 MB (enabled)
  Bank 0 contains 512 MB (Standard) DIMMS (enabled)
  Bank 1 contains 512 MB (Standard) DIMMS (enabled)
  Bank 2 contains 512 MB (Standard) DIMMS (enabled)
  Bank 3 contains 512 MB (Standard) DIMMS (enabled)
  Bank 4 contains 512 MB (Standard) DIMMS (enabled)
  Bank 5 contains 512 MB (Standard) DIMMS (enabled)
  Bank 6 contains 512 MB (Standard) DIMMS (enabled)
  Bank 7 contains 512 MB (Standard) DIMMS (enabled)
Memory at Module 2/Slot 2: 4096 MB (enabled)
  Bank 0 contains 512 MB (Standard) DIMMS (enabled)
  Bank 1 contains 512 MB (Standard) DIMMS (enabled)
  Bank 2 contains 512 MB (Standard) DIMMS (enabled)
  Bank 3 contains 512 MB (Standard) DIMMS (enabled)
  Bank 4 contains 512 MB (Standard) DIMMS (enabled)
  Bank 5 contains 512 MB (Standard) DIMMS (enabled)
  Bank 6 contains 512 MB (Standard) DIMMS (enabled)
  Bank 7 contains 512 MB (Standard) DIMMS (enabled)
Memory at Module 2/Slot 3: 4096 MB (enabled)
  Bank 0 contains 512 MB (Standard) DIMMS (enabled)
  Bank 1 contains 512 MB (Standard) DIMMS (enabled)
  Bank 2 contains 512 MB (Standard) DIMMS (enabled)
  Bank 3 contains 512 MB (Standard) DIMMS (enabled)
  Bank 4 contains 512 MB (Standard) DIMMS (enabled)
  Bank 5 contains 512 MB (Standard) DIMMS (enabled)
  Bank 6 contains 512 MB (Standard) DIMMS (enabled)
  Bank 7 contains 512 MB (Standard) DIMMS (enabled)
Memory at Module 2/Slot 4: 4096 MB (enabled)
  Bank 0 contains 512 MB (Standard) DIMMS (enabled)
  Bank 1 contains 512 MB (Standard) DIMMS (enabled)
  Bank 2 contains 512 MB (Standard) DIMMS (enabled)
  Bank 3 contains 512 MB (Standard) DIMMS (enabled)
  Bank 4 contains 512 MB (Standard) DIMMS (enabled)
  Bank 5 contains 512 MB (Standard) DIMMS (enabled)
  Bank 6 contains 512 MB (Standard) DIMMS (enabled)
  Bank 7 contains 512 MB (Standard) DIMMS (enabled)
Memory at Module 3/Slot 1: 4096 MB (enabled)
  Bank 0 contains 512 MB (Standard) DIMMS (enabled)
  Bank 1 contains 512 MB (Standard) DIMMS (enabled)
  Bank 2 contains 512 MB (Standard) DIMMS (enabled)
  Bank 3 contains 512 MB (Standard) DIMMS (enabled)
  Bank 4 contains 512 MB (Standard) DIMMS (enabled)
  Bank 5 contains 512 MB (Standard) DIMMS (enabled)
  Bank 6 contains 512 MB (Standard) DIMMS (enabled)
  Bank 7 contains 512 MB (Standard) DIMMS (enabled)
Memory at Module 3/Slot 2: 1024 MB (enabled)
  Bank 0 contains 512 MB (Standard) DIMMS (enabled)
  Bank 1 contains 512 MB (Standard) DIMMS (enabled)
Memory at Module 3/Slot 3: 256 MB (enabled)
  Bank 0 contains 128 MB (Standard) DIMMS (enabled)
  Bank 1 contains 128 MB (Standard) DIMMS (enabled)
  Bank 2 contains 128 MB (Standard) DIMMS (disabled)
  Bank 3 contains 128 MB (Standard) DIMMS (disabled)
Memory at Module 3/Slot 4: 256 MB (enabled)
  Bank 0 contains 256 MB (Standard) DIMMS (enabled)
ROUTER in Module 1/Slot 2: Revision 2: Active Ports [2,3,4,5,6] (enabled)
ROUTER in Module 1/Slot 4: Revision 2: Active Ports [2,3,4,5,6] (enabled)
ROUTER in Module 2/Slot 2: Revision 2: Active Ports [3,4,5,6] (enabled)
ROUTER in Module 2/Slot 4: Revision 2: Active Ports [1,3,4,5,6] (enabled)
ROUTER in Module 3/Slot 2: Revision 2: Active Ports [2,4,5,6] (enabled)
ROUTER in Module 3/Slot 4: Revision 2: Active Ports [1,2,4,5,6] (enabled)
Integral SCSI controller 2: Version QL1040B (rev. 2), differential
Integral SCSI controller 3: Version QL1040B (rev. 2), differential
Integral SCSI controller 4: Version QL1040B (rev. 2), differential
Integral SCSI controller 5: Version QL1040B (rev. 2), differential
Integral SCSI controller 8: Version QL1040B (rev. 2), single ended
  Disk drive: unit 1 on SCSI controller 8 (unit 1)
  Disk drive: unit 2 on SCSI controller 8 (unit 2)
  Disk drive: unit 3 on SCSI controller 8 (unit 3)
  CDROM: unit 6 on SCSI controller 8
Integral SCSI controller 9: Version QL1040B (rev. 2), single ended
Integral SCSI controller 6: Version Fibre Channel AIC-1160, revision 2
Integral SCSI controller 0: Version QL1040B (rev. 2), single ended
  Disk drive: unit 1 on SCSI controller 0 (unit 1)
  Disk drive: unit 2 on SCSI controller 0 (unit 2)
  CDROM: unit 6 on SCSI controller 0
Integral SCSI controller 1: Version QL1040B (rev. 2), single ended
  Disk drive: unit 4 on SCSI controller 1 (unit 4)
  Disk drive: unit 5 on SCSI controller 1 (unit 5)
Integral SCSI controller 15: Version QL1040B (rev. 2), single ended
Integral SCSI controller 14: Version QL1040B (rev. 2), single ended
  Disk drive: unit 1 on SCSI controller 14 (unit 1)
Integral SCSI controller 10: Version QL1040B (rev. 2), differential
Integral SCSI controller 11: Version QL1040B (rev. 2), differential
Integral SCSI controller 12: Version QL1040B (rev. 2), differential
Integral SCSI controller 13: Version QL1040B (rev. 2), differential
Integral SCSI controller 16: Version QL1040B (rev. 2), differential
Integral SCSI controller 17: Version QL1040B (rev. 2), differential
Integral SCSI controller 18: Version QL1040B (rev. 2), differential
Integral SCSI controller 19: Version QL1040B (rev. 2), differential
Integral SCSI controller 7: Version Fibre Channel AIC-1160, revision 2
IOC3/IOC4 serial port: tty5
IOC3/IOC4 serial port: tty6
IOC3/IOC4 serial port: tty1
IOC3/IOC4 serial port: tty2
IOC3/IOC4 serial port: tty9
IOC3/IOC4 serial port: tty10
IOC3/IOC4 serial port: tty3
IOC3/IOC4 serial port: tty4
IOC3/IOC4 serial port: tty7
IOC3/IOC4 serial port: tty8
IOC3 parallel port: plp1
IOC3 parallel port: plp2
Graphics board: InfiniteReality3
Gigabit Ethernet: eg0, module 1, PCI slot 2, firmware version 12.4.10
Fast Ethernet: ef1, version 1, module 2, slot io1, pci 2
Integral Fast Ethernet: ef0, version 1, module 1, slot io1, pci 2
Fast Ethernet: ef2, version 1, module 3, slot io1, pci 2
Iris Audio Processor: version RAD revision 7.0, number 1
Iris Audio Processor: version RAD revision 7.0, number 2
Origin PCI XIO board, module 1 slot 2: Revision 4
  PCI Adapter ID (vendor 0x133d, device 0x0001) PCI slot 0
  PCI Adapter ID (vendor 0x10a9, device 0x0009) PCI slot 2
Origin MSCSI board, module 1 slot 8: Revision 4
Origin BASEIO board, module 1 slot 1: Revision 3
Origin BASEIO board, module 2 slot 1: Revision 4
Origin BASEIO board, module 3 slot 1: Revision 3
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 0
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 1
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 2
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 3
  PCI Adapter ID (vendor 0x10a9, device 0x0002) PCI slot 0
  PCI Adapter ID (vendor 0x10a9, device 0x0002) PCI slot 2
Origin FIBRE CHANNEL board, module 1 slot 9: Revision 4
  PCI Adapter ID (vendor 0x10a9, device 0x0003) PCI slot 2
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 0
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 1
Origin MSCSI board, module 2 slot 3: Revision 4
  PCI Adapter ID (vendor 0x9004, device 0x1160) PCI slot 0
  PCI Adapter ID (vendor 0x9004, device 0x1160) PCI slot 1
  PCI Adapter ID (vendor 0x10a9, device 0x0003) PCI slot 6
  PCI Adapter ID (vendor 0x10a9, device 0x0003) PCI slot 2
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 0
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 1
  PCI Adapter ID (vendor 0x10a9, device 0x0005) PCI slot 7
  PCI Adapter ID (vendor 0x10a9, device 0x0003) PCI slot 6
  PCI Adapter ID (vendor 0x10a9, device 0x0003) PCI slot 2
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 0
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 1
  PCI Adapter ID (vendor 0x10a9, device 0x0005) PCI slot 7
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 0
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 1
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 2
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 3
Origin MSCSI board, module 3 slot 3: Revision 3
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 0
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 1
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 2
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 3
DIVO Video: controller 0 unit 0: Input, Output
IOC3/IOC4 external interrupts: 2
IOC3/IOC4 external interrupts: 1
IOC3/IOC4 external interrupts: 3
HUB in Module 1/Slot 1: Revision 6 Speed 100.00 Mhz (enabled)
HUB in Module 1/Slot 2: Revision 6 Speed 100.00 Mhz (enabled)
HUB in Module 1/Slot 3: Revision 6 Speed 100.00 Mhz (enabled)
HUB in Module 1/Slot 4: Revision 6 Speed 100.00 Mhz (enabled)
HUB in Module 2/Slot 1: Revision 6 Speed 100.00 Mhz (enabled)
HUB in Module 2/Slot 2: Revision 6 Speed 100.00 Mhz (enabled)
HUB in Module 2/Slot 3: Revision 6 Speed 100.00 Mhz (enabled)
HUB in Module 2/Slot 4: Revision 6 Speed 100.00 Mhz (enabled)
HUB in Module 3/Slot 1: Revision 6 Speed 100.00 Mhz (enabled)
HUB in Module 3/Slot 2: Revision 6 Speed 100.00 Mhz (enabled)
HUB in Module 3/Slot 3: Revision 6 Speed 100.00 Mhz (enabled)
HUB in Module 3/Slot 4: Revision 6 Speed 100.00 Mhz (enabled)
IP27prom in Module 1/Slot n1: Revision 6.156
IP27prom in Module 1/Slot n2: Revision 6.156
IP27prom in Module 1/Slot n3: Revision 6.156
IP27prom in Module 1/Slot n4: Revision 6.156
IO6prom on Global Master Baseio in Module 1/Slot io1: Revision 6.156
IP27prom in Module 2/Slot n1: Revision 6.156
IP27prom in Module 2/Slot n2: Revision 6.156
IP27prom in Module 2/Slot n3: Revision 6.156
IP27prom in Module 2/Slot n4: Revision 6.156
IP27prom in Module 3/Slot n1: Revision 6.156
IP27prom in Module 3/Slot n2: Revision 6.156
IP27prom in Module 3/Slot n3: Revision 6.156
IP27prom in Module 3/Slot n4: Revision 6.156

=Diagnostics=

Try stripping the Onyx2 until you get a minimum configuration that boots without error.

Remove:

  • Directory RAM
  • All standard RAM except the pair in Bank 0 on each node <your hinv indicates all Bank 0s were working>
  • The Graphics module
  • The IO6G <if you still have the IO6 to replace it with>
  • The MENET and FC boards
  • The HD that contains the failed IRIX install
  • The external CD
  • If necessary, all but one nodeboard

From this point, make and test each change/reconfiguration *one* step at a time - it will take more time, but it will also enable you to make more sense of any errors.

Connect a serial terminal, enable a *large* scrollback buffer in the terminal program, and save each session.

  1. Boot to the PROM monitor and issue "resetenv".
  2. Enter POD mode from the PROM command line by entering "pod", then:
     • "go cac"
     • "clearalllogs"
     • "initalllogs"
     • "flush"
     • "reset" (the system will reset)
  3. When it restarts, stop in the PROM and run "enableall", followed by "update" at the PROM command line. <NOTE: repeat this 3-step process after *every* hardware error>

Reboot - are there any error messages?

If so - what are they? Stop and report back to the forums.

If not, install the IO6G and graphics board <but *nothing* else yet, and do not connect keyboard, mouse, or monitor>. Boot to the PROM monitor and "update" the PROM hardware inventory. Boot again - if errors appear, report back.

If no errors appear during the boot to the PROM, power down, re-install the boot drive, restart the system, clear/prep the drive and install IRIX. (What revision is your install set, btw?)

If there are install errors, stop and report back.

If not, connect a keyboard, mouse and monitor, leave the serial terminal connected for now, and attempt to boot IRIX.

If booting IRIX is unsuccessful, what errors appeared?

If the IRIX boot was successful, test each RAM set in Bank 0 of a nodeboard - *no* Directory RAM yet. If any set gives errors, record the error message, init the POD log, update the PROM inventory, and test the remaining sets.

Once you have eliminated any problem RAM, try the RAM that passed in the other memory banks. If there are any errors during this process, try another known-good set in the problem bank. If the problem persists and cleaning the slot(s) didn't help, skip the bank or replace the nodeboard.

Once the RAM is tested and running without error, reinstall the MENET and FC boards. You can also reinstall the Directory RAM, but in an 8-processor system it does little beyond using electricity and producing heat.

BTW - when you remove nodeboards, the compression connectors labeled "Connector Actuation 7/64 Hex" should be released first, then the Phillips-head machine screws at the top and bottom of each board.

When you install a nodeboard, reverse the process. Tighten the machine screws first, then the compression bolts. I alternate turning each bolt in a pair a few turns at a time so the connector is seated evenly, but *do not* over-tighten. Following this procedure prevents the compression connector from having to support the weight of the nodeboard during removal/installation.


=External links=


futuretech
