
Mac OS Runtime Architectures

From Higher Intellect Vintage Wiki

Latest revision as of 18:36, 25 September 2020

This article describes the Mac OS runtime architecture based upon the Code Fragment Manager (CFM) as well as the original classic 68K runtime architecture.

  • The CFM-based runtime architecture was originally conceived and designed to run on PowerPC-based computers running the Mac OS. A 68K implementation, called CFM-68K, was later created to allow 68K-based machines to run CFM-based code.
  • The classic 68K runtime architecture is the architecture created for the original 68K-based Macintosh computer.

A runtime architecture is a fundamental set of rules that defines how software operates. These rules define

  • how to address code and data
  • how to load, track, and manage programs in memory (multiple applications and so on)
  • how compilers should generate code (for example, whether self-modifying code is allowed)
  • how to invoke certain system services

Architectures are platform-independent, although the implementation of an architecture may vary from machine to machine depending on the features (or constraints) available.

CFM-Based Runtime Architecture

The CFM-based runtime architecture relies on fragments and the Code Fragment Manager (CFM) for its operation. This architecture has been implemented as the default architecture for PowerPC-based Mac OS computers and an optional one, CFM-68K, for 68K-based machines. The key concepts are identical for both implementations, so you should read this section if you plan to write either PowerPC or CFM-68K code.

In the CFM-based architecture, a fragment is the basic unit of executable code and its associated data. All fragments share fundamental properties such as basic structure and method of addressing code and data. The major advantage of a fragment-based architecture is that a fragment can easily access code or data contained in another fragment. For example, a fragment can import routines or data items from another fragment or export them for another fragment's use. In addition, fragments that export items may be shared among multiple clients.

Note: The term fragment is not intended to suggest that the block of code and data is in any way small, detached, or incomplete. Fragments can be of virtually any size, and they are complete, executable entities. The term fragment was chosen to avoid confusion with the terms already used in Inside Macintosh volumes to describe executable code (such as component and module).

The Code Fragment Manager handles fragment preparation, which involves bringing a fragment into memory and making it ready for execution. Fragments can be grouped by use into applications and shared libraries, but fundamentally the Code Fragment Manager treats them alike.

Fragment-based applications are launched from the Finder. Typically they have a user interface and use event-driven programming to control their execution.

A shared library, however, is a fragment that exports code and data for use by other fragments. Unlike a traditional static library, which the linker includes in the application during the build process, a shared library remains a separate entity. For a shared library, the linker inserts a reference to an imported function or data item into the client fragment. When the fragment is prepared, the Code Fragment Manager creates incarnations of the shared libraries required by the fragment and binds all references to imported code and data to addresses in the appropriate libraries. A shared library is stored independently of the fragment that uses it and can therefore be shared among multiple clients.

Note: Shared libraries are sometimes referred to as dynamically linked libraries (DLLs), since the application and the externally referenced code or data are linked together dynamically when the application launches.

Using a shared library offers many benefits based on the fact that its code is not directly linked into one or more fragments but exists as a separate entity that multiple fragments can address at runtime. If you are developing several CFM-based applications that have parts of their source code in common, you should consider packaging all the common code into a shared library.

Here are some ways to take advantage of shared libraries:

  • An application framework can be packaged as a shared library. This potentially saves a good deal of disk space, because the library resides only once on disk, where it can be addressed by multiple applications, rather than being linked physically into numerous applications.
  • System functions and tools, such as OpenDoc, can be packaged as shared libraries.
  • Updates and bug fixes for a single library can be released without recompiling and redistributing every application that uses the library.

Shared libraries come in two basic forms:

  • Import libraries. These contain code and data that your application requires to run. The Code Fragment Manager automatically prepares these libraries at runtime. Import libraries do not occupy application memory but are stored separately.
  • Plug-ins. These are libraries that provide optional services, such as a spelling checker for a word processor. The application must make explicit calls to the Code Fragment Manager to prepare these libraries and must then find the symbols associated with the libraries. Plug-ins are sometimes referred to as drop-in additions or extensions.

Note: Although the terms are similar, shared library and import library are not interchangeable. An import library is a shared library, but a shared library is not necessarily an import library.

In the CFM-based runtime architecture, the Code Fragment Manager handles the manipulation of fragments. Some of its functions include

  • mapping fragments into memory and releasing them when no longer needed
  • resolving references to symbols imported from other fragments
  • providing support for special initialization and termination routines

Fragments can be shared within a process or between two or more processes. A process defines the scope of an independently running program. Typically each process contains a separate application and any related plug-ins.

The physical incarnation of a fragment within a process is called a connection. A fragment may have several unique connections, each local to a particular process. Each connection is assigned a connection ID.

Fragments are physically stored in containers, which can be any kind of storage area accessible by the Code Fragment Manager. For example, in System 7 the system software import library InterfaceLib is stored in the ROM of a PowerPC-based Macintosh computer. Other import libraries are typically stored in files of type 'shlb'. Fragments containing executable code are usually stored in the data fork of a file, although it is possible to store a fragment as a resource in the resource fork.

Closures

The Code Fragment Manager uses the concept of a closure when handling fragments. A closure is essentially a set of connection IDs, grouped in the order in which they are prepared. The connections represented by a closure are the root fragment, which is the initial fragment the Code Fragment Manager is called to prepare, and any import libraries the root fragment requires to resolve its symbol references.

During the fragment preparation process, the Code Fragment Manager automatically prepares all the connections required to make up a closure. This process occurs whether the Code Fragment Manager is called by the system (application launch) or programmatically from your code (for example, when preparing a plug-in).

The Structure of Fragments

Every fragment can contain separate code and data sections. A code or data section can be up to 4 GB in size. Code and data sections do not have to be contiguous in memory.

Note: Since all fragments can contain both code and data sections, any fragment can contain global variables.

A code section contains position-independent executable code (that is, code that is independent of its own memory location and the location of its associated data). Code sections are read-only, so fragments can be stored in ROM or file-mapped and paged in from disk as necessary.

A data section is typically allocated in the application heap. Each data section may be instantiated multiple times, creating a separate copy for each connection associated with the fragment. An import library's data section may also be placed into the system heap or temporary memory (when systemwide instantiation is selected).

Although a fragment's code and data sections can be located anywhere in memory, those sections cannot be moved within memory once they are prepared. The Code Fragment Manager must resolve any dependencies a fragment might have on other fragments, and this preparation involves placing pointers to imported code and data into the fragment's data section. To avoid having to prepare fragments in this way more than once, the Mac OS requires that a prepared fragment remain stationary as long as it stays in memory.

Note: Accelerated resources, which model the behavior of classic 68K resources, do not have to be fixed in memory between calls.

CFM-68K Application Structure

Although CFM-68K runtime shared libraries are virtually identical to their PowerPC counterparts, CFM-68K runtime applications are hybrids that retain the segmented form of classic 68K applications.

CFM-68K runtime applications use some classic 68K structures ('CODE' resources, for example), but many of these structures have been modified for the CFM-based architecture. CFM-68K applications have different segment headers and jump tables, as well as a new table for transition vectors. The %A5Init segment does not exist in CFM-68K applications, and the 'CODE'0 resource does not hold the jump table. The following sections describe the CFM-68K application structure in detail.

CFM-68K Shared Library Structure

In some development environments, creating a CFM-68K shared library involves first creating a segmented version of the library and then flattening it to produce a contiguous program that is stored in the file's data fork. In MPW, the mechanism for flattening segmented shared libraries is the MakeFlat tool. This section describes what conversions are necessary to go from a segmented state to a flattened state and how MakeFlat implements these conversions.

You need to read this section in either of these two cases:

  • You want to understand how the MPW MakeFlat tool flattens CFM-68K shared libraries.
  • You are writing a library flattening tool and want to understand what conversions are necessary.

An unflattened shared library has a structure very similar to that of a CFM-68K runtime application. The main differences are as follows:

  • The transition vectors are 8 bytes long instead of 12.
  • The PEF container's data section is not compressed.
  • The 'cfrg'0 resource indicates that the fragment is a library, not an application.

The structure changes radically, however, when you flatten the segmented library using the MakeFlat tool. MakeFlat makes the following changes to a segmented shared library:

  • Converts the shared library's 'CODE' resources (except for 'CODE'0 and 'CODE'6) into code sections in the output PEF container.
  • Modifies the PEF relocations.
  • Converts jump table entries and transition vectors to their flattened state.
  • Compresses the PEF container's data section.
  • Creates a new 'cfrg'0 resource specifying the new location of the PEF container.
  • Adds a debug section to the output PEF container so you can use the 68K Macintosh Debugger to debug shared libraries.
  • Adds code to properly call static constructor or destructor routines if they exist in the shared library.

After making these changes, MakeFlat writes the PEF container to the data fork of the output file.

Classic 68K Runtime Architecture

The classic 68K runtime architecture is the original Macintosh runtime architecture, designed for computers running a Motorola 68000-series microprocessor. Applications are stored as segments that can be loaded into the application heap as necessary. The application space contains the application heap, the application stack, and the A5 world.

Every classic 68K application contains an A5 world, an area of memory that stores the following items:

  • the jump table, which allows the application to make calls between segments
  • the application's global variables
  • the application's QuickDraw global variables, which contain information about the drawing environment
  • the application parameters, which are reserved for use by the Mac OS

The data is referenced as offsets from the value of the A5 register, hence the name A5 world. The application's global variables and QuickDraw global variables are referenced with negative offsets from A5, while application parameters and jump table entries are referenced with positive offsets.

68K compilers typically generate PC-relative instructions for intrasegment references. This restricts the size of segments to 32 KB because the PC-relative instructions on the MC68000 processor use a 16-bit offset. Similarly, references to addresses expressed as offsets from the address stored in A5 are also limited to 16-bit offsets on the MC68000 processor.

Since references to the jump table are expressed as positive offsets from A5, the 16-bit offset effectively limits the size of the jump table to 32 KB. References to global variables are expressed as negative offsets from A5, so the size of the global data area is limited to 32 KB as well.

The Resource Manager originally limited resources to 32 KB, so 16-bit offsets were guaranteed to be sufficient.

The classic 68K runtime architecture reflects the need for maximum memory efficiency in the original Macintosh computer, which had 128 KB of RAM and an MC68000 CPU. To run large applications in this limited memory environment, Macintosh applications were broken up into segments ('CODE' resources) that could be loaded into the application heap as necessary.

When you compile and link a program, the linker places your program's routines into code segments and constructs a 'CODE' resource for each program segment. The Process Manager loads some code segments into memory when you launch an application. Later, if the code calls a routine stored in an unloaded segment, the Segment Manager loads that segment into memory. These operations occur automatically, using information stored in the application's jump table and in the individual code segments themselves.

Note that although the Segment Manager loads segments automatically, it does not unload segments. The Segment Manager locks the segment when it is first loaded into memory and any time thereafter when routines in that segment are executing. This locking prevents the segment from being moved during heap compaction and from being purged during heap purging.

Your development environment lets you specify compiler directives to indicate which routines should be grouped together in the same segment. For example, if you have code that is not executed very often (such as code for printing a document), you can store it in a separate segment so that it does not occupy memory when it is not needed. Here are some general guidelines for grouping routines into segments:

  • Group related routines in the same segment.
  • Put your main event loop into the main segment (that is, the segment that contains the main entry point).
  • Put any routines that handle low-memory conditions into a locked segment (usually the main segment). For example, if your application provides a grow-zone function, you should put that function in a locked segment.
  • Put any routines that execute at interrupt time, including VBL tasks and Time Manager tasks, into a locked segment.
  • Any initialization routines that are executed only once at application startup time should be put in a separate segment. This grouping allows you to unload the segment after executing the routines. However, routines that allocate nonrelocatable objects (for example, MoreMasters or InitWindows) in your application heap should be called in the main segment, before loading any code segments that will later be unloaded. If you put such allocation routines in a segment that is later unloaded and purged, you increase heap fragmentation.

A typical strategy is to unload all segments except segment 1 (the main segment) and any other essential code segments each time through your application's main loop.

To unload a segment, you must call the UnloadSeg routine from your application. The UnloadSeg routine does not actually remove a segment from memory, but merely unlocks it, indicating to the Segment Manager that it may be relocated or purged if necessary. To unload a particular segment, you pass the address of any externally referenced routine contained in that segment.

See Also