Recommendations for the MIDI Implementation of Electronic Musical Instruments


David Van Brink
QuickTime Music Architect
Apple Computer
May 1995


Introduction

This document will present my recommendations for the clean implementation of MIDI on music synthesizers. Certain aspects of MIDI are well defined by the MIDI 1.0 Specification and later documents, such as how to play a note or how to interpret a volume controller change. Other parts of the MIDI implementation are not defined, and are handled in a number of different ways on different synthesizers. By following the recommendations in this document, one can create a musical instrument that is easily controlled by MIDI, and for which sequencers and patch editors can easily be written.

To a small degree, these recommendations are driven by the needs of the QuickTime Music Architecture (QTMA), a set of software APIs for writing music applications. The QTMA strives to remove from the user experience any knowledge of MIDI channels and patch numbers, as well as to provide transparent, seamless use of customized instrument patches.

However, I think these recommendations provide a strong foundation for flexible synthesizer use, outside of QTMA, as well.


Recommendations

Synthesizers

A MIDI synthesizer should be addressable by a system exclusive command that contains a manufacturer's ID, a model ID, and a device ID. The manufacturer's ID is a code uniquely assigned to a maker of MIDI equipment. The model ID is used to discern between different specific synthesizers within that manufacturer's product line. Finally, the device ID is used to differentiate between multiple synthesizers of the same type which are being controlled by the same MIDI bus. This device ID should be settable by the user from the front panel. Sometimes this is called the "System MIDI Channel," but I feel this is misleading, and it should not be confused with the 16 MIDI channels upon which musical events, such as note-ons, can be sent.
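
As a rough illustration, such an addressing scheme amounts to a few identifying bytes at the head of every system exclusive message. The Python sketch below assembles one such message; the make_sysex function, the byte ordering after the manufacturer's ID, and the payload are all hypothetical, since real products define their own layouts.

    # A minimal sketch of addressing a synthesizer by manufacturer ID,
    # model ID, and device ID inside a system exclusive message.
    # The ordering after the manufacturer ID and the payload bytes are
    # hypothetical placeholders, not any real product's format.
    def make_sysex(manufacturer_id, model_id, device_id, payload):
        # 0xF0 ... 0xF7 delimit a system exclusive message in MIDI 1.0.
        return (bytes([0xF0, manufacturer_id, device_id, model_id])
                + bytes(payload) + bytes([0xF7]))

    # Example: address device 3 of a hypothetical model 0x42, using the
    # MIDI "non-commercial" manufacturer ID 0x7D.
    message = make_sysex(0x7D, 0x42, 0x03, [0x10, 0x00, 0x05])
    print(message.hex(" "))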

Parts

A MIDI synthesizer has some maximum multitimbrality. This is the number of different instruments that the device may be playing. This is distinct from its polyphony, which is the number of simultaneous notes that can be played. For a MIDI synthesizer with one MIDI input port, the multitimbrality can be no more than 16, the number of MIDI channels it can receive on.

I will use the term "part" to refer to a unit of a synthesizer that plays a single timbre. That is, the number of parts is equal to the device's multitimbrality. Each part should be fully controllable in a modeless fashion. That is, you should be able to send a command, via system exclusive, to any part on the synthesizer at any time, even if that part is currently silent for one reason or another.

The most important thing to be able to control for each part is its MIDI receive channel, including the option to receive on no channel at all, which turns the part off. With this capability, it is possible to dynamically assign channels on the MIDI bus to various devices, letting the host decide which devices to use.
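
A minimal sketch of this idea follows, using a hypothetical Part object whose receive channel can be reassigned, or cleared entirely, at any time; the class and field names are illustrative, not any device's actual scheme.

    # A minimal model of "parts": each part has its own timbre and its
    # own MIDI receive channel, which may be None (the part is off).
    # The Part class and its fields are hypothetical illustrations.
    class Part:
        def __init__(self, patch="Piano", receive_channel=None):
            self.patch = patch
            self.receive_channel = receive_channel   # 0-15, or None for off

    parts = [Part() for _ in range(8)]       # an 8-part multitimbral device
    parts[0].receive_channel = 0             # part 0 listens on MIDI channel 1
    parts[1].receive_channel = 9             # part 1 listens on MIDI channel 10
    # parts[2..7] remain silent until the host assigns them a channel

    def route_note_on(channel, key, velocity):
        # Deliver a note-on to every part assigned to that channel.
        for part in parts:
            if part.receive_channel == channel:
                print(f"{part.patch}: note {key} velocity {velocity}")

    route_note_on(0, 60, 100)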

Some devices make a distinction between the "drum" part and other parts. Ideally, every part should be able to play either an instrument or a drumkit. This allows, for example, multiple different drumkits to be used. At the least, however, the channel upon which the drumkit plays should be controllable.

Modes

Many synthesizers have different operating modes, such as "single" and "group" mode, where "single" refers to playing a single timbre (perhaps responding to all MIDI channels, known as "omni" mode), and "group" refers to a multitimbral mode of operation. Don't do this. If the synthesizer implements "parts" as described above, there is no need to treat these two cases as special modes. If a single-timbre omni-channel-receive state is truly needed, then include the ability to set each part to omni-receive, rather than specifying a special mode for the entire device.

By allowing a programmable key range or velocity filter for each part, you can implement keyboard splits and other performance options all within this same general structure.
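
For instance, a keyboard split falls out of this structure naturally if each part carries an optional key range; the small filter below is a hypothetical sketch of that idea.

    # A hypothetical per-part key-range filter: two parts receive the
    # same channel, but one plays only the lower part of the keyboard
    # and the other only the upper part, giving a keyboard split.
    split_parts = [
        {"patch": "Bass",  "receive_channel": 0, "key_range": (0, 59)},
        {"patch": "Piano", "receive_channel": 0, "key_range": (60, 127)},
    ]

    def accepts(part, channel, key):
        low, high = part["key_range"]
        return part["receive_channel"] == channel and low <= key <= high

    for key in (40, 72):
        players = [p["patch"] for p in split_parts if accepts(p, 0, key)]
        print(f"key {key} -> {players}")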

Dynamic Voice Allocation

It is much more convenient, for any use of the synthesizer, if its polyphony is automatically distributed among its parts as needed. That is, it is preferable if the host does not need to specify precisely how many low-level oscillators are assigned to each part. The ability to optionally override the dynamic assignment is sometimes useful; this is called "voice reserves" on some synthesizers.
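
A toy model of dynamic allocation with an optional per-part reserve might look like the following; the VoiceAllocator class and its policy are invented for illustration and do not describe any particular device.

    # A toy dynamic voice allocator: voices come from a shared pool,
    # but each part can optionally "reserve" a guaranteed minimum.
    # This is a hypothetical sketch, not any real device's scheme.
    class VoiceAllocator:
        def __init__(self, total_voices, reserves=None):
            self.total = total_voices
            self.reserves = dict(reserves or {})    # part index -> reserved voices
            self.in_use = {}                        # part index -> voices held

        def allocate(self, part):
            held = self.in_use.get(part, 0)
            if held < self.reserves.get(part, 0):
                # The part is still within its guaranteed reserve.
                self.in_use[part] = held + 1
                return True
            # Otherwise draw from the unreserved portion of the pool.
            reserved_total = sum(self.reserves.values())
            shared_used = sum(max(0, n - self.reserves.get(p, 0))
                              for p, n in self.in_use.items())
            if shared_used < self.total - reserved_total:
                self.in_use[part] = held + 1
                return True
            return False                            # pool exhausted; a real device might steal a voice

        def release(self, part):
            if self.in_use.get(part, 0) > 0:
                self.in_use[part] -= 1

    alloc = VoiceAllocator(total_voices=8, reserves={0: 2})  # part 0 always keeps 2 voices
    print([alloc.allocate(1) for _ in range(7)])  # part 1 can take at most 6 shared voices
    print(alloc.allocate(0))                      # part 0's reserve is still available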

Patch Changes

A patch is a particular sound that a synthesizer can play, such as "piano" or "violin."

Typically, a synthesizer has some "built-in" patches, in ROM, such as a General MIDI library. It may, additionally, have "user-modifiable" patch locations, where nonstandard, user-created patches may be stored. For those instruments with user-modifiable patch locations, there should be a "store part" system exclusive command, which writes all the parameters for a part into user memory. Without this command, the host must resend to a user location all the parameters that it had previously sent to a part (such as while editing a new sound). Another useful command is a "copy patch" command, which copies a patch from a built-in or user-modifiable patch location to a user-modifiable location.
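
The sketch below shows what such commands might look like on the wire; the command bytes, their ordering, and the use of the non-commercial manufacturer ID are all hypothetical, invented only to illustrate the shape of a "store part" and a "copy patch" command.

    # Hypothetical "store part" and "copy patch" system exclusive
    # commands. The command codes and byte layout are invented for
    # illustration, not taken from any real synthesizer.
    SYSEX_START, SYSEX_END = 0xF0, 0xF7
    NONCOMMERCIAL_ID = 0x7D          # MIDI's non-commercial manufacturer ID

    def store_part(device_id, part, user_slot):
        # "Write the current parameters of `part` into user patch `user_slot`."
        return bytes([SYSEX_START, NONCOMMERCIAL_ID, device_id,
                      0x20, part, user_slot, SYSEX_END])

    def copy_patch(device_id, source_bank, source_patch, user_slot):
        # "Copy a built-in or user patch into a user-modifiable location."
        return bytes([SYSEX_START, NONCOMMERCIAL_ID, device_id,
                      0x21, source_bank, source_patch, user_slot, SYSEX_END])

    print(store_part(0x00, part=3, user_slot=12).hex(" "))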

A synthesizer should be able to play any patch on any part. A patch-change command should take effect immediately, leaving current notes playing in their original patch, but immediately causing new notes to play in the new patch. Some synthesizers stutter slightly, or mute all notes on a part, when performing a patch change. This is undesirable.
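
One way to see why this behaviour is straightforward to implement is that each sounding voice can simply capture a reference to the patch it started with; the following sketch, with hypothetical names, illustrates the idea.

    # Sketch of the recommended patch-change behaviour: a voice captures
    # the part's patch at note-on, so a later patch change affects only
    # new notes. Hypothetical illustration.
    part = {"patch": "Piano"}
    sounding = []

    def note_on(key):
        sounding.append({"key": key, "patch": part["patch"]})

    note_on(60)                      # plays with "Piano"
    part["patch"] = "Violin"         # patch change takes effect immediately...
    note_on(64)                      # ...but only for new notes
    print(sounding)                  # the first note still carries "Piano"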

A minor issue is that the MIDI specification for bank switching is ambiguous. I recommend using controller 0 for bank change, and ignoring controller 32.
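
For concreteness, a bank change on controller 0 followed by a program change could be encoded as below; the channel, bank, and program numbers are arbitrary examples.

    # Bank change via controller 0 (Bank Select MSB) followed by a
    # program change, both on MIDI channel 1. Bank and program numbers
    # are arbitrary examples.
    channel = 0                                   # channels are 0-15 on the wire
    bank, program = 2, 41
    bank_select    = bytes([0xB0 | channel, 0x00, bank])   # controller 0
    program_change = bytes([0xC0 | channel, program])
    print((bank_select + program_change).hex(" "))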

Patch Parameters

Many synthesizers have a voice architecture with which one can describe a patch by a collection of parameters. For example, the volume envelope might be defined by four values -- attack, decay, sustain, and release -- each with a range of 0 to 99.
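
Under such an architecture a patch is, in effect, a named bundle of numeric parameters; a trivial sketch, with hypothetical names and values, follows.

    # A patch as a bundle of numeric parameters, using the four-value
    # volume envelope from the text. Names, values, and ranges are
    # illustrative only.
    piano_patch = {
        "attack":  5,     # 0-99
        "decay":   40,    # 0-99
        "sustain": 70,    # 0-99
        "release": 30,    # 0-99
    }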

Every parameter should be accessible on every part at all times. That is, it should be possible to modify individual parameters on each part, without affecting other parts.

Some synthesizers have different "modes" for instrument editing and performance. This is a bad thing. Instrument editing and performance should be possible at the same time. In one specific case, the synthesizer has 8 parts, but in order to modify voice parameters, the synthesizer must be put into a special mode where only 1 part is playable. This is very bad, in that the behaviour of the synthesizer changes drastically just because a parameter change was desired.

A less important feature is that changing a parameter should not cause an unpleasant sound. Ideally, changing the parameter should affect currently playing sounds in real time, just as turning a knob on a lovely old analog synthesizer did. If that is not possible for the particular parameter, then it should affect the next note played on that part. The least desirable behaviour is to mute the currently playing notes on that part. (And of course, affecting notes on other parts is entirely unacceptable.)

Some synthesizers require a silence of some particular duration on the MIDI input line after a parameter change. This required duration should be as small as possible, or, preferably, not required at all. Keeping this required silence as short as possible speeds the download of instrument patches, if they happen to be sent one parameter at a time (which is a perfectly reasonable thing to do), and also makes it more practical to use parameter changes in a realtime fashion.
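
To see why the required silence matters, consider a host downloading a patch one parameter at a time; in the hypothetical sketch below, the total download time grows in direct proportion to the per-message gap.

    # Downloading a patch one parameter at a time. If the device demands
    # a silence of `gap` seconds after each parameter change, the total
    # download time scales directly with that gap. Hypothetical sketch.
    import time

    def send_parameter(part, number, value):
        # Stand-in for writing one parameter-change message to the MIDI port.
        pass

    def download_patch(part, parameters, gap=0.0):
        start = time.monotonic()
        for number, value in enumerate(parameters):
            send_parameter(part, number, value)
            if gap:
                time.sleep(gap)              # required post-message silence
        return time.monotonic() - start

    patch = [0] * 64                         # a 64-parameter patch
    print(f"{download_patch(0, patch, gap=0.02):.2f}s with a 20 ms gap")
    print(f"{download_patch(0, patch, gap=0.0):.2f}s with no gap")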

For a particular voice architecture, there may be parameters which are not used by every possible voice. A common arrangement is one in which the voice architecture allows for up to 4 oscillators per voice, while letting the user program a patch that uses as few as 1. This is fine; all the parameters for the maximum number of oscillators should be settable anyway.

Patch Storage

Of those synthesizers which allow user-modified patches, two cases seem to be common.

First, a synthesizer may perform all parameter changes within a part. That is, each part contains a logical list of parameters, and when you set the part to a particular instrument, the parameters for that instrument are copied into the part, to be played. If you then modify a parameter of that part, it affects only that part, temporarily. That is, if two parts are both playing, say, the "piano" sound, you can alter some parameters on one part, and the other will be unaffected.

The second common case is that a part "points" into the stored instruments. That is, if two parts are both playing a user-programmed "piano" sound, and you change a parameter on that sound, then both parts playing it will take on the new sound.

In both cases, there may or may not be user-modifiable patch locations. In the first case, if there are no user-modifiable patch locations, a user instrument must be downloaded from the host whenever you want to play it. (You could also set a part to a built-in instrument, and then change only a subset of the parameters, which may speed up the process over downloading the whole patch.) In the second case, there must be user-modifiable patch locations, otherwise no parameters could ever be changed.
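
The difference between the two cases reduces to a copy versus a shared reference, as the hypothetical sketch below illustrates.

    # The two storage models from the text, reduced to copy-versus-reference.
    # Hypothetical sketch.
    stored_piano = {"attack": 5, "decay": 40}

    # Case 1: the part receives its own copy of the stored parameters,
    # so editing the part leaves other parts (and the stored patch) alone.
    part_a = dict(stored_piano)
    part_a["attack"] = 90
    print(stored_piano["attack"])    # still 5

    # Case 2: parts point at the stored patch, so an edit to the stored
    # patch is heard on every part that references it.
    part_b = stored_piano
    part_c = stored_piano
    part_b["attack"] = 90
    print(part_c["attack"])          # now 90 on both parts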


Examples

I'll list here a few synthesizers which have good MIDI implementations, and a few which don't. All of the products mentioned are of fine quality, and each of them has brought me many years of aural delights. I only wish to document how certain computer control applications could be facilitated through a more consistent MIDI implementation.

Good Synths

The modern Roland synthesizers, such as the Sound Canvas line, have a nearly perfect MIDI implementation. On every part, each parameter may be independently modified. The only drawback is that a delay of several milliseconds must follow every system exclusive command. They also have the limitation of playing only a single drumkit at a time.

Even the SC-7, which has very limited user programmability, is nicely set up for what parameters it does support.

The Roland MT-32 also has an excellent MIDI implementation, in which parameters for each part may be adjusted independently and in real time. It also requires a MIDI silence after a parameter change, and supports only a single drumkit part. The main omission of the MT-32 is the part-store function. That is, after editing a part's parameters, there is no system exclusive command to store that new patch into one of the device's user-modifiable instrument locations.

Adequate Synths

The Yamaha TG100 has an adequate MIDI implementation. Parameter changes do not occur to a part, but to a user-instrument location. Because there is no copy-instrument command, it is not easily possible to make minor changes to an existing prestored instrument. One must set all of the parameters for a particular user instrument, and then point a part at it. Another drawback of the TG100 is that it has a large library of built-in patches, but certain of those patches can only be played on certain parts. This is a totally unnecessary software limitation; I would very nearly call it a bug.

The Yamaha FB-01 has a pretty nice MIDI implementation, although, like the TG100, it lacks a part-store command. It does not perform dynamic voice allocation, which means the host controller must assign precisely the polyphony needed on each part. It has a nearly-instantaneous patch-change command, which is good. And parameters can be changed independently for each part. (Some of the parameters, notably the FM Operator Frequency Ratio, take effect in real time, which is terrific for bizarre FM synthesis effects in electronic music.)

Bad Synths

Some synthesizers are designed in such a way that it's impossible to perform certain desirable operations. The Kawai K-1, for example, has 8 parts, and a polyphony of 8. The polyphony is dynamically distributed among the parts, as needed. Unfortunately, certain patch-change values, when received on any channel, throw the synthesizer into a completely different state. A simple patch change may reassign every part to a different patch and MIDI channel and polyphony. This is sometimes bizarre, but not necessarily a problem, especially when a computer is sending the patch changes. What is more of a problem is that you must put the synthesizer into a special mode to alter voice parameters. In this mode, only one part can be played. Thus, it is impossible to use the device multitimbrally while altering parameters. Furthermore, when in this mode, you cannot change the MIDI channel of the device via MIDI.

With all these inconveniences, it is almost trivial to mention that patch-change commands take some 200 milliseconds to take effect, and that notes played too soon after a patch-change message tend to "stutter."

I should mention that I spoke with Kawai engineers a number of years ago about the stuttering problem, and they explained that it was integral to their hardware architecture that voices took some amount of time to be loaded out of storage. As with all technical design, there were tradeoffs to be made. In this case, those tradeoffs severely limited certain applications of this otherwise splendid device.

