
Sound Retrieval System

From Higher Intellect Vintage Wiki

What is SRS?

SRS Labs, Inc. develops, markets and licenses unique, leading-edge audio technologies for use in the consumer electronics, computer multimedia, electronic game, automotive and professional sound industries. The company's flagship technology, SRS, the Sound Retrieval System, replaces stereo as the method of accurately reproducing sound and is rapidly becoming the standard for 3-D audio technology. It creates a three-dimensional sound image from any audio source with only two conventional stereo speakers. Whether the signal is mono, stereo, or surround sound encoded, SRS expands the material and immerses the listener in three-dimensional sound. This unique process has been awarded four U.S. patents with 260 claims and 17 issued foreign patents, with 45 pending patents in countries around the world.

SRS, the Sound Retrieval System, was invented by Arnold Klayman after years of research on the psychoacoustics of sound and the dynamics of the human hearing system. SRS differs from stereo and traditional sound expansion techniques because it is based on the human hearing system. It retrieves the spatial information from recordings and restores the original three-dimensional sound field. As a result, the reproduced sound is much closer to a live performance. Like live performances, SRS has no critical listening position (sweet spot). Listeners can move around the room and continue to be immersed in full three-dimensional sound. Speakers are no longer the discernible source of sound. SRS does not rely on encoding or decoding, and does not alter the original program material by adding any form of time delay or phase shift.

Why is SRS needed?

The SRS technology is based on the characteristics of the human hearing system. To understand SRS, you must understand a few of the components of sound and how your ears and brain use them to construct three-dimensional audio images.

If you rub your fingers together in front of your forehead and then slowly bring your hand around to the side of your head, just out from your ear, keeping the same distance between your fingers and your head, you will note a slight rise in volume and more emphasis on certain mid and high frequencies. In this experiment, rubbing your fingers serves as a sound source of stable volume and frequency. Your ears will hear, and register to your brain, the identical sound very differently depending on whether it comes from in front of you or from the side. The side sound is much louder and higher in pitch because of your pinna, the external, fleshy portion of your ear.

When a sound wave arrives from the front of your head, the pinna reflects many frequency components away from the ear canal. Side sounds that enter the ear canal are not reflected by the pinna as much as frontal sounds, so both the intensity and the arrival time of side sounds differ from those of frontal sounds. The ear then transfers all of this information to the brain. These spatial cues supplied by the pinna to your brain are called head-related transfer functions. Because they depend on the volume and direction of the sound, which change constantly, the transfer functions of the waves from the pinna and ear canal are constantly changing as well. This transferred information gives your brain the necessary details to understand what you are hearing and from which direction you are hearing it.

Because a microphone does not have a pinna, recordings made with microphones will always misinterpret the proper frequency representation of side sounds, regardless of how many microphones are used. The original ambience and dynamic feeling of live sound are therefore masked or lost. SRS takes into account the constantly changing transfer functions of the human hearing system and restores the proper frequencies and proportions of direct and indirect sound waves so that what the listener hears is closer to the original performance.

How does SRS work?

A typical stereophonic signal consists of the left channel (L) and the right channel (R). SRS combines the two signals to produce a SUM signal (L+R) and then subtracts each one from the other to create two DIFFERENCE signals, (L-R) and (R-L).
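The sum/difference matrixing described above can be sketched in a few lines of plain Python. This is an illustrative sketch, not code from any SRS product; the function name and sample values are invented for the example.

```python
def matrix(left, right):
    """Split a stereo pair of sample lists into the SUM (L+R)
    and DIFFERENCE (L-R, R-L) signals described in the text."""
    total = [l + r for l, r in zip(left, right)]     # L+R: direct, centered sounds
    diff_lr = [l - r for l, r in zip(left, right)]   # L-R: ambience information
    diff_rl = [-d for d in diff_lr]                  # R-L is simply the negation of L-R
    return total, diff_lr, diff_rl

left = [0.5, 0.25]
right = [0.25, 0.25]
s, d_lr, d_rl = matrix(left, right)
print(s)     # [0.75, 0.5]
print(d_lr)  # [0.25, 0.0]
```

Note that where the left and right channels carry identical (centered) material, the DIFFERENCE signal is zero, which is why it isolates the ambience and side content.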

The processed SUM signal includes all direct and centered sounds (dialogue, vocalist, and soloist information). The processed DIFFERENCE signals contain the ambience information (reflected sounds and reverberant fields) and provide the spatial information and directional cues to the human hearing system. Because the L-R and R-L signals are not heard by themselves, but are experienced as part of the stereo signal, microphones and traditional stereo speakers do not properly replicate the DIFFERENCE information. The incorrect recording of the spatial cues in the DIFFERENCE signals is largely responsible for the absence of realism in stereo recordings.

SRS processes the DIFFERENCE and SUM signals so that the resulting signals correspond to the varying transfer functions of the human hearing system. SRS restores the missing spatial cues inherent in the DIFFERENCE signals by selectively emphasizing certain frequencies. This does several things: it enhances the stereo image by restoring the ambience of the live performance that is normally masked by the louder direct sounds, and it provides a much wider listening area. You can walk about the room and still retain a sense of direction of all the musical instruments. The "sweet spot" disappears and you no longer have to sit precisely between two loudspeakers. Realism is restored.
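The emphasize-and-recombine step can be illustrated with a minimal sketch. The actual SRS processing applies patented, frequency-dependent emphasis curves that are not public, so a flat gain stands in for that emphasis here; the function name and gain value are assumptions made for the example.

```python
def widen(left, right, ambience_gain=1.5):
    """Simplified sum/difference widening: boost the DIFFERENCE
    signal (a flat gain stands in for SRS's frequency-dependent
    emphasis), then recombine into left/right outputs."""
    total = [l + r for l, r in zip(left, right)]                # SUM: L+R
    diff = [l - r for l, r in zip(left, right)]                 # DIFFERENCE: L-R
    boosted = [ambience_gain * d for d in diff]                 # emphasize ambience cues
    out_left = [(s + d) / 2 for s, d in zip(total, boosted)]    # L' = (SUM + DIFF') / 2
    out_right = [(s - d) / 2 for s, d in zip(total, boosted)]   # R' = (SUM - DIFF') / 2
    return out_left, out_right

l2, r2 = widen([1.0, 0.5], [0.5, 0.5])
print(l2)  # [1.125, 0.5]
print(r2)  # [0.375, 0.5]
```

In the output, the second sample (identical in both channels, i.e. centered material) passes through unchanged, while the first sample's left/right separation is exaggerated: the centered SUM content is preserved and only the spatial DIFFERENCE content is emphasized.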