Glossary
A B C D E F G H I L M N P Q R S T U V Z
A ⬆
Adaptive Differential Pulse Code Modulation (ADPCM)
A method of compressing audio data. Although the theory for compression using ADPCM is standard, there are many different algorithms employed. For example, Microsoft's ADPCM algorithm is not compatible with the Interactive Multimedia Association's (IMA) approved ADPCM.
A companded compression algorithm for voice signals defined by the Geneva Recommendations (G.711). The G.711 recommendation defines A-Law as a method of encoding 16-bit PCM signals into a non-linear 8-bit format. The algorithm is commonly used in European telecommunications. A-Law is very similar to µ-Law; however, each uses a slightly different coder and decoder.
A type of distortion that occurs when digitally recording high frequencies with a low sample rate. For example, in a motion picture, when a car's wheels appear to slowly spin backward while the car is quickly moving forward, you are seeing the effects of aliasing. Similarly, when you try to record a frequency greater than one half of the sampling rate (the Nyquist Frequency), instead of hearing a high pitch, you may hear a low-frequency rumble.
To prevent aliasing, an anti-aliasing filter is used to remove high-frequencies before recording. Once the sound has been recorded, aliasing distortion is impossible to remove without also removing other frequencies from the sound. This same anti-aliasing filter must be applied when resampling to a lower sample rate.
Alpha is a fourth channel that determines how transparency is handled in an image file. The RGB channels are blended to determine each pixel's color, and the corresponding alpha channel determines each pixel's transparency. The alpha channel can have up to 256 shades of gray: 0 represents a transparent pixel, 255 represents an opaque pixel, and intermediate values are semitransparent.
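As a small illustration of how an alpha value controls blending, the following sketch composites a single foreground pixel over a background pixel; the function name and 8-bit values are hypothetical examples, not code from VEGAS Pro.

```python
def composite_pixel(fg, bg, alpha):
    """Blend an RGB foreground pixel over a background pixel.

    fg, bg: (r, g, b) tuples with 8-bit channel values.
    alpha:  0 (fully transparent) through 255 (fully opaque).
    """
    a = alpha / 255.0
    return tuple(round(a * f + (1.0 - a) * b) for f, b in zip(fg, bg))

# A half-transparent red pixel over a white background yields pink.
print(composite_pixel((255, 0, 0), (255, 255, 255), 128))  # (255, 127, 127)
```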
A process whereby the amplitude (loudness) of a sound is varied over time. When varied slowly, a tremolo effect occurs. If the frequency of modulation is high, many side frequencies are created which can strongly alter the timbre of a sound.
When discussing audio, this term refers to a method of reproducing a sound wave with voltage fluctuations that are analogous to the pressure fluctuations of the sound wave. This is different from digital recording in that these fluctuations are infinitely varying rather than discrete changes at sample time. See Quantization
Describes the frame size of your video as a ratio of its width to its height. For example, video shot in NTSC DV format has a frame size of 720 by 480 pixels, which is roughly a 1.33:1 aspect ratio:
720 (frame width) ÷ 480 (frame height) = 1.5
1.5 x 0.9091 (pixel aspect ratio) = 1.36365.
| Frame Size | Aspect Ratio |
|---|---|
| 4:3 Standard Television | 1.33:1 |
| 16:9 Widescreen Television | 1.78:1 |
| Academy Flat Theatrical Frame | 1.85:1 |
| Academy Scope Theatrical Frame | 2.35:1 |
The attack of a sound is the initial portion of the sound. Percussive sounds (drums, piano, guitar plucks) are said to have a fast attack. This means that the sound reaches its maximum amplitude in a very short time. Sounds that slowly swell up in volume (soft strings and wind sounds) are said to have a slow attack.
Audio Compression Manager (ACM)
The Audio Compression Manager, from Microsoft, is a standard interface for audio compression and signal processing for the Windows operating system. The ACM can be used by software to compress and decompress .wav files.
B ⬆
When discussing audio equalization, each frequency band has a width associated with it that determines the range of frequencies that are affected by the EQ. An EQ band with a wide bandwidth will affect a wider range of frequencies than one with a narrow bandwidth.
When discussing network connections, refers to the rate of signals transmitted; the amount of data that can be transmitted in a fixed amount of time (stated in bits/second): a 56 Kbps network connection is capable of receiving 56,000 bits of data per second.
The time signature of a piece of music contains two pieces of information: the number of beats in each measure of music, and which note value gets one beat. VEGAS® Pro uses this notion to determine the number of ticks to put on the time ruler above the track view and to determine the spacing when the ruler is displaying Measures & Beats.
The tempo of a piece of music can be written as a number of beats in one minute. If the tempo is 60 BPM, a single beat will occur once every second.
A bit is the most elementary unit in digital systems. Its value can only be 1 or 0, corresponding to a voltage in an electronic circuit. Bits are used to represent values in the binary numbering system. As an example, the 8-bit binary number 10011010 represents the unsigned value of 154 in the decimal system. In digital sampling, a binary number is used to store individual sound levels, called samples.
The number of bits used to represent a single sample. VEGAS Pro software uses either 8, 16 or 24-bit samples. Higher values will increase the quality of the playback and any recordings that you make. While 8-bit samples take up less memory (and hard disk space), they are inherently noisier than 16 or 24-bit samples.
Adjusting brightness adds or subtracts values from the color channels in an image to make the image lighter or darker. The maximum brightness setting adds 255 (pure white), and the minimum setting subtracts 255 (pure black).
A virtual pathway where signals from tracks and effects are mixed. A bus's output is a physical audio device in the computer where the signal is routed. The configuration of busses is saved with the project whereas the routing of busses to hardware is saved with the system. In this way, projects can be easily moved from one system to another without modifying the original layout of the project.
Refers to a set of 8 bits. An 8-bit sample requires one byte of memory to store, while a 16-bit sample takes two bytes of memory to store.
C ⬆
Charge coupled device. The image sensor in a digital camera.
The values that convey chrominance information.
Chrominance
The color content of an image without respect to its brightness.
The clipboard is where data that you have cut or copied is stored. You can then paste the data back into a different location on the VEGAS Pro timeline or paste it into other applications, such as Microsoft Word, or another instance of VEGAS Pro.
Clipping is what occurs when the amplitude of a sound is above the maximum allowed recording level. In digital systems, clipping is seen as a clamping of the data to a maximum value, such as 32,767 in 16-bit data. Clipping causes sound to distort.
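The sketch below shows the clamping behavior described above for 16-bit data; the sample values are made up for illustration.

```python
def clip16(sample):
    """Clamp a sample to the legal 16-bit range, as clipping does."""
    return max(-32768, min(32767, sample))

# Samples that exceed full scale are flattened to the maximum value.
print([clip16(s) for s in [12000, 40000, -50000]])  # [12000, 32767, -32768]
```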
An acronym for Coder/Decoder. Commonly used when working with data compression.
Complementary colors are colors that are 180 degrees apart on the color wheel. In the following image, you can see that red and cyan are complementary colors, as are magenta and green, and blue and yellow.
Adjusting contrast multiplies color values in an image to stretch the existing color channel values across a broader or narrower portion of the spectrum. The contrast center determines the anchor point for stretching.
Histogram of original image.
Contrast decreased with a contrast center of 0.5: the histogram is squeezed into a narrower portion of the spectrum anchored from the center of the graph.
Contrast decreased with a contrast center of 0.0: the histogram is squeezed into a narrower portion of the spectrum anchored at the left edge of the graph.
Contrast decreased with a contrast center of 1.0: the histogram is squeezed into a narrower portion of the spectrum anchored at the right edge of the graph.
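As a sketch of the adjustment the histograms above illustrate, the following assumes color values normalized to the 0.0 to 1.0 range; the function and parameter names are hypothetical.

```python
def adjust_contrast(value, contrast, center=0.5):
    """Stretch or squeeze a 0.0..1.0 color value around the contrast center."""
    adjusted = (value - center) * contrast + center
    return min(1.0, max(0.0, adjusted))  # clamp to the legal range

# Contrast below 1.0 squeezes values toward the contrast center (0.5 here).
print(adjust_contrast(0.9, 0.5))  # 0.7
print(adjust_contrast(0.1, 0.5))  # 0.3
```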
Mixing two pieces of media by fading one out as the other fades in.
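A minimal sketch of one common way to compute a linear crossfade over corresponding samples of two sources; it is an illustration, not the specific fade curve VEGAS Pro uses.

```python
def linear_crossfade(a, b):
    """Mix two equal-length sample lists, fading a out while b fades in."""
    n = len(a)
    out = []
    for i in range(n):
        t = i / (n - 1) if n > 1 else 1.0  # 0.0 at the start, 1.0 at the end
        out.append(a[i] * (1.0 - t) + b[i] * t)
    return out

print(linear_crossfade([1.0, 1.0, 1.0], [0.0, 0.0, 0.0]))  # [1.0, 0.5, 0.0]
```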
The cutoff-frequency of a filter is the frequency at which the filter changes its response. For example, in a low-pass filter, frequencies greater than the cutoff frequency are attenuated while frequencies less than the cutoff frequency are not affected.
D ⬆
DC Offset occurs when hardware, such as a sound card, adds DC current to a recorded audio signal. This current causes the audio signal to alternate around a point above or below the normal -infinity dB (center) line in the sound file. To visually see if you have a DC offset present, you can zoom all the way into a sound file and see if it appears to be floating over the center line. In the following example, the red line represents 0 dB. The lower waveform exhibits DC offset; note that the waveform is centered approximately 2 dB above the baseline.
A unit used to represent a ratio between two numbers using a logarithmic scale. For example, when comparing the numbers 14 and 7, you could say 14 is two times greater than the number 7; or you could say 14 is 6 dB greater than the number 7. Where did we pull that 6 dB from? Engineers use the equation dB = 20 x log (V1/V2) when comparing two instantaneous values. Decibels are commonly used when dealing with sound because the ear perceives loudness in a logarithmic scale.
In VEGAS Pro, most measurements are given in decibels. For example, if you want to double the amplitude of a sound, you apply a 6 dB gain. A sample value of 32,767 (maximum positive sample value for 16-bit sound) can be referred to as having a value of 0 dB. Likewise, a sample value of 16,384 can be referred to having a value of -6 dB.
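A small sketch of the ratio-to-decibel relationship described above, using the same example values.

```python
import math

def ratio_to_db(v1, v2):
    """dB = 20 x log10(v1 / v2) for instantaneous (amplitude) values."""
    return 20.0 * math.log10(v1 / v2)

print(round(ratio_to_db(14, 7), 2))         # 6.02  (doubling is about +6 dB)
print(round(ratio_to_db(16384, 32767), 2))  # -6.02 (half of 16-bit full scale)
```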
A program that enables the Windows operating system to connect different hardware and software. For example, a sound card device driver is used by software to control sound card recording and playback.
Digital Signal Processing (DSP)
A general term describing anything that alters digital data. Signal processors have existed for a very long time (tone controls, distortion boxes, wah-wah pedals) in the analog (electrical) domain. Digital Signal Processors alter the data after it has been digitized by using a combination of programming and mathematical techniques. DSP techniques are used to perform many effects such as equalization and reverb simulation.
Since most DSP is performed with simple arithmetic operations (additions and multiplications), both your computer's processor and specialized DSP chips can be used to perform any DSP operation. The difference is that DSP chips are optimized specifically for mathematical functions while your computer's microprocessor is not. This results in a difference in processing speed.
The practice of adding noise to a signal to mask quantization noise.
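As an illustration of the idea, the following sketch adds a small amount of triangular (TPDF) noise before rounding a floating-point sample to 16 bits; it is not the specific dithering algorithm used by VEGAS Pro.

```python
import random

def dither_to_16_bit(sample):
    """Round a floating-point sample (-1.0..1.0) to 16-bit with TPDF dither."""
    scaled = sample * 32767.0
    # Triangular noise of about +/- 1 least significant bit masks the
    # pattern of the quantization error.
    noise = random.random() - random.random()
    return int(round(scaled + noise))

print(dither_to_16_bit(0.25))  # roughly 8192, varying by +/- 1 between calls
```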
A quick way to perform certain operations using the mouse in VEGAS Pro. To drag and drop, you click and hold a highlighted selection, drag it (hold the left-mouse button down and move the mouse) and drop it (let go of the mouse button) at another position on the screen.
The difference between the maximum and minimum signal levels. It can refer to a musical performance (high volume vs. low volume signals) or to electrical equipment (peak level before distortion vs. noise floor). For example, orchestral music has a wide dynamic range while thrash metal has a very small (always loud) range.
E ⬆
Little and Big Endian describe the ordering of multi-byte data that is used by a computer's microprocessor. Little Endian specifies that data is stored in a low to high-byte format; this ordering is used by the Intel microprocessors. Big Endian specifies that data is stored in a high to low-byte format; this ordering is used by the Motorola microprocessors.
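A quick sketch of the difference using Python's struct module; the 16-bit value is arbitrary.

```python
import struct

value = 0x1234  # an arbitrary 16-bit value

little = struct.pack('<H', value)  # low byte first (Intel ordering)
big = struct.pack('>H', value)     # high byte first (Motorola ordering)

print(little.hex())  # 3412
print(big.hex())     # 1234
```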
Envelopes allow you to automate the change of a certain parameter over time. In the case of volume envelopes, you can create a fade-out (which requires a change over time) by adding an envelope and creating an extra point on the line that indicates where the fade starts. Then you pull the end point of the envelope down to negative infinity (silence).
The process by which certain frequency bands are raised or lowered in level. EQ has various uses. The most common use for VEGAS Pro users is to simply adjust the subjective timbral qualities of a sound.
Essence marks can be saved in XDCAM clips during shooting. These markers can indicate record start/end times, shot marks, flash, filter/gain/shutter speed/white balance changes, or audio clipping. In VEGAS Pro, essence marks can be displayed in the XDCAM Explorer and as media markers. For more information, see "The XDCAM Explorer Window" and Using media markers and regions
An event is an occurrence of a media file on the timeline.
F ⬆
A file format specifies the way in which data is stored on your floppy disks or hard drive. In the Windows operating system, the most common file format is the Microsoft .WAV format. However, VEGAS Pro can read and write to many other file formats so you can maintain compatibility with other software and hardware configurations.
Audio uses frame rates only for the purpose of syncing to video or other audio. When syncing to other audio, a rate of 30 fps non-drop is typically used; when syncing to video, 30 fps drop is usually used.
The speed at which individual images in the video are displayed on the screen during playback. A faster frame rate results in smoother motion in the video. The television frame rate in the US (NTSC) is 29.97 frames per second (fps). In many parts of Europe and Japan, the television standard is PAL at 25 fps.
The Frequency Spectrum of a signal refers to its range of frequencies. In audio, the frequency range is basically 20 Hz to 20,000 Hz. The frequency spectrum sometimes refers to the distribution of these frequencies. For example, bass-heavy sounds have a large frequency content in the low end (20 Hz - 200 Hz) of the spectrum.
G ⬆
Determines the brightness of the video and is used to compensate for differences between the source and output video; it sometimes needs to be calibrated to match the source or destination. Higher gamma values result in lighter or brighter video as displayed on your computer's monitor.
Gamut
Gamut refers to the complete range of something. In video editing, you want to ensure that your colors are within the acceptable range for your broadcast standard. When colors are outside the NTSC or PAL gamut, you can introduce image problems or noise into the video stream. You can use the video scopes to analyze your video before rendering and correct out-of-gamut colors with video plug-ins. For more information, see Monitoring video with scopes
When you're using the color picker, a warning is displayed when you choose an out-of-gamut color. Click the color swatch below the warning to correct the color.
H ⬆
The unit of measurement for frequency or cycles per second (CPS).
I ⬆
An in-place plug-in processes audio data so that the output length always matches the input length. A non-in-place plug-in's output length need not match a given input length at any time: for example, Time Stretch, Gapper/Snipper, Pitch-Shift (without preserving duration), and some Vibrato settings can create an output that is longer or shorter than the input.
Plug-ins that generate tails when there is no more input but otherwise operate in-place (such as reverb and delay) are considered in-place plug-ins.
The insertion point (also referred to as the cursor position) is analogous to the cursor in a word processor. It is where pasted data will be placed or other data may be inserted depending on the operation.
Telecine is the process of converting 24 fps (cinema) source to 30 fps video (television) by adding pulldown fields. Inverse telecine, then, is the process of converting 30 fps (television) video to 24 fps (cinema) by removing pulldown. For more information, see Telecine and Pulldown
Inverting sound data reverses the polarity of a waveform around its baseline. Inverting a waveform does not change the sound of a file; however, when you mix different sound files, phase cancellation can occur, producing a "hollow" sound. Inverting one of the files can prevent phase cancellation.
In the following example, the red line represents the baseline, and the lower waveform is the inverted image of the upper waveform.
International Standard Recording Codes (ISRC) were designed to identify CD tracks. The ISRC code is a 12-character alphanumeric sequence in the following format:
| Field | A | B | C | D | E |
|---|---|---|---|---|---|
| Sample ISRC | SE | T38 | 86 | 302 | 12 |

| Field | Description |
|---|---|
| A | Country: Represents the recording's country of origin. |
| B | First Owner: Assigned ID for the producer of the project. Each country has a board that assigns these codes. |
| C | Year of Recording: Represents the year the recording was made. |
| D | Recording: Represents the serial number of the recording made by the same producer in that year. |
| E | Recording Item (1 or 2 digits): Identifies tracks on a CD (each track can have a different ISRC code). |
L ⬆
The values that convey luminance information.
Luminance
The brightness of an image without respect to its color content.
M ⬆
Saved locations in the sound file. Markers are stored in the Regions List and can be used for quick navigation.
Markers can be displayed in the Trimmer window for sound files that contain them, but more often, markers and regions are used at the project level to mark interesting places in the project. For information on using markers, see Inserting markers
A standard way for applications to communicate with multimedia devices like sound cards and CD players. If a device has a MCI device driver, it can easily be controlled by most multimedia software.
A Microsoft Windows program that can play digital sounds or videos using MCI devices. Media Player is useful for testing your sound card setup. For example, if you cannot hear sound when using VEGAS Pro, try using Media Player. If you cannot play sound using Media Player, check the sound card's manual (do not call Technical Support until you have called the sound card manufacturer).
A MIDI device specific timing reference. It is not absolute time like MTC, instead it is a tempo dependent number of "ticks" per quarter note. MIDI Clock is convenient for synching devices that need to do tempo changes mid-song. VEGAS Pro supports MIDI Clock out, but does not support MIDI Clock in.
A MIDI Port is the physical MIDI connection on a piece of MIDI gear. This port can be a MIDI in, out, or through. Your computer must have a MIDI port to output MIDI Time Code to an external device or to receive MIDI Time Code from an external device.
MTC is an addendum to the MIDI 1.0 Specification and provides a way to specify absolute time for synchronizing MIDI capable applications. Basically, it is a MIDI representation of SMPTE timecode.
A function VEGAS Pro performs inherently by adding events to multiple audio tracks.
A mixer configuration that allows you to assign individual tracks to any number of stereo output busses. In single stereo mode, all tracks go out the same stereo bus. Multiple stereo configuration allows you to keep your signals from the tracks discrete if you want them to be.
Musical Instrument Device Interface (MIDI)
A standard language of control messages that provides for communication between any MIDI compliant devices. Anything from synthesizers to lights to factory equipment can be controlled via MIDI. VEGAS Pro utilizes MIDI for synchronization purposes.
N ⬆
Noise-shaping is a technique that can minimize the audibility of quantization noise by shifting its frequency spectrum. For example, in 44,100 Hz audio, quantization noise is shifted toward the Nyquist Frequency of 22,050 Hz.
This type of editing involves a pointer-based system of keeping track of edits. When you delete a section of audio in a nondestructive system, the audio on disk is not actually deleted. Instead, a set of pointers is established to tell the program to skip the deleted section during playback.
Refers to raising the volume so that the highest level sample in the file reaches a user defined level. Use this function to make sure you are fully utilizing the dynamic range available to you.
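A minimal sketch of peak normalization, assuming floating-point samples in the -1.0 to 1.0 range; the function name and target level are hypothetical parameters for illustration.

```python
def normalize(samples, target_peak=1.0):
    """Scale samples so the loudest one reaches target_peak."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)  # silence stays silent
    gain = target_peak / peak
    return [s * gain for s in samples]

print(normalize([0.1, -0.5, 0.25]))  # [0.2, -1.0, 0.5]
```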
The Nyquist Frequency (or Nyquist Rate) is one half of the sample rate and represents the highest frequency that can be recorded using the sample rate without aliasing. For example, the Nyquist Frequency of 44,100 Hz is 22,050 Hz. Any frequencies higher than 22,050 Hz will produce aliasing distortion in the sample if no anti-aliasing filter is used while recording.
P ⬆
To place a mono or stereo sound source perceptually between two or more speakers.
The file created by VEGAS Pro when a file is opened for the first time. This file stores the information regarding the graphic display of the waveform so that opening a file is almost instantaneous in direct edit mode. This file is stored in the directory that the file resides in and has a .sfk extension. If this file is not in the same directory as the file, or is deleted, it will be recalculated the next time you open the file in direct mode.
The pixel aspect ratio determines whether pixels are square (1.0), as used on computer displays, or rectangular (values other than 1.0), as is typical for television formats. The pixel aspect ratio, together with the frame size, determines the frame aspect ratio.
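As a quick sketch, the NTSC DV calculation from the Aspect Ratio entry can be expressed in code; the 0.9091 pixel aspect ratio is the figure used there, and the function name is hypothetical.

```python
def frame_aspect(width, height, pixel_aspect):
    """Frame aspect ratio = (width / height) x pixel aspect ratio."""
    return (width / height) * pixel_aspect

# NTSC DV: a 720x480 frame with a 0.9091 pixel aspect ratio is roughly 4:3.
print(round(frame_aspect(720, 480, 0.9091), 2))  # 1.36, close to the 1.33 of 4:3
```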
An effect that can be added to the product to enhance the feature set. VEGAS Pro supports all DirectX plug-ins. The built-in EQ, Compression, and Dithering effects are also considered plug-ins because they work in other DirectX-compatible applications.
Plug-ins can be strung together into a chain so that the output of one effect feeds into the input of another. This allows for complex effects that could not otherwise be created.
Pre-roll is the amount of time elapsed before an event occurs. Post-roll is the amount of time after the event. The time selection defines the pre- and post-roll when recording into a selected event.
A snapshot of the current settings in a plug-in. Presets are created and named so that you can easily get back to a sound that you have previously created.
A preset calls up a bulk setting of a function. If you like the way you tweaked that EQ but do not want to have to spend the time getting it back for later use, save it as a preset.
PCM is the most common representation of uncompressed audio signals. This method of coding yields the highest fidelity possible when using digital storage.
In telecine conversion, fields are added to convert 24 fps film to 30 fps video.
In 2-3 pulldown, for example, the first frame is scanned into two fields, the second frame is scanned into three fields, and so on for the duration of the film. 2-3 pulldown is the standard for NTSC broadcasts of 24p material. Use 2-3 pulldown when printing to tape, but not when you intend to use the rendered video in a VEGAS Pro project. Removing 2-3 pulldown is inefficient because the pulldown fields that are created for frame 3 span two frames:
24 fps film (top) and resulting NTSC video with 2-3 pulldown fields (bottom)
Use 2-3-3-2 pulldown when you plan to use your rendered video as source media. When removing 2-3-3-2 pulldown, frame 3 is discarded, and the pulldown fields in the remaining frames are merged:
24 fps film (top) and resulting NTSC video with 2-3-3-2 pulldown fields (bottom)
Punching-in during recording means automatically starting and stopping recording at user-specified times.
Q ⬆
A mixing implementation that allows for four discrete audio channels. These are usually routed to two front speakers and two back speakers to create immersive audio mixes.
The process by which measurements are rounded to discrete values. Specifically with respect to audio, quantization is a function of the analog-to-digital conversion process. The continuous variation of the voltages of an analog audio signal is quantized to discrete amplitude values represented by digital, binary numbers. The number of bits available to describe these values determines the resolution or accuracy of quantization. For example, if you have 8-bit analog-to-digital converters, the varying analog voltage must be quantized to 1 of 256 discrete values; a 16-bit converter has 65,536 values.
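A small sketch of the idea, assuming a continuous sample value in the -1.0 to 1.0 range being rounded to the nearest level available at a given bit depth; the function name is hypothetical.

```python
def quantize(value, bits):
    """Round a -1.0..1.0 value to the nearest step available at this bit depth."""
    levels = 2 ** (bits - 1)           # e.g. 128 for 8-bit, 32768 for 16-bit
    step = round(value * (levels - 1))
    return step / (levels - 1)

print(quantize(0.300001, 8))   # about 0.2992 (256 levels: a coarse match)
print(quantize(0.300001, 16))  # about 0.29999 (65,536 levels: much closer)
```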
A result of describing an analog signal in discrete digital terms (see Quantization). This noise is most easily heard in low resolution digital sounds that have low bit depths and is similar to a "shhhhh" type sound while the audio is playing. It becomes more apparent when the signal is at low levels, such as when doing a fade out.
R ⬆
A subsection of a sound file. You can define any number of regions in a project or media file.
The act of recalculating samples in a sound file at a different rate than the file was originally recorded. If a sample is resampled at a lower rate, sample points are removed from the sound file, decreasing its size but also decreasing its available frequency range. When resampling to a higher sample rate, extra sample points in the sound file are interpolated. This increases the size of the sound file but does not increase the quality. When down-sampling, you must be aware of aliasing. See Aliasing. VEGAS Pro automatically resamples all audio that is added to a project to the project's sample rate.
Small tab-shaped controls above the time ruler that represent the location of markers, regions, and loop points in the waveform display.
The time ruler is the area on a data window above the tracks display window that shows the horizontal axis units.
S ⬆
The word sample is used in many different (and often confusing) ways when talking about digital sound. Here are some of the different meanings:
A discrete point in time that a sound signal is divided into when digitizing. For example, an audio CD-ROM contains 44,100 samples per second. Each sample is really only a number that contains the amplitude value of a waveform measured over time.
A sound that has been recorded in a digital format; used by musicians who make short recordings of musical instruments to be used for composition and performance of music or sound effects. These recordings are called samples. In this help system, we try to use sound file instead of sample whenever referring to a digital recording.
The act of recording sound digitally, i.e. to sample an instrument means to digitize and store it.
The sample rate (also referred to as the sampling rate or sampling frequency) is the number of samples per second used to store a sound. High sample rates, such as 44,100 Hz provide higher fidelity than lower sample rates, such as 11,025 Hz. However, more storage space is required when using higher sample rates.
See Bit Depth
The sample value (also referred to as sample amplitude) is the number stored by a single sample. In 16-bit audio, these values range from -32768 to 32767. In 8-bit audio, they range from -128 to 127. The maximum allowed sample value is often referred to as 100% or 0 dB.
A context-sensitive menu that appears when you right-click on certain areas of the screen. The functions available in the shortcut menu depend on the object being clicked on as well as the state of the program. As with any menu, you can select an item from the shortcut menu to perform an operation. Shortcut menus are used frequently for quick access to many commands. An example of a shortcut menu can be found by right-clicking on any waveform display in a data window.
The signal-to-noise ratio (SNR) is a measurement of the difference between a recorded signal and noise levels. A high SNR is always the goal.
The maximum signal-to-noise ratio of digital audio is determined by the number of bits per sample. In 16-bit audio, the signal-to-noise ratio is 96 dB, while in 8-bit audio it is 48 dB. However, in practice this SNR is never achieved, especially when using low-end electronics.
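A sketch of the rule of thumb behind those numbers: each bit of resolution adds roughly 6 dB to the theoretical maximum SNR of linear PCM.

```python
import math

def max_snr_db(bits):
    """Theoretical maximum SNR of linear PCM, about 6.02 dB per bit."""
    return 20.0 * math.log10(2 ** bits)

print(round(max_snr_db(8)))   # 48
print(round(max_snr_db(16)))  # 96
```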
Small Computer Systems Interface (SCSI)
A standard interface protocol for connecting devices to your computer. The SCSI bus can accept up to seven devices at a time, including CD-ROM drives, hard drives, and samplers.
Society of Motion Picture and Television Engineers (SMPTE)
SMPTE timecode is used to synchronize time between devices. The timecode is calculated in Hours:Minutes:Seconds:Frames, where frames are fractions of a second based on the frame rate. Frame rates for SMPTE timecode are 24, 25, 29.97, and 30 frames per second.
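A minimal sketch of how a frame count maps to SMPTE-style timecode at an integer, non-drop frame rate; drop-frame timecode requires an additional correction that is not shown here, and the function name is hypothetical.

```python
def frames_to_timecode(total_frames, fps):
    """Convert a frame count to Hours:Minutes:Seconds:Frames (non-drop)."""
    frames = total_frames % fps
    seconds = (total_frames // fps) % 60
    minutes = (total_frames // (fps * 60)) % 60
    hours = total_frames // (fps * 3600)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

print(frames_to_timecode(108123, 30))  # 01:00:04:03
```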
The sound card is the audio interface between your computer and the outside world. It is responsible for converting analog signals to digital and vice versa. VEGAS Pro will work with any Windows-compatible sound card.
A mixer implementation that includes two discrete channels.
5.1 surround is a mixer implementation that includes six discrete channels.
T ⬆
The process of creating 30 fps video (television) from 24 fps film (cinema). See Inverse Telecine (IVTC) and Pulldown
Tempo is the rhythmic rate of a musical composition, usually specified in beats per minute (BPM).
The format used to display the time ruler and selection times. These include Time, Seconds, Frames, and all standard SMPTE frame rates. The status format is set for each sound file individually.
A discrete timeline for audio data. Audio events sit on audio tracks and determine when a sound starts and stops. Multiple audio tracks are mixed together to give you a composite sound that you hear through your speakers.
A function that will delete all data in a sound file outside of the current selection. For more information, see Trimming events
U ⬆
µ-Law (mu-Law) is a companded compression algorithm for voice signals defined by the Geneva Recommendations (G.711). The G.711 recommendation defines µ-Law as a method of encoding 16-bit PCM signals into a non-linear 8-bit format. The algorithm is commonly used in North American and Japanese telecommunications. µ-Law is very similar to A-Law; however, each uses a slightly different coder and decoder.
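As an illustration of the companding idea (not the exact G.711 bit layout), the following sketch applies the µ-law compression curve to a normalized sample; the function name is hypothetical.

```python
import math

MU = 255.0  # the µ parameter used by G.711 µ-Law

def mu_law_compress(x):
    """Map a linear sample in -1.0..1.0 onto the non-linear µ-law curve."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

# Quiet signals are spread over more of the 8-bit range than loud ones.
print(round(mu_law_compress(0.01), 3))  # 0.228
print(round(mu_law_compress(0.5), 3))   # 0.876
```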
This is the temporary file created before you do any processing to a project. This undo buffer allows the ability to rewrite previous versions of the project if you decide you do not like changes you have made to the project. This undo buffer is erased when the file is closed or the Clear Undo History command is invoked.
These commands allow you to change a project back to a previous state, when you do not like the changes you have made, or reapply the changes after you have undone them. The ability to Undo/Redo is only limited by the size of your hard drive. See Undo Buffer
A list of all of the functions that have been performed to a file that are available to be undone or redone. Undo/Redo History gives you the ability to undo or redo multiple functions as well as preview the functions for quick A/B-ing of the processed and unprocessed material. To display the history list, click the down arrow next to the Undo and Redo buttons.
V ⬆
A file format for digital video.
A software-only router for MIDI data between programs. VEGAS Pro software uses the VMR to receive MIDI timecode and send MIDI clock. No MIDI hardware or cables are required for a VMR, so routing can only be performed between programs running on the same PC. The Virtual MIDI Router is supplied with VEGAS Pro.
Z ⬆
A zero-crossing is the point where a fluctuating signal crosses the zero-amplitude axis. By making edits at zero-crossings with the same slope, the chance of creating glitches is minimized. The fade edit edges setting creates zero-crossings at event edges by fading the waveform to zero amplitude over a short period of time.
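A small sketch of detecting zero-crossings in a list of samples; the waveform values and function name are made up for illustration.

```python
def zero_crossings(samples):
    """Return the indices where the signal crosses the zero-amplitude axis."""
    crossings = []
    for i in range(1, len(samples)):
        if samples[i - 1] < 0 <= samples[i] or samples[i - 1] >= 0 > samples[i]:
            crossings.append(i)
    return crossings

print(zero_crossings([0.5, 0.2, -0.1, -0.4, 0.3]))  # [2, 4]
```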
Zipper noise occurs when you apply a changing gain to a signal, such as when fading out. If the gain does not change in small enough increments, zipper noise can become very noticeable. Fades are accomplished using 64-bit arithmetic, thereby creating no audible zipper noise.