Wednesday, December 15, 2010
【Weak Current College】 Interpreting Audio Properties and Audio Processing 【Powered by: China Power House Network】
Interpreting audio properties. We all acknowledge that this is a digital age, and many people work tirelessly in pursuit of excellent sound quality. With the arrival of the digital era, it is widely recognized that digital audio signals are superior to analog ones. What is an analog signal? Any sound we hear after it passes through an audio cable or a microphone is a series of analog signals; the analog signal is what we can actually hear. A digital signal, by contrast, records sound as a stream of numbers rather than saving the signal by physical means (an ordinary tape recording is physical); we cannot hear the digital signal itself. With this in mind we can briefly compare recording in the analog era with recording in the digital era. In the analog era, the original signal was physically recorded onto tape (in the studio, of course), then processed, edited, and modified, and finally transferred to tape or LP, the carriers most listeners could enjoy. Every step in this chain was analog, and each step lost some of the signal, so what reached the audience was naturally far from the original, let alone hi-fi. In the digital era, the first step is to convert the original signal into digital audio data, which is then processed with hardware or software. Compared with the analog method, this process has an immense advantage: it is almost lossless. To a machine the data are just numbers; dropped bits are of course possible, but only if an actual error occurs. Finally, the digital signal is transferred to a digital carrier such as a CD, with naturally much smaller losses. If you look at the back of your CDs, you will see that many carry a mark such as ADD, AAD, or DDD. The three letters indicate whether the recording, the editing, and the mastering of the disc were done by analog (A) or digital (D) means.
AAD means the recording and editing were analog and only the final mastering was digital; such a disc is essentially the original recording transferred to CD without modification. ADD indicates a modified process: many classical performances were originally recorded with analog equipment, and the CD we hear now was produced after digital editing, so many of these recordings carry the ADD tag. A DDD album is a thoroughly modern production. Tapes made entirely by analog means could, by the same logic, be regarded as AAA, although that mark is not actually used. In short, the digital audio signal is the saved form of the sound, and its defining characteristic is that it does not easily lose information, while the analog signal is what we finally hear. Analog copying, however, is a disaster: the losses are too great. Were Glenn Gould alive today, he would be astonished that digital audio can be copied a hundred times without loss; if you do not believe it, try copying a WAVE file. The most critical step in digital recording is converting the analog signal into a digital one. On a computer, a recorded analog sound signal becomes a Wave file. The Sound Recorder that comes with Windows can do this, but its functionality is very limited and does not meet our needs, so we use professional audio software such as Sound Forge. The recorded file is a Wave file, and a Wave file is described by two key indicators: the sampling precision and the bit depth. These two concepts are very important in digital audio production, so let us look at them.
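The claim that digital copies are lossless is easy to verify in code. Here is a minimal sketch using Python's standard `wave` module; the function name and file paths are my own illustration, not from any particular tool:

```python
import wave

def copy_wav(src_path: str, dst_path: str) -> None:
    """Copy a WAV file frame-for-frame; the result is bit-identical to the source."""
    with wave.open(src_path, "rb") as src:
        params = src.getparams()                    # channels, sample width, rate, ...
        frames = src.readframes(src.getnframes())   # the raw digital samples
    with wave.open(dst_path, "wb") as dst:
        dst.setparams(params)
        dst.writeframes(frames)
```

However many times this copy is repeated, the numbers never change, which is exactly why a hundredth-generation digital copy sounds the same as the first.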
What is the sampling precision of a Wave file? A digital signal describes the original analog signal with a series of numbers. Every sound has a waveform, and a digital signal periodically takes "reading points" along that waveform, assigning each point a numeric value; this is "sampling". All the points together then describe the analog signal. Clearly, the more points taken in a given span of time, the more accurately the waveform is described, and this rate is what we call the "sampling precision" (sample rate). The most common sample rate is 44.1 kHz, meaning 44,100 samples are taken every second. This value is used because experiments showed it to be the most suitable: below it the losses become clearly audible, while above it the human ear can hardly tell the difference and the amount of audio data only grows.
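The idea of "taking 44,100 amplitude readings per second" can be sketched in a few lines of plain Python; the 1 kHz test tone and function name are my own assumptions for illustration:

```python
import math

SAMPLE_RATE = 44100  # samples per second, the CD standard

def sample_sine(freq_hz: float, duration_s: float, rate: int = SAMPLE_RATE):
    """Return periodic amplitude readings of a sine wave, `rate` per second."""
    n = int(rate * duration_s)
    return [math.sin(2 * math.pi * freq_hz * i / rate) for i in range(n)]

# One second of a 1 kHz tone yields 44,100 points describing the waveform.
tone = sample_sine(1000.0, 1.0)
```

Each entry in `tone` is one "reading point"; together they are the digital description of the analog waveform.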
To achieve higher accuracy, 48 kHz or even 96 kHz sample rates are sometimes used. In fact, the difference between 96 kHz and 44.1 kHz sampling is nowhere near as great as the difference between 44.1 kHz and 22 kHz. The CD standard is 44.1 kHz, currently the most common rate, though some believe 96 kHz will be the future trend of the recording industry. Higher sampling precision should be a good thing, but can we really hear the difference between music produced at 96 kHz and music produced at 44.1 kHz? Can an ordinary home audio system reveal it? "Bit depth" is another term we often hear: digital recordings generally use 16, 20, or 24 bits. What is a "bit"? Where there is sound there is loudness, and loudness corresponds physically to amplitude, so a digital recording must describe the wave's amplitude accurately if it is to capture the loud and soft passages of the music; the "bit" is the unit of this description. 16-bit means the wave's amplitude is divided into 2^16 = 65,536 levels, and the loudness of the analog signal is mapped onto one of these levels so that it can be represented by a number. As with sampling precision, the higher the bit depth, the finer the changes in loudness it can reflect. 20 bits yield 2^20 = 1,048,576 levels, which handles even the pronounced dynamics of symphonic music without difficulty. That mentions another term, "dynamics": it refers to how great a contrast a piece of music can achieve between its loudest and softest passages. We often speak of "dynamic range", measured in dB, and it is closely tied to the bit depth used when recording: with a very low bit depth, only a few levels are available to describe sound intensity, and we certainly cannot hear pronounced contrasts of loud and soft. The relationship between dynamic range and bit depth is: each additional bit increases the dynamic range by about 6 dB.
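The "6 dB per bit" rule comes from the formula 20·log10(2^bits) ≈ 6.02 dB per bit, which reproduces all the figures quoted here. A small sketch (function names are my own):

```python
import math

def quantization_levels(bits: int) -> int:
    """Number of amplitude levels a given bit depth can distinguish."""
    return 2 ** bits

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range: 20 * log10(2**bits), about 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

# 16 bits -> 65,536 levels, ~96 dB; 20 bits -> 1,048,576 levels, ~120 dB.
```

Note that the exact value is 6.02 dB per bit, which is why 16 bits give slightly more than the round 96 dB figure.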
So if we recorded at 1 bit, our dynamic range would be only 6 dB, and such music would be unlistenable. At 16 bits the dynamic range is 96 dB, which satisfies general requirements. At 20 bits it is 120 dB, enough to handle even a strongly contrasted symphony and more than sufficient for most music. Audiophile-grade recording also uses 24 bits, but, as with sampling precision, the improvement over 20 bits is not especially obvious. In theory 24 bits can deliver a 144 dB dynamic range, but this is very hard to achieve in practice, because any device inevitably generates noise; at this stage it is very difficult for 24-bit equipment to reach its theoretical performance.

Audio processing. Digital processing of audio media became possible with computer technology, especially with mass-storage devices and large-capacity memory in the PC. The core of digital audio processing is working on the collected samples to achieve various effects; that is the basic meaning of digital processing of audio media.

Basic processing of audio media includes the following: transformation and conversion between different sample rates, frequencies, and channel counts, where transformation simply reinterprets the data as another format, while conversion resamples it, optionally using an interpolation algorithm to compensate for distortion; transformations of the audio data itself, such as fade-in, fade-out, and volume adjustment; and transformations by digital filtering algorithms, such as high-pass and low-pass filters.

Generalized (spatial) processing of audio media. For a long time, computer researchers underestimated the role of sound in human information processing. With the development of virtual-reality technology, people are no longer satisfied with flat, monotonous sound and want three-dimensional effects with a sense of space.
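The basic transformations of the audio data itself mentioned above (fades, volume, filtering) operate directly on the sample list. A minimal sketch in plain Python, assuming samples are floats; the linear fade and moving-average filter are simple illustrative choices, not how any particular audio package implements them:

```python
def fade_in(samples, fade_len):
    """Ramp the volume linearly from silence to full over the first fade_len samples."""
    out = list(samples)
    for i in range(min(fade_len, len(out))):
        out[i] *= i / fade_len
    return out

def low_pass(samples, window=3):
    """Crude low-pass filter: a moving average smooths out rapid
    (high-frequency) changes while leaving slow ones mostly intact."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        out.append(sum(samples[lo:i + 1]) / (i - lo + 1))
    return out
```

Volume adjustment is the same idea with a constant gain factor; a fade-out mirrors `fade_in` at the end of the list.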
The auditory channel can work alongside the visual channel: generalized sound processing not only expresses the spatial information carried by sound, but, integrated with multi-channel visual information, can create an extremely realistic virtual space, which will be extremely important in future multimedia systems. This, too, is media processing.
The most basic theory of how humans perceive the location of a sound source is the duplex theory, which rests on two factors: the difference in the sound's arrival time at the two ears, and the difference in the sound's intensity at the two ears. The time difference arises from distance: when a sound comes from directly ahead, the distances to the two ears are equal and there is no time difference, but if the source is, say, 3 degrees to the right, the sound reaches the right ear roughly 30 microseconds earlier than the left, and it is this 30 microseconds that lets us identify the location of the source. The intensity difference arises from signal attenuation, whether naturally with distance or because the listener's head blocks the sound; the ear closer to the source hears a greater intensity than the other ear. Based on the duplex theory, simply cross-mixing the two channels of an ordinary stereo signal can give it a three-dimensional sound-field effect. This involves two concepts: the width and the depth of the sound field. The width of the sound field exploits the time-difference principle. Because we are extending ordinary stereo audio, the source position is always in the middle of the sound field, which simplifies the work: we only need to delay and attenuate each channel appropriately and then mix the channels into each other. The expansion this achieves is limited, since the delay cannot be too long or it becomes an echo. The depth of the sound field exploits the intensity-difference principle, and its concrete manifestation is echo: the deeper the sound field, the longer the echo's delay. An echo setting should therefore have at least three parameters: the echo's decay rate, its depth, and its delay.
At the same time, an option should be provided to control how much of the other channel's sound is mixed into the depth effect.
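The width and depth techniques described above can be sketched directly on sample lists. This is a minimal illustration in plain Python, assuming mono sample lists per channel; the function names, the 0.5 attenuation, and the repeat count are my own assumptions, not a standard effect implementation:

```python
def widen_stereo(left, right, delay, attenuation=0.5):
    """Width via the time-difference principle: mix each channel with a
    delayed, attenuated copy of the other. The delay must stay short
    (a few milliseconds of samples) or it is heard as an echo."""
    def mix(main, other):
        return [m + attenuation * (other[i - delay] if i >= delay else 0.0)
                for i, m in enumerate(main)]
    return mix(left, right), mix(right, left)

def add_echo(samples, delay, decay, repeats=3):
    """Depth via the intensity-difference principle: repeated echoes, each
    weakened by `decay`. A deeper sound field means a longer `delay`."""
    out = list(samples) + [0.0] * delay * repeats
    for r in range(1, repeats + 1):
        gain = decay ** r          # each repetition is quieter than the last
        for i, s in enumerate(samples):
            out[i + r * delay] += gain * s
    return out
```

The three echo parameters named in the text map directly onto `decay` (decay rate), `repeats` (depth), and `delay`; the extra mixing option corresponds to `attenuation`.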