Digital Communication I Explanations
During my time supervising the University of Cambridge Computer Laboratory Part IB Digital Communications I course, various questions and confusions arose. Here is an in-depth explanation of the issues raised. To the best of my knowledge it is correct, but if in doubt, ask the lecturer. See also the page dedicated to CRCs, which among other things explains the hardware implementation diagram in the slides.
Different Types of CDMA
CDMA is also known as spread spectrum multiplexing. The key concept is to encode the data we wish to transmit using a pseudo-random code of length n with a very high bit rate. The reasoning behind spread spectrum is to decrease the energy density of the transmitted signal, i.e. to spread the power over many different frequencies (a large bandwidth), rather than concentrate a lot of power in a very narrow bandwidth. Such spreading allows us to cope much better with noise (including other users on the same frequency), and with multipath effects (the same signal travelling along multiple paths, arriving at several different times, and interfering with itself).
In spread spectrum, the message is modulated onto a carrier wave using BPSK (as opposed to, say, QPSK -- this way we increase the bandwidth). The resulting signal m(t) is then multiplied in the time domain by c(t), the carrier modulated by the pseudo-random code; multiplication in the time domain corresponds to convolution in the frequency domain, which is what spreads the spectrum. This is essentially modulating c(t) with m(t). Because the bit rate (i.e. number of bits/second) of the pseudo-random code is very high, the bandwidth of c(t) is high. Given that the message bit rate is much lower, we can say that the resulting transmitted signal has bandwidth approximately equal to that of c(t).
To decode spread spectrum, we must synchronise the incoming signal with the inverse code, i.e. we need to know which point in the pseudo-random code the received signal was transmitted with. Hence we require some method of synchronising clocks between sender and receiver, followed by a search (trying small shifts of the code near to where the clock says we should be, since clocks will never be exactly synchronised) to obtain exact alignment. Provided different users use different pseudo-random codes, the interference between them should not be too great.
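The shift search can be sketched as a sliding correlation (an illustrative fragment: the chips are represented as +/-1 values, and the code, received stream, and clock guess are all invented for the example):

```python
# Sketch: find the code phase by trying small shifts near the clock's
# estimate and picking the one with the highest correlation.

def correlate_at(rx, code, shift):
    """Dot product of the code against the received chips at a given shift."""
    return sum(rx[shift + i] * code[i] for i in range(len(code)))

def find_phase(rx, code, guess, search=3):
    """Search shifts within +/-search of the clock's guess for the best match."""
    candidates = range(max(0, guess - search), guess + search + 1)
    return max(candidates, key=lambda s: correlate_at(rx, code, s))

code = [1, -1, 1, 1, -1, -1, 1]        # 7-chip code, chips as +/-1
rx = [-1, 1] + code + [1, -1, 1, -1]   # the code actually starts at offset 2
print(find_phase(rx, code, guess=3))   # -> 2
```

At the correct shift the correlation peaks at the full code length (7 here); at other shifts the chips largely cancel, which is why the search converges even with an imperfect clock.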
Direct Sequence Spread Spectrum (DSSS) is equivalent to the version of CDMA explained in your notes (XOR the signal with a high-rate code, etc.). In this scenario we use BPSK modulation to create our c(t). Hence, we obtain a high-bandwidth signal on which we transmit n copies of each message bit. Using majority voting we can reconstruct the message at the receiver. This means that multiple users can transmit on the same frequency at the same time.
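The XOR spreading and majority-vote decoding can be sketched as follows (a minimal illustration; the 15-chip code, the message, and the positions of the corrupted chips are arbitrary choices, not from the notes):

```python
# Minimal DSSS sketch: XOR each message bit with an n-chip pseudo-random
# code, then despread by XORing with the same code and majority voting.

def spread(bits, code):
    """Transmit n chips per message bit: the bit XORed with each code chip."""
    return [b ^ c for b in bits for c in code]

def despread(chips, code):
    """Recover each bit by XORing its chips with the code and majority voting."""
    n = len(code)
    out = []
    for i in range(0, len(chips), n):
        votes = sum(chips[i + j] ^ code[j] for j in range(n))
        out.append(1 if votes > n // 2 else 0)
    return out

code = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0]  # arbitrary 15-chip code
message = [1, 0, 1, 1, 0]
tx = spread(message, code)

# Corrupt a few chips (at most 2 of the 15 per bit here) to simulate noise;
# majority voting still recovers the original message.
rx = tx[:]
for i in [0, 3, 17, 31, 44, 60]:
    rx[i] ^= 1
assert despread(rx, code) == message
```

Each bit survives as long as fewer than half of its n chips are corrupted, which is the sense in which spreading buys noise tolerance at the cost of bandwidth.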
Frequency Hopping Spread Spectrum (FHSS) uses Frequency Shift Keying to create c(t). Essentially, the pseudo-random code is divided into groups of bits of a fixed size, and each group is used to select a different carrier frequency. The result is that as time progresses we utilise different frequencies for our code signal. Hence, over any short period of time the bandwidth is relatively low, as we are transmitting only on one frequency; over a longer period the bandwidth is very large, as we are utilising lots of different frequencies. This is actually a special case of frequency division multiplexing, but where each frequency is also time multiplexed.
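The grouping of code bits into frequency selections might be sketched like this (a hypothetical channel plan; the group size, base frequency, and channel spacing are invented for illustration):

```python
# Sketch: derive a frequency-hop schedule from a pseudo-random code by
# splitting it into fixed-size groups, each indexing one of 2^k channels.

def hop_schedule(code_bits, group_size, base_hz, spacing_hz):
    """Each group of code bits, read as a binary number, picks a channel."""
    freqs = []
    for i in range(0, len(code_bits) - group_size + 1, group_size):
        idx = int("".join(map(str, code_bits[i:i + group_size])), 2)
        freqs.append(base_hz + idx * spacing_hz)
    return freqs

code = [1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1]
# 3-bit groups -> one of 8 channels per hop, 1 MHz apart from a 2.4 GHz base
print(hop_schedule(code, 3, 2_400_000_000, 1_000_000))
# -> [2405000000, 2401000000, 2406000000, 2403000000]
```

A receiver knowing the same code derives the same schedule and follows the transmitter from channel to channel; an eavesdropper without the code sees only brief bursts scattered across the band.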
A key military advantage of spread spectrum techniques is that with a high bandwidth the signals are hard to jam. They are also hard to detect if the inverse code is not known, and hence ascertaining which frequency or frequencies to jam is very hard.
The Effect of Latency on Throughput (But Not on Capacity!)
The capacity of a channel is defined as C = B log2(1 + S/N), where B is the bandwidth in Hz, S is the signal power, and N is the noise power (the log is to base 2). Capacity is a theoretical physical limit, measured in bits/second. If we use this physical channel to run a digital channel over it, then the better the modulation and coding schemes we use, the closer the bit rate (bits/second) the digital channel achieves comes to the physical capacity of the channel. If we could design a perfect coding scheme, we would be able to achieve the theoretical limit.
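As a quick numeric sanity check of the formula (the example figures are arbitrary, chosen to be roughly those of a telephone line):

```python
# Shannon capacity C = B log2(1 + S/N), with SNR as a linear power ratio.
from math import log2

def capacity(bandwidth_hz, snr):
    """Theoretical channel capacity in bits/second."""
    return bandwidth_hz * log2(1 + snr)

# A 3 kHz channel with an SNR of 1000 (i.e. 30 dB):
print(round(capacity(3000, 1000)))  # -> 29902 bits/second
```

Note that capacity grows only logarithmically with SNR but linearly with bandwidth, which is part of why spreading a signal over more bandwidth is attractive.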
The latency of a channel is defined as the time taken for a symbol to travel from the transmitter to the receiver along the channel. The physics of the channel (e.g. physical length the symbols have to travel, what you use to carry them [e.g. sound, light, changes in voltage]) define a lower limit on the latency of the channel.
The two physical quantities are therefore independent.
If you are using Automatic Repeat Request (ARQ) on a channel, there will be a window size. The window size specifies the number of symbols (in practice, packets) you may have travelling through the channel, unacknowledged by the receiver, at any one time. If the window size is 1, then the transmitter may transmit only one packet, and must then wait for the acknowledgement (ACK) to return from the receiver before another packet can be transmitted. With a window size of 2, the transmitter may send two packets, and then must wait for an ACK before being allowed to transmit another, and so on.
The time taken for a packet to travel from the transmitter to the receiver is the delay of the channel, d. The time taken for the corresponding ACK to come back is the same, d. Hence, with a window size of 1, we must wait 2d between sending one packet and being allowed to send the next. Our throughput is therefore one packet every 2d seconds.
Hence, if the delay of a channel is very large, the throughput (bits/second) with a small window size would be quite low. The physical capacity of the channel may be much greater, but because of the small window we are artificially constraining the "capacity" (throughput) of the digital channel that we have layered on top of it.
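The window-limited throughput can be computed directly (a simplified model that ignores transmission and processing time and assumes the window is the only constraint; the packet size and delay are invented, roughly those of a geostationary satellite hop):

```python
# Windowed ARQ throughput: at most `window` packets per round trip of 2*d.

def arq_throughput(window, packet_bits, one_way_delay_s):
    """Bits/second achievable with a given window and one-way delay d."""
    return window * packet_bits / (2 * one_way_delay_s)

# A 12000-bit packet over a link with a 250 ms one-way delay:
print(arq_throughput(1, 12000, 0.25))   # -> 24000.0 bits/second
print(arq_throughput(10, 12000, 0.25))  # -> 240000.0 bits/second
```

With window size 1 the link delivers only 24 kbit/s no matter how large its physical capacity is; growing the window recovers throughput until some other limit (the capacity itself, or receiver buffering) takes over.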
Therefore, whilst latency does not influence capacity, it can severely influence throughput when using a protocol with a windowed flow control scheme.