

1. Signals and systems and their mathematical models
1.1 Introduction
The development of electrical engineering as a scientific discipline concerned with the physical knowledge of electricity and magnetism dates back to about the first half of the 18th century, but the first references appeared as early as around the year 1600, when William Gilbert (1544–1603), Queen Elizabeth I’s personal physician, published a book entitled “On the Magnet and Magnetic Bodies and on the Great Magnet the Earth” (De Magnete, Magneticisque Corporibus, et de Magno Magnete Tellure). He originated terms such as electricity, electric power and electric attraction. The first proof that electricity and magnetism are related was given in 1820 by the Danish physicist and chemist Hans Christian Oersted (1777–1851). The English scientist Michael Faraday (1791–1867) and the American physicist Joseph Henry (1797–1878) discovered, independently of each other, electromagnetic induction. Their discoveries led Samuel Morse to the construction of the electromagnetic telegraph. Alexander Graham Bell (1847–1922), an American of Scottish descent, was granted the first American patent for the telephone. The Swedish inventor Lars Magnus Ericsson (1846–1926) and his wife Hilda are considered the first to have built a telephone into a car. Legend has it that they pulled up at an open-wire telephone line and connected long poles to it, from which they led cables to a telephone. While Magnus rotated the dynamo, Hilda spoke to her friend through the operator. Until World War II (1939–1945), development took place predominantly in heavy-current electrical engineering (generation and distribution of electric power, electric machines for driving, heating, illumination, etc.). During World War II, communication and radar technologies began to be developed. After the war ended, analog and digital computers and numerical mathematical methods came into use.
In 1965 there was a radical turning point in discrete signal processing, when Cooley and Tukey published their work on spectral analysis with the FFT (Fast Fourier Transform) algorithm [COO-65]. In 1972 Intel manufactured the first 8-bit microprocessor, the i8008. In 1979 the first digital signal processor, the i2920, was introduced, and in 1990 the first GSM mobile network was put into operation.
In all these circuits and devices we come across physical quantities that provide us with the required information. In the general sense, information is any communication, message or data that is of some interest to us. Information itself is immaterial. The physical quantity which enables transferring or storing information is generally referred to as a signal. A signal is the physical carrier of information. It can be, for example, electric voltage, electric current or power, pressure, temperature, etc. For the transfer of information, electrical and optical signals are mostly used. To store information, CD and DVD media, USB flash drives, RAM memories, etc. are used.
Signals in the real world are always random signals. To describe events taking place in real systems, their mathematical models need to be created. Fig. 1.1a gives a part of a rectangular periodic signal, which can be seen, for example, on oscilloscope screens. Fig. 1.1b shows its mathematical model, which can be defined as follows

s(t) = S for kT1 ≤ t < kT1 + T1⁄2 ˄ k ∈ ℤ,
s(t) = 0 for kT1 + T1⁄2 ≤ t < (k + 1)T1 ˄ k ∈ ℤ. (1.1)
There is a kind of convention that the instantaneous signal values (voltage u(t), current i(t), etc.) are denoted in lower-case italic type while the constant signal values are written in capital type (constant U, magnitude Sm, etc.). The conjunction symbol ˄ means that the signal values hold for times given by all conditions.
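The rectangular periodic model can be sampled numerically; the following Python sketch is illustrative only (the value S = 1, period T1 = 2 and the 50 % duty cycle are assumptions, not values from the text):

```python
import numpy as np

def rect_periodic(t, S=1.0, T1=2.0):
    """Model of a rectangular periodic signal: S over the first half
    of each period, 0 over the second half (illustrative duty cycle)."""
    t = np.asarray(t, dtype=float)
    return np.where((t % T1) < T1 / 2, S, 0.0)

t = np.linspace(0.0, 4.0, 9)          # samples spanning two periods
print(rect_periodic(t))               # alternating blocks of S and 0
```

The modulo operation enforces the periodicity s(t + T1) = s(t) directly, so the piecewise conditions only need to be written for a single period.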
1.2 Classification of signals
Signals (their mathematical models) can be classified according to their certain properties. The basic classification of signals according to the randomness of their waveform is into
– deterministic signals
– random signals
Deterministic signals (determined, regular) are signals that have a known value of the dependent variable for every value of the independent variable. Their waveform is defined by a known function or sequence, e.g. cos t, ln t, etc. Deterministic signals are further classified into
– periodic deterministic signals
– non-periodic deterministic signals
A signal (function) s(t) is periodic if there is a positive real number T1, i.e. T1 ∈ ℝ and T1 > 0, such that for all real t ∈ ℝ it holds that
s(t + T1) = s(t) (1.2)
This periodic signal is then denoted sp(t) or simply s(t). The lowest value of T1 for which condition (1.2) is satisfied is called the fundamental period. Periodic signals can further be classified into
– harmonic signals
– non-harmonic signals
The harmonic signal is defined using the cosine or the sine function. We will use the cosine:

s(t) = Sm cos(ω1t + φ1) (1.3)
The value Sm is called the amplitude; the phase difference between the origin of the cosine function and the coordinate origin is the initial phase φ1. The frequency f1 = 1⁄T1. The angular frequency ω1 = 2πf1 = 2π⁄T1. The waveform of a harmonic signal can be seen in Fig. 1.2.
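The relations between T1, f1 and ω1, and the periodicity condition (1.2), can be verified numerically; the amplitude, period and initial phase below are illustrative assumptions:

```python
import numpy as np

Sm, T1, phi1 = 2.0, 0.01, np.pi / 4    # illustrative amplitude, period, initial phase
f1 = 1.0 / T1                          # frequency
w1 = 2 * np.pi * f1                    # angular frequency

def s(t):
    """Harmonic signal s(t) = Sm*cos(w1*t + phi1)."""
    return Sm * np.cos(w1 * t + phi1)

# Periodicity check over one period of samples: s(t + T1) == s(t)
t = np.linspace(0.0, T1, 100, endpoint=False)
print(np.allclose(s(t + T1), s(t)))    # True
```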
Random (irregular, stochastic) signals are signals for which their value cannot be determined accurately as with the deterministic signals. It can only be estimated with a certain probability. The properties of random signals and random processes are established using the cumulative distribution function (cdf), probability density function (pdf) and the so-called moments (mean value, variance, skewness, kurtosis, etc.). Random processes can be divided into
– stationary random processes
– non-stationary random processes
The parameters of a stationary random process do not depend on the choice of the origin of the time axis. Stationary random processes are further subdivided into
– ergodic random processes
– non-ergodic random processes
Random processes are represented by means of a set of realizations (measurements) of a random process. With ergodic random processes, their statistical parameters can be determined from a single realization. In practice, when periodic signals are measured, the mean and the effective values are often used. The calculation of the mean value of a periodic signal can be seen in Fig. 1.3. The mean value is defined as

Sa = (1⁄T1) ∫_0^T1 s(t) dt (1.4)
The subscript ’a’ is derived from ’average’. The mean value Sa is equal to the height of a rectangle whose area is the same as the area under the curve s(t). The mean value of a harmonic signal over one period T1 is Sa = 0, and therefore the mean value of a harmonic signal over the positive half of a period has been introduced

Sa = 2Sm⁄π (1.5)

where Sm is the amplitude of the harmonic signal s(t).
Figure 1.3: How to calculate the mean value of the periodic signal s(t)
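Both statements about the mean value can be checked with a simple numerical average of samples taken over the relevant interval; this sketch uses a sine (positive on its first half-period) with illustrative amplitude and period:

```python
import numpy as np

Sm, T1 = 1.0, 2 * np.pi                           # illustrative amplitude and period
t_full = np.linspace(0.0, T1, 100000, endpoint=False)
t_half = np.linspace(0.0, T1 / 2, 100000, endpoint=False)

def s(t):
    return Sm * np.sin(2 * np.pi * t / T1)        # positive on [0, T1/2]

Sa_full = s(t_full).mean()                        # mean over the full period -> 0
Sa_half = s(t_half).mean()                        # mean over the positive half-period
print(round(Sa_half, 4))                          # prints 0.6366, i.e. 2*Sm/pi
```

The sample average approximates the integral in (1.4); the half-period result converges to 2Sm⁄π from (1.5).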
The calculation of the effective value of a periodic signal is shown in Fig. 1.4. The effective value is defined by the relation

Sef = √((1⁄T1) ∫_0^T1 s²(t) dt) (1.6)
The calculation is similar to that of the mean value, with the difference that the waveform s(t) is first squared and, at the end, the square root is taken. The effective value of a harmonic signal equals

Sef = Sm⁄√2 (1.7)

where Sm is again the amplitude (maximum value) of the harmonic signal.
Figure 1.4: How to calculate the effective value of the periodic signal s(t)
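The square–mean–root order of operations in (1.6), and the harmonic-signal result (1.7), can be reproduced on samples of one period (amplitude and period below are illustrative):

```python
import numpy as np

Sm, T1 = 3.0, 1.0                                 # illustrative amplitude and period
t = np.linspace(0.0, T1, 100000, endpoint=False)
s = Sm * np.cos(2 * np.pi * t / T1)

Sef = np.sqrt(np.mean(s ** 2))                    # square, then mean, then root
print(round(Sef, 4))                              # prints 2.1213, i.e. Sm/sqrt(2)
```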
1.3 Basic operations with continuous-time signals
We will divide the basic operations with continuous-time signals into two groups. The first group includes operations with one signal: changing the time scale, inverting the time axis, shifting, inverting the time axis with shifting, signal amplification and attenuation. The other group contains operations with two signals: addition, subtraction, multiplication, and division of two signals, convolution and correlation.
1.3.1 Shift in time
Shift in time (Fig. 1.5) is an operation with one signal which assigns to the signal s(t) the signal s(t − t0), where t0 is a real constant, t0 ∈ ℝ. For t0 > 0 the signal is delayed. Examples of a signal delay are the transmission from the TV camera in the studio to the TV set at home, signal transmission over a satellite, etc.
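A time shift can be expressed as function composition, as in this sketch (the causal exponential signal and the delay of 2 s are illustrative assumptions):

```python
import numpy as np

def s(t):
    """Illustrative causal signal: e^{-t} for t >= 0, else 0."""
    t = np.asarray(t, dtype=float)
    return np.where(t >= 0, np.exp(-t), 0.0)

t0 = 2.0                                          # illustrative delay, t0 > 0

def y(t):
    """Delayed signal y(t) = s(t - t0)."""
    return s(np.asarray(t, dtype=float) - t0)

print(float(s(0.0)), float(y(t0)))                # both 1.0: y repeats s, t0 later
```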
1.3.2 Inversion of time axis
Inverting the time axis (Fig. 1.6) is an operation in which a signal s(t) is replaced with the signal s(−t). It means that the time axis is rotated about the origin.
1.3.3 Inversion of time axis with shift
Inversion of the time axis with shift (Fig. 1.7) assigns to a signal s(t) the signal s(t0 − t), where t0 ∈ ℝ. This operation is used in convolution, where one signal remains unchanged while the other has its time axis rotated and is gradually shifted with respect to the first signal.
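Both variants of inversion can be checked pointwise: s(−t) evaluated at −t equals s(t), and s(t0 − t) evaluated at t equals s at the mirrored point t0 − t. The ramp signal and t0 = 0.5 below are illustrative assumptions:

```python
import numpy as np

def s(t):
    """Illustrative ramp signal: s(t) = t on [0, 1), 0 elsewhere."""
    t = np.asarray(t, dtype=float)
    return np.where((t >= 0) & (t < 1), t, 0.0)

t0 = 0.5                                          # illustrative shift
inverted = lambda t: s(-np.asarray(t, dtype=float))       # s(-t)
inv_shift = lambda t: s(t0 - np.asarray(t, dtype=float))  # s(t0 - t)

print(float(inverted(-0.25)), float(inv_shift(0.25)))     # both equal s(0.25)
```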
1.3.4 Signal amplification, attenuation and inversion
Signal magnitude can be changed using a constant a, a ∈ ℝ, which produces the signal a·s(t): the signal can be amplified (a > 1), attenuated (0 ≤ a < 1) or inverted (a < 0). These operations can be seen in Fig. 1.8.
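On sampled data, all three cases reduce to multiplying the sample array by the constant a; the signal and the three constants below are illustrative:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 101)
s = np.sin(2 * np.pi * t)                         # illustrative signal samples

amplified = 2.0 * s                               # a > 1: amplification
attenuated = 0.5 * s                              # 0 <= a < 1: attenuation
inverted = -1.0 * s                               # a < 0: inversion

print(np.allclose(inverted, -s))                  # True
```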
1.3.5 Changing the time scale
Changing the time scale (Fig. 1.9) is an operation by means of which a signal s(t) is replaced with the signal s(mt), m ∈ ℝ, m > 0. If 0 < m < 1, we are concerned with time expansion (extension of the time scale); if m > 1, it is time compression (reduction of the time scale). An example of this operation is changing the speed of revolutions of a gramophone or tape recorder (with the resulting change in voice pitch, etc.).
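The compression case can be checked pointwise: s(mt) at time t takes the value that s originally had at time mt, so for m > 1 events occur earlier. The sine signal and m = 2 are illustrative assumptions:

```python
import numpy as np

def s(t):
    return np.sin(2 * np.pi * np.asarray(t, dtype=float))  # illustrative signal, period 1 s

m = 2.0                                           # m > 1: time compression
compressed = lambda t: s(m * np.asarray(t, dtype=float))   # s(mt), period 1/m

print(np.isclose(float(compressed(0.25)), float(s(0.5))))  # True
```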
1.3.6 Addition, subtraction, multiplication, and division of two signals
Let there be two signals s1(t) and s2(t). Their sum is

s(t) = s1(t) + s2(t) (1.8)
This means that the values of the two signals are summed for the same t. Similarly, the product of two signals is

s(t) = s1(t) · s2(t) (1.9)
Fig. 1.10 gives an example of the product and sum of two signals. Similarly, the difference and quotient of two signals can be obtained. The operation of multiplying two signals is also used to select a short section of a long signal s(t) using a window signal w(t). An example of the operation of window multiplication can be seen in Fig. 1.11. Various types of window are used for this operation: rectangular, Hann, Hamming, Kaiser, Blackman, Bartlett, and others. We come across multiplication of two signals in radio devices, e.g. in mobile-phone modulators and demodulators, etc.
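Window multiplication can be sketched with one of the listed windows, here the Hann window; the signal section and its length are illustrative assumptions:

```python
import numpy as np

N = 64
n = np.arange(N)
section = np.cos(2 * np.pi * 5 * n / N)           # illustrative signal section
window = np.hanning(N)                            # Hann window, w(0) = w(N-1) = 0

windowed = section * window                       # sample-by-sample product (1.9)
print(abs(windowed[0]) < 1e-12, abs(windowed[-1]) < 1e-12)   # True True
```

Tapering the section to zero at both ends is exactly why such windows are preferred over the rectangular one before spectral analysis.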
1.3.7 Convolution
The convolution of two continuous-time signals s(t) and h(t) (Fig. 1.12) is an operation which is defined (assuming that the integral converges) by the relation

y(t) = s(t) * h(t) = ∫_−∞^+∞ s(τ) h(t − τ) dτ (1.10)
In Fig. 1.12 we see an example of obtaining the convolution value for one shift as well as the result of the convolution operation. Convolution belongs to the basic operations with two signals. If, for example, h(t) represents the impulse response of a frequency filter, then the convolution (1.10) performs the operation of frequency filtering of the continuous signal s(t) by a frequency filter with the impulse response h(t).
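The integral (1.10) can be approximated on a uniform grid by a discrete convolution scaled by the sampling step; the rectangular input pulse and the decaying-exponential impulse response below are illustrative assumptions:

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 5.0, dt)
s = (t < 1.0).astype(float)                       # illustrative rectangular pulse
h = np.exp(-t)                                    # illustrative impulse response

# Riemann-sum approximation of y(t) = ∫ s(τ) h(t − τ) dτ
y = np.convolve(s, h)[:t.size] * dt
print(round(y[1000], 3))                          # value at t = 1 s, near 1 - e^{-1}
```

For this pair the exact value at t = 1 s is 1 − e⁻¹ ≈ 0.632, which the discrete sum reproduces to within the step size.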
1.3.8 Correlation
The operation of correlation is similar to that of convolution. Correlation indicates the measure of similarity of the waveforms of two signals. The correlation R12(t) of two continuous-time signals s1(t) and s2(t) (assuming that the integral converges) is equal to (Fig. 1.13)

R12(t) = ∫_−∞^+∞ s1(τ) s2(τ + t) dτ (1.11)
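As a sketch of correlation measuring similarity, the cross-correlation of a pulse with a delayed copy of itself peaks at the delay; the Gaussian pulses, their 0.5 s offset and the grid are all illustrative assumptions:

```python
import numpy as np

dt = 0.01
tau = np.arange(0.0, 10.0, dt)
s1 = np.exp(-((tau - 3.0) ** 2))                  # illustrative pulse centred at 3 s
s2 = np.exp(-((tau - 3.5) ** 2))                  # same pulse delayed by 0.5 s

# Discrete approximation of R12(t) = ∫ s1(τ) s2(τ + t) dτ over a range of lags
lags = np.arange(-100, 101)
R = np.array([np.sum(s1 * np.roll(s2, -lag)) * dt for lag in lags])
best = lags[np.argmax(R)] * dt
print(best)                                       # prints 0.5, the delay between the pulses
```

The peak of R12 recovers the relative delay, which is the basis of applications such as radar ranging and time-delay estimation.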