
GATE

CHAPTER-WISE SOLVED PAPERS INSTRUMENTATION ENGINEERING

Dr. Lalita Gupta Associate Professor, Department of Electronics and Communication Engineering, Maulana Azad National Institute of Technology, Bhopal

Dr. Anil Kumar Maini

Outstanding Scientist and former Director, Laser Science and Technology Centre, DRDO, New Delhi

Engineering Mathematics Contributed by

Varsha Agrawal Senior Scientist, Laser Science and Technology Centre DRDO, New Delhi

Nakul Maini Analyst Ericsson (India)

GATE INSTRUMENTATION ENGINEERING

Chapter-wise Solved Papers 2000-2020

Copyright © 2020 by Wiley India Pvt. Ltd., 4436/7, Ansari Road, Daryaganj, New Delhi-110002.

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or scanning, without the written permission of the publisher.

Limits of Liability: While the publisher and the author have used their best efforts in preparing this book, Wiley and the author make no representation or warranties with respect to the accuracy or completeness of the contents of this book, and specifically disclaim any implied warranties of merchantability or fitness for any particular purpose. There are no warranties which extend beyond the descriptions contained in this paragraph. No warranty may be created or extended by sales representatives or written sales materials.

Disclaimer: The contents of this book have been checked for accuracy. Since deviations cannot be precluded entirely, Wiley or its author cannot guarantee full agreement. As the book is intended for educational purpose, Wiley or its author shall not be responsible for any errors, omissions or damages arising out of the use of the information contained in the book. This publication is designed to provide accurate and authoritative information with regard to the subject matter covered. It is sold on the understanding that the Publisher is not engaged in rendering professional services.

Other Wiley Editorial Offices:

John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA

Wiley-VCH Verlag GmbH, Pappelallee 3, D-69469 Weinheim, Germany

John Wiley & Sons Australia Ltd, 42 McDougall Street, Milton, Queensland 4064, Australia

John Wiley & Sons (Asia) Pte Ltd, 1 Fusionopolis Walk, #07-01 Solaris, South Tower, Singapore 138628

John Wiley & Sons Canada Ltd, 22 Worcester Road, Etobicoke, Ontario, Canada, M9W 1L1

Edition: 2020

ISBN: 978-81-265-5869-8

ISBN: 978-81-265-8958-6 (ebk)

www.wileyindia.com

Printed at:

NOTE TO THE ASPIRANTS

About the Examination

Graduate Aptitude Test in Engineering (GATE) is an examination conducted jointly by the Indian Institute of Science (IISc), Bangalore and the seven Indian Institutes of Technology (at Bombay, Delhi, Guwahati, Kanpur, Kharagpur, Madras and Roorkee) on behalf of the National Coordination Board (NCB)-GATE, Department of Higher Education, Ministry of Human Resource Development (MHRD), Government of India. Qualifying in GATE is a mandatory requirement for seeking admission and/or financial assistance to:

• Master’s programs and direct Doctoral programs in Engineering/Technology/Architecture.

• Doctoral programs in relevant branches of Science, in institutions supported by the MHRD and other Government agencies. Even in some colleges and institutions which admit students without the MHRD scholarship/assistantship, the GATE qualification is mandatory.

• Many Public Sector Undertakings (PSUs) have been using the GATE score in their recruitment process.

Graduate Aptitude Test in Engineering (GATE) is basically an examination of the comprehensive understanding of the candidates in various undergraduate subjects in Engineering/Technology/Architecture and post-graduate level subjects in Science. GATE is conducted for 24 subjects (also referred to as “papers”), and the examinations are generally held at the end of January or early February of the year. The GATE examination centres are spread across different cities in India, as well as in six cities outside India. The examination is purely a Computer Based Test (CBT). The GATE score reflects the relative performance level of the candidate in a particular subject, quantified based on several years of examination data.

Validity

The GATE score is valid for THREE YEARS from the date of announcement of the results.

Eligibility for GATE Instrumentation Engineering

Bachelor’s degree holders in Instrumentation Engineering (4 years after 10+2 or 3 years after B.Sc./Diploma in Electrical/ Electronics/Instrumentation).

About the Book

This book, GATE Chapter-wise Solved Papers: Instrumentation Engineering, is designed as a must-have resource for students preparing for M.E./M.Tech./M.S./Ph.D. admissions in Instrumentation Engineering. It offers chapter-wise solved previous years’ GATE Instrumentation Engineering questions for the years 2000-2020. This book will help students become well-versed with the pattern of the examination, the level of questions asked and the concept distribution in questions.

Key Features of the Book

• Complete solutions provided for every question, tagged for the topic on which the question is based.

• Chapter-wise analysis of GATE questions provided at the beginning of the book to make students familiar with the chapter-wise marks distribution and the weightage of each chapter.

• Topic-wise analysis of GATE questions provided at the beginning of each chapter to make students familiar with important topics on which questions are commonly asked.

• Unique scratch code in the book that provides access to

• Free online test with analytics.

• 7-Day free subscription for topic-wise GATE tests.

• Instant correction report with remedial action.

These features will help students develop problem-solving skills and focus in their preparation on important chapters and topics.

GATE Instrumentation Engineering: Chapter-Wise Question Distribution Analysis 2010–2020

The table given below depicts the chapter-wise question and marks distribution of previous years’ (2010–2020) GATE Instrumentation Engineering papers. This will help students understand the relative weightage of each chapter in terms of the number of questions asked and thus bring focus to their preparation.

CHAPTER 1

Engineering Mathematics

SYLLABUS

Linear Algebra: Matrix algebra, systems of linear equations, eigenvalues and eigenvectors.

Calculus: Mean value theorems, theorems of integral calculus, partial derivatives, maxima and minima, multiple integrals, Fourier series, vector identities, line, surface and volume integrals, Stokes, Gauss and Green’s theorems.

Differential Equations: First order equation (linear and nonlinear), higher order linear differential equations with constant coefficients, method of variation of parameters, Cauchy’s and Euler’s equations, initial and boundary value problems, solution of partial differential equations: variable separable method.

Analysis of Complex Variables: Analytic functions, Cauchy’s integral theorem and integral formula, Taylor’s and Laurent’s series, residue theorem, solution of integrals.

Probability and Statistics: Sampling theorems, conditional probability, mean, median, mode and standard deviation, random variables, discrete and continuous distributions: normal, Poisson and binomial distributions.

Numerical Methods: Matrix inversion, solutions of non-linear algebraic equations, iterative methods for solving differential equations, numerical integration, regression and correlation analysis.

CHAPTER ANALYSIS

IMPORTANT FORMULAS

Linear Algebra

1. Types of Matrices

(a) Row matrix: A matrix having only one row is called a row matrix or a row vector. Therefore, for a row matrix, m = 1.

(b) Column matrix: A matrix having only one column is called a column matrix or a column vector. Therefore, for a column matrix, n = 1.

(c) Square matrix: A matrix in which the number of rows is equal to the number of columns, say n, is called a square matrix of order n.

(d) Diagonal matrix: A square matrix is called a diagonal matrix if all the elements except those in the leading diagonal are zero, that is aij = 0 for all i ≠ j.

(e) Scalar matrix: A matrix A = [aij]n × n is called a scalar matrix if (i) aij = 0 for all i ≠ j and (ii) aii = c for all i, where c ≠ 0.

(f) Identity or unit matrix: A square matrix A = [aij] is called an identity or unit matrix if (i) aij = 0 for all i ≠ j and (ii) aii = 1 for all i.

(g) Null matrix: A matrix whose all the elements are zero is called a null matrix or a zero matrix.

(h) Upper triangular matrix: A square matrix A = [aij] is called an upper triangular matrix if aij = 0 for i > j.

(i) Lower triangular matrix: A square matrix A = [aij] is called a lower triangular matrix if aij = 0 for i < j.

2. Types of a Square Matrix

(a) Nilpotent matrix: A square matrix A is called a nilpotent matrix if there exists a positive integer n such that A^n = 0. If n is the least positive integer such that A^n = 0, then n is called the index of the nilpotent matrix A.

(b) Symmetric matrix: It is a square matrix in which aij = aji for all i and j. A symmetric matrix is necessarily square. If A is symmetric, then AT = A.

(c) Skew-symmetric matrix: It is a square matrix in which aij = −aji for all i and j. In a skew-symmetric matrix, all elements along the diagonal are zero.

(d) Hermitian matrix: It is a square matrix A in which the (i, j)th element is equal to the complex conjugate of the (j, i)th element, that is, aij = āji for all i and j.

(e) Skew-Hermitian matrix: It is a square matrix A = [aij] in which aij = −āji for all i and j.

(f) Orthogonal matrix: A square matrix A is called an orthogonal matrix if AAT = ATA = I.
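As an illustrative aside (not part of the original formula list), the orthogonality property in item (f) can be checked numerically. A 2-D rotation matrix is a standard example of an orthogonal matrix; the NumPy sketch below verifies AAT = ATA = I:

```python
import numpy as np

# A 2-D rotation matrix is a classic orthogonal matrix: A A^T = A^T A = I.
theta = np.pi / 6
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(A @ A.T, np.eye(2)))  # True
print(np.allclose(A.T @ A, np.eye(2)))  # True
```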

3. Equality of a Matrix

Two matrices A = [aij]m × n and B = [bij]x × y are equal if

(a) m = x, that is the number of rows in A equals the number of rows in B

(b) n = y, that is the number of columns in A equals the number of columns in B

(c) aij = bij for i = 1, 2, 3, …, m and j = 1, 2, 3, …, n

4. Some of the important properties of matrix addition are:

(a) Commutativity: If A and B are two m × n matrices, then A + B = B + A, that is matrix addition is commutative.

(b) Associativity: If A, B and C are three matrices of the same order, then

(A + B) + C = A + (B + C ) that is matrix addition is associative.

(c) Existence of identity: The null matrix is the identity element for matrix addition. Thus, A + O = A = O + A

(d) Existence of inverse: For every matrix A = [aij]m × n, there exists a matrix [−aij]m × n, denoted by −A, such that A + (−A) = O = (−A) + A

(e) Cancellation laws: If A, B and C are matrices of the same order, then A + B = A + C ⇒ B = C and B + A = C + A ⇒ B = C

5. Some important properties of matrix multiplication are:

(a) Matrix multiplication is not commutative.

(b) Matrix multiplication is associative, that is (AB)C = A(BC).

(c) Matrix multiplication is distributive over matrix addition, that is A(B + C) = AB + AC

(d) If A is an m × n matrix, then ImA = A = AIn

(e) The product of two matrices can be the null matrix while neither of them is the null matrix.
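A small numerical illustration of properties (a) and (e), using NumPy (an aside, not from the original text): the two matrices below do not commute, and their product AB is the null matrix although neither factor is null.

```python
import numpy as np

A = np.array([[0, 1],
              [0, 0]])
B = np.array([[1, 0],
              [0, 0]])

# Matrix multiplication is not commutative: A @ B differs from B @ A
print(np.array_equal(A @ B, B @ A))              # False

# A @ B is the null matrix although neither A nor B is null
print(np.array_equal(A @ B, np.zeros((2, 2))))   # True
```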

6. Some of the important properties of scalar multiplication are:

(a) k(A + B) = kA + kB

(b) (k + l) A = kA + lA

(c) (kl) ⋅ A = k(lA) = l(kA)

(d) (−k) ⋅ A = −(kA) = k(−A)

(e) 1 ⋅ A = A

(f) (−1) ⋅ A = −A

Here, A and B are two matrices of same order and k and l are constants.

If A is a matrix and A2 = A, then A is called an idempotent matrix. If A is a matrix and satisfies A2 = I, then A is called an involutory matrix.

7. Some of the important properties of transpose of a matrix are:

(a) For any matrix A, (AT)T = A

(b) For any two matrices A and B of the same order (A + B)T = AT + BT

(c) If A is a matrix and k is a scalar, then (kA)T = k(AT)

(d) If A and B are two matrices such that AB is defined, then (AB)T = BTAT

8. Some of the important properties of inverse of a matrix are:

(a) A−1 exists only when A is non-singular, that is, |A| ≠ 0.

(b) The inverse of a matrix is unique.

(c) Reversal law: If A and B are invertible matrices of the same order, then (AB)−1 = B−1A−1

(d) If A is an invertible square matrix, then (AT)−1 = (A−1)T

(e) The inverse of an invertible symmetric matrix is a symmetric matrix.

(f) Let A be a non-singular square matrix of order n. Then |adj A| = |A|^(n − 1)

(g) If A and B are non-singular square matrices of the same order, then adj (AB) = (adj B)(adj A)

(h) If A is an invertible square matrix, then adj AT = (adj A)T

(i) If A is a non-singular square matrix, then adj (adj A) = |A|^(n − 2) A

(j) If A is a non-singular matrix, then |A−1| = |A|−1, that is, |A−1| = 1/|A|

(k) Let A, B and C be three square matrices of the same type and A be a non-singular matrix. Then AB = AC ⇒ B = C and BA = CA ⇒ B = C
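As an illustrative aside, the reversal law (AB)−1 = B−1A−1 and the identity |adj A| = |A|^(n − 1) can be spot-checked numerically. In this NumPy sketch, adj A is computed via the identity adj A = |A| A−1, valid for non-singular A:

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
A = rng.random((n, n)) + n * np.eye(n)   # diagonally dominant => non-singular
B = rng.random((n, n)) + n * np.eye(n)

# Reversal law: (AB)^-1 = B^-1 A^-1
lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)
print(np.allclose(lhs, rhs))  # True

# |adj A| = |A|^(n-1), using adj A = |A| * A^-1 for non-singular A
adjA = np.linalg.det(A) * np.linalg.inv(A)
print(np.isclose(np.linalg.det(adjA), np.linalg.det(A) ** (n - 1)))  # True
```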

9. The rank of a matrix A is commonly denoted by rank (A). Some of the important properties of rank of a matrix are:

(a) The rank of a matrix is unique.

(b) The rank of a null matrix is zero.

(c) Every matrix has a rank.

(d) If A is a matrix of order m × n, then rank (A) ≤ min(m, n)

(e) If rank (A) = n, then every minor of order n + 1, n + 2, etc., is zero.

(f) If A is a non-singular matrix of order n × n, then rank (A) = n

(g) The rank of the unit matrix In of order n is n.

(h) A is a matrix of order m × n. If every kth order minor (k < m, k < n) is zero, then rank (A) < k

(i) A is a matrix of order m × n. If there is a minor of order (k < m, k < n) which is not zero, then rank (A) ≥ k

(j) If A is a non-zero column matrix and B is a non-zero row matrix, then rank (AB) = 1.

(k) The rank of a matrix is greater than or equal to the rank of every sub-matrix.

(l) If A is any n-rowed square matrix of rank n − 1, then adj A ≠ 0

(m) The rank of the transpose of a matrix is equal to the rank of the original matrix, that is,

rank (A) = rank (AT)

(n) The rank of a matrix does not change by pre-multiplication or post-multiplication with a non-singular matrix.

(o) If A and B are equivalent matrices (A ~ B), then rank (A) = rank (B).

(p) The rank of a product of two matrices cannot exceed the rank of either matrix.

rank (A × B) ≤ rank (A) and rank (A × B) ≤ rank (B)

(q) The rank of sum of two matrices cannot exceed sum of their ranks.

(r) Elementary transformations do not change the rank of a matrix.
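Several of these rank properties can be verified with NumPy's matrix_rank (an illustrative aside, not part of the original list):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6]])      # second row = 2 * first row => rank 1
B = np.array([[1, 0],
              [0, 1],
              [1, 1]])          # rank 2

print(np.linalg.matrix_rank(A))                                 # 1
# The rank of the transpose equals the rank of the matrix
print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T))   # True
# The rank of a product cannot exceed the rank of either factor
print(np.linalg.matrix_rank(A @ B) <= min(np.linalg.matrix_rank(A),
                                          np.linalg.matrix_rank(B)))  # True
```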

10. Determinants

Every square matrix can be associated with an expression or a number which is known as its determinant. If A = [aij] is a square matrix of order n, then the determinant of A is denoted by det A or |A|. If A = [a11] is a square matrix of order 1, then the determinant of A is defined as |A| = a11

If A = [a11 a12; a21 a22] is a square matrix of order 2, then the determinant of A is defined as

|A| = a11a22 − a12a21

If A = [aij] is a square matrix of order 3, then the determinant of A is defined (expanding along the first row) as

|A| = a11(a22a33 − a23a32) − a12(a21a33 − a23a31) + a13(a21a32 − a22a31)

11. Minors

The minor Mij of A = [aij] is the determinant of the square sub-matrix of order (n − 1) obtained by removing the ith row and jth column of the matrix A.

12. Cofactors

The cofactor Cij of A = [aij] is equal to (−1)^(i + j) times the determinant of the sub-matrix of order (n − 1) obtained by deleting the ith row and jth column of A.

13. Some of the important properties of determinants are:

(a) The sum of the products of the elements of any row or column of a square matrix A = [aij] of order n with their cofactors is always equal to its determinant:

Σ(i = 1 to n) aij Cij = |A| = Σ(j = 1 to n) aij Cij

(b) The sum of the products of the elements of any row or column of a square matrix A = [aij] of order n with the cofactors of the corresponding elements of another row or column is zero:

Σ(i = 1 to n) aij Cik = 0 (k ≠ j) and Σ(j = 1 to n) aij Ckj = 0 (k ≠ i)

(c) For a square matrix A = [aij] of order n, |A|=|AT|

(d) Consider a square matrix A = [aij] of order n ≥ 2 and B obtained from A by interchanging any two rows or columns of A; then |B| = −|A|.

(e) For a square matrix A = [aij] of order (n ≥ 2), if any two rows or columns are identical, then its determinant is zero, that is |A|= 0.

(f) If all the elements of a square matrix A = [aij] of order n are multiplied by a scalar k, then the determinant of new matrix is equal to k|A|.

(g) Let A be a square matrix such that each element of a row or column of A is expressed as the sum of two or more terms. Then |A| can be expressed as the sum of the determinants of two or more matrices of the same order.

(h) Let A be a square matrix and B be a matrix obtained from A by adding to a row or column of A a scalar multiple of another row or column of A, then |B|=|A|.

(i) Let A be a null square matrix of order n (≥ 2); then |A| = 0.

(j) Consider A = [aij] as a diagonal matrix of order n (≥ 2); then |A| = a11 × a22 × a33 × … × ann

(k) Suppose A and B are square matrices of the same order; then |AB| = |A| |B|
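The product rule |AB| = |A| |B| and the sign change on interchanging two rows lend themselves to a quick numerical check (illustrative NumPy sketch, not part of the original list):

```python
import numpy as np

A = np.array([[2., 1.],
              [5., 3.]])
B = np.array([[1., 4.],
              [2., 9.]])

# |AB| = |A| |B|
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))          # True

# Interchanging two rows flips the sign of the determinant
A_swapped = A[[1, 0], :]
print(np.isclose(np.linalg.det(A_swapped), -np.linalg.det(A)))  # True
```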

14. There are two cases that arise for homogeneous systems:

(a) Matrix A is non-singular, or |A| ≠ 0. The homogeneous system then has only the trivial solution, X = 0, that is, x1 = x2 = … = xn = 0.

(b) Matrix A is singular, or |A| = 0; then the system has infinitely many solutions. To find the solution when |A| = 0 (for a system in three unknowns x, y, z), put z = k (where k is any real number) and solve any two equations for x and y using the matrix method. The values obtained for x and y, together with z = k, constitute the solution of the system.

15. Methods to solve a non-homogeneous system of simultaneous linear equations are given below. Note the number of equations relative to the number of unknowns.

(a) Given that A is a non-singular matrix, the system of equations represented by AX = B has a unique solution, which can be calculated by X = A−1B

(b) If AX = B is a system in which the number of equations equals the number of unknowns, then three cases arise:

• If |A| ≠ 0, the system is consistent and has a unique solution given by X = A−1B

• If |A| = 0 and (adj A)B = O, the system is consistent and has infinitely many solutions.

• If |A| = 0 and (adj A)B ≠ O, the system is inconsistent.
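Two of the cases can be illustrated numerically (a NumPy sketch, an aside; the adjugate of the singular 2 × 2 matrix is entered by hand):

```python
import numpy as np

# Case 1: |A| != 0 -> unique solution X = A^-1 B
A = np.array([[2., 1.],
              [1., 3.]])
B = np.array([5., 10.])
X = np.linalg.solve(A, B)           # equivalent to A^-1 @ B
print(np.allclose(A @ X, B))        # True

# Case 2: |A| = 0 and (adj A)B = O -> consistent, infinitely many solutions
A2 = np.array([[1., 2.],
               [2., 4.]])           # singular: second row = 2 * first row
B2 = np.array([3., 6.])             # second equation = 2 * first equation
adjA2 = np.array([[ 4., -2.],
                  [-2.,  1.]])      # adjugate of A2, entered by hand
print(np.allclose(adjA2 @ B2, 0))   # True
```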

16. Cramer’s Rule

Suppose we have a system of n linear equations in n unknowns, written as AX = B with |A| ≠ 0. Then, by Cramer’s rule, the solution of the system of equations is given by

xi = |Ai| / |A|, i = 1, 2, …, n

where Ai is the matrix obtained by replacing the ith column of A with the column vector B.
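A minimal implementation of Cramer's rule (an illustrative sketch; the helper name cramer is ours, not from the text):

```python
import numpy as np

def cramer(A, B):
    """Solve A X = B by Cramer's rule; requires |A| != 0."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    D = np.linalg.det(A)
    X = np.empty(len(B))
    for i in range(len(B)):
        Ai = A.copy()
        Ai[:, i] = B          # replace ith column of A with B
        X[i] = np.linalg.det(Ai) / D
    return X

# 2x + y = 5, x + 3y = 10  ->  x = 1, y = 3
print(cramer([[2, 1], [1, 3]], [5, 10]))  # [1. 3.]
```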

17. Augmented Matrix

Consider a system of equations represented in matrix form as AX = B. The matrix [A : B], obtained by appending the column vector B to the matrix A, is called the augmented matrix.

18. Cayley–Hamilton Theorem

According to the Cayley–Hamilton theorem, every square matrix satisfies its own characteristic equation. Hence, if

λ^n + a1λ^(n − 1) + a2λ^(n − 2) + … + an = 0

is the characteristic equation of a matrix A of order n, then the matrix equation X^n + a1X^(n − 1) + a2X^(n − 2) + … + anI = 0 is satisfied by X = A.
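The theorem is easy to verify numerically for a 2 × 2 matrix, whose characteristic equation is λ² − (trace A)λ + |A| = 0 (illustrative NumPy sketch):

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])

# Characteristic equation of a 2x2 matrix: l^2 - (trace A) l + |A| = 0
tr, det = np.trace(A), np.linalg.det(A)

# Cayley-Hamilton: A must satisfy its own characteristic equation
residual = A @ A - tr * A + det * np.eye(2)
print(np.allclose(residual, 0))  # True
```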

19. Eigenvalues and Eigenvectors

If A = [aij]n × n is a square matrix of order n, then the vector equation AX = λX, where X is an unknown (column) vector and λ is an unknown scalar, is called an eigenvalue problem. To solve the problem, we need to determine the values of X and λ that satisfy this vector equation. Note that the zero vector (that is, X = 0) is not of interest. A value of λ for which the above equation has a solution X ≠ 0 is called an eigenvalue or characteristic value of the matrix A. The corresponding solutions X ≠ 0 of the equation are called the eigenvectors or characteristic vectors of A corresponding to that eigenvalue λ. The set of all the eigenvalues of A is called the spectrum of A. The largest of the absolute values of the eigenvalues of A is called the spectral radius of A. The sum of the elements of the principal diagonal of a matrix A is called the trace of A.

20. Properties of Eigenvalues and Eigenvectors

Some of the main characteristics of eigenvalues and eigenvectors are discussed in the following points:

(a) If λ1, λ2, λ3, …, λn are the eigenvalues of A, then kλ1, kλ2, kλ3, …, kλn are the eigenvalues of kA, where k is a constant scalar quantity.

(b) If λ1, λ2, λ3, …, λn are the eigenvalues of a non-singular matrix A, then 1/λ1, 1/λ2, 1/λ3, …, 1/λn are the eigenvalues of A−1.

(c) If λ1, λ2, λ3, …, λn are the eigenvalues of A, then λ1^k, λ2^k, λ3^k, …, λn^k are the eigenvalues of A^k.

(d) If λ1, λ2, λ3, …, λn are the eigenvalues of a non-singular matrix A, then |A|/λ1, |A|/λ2, |A|/λ3, …, |A|/λn are the eigenvalues of adj A.

(e) The eigenvalues of a matrix A are equal to the eigenvalues of AT

(f) The maximum number of distinct eigenvalues is n, where n is the size of the matrix A

(g) The trace of a matrix is equal to the sum of the eigenvalues of a matrix.

(h) The product of the eigenvalues of a matrix is equal to the determinant of that matrix.

(i) If A and B are similar matrices, that is, A = P−1BP for some invertible matrix P, then A and B have the same eigenvalues.

(j) If A and B are two matrices of same order, then the matrices AB and BA have the same eigenvalues.

(k) The eigenvalues of a triangular matrix are equal to the diagonal elements of the matrix.
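Several of these properties can be confirmed numerically (illustrative NumPy sketch, an aside): the trace equals the sum of the eigenvalues, the determinant equals their product, and AT has the same eigenvalues as A.

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])
eigvals = np.linalg.eigvals(A)   # eigenvalues are 5 and 2

# Trace = sum of eigenvalues; determinant = product of eigenvalues
print(np.isclose(eigvals.sum(), np.trace(A)))         # True
print(np.isclose(eigvals.prod(), np.linalg.det(A)))   # True

# Eigenvalues of A^T equal those of A
print(np.allclose(np.sort(np.linalg.eigvals(A.T)), np.sort(eigvals)))  # True
```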

Calculus

21. Rolle’s Mean Value Theorem

Consider a real-valued function defined in the closed interval [a, b], such that

(a) It is continuous on the closed interval [a, b].

(b) It is differentiable on the open interval (a, b).

(c) f(a) = f(b).

Then, according to Rolle’s theorem, there exists a real number c ∈ (a, b) such that f ′(c) = 0.

22. Lagrange’s Mean Value Theorem

Consider a function f(x) defined in the closed interval [a, b], such that

(a) It is continuous on the closed interval [a, b].

(b) It is differentiable on the open interval (a, b).

Then, according to Lagrange’s mean value theorem, there exists a real number c ∈ (a, b) such that

f ′(c) = [f(b) − f(a)] / (b − a)
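A worked example of Lagrange's mean value theorem for f(x) = x² on [1, 3] (illustrative sketch, not from the text): the mean slope is (9 − 1)/(3 − 1) = 4, and f ′(c) = 2c = 4 gives c = 2, which indeed lies in (1, 3).

```python
# Lagrange's mean value theorem for f(x) = x^2 on [1, 3]:
# f'(c) = (f(b) - f(a)) / (b - a)  =>  2c = (9 - 1) / (3 - 1) = 4  =>  c = 2
a, b = 1.0, 3.0
f = lambda x: x ** 2
slope = (f(b) - f(a)) / (b - a)
c = slope / 2.0          # solve f'(c) = 2c = slope
print(a < c < b, c)      # True 2.0
```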

23. Cauchy’s Mean Value Theorem

Consider two functions f(x) and g(x), such that

(a) f (x) and g(x) both are continuous in [a, b].

(b) f ′(x) and g ′(x) both exist in (a, b), with g ′(x) ≠ 0 for all x in (a, b).

Then there exists a point c ∈ (a, b) such that

f ′(c) / g ′(c) = [f(b) − f(a)] / [g(b) − g(a)]

24. Taylor’s Theorem

If f(x) is a continuous function such that f ′(x), f ″(x), …, f(n − 1)(x) are all continuous in [a, a + h] and f(n)(x) exists in (a, a + h), where h = b − a, then according to Taylor’s theorem,

f(a + h) = f(a) + hf ′(a) + (h^2/2!) f ″(a) + … + (h^(n − 1)/(n − 1)!) f(n − 1)(a) + (h^n/n!) f(n)(a + θh), 0 < θ < 1

25. Maclaurin’s Theorem

If the Taylor’s series obtained in Section 24 is centred at 0, then the series we obtain is called the Maclaurin’s series. According to Maclaurin’s theorem,

f(h) = f(0) + hf ′(0) + (h^2/2!) f ″(0) + … + (h^(n − 1)/(n − 1)!) f(n − 1)(0) + (h^n/n!) f(n)(θh), 0 < θ < 1

26. Maxima and Minima

Suppose f(x) is a real-valued function defined on an interval (a, b). Then f(x) is said to have a maximum value if there exists a point y in (a, b) such that f(x) ≤ f(y) for all x ∈ (a, b)

Suppose f(x) is a real-valued function defined at the interval (a, b). Then f(x) is said to have minimum value, if there exists a point y in (a, b) such that f (x) ≥ f (y) for all x ∈ (a, b)

Local maxima and local minima of any function can be calculated as:

Let f(x) be defined in (a, b) and y ∈ (a, b). Now,

(a) If f ′(y) = 0 and f ′(x) changes sign from positive to negative as x increases through y, then x = y is a point of local maximum value of f(x).

(b) If f ′(y) = 0 and f ′(x) changes sign from negative to positive as x increases through y, then x = y is a point of local minimum value of f(x).

27. Some important properties of maximum and minima are given as follows:

(a) If f(x) is continuous in its domain, then at least one maximum or one minimum lies between any two equal values of f(x).

(b) Maxima and minima occur alternately, that is no two maxima or minima can occur together.

28. Maximum and minimum values in a closed interval [a, b] can be calculated using the following steps:

(a) Calculate f ′(x).

(b) Put f ′(x) = 0 and find the value(s) of x. Let c1, c2, …, cn be these values of x.

(c) Take the maximum and minimum values out of the values f(a), f(c1), f(c2), …, f(c n), f(b). The maximum and minimum values obtained are the absolute maximum and absolute minimum values of the function, respectively.
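The three-step recipe applied to f(x) = x³ − 3x on [−2, 2] (illustrative sketch, not from the text): f ′(x) = 3x² − 3 vanishes at x = ±1, so the candidates are the endpoints and these critical points.

```python
# Absolute extrema of f(x) = x^3 - 3x on [-2, 2]:
# f'(x) = 3x^2 - 3 = 0  =>  critical points x = -1, 1
f = lambda x: x ** 3 - 3 * x
candidates = [-2.0, -1.0, 1.0, 2.0]    # endpoints + critical points
values = [f(x) for x in candidates]
print(max(values), min(values))        # 2.0 -2.0
```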

29. Partial Derivatives

Partial differentiation is used to find the partial derivative of a function of more than one independent variable.

The partial derivatives of f(x, y) with respect to x and y are defined by

∂f/∂x = lim (a → 0) [f(x + a, y) − f(x, y)] / a

∂f/∂y = lim (b → 0) [f(x, y + b) − f(x, y)] / b

provided the above limits exist.

∂f/∂x is simply the ordinary derivative of f with respect to x keeping y constant, while ∂f/∂y is the ordinary derivative of f with respect to y keeping x constant. Similarly, the second-order partial derivatives ∂²f/∂x², ∂²f/∂x∂y, ∂²f/∂y∂x and ∂²f/∂y² can be calculated; they are denoted, respectively, by fxx, fxy, fyx and fyy.

30. Some of the important relations (chain rules) are given as follows:

(a) If u = f(x, y) and y = φ(x), then

du/dx = ∂u/∂x + (∂u/∂y)(dy/dx)

(b) If u = f(x, y) and x = f1(t1, t2) and y = f2(t1, t2), then

∂u/∂t1 = (∂u/∂x)(∂x/∂t1) + (∂u/∂y)(∂y/∂t1) and ∂u/∂t2 = (∂u/∂x)(∂x/∂t2) + (∂u/∂y)(∂y/∂t2)

31. Integration Using Tables

Some of the common integration problems can be solved by directly referring to the tables and computing the results. Table 1 shows the results of some of the common integrals we use.

Table 1 | Table of common integrals

∫ sin nx dx = −(1/n) cos nx + C

∫ cos nx dx = (1/n) sin nx + C

∫ tan² nx dx = (1/n) tan nx − x + C

∫ dx/(x² − a²) = (1/2a) ln|(x − a)/(x + a)| + C

∫ dx/√(x² ± a²) = ln|x + √(x² ± a²)| + C

∫ x sin nx dx = (1/n²)(sin nx − nx cos nx) + C

∫ x cos nx dx = (1/n²)(cos nx + nx sin nx) + C

∫ e^(ax) sin bx dx = e^(ax)(a sin bx − b cos bx)/(a² + b²) + C

∫ e^(ax) cos bx dx = e^(ax)(a cos bx + b sin bx)/(a² + b²) + C

∫ x² sin nx dx = (1/n³)(2nx sin nx + (2 − n²x²) cos nx) + C

∫ x² cos nx dx = (1/n³)((n²x² − 2) sin nx + 2nx cos nx) + C

32. Integration by Partial Fraction

The formulas which come in handy while working with partial fractions are given as follows:

∫ dx/(x − a) = ln|x − a| + C

∫ dx/(a² + x²) = (1/a) tan−1(x/a) + C

∫ x dx/(x² + a²) = (1/2) ln(x² + a²) + C

33. Integration Using Trigonometric Substitution

Trigonometric substitution is used to simplify certain integrals containing radical expressions. Depending on the function we need to integrate, we substitute one of the following expressions to simplify the integration:

(a) For √(a² − x²), use x = a sin θ.

(b) For √(a² + x²), use x = a tan θ.

(c) For √(x² − a²), use x = a sec θ.

34. Some of the important properties of definite integrals are given as follows:

(a) The value of a definite integral remains the same with a change of the variable of integration, provided the limits of integration remain the same:

∫(a to b) f(x) dx = ∫(a to b) f(y) dy

(b) ∫(a to b) f(x) dx = −∫(b to a) f(x) dx

(c) ∫(a to b) f(x) dx = ∫(a to c) f(x) dx + ∫(c to b) f(x) dx, if a < c < b

(d) ∫(0 to 2a) f(x) dx = ∫(0 to a) f(x) dx + ∫(0 to a) f(2a − x) dx

(e) ∫(0 to a) f(x) dx = ∫(0 to a) f(a − x) dx

(f) ∫(−a to a) f(x) dx = 2 ∫(0 to a) f(x) dx, if the function is even.

∫(−a to a) f(x) dx = 0, if the function is odd.

(g) ∫(0 to na) f(x) dx = n ∫(0 to a) f(x) dx, if f(x) = f(x + a)
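The even/odd property can be checked numerically with a simple trapezoidal rule (illustrative sketch; the helper trapezoid is ours, not from the text):

```python
import numpy as np

def trapezoid(y, x):
    # composite trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

a = 2.0
x = np.linspace(-a, a, 100001)   # grid symmetric about 0

# The integral of an odd function over [-a, a] vanishes
print(abs(trapezoid(x ** 3, x)) < 1e-8)   # True

# For an even function it is twice the integral over [0, a]
mask = x >= 0
full = trapezoid(x ** 2, x)
half = trapezoid(x[mask] ** 2, x[mask])
print(np.isclose(full, 2 * half))         # True
```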

35. Some of the important properties of double integrals are:

(a) When x1, x2 are functions of y and y1, y2 are constants, f(x, y) is first integrated with respect to x keeping y constant between the limits x1, x2, and the resulting expression is integrated with respect to y between the limits y1, y2:

∫∫Q f(x, y) dx dy = ∫(y1 to y2) [ ∫(x1 to x2) f(x, y) dx ] dy

(b) When y1, y2 are functions of x and x1, x2 are constants, f(x, y) is first integrated with respect to y keeping x constant between the limits y1, y2, and the resulting expression is integrated with respect to x between the limits x1, x2:

∫∫Q f(x, y) dx dy = ∫(x1 to x2) [ ∫(y1 to y2) f(x, y) dy ] dx

(c) When x1, x2, y1 and y2 are all constants, then

∫∫Q f(x, y) dx dy = ∫(y1 to y2) ∫(x1 to x2) f(x, y) dx dy = ∫(x1 to x2) ∫(y1 to y2) f(x, y) dy dx

36. Change of Order of Integration

As already discussed, if the limits are constant,

∫(y1 to y2) ∫(x1 to x2) f(x, y) dx dy = ∫(x1 to x2) ∫(y1 to y2) f(x, y) dy dx

Hence, in a double integral, the order of integration does not change the final result provided the limits are changed accordingly.

However, if the limits are variable, the change of order of integration changes the limits of integration.

37. The length of the arc of the polar curve r = f(θ) between the points where θ = α and θ = β is

s = ∫(α to β) √[r² + (dr/dθ)²] dθ

The length of the arc of the curve x = f(t), y = g(t) between the points where t = a and t = b is

s = ∫(a to b) √[(dx/dt)² + (dy/dt)²] dt

38. Fourier Series

Fourier series is a way to represent a wave-like function as a combination of sine and cosine waves. It decomposes any periodic function into the sum of a set of simple oscillating functions (sines and cosines). The Fourier series for the function f(x) in the interval α < x < α + 2π is given by

f(x) = a0/2 + Σ(n = 1 to ∞) [an cos nx + bn sin nx]

where

a0 = (1/π) ∫(α to α + 2π) f(x) dx

an = (1/π) ∫(α to α + 2π) f(x) cos nx dx

bn = (1/π) ∫(α to α + 2π) f(x) sin nx dx

These expressions for a0, an and bn are known as Euler’s formulae.
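As an illustrative aside, Euler's formulae can be evaluated numerically. For f(x) = x on (−π, π), the analytical coefficients are an = 0 and bn = 2(−1)^(n + 1)/n; the sketch below (the helper trapezoid is ours) recovers b1 = 2 and b2 = −1:

```python
import numpy as np

def trapezoid(y, x):
    # composite trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

# Euler's formulae for f(x) = x on (-pi, pi); analytically b_n = 2(-1)^(n+1)/n
x = np.linspace(-np.pi, np.pi, 200001)
f = x

b1 = trapezoid(f * np.sin(x), x) / np.pi
b2 = trapezoid(f * np.sin(2 * x), x) / np.pi
print(round(b1, 4), round(b2, 4))   # 2.0 -1.0
```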

39. Conditions for Fourier Expansion

Fourier expansion can be performed on any function f(x) if it fulfills the Dirichlet conditions. The Dirichlet conditions are given as follows:

(a) f(x) should be periodic, single-valued and finite.

(b) f(x) should have a finite number of discontinuities in any one period.

(c) f(x) should have a finite number of maxima and minima.

40. Fourier Expansion of Discontinuous Function

We can define the Euler’s formulae as follows:

At x = c, there is a finite jump in the graph of the function. Both the left-hand limit f(c − 0) and the right-hand limit f(c + 0) exist and are different. At such a point, the Fourier series gives the value of f(x) as the arithmetic mean of these two limits. Hence, at x = c,

f(x) = (1/2)[f(c − 0) + f(c + 0)]

41. Change of Interval

Till now, we have talked about functions having period 2π. However, often the period of the function required to be expanded is some other interval, say 2c. Then the Fourier expansion in (−c, c) is given as follows:

f(x) = a0/2 + Σ(n = 1 to ∞) [an cos(nπx/c) + bn sin(nπx/c)]

where

a0 = (1/c) ∫(−c to c) f(x) dx

an = (1/c) ∫(−c to c) f(x) cos(nπx/c) dx

bn = (1/c) ∫(−c to c) f(x) sin(nπx/c) dx

42. Fourier Series Expansion of Even and Odd Functions

(a) When f(x) is an even function, f(x) sin(nπx/c) is odd, so

bn = (1/c) ∫(−c to c) f(x) sin(nπx/c) dx = 0

Thus, if a periodic function f(x) is even, its Fourier expansion contains only the cosine terms, with coefficients a0 and an.

(b) When f(x) is an odd function, f(x) and f(x) cos(nπx/c) are odd, so

a0 = (1/c) ∫(−c to c) f(x) dx = 0

an = (1/c) ∫(−c to c) f(x) cos(nπx/c) dx = 0

Thus, if a periodic function f(x) is odd, its Fourier expansion contains only the sine terms, with coefficients bn.

43. Vectors

A vector is any quantity that has magnitude as well as direction. If we have two points A and B, then the vector from A to B is denoted by AB.

A position vector is the vector of any point, A, with respect to the origin, O. If A is given by the coordinates (x, y, z), then the magnitude of the position vector is OA = √(x² + y² + z²).

44. Zero vector is a vector whose initial and final points are the same. Zero vectors are denoted by 0. They are also called null vectors.

45. Unit vector is a vector whose magnitude is unity. The unit vector in the direction of a given vector A is denoted by Â = A/|A|.

46. Equal vectors are those which have the same magnitude and direction regardless of their initial points.

47. Addition of Vectors

According to triangle law of vector addition, as shown in Fig. 1,


Figure 1 | Triangle law of vector addition: c = a + b.

According to the parallelogram law of vector addition, as shown in Fig. 2,

Figure 2 | Parallelogram law of vector addition: c = a + b.

If we have two vectors represented by adjacent sides of parallelogram, then the sum of the two vectors in magnitude and direction is given by the diagonal of the parallelogram. This is known as parallelogram law of vector addition.

48. Some important properties of vector addition are:

(a) If we have two vectors a and b, then a + b = b + a (commutative law).

(b) If we have three vectors a, b and c, then (a + b) + c = a + (b + c) (associative law).

49. Multiplication of Vectors

(a) Multiplication of a vector with a scalar: Consider a vector a and a scalar quantity k. Then ka is a vector of magnitude |k||a| along the direction of a (opposite to a if k < 0).

(b) Multiplication of a vector with another vector using dot product: The dot product or scalar product of two vectors is given by

a ⋅ b = |a||b| cos θ

where |a| = magnitude of vector a, |b| = magnitude of vector b and θ = angle between a and b, 0 ≤ θ ≤ π.
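For instance, a small Python sketch (the helper names are my own) computing the dot product and recovering the angle θ = arccos[a ⋅ b / (|a||b|)]:

```python
import math

def dot(a, b):
    # Componentwise scalar product a . b
    return sum(x * y for x, y in zip(a, b))

def angle(a, b):
    # theta = arccos(a.b / (|a||b|)), with 0 <= theta <= pi
    na = math.sqrt(dot(a, a))
    nb = math.sqrt(dot(b, b))
    return math.acos(dot(a, b) / (na * nb))

print(dot((1, 0, 0), (0, 1, 0)))    # 0: perpendicular vectors
print(angle((1, 0, 0), (1, 1, 0)))  # pi/4
```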

50. Some important laws of dot product are:

(a) A ⋅ B = B ⋅ A

(b) A ⋅ (B + C) = A ⋅ B + A ⋅ C

(c) k(A ⋅ B) = (kA) ⋅ B = A ⋅ (kB)

(d) î ⋅ î = ĵ ⋅ ĵ = k̂ ⋅ k̂ = 1 and î ⋅ ĵ = ĵ ⋅ k̂ = k̂ ⋅ î = 0

51. Multiplication of Vectors Using Cross Product

The cross or vector product of two vectors is given by

a × b = |a||b| sin θ n̂

where |a| = magnitude of vector a

|b| = magnitude of vector b

θ = angle between a and b, 0 ≤ θ ≤ π

n̂ = unit vector perpendicular to both a and b

The result of a × b is always a vector.

52. Some important laws of cross product are:

(a) A × B = −(B × A)

(b) A × (B + C) = A × B + A × C

(c) m(A × B) = (mA) × B = A × (mB)

(d) î × î = ĵ × ĵ = k̂ × k̂ = 0 and î × ĵ = k̂, ĵ × k̂ = î, k̂ × î = ĵ

(e) If A = A1 î + A2 ĵ + A3 k̂ and B = B1 î + B2 ĵ + B3 k̂, then A × B is given by the determinant

| î   ĵ   k̂  |
| A1  A2  A3 |
| B1  B2  B3 |

53. Derivatives of Vector Functions

The derivative of the vector A(x) is defined as

dA/dx = lim_{Δx→0} [A(x + Δx) − A(x)]/Δx

if the above limit exists.

If A(x) = A1(x) î + A2(x) ĵ + A3(x) k̂, then

dA/dx = (dA1/dx) î + (dA2/dx) ĵ + (dA3/dx) k̂

If A(x, y, z) = A1(x, y, z) î + A2(x, y, z) ĵ + A3(x, y, z) k̂, then

dA = (∂A/∂x) dx + (∂A/∂y) dy + (∂A/∂z) dz

d/dy (A ⋅ B) = A ⋅ dB/dy + dA/dy ⋅ B

d/dz (A × B) = A × dB/dz + dA/dz × B

A unit vector perpendicular to two given vectors a and b is given by

ĉ = (a × b)/|a × b|
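A small sketch (helper names illustrative) computing a × b componentwise and the unit normal ĉ = (a × b)/|a × b|:

```python
import math

def cross(a, b):
    # (a x b) via the determinant expansion
    ax, ay, az = a
    bx, by, bz = b
    return (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)

def unit_normal(a, b):
    # c_hat = (a x b) / |a x b|
    c = cross(a, b)
    n = math.sqrt(sum(x * x for x in c))
    return tuple(x / n for x in c)

print(cross((1, 0, 0), (0, 1, 0)))        # (0, 0, 1): i x j = k
print(unit_normal((2, 0, 0), (0, 3, 0)))  # (0.0, 0.0, 1.0)
```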

54. Gradient of a Scalar Field

If we have a scalar function f(x, y, z), then the gradient of this scalar function is a vector function defined by

grad f = ∇f = (∂f/∂x) î + (∂f/∂y) ĵ + (∂f/∂z) k̂

55. Divergence of a Vector

If we have a differentiable vector A(x, y, z), then the divergence of vector A is given by

div A = ∇ ⋅ A = ∂Ax/∂x + ∂Ay/∂y + ∂Az/∂z

where Ax, Ay and Az are the components of vector A.

56. Curl of a Vector

The curl of a continuously differentiable vector A is given by the determinant

curl A = ∇ × A =
| î     ĵ     k̂    |
| ∂/∂x  ∂/∂y  ∂/∂z |
| Ax    Ay    Az   |

If the function f is differentiable at x, then the directional derivative exists along any vector v:

∇_v f(x) = ∇f(x) ⋅ v̂

where ∇ on the right-hand side of the equation is the gradient and v̂ is the unit vector given by

v̂ = v/|v|

The directional derivative is also often written as follows:

d/dv = v̂ ⋅ ∇ = vx ∂/∂x + vy ∂/∂y + vz ∂/∂z

59. Scalar Triple Product

The scalar triple product of three vectors a, b and c is defined as

[a, b, c] = a ⋅ (b × c) = b ⋅ (c × a) = c ⋅ (a × b)


57. Some important points of divergence and curl are:

(a) ∇ ⋅ ∇A = ∇²A = ∂²A/∂x² + ∂²A/∂y² + ∂²A/∂z²

(b) ∇ × (∇A) = 0

(c) ∇ ⋅ (∇ × A) = 0

(d) ∇ × (∇ × A) = ∇(∇ ⋅ A) − ∇²A

(e) ∇(∇ ⋅ A) = ∇ × (∇ × A) + ∇²A

(f) ∇ ⋅ (A + B) = ∇ ⋅ A + ∇ ⋅ B

(g) ∇ × (A + B) = ∇ × A + ∇ × B

(h) ∇ ⋅ (A × B) = B ⋅ (∇ × A) − A ⋅ (∇ × B)

(i) ∇ × (A × B) = (B ⋅ ∇)A − B(∇ ⋅ A) − (A ⋅ ∇)B + A(∇ ⋅ B)

58. Directional Derivative

The directional derivative of a scalar function f(x) = f(x1, x2, …, xn) along a vector v = (v1, …, vn) is the function defined by the limit

∇_v f(x) = lim_{h→0} [f(x + hv) − f(x)]/h

The volume of a parallelepiped whose sides are given by the vectors a, b and c, as shown in Fig. 3, can be calculated as the absolute value of the scalar triple product:

V_parallelepiped = |a ⋅ (b × c)|

Figure 3 | Parallelepiped with sides given by the vectors a, b and c.

60. Vector Triple Product

If we have three vectors a, b and c, then the vector triple product is given by

a × (b × c) = b(a ⋅ c) − c(a ⋅ b)
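This identity (the "bac − cab" rule) can be verified componentwise for sample integer vectors, so the comparison is exact (a minimal sketch with illustrative helpers):

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def scale(k, u):
    return tuple(k * x for x in u)

def sub(u, v):
    return tuple(x - y for x, y in zip(u, v))

a, b, c = (1, 2, 3), (4, 5, 6), (7, 8, 10)
lhs = cross(a, cross(b, c))                       # a x (b x c)
rhs = sub(scale(dot(a, c), b), scale(dot(a, b), c))  # b(a.c) - c(a.b)
print(lhs == rhs)  # True
```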

61. Line Integrals

Let us say that points (αk, βk) are chosen so that they lie on the curve between the points (x_{k−1}, y_{k−1}) and (xk, yk). Now, consider the sum

Σ_{k=1}^{n} [P(αk, βk) Δxk + Q(αk, βk) Δyk]

GATE IN CHAPTER-WISE SOLVED PAPERS

The limit of this sum as n → ∞ is taken in such a way that all the quantities Δxk and Δyk approach zero; if such a limit exists, it is called a line integral along the curve C and is denoted by

∫_C [P(x, y) dx + Q(x, y) dy]

The limit exists if P and Q are continuous at all points of C. To understand better, refer to Fig. 4.

Figure 4 | Line integral.

In the same way, a line integral along a curve C in three-dimensional space is given by

∫_C [A1 dx + A2 dy + A3 dz]

where A1, A2 and A3 are functions of x, y and z.

62. Surface Integrals

Suppose g(x, y, z) is single-valued and continuous at all points of C. Now, consider the sum

Σ_{k=1}^{n} g(αk, βk, γk) ΔCk

where (αk, βk, γk) is any arbitrary point of ΔCk. If the limit of this sum as n → ∞, taken in such a way that each ΔCk → 0, exists, the resulting limit is called the surface integral of g(x, y, z) over C. The surface integral is denoted by

∬_C g(x, y, z) dS

Figure 5 graphically represents surface integrals.

Figure 5 | Surface integrals.

63. Stokes’ Theorem

∮_C A ⋅ dr = ∬_S (∇ × A) ⋅ n̂ dS

64. Green’s Theorem

∬_S (∂f2/∂x − ∂f1/∂y) dx dy = ∮_C (f1 dx + f2 dy)

65. Gauss Divergence Theorem

∭_V (∇ ⋅ A) dV = ∬_S A ⋅ n̂ dS

Differential Equations

66. Variable Separable

If all functions of x and dx can be arranged on one side and those of y and dy on the other side, then the variables are separable. The solution of such an equation is found by integrating the functions of x and y:

∫ f(x) dx = ∫ g(y) dy + C

67. Homogeneous Equation

Homogeneous equations are of the form

dy/dx = f(x, y)/g(x, y)

where f(x, y) and g(x, y) are homogeneous functions of the same degree in x and y. Homogeneous functions are those in which all the terms are of the same degree, n. To solve a homogeneous equation:



(a) Put y = vx; then dy/dx = v + x(dv/dx).

(b) Separate v and x and then integrate:

dy/dx = f(y/x) and y = vx ⇒ v + x(dv/dx) = f(v) ⇒ dv/[f(v) − v] = dx/x

⇒ ∫ dv/[f(v) − v] = log x + C

68. Linear Equation of First Order

If a differential equation has its dependent variable and its derivatives occurring in the first degree and not multiplied together, then the equation is said to be linear. The standard form of a linear equation of first order is given as

dy/dx + Py = Q

where P and Q are functions of x.

Integrating factor (I.F.) = e^(∫P dx)

y e^(∫P dx) = ∫ Q e^(∫P dx) dx + C

⇒ y(I.F.) = ∫ Q(I.F.) dx + C
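As a quick check (a sketch with illustrative names): for dy/dx + y = x we have P = 1, Q = x, I.F. = e^x, and y e^x = ∫ x e^x dx + C = (x − 1)e^x + C, i.e. y = x − 1 + C e^(−x). A central-difference derivative confirms this solution satisfies the equation:

```python
import math

def y(x, C=2.0):
    # Candidate solution of dy/dx + y = x from the integrating-factor method
    return x - 1 + C * math.exp(-x)

def dydx(x, C=2.0, h=1e-6):
    # Central-difference approximation of dy/dx
    return (y(x + h, C) - y(x - h, C)) / (2 * h)

x0 = 0.7
print(abs(dydx(x0) + y(x0) - x0) < 1e-6)  # True: y' + y = x holds
```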

69. Exact Differential Equation

A differential equation of the form M(x, y) dx + N(x, y) dy = 0 is called an exact differential equation if there exists a function f(x, y) such that df = M dx + N dy.

Solution of differential equation is f(x, y) = C.

The necessary condition for the differential equation to be exact is

∂M/∂y = ∂N/∂x

The exact differential equation can be solved using the following steps:

(a) First, integrate M with respect to x keeping y constant.

(b) Next, integrate the terms of N with respect to y which do not involve x

(c) Then, the sum of the two expressions is equated to an arbitrary constant to give the solution.

∫ M dx + ∫ (terms of N not containing x) dy = C

70. Integrating Factor

A differential equation which is not exact can be converted into an exact differential equation by multiplication of a suitable function. This function is called integrating factor.

Some of the important formulae are:

(a) If the given equation of the form M dx + N dy = 0 is homogeneous and Mx + Ny ≠ 0, then

I.F. = 1/(Mx + Ny)

(b) If the given equation is of the form f(xy)⋅y dx + g(xy)⋅x dy = 0 and Mx − Ny ≠ 0, then

I.F. = 1/(Mx − Ny)

(c) If the given equation is of the form M dx + N dy = 0 and

(1/N)(∂M/∂y − ∂N/∂x) = f(x)

where f(x) is a function of x alone, then

I.F. = e^(∫f(x) dx)

(d) If the given equation is of the form M dx + N dy = 0 and

(1/M)(∂N/∂x − ∂M/∂y) = f(y)

where f(y) is a function of y alone, then

I.F. = e^(∫f(y) dy)

71. Clairaut’s Equation

An equation of the form y = px + f(p), where p = dy/dx, is called a Clairaut's equation.

We know that

y = px + f(p)   (1)

Differentiating the above equation with respect to x, we get

p = p + x(dp/dx) + f′(p)(dp/dx) ⇒ [x + f′(p)](dp/dx) = 0

Therefore, x + f′(p) = 0 or dp/dx = 0.

Now, if dp/dx = 0, then p = c   (2)

Thus, eliminating p from Eqs. (1) and (2), we get ycxfc=+ () as the general solution of the given equation.

Hence, the solution of Clairaut’s equation is obtained on replacing p by c.
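A worked numerical check (my own illustrative choice f(p) = p²): the general solution is y = cx + c², whose slope is p = c everywhere, and substituting that slope back indeed reproduces y = px + f(p):

```python
# Clairaut example (illustrative): y = p x + f(p) with f(p) = p^2,
# general solution y = c x + c^2.
def y(x, c=3.0):
    return c * x + c**2

def dydx(x, c=3.0, h=1e-6):
    # Central-difference slope; for this solution it equals c
    return (y(x + h, c) - y(x - h, c)) / (2 * h)

x0, c = 2.0, 3.0
p = dydx(x0, c)                                # p = c
print(abs(y(x0, c) - (p * x0 + p**2)) < 1e-4)  # True: y = p x + f(p)
```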

72. Linear Differential Equation

The general form of a differential equation of nth order is denoted by

d^n y/dx^n + p1 d^(n−1)y/dx^(n−1) + p2 d^(n−2)y/dx^(n−2) + ⋯ + pn y = X

where p1, p2, , pn and X are functions of x

If operator d/dx is denoted by D, then substituting it in above equation, we get

D^n y + p1 D^(n−1) y + p2 D^(n−2) y + ⋯ + pn y = X

⇒ f(D) y = X

where f(D) = D^n + p1 D^(n−1) + ⋯ + pn.

As already mentioned before, the equation can be generalized as

f(D) y = X

The general solution of the above equation can be given as

y = (Complementary Function) + (Particular Integral)

The general form of the linear differential equation with constant coefficients is given as

Case III: If the roots of the A.E. are complex, that is, α + iβ, α − iβ, m3, …, mn, then

y = e^(αx)(C1 cos βx + C2 sin βx) + c3 e^(m3 x) + ⋯ + cn e^(mn x)

where C1 = c1 + c2 and C2 = i(c1 − c2).

If the complex roots are equal, then

y = e^(αx)[(c1 + c2 x) cos βx + (c3 + c4 x) sin βx] + ⋯ + cn e^(mn x)

73. Particular Integrals

Consider the equation given in the previous section:

(D^n + k1 D^(n−1) + k2 D^(n−2) + ⋯ + kn) y = X

The particular integral of the equation is given by

P.I. = [1/(D^n + k1 D^(n−1) + k2 D^(n−2) + ⋯ + kn)] X

The following cases arise for particular integrals:

(a) When X = eax, then

P.I. = [1/f(D)] e^(ax) = [1/f(a)] e^(ax), if f(a) ≠ 0

If f(a) = 0, then

d^n y/dx^n + k1 d^(n−1)y/dx^(n−1) + k2 d^(n−2)y/dx^(n−2) + ⋯ + kn y = X

where k1, k2, , k n are constants and X is a function of x only.

The equation

d^n y/dx^n + k1 d^(n−1)y/dx^(n−1) + k2 d^(n−2)y/dx^(n−2) + ⋯ + kn y = 0

where k1, k2, …, kn are constants, can also be denoted as

(D^n + k1 D^(n−1) + k2 D^(n−2) + ⋯ + kn) y = 0

and the equation D^n + k1 D^(n−1) + k2 D^(n−2) + ⋯ + kn = 0 is called the auxiliary equation (A.E.).

Now, if m1, m2, …, m n are the roots, then the complementary functions can be calculated using the following cases:

Case I: If the roots of the A.E. are distinct and real, that is, m1 ≠ m2 ≠ m3 ≠ ⋯ ≠ mn, then

y = c1 e^(m1 x) + c2 e^(m2 x) + c3 e^(m3 x) + ⋯

Case II: If the roots of the A.E. are real and some are equal, say m1 = m2 ≠ m3, then

y = (c1 + c2 x) e^(m1 x) + c3 e^(m3 x) + ⋯

P.I. = [x/f′(a)] e^(ax), if f′(a) ≠ 0

If f′(a) = 0, then

P.I. = [x²/f″(a)] e^(ax), if f″(a) ≠ 0

(b) When X = sin ax, then

P.I. = [1/f(D²)] sin ax = [1/f(−a²)] sin ax, if f(−a²) ≠ 0

(c) When X = cos ax, then

P.I. = [1/f(D²)] cos ax = [1/f(−a²)] cos ax, if f(−a²) ≠ 0

(d) When X = xm, then

P.I. = [1/f(D)] x^m = [f(D)]^(−1) x^m

The expansion of [f(D)]^(−1) is to be carried up to the term D^m, because the (m + 1)th and higher derivatives of x^m are zero.

(e) When X = e^(ax) v(x), then

P.I. = [1/f(D)] e^(ax) v(x)

P.I. = e^(ax) [1/f(D + a)] v(x)

(f) When X = xv(x), then

P.I. = [1/f(D)] x v(x) = {x − [f′(D)/f(D)]} [1/f(D)] v(x)

Thus, the following steps should be used to calculate the complete solution:

(i) Find the auxiliary equation (A.E.) from the differential equation.

(ii) Calculate the complementary function using the cases given before.

(iii) Calculate particular integral (P.I.) using the cases explained before.

(iv) To find the complete solution, use the following expression:

y = C.F. + P.I.

74. Homogeneous Linear Equation

Cauchy's homogeneous linear equation is given by

x^n d^n y/dx^n + k1 x^(n−1) d^(n−1)y/dx^(n−1) + ⋯ + kn y = X

where k1, k2, , k n are constants and X is a function of x

75. Bernoulli’s Equation

The equation

dy/dx + Py = Q y^n

where P and Q are functions of x, can be reduced to Leibnitz's linear equation and is called the Bernoulli's equation.

To solve the Bernoulli's equation, divide both sides by y^n, so that

y^(−n) dy/dx + P y^(1−n) = Q   (3)

Putting y^(1−n) = z, we get

(1 − n) y^(−n) dy/dx = dz/dx

Therefore, Eq. (3) becomes

[1/(1 − n)] dz/dx + Pz = Q

⇒ dz/dx + P(1 − n)z = Q(1 − n)

which is Leibnitz’s linear equation in z and can be solved using the methods mentioned under Integrating Factor.

76. Homogeneous Euler–Cauchy Equation

An ordinary differential equation is called a homogeneous Euler–Cauchy equation if it is of the form

ax²y″ + bxy′ + cy = 0   (4)

where a, b and c are constants. This equation has two linearly independent solutions, which depend on the following quadratic equation:

am² + (b − a)m + c = 0

This quadratic equation is known as characteristic equation. Let us consider Eq. (4) for x > 0 as this equation is singular for x = 0.

(a) If the roots of the characteristic equation are real and distinct, say m1 and m2, two linearly independent solutions of the differential equation are x^(m1) and x^(m2).

(b) If the roots of the characteristic equation are real and equal, that is, m1 = m2 = m, the linearly independent solutions of the differential equation are x^m and x^m ln x. The general solution for the first case is given by y = c1 x^(m1) + c2 x^(m2).

The general solution for the second case is given by y = (c1 + c2 ln x) x^m.

(c) If the roots of the characteristic equation are complex conjugates, that is, m1 = a + jb and m2 = a − jb, linearly independent solutions of the differential equation are x^a cos(b ln x) and x^a sin(b ln x).

The general solution is given by y = x^a [C1 cos(b ln x) + C2 sin(b ln x)].
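A numerical sanity check of the homogeneous case (a sketch; the sample equation is my own choice): for x²y″ + xy′ − y = 0 (a = 1, b = 1, c = −1), the characteristic equation m² + (b − a)m + c = m² − 1 = 0 gives m = 1, −1, so x and 1/x should both make the residual vanish:

```python
def residual(y, x, h=1e-4):
    # Finite-difference residual of x^2 y'' + x y' - y at the point x
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    return x**2 * d2 + x * d1 - y(x)

print(abs(residual(lambda x: x, 2.0)) < 1e-4)        # True: y = x solves it
print(abs(residual(lambda x: 1.0 / x, 2.0)) < 1e-4)  # True: y = 1/x solves it
```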

77. Non-Homogeneous Euler–Cauchy Equation

The non-homogeneous Euler–Cauchy equation is of the form

ax²y″ + bxy′ + cy = r(x)

Here, a, b and c are constants. The method of variation of parameters can be used to find the solution of the non-homogeneous Euler–Cauchy equation. As a first step, divide the equation by ax² so as to make the coefficient of y″ unity:

y″ + (b/ax) y′ + (c/ax²) y = r(x)/ax²

or

y″ + (b/ax) y′ + (c/ax²) y = r̃(x), where r̃(x) = r(x)/ax²

The particular solution is given by

y_p(x) = −y1(x) ∫ [y2(x) r̃(x) / W(y1, y2)] dx + y2(x) ∫ [y1(x) r̃(x) / W(y1, y2)] dx

General solution to non-homogeneous Euler Cauchy equation is therefore given by the following equation: y(x) = C1 y1(x) + C2 y2(x) + yp(x)

78. Variation of Parameters Method

The method of variation of parameters is a general method that can be used to solve non-homogeneous linear ordinary differential equations of the form

y″ + py′ + qy = X

where p, q and X are functions of x.

The particular integral in this case is given by

P.I. = −y1 ∫ (y2 X/W) dx + y2 ∫ (y1 X/W) dx

Here, y1 and y2 are the solutions of y″ + py′ + qy = 0 and

W = y1 y2′ − y2 y1′

is called the Wronskian of y1 and y2.

79. Separation of Variables Method

(a) Partial differential equations representing engineering problems include the one-dimensional heat flow equation given by

∂u/∂t = c² ∂²u/∂x²

(b) Wave equation representing vibrations of a stretched string, given by

∂²y/∂t² = c² ∂²y/∂x²

81. Exponential Function of Complex Variables

z = r e^(iθ)

82. Circular Function of Complex Variables

The circular functions of the complex variable z can be defined by the equations:

sin z = [e^(iz) − e^(−iz)]/2i

cos z = [e^(iz) + e^(−iz)]/2

tan z = sin z / cos z

with cosec z, sec z and cot z as respective reciprocals of above equations.

Now,

cos z + i sin z = [e^(iz) + e^(−iz)]/2 + [e^(iz) − e^(−iz)]/2 = e^(iz), where z = x + iy

Also, e^(iy) = cos y + i sin y, where y is real.

Hence, e^(iθ) = cos θ + i sin θ, where θ is real or complex.

This is called Euler’s theorem.

Now, according to De Moivre’s theorem for complex number,

(cos θ + i sin θ)^n = e^(inθ) = cos nθ + i sin nθ

whether θ is real or complex.
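Both De Moivre's theorem and Euler's theorem can be verified directly with Python's complex arithmetic (the values of θ and n are arbitrary illustrations):

```python
import cmath
import math

theta, n = 0.7, 5

# De Moivre: (cos t + i sin t)^n = cos nt + i sin nt
lhs = (math.cos(theta) + 1j * math.sin(theta)) ** n
rhs = complex(math.cos(n * theta), math.sin(n * theta))
print(abs(lhs - rhs) < 1e-12)  # True

# Euler: e^{i t} = cos t + i sin t
print(abs(cmath.exp(1j * theta) - (math.cos(theta) + 1j * math.sin(theta))) < 1e-12)  # True
```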

83. Hyperbolic Functions of Complex Variables

Hyperbolic functions can be given as

sinh x = [e^x − e^(−x)]/2

(c) Two-dimensional heat flow equation, which becomes the two-dimensional Laplace equation in the steady state, given by

∂²u/∂x² + ∂²u/∂y² = 0

Other such equations include the transmission line equations, the two-dimensional wave equation and so on.

Complex Variables

80. A number of the form x + iy, where x and y are real numbers and i = √(−1), is called a complex number.

x is called the real part of x + iy and is written as R(x + iy), whereas y is called the imaginary part and is written as I(x + iy).

cosh x = [e^x + e^(−x)]/2

tanh x = [e^x − e^(−x)]/[e^x + e^(−x)]

coth x = [e^x + e^(−x)]/[e^x − e^(−x)]

sech x = 2/[e^x + e^(−x)]

cosech x = 2/[e^x − e^(−x)]

Hyperbolic and circular functions are related as follows:

sin ix = i sinh x, cos ix = cosh x, tan ix = i tanh x
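These relations can be checked with the standard cmath module (the value of x is arbitrary):

```python
import cmath
import math

x = 0.9
print(abs(cmath.sin(1j * x) - 1j * math.sinh(x)) < 1e-12)  # sin(ix) = i sinh x
print(abs(cmath.cos(1j * x) - math.cosh(x)) < 1e-12)       # cos(ix) = cosh x
print(abs(cmath.tan(1j * x) - 1j * math.tanh(x)) < 1e-12)  # tan(ix) = i tanh x
```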

84. Logarithmic Function of Complex Variables

If z = x + iy and w = u + iv are related such that e^w = z, then w is said to be a logarithm of z to the base e and is written as w = log_e z.

Also, e^(w + 2inπ) = e^w ⋅ e^(2inπ) = z [∵ e^(2inπ) = 1]

⇒ log z = w + 2inπ

85. Cauchy–Riemann Equations

A necessary condition for w = u(x, y) + iv(x, y) to be analytic in a region R is that u and v satisfy the following equations:

∂u/∂x = ∂v/∂y and ∂u/∂y = −∂v/∂x

(c) If c1, c2, c3, … are any number of closed curves within c, then

∫_c f(z) dz = ∫_c1 f(z) dz + ∫_c2 f(z) dz + ∫_c3 f(z) dz + ⋯

88. Cauchy’s Integral Formula

If f(z) is analytic within and on a simple closed curve c and a is any point interior to c, then

f(a) = (1/2πi) ∫_c [f(z)/(z − a)] dz

Above equations are called Cauchy–Riemann equations. If the partial derivatives in above equations are continuous in R, the equations are sufficient conditions that fz() be analytic in R.

The derivative of f(z) is then given by

f′(z) = ∂u/∂x + i(∂v/∂x)

or

f′(z) = ∂v/∂y − i(∂u/∂y)

The real and imaginary parts of an analytic function are called conjugate functions.

86. Cauchy’s Theorem

If f(z) is an analytic function and f′(z) is continuous at each point within and on a closed curve c, then according to Cauchy's theorem,

∫_c f(z) dz = 0

87. Some of the important results that can be concluded are:

(a) The line integral of a function f(z), which is analytic in the region R, is independent of the path joining any two points of R.

(b) Extension of Cauchy's theorem: If f(z) is analytic in the region R between two simple closed curves c and c1, then

∫_c f(z) dz = ∫_c1 f(z) dz

Consider f(z)/(z − α) in the above equation, which is analytic at all points within c except at z = α. Now, with α as center and r as radius, draw a small circle c1 lying entirely within c.

Generally, we can write

f^n(α) = (n!/2πi) ∫_c f(z)/(z − α)^(n+1) dz

89. Taylor’s Series of Complex Variables

If f(z) is analytic inside a circle C with centre at a, then for z inside C,

f(z) = f(a) + f′(a)(z − a) + f″(a)(z − a)²/2! + ⋯ + f^n(a)(z − a)^n/n! + ⋯

90. Laurent’s Series of Complex Variables

If f(z) is analytic in the ring-shaped region R bounded by two concentric circles C and C1 of radii r and r1 (such that r > r1) and with center at a, then for all z in R,

f(z) = Σ_{n=−∞}^{∞} an (z − a)^n, where an = (1/2πi) ∫_Γ f(z)/(z − a)^(n+1) dz

Γ being any closed curve in R around a.

91. Zeros and Poles of an Analytic Function

(a) A zero of an analytic function f(z) is the value of z for which f(z) = 0.

(b) A singular point of a function f(z) is a value of z at which f(z) fails to be analytic.

92. Residues

The coefficient of (z − α)^(−1) in the expansion of f(z) around an isolated singularity is called the residue of f(z) at that point.

It can be found from the formula

a_{−1} = lim_{z→α} {1/(n − 1)!} d^(n−1)/dz^(n−1) [(z − α)^n f(z)]

where n is the order of the pole.

The residue of f(z) at z = α can also be found by

Res f(α) = (1/2πi) ∫_c f(z) dz

93. Residue Theorem

If f(z) is analytic in a region R except for a pole of order n at z = a and let C be a simple closed curve in R containing z = a, then

∫_C f(z) dz = 2πi × (sum of the residues at the singular points within C)

94. Calculation of Residues

(a) If f (z) has a simple pole at z = a, then

Res f(α) = lim_{z→α} (z − α) f(z)

(b) If f(z) = φ(z)/ψ(z), where ψ(α) = 0 and φ(α) ≠ 0, then

Res f(α) = φ(α)/ψ′(α)

(c) If f(z) has a pole of order n at z = α, then

Res f(α) = {1/(n − 1)!} lim_{z→α} d^(n−1)/dz^(n−1) [(z − α)^n f(z)]

Probability and Statistics

95. Types of Events

(a) Each outcome of a random experiment is called an elementary event.

(b) An event associated with a random experiment that always occurs whenever the experiment is performed is called a certain event.

(c) An event associated with a random experiment that never occurs whenever the experiment is performed is called an impossible event.

(d) If the occurrence of any one of two or more events, associated with a random experiment, prevents the occurrence of all the others, then the events are called mutually exclusive events.

(e) If the union of two or more events associated with a random experiment includes all possible outcomes, then the events are called exhaustive events.

(f) If the occurrence or non-occurrence of one event does not affect the probability of the occurrence or non-occurrence of the other, then the events are independent.

(g) Two events are equally likely events if the probability of their occurrence is same.

(h) An event which has a probability of occurrence equal to 1 − P, where P is the probability of occurrence of an event A, is called the complementary event of A.

96. Axioms of Probability

(a) The numerical value of probability lies between 0 and 1.

Hence, for any event A of S, 0 ≤ P(A) ≤ 1.

(b) The sum of probabilities of all sample events is unity. Hence, P(S) = 1.

(c) Probability of an event made of two or more sample events is the sum of their probabilities.

97. Conditional Probability

Let A and B be two events of a random experiment. The probability of occurrence of A if B has already occurred and P(B) ≠ 0 is known as conditional probability and is denoted by P(A/B). Similarly, the probability of occurrence of B if A has already occurred and P(A) ≠ 0 is denoted by P(B/A).

98. Geometric Probability

Due to the nature of the problem or the solution or both, random events that take place in continuous sample space may invoke geometric imagery. Hence, geometric probabilities can be considered as non-negative quantities with maximum value of 1 being assigned to subregions of a given domain subject to certain rules. If P is an expression of this assignment defined on a domain S, then

0 < P(A) ≤ 1, A ⊂ S and P(S) = 1

The subsets of S for which P is defined are the random events that form a particular sample space. P is defined by the ratio of areas, so that if σ(A) is defined as the area of set A, then

P(A) = σ(A)/σ(S)

99. Rules of Probability

Some of the important rules of probability are given as follows:

(a) Inclusion–Exclusion principle of probability:

P(A ∪ B) = P(A) + P(B) − P(A ∩ B)

If A and B are mutually exclusive events, P(A ∩ B) = 0 and the formula reduces to

P(A ∪ B) = P(A) + P(B)

(b) Complementary probability:

P(A) = 1 − P(A^c)

where P(A^c) is the complementary probability of A.

(c) P(A ∩ B) = P(A) ∗ P(B/A) = P(B) ∗ P(A/B)

where P(A/B) represents the conditional probability of A given B and P(B/A) represents the conditional probability of B given A.

If BBB n 12,, ..., are pairwise disjoint events of positive probability, then

P(A) = P(A/B1)P(B1) + P(A/B2)P(B2) + ⋯ + P(A/Bn)P(Bn)

(d) Conditional probability rule:

P(A ∩ B) = P(B) ∗ P(A/B)

⇒ P(A/B) = P(A ∩ B)/P(B)

Or P(B/A) = P(A ∩ B)/P(A)

(e) Bayes’ theorem: Suppose we have an event A corresponding to a number of exhaustive events BBB n 12,, ..., .

If P(Bi) and P(A/Bi) are given, then

P(Bi/A) = P(Bi)P(A/Bi) / Σ_i P(Bi)P(A/Bi)

(f) Rule of total probability: Consider an event E which can occur via two different events A and B. Also, let A and B be mutually exclusive and collectively exhaustive events.

Now, the probability of E is given as

P(E) = P(A ∩ E) + P(B ∩ E) = P(A) ∗ P(E/A) + P(B) ∗ P(E/B)

This is called the rule of total probability.
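The rules above can be illustrated numerically. In this hypothetical setup (all numbers are made up for illustration), machines A and B produce 60% and 40% of the items with defect rates 2% and 5%:

```python
P_A, P_B = 0.6, 0.4
P_E_given_A, P_E_given_B = 0.02, 0.05

# Rule of total probability: P(E) = P(A)P(E/A) + P(B)P(E/B)
P_E = P_A * P_E_given_A + P_B * P_E_given_B
print(round(P_E, 4))  # 0.032

# Bayes' theorem: P(A/E) = P(A)P(E/A) / P(E)
P_A_given_E = P_A * P_E_given_A / P_E
print(round(P_A_given_E, 4))  # 0.375
```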

100. Arithmetic Mean

Arithmetic Mean for Raw Data

Suppose we have n values x1, x2, …, xn; then the arithmetic mean is given by

x̄ = (1/n) Σ_{i=1}^{n} xi

where x̄ = arithmetic mean.

Arithmetic Mean for Grouped Data (Frequency Distribution)

Suppose fi is the frequency of xi; then the arithmetic mean from a frequency distribution can be calculated as

x̄ = (1/N) Σ_{i=1}^{n} fi xi

where N = Σ_{i=1}^{n} fi.

101. Median

Median for Raw Data

Suppose we have n ungrouped/raw values x1, x2, …, xn. To calculate the median, arrange all the values in ascending or descending order.

Now, if n is odd, then median = [(n + 1)/2]th value.

If n is even, then median = (1/2)[(n/2)th value + (n/2 + 1)th value].

Median for Grouped Data

To calculate median of grouped values, identify the class containing the middle observation.

Median = L + [(N/2 − F)/fm] × h

where L = lower limit of median class

N = total number of data items =∑ f

F = cumulative frequency of class immediately preceding the median class

fm = frequency of median class

h = width of class
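For raw data, these central measures can be computed directly with the standard statistics module (the data list is illustrative):

```python
import statistics

data = [7, 1, 5, 3, 5, 9, 5, 3]
mean = statistics.mean(data)      # 38/8 = 4.75
median = statistics.median(data)  # sorted: 1,3,3,5,5,5,7,9 -> (5 + 5)/2 = 5.0
mode = statistics.mode(data)      # 5 occurs most often
print(mean, median, mode)

# Empirical relation (approximate): mode = 3*median - 2*mean
print(3 * median - 2 * mean)  # 5.5, close to the actual mode 5
```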

102. Mode

Mode of raw values of data is the value with the highest frequency or simply the value which occurs the most number of times.

Mode of Raw Data

Mode of raw data is calculated by simply checking which value is repeated the most number of times.

Mode of Grouped Data

Mode of grouped values of data is calculated by first identifying the modal class, that is, the class which has the highest frequency. The mode can then be calculated using the following formula:

Mode = L + [(fm − f1)/(2fm − f1 − f2)] × h

where

L = lower limit of modal class

fm = frequency of modal class

f1 = frequency of class preceding modal class

f 2 = frequency of class following modal class

h = width of modal class

103. Relation Between Mean, Median and Mode

Empirical mode = 3 Median − 2 Mean

When an approximate value of mode is required, the given empirical formula for mode may be used.

There are three types of frequency distribution:

(a) Symmetric distribution: It has lower half equal to upper half of distribution. For symmetric distribution,

Mean = Median = Mode

(b) Positively skewed distribution: It has a longer tail on the right than on the left.

Mode ≤ Median ≤ Mean

(c) Negatively skewed distribution: It has a longer tail on the left than on the right.

Mean ≤ Median ≤ Mode

Figure 6 shows the three types of frequency distribution.

Figure 6 | (a) Symmetrical frequency distribution, (b) positively skewed frequency distribution and (c) negatively skewed frequency distribution.

104. Geometric Mean

Geometric Mean of Raw Data

Geometric mean of n numbers x1, x2, …, xn is given by

G.M. = (x1 x2 x3 ⋯ xn)^(1/n)

Geometric Mean of Grouped Data

Geometric mean for a frequency distribution is given by

log(G.M.) = (1/N) Σ_{i=1}^{n} fi log xi

where N = Σ_{i=1}^{n} fi.

105. Harmonic Mean

Harmonic Mean of Raw Data

Harmonic mean of n numbers x1, x2, x3, …, xn is calculated as

H.M. = n / Σ_{i=1}^{n} (1/xi)

Harmonic Mean of Grouped Data

Harmonic mean for a frequency distribution is calculated as

H.M. = N / Σ_{i=1}^{n} (fi/xi)

where N = Σ_{i=1}^{n} fi.
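Both means can be computed for raw data with the standard library (the data values are illustrative):

```python
import math
import statistics

data = [1, 2, 4, 8]

# G.M. = (x1 * x2 * ... * xn)^(1/n)
gm = math.prod(data) ** (1 / len(data))   # 64^(1/4) = 2*sqrt(2)

# H.M. = n / sum(1/xi); statistics implements this directly
hm = statistics.harmonic_mean(data)       # 4 / (1 + 1/2 + 1/4 + 1/8) = 32/15

print(round(gm, 4), round(hm, 4))
```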

106. Mean Deviation

Mean Deviation of Raw Data

Suppose we have a given set of n values x1, x2, x3, …, xn; then the mean deviation is given by

M.D. = (1/n) Σ_{i=1}^{n} |xi − x̄|

where x̄ = mean.

The following steps should be followed to calculate mean deviation of raw data:

(a) Compute the central value or average ‘A’ about which mean deviation is to be calculated.

(b) Take the mod of the deviations of the observations about the central value 'A', that is, |xi − A|.

(c) Obtain the total of these deviations, that is, Σ_{i=1}^{n} |xi − A|.

(d) Divide the total obtained in step 3 by the number of observations.

Mean Deviation of Discrete Frequency Distribution

For a frequency distribution, the mean deviation is given by

M.D. = (1/N) Σ_{i=1}^{n} fi |xi − x̄|

where N = Σ_{i=1}^{n} fi.

The following steps should be followed to calculate the mean deviation of a discrete frequency distribution:

(a) Calculate the central value or average ‘A’ of the given frequency distribution about which mean deviation is to be calculated.

(b) Take the mod of the deviations of the observations from the central value, that is, |xi − A|.

(c) Multiply these deviations by the respective frequencies and obtain the total Σ fi |xi − A|.

(d) Divide the total obtained by the number of observations, that is, N = Σ_{i=1}^{n} fi, to obtain the mean deviation.

Mean Deviation of Grouped Frequency Distribution

For calculating the mean deviation of grouped frequency distribution, the procedure is same as for a discrete frequency distribution. However, the only difference is that we have to obtain the mid-points of the various classes and take the deviations of these mid-points from the given central value.

107. Standard Deviation

Standard Deviation of Raw Data

If we have n values x1, x2, …, xn of X, then

σ = √[(1/n) Σ_{i=1}^{n} (xi − x̄)²]

The following steps should be followed to calculate the standard deviation for discrete data:

(a) Calculate the mean (x̄) for the given observations.

(b) Take deviations of the observations from the mean, that is, (xi − x̄).

(c) Square the deviations obtained in the previous step and find Σ_{i=1}^{n} (xi − x̄)².

(d) Divide the sum by n to obtain the value of the variance, that is, σ² = (1/n) Σ_{i=1}^{n} (xi − x̄)².

(e) Take the square root of the variance to obtain the standard deviation.

Standard Deviation of Discrete Frequency Distribution

If we have a discrete frequency distribution of X, then

σ = √[(1/N) Σ_{i=1}^{n} fi (xi − x̄)²]

where N = Σ_{i=1}^{n} fi.

The following steps should be followed to calculate variance if discrete distribution of data is given:

(a) Obtain the given frequency distribution.

(b) Calculate mean () X for given frequency distribution.

(c) Take deviations of the observations from the mean, that is, (xi − x̄).

(d) Square the deviations obtained in the previous step and multiply the squared deviations by the respective frequencies, that is, fi(xi − x̄)².

(e) Calculate the total, that is, Σ_{i=1}^{n} fi(xi − x̄)².

(f) Divide the sum Σ_{i=1}^{n} fi(xi − x̄)² by N, where N = Σ fi, to obtain the variance, σ².

(g) Take the square root of the variance to obtain the standard deviation, σ = √[(1/N) Σ_{i=1}^{n} fi(xi − x̄)²].

Standard Deviation of Grouped Frequency Distribution

For calculating standard deviation of a grouped frequency distribution, the procedure is same as for a discrete frequency distribution. However, the only difference is that we have to obtain the mid-point of the various classes and take the deviations of these midpoints from the given central point.

108. Coefficient of Variation

Coefficient of variation (C.V.) is a measure of variability which is independent of units and hence can be used to compare two data sets with different units.

C.V. = (σ/x̄) × 100

where σ represents the standard deviation and x̄ represents the mean.
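A short sketch with the statistics module (illustrative data; pstdev is the population standard deviation, matching the 1/n definition above):

```python
import statistics

data = [12.0, 14.0, 11.0, 13.0, 15.0]
sigma = statistics.pstdev(data)  # population standard deviation, sqrt(2) here
xbar = statistics.mean(data)     # 13.0
cv = sigma / xbar * 100          # coefficient of variation, unit-free
print(round(cv, 2))
```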

109. Random Variable

Random variable can be discrete and continuous.

A discrete random variable is a variable that can take only a finite or countably infinite set of values.

A continuous random variable is a variable that can take a value from a continuous range of values.

If a random variable X takes xxx n 12,, ..., with respective probabilities PPP n 12,, ..., , then

(d) P(a < x < b) = P(a ≤ x ≤ b) = P(a ≤ x < b) = P(a < x ≤ b) = ∫_a^b f(x) dx

112. General Discrete Distribution

Suppose a discrete variable X is the outcome of some random experiment and the probability of X taking the value xi is Pi , then

P(X = xi) = Pi for i = 1, 2, …

where P(xi) ≥ 0 for all values of i and Σ P(xi) = 1.

The set of values xi with their probabilities Pi of a discrete variable X is called a discrete probability distribution. For example, the discrete probability distribution of X, the number which is selected by picking a card from a well-shuffled deck is given by the following table:

is known as the probability distribution of X

110. Properties of Discrete Distribution

(a) Σ P(x) = 1

(b) E(x) = Σ x P(x)

(c) V(x) = E(x²) − [E(x)]² = Σ x² P(x) − [Σ x P(x)]²

where E(x) denotes the expected value or average value of a random variable x and V(x) denotes the variance of a random variable x.

111. Properties of Continuous Distribution

(a) The cumulative distribution function is given by

F(x) = ∫_{−∞}^{x} f(x) dx

(b) E(x) = ∫_{−∞}^{∞} x f(x) dx

(c) V(x) = E(x²) − [E(x)]² = ∫ x² f(x) dx − [∫ x f(x) dx]²

The distribution function F(x) of discrete variable X is defined by

F(x) = P(X ≤ x) = Σ_{xi ≤ x} Pi

where x is any integer.

The mean value (x̄) of the probability distribution of a discrete variable X is known as its expectation and is denoted by E(x). If f(x) is the probability density function of X, then

E(x) = Σ_{i=1}^{n} xi f(xi)

The variance of a distribution is given by

σ² = Σ_{i=1}^{n} (xi − x̄)² f(xi)

113. Binomial Distribution

Binomial distribution is concerned with trials of a repetitive nature whose outcome can be classified as either a success or a failure.

Suppose we perform n independent trials, each of which results in a success with probability P and in a failure with probability q = 1 − P. If X represents the number of successes that occur in the n trials, then X is said to be a binomial random variable with parameters (n, P).

The binomial distribution occurs when the experiment performed satisfies the following four assumptions of Bernoulli trials.

(a) They are finite in number.

(b) There are exactly two outcomes: success or failure.

(c) The probability of success or failure remains the same in each trial.

(d) They are independent of each other.

The probability of obtaining x successes from n trials is given by the binomial distribution formula,

P(X = x) = nCx P^x (1 − P)^{n−x}

where P is the probability of success in any trial and (1 − P) is the probability of failure.
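The binomial formula can be sketched with `math.comb` for the nCx term; the n = 5, P = 0.5 values below are hypothetical.

```python
from math import comb

def binom_pmf(x, n, p):
    """P(X = x) = nCx * p**x * (1 - p)**(n - x)"""
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

# Example: probability of exactly 3 successes in 5 trials with p = 0.5
print(binom_pmf(3, 5, 0.5))          # 5C3 * 0.5**5 = 10/32 = 0.3125

# Sanity check: the pmf sums to 1 over x = 0..n
total = sum(binom_pmf(x, 5, 0.5) for x in range(6))
print(total)
```

The sum-to-one check mirrors assumption (b) above: every trial ends in exactly one of the two outcomes, so the x = 0, ..., n cases exhaust all possibilities.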

114. Poisson Distribution

Poisson distribution is a distribution related to the probabilities of events which are extremely rare but which have a large number of independent opportunities for occurrence.

A random variable X, taking on one of the values 0, 1, 2, ..., is said to be a Poisson random variable with parameter m if for some m > 0,

P(x) = e^{−m} m^x / x!,  x = 0, 1, 2, ...

For Poisson distribution,

Mean = E(x) = m

Variance = V(x) = m

Therefore, the expected value and the variance of a Poisson random variable are both equal to its parameter m.
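The equality of mean and variance can be verified numerically from the pmf. The value m = 2 below is hypothetical, and the infinite sums are truncated where the tail is negligible.

```python
from math import exp, factorial

def poisson_pmf(x, m):
    """Poisson pmf: P(x) = e**(-m) * m**x / x!"""
    return exp(-m) * m ** x / factorial(x)

m = 2.0
# Mean and variance computed from the pmf; both should come out equal to m.
mean = sum(x * poisson_pmf(x, m) for x in range(100))
var = sum((x - mean) ** 2 * poisson_pmf(x, m) for x in range(100))
print(round(mean, 6), round(var, 6))
```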

115. Hypergeometric Distribution

If the probability changes from trial to trial, one of the assumptions of the binomial distribution is violated; hence, the binomial distribution cannot be used. In such cases, the hypergeometric distribution is used. Say a bag contains m white and n black balls. If y balls are drawn one at a time (without replacement), then the probability that x of them will be white is

P(X = x) = [mCx × nC(y−x)] / (m+n)Cy

This distribution is known as the hypergeometric distribution.
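A minimal sketch of the white-and-black-ball setup, using hypothetical counts m = 5, n = 3 and draw size y = 4:

```python
from math import comb

def hypergeom_pmf(x, m, n, y):
    """P(x white) when y balls are drawn one at a time without replacement
    from a bag of m white and n black balls."""
    return comb(m, x) * comb(n, y - x) / comb(m + n, y)

# Example: m = 5 white, n = 3 black, draw y = 4; P(exactly 2 white)
p2 = hypergeom_pmf(2, 5, 3, 4)
print(p2)                                   # 5C2 * 3C2 / 8C4 = 30/70 = 3/7

# Sanity check: pmf sums to 1 over the feasible x (here x = 1..4)
total = sum(hypergeom_pmf(x, 5, 3, 4) for x in range(1, 5))
print(total)
```

The feasible range of x runs from max(0, y − n) to min(m, y), since the draw cannot contain more white balls than the bag holds, nor fewer than y − n.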

116. Geometric Distribution


Consider repeated trials of a Bernoulli experiment E with probability of success P and probability of failure q = 1 − P. Let x denote the number of failures that occur before the first success. The distribution is

P(X = x) = q^x P,  x = 0, 1, 2, ...

Also, ∑_{x=0}^{∞} q^x P = P / (1 − q) = 1.

The mean of the geometric distribution = q/P.

The variance of the geometric distribution = q/P².
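The mean q/P and variance q/P² can be confirmed numerically from the pmf. The value P = 0.25 below is hypothetical, and the infinite sums are truncated at a point where the geometric tail is negligible.

```python
# Geometric distribution with x = number of failures before the first success:
# P(X = x) = q**x * p, x = 0, 1, 2, ...
p = 0.25
q = 1 - p

def geom_pmf(x):
    return q ** x * p

N = 500  # truncate the infinite sums; the tail q**N is vanishingly small here
mean = sum(x * geom_pmf(x) for x in range(N))
var = sum((x - mean) ** 2 * geom_pmf(x) for x in range(N))
print(round(mean, 6), round(var, 6))   # q/p = 3.0 and q/p**2 = 12.0
```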

117. General Continuous Distribution

When a random variable X takes all possible values in an interval, the distribution is called a continuous distribution of X.

A continuous distribution of X can be defined by a probability density function f(x) which satisfies

P(−∞ < x < ∞) = ∫_{−∞}^{∞} f(x) dx = 1

The expectation for a general continuous distribution is given by E(x) = ∫_{−∞}^{∞} x f(x) dx

The variance for a general continuous distribution is given by V(x) = ∫_{−∞}^{∞} (x − x̄)² f(x) dx

118. Uniform Distribution

If the density of a random variable X over the interval −∞ < a < b < ∞ is given by

f(x) = 1/(b − a),  a < x < b

then the distribution is called the uniform distribution.
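A quick numerical check of the uniform density, using hypothetical endpoints a = 2 and b = 5: the density should integrate to 1, and the mean should be the midpoint (a + b)/2 (here approximated by a simple midpoint-rule sum).

```python
# Uniform density f(x) = 1/(b - a) on (a, b): midpoint-rule check that the
# density integrates to 1 and that E(x) = (a + b)/2.
a, b = 2.0, 5.0
n = 100_000
h = (b - a) / n
f = 1.0 / (b - a)

total = f * h * n                                          # integral of f(x) dx
mean = sum(f * (a + (i + 0.5) * h) * h for i in range(n))  # integral of x f(x) dx
print(round(total, 6), round(mean, 6))   # 1.0 and 3.5
```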
