Numerical Simulations for the Discrete Hankel Transform



By: Ugo Chouinard Student ID: 6191337

MCG 4220 – Thesis Department of Mechanical Engineering University of Ottawa

April 2015

Thesis Supervisor: Dr. Natalie Baddour

© 2015 U. Chouinard


Abstract

In this thesis, numerical simulations are performed on a new discrete Hankel transform algorithm in order to test its ability to properly transform a function from the space domain to the frequency domain and vice-versa. The algorithm's accuracy and precision are demonstrated through transforms of various test functions. Furthermore, the algorithm is improved in order to reduce the computing time required to perform the transform. Finally, different methods are considered in order to further improve the accuracy of the discrete Hankel transform.



Acknowledgment

I would like to express my gratitude to my thesis supervisor, Dr. Natalie Baddour, who helped and guided me throughout the entire length of my thesis, for the knowledge and experience she shared, and for the valuable feedback she provided, which helped to improve my understanding and skills.



Table of Contents

Abstract
Acknowledgment
Table of Contents
List of figures
List of tables
1 Introduction
2 Hankel Transform Theory
  2.1 The Continuous Hankel Transform
  2.2 The Discrete Transform
    2.2.1 Discretization Points
    2.2.2 Scaling Factor
3 Hankel Transform Test and Results
  3.1 Method for testing the Algorithm
    3.1.1 Accuracy
    3.1.2 Precision
    3.1.3 Function Limit
    3.1.4 Sampling Size
  3.2 Test Functions
    3.2.1 Gaussian Function
    3.2.2 Sinc Function
    3.2.3 Modified Exponential Function
4 Improving the Discrete Hankel Transform
  4.1 Computing Speed
    4.1.1 The Y Matrix
    4.1.2 Matrix Multiplication
    4.1.3 Calculating the Bessel Zeros
  4.2 The Energy Problem
    4.2.1 Energy of a Signal
    4.2.2 Energy Captured vs Total Energy
    4.2.3 Loss of information at the Function limits
    4.2.4 Varying sample size
    4.2.5 Alternative Sampling Points
    4.2.6 Summary
5 Discussion
6 Summary and Conclusion
7 References
Appendix A. MATLAB Code
  A-1. Space Sampling Function
  A-2. Frequency Sampling Function
  A-3. Y matrix Assembly Function
  A-4. Gaussian Function Test Code
  A-5. Sinc Function Test Code
  A-6. Modified Exponential Test Code
  A-7. Signal Energy Computation
Appendix B. Theory and Operational Rules for the Discrete Hankel Transform


List of figures

Figure 3-1: Space-limited test function and its 1st and 11th order Hankel transforms
Figure 3-2: Gaussian function DHT with n=1 and n=11 with error, where the solid line is the continuous transform and the dotted line the discrete transform
Figure 3-3: Gaussian function IDHT using n=1 and n=11 with their respective error, where the solid line is the continuous transform and the dotted line the discrete transform
Figure 3-4: Sinc function space and Hankel domain curves for n=1 and n=11, where the solid line is the continuous transform and the dotted line the discrete transform
Figure 3-5: Sinc function DHT for n=1 and n=11 with respective error, where the solid line is the continuous transform and the dotted line the discrete transform
Figure 3-6: Sinc function IDHT for n=1 and n=11 with respective error, where the solid line is the continuous transform and the dots the discrete transform
Figure 3-7: Modified exponential function space and Hankel domain curves for n=1 and n=11
Figure 3-8: Modified exponential DHT for n=1 and n=11 with respective error, where the solid line is the continuous transform and the dots the discrete transform
Figure 3-9: Modified exponential IDHT for n=1 and n=11 with respective error, where the solid line is the continuous transform and the dots the discrete transform
Figure 4-1: Value of K for which a tolerance of 10⁻⁴ is met for order n, where the solid line is the approximate curve for calculating K and the dotted line is the calculated K values
Figure 4-2: Value of K for which a tolerance of 10⁻⁶ is met for order n, where the solid line is the approximate curve for calculating K and the dotted line is the calculated K values
Figure 4-3: DHT of sinc function, alternative sampling points, n=1, where the solid line is the continuous transform and dots the discrete transform
Figure 4-4: Sinc function DHT with shifted sample points for n=1, where the solid line is the continuous transform and dots the discrete transform



List of tables

Table 4-1: Total energy of the test functions and captured energy of the Hankel transform in the space domain
Table 4-2: Gaussian function dynamic error values for n=1
Table 4-3: Sinc function dynamic error values for n=1
Table 4-4: Modified exponential function dynamic error values for n=1
Table 4-5: Captured energy improvement for n=1




1 Introduction

The field of engineering is partly driven by different mathematical methods for solving a wide variety of problems. However, some mathematical tools are limited by the fact that they can only be used analytically. One of these limited tools is the Hankel transform. The Hankel transform is usually involved in solving boundary value problems with radial symmetry. Some classical engineering problems that fall into this category are heat conduction through a cylinder or a disk with internal heat generation [1, Ch. 9] and vibration of a circular membrane [2, p. 287]. These problems feature radial symmetry and therefore should be solvable through the use of a Hankel transform. However, a difficulty arises from the nature of the Hankel transform, which is defined in terms of the Bessel function. Since a Bessel function is most commonly defined as an infinite series, performing the integration analytically is sometimes impossible. In fact, there are only a select few functions for which a forward and inverse Hankel transform pair is known. Thus, due to the nature of the Hankel transform and its usefulness in engineering applications, there is a need for a numerical solution.

Many algorithms have been developed in order to evaluate the transform numerically, such as those proposed by [3] and [4], but none of them are fully discrete or possess their own set of mathematical rules and properties. [5] proposes a fully discrete algorithm having its own set of rules in order to approximate the continuous transform in a discrete way. The following report consists of the simulations performed in order to test the theory developed in [5]. Moreover, this report also investigates methods of improving the computational speed of the algorithm used to compute the discrete Hankel transform.



2 Hankel Transform Theory

2.1 The Continuous Hankel Transform

The Hankel transform is an integral transform that allows a function defined in the space domain to be transformed to the Hankel (frequency) domain. This transformation is referred to as the forward transform. In the case of the inverse transform, the function is taken from the frequency domain into the space domain. The function definitions in the frequency and space domains are then referred to as a Hankel transform pair. The forward transform of order n is defined as [6, p. 5.6]

$$\mathbb{H}_n\{f(r)\} = F_n(\rho) = \int_0^{\infty} f(r)\, J_n(\rho r)\, r\, dr \qquad (2.1)$$

where r J_n(ρr) is the kernel of the integral transform and J_n is the Bessel function of order n. The inverse Hankel transform is then defined as

$$\mathbb{H}_n^{-1}\{F_n(\rho)\} = f(r) = \int_0^{\infty} F_n(\rho)\, J_n(\rho r)\, \rho\, d\rho \qquad (2.2)$$

The various properties of the transform are not presented here but may be found in the literature. Specifically, further information about the transform is available in [2], [6],[7].

2.2 The Discrete Transform

Performing a numerical (discrete) Hankel transform results in an approximation of the continuous Hankel transform (CHT). It is possible to approximate the CHT by using the discrete Hankel transform (DHT) proposed in [5], defined as

$$\mathbf{F} = \mathbf{Y}_{nN}\, \mathbf{f} \qquad (2.3)$$

This discrete transform consists of taking a vector f of length N − 1, which is the discretized function to be transformed, and an (N − 1) × (N − 1) matrix of Hankel order n, Y_nN, and performing the matrix-vector multiplication to obtain the vector F, which in turn is the discrete function in the transformed (Hankel) domain. The Y_nN matrix in equation (2.3) has its (m, k)th entry given by [5]



$$Y_{m,k}^{nN} = \frac{2}{j_{nN}\, J_{n+1}^{2}(j_{nk})}\; J_n\!\left(\frac{j_{nm}\, j_{nk}}{j_{nN}}\right), \qquad 1 \le m, k \le N-1 \qquad (2.4)$$

with j_nk being the kth zero of the Bessel function of order n.

Since the core of the tested discrete transform is the transformation matrix Y_nN, various properties have to be maintained. One of these properties is that the matrix Y_nN proposed in [5] is orthogonal, so that Y_nN Y_nN = I. The first Bessel zero used in computing the entries of the Y_nN matrix is the first non-zero root of the Bessel function of order n. If the Y_nN matrix is not assembled following that rule, it loses its orthogonality property and performing the discrete transform would lead to improper results. The same rule applies to the way the function is discretized, as shown in section 2.2.1. Given the orthogonality property of the matrix Y_nN, the inverse discrete Hankel transform is then given by

$$\mathbf{f} = \mathbf{Y}_{nN}\, \mathbf{F} \qquad (2.5)$$

The discrete Hankel forward and inverse transforms as given in equations (2.3) and (2.5) can be used to approximate the continuous Hankel transform at certain discrete points. Given a function f evaluated at the discrete points r_nk in the space domain, with k an integer such that 0 < k < N, the discrete approximation to the nth order Hankel transform F evaluated at the discrete points ρ_nm, with m an integer such that 0 < m < N, is given by [5]

$$F[m] = \alpha \sum_{k=1}^{N-1} Y_{m,k}^{nN}\, f[k] \qquad (2.6)$$

where α is a scaling factor to be discussed below, and F[m] = F(ρ_nm), f[k] = f(r_nk). Consequently, the inverse discrete Hankel transform (IDHT) is given by

$$f[k] = \alpha \sum_{m=1}^{N-1} Y_{m,k}^{nN}\, F[m] \qquad (2.7)$$


For both the DHT and IDHT, α is a scaling factor which depends on the function properties and is discussed in section 2.2.2. Furthermore, the vector entries f[k] and F[m] are the entries of the vectors corresponding to the continuous functions evaluated at r_nk and ρ_nm, respectively, so that F[m] = F(ρ_nm) and f[k] = f(r_nk). The choice of the discretization points r_nk and ρ_nm is discussed in section 2.2.1. The full theory of the discrete Hankel transform is given in Appendix B.

2.2.1 Discretization Points

In order to properly use the discrete transform, a function has to be discretized at specific sample points. These sample points are given in the space domain for a range [0, R] as [5]

$$r_{nk} = \frac{j_{nk}}{j_{nN}}\, R \qquad \text{for } 1 \le k \le N-1 \qquad (2.8)$$

For the frequency domain in the range [0, W_p], the sample points are

$$\rho_{nm} = \frac{j_{nm}}{j_{nN}}\, W_p \qquad \text{for } 1 \le m \le N-1 \qquad (2.9)$$

It is important to note that, as in the case of the transformation matrix Y_nN, the first Bessel zero used in computing the sample points is the first non-zero root.

2.2.2 Scaling Factor

The scaling factor used in the transform depends on whether the function is space-limited or band-limited. If a function is space-limited to [0, R], then the scaling factor for the forward transform as given in equation (2.6) is given by [5]

$$\alpha = \frac{R^2}{j_{nN}} \qquad (2.10)$$

Similarly, for the inverse transform of a space-limited function, the scaling factor in equation (2.7) is given by



$$\alpha = \frac{j_{nN}}{R^2} \qquad (2.11)$$

In the case of a band-limited function, the scaling factor of the forward transform is given as

$$\alpha = \frac{j_{nN}}{W_p^2} \qquad (2.12)$$

Similarly, for the inverse transform,

$$\alpha = \frac{W_p^2}{j_{nN}} \qquad (2.13)$$

The derivation of these scaling factors is shown in [5].
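To make the pieces above concrete, the following minimal MATLAB sketch chains the sample points, the Y_nN matrix and the space-limited scaling factors of equations (2.10) and (2.11) into a forward and inverse DHT. It is only an illustrative sketch: it assumes the helper routines listed in Appendix A (besselzero, YmatrixAssembly, spaceSampler, freqSampler) are on the MATLAB path, and it uses the Gaussian-type test function of section 3.2.1 purely as an example.

% Minimal forward/inverse DHT sketch for a space-limited function on [0, R].
% Assumes besselzero, YmatrixAssembly, spaceSampler and freqSampler
% (Appendix A) are on the MATLAB path.
n = 1;  N = 64;  R = 2;  a = 5;            % order, sample size, space limit, test parameter
z  = besselzero(n, N, 1);                  % first N zeros of J_n
Y  = YmatrixAssembly(n, N, z);             % (N-1)x(N-1) transformation matrix, eq. (2.4)
r  = spaceSampler(n, N, R, z);             % space sample points r_nk, eq. (2.8)
ro = freqSampler(n, N, R, z);              % frequency sample points rho_nm, eq. (2.9)
f  = exp(-a^2*r.^2).*r.^n;                 % discretized space-limited test function
F  = (R^2/z(N)) * (Y*f.');                 % forward DHT with the scaling of eq. (2.10)
f2 = (z(N)/R^2) * (Y*F);                   % inverse DHT with the scaling of eq. (2.11)
roundTripErr = mean(abs(f.' - f2));        % should be near machine precision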



3 Hankel Transform Test and Results

3.1 Method for testing the Algorithm

In order to test the accuracy of the Hankel transform algorithm, functions with known continuous Hankel transforms (CHT) were used. The selected functions have different properties to ensure that the algorithm is tested under various conditions. Furthermore, the functions have been tested using two different orders of the transform to further verify the versatility of the discrete transform. The tests were performed using MATLAB. The relevant code can be found in Appendix A.

3.1.1 Accuracy

For the purpose of testing the accuracy of the DHT and IDHT, the dynamic error has been used. It is defined as [4]

$$e(v) = 20 \log_{10}\!\left(\frac{\left| f(v) - f^{*}(v) \right|}{\max\left| f^{*}(v) \right|}\right) \qquad (3.1)$$

for any v in the space domain or Hankel domain, where f(v) is the continuous forward or inverse Hankel transform and f*(v) is the discrete counterpart computed via the DHT/IDHT. The dynamic error uses the ratio of the absolute error to the maximum amplitude of the function, calculated on a log scale. Therefore, a large negative decibel error is desired for an accurate discrete transform. The dynamic error is used instead of the percent error since it allows the function to be zero, whereas using the percent error would result in division by zero.

3.1.2 Precision

The precision of the algorithm is tested by sequentially performing a pair of forward and inverse transforms and comparing the result to the original function. Doing so ensures that the algorithm does not add further error while performing calculations. To verify the precision, the average of the absolute error between the sample points of the original function f(v) and the discretely computed function f*(v) is used, given by

$$\varepsilon = \frac{1}{N} \sum_{n=1}^{N} \left| f - f^{*} \right| \qquad (3.2)$$

An ideal precision would result in the absolute error being zero.
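A direct MATLAB transcription of these two error measures might look as follows. This is a sketch under the assumption that fTrue and fDHT are vectors holding the continuous and discretely computed values at the same sample points (for equation (3.2), fDHT would instead hold the result of the forward-then-inverse transform and fTrue the original samples).

% Error metrics of equations (3.1) and (3.2); fTrue and fDHT are assumed
% to be vectors evaluated at the same sample points.
dynErr    = 20*log10( abs(fTrue - fDHT) ./ max(abs(fDHT)) );   % dynamic error, eq. (3.1), in dB
avgAbsErr = mean( abs(fTrue - fDHT) );                          % average absolute error, eq. (3.2)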

3.1.3 Function Limit

In Hankel (Fourier) theory, a function limited in the space domain (space-limited) is infinite in the Hankel frequency domain and vice-versa. Depending on the function, it may be difficult to see in which domain the function is actually limited. Thus, the concept of an effective limit is used. A function is defined as being "effectively limited in space by R" if, for r > R, f(r) → 0 as r → ∞; in other words, the function can be made as close to zero as desired by selecting an R that is large enough. The same idea can be applied to the spatial frequency domain, where the effective limit is denoted by W_p. Furthermore, if a function is effectively limited in one domain, it is usually not limited in the transformed domain; i.e., it is infinite there. The functions selected for numerical testing were chosen so that one would be effectively space-limited and the other effectively band-limited. In order to see which type of limit could be applied to the selected functions, their space and frequency representations were plotted and effective limits were chosen based on observation. If the forward or inverse transform of a given function is desired without possessing any information about the properties of the desired transform, then the type of limit (space-limited or band-limited) has to be decided based on the displayed properties of the function. For instance, if a function is periodic in the space domain, it is more likely to be effectively limited in the frequency domain, and vice-versa.

3.1.4 Sampling Size

When discretizing a function in order to perform the transform, a relationship between the space limit, band limit and sampling size has to be maintained. The necessary relationship is given by [5]

$$W_p = \frac{j_{nN}}{R} \qquad (3.3)$$



where W_p is the effective band-limit, R is the effective space limit and j_nN is the Nth zero of J_n(r). Furthermore, based on conventions from the discrete Fourier transform, sample sizes would be given by N ≅ 2^x for some positive integer x. However, for the discrete Hankel transform, this is not mandatory.
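As an illustration of equation (3.3), one way to pick a sample size for chosen effective limits R and W_p is to search for the smallest N whose j_nN reaches R·W_p. The sketch below does this with the besselzero routine of [8]; the search bound Nmax is an assumption added here for illustration and is not part of the original method.

% Find the smallest sample size N such that j_nN >= R*Wp (eq. 3.3).
% Uses the besselzero routine of [8]; Nmax is an assumed search bound.
n = 1;  R = 2;  Wp = 100;  Nmax = 512;
z = besselzero(n, Nmax, 1);           % first Nmax zeros of J_n
N = find(z >= R*Wp, 1, 'first');      % smallest N with j_nN >= R*Wp
if isempty(N), error('Increase Nmax'); end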

3.2 Test Functions

3.2.1 Gaussian Function

The first chosen function is effectively limited in space. It is given by

$$f(r) = e^{-a^2 r^2}\, r^{n} \qquad (3.4)$$

with its continuous Hankel transform (CHT) given by [1, Ch. 9]

$$F(\rho) = \frac{\rho^{n}}{(2a^2)^{n+1}} \exp\!\left(-\frac{\rho^2}{4a^2}\right) \qquad (3.5)$$

The function has been tested using a Hankel transform of Bessel order n = 1 and n = 11 . The parameter a was chosen arbitrarily to be a = 5. The graphs for each of the chosen orders in both space and frequency are displayed in Figure 3-1.



Figure 3-1: Space-limited test function and its 1st and 11th order Hankel transforms

From the graph of the n = 1 function and its Hankel transform, it can be assumed that f(r) is space-limited from R = 1. The same space limit can also be imposed for n = 11. It can be observed that the function could potentially be band-limited from ρ = 50 for n = 1 and from ρ = 55 for n = 11. Knowing both effective limits in space and frequency implies that the minimum number of sample points can be used to perform the transform. Moreover, by taking a larger domain in space and frequency, more information is used by the transform. Taking R = 2 and W_p = 100 for n = 1 gives j_nN = R·W_p = 200. Choosing N as a power of 2 such that j_nN is close to this value gives N = 64, corresponding to j_nN = 201.8455. The same can be done with the n = 11 function: the values are taken as R = 2 and W_p = 110, which results in N = 64 and j_nN = 217.2774. Furthermore, imposing a space limit results in using the following scaling factors:



α = R²/j_nN for the DHT and α = j_nN/R² for the inverse discrete Hankel transform (IDHT). The results for the DHT and IDHT of the Gaussian function are shown in Figure 3-2 and Figure 3-3, respectively.

Figure 3-2: Gaussian Function DHT with n=1 and n=11 with error where the solid line is the continuous transform and the dotted line the discrete transform



Figure 3-3: Gaussian function IDHT using n=1 and n=11 with their respective error where the solid line is the continuous transform and the dotted line the discrete transform

It can be noted that for both the forward and inverse transforms, the computed error is low even for a relatively small N.

Performing the DHT and IDHT sequentially results in ε = 1.6926 × 10⁻¹⁷ from the original function for n = 1 and ε = 8.5249 × 10⁻²² for n = 11, with ε calculated from equation (3.2).



3.2.2 Sinc Function

The second function used is the sinc function, defined by

$$f(r) = \frac{\sin(ar)}{ar} \qquad (3.6)$$

and its CHT, given by [4]

$$F(\rho) = \begin{cases} \dfrac{(\rho/a)^{n}\cos(n\pi/2)}{a^2\sqrt{1-\rho^2/a^2}\,\left(1+\sqrt{1-\rho^2/a^2}\right)^{n}} & \rho < a \\[2ex] \dfrac{\sin\!\left(n \arcsin(a/\rho)\right)}{a^2\sqrt{\rho^2/a^2-1}} & \rho > a \end{cases} \qquad (3.7)$$

For the sinc function, the assumed limits are chosen with respect to the true functions. They are shown in Figure 3-4.

Figure 3-4: Sinc Function Space and Hankel domain curves for n=1 and n=11 where the solid line is the continuous transform and the dotted line the discrete transform


In both cases, the Hankel transform is effectively limited at W_p = 30. Compared to the previous case, where a limit in both space and frequency could be imposed, the periodicity of the function prevents that here. Thus, instead of being able to take a large enough range in both domains, a larger sample size is used in order to have more "information" captured by the transform. N can then be taken as 256, which computes to j_nN = 805.0327 for n = 1 and j_nN = 820.6675 for n = 11. Keeping the relationship between N, R and W_p results in R = 26.75 for n = 1 and R = 27.5 for n = 11. Since a band-limit was imposed on the function, the transform was done using the frequency scaling factors, given by α = j_nN/W_p² for the DHT and α = W_p²/j_nN for the IDHT.

The results of the DHT and IDHT of the sinc function are shown in Figure 3-5 and Figure 3-6 respectively, where the solid line is the true function and the dotted lines denote the numerical transform.

Figure 3-5: Sinc Function DHT for n=1 and n=11 with respective error where the solid line is the continuous transform and the dotted line the discrete transform



Figure 3-6: Sinc Function IDHT for n=1 and n=11 with respective error where the solid line is the continuous transform and the dots the discrete transform

It can be seen in Figure 3-5 that the DHT suffers from the Gibbs phenomenon at the discontinuity, as expected. Performing the sequential set of DHT and IDHT results in an average absolute error from the original function of ε = 5.2274 × 10⁻¹⁵ for n = 1 and ε = 6.1430 × 10⁻¹³ for n = 11, with ε calculated from equation (3.2).



3.2.3 Modified Exponential Function

A third function was also used to test the algorithm. The function is defined by

$$f(r) = \frac{e^{-ar}}{r} \qquad (3.8)$$

with its CHT given by [1, Ch. 9]

$$F(\rho) = \frac{\left(\sqrt{\rho^2 + a^2} - a\right)^{n}}{\rho^{n}\sqrt{\rho^2 + a^2}} \qquad (3.9)$$

Similarly to the first function, it is also effectively space limited. However, as can be seen in Figure 3-7, the function cannot be taken as being also effectively band limited as was the case for the first function.

Figure 3-7: Modified Exponential Function Space and Hankel domain curves for n=1 and n=11

As seen in Figure 3-7, the function is space limited at r = 1 for both n = 1 and n = 11 . Taking a sufficiently large sample size such as N = 128 gives W p = 399.8 for n = 1 and 415.3 for n = 11 . The corresponding DHT and IDHT results are shown in Figure 3-8 and Figure 3-9 respectively.



Figure 3-8: Modified exponential DHT for n=1 and n=11 with respective error where the solid line is the continuous transform and the dots the discrete transform

Figure 3-9: Modified exponential IDHT for n=1 and n=11 with respective error where the solid line is the continuous transform and the dots the discrete transform

It should be noted that the error in the DHT tends to increase as ρ increases.



Similar to the two previous cases, the discrete transform has good accuracy, the absolute error being almost negligible: ε = 5.5603 × 10⁻¹³ for n = 1 and ε = 1.7993 × 10⁻¹⁰ for n = 11, with ε calculated from equation (3.2).



4 Improving the Discrete Hankel Transform

4.1 Computing Speed

The discrete transform comprises three parts that are “costly” from a computational point of view:
1. Assembling the Y matrix
2. Calculating the Bessel zeros
3. Performing the matrix multiplications

4.1.1 The Y Matrix

The Y matrix is assembled by a MATLAB function. This function needs to calculate every single (m, k)th entry of the matrix. Thus, for a sample of size N, the program has to run in O(N²). As stated in section 2.2, the (m, k)th entry is given by

$$Y_{m,k}^{nN} = \frac{2}{j_{nN}\, J_{n+1}^{2}(j_{nk})}\; J_n\!\left(\frac{j_{nk}\, j_{nm}}{j_{nN}}\right), \qquad 1 \le m, k \le N-1 \qquad (4.1)$$

Therefore, each iteration needs to perform a few multiplications and the evaluation of two Bessel functions. Furthermore, each entry of the matrix uses different Bessel zeros. Thus, the need for reducing the amount of computation performed while assembling the Y matrix is evident.

4.1.1.1 Reducing the calculation time of Bessel zeros

The first step taken in order to reduce computing time is to accept an array of Bessel function zeros as a parameter of the function. Doing so allows the zeros to be pre-calculated only once, outside the function, and reused for other purposes, instead of being calculated at each iteration. Thus, instead of performing 3N² calculations of Bessel zeros, only N calculations are required. Subsequently, when a certain zero is needed, it can be fetched directly from the array.


4.1.1.2 Reducing the calculation time of the J_{n+1} Bessel function

The second step taken to reduce the number of calculations performed is to create an array containing the Bessel function of the first kind of order n+1 evaluated at the given zeros, J_{n+1}(j_nk) for k = 1..N−1. Doing so reduces the number of evaluations of this Bessel function from N² to N.

4.1.1.3 Reducing the evaluation time of the J_n Bessel function

The other Bessel function that has to be computed is J_n(j_nk j_nm / j_nN). This Bessel function also needs to be evaluated N² times in the Y matrix. However, it is possible to reduce the number of computations by calculating only the distinct products j_nk j_nm. Doing so reduces the number of computations to N(N+1)/2.

For example, if the required sampling size were N = 10, then the algorithm would only require 55 Bessel function computations instead of 100. For a small N, such a difference would not be noticed. However, as N gets much larger, the computing time is reduced significantly since, as N → ∞, N(N+1)/2 ≈ N²/2, thus reducing the computing time by half.

4.1.1.3.1 Maps and other complex data structures

Different techniques have been tested to achieve this reduction. The first is the use of map containers to store the computed values, keyed by the product j_nk j_nm. Since Bessel zeros are not evenly spaced, this product results in a unique key for a given (m, k) pair. Although this option seemed promising, multiple drawbacks were found. One of them is the fact that map containers are not always default implementations in some languages. Thus, it would be required to create a data structure in order to use the map for reduced computing time.



Another drawback is the slow process of searching and inserting in the map. Depending on the way the map is implemented, it can take up to O(N) operations to search and insert. Thus, for a single (m, k) pair, it would be required to first search for the j_nk j_nm key to see if it exists; if not, the key would need to be inserted in the structure to allow the corresponding value to be reused. Furthermore, some map implementations are ordered, meaning that every insertion of a key and value sorts the keys, further increasing computing time. Another type of data structure could be used to store a certain (m, k) pair, but it would also require computing time to perform a search or insert. Thus, the use of a more complex data structure was set aside in favour of another method of reducing calculations.

4.1.1.3.2 Jn matrix

A more effective way of matching the pairs (m, k) is to build a two-dimensional array, Jn, of size (N − 1) × (N − 1) containing j_nk j_nm as the (k, m)th entry. This 2-D array is symmetric, so only its upper triangular part needs to be calculated; from there, the (m, k) entry can be assigned to the (k, m) entry. This is done in worst case O(1) instead of O(N) for the map. This Jn matrix can also be used as a building block for the Y matrix to reduce computing time (see the sketch after this subsection).

4.1.1.4 Speed vs Memory usage

As discussed in 4.1.1.1, 4.1.1.2 and 4.1.1.3, reducing the number of calculations is beneficial in terms of computing time. However, doing so significantly increases the memory consumption of the program, since the data needs to be stored during the whole processing of the subroutine.
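Putting the ideas of 4.1.1.1 to 4.1.1.3 together, a possible optimized assembly routine is sketched below: the Bessel zeros are passed in, the J_{n+1}(j_nk) values are computed once, and the symmetric J_n term is evaluated only on the upper triangle and mirrored. This is a sketch of the approach described above, not the exact routine of Appendix A-3 (here the upper-triangle array directly holds the J_n values rather than the raw products).

function Y = YmatrixAssemblyFast(n, N, z)
% Optimized assembly of the (N-1)x(N-1) DHT matrix of eq. (2.4):
% z is the precomputed vector of the first N Bessel zeros of order n
% (4.1.1.1), J_{n+1}(j_nk) is evaluated only N-1 times (4.1.1.2), and
% the symmetric J_n term is evaluated only for m <= k (4.1.1.3).
    jnN = z(N);
    Jn1 = besselj(n+1, z(1:N-1));
    Jn1 = Jn1(:).';                          % force row orientation
    Jn  = zeros(N-1, N-1);                   % holds J_n(j_nk*j_nm/j_nN)
    for m = 1:N-1
        for k = m:N-1                        % upper triangle only
            Jn(m,k) = besselj(n, z(m)*z(k)/jnN);
            Jn(k,m) = Jn(m,k);               % mirror the symmetric entry
        end
    end
    Y = (2/jnN) * Jn ./ repmat(Jn1.^2, N-1, 1);   % divide column k by J_{n+1}(j_nk)^2
end

Note that the vector of zeros is called z here rather than zeros, which avoids shadowing MATLAB's built-in zeros function inside the routine.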

4.1.2 Matrix Multiplication

When performing the discrete Hankel transform, the Y matrix is multiplied with the discrete function. This implies a multiplication of an (N − 1) × (N − 1) matrix with an (N − 1) × 1 vector. Thus about N² multiplications and N additions have to be performed, resulting in N² + N operations. Unfortunately, there are no shortcuts available yet to perform this operation.

4.1.3 Calculating the Bessel Zeros

Section 4.1.1.1 discussed how to avoid calculating the same Bessel zeros multiple times by passing an array as a parameter to the function. However, there is still a need to calculate the zeros before they are used. [8] provides the algorithm used in this work to calculate the zeros. It uses Halley's method [9] to obtain a value for each zero within a certain tolerance. However, since it iterates multiple times (up to 100) and computes two Bessel function evaluations per iteration, it can take up to 200N computations to find all the required zeros, which can be significantly costly in computing time. Therefore, there is a need to increase the computational speed of the calculation of the Bessel zeros. It is shown in [5] that as k → ∞, the kth Bessel zero of order n can be approximated as

$$j_{nk} \approx \left(2k + n - \frac{1}{2}\right)\frac{\pi}{2} \qquad (4.2)$$

The accuracy of the approximation depends on the order n and the value of k. Consequently, for a given n, there exists an integer K such that for k > K, the calculated root can be reasonably approximated by (4.2). Tests were performed to see which value of K needed to be attained in order for the approximation to be within a tolerance of 10⁻⁴. This can be seen in Figure 4-1.



Figure 4-1: Value of K for which a tolerance of 10⁻⁴ is met for order n, where the solid line is the approximate curve for calculating K and the dotted line is the calculated K values

Figure 4-1 was computed by comparing the true value of the kth zero of order n with the one from the approximation given in (4.2). When an accuracy of 10⁻⁴ between the real and approximate values was reached, the k value for which this occurred was recorded. The computation was allowed to run up to the 20,000th root; if k exceeded that limit, a value of -1 was recorded, which explains the spikes in Figure 4-1. In general, a quasi-linear behaviour can be observed. Thus, the value of K for which a tolerance of 10⁻⁴ is attained can be approximated by

$$K = 225n + 113 \qquad (4.3)$$

The same process was performed for a tolerance of 10⁻⁶, and is shown in Figure 4-2.



Figure 4-2: Value of K for which a tolerance of 10⁻⁶ is met for order n, where the solid line is the approximate curve for calculating K and the dotted line is the calculated K values

The equation of the line in Figure 4-2 is given by

$$K = 2312n + 1126 \qquad (4.4)$$

It is to be noted that for an increase in accuracy by a factor of 100, the k value increases by a factor of about 10.
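A possible way to exploit equations (4.2) and (4.3) is sketched below: zeros up to the threshold K are computed with the exact root finder of [8], and the asymptotic approximation is used beyond it. This is a sketch under the assumption that an accuracy of roughly 10⁻⁴ in the approximated zeros is acceptable for the application.

function z = besselZerosHybrid(n, N)
% Hybrid computation of the first N zeros of J_n: exact roots up to
% K = 225n + 113 (eq. 4.3, ~1e-4 accuracy), asymptotic form of eq. (4.2)
% beyond that. besselzero is the routine of [8].
    K = min(N, 225*n + 113);             % threshold of eq. (4.3)
    z = zeros(N, 1);
    z(1:K) = besselzero(n, K, 1);        % exact zeros j_n1 .. j_nK
    k = (K+1:N).';
    z(K+1:N) = (2*k + n - 0.5)*pi/2;     % approximated zeros, eq. (4.2)
end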

4.2 The Energy Problem

As shown in sections 3.2.1, 3.2.2 and 3.2.3, the algorithm for performing a discrete Hankel transform is quite accurate. However, the transform is more effective for the first test function than for the two other test functions. One possible reason for this is that for some functions the discrete transform captures more of the energy of the signal than for others. This idea is further elaborated below.



4.2.1 Energy of a Signal

The energy of a continuous signal f(t) can be computed from the analytical solution of [10]

$$E_{\text{function}} = \int_{-\infty}^{\infty} \left| f(t) \right|^{2} dt \qquad (4.5)$$

Since the Hankel transform is defined on a positive interval, the corresponding energy integral for functions that can be Hankel transformed is

$$E_{\text{Hankel}} = \int_{0}^{\infty} \left| f(t) \right|^{2} dt \qquad (4.6)$$

A corresponding discrete definition for a discrete signal f[n] is given by

$$E_{\text{discrete}} = \sum_{n=0}^{\infty} \left| f[n] \right|^{2} \qquad (4.7)$$

However, in order to properly compare the energy captured in the range of the transform, the trapezoidal rule will be used.

4.2.2 Energy Captured vs Total Energy

Since the discrete Hankel transform has to be limited in space and frequency, as explained in 3.1.3, the function energy captured by the discrete Hankel transform can be computed with the trapezoidal rule as

$$E_{\text{space}} = \sum_{k=1}^{N-2} \left( \frac{\left| f[k] \right|^{2} + \left| f[k+1] \right|^{2}}{2} \right) \frac{R}{j_{nN}} \left( j_{nk+1} - j_{nk} \right) \qquad (4.8)$$

$$E_{\text{frequency}} = \sum_{m=1}^{N-2} \left( \frac{\left| F[m] \right|^{2} + \left| F[m+1] \right|^{2}}{2} \right) \frac{W_p}{j_{nN}} \left( j_{nm+1} - j_{nm} \right) \qquad (4.9)$$

Clearly, the function energy captured by the DHT and the true function energy are not the same, and a percentage of energy captured can be computed as the ratio of the DHT energy captured from equations (4.8) and (4.9) to the actual function energy calculated from equation (4.6). More concretely,

$$\%_{\text{captured}} = \frac{E_{\text{space}}}{E_{\text{Hankel}}} \times 100 \qquad (4.10)$$
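A minimal MATLAB sketch of equations (4.8) and (4.10) for a sampled space-domain function is shown below. It assumes f holds the samples f[k], z the Bessel zeros, R the space limit, and that the true energy EHankel of equation (4.6) is known (analytically or from a high-resolution numerical integration); the energy computation used in this work is listed in Appendix A-7.

% Trapezoidal estimate of the captured space-domain energy (eq. 4.8)
% and the corresponding percentage (eq. 4.10). EHankel is assumed known.
fk = f(:).';  zk = z(:).';                              % force row orientation
w  = diff(zk(1:N-1)) * (R/zk(N));                       % spacing r_{k+1} - r_k
Espace = sum( (abs(fk(1:N-2)).^2 + abs(fk(2:N-1)).^2)/2 .* w );  % eq. (4.8)
pctCaptured = 100 * Espace / EHankel;                   % eq. (4.10)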

The percentage of energy captured is used to compare the DHT with the full function. It has been computed for the test functions of section 3.2 with both Hankel transform orders (n = 1 and n = 11). The results are displayed in Table 4-1.

Table 4-1: Total energy of the test functions and captured energy of the Hankel transform in the space domain

Function | n  | Total energy (continuous) | Captured energy (discrete) | Percent energy captured
1        | 1  | 0.0013                    | 0.0012                     | 98.6058
1        | 11 | 0.0013                    | 7.0210e-04                 | 56.0195
2        | 1  | 0.3142                    | 0.1906                     | 60.6735
2        | 11 | 0.3142                    | 0.0316                     | 10.0729
3        | 1  | 1.0000e+15                | 76.9271                    | 7.6927e-12
3        | 11 | 1.0000e+15                | 10.9869                    | 1.0987e-12

It is to be noted that the total energy of the signal does not usually depend on the order of the Hankel transform, n, although this could be a property of the chosen functions only. However, it is obvious that the amount of energy captured drops drastically when n is increased.

4.2.3 Loss of information at the Function limits

A continuous function defined in the space domain on [0, R] is never actually sampled at 0 or R by the proposed discrete Hankel transform. Specifically, the sampling points are given by

$$r_k = \frac{j_{nk}}{j_{nN}}\, R \qquad \text{for } 1 \le k \le N-1 \qquad (4.11)$$

It can be seen that the index k never reaches N, and thus r_k never reaches R. However, the reason that r_1 is not zero is less obvious. Although the Bessel function has a zero at 0 (J_n(0) = 0 for n > 0), the first Bessel zero is taken to be the next one after 0; thus j_n1 is not 0.



As n increases, the value of j_n1 also increases. Thus, for the same N, the first sample point of order n2, r_n2,1, is greater than r_n1,1 for n2 > n1. This has the effect that as n increases, the energy at the beginning of the signal is lost. Thus, there is a need to capture the energy between 0 and r_n,1. In order to reduce this loss of information at the domain limits, different possibilities are discussed in the following sections.

4.2.4 Varying sample size

The first way to get r_1 to approach 0 is to increase N. This follows since increasing N also increases j_nN, thus reducing r_k. Tests were performed on the functions of 3.2.1 to 3.2.3. For each function, the maximum, minimum and mean values of the dynamic error were computed for N and 2N. Furthermore, the percentage of energy captured was also computed and compared between N and 2N. The results are displayed in Table 4-2 to Table 4-5.

Table 4-2: Gaussian function dynamic error values for n=1

N             | Max (dB)  | Min (dB)  | Mean (dB)
64            | -302.3724 | -338.4960 | -321.7705
128           | -305.8943 | -368.2076 | -323.6407
% Improvement | 1.164     | 8.78      | 0.58

Table 4-3: Sinc Function dynamic error values for n=1

N             | Max (dB)  | Min (dB)  | Mean (dB)
256           | -16.3271  | -99.5833  | -58.0337
512           | -16.3248  | -123.7430 | -68.6879
% Improvement | 0         | 24.26     | 18.37

Table 4-4: Modified exponential function dynamic error values for n=1

N             | Max (dB)  | Min (dB)  | Mean (dB)
128           | -27.7835  | -66.5132  | -35.9181
256           | -33.7122  | -74.3113  | -41.8843
% Improvement | 21.35     | 11.73     | 16.59



Table 4-5: captured energy improvement for n=1

Function             | % Energy captured (N) | % Energy captured (2N) | % Energy increase | % increase of mean accuracy
Gaussian             | 98.6058               | 99.8190                | 1.23              | 0.58
Sinc                 | 60.6735               | 79.5341                | 31.08             | 18.37
Modified Exponential | 7.6927e-12            | 1.7518e-11             | 127.75            | 16.59

As can be seen from Tables 4-2 to 4-5, doubling the sample size improves the accuracy of the transform. Although increasing N usually augments the resolution of the function, in this case it also increases the total energy captured. Increasing N for functions where more than 90% of the energy is already captured does not seem to be necessary.

4.2.5 Alternative Sampling Points

From section 2.2.1, the sample points are taken as

$$r_k = \frac{j_{nk}}{j_{nN}}\, R \qquad \text{for } 1 \le k \le N-1 \qquad (4.12)$$

As mentioned in 4.2.3, the first Bessel zero is not taken at zero. However, it was investigated whether doing so would improve the accuracy of the transform. The sample points would then become

$$r_{k'} = \frac{j_{nk'}}{j_{nN}}\, R \qquad \text{for } 0 \le k' \le N-2 \qquad (4.13)$$


4.2.5.1 Test and results

The test was performed on the sinc function for n = 1. The result is shown in Figure 4-3.

Figure 4-3: DHT of sinc function, alternative sampling points, n=1 where the solid line is the continuous transform and dots the discrete transform

It can be seen that the transform in Figure 4-3 is less accurate than in Figure 3-5. This is due to the fact that the Y matrix is defined for 1 ≤ k ≤ N − 1. If the Y matrix were also defined for 0 ≤ k ≤ N − 2, it would lose the orthogonality property upon which the invertibility of the transform depends. Thus, the greater error with the sampling points of equation (4.13) arises from the discrete entries of the function not matching their corresponding entries in the Y matrix, resulting in a loss of accuracy.

4.2.5.2 Shifting the alternative sample points

In order to attempt another solution to the problem discussed above, tests were made in which the function was shifted together with the new sample points. Doing so would permit sampling the function at 0 while trying to match the corresponding entries in the matrix. Thus, the j_n0 point would be shifted to j_n1, and so on. Since Bessel zeros are not evenly spaced, this can be attained by implementing the generalized shift described in [5]. The results are displayed in Figure 4-4.



Figure 4-4: Sinc function DHT with shifted sample points for n=1 where the solid line is the continuous transform and dots the discrete transform

The functions have been shifted about the first entry of the Y matrix so that j_n0 becomes j_n1. However, the test was inconclusive, since the result is the same as for the unshifted function with the alternative sampling points, as can be seen in Figure 4-3.

4.2.6 Summary

As has been seen in the previous sections, the only method found so far to improve the accuracy of the discrete transform is to increase the number of sample points in order to bring r_1 as close as possible to zero. Thus, accuracy has to be traded for computing time. Hence, an efficient approach is to compute the percentage of energy captured before attempting to increase the sampling size.



5 Discussion

The discrete Hankel transform tested was shown to have an acceptable accuracy. As a comparative measure, the algorithms used in [4] and [11] had similar dynamic error for the sinc function of the same order. The algorithms used in those cases sampled the function at 0, which significantly increased the energy captured by the transform, as shown in 4.2.2. Another aspect of the transform that was tested is the precision of the algorithm, which implies that for a set of consecutive forward and inverse transforms, the result always stays within a very small tolerance. The verification of precision in [4], with a sample size of N = 512 for the sinc function, resulted in an added error of ε = 2.2 × 10⁻¹³. However, for N = 256, the algorithm tested in this work gave ε = 5.23 × 10⁻¹⁵ on the sinc function, and was thus shown to be more precise.

The algorithm was also proven to be robust under various conditions. The tests were not only performed with a first order Hankel transform, but also with higher orders. The algorithm was also shown to be effective in performing the transform for functions displaying various properties and with different types of limits: space or frequency.

Furthermore, since computational speed is important when developing algorithms that are used with large data sizes, the developed algorithm was improved in several ways to reduce the number of operations executed. As mentioned in 4.1, the best solution found so far is a trade-off between reducing the computing time and increasing the memory usage of the algorithm for storing repeated values. Although the space-time trade-off is clear in the current solution, it might not be obvious that reducing time to the detriment of memory is the best solution. However, recent trends in computer science tend to favour increasing the memory usage of a program in order to reduce the execution time, due to cheap and highly available memory [12]. Furthermore, since the definition of the DHT bounds the algorithm to perform size-dependent operations, it is crucial to reduce the repeated operations, as was done in 4.1.

The solution that was found most suitable for storing the repeated values is the basic array or vector, as previously discussed in 4.1. The more complex data structures were found to be less efficient due to their slow element access and insertion. However, this might only be related to the implementations of these structures in MATLAB. Therefore, if the algorithm were to be implemented in other languages, it would be worth reconsidering which data structure is the most efficient in access and insertion speed, as well as the memory consumption of the structure.

Another important consideration when performing the discrete Hankel transform is the sampling size that is used. Although a larger sample size increases the accuracy, it might not always be necessary. In 4.2.4, it was shown that for a function where more than 90% of the energy was captured, doubling the size only improves the accuracy by about 1%. Thus, it is important to first verify how much of the energy is captured before increasing the size, which avoids increasing the computational time for a negligible improvement in accuracy.



6 Summary and Conclusion

In the current work, it was possible to successfully test the discrete Hankel transform as proposed in [5]. The tests were performed on different functions and the accuracy of the transform was demonstrated. Factors influencing the accuracy of the tested transform were discussed and trials for improving the accuracy were presented. Several efforts at reducing the computational time of the discrete transform were also presented.



7 References

[1] A. D. Poularikas, The transforms and applications handbook. Boca Raton: CRC Press, 2000.
[2] L. C. Andrews, B. K. Shivamoggi, and Society of Photo-optical Instrumentation Engineers, Integral transforms for engineers. Bellingham, Wash.: SPIE, 1999.
[3] H. F. Johnson, "An improved method for computing a discrete Hankel transform," Comput. Phys. Commun., vol. 43, no. 2, pp. 181–202, Jan. 1987.
[4] M. Guizar-Sicairos and J. C. Gutierrez-Vega, "Computation of quasi-discrete Hankel transforms of integer order for propagating optical wave fields," J. Opt. Soc. Am. A, vol. 21, no. 1, p. 53, 2004.
[5] N. Baddour and U. Chouinard, "Theory and Operational Rules for the Discrete Hankel Transform," Journal of the Optical Society of America A.
[6] L. Debnath and D. Bhatta, Integral transforms and their applications. Boca Raton: Chapman & Hall/CRC, 2007.
[7] A. N. Srivastava and M. Ahmad, Integral transforms and Fourier series. Oxford, U.K.: Alpha Science International Ltd., 2012.
[8] G. von Winckel, "Bessel Function Zeros," 25-Jan-2005. [Online]. Available: http://www.mathworks.com/matlabcentral/fileexchange/6794-bessel-function-zeros.
[9] "Halley's Method -- from Wolfram MathWorld." [Online]. Available: http://mathworld.wolfram.com/HalleysMethod.html. [Accessed: 18-Mar-2015].
[10] "Lecture 5: Signal Energy and Parseval's Formula, Free Online Course Materials, USU OpenCourseWare." [Online]. Available: http://ocw.usu.edu/Electrical_and_Computer_Engineering/Signals_and_Systems/5_10node10.html. [Accessed: 10-Apr-2015].
[11] W. Higgins and D. Munson, "An algorithm for computing general integer-order Hankel transforms," IEEE Trans. Acoust. Speech Signal Process., vol. 35, no. 1, pp. 86–97, Jan. 1987.
[12] "Space-time tradeoff in Computer Science - Cprogramming.com." [Online]. Available: http://www.cprogramming.com/tutorial/computersciencetheory/space-time-tradeoff.html. [Accessed: 10-Apr-2015].



Appendix A. MATLAB Code

A-1. Space Sampling Function

% n     order of the Bessel function
% N     number of sample points to take
% R     space limit of the sampled function
% zeros vector of precomputed Bessel zeros of order n
function r = spaceSampler(n,N,R,zeros)
for k=1:N-1
    r(k)=(zeros(k)*R)/zeros(N);
end
end

A-2. Frequency Sampling Function

% n     order of the Bessel function
% N     number of sample points to take
% R     space limit of the sampled function
% zeros vector of precomputed Bessel zeros of order n
function ro = freqSampler(n,N,R,zeros)
for m=1:N-1
    ro(m)=zeros(m)/R;
end
end

A-3. Y matrix Assembly Function

% Y is the N-1 x N-1 transformation matrix to be assembled
%
% n is the order of the bessel function
% N is the size of the transformation matrix
% zeros is the vector of precomputed Bessel zeros of order n
function Y = YmatrixAssembly(n,N,zeros)
for m=1:N-1
    for k=1:N-1
        jnk=zeros(k);
        jnm=zeros(m);
        jnN=zeros(N);
        jnplus1=besselj(n+1, jnk);
        Y(m,k)=(2*besselj(n,(jnk*jnm/jnN)))/(jnN*jnplus1^2);
    end
end
end

A-4. Gaussian Function Test Code

%
% test for (e^-a^2r^2)r^n
%
clear

N=128;   %number of elements
n=1;     %order of transform
R=2;     %space limit
a=5;     %function property

%calculation of the N first bessel zeros
zeros=besselzero(n,N,1);
W=zeros(N)/R;   %band limit
%calling function to construct matrix for transform
Y=YmatrixAssembly(n,N,zeros);
%creating sample points
spaSample=spaceSampler(n,N,R,zeros);
%discretizing the function
for i=1:N-1
    toDHTFunc(i)=exp(-a^2*spaSample(i)^2)*(spaSample(i))^n;
end
%performing the transform
hankelFunc=(Y*toDHTFunc.')*(R^2/zeros(N));
%set of hankel domain points to match value of transform
fSamp=freqSampler(n,N,R,zeros);

%creating a discrete true function
for j=1:N-1
    truFunc(j) = ((fSamp(j)^n)/(2*a^2)^(n+1))*exp(-fSamp(j)^2/(4*a^2));
end

%calculating the error from transform and true function using dynamic error
for l=1:N-1
    error(l)= 20*log10(abs(truFunc(l)- hankelFunc(l))/max(abs(hankelFunc)));
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%second test with different order
N2=128;
n2=11;
R2=2;
zeros2=besselzero(n2,N2,1);
W2=zeros2(N2)/R2;
Y2=YmatrixAssembly(n2,N2,zeros2);
spaSample2=spaceSampler(n2,N2,R2,zeros2);

for i2=1:N2-1
    toDHTFunc2(i2)=exp(-a^2*spaSample2(i2)^2)*(spaSample2(i2))^n2;
end

hankelFunc2=(Y2*toDHTFunc2.')*(R2^2/zeros2(N2));
fSamp2=freqSampler(n2,N2,R2,zeros2);
for j2=1:N2-1
    truFunc2(j2)=((fSamp2(j2)^n2)/(2*a^2)^(n2+1))*exp(-fSamp2(j2)^2/(4*a^2));
end
for l2=1:N2-1
    error2(l2)= 20*log(abs(truFunc2(l2)- hankelFunc2(l2))/max(abs(hankelFunc2)));
end

figure(1)
subplot(2,2,1); plot(spaSample, toDHTFunc); title('Space Function with n=1'); xlabel('r'); ylabel('f(r)')
subplot(2,2,2); plot(fSamp,truFunc); title('Hankel Function with n=1'); xlabel('\rho'); ylabel('F(\rho)')
subplot(2,2,3); plot(spaSample2, toDHTFunc2); title('Space Function with n=11'); xlabel('r'); ylabel('f(r)')
subplot(2,2,4); plot(fSamp2,truFunc2); title('Hankel Function with n=11'); xlabel('\rho'); ylabel('F(\rho)')

figure(3)
subplot(4,1,1); plot(fSamp,hankelFunc,'.',fSamp,truFunc,'-');
str=sprintf('DHT with n = %d, N = %d,R= %d, a= %d ', n,N,R,a); title(str);
subplot(4,1,2); plot(fSamp,error); title('error of the DHT with n=1'); ylabel('dB')
subplot(4,1,3); plot(fSamp2,hankelFunc2,'.',fSamp2,truFunc2,'-');
str=sprintf('DHT with n = %d, N = %d,R= %d, a= %d ', n2,N2,R2,a); title(str);


subplot(4,1,4); plot(fSamp2,error2); title('error of the DHT with n=11'); ylabel('dB'); xlabel('\rho')

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%IDHT part
IDHTFunc=Y*truFunc.'*(zeros(N)/R^2);
for k=1:N-1
    errorIDHT(k)= 20*log10(abs(toDHTFunc(k)- IDHTFunc(k))/max(abs(IDHTFunc)));
end
IDHTFunc2=Y2*truFunc2.'*(zeros2(N2)/R2^2);
for k2=1:N2-1
    errorIDHT2(k2)= 20*log10(abs(toDHTFunc2(k2)- IDHTFunc2(k2))/max(abs(IDHTFunc2)));
end

figure(4)
subplot(4,1,1); plot(spaSample, IDHTFunc,'.',spaSample,toDHTFunc,'-')
str21=sprintf('IDHT using n = %d, N = %d, W= %d, a= %d ', n,N,W,a); title(str21)
subplot(4,1,2); plot(spaSample,errorIDHT); title('error of the idht with n=1'); ylabel('dB')
subplot(4,1,3); plot(spaSample2, IDHTFunc2,'.',spaSample2,toDHTFunc2,'-')
str2=sprintf('IDHT using n = %d, N = %d, W= %d, a= %d ', n2,N2,W2,a); title(str2)
subplot(4,1,4); plot(spaSample2,errorIDHT2); title('error of the idht with n=11'); ylabel('dB'); xlabel('r')

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%function retrieval
retrievedFunc1=Y*hankelFunc*zeros(N)/R^2;
for val1=1:N-1
    errRetrieved(val1)=abs(retrievedFunc1(val1)-toDHTFunc(val1));
end
mean(errRetrieved)
retrievedFunc2=Y2*hankelFunc2*zeros2(N2)/R2^2;
for val2=1:N2-1
    errRetrieved2(val2)=abs(retrievedFunc2(val2)-toDHTFunc2(val2));
end
mean(errRetrieved2)

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%absolute error of the transform from true functions
for val=1:N-1
    errAbDHT1(val)= abs(truFunc(val)- hankelFunc(val));
    errAbIDHT1(val)=abs(toDHTFunc(val)-IDHTFunc(val));
end
for val=1:N2-1
    errAbDHT11(val)= abs(truFunc2(val)- hankelFunc2(val));
    errAbIDHT11(val)=abs(toDHTFunc2(val)-IDHTFunc2(val));
end

A-5. Sinc Function Test Code % %Hankel transform of the sinc function %DHT and IDHT are performed with two different orders % % clear N=256;%number of elements n=1; %order of transform R=15; %space limit a=5; %function property zeros=besselzero(n,N,1); %calculating first N bessel zero Y=YmatrixAssembly(n,N,zeros); %Assembling Y matrix for transform



spaSample=spaceSampler(n,N,R,zeros); %Space sampling points of the function W=zeros(N)/R;

%frequency limit

%discretization of the function for counter=1:N-1

toDHTFunc(counter)=sin(a*spaSample(counter))/(a*spaSample(counter)); end

%performing the transform from matrix multiplication hankelFunc=(Y*toDHTFunc.')*(zeros(N)/W^2);

fSamp=freqSampler(n,N,R,zeros); %Sampling points in the frequency domain

%discretization of the true function in the frequency domain for counter2=1:N-1 if fSamp(counter2)<a truFunc(counter2) = (fSamp(counter2)/a)^n*cos(n*pi/2)/(a^2*sqrt(1fSamp(counter2)^2/a^2)*(1+sqrt(1-fSamp(counter2)^2/a^2))^n); elseif fSamp(counter2)>a truFunc(counter2)=sin(n*asin(a/fSamp(counter2)))/(a^2*sqrt(fSamp(counter2)^2/a^21)); end

end

%calculation of the dynamic error for l=1:N-1 error(l)= 20*log10(abs(truFunc(l)- hankelFunc(l))/max(abs(hankelFunc))); end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5 % the next part is doing the hankel transform of the sinc using another % order



n2=4; R2=27.5; N2=256; zeros2=besselzero(n2,N2,1); W2=zeros(N2)/R2; Y2=YmatrixAssembly(n2,N2,zeros2); spaSample2=spaceSampler(n2,N2,R2,zeros2);

for counter3=1:N2-1

toDHTFunc2(counter3)=sin(a*spaSample2(counter3))/(a*spaSample2(counter3)); end

figure(2) plot(spaSample2, toDHTFunc2) hankelFunc2=(Y2*toDHTFunc2.')*(zeros2(N2)/W2^2); fSamp2=freqSampler(n2,N2,R2,zeros2);

for counter4=1:N2-1 if fSamp2(counter4)<a truFunc2(counter4) = ((fSamp2(counter4)/a)^n2)*cos(n2*pi/2)/(a^2*sqrt(1fSamp2(counter4)^2/a^2)*(1+sqrt(1-fSamp2(counter4)^2/a^2))^n2);

elseif fSamp2(counter4)>a truFunc2(counter4)=sin(n2*asin(a/fSamp2(counter4)))/(a^2*sqrt(fSamp2(counter4)^2/a^2-1)); end end

for l2=1:N2-1 error2(l2)= 20*log10((abs(truFunc2(l2)- hankelFunc2(l2)))/(max(abs(hankelFunc2)))); end %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %true function in both domain


figure(1)
subplot(2,2,1)
plot(spaSample,toDHTFunc)
title('Space Function of n=1')
xlabel('r')
ylabel('f(r)')
subplot(2,2,2)
plot(fSamp,truFunc)
title('Hankel Function n=1')
xlabel('\rho')
ylabel('F(\rho)')
xlim([0 30])
subplot(2,2,3)
plot(spaSample2,toDHTFunc2)
title('Space Function of n=11')
xlabel('r')
ylabel('f(r)')
subplot(2,2,4)
plot(fSamp2,truFunc2)
title('Hankel Function n=11')
xlabel('\rho')
ylabel('F(\rho)')
xlim([0 30])
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%outputting the DHT with its error
figure(3)
subplot(4,1,1)
plot(fSamp,hankelFunc,'.',fSamp,truFunc,'-');
str=sprintf('DHT with n = %d, N = %d, R = %g, a = %d',n,N,R,a);
title(str);
%%%%
subplot(4,1,2)
plot(fSamp,error)
title('error of the DHT with n=1')
ylabel('dB');
%%%%
subplot(4,1,3)
plot(fSamp2,hankelFunc2,'.',fSamp2,truFunc2,'-');
str=sprintf('DHT with n = %d, N = %d, R = %g, a = %d',n2,N2,R2,a);
title(str);
%%%%%
subplot(4,1,4)
plot(fSamp2,error2)
title('error of the DHT with n=11')
xlabel('\rho')
ylabel('dB');
ylim([-100 10])

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%IDHT part
IDHTFunc=Y*truFunc.'*(W^2/zeros(N));   %performing the IDHT with the previously calculated frequency function
for k=1:N-1
    errorIDHT(k)=20*log10(abs(toDHTFunc(k)-IDHTFunc(k))/max(abs(IDHTFunc)));
end
IDHTFunc2=Y2*truFunc2.'*(W2^2/zeros2(N2));
for k2=1:N2-1
    errorIDHT2(k2)=20*log10(abs(toDHTFunc2(k2)-IDHTFunc2(k2))/max(abs(IDHTFunc2)));
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%plotting the IDHT with its error
figure(4)
subplot(4,1,1)
plot(spaSample,IDHTFunc,'.',spaSample,toDHTFunc,'-')
str41=sprintf('IDHT using n = %d, N = %d, W = %g, a = %d',n,N,W,a);
title(str41)
ylim([-1 1.5])
subplot(4,1,2)
plot(spaSample,errorIDHT)
title('error of the IDHT with n=1')
ylabel('dB')
subplot(4,1,3)
plot(spaSample2,IDHTFunc2,'.',spaSample2,toDHTFunc2,'-')
str42=sprintf('IDHT using n = %d, N = %d, W = %g, a = %d',n2,N2,W2,a);
title(str42)
subplot(4,1,4)
plot(spaSample2,errorIDHT2)
title('error of the IDHT with n=11')
ylabel('dB')
xlabel('r')

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%retrieval error --> how much is lost doing the forward and then the backward transform
retrievedFunc1=Y*hankelFunc*W^2/zeros(N);


for val1=1:N-1
    errRetrieved(val1)=abs(retrievedFunc1(val1)-toDHTFunc(val1));
end
mean(errRetrieved)

retrievedFunc2=Y2*hankelFunc2*W2^2/zeros2(N2);
for val2=1:N2-1
    errRetrieved2(val2)=abs(retrievedFunc2(val2)-toDHTFunc2(val2));
end
mean(errRetrieved2)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%absolute error of the transform from the true functions
for val=1:N-1
    errAbDHT1(val)=abs(truFunc(val)-hankelFunc(val));
    errAbIDHT1(val)=abs(toDHTFunc(val)-IDHTFunc(val));
end
for val=1:N2-1
    errAbDHT11(val)=abs(truFunc2(val)-hankelFunc2(val));
    errAbIDHT11(val)=abs(toDHTFunc2(val)-IDHTFunc2(val));
end

display('error dht n=1: max, min, mean')
max(error)
min(error)
mean(error)
display('error dht n=11: max, min, mean')
max(error2)
min(error2)
mean(error2)


A-6. Modified Exponential Test Code

%
%test for e^(-ar)/r
%
clear
N=256;   %number of elements
n=1;     %order of transform
R=1;     %space limit
a=5;     %function property (the radius, I think)

%calculation of the first N Bessel zeros
zeros=besselzero(n,N,1);
W=zeros(N)/R;   %band limit
%calling the function that constructs the matrix for the transform
Y=YmatrixAssembly(n,N,zeros);

%creating the sample points
spaSample=spaceSampler(n,N,R,zeros);

%discretizing the function
for i=1:N-1
    toDHTFunc(i)=exp(-a*spaSample(i))/(spaSample(i));
end
%performing the transform
hankelFunc=(Y*toDHTFunc.')*(R^2/zeros(N));
%set of Hankel-domain points to match the transform values
fSamp=freqSampler(n,N,R,zeros);

%creating a discrete true function
for j=1:N-1
    truFunc(j)=((((a^2+fSamp(j)^2)^(1/2))-a)^n)/(fSamp(j)^n*(fSamp(j)^2+a^2)^(1/2));
end

%calculating the error between the transform and the true function using the dynamic error
for l=1:N-1
    error(l)=20*log10(abs(truFunc(l)-hankelFunc(l))/max(abs(hankelFunc)));
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%second test with a different order
N2=256;
n2=11;
R2=1;
zeros2=besselzero(n2,N2,1);
W2=zeros2(N2)/R2;   %band limit
Y2=YmatrixAssembly(n2,N2,zeros2);
spaSample2=spaceSampler(n2,N2,R2,zeros2);

for i2=1:N2-1
    toDHTFunc2(i2)=exp(-a*spaSample2(i2))/(spaSample2(i2));
end

hankelFunc2=(Y2*toDHTFunc2.')*(R2^2/zeros2(N2));
fSamp2=freqSampler(n2,N2,R2,zeros2);
for j2=1:N2-1
    truFunc2(j2)=((((a^2+fSamp2(j2)^2)^(1/2))-a)^n2)/(fSamp2(j2)^n2*(fSamp2(j2)^2+a^2)^(1/2));
end
for l2=1:N2-1
    error2(l2)=20*log10(abs(truFunc2(l2)-hankelFunc2(l2))/max(abs(hankelFunc2)));   %log10 for dB, consistent with the n=1 case
end
display('error')
max(error)
min(error)
mean(error)
display('error 2')
max(error2)
min(error2)
mean(error2)
figure(1)
subplot(2,2,1)
plot(spaSample,toDHTFunc)
title('Space Function with n=1')
xlabel('r')
ylabel('f(r)')
subplot(2,2,2)
plot(fSamp,truFunc)
title('Hankel Function with n=1')
xlabel('\rho')
ylabel('F(\rho)')
subplot(2,2,3)
plot(spaSample2,toDHTFunc2)
title('Space Function with n=11')
xlabel('r')
ylabel('f(r)')
subplot(2,2,4)
plot(fSamp2,truFunc2)
title('Hankel Function with n=11')
xlabel('\rho')
ylabel('F(\rho)')
figure(3)
subplot(4,1,1)
plot(fSamp,hankelFunc,'.',fSamp,truFunc,'-');
str=sprintf('DHT with n = %d, N = %d, R = %d, a = %d',n,N,R,a);
title(str);
%%%%
subplot(4,1,2)
plot(fSamp,error)
title('error of the DHT with n=1')
ylabel('dB')
%%%%
subplot(4,1,3)
plot(fSamp2,hankelFunc2,'.',fSamp2,truFunc2,'-');
str=sprintf('DHT with n = %d, N = %d, R = %d, a = %d',n2,N2,R2,a);
title(str);
%%%%%
subplot(4,1,4)
plot(fSamp2,error2)
title('error of the DHT with n=11')
ylabel('dB')
xlabel('\rho')
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


%IDHT part
IDHTFunc=Y*truFunc.'*(zeros(N)/R^2);
for k=1:N-1
    errorIDHT(k)=20*log10(abs(toDHTFunc(k)-IDHTFunc(k))/max(abs(IDHTFunc)));
end
IDHTFunc2=Y2*truFunc2.'*(zeros2(N2)/R2^2);
for k2=1:N2-1
    errorIDHT2(k2)=20*log10(abs(toDHTFunc2(k2)-IDHTFunc2(k2))/max(abs(IDHTFunc2)));
end
figure(4)
subplot(4,1,1)
plot(spaSample,IDHTFunc,'.',spaSample,toDHTFunc,'-')
str21=sprintf('IDHT using n = %d, N = %d, W = %g, a = %d',n,N,W,a);
title(str21)
subplot(4,1,2)
plot(spaSample,errorIDHT)
title('error of the IDHT with n=1')
ylabel('dB')
%xlabel('[m]')
subplot(4,1,3)
plot(spaSample2,IDHTFunc2,'.',spaSample2,toDHTFunc2,'-')
str2=sprintf('IDHT using n = %d, N = %d, W = %g, a = %d',n2,N2,W2,a);
title(str2)
subplot(4,1,4)
plot(spaSample2,errorIDHT2)
title('error of the IDHT with n=11')
ylabel('dB')
xlabel('r')

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%function retrieval
retrievedFunc1=Y*hankelFunc*zeros(N)/R^2;
for val1=1:N-1
    errRetrieved(val1)=abs(retrievedFunc1(val1)-toDHTFunc(val1));
end
mean(errRetrieved)
retrievedFunc2=Y2*hankelFunc2*zeros2(N2)/R2^2;
for val2=1:N2-1
    errRetrieved2(val2)=abs(retrievedFunc2(val2)-toDHTFunc2(val2));
end
mean(errRetrieved2)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%absolute error of the transform from the true functions
for val=1:N-1
    errAbDHT1(val)=abs(truFunc(val)-hankelFunc(val));
    errAbIDHT1(val)=abs(toDHTFunc(val)-IDHTFunc(val));
end
for val=1:N2-1
    errAbDHT11(val)=abs(truFunc2(val)-hankelFunc2(val));
    errAbIDHT11(val)=abs(toDHTFunc2(val)-IDHTFunc2(val));
end

A-7. Signal Energy Computation

%computes the energy of the different functions
a=5;
n=1;

%Gaussian function
Ne=64;
ze1=besselzero(1,Ne,1);
ze11=besselzero(11,Ne,1);
sp1=spaceSampler(1,Ne,2,ze1);
sp2=spaceSampler(11,Ne,2,ze11);
display('now earv n=1')
func=@(x) (exp(-a^2.*x.^2).*(x).^n).^2;
f1=integral(func,sp1(1),2)
f2=integral(func,0,Inf)
double(f1/f2)*100
display('now earv n=11')
f11=integral(func,sp2(1),2)
f21=integral(func,0,Inf)
double(f11/f21)*100

%sinc function
display('now sinc n=1')
N=512;
z=besselzero(n,N,1);
z2=besselzero(11,N,1);
samp=spaceSampler(n,N,27,z);
samp2=spaceSampler(11,N,26.75,z2);
func2=@(x)(sin(a.*x)./(a.*x)).^2;
f3=integral(func2,samp(1),samp(N-1))
f4=integral(func2,0,Inf)
double(f3/f4)*100
display('now sinc n=11')
f5=integral(func2,samp2(1),26.75)
f6=integral(func2,0,Inf)
double(f5/f6)*100

%modified exponential
display('now e-ar/r n=1')
Nem=256;
zem1=besselzero(1,Nem,1);
zem11=besselzero(11,Nem,1);
func3=@(x) (exp(-a.*x)./x).^2;
s31=spaceSampler(1,Nem,1,zem1);
f7=integral(func3,s31(1),1)
f8=integral(func3,1e-15,Inf)
double(f7/f8)*100
display('now e-ar/r n=11')
s32=spaceSampler(11,Nem,1,zem11);
f9=integral(func3,s32(1),1)
f10=integral(func3,1e-15,Inf)
double(f9/f10)*100



Appendix B. Theory and Operational Rules for the Discrete Hankel Transform



Theory and operational rules for the discrete Hankel transform

Natalie Baddour* and Ugo Chouinard
Department of Mechanical Engineering, University of Ottawa, 161 Louis Pasteur, K1N 6N5 Ottawa, Canada
*Corresponding author: nbaddour@uottawa.ca

Received February 6, 2015; accepted February 6, 2015; posted February 11, 2015 (Doc. ID 234161); published March 20, 2015

Previous definitions of a discrete Hankel transform (DHT) have focused on methods to approximate the continuous Hankel integral transform. In this paper, we propose and evaluate the theory of a DHT that is shown to arise from a discretization scheme based on the theory of Fourier–Bessel expansions. The proposed transform also possesses requisite orthogonality properties which lead to invertibility of the transform. The standard set of shift, modulation, multiplication, and convolution rules are derived. In addition to the theory of the actual manipulated quantities which stand in their own right, this DHT can be used to approximate the continuous forward and inverse Hankel transform in the same manner that the discrete Fourier transform is known to be able to approximate the continuous Fourier transform. © 2015 Optical Society of America

OCIS codes: (070.2465) Finite analogs of Fourier transforms; (070.2025) Discrete optical signal processing; (070.0070) Fourier optics and signal processing.
http://dx.doi.org/10.1364/JOSAA.32.000611

1. INTRODUCTION The Hankel transform has seen applications in many areas of science and engineering. In optics, it has seen a variety of applications in the propagation of optical beams and wavefields [1,2], generation of light diffusion profiles of layered turbid media [3], measurement of particles via a Fraunhofer diffraction pattern [4], propagation of cylindrical electromagnetic fields [5], reconstruction of optical fields [6], imaging through layered lenses [7], deflection tomographic reconstructions [8], design of beam shapers [9], and properties of microlenses [10]. Given the need for numerical computation with the Hankel transform, there have been correspondingly many attempts to define a discrete Hankel transform (DHT) in the literature. However, prior work has focused on proposing methods to approximate the calculation of the continuous Hankel integral. This stands in stark contrast to the approach taken with the Fourier transform, where the discrete Fourier transform (DFT) is a transform in its own right, with its own mathematical theory of the manipulated quantities. To quote Bracewell [11], “We often think of this as though an underlying function of a continuous variable really exists and we are approximating it. From an operational viewpoint, however, it is irrelevant to talk about the existence of values other than those given and those computed (the input and output). Therefore, it is desirable to have a mathematical theory of the actual quantities manipulated.” An additional feature of a carefully derived DFT is that it can be used to approximate the continuous Fourier transform, with relevant sampling and interpolation theories that can be used. A similar set of theories does not exist for the Hankel transform, that is, a DHT as a complete and orthogonal transform that possesses its own mathematical theory, which in turn can be shown to be useful in approximating its continuous counterpart. Thus, the goal of this paper is to propose and 1084-7529/15/040611-12$15.00/0

evaluate the mathematical theory for the DHT, using the same ideas that have shown the DFT to be so useful in various disciplines. The standard set of shift, modulation, multiplication, and convolution rules are derived and presented. In addition, we show that this proposed DHT can be used to approximate the continuous Hankel transform in the same manner that the DFT is known to be able to approximate the continuous Fourier transform.

2. LITERATURE REVIEW DHTs have received less attention than DFTs, although there is still a good body of literature on the subject. However, it should be noted that in all cases there has been no proposal of a true DHT in the same manner as the DFT, in the sense of a discrete transform that stands on its own, with its own set of rules. Rather, what has been considered in the literature to date are various ways to numerically implement the continuous Hankel transform. This is potentially somewhat confusing since what is often referred to as a DHT is, in fact, a discrete approximation to the continuous Hankel transform. This stands in contrast to the Fourier transforms, where the DFT is a transform that stands alone, and its use to approximate the continuous Fourier transform is well understood. The seminal work on numerical computation of Hankel Transforms was by Siegman [12], where a nonlinear change of variables was used to convert the one-sided Hankel transform integral into a two-sided cross-correlation integral, which was then evaluated with an FFT. Agrawal and Lax suggested an end correction to Siegman’s approach [13]. Agnesi et al. found analytical formulations to the approach taken by Siegman to help improve accuracy without lower end corrections [14]. Cree and Bones reviewed several approaches to the numerical evaluation of Hankel transforms [15] and concluded that the performance of all algorithms © 2015 Optical Society of America



depends on the type of function to be transformed. They also reported that projection based methods provided acceptable accuracy with better efficiency than numerical quadrature. There have been many further attempts at a discretized version of the continuous Hankel transform integral. Suter and Hedges [16] showed that the algorithm proposed by Ferrari [17,18] could be interpreted as an application of the projection slice theorem, reducing the computation of the Hankel transform to a combination of Tchebychev and Fourier transforms [19]. In a similar vein, Oppenheim et al. exploited symmetry and the projection slice theorem to once again use an FFT to numerically compute the Hankel transform [20,21]. Hansen computed the Hankel transform by using a combination of a fast Abel transform and a fast Fourier transform [22,23]. Mook followed a similar approach of evaluating the Hankel transform as an Abel transform followed by a Fourier transform [24]. Barakat et al. proposed a method for the zeroth order Hankel transform based on Filon quadrature [25]. They compared their work to that of Magni et al. [1], who used a sampling scheme in combination with an FFT for their calculation of the zeroth order Hankel transform. Murphy and Gallagher also used a sampling scheme in combination with an FFT for their computation of the Hankel transform [26]. Markham and Conchello evaluated seven different numerical approaches to the numerical computation of a Hankel transform [27]. These were based on Filon quadrature, five different projection slice methods, and another approach based on the Abel transform. For the oscillating functions they considered in their paper, they found that one of the projection slice methods offered the best compromise between accuracy and computational efficiency. Candel took a completely different approach by using a combination of a Fourier-selectionsummation (FSS) method in conjunction with a Bessel function large argument asymptotic expansion [28]. Higgins and Munson extended Candel’s original approach of taking a one-dimensional Fourier transform followed by repeated summations of preselected Fourier components [29] to higher integer order Hankel transforms [30]. Garg et al. proposed a continuous but finite Hankel transform [31], for application to continuous problems on finite or semi-infinite domains, extending concepts first introduced by Sneddon [32]. Other generalized finite Hankel transforms, for example, have also been introduced [33]. Various other kernels have been proposed to evaluate the continuous Hankel transform [34–37]. Gupta et al. used an orthonormal exponential approximation [34]; Singh et al. used wavelets [35], Haar wavelets [38], linear Legendre multi-wavelets [39], and a hybrid of Block-pulse and Legendre polynomials [40]; Bisseling and Kosloff used an FFT [36]; and Knockaert used fast sine and cosine transforms [37]. Cavanagh and Cook used Gaussian–Laguerre polynomial expansions for their numerical approach to the Hankel transform [41]. In all cases, the starting point for the work is a focus on developing a discrete approximation of a continuous Hankel transform and does not start with a definition of a discrete Hankel transform entity that has its own rules and properties. Interestingly, Guizar-Sicairos and Gutiérrez-Vega [2] use a discretization approach that involves the zeros of the Bessel functions. Their work is an extension to n-ordered Hankel transforms of an approach developed for zero-order Hankel transforms by Yu et al. 
[42], and termed the “Quasi-discrete


Hankel Transform.” This is highly interesting in its similarity to the approach taken by Fisk-Johnson [43], although neither Guizar-Sicairos and Gutiérrez-Vega nor Yu et al. seem to have been aware of this prior work by Fisk-Johnson. The works of Yu et al. [42] and Guizar-Sicairos and Gutiérrez-Vega [44] were the first to demonstrate a discrete version of the Parseval theorem for the Hankel transforms. Jerri [45] came close to defining a DHT that follows in the path of the DFT. However, Jerri missed a crucial orthogonality relationship, and thus his proposed transform approach is neither complete nor orthogonal. In different research areas, there has been work done on discrete matrix transforms known as Bessel transforms that bear a resemblance to a Hankel transform. These have been proposed and demonstrated by Lemoine in [46,47], and Layton and Stade in [48], and have generally been applied to quantum mechanical eigenvalue problems. These latter two approaches differ in their choices of boundary conditions on the chosen Bessel functions. Both papers impose the condition that the functions must remain bounded at the origin. However, these two sets of authors differ in the treatment of the basis Bessel functions at the outer radial boundary in both physical and spatial frequency space. Layton and Stade assume a vanishing derivative of the Bessel functions at the outer boundary, whereas Lemoine assumes a vanishing of the basis Bessel function itself. Other choices of boundary conditions at the outer radii are possible in both physical space and spatial frequency space, leading to other possibilities for the actual structure of the matrix transform. In this paper, a discrete Hankel transform is proposed and, following in the steps of the DFT, the accompanying set of orthogonality, shift, modulation, multiplication, convolution and Parseval rules are also derived. In addition, we demonstrate that this DHT can be used to approximate the continuous Hankel transform, in parallel to the way in which the DFT is known to be able to approximate the continuous Fourier transform.

3. HANKEL TRANSFORMS AND BESSEL SERIES
For the sake of completeness, we define the Hankel transform and Fourier–Bessel series as used in this work.

A. Hankel Transform
The nth order Hankel transform F(\rho) of the function f(r) of a real variable, r \geq 0, is defined by the integral [49]

F(\rho) = H_n\{f(r)\} = \int_0^{\infty} f(r)\, J_n(\rho r)\, r\, dr,    (1)

where J_n(z) is the nth order Bessel function shown in Eq. (1). If n is real and n > -1/2, the transform is self-reciprocating and the inversion formula is given by

f(r) = \int_0^{\infty} F(\rho)\, J_n(\rho r)\, \rho\, d\rho.    (2)

Thus, Hankel transforms take functions in the spatial r domain and transform them to functions in the frequency \rho domain, f(r) ⇔ F(\rho). The notation ⇔ is used to indicate a Hankel transform pair.
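As a minimal illustrative sketch (not part of the Appendix A test scripts), Eq. (1) can be evaluated numerically in MATLAB with integral and besselj and checked against a known pair; the modified Gaussian pair of Eqs. (82)-(83) further below is used here as the reference.

% Minimal sketch: evaluate the continuous Hankel transform of Eq. (1) by
% direct numerical integration and compare with the known modified Gaussian
% pair of Eqs. (82)-(83).  Illustrative only.
n = 1; a = 5;
f = @(r) exp(-a^2*r.^2).*r.^n;                      % f(r) = exp(-a^2 r^2) r^n
HT = @(rho) arrayfun(@(p) ...
      integral(@(r) f(r).*besselj(n,p*r).*r, 0, Inf), rho);
rho = linspace(0.1, 20, 50);
Ftrue = rho.^n .* exp(-rho.^2/(4*a^2)) / (2*a^2)^(n+1);
max(abs(HT(rho) - Ftrue))        % should be close to the integration tolerance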


B. Fourier–Bessel Series (Transform)
Functions defined on a finite portion of the real line [0, R] can be expanded in terms of a Fourier–Bessel series [50] given by

f(r) = \sum_{k=1}^{\infty} f_k\, J_n\!\left(\frac{j_{nk} r}{R}\right),    (3)

where the order of the Bessel function is arbitrary and j_{nk} denotes the kth root of the nth Bessel function. The Fourier–Bessel coefficients f_k of the function f(r) can be found from

f_k = \frac{2}{R^2 J_{n+1}^2(j_{nk})} \int_0^{R} f(r)\, J_n\!\left(\frac{j_{nk} r}{R}\right) r\, dr.    (4)

Equations (3) and (4) can be considered to be a transform pair where the continuous function f(r) is forward-transformed to the discrete vector f_k, given by the finite integral in (4). The inverse transformation, which returns f(r) when starting with f_k, is then given by the summation in Eq. (3). The Fourier–Bessel series is to the Hankel transform as the Fourier series is to the Fourier transform. Just as the Fourier series is defined for a finite interval and has an infinite integral counterpart, the continuous Fourier transform, the Fourier–Bessel series similarly has a counterpart over an infinite interval, namely the Hankel transform.
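A minimal MATLAB sketch of the pair (3)-(4) follows. It finds the Bessel zeros with a simple fzero loop (the besselzero routine of Appendix A could equally be used) and reconstructs a smooth test function from its first K coefficients; the test function f(r) = 1 - r^2/R^2 is an assumption chosen purely for illustration.

% Minimal sketch of the Fourier-Bessel pair, Eqs. (3)-(4).
n = 0; R = 1; K = 30;
jn = zeros(K,1);
for k = 1:K                                    % kth positive zero of J_n
    jn(k) = fzero(@(x) besselj(n,x), (k + n/2 - 0.25)*pi);
end
f = @(r) (1 - r.^2/R^2);                        % smooth test function
fk = zeros(K,1);
for k = 1:K                                     % coefficients, Eq. (4)
    fk(k) = 2/(R^2*besselj(n+1,jn(k))^2) * ...
            integral(@(r) f(r).*besselj(n, jn(k)*r/R).*r, 0, R);
end
r = linspace(0, R, 200);
frec = zeros(size(r));
for k = 1:K                                     % reconstruction, Eq. (3), truncated at K terms
    frec = frec + fk(k)*besselj(n, jn(k)*r/R);
end
max(abs(frec - f(r)))                           % truncation error of the series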

4. INTUITIVE DISCRETIZATION SCHEME VIA SAMPLING THEOREMS

A. Discretization Scheme for a Band-Limited Function
Suppose that the function f(r) is band-limited in the frequency Hankel domain so that its spectrum F(\rho) is zero outside an interval [0, 2\pi W]. The interval is written in this form since W would typically be quoted in units of Hz (cycles per second) if using temporal units, or cycles per meter if using spatial units. Therefore, the multiplication by 2\pi ensures that the final units are in s^{-1} or m^{-1}. Because the spectrum F(\rho) is defined on a finite portion of the real line [0, 2\pi W], it can be expanded in terms of a Fourier–Bessel series so that

F(\rho) = \sum_{k=1}^{\infty} F_k\, J_n\!\left(\frac{j_{nk}\,\rho}{2\pi W}\right).    (5)

The Fourier–Bessel coefficients can be found from (4) to give

F_k = \frac{2}{4\pi^2 W^2 J_{n+1}^2(j_{nk})} \int_0^{2\pi W} F(\rho)\, J_n\!\left(\frac{j_{nk}\,\rho}{2\pi W}\right) \rho\, d\rho = \frac{1}{2\pi^2 W^2 J_{n+1}^2(j_{nk})}\, f\!\left(\frac{j_{nk}}{2\pi W}\right).    (6)

In (6), we have used the fact that f(r) can be written in terms of its inverse Hankel transform, Eq. (2), in combination with the fact that the function is assumed band-limited. Equation (6) states that the values f(j_{nk}/2\pi W) determine the Fourier coefficients F_k in the series expansion of F(\rho). Therefore, the samples f(j_{nk}/2\pi W) determine the function f(r) completely since (i) F(\rho) is determined on [0, 2\pi W] through Eqs. (5) and (6), and F(\rho) is zero otherwise, and (ii) f(r) is known if F(\rho) is known. Another way of looking at this is that band-limiting a function to [0, 2\pi W] results in information about the original function in space at samples r_{nk} = j_{nk}/2\pi W.

B. Discretization Scheme for a Space-Limited Function
By the same token, suppose that we assume that the function f(r) is space-limited so that f(r) is zero outside an interval [0, R]. Since the function f(r) is defined on a finite portion of the real line [0, R], then it can be expanded in terms of a Fourier–Bessel series so that

f(r) = \sum_{k=1}^{\infty} f_k\, J_n\!\left(\frac{j_{nk} r}{R}\right).    (7)

As before, the Fourier–Bessel coefficients can be found from

f_k = \frac{2}{R^2 J_{n+1}^2(j_{nk})} \int_0^{R} f(r)\, J_n\!\left(\frac{j_{nk} r}{R}\right) r\, dr = \frac{2}{R^2 J_{n+1}^2(j_{nk})}\, F\!\left(\frac{j_{nk}}{R}\right).    (8)

Here, we have used the definition of the Hankel transform F(\rho), Eq. (1), in the right-hand side of Eq. (8). Equation (8) states that the values of f(r) are determined by F(j_{nk}/R). This follows, since for positions greater than R, f(r) is zero, and for positions smaller than R, f(r) is determined if its Fourier–Bessel coefficients are determined. Therefore, the samples F(j_{nk}/R) determine the function f(r) and its transform F(\rho) completely. Another way of looking at this is that space-limiting a function to [0, R] implies discretization in spatial frequency space, at frequencies \rho_{nk} = j_{nk}/R.

C. Intuitive Discretization Scheme for the Hankel Transform
1. Forward Transform
We demonstrated above that a band-limited function, with \rho \leq W_\rho = 2\pi W, can be written as

F(\rho) = \sum_{k=1}^{\infty} \frac{2}{W_\rho^2 J_{n+1}^2(j_{nk})}\, f\!\left(\frac{j_{nk}}{W_\rho}\right) J_n\!\left(\frac{j_{nk}\,\rho}{W_\rho}\right).    (9)

Evaluating the previous Eq. (9) at the sampling points \rho_{nm} = j_{nm} W_\rho / j_{nN} gives

F\!\left(\frac{j_{nm} W_\rho}{j_{nN}}\right) = \sum_{k=1}^{\infty} \frac{2}{W_\rho^2 J_{n+1}^2(j_{nk})}\, f\!\left(\frac{j_{nk}}{W_\rho}\right) J_n\!\left(\frac{j_{nk}\, j_{nm}}{j_{nN}}\right).    (10)

For m < N, \rho_{nm} = j_{nm} W_\rho / j_{nN} < W_\rho, Eq. (10), with the summation over infinite k, is exact. Now suppose we choose to terminate our series at k = N. Noting that at k = N, the last term in (10) is J_n(j_{nN} j_{nm}/j_{nN}) = J_n(j_{nm}) = 0, and substituting \rho_{nm} = j_{nm} W_\rho / j_{nN}, r_{nk} = j_{nk}/W_\rho for the sampling points, then Eq. (10) becomes

F(\rho_{nm}) = \sum_{k=1}^{N-1} \frac{2}{W_\rho^2 J_{n+1}^2(j_{nk})}\, f(r_{nk})\, J_n\!\left(\frac{j_{nk}\, j_{nm}}{j_{nN}}\right).    (11)

In this case, the truncated sum in Eq. (11) does not represent F(\rho_{nm}) exactly because of the truncation at N terms, but should provide an approximation since the Fourier–Bessel series is known to converge. This also provides a good motivation for a discrete Hankel transform formulation.



2. Inverse Transform
Similarly, for a space-limited function we stated that for r \leq R, then

f(r) = \sum_{m=1}^{\infty} \frac{2}{R^2 J_{n+1}^2(j_{nm})}\, F\!\left(\frac{j_{nm}}{R}\right) J_n\!\left(\frac{j_{nm} r}{R}\right).    (12)

Evaluating (12) at r_{nk} = j_{nk} R / j_{nN} gives

f\!\left(\frac{j_{nk} R}{j_{nN}}\right) = \sum_{m=1}^{\infty} \frac{2}{R^2 J_{n+1}^2(j_{nm})}\, F\!\left(\frac{j_{nm}}{R}\right) J_n\!\left(\frac{j_{nm}\, j_{nk}}{j_{nN}}\right).    (13)

Again, Eq. (13) with its infinite summation is exact. Terminating the series at N, further recalling that J_n(j_{nN} j_{nm}/j_{nN}) = J_n(j_{nm}) = 0, and using \rho_{nm} = j_{nm}/R, then Eq. (13) simplifies to

f(r_{nk}) = \sum_{m=1}^{N-1} \frac{2}{R^2 J_{n+1}^2(j_{nm})}\, F(\rho_{nm})\, J_n\!\left(\frac{j_{nm}\, j_{nk}}{j_{nN}}\right).    (14)

As before, the truncated sum in Eq. (14) does not represent f(r_{nk}) exactly, but does provide a good motivation for the inverse discrete Hankel transform formulation.

3. Intuitive Discretization Scheme and Kernel
The preceding development shows that a natural, N-dimensional discretization scheme in finite space [0, R] and finite frequency space [0, W_\rho] is given by

r_{nk} = \frac{j_{nk}}{W_\rho} = \frac{j_{nk} R}{j_{nN}}, \qquad \rho_{nk} = \frac{j_{nk}}{R} = \frac{j_{nk} W_\rho}{j_{nN}}, \qquad k = 1 \ldots N-1.    (15)

The relationship W_\rho = j_{nN}/R can be used to change from a finite frequency domain to a finite space domain. Furthermore, the preceding Eqs. (11) and (14) show that both forward and inverse discrete versions of the transforms contain an expression of the form

\frac{2}{J_{n+1}^2(j_{nk})}\, J_n\!\left(\frac{j_{nk}\, j_{nm}}{j_{nN}}\right).    (16)

This leads to a natural choice of kernel for the discrete transforms, as shall be outlined below.

5. DISCRETE ORTHOGONALITY OF THE BESSEL FUNCTIONS
It is shown in [43] that the following discrete orthogonality relationship is true:

\sum_{k=1}^{N-1} \frac{4\, J_n(j_{nm} j_{nk}/j_{nN})\, J_n(j_{nk} j_{ni}/j_{nN})}{j_{nN}^2\, J_{n+1}^2(j_{nm})\, J_{n+1}^2(j_{nk})} = \delta_{mi},    (17)

where j_{nm} represents the mth zero of the nth order Bessel function J_n(x), and \delta_{mi} is the Kronecker-delta function, defined as

\delta_{mn} = 1 if m = n, and 0 otherwise.    (18)

If written in matrix notation, then the Kronecker-delta of Eq. (18) is the identity matrix. Fisk-Johnson discussed the analytical derivation of Eq. (17) in the appendix of [43]. Equation (17) is exactly true in the limit as N → ∞ and is true for N > 30 within the limits of computational error (10^{-7}). For smaller values of N, Eq. (17) holds with the worst case for the smallest value of N giving 10^{-3}.

6. TRANSFORMATION MATRICES
A. Transformation Matrix
With inspiration from the notation in [43], and an additional scaling factor of 1/j_{nN}, we define an (N-1) × (N-1) transformation matrix with the (m, k)th entry given by

Y^{nN}_{m,k} = \frac{2}{j_{nN} J_{n+1}^2(j_{nk})}\, J_n\!\left(\frac{j_{nm}\, j_{nk}}{j_{nN}}\right), \qquad 1 \leq m, k \leq N-1.    (19)

In Eq. (19), the superscripts n and N refer to the order of the Bessel function and the dimension of the space that are being considered, respectively. The subscripts m and k refer to the (m, k)th entry of the transformation matrix. Furthermore, the orthogonality relationship, Eq. (17), states that

\sum_{k=1}^{N-1} Y^{nN}_{i,k}\, Y^{nN}_{k,m} = \delta_{im}.    (20)

Equation (20) states that the rows and columns of the matrix Y^{nN}_{m,k} are orthonormal and can be written in matrix form as

Y^{nN} Y^{nN} = I,    (21)

where I is the (N-1) dimensional identity matrix, and we have written the (N-1) square matrix Y^{nN}_{m,k} as Y^{nN}. Clearly, this implies that the inverse of Y^{nN} is given by itself:

(Y^{nN})^{-1} = Y^{nN}.    (22)

The forward and inverse truncated and discretized transforms given in Eqs. (11) and (14) can be expressed in terms of Y^{nN}. The forward transform, Eq. (11), can be written as

F(\rho_{nm}) = \frac{R^2}{j_{nN}} \sum_{k=1}^{N-1} Y^{nN}_{m,k}\, f(r_{nk}).    (23)

Similarly, the inverse transform, Eq. (14), can be written as

f(r_{nk}) = \frac{j_{nN}}{R^2} \sum_{m=1}^{N-1} Y^{nN}_{k,m}\, F(\rho_{nm}).    (24)
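As an illustrative check of Eqs. (19)-(22), the matrix can be assembled directly from besselj and its self-inverse property verified numerically. This is a minimal sketch built inline; it is not the YmatrixAssembly/besselzero code used in Appendix A.

% Minimal sketch: assemble Y from Eq. (19) and check the orthogonality of
% Eqs. (20)-(22).
n = 2; N = 64;
jn = zeros(N,1);
for k = 1:N                                   % first N positive zeros of J_n
    jn(k) = fzero(@(x) besselj(n,x), (k + n/2 - 0.25)*pi);
end
jnN = jn(N);
[K, M] = meshgrid(jn(1:N-1));                 % K(m,k) = j_nk, M(m,k) = j_nm
Y = 2*besselj(n, M.*K/jnN) ./ (jnN*besselj(n+1, K).^2);   % Eq. (19)
norm(Y*Y - eye(N-1))    % Eq. (21): small, since the orthogonality of Eq. (17)
                        % holds to roughly 1e-7 or better for N > 30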

B. Another Choice of Transformation Matrix
Following the notation in [2], we can also define a different (N-1) × (N-1) transformation matrix with the (m, k)th entry given by

T^{nN}_{m,k} = \frac{2}{j_{nN} J_{n+1}(j_{nm}) J_{n+1}(j_{nk})}\, J_n\!\left(\frac{j_{nm}\, j_{nk}}{j_{nN}}\right), \qquad 1 \leq m, k \leq N-1.    (25)

In Eq. (25), the superscripts n and N refer to the order of the Bessel function and the dimension of the space that are



being considered, respectively. The subscripts m and k refer to the m; k th entry of the matrix. From (19), it can be seen nN nN , is a real symmetric matrix. that T nN m;k T k;m , therefore T nN The relationship between the T nN m;k and Y m;k matrices is given by T nN m;k

J n 1 j nm Y nN m;k : J n 1 j nk

(26)

The orthogonality relationship, Eq. (17), can be written as N−1 X 4J n j nm j nk ∕j nN J n j nk j ni ∕j nN k 1

J 2n 1 j nm J 2n 1 j nk j 2nN

N−1 X

define a discrete Hankel transform (DHT), we can use either formulation: N−1 X

Fm

Y nN m;k f k

or

δmi : (27)

k 1

Equation (27) states that the rows and columns of the matrix T nN are orthonormal so that T nN is an orthogonal matrix. Furthermore, T nN is also symmetric. Equation (27) can be written in matrix form as

Fm

k 1

(28)

Therefore, the T nN matrix is unitary and furthermore orthogonal since the entries are real. Using the symmetric, orthogonal transformation matrix T nN , the forward transform from Eq. (11) can be written in as F ρnm

N−1 R2 X J j T nN n 1 nm f r nk ; j nN k 1 m;k J n 1 j nk

(29)

or

N−1 X

fk

Y nN k;m F m

or

F T nN f;

(30)

The symmetrical form of (30) highlights the fact that T nN m;k is the kernel of the discrete transform. Similarly, the inverse discrete transform of Eq. (14) can be written as N−1 j X J j f r nk nN2 T nN n 1 nk F ρnm ; R m 1 k;m J n 1 j nm

fk

N−1 X

T nN k;m F m :

(35)

This can also be written in matrix form as f Y nN F or

f T nN F:

(36)

We note that the forward and inverse transforms are the same. Proof We show the proof for the Y nN formulation, but it proceeds similarly if Y nN is replaced with T nN . Substituting Eq. (35) into the right-hand side of (33) gives Y nN m;k

fk

N−1 X

Y nN m;k

" N−1 X

k 1

# Y nN k;p F p

:

(37)

p 1

Switching the order of the summation in Eq. (37) gives N−1 X N−1 X

nN Y nN m;k Y k;p F p

p 1 k 1

| {z }

N−1 X

δmp F p F m :

(38)

p 1

δmp

(31)

The inside summations, as indicated in Eq. (38), are recognized as yielding the Dirac-delta function, the orthogonality property of Eq. (20) [or Eq. (27) if using T nN ], which in turn yields the desired result. This proves that the DHT given by (33) can be inverted by (35).

(32)
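Before moving to the Parseval results, the self-inverse property just proved, and the norm preservation of the T-matrix form discussed in Section 8, can be checked numerically. The following is an illustrative sketch only, with the Bessel zeros and matrices built inline rather than with the Appendix A helpers.

% Minimal sketch: check that the Y-based DHT is inverted by itself, Eqs. (33)
% and (35), and that the T matrix of Eq. (25) preserves vector norms.
n = 1; N = 64;
jn = zeros(N,1);
for k = 1:N
    jn(k) = fzero(@(x) besselj(n,x), (k + n/2 - 0.25)*pi);   % kth zero of J_n
end
jnN = jn(N);
[K, M] = meshgrid(jn(1:N-1));                 % K(m,k) = j_nk, M(m,k) = j_nm
Y = 2*besselj(n, M.*K/jnN) ./ (jnN*besselj(n+1, K).^2);               % Eq. (19)
T = 2*besselj(n, M.*K/jnN) ./ (jnN*besselj(n+1, M).*besselj(n+1, K)); % Eq. (25)
f = rand(N-1,1);                              % arbitrary N-1 dimensional vector
norm(Y*(Y*f) - f)          % forward then inverse DHT returns f (to ~1e-7 or better)
abs(norm(T*f) - norm(f))   % Parseval-type energy preservation of the T form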

8. GENERALIZED PARSEVAL THEOREM

or, symmetrically, as N −1 f r nk j X F ρnm T nN nN2 : J n 1 j nk R m 1 k;m J n 1 j nm

(34)

m 1

k 1 N−1 F ρnm R2 X f r nk T nN : J n 1 j nm j nN k 1 m;k J n 1 j nk

(33)

where F is any N − 1 dimensional column vector and f is also any column vector, defined in the same manner. The inverse discrete Hankel transform (IDHT) is then given by

N−1 X

or, in a more symmetrical form, as

T nN m;k f k :

Here, the transform is of any N − 1 dimensional vector f k to any N − 1 dimensional vector F m for the integers m, k, where 1 ≤ m; k < N − 1. This can be written in matrix form as

m 1

T nN T nN T nN T nN T I:

N−1 X k 1

F Y nN f nN T nN m;k T k;i


7. DISCRETE FORWARD AND INVERSE HANKEL TRANSFORM From the previous section, it is clear that the two natural nN nN choices of kernel for a DFT are either Y nN m;k or T m;k . Y m;k is closer to the discretized version of the continuous Hankel transform that we hope the discrete version emulates. However, T nN m;k is an orthogonal and symmetric matrix; therefore it is energy preserving and will be shown to lead to a Parsevaltype relationship if chosen as the kernel for the DHT. Thus, to

Inner products are preserved and thus energies are preserved under the T nN matrix formulation. To see this, consider any two vectors given by the transform g T nN G, h T nN H, then gT h T nN G T T nN H GT T nN T T nN H GT H: | {z }

(39)

I

On the other hand, the Y nN matrix formulation does not directly preserve inner products: gT h Y nN G T Y nN H GT Y nN T Y nN H:

(40)



However, under the Y nN formulation, the inner product between J n 1gk jnk and J n 1h jk nk is preserved. To see this, we calculate the inner product between them as N−1 X

gk hk J j J n 1 j nk k 1 n 1 nk N−1 N−1 X X 1 Y nN Y nN k;p Gp k;q H q 2 J j k 1 n 1 nk p 1 q 1 N−1 X

N−1 X N−1 X N−1 X

1 Y nN Y nN H q Gp 2 J j k;p k;q p 1 q 1 k 1 n 1 nk

N−1 X N−1 X

1

N−1 X 4 j nk j np ∕j nN J n j nk j nq ∕j nN

J 2 j p 1 q 1 n 1 np k 1

j 2nN J 2n 1 j nk J 2n 1 j nq | {z }

H q Gp : (41)

9. TRANSFORM RULES In keeping with the development of the well-known DFT, we develop the standard toolkit of rules for the discrete Hankel transform. In the following, the Y nN is used, but all expressions apply equally if Y nN is replaced with T nN . A. Transform of Kronecker-Delta Function The discrete counterpart of the Dirac-delta function is the Kronecker-delta function, δkk0 . We recall that the DHT as defined above is a matrix transform from an N − 1 dimensional vector to another. The vector δkko is interpreted as the vector as having zero entries everywhere except at position k k0 (k0 fixed so δkk0 is a vector), or in other words, the k0 th column of the N − 1 sized identity matrix. The DHT of the Kronecker-delta can be found from the definition of the forward transform via

δpq

Making use of the now-present Dirac-delta function, Eq. (41) simplifies to give a modified Parseval relationship: N−1 X k 1

X N−1 Hp Gp gk hk : J n 1 j nk J n 1 j nk J n 1 j np J n 1 j np p 1

H δkko

N−1 X k 1

Y m;k δkko Y nN m;k0 :

(48)

The symbol H · is used to denote the operation of taking the discrete Hankel transform. This gives us our first DHT transform pair of order n dimension N − 1, and we denote this relationship as

(42) In other words, under a DHT that uses the Y nN matrix, inner products of the scaled functions are preserved but not the inner products of the functions themselves. A. Parseval Theorem As a consequence of the orthogonality property of T nN , the T nN based DHT is energy preserving, meaning that Parseval’s theorem is satisfied: N−1 X

jF m j2

m 1

N−1 X

jf k j2 :

δkko ⇔ Y n;N m;k0 :

(49)

Here, f k ⇔ F m is used to denote a transform pair and Y nN m;k0 is k0 th column of the transformation matrix Y nN . B. Inverse Transform of the Kronecker-Delta Function From Eq. (49), we can deduce that the vector f k that transforms to the Kronecker-delta vector δmmo function. Namely, we take the forward transform of f k Y n;N k;m0 :

(43)

(50)

k 1

In matrix notation, this can be written as F¯ T F f¯ T f;

(44)

As before, Y nN k;m0 represents the m0 th column of the transformation matrix Y nN . From the forward definition of the transform, Eq. (33), the transform of Y n;N k;m0 is given by Fm

N−1 X

Y nN m;k f k

N−1 X

nN Y nN m;k Y k;m0 δmm0 ;

where the overbar indicates a conjugate transpose and the superscript T indicates a transpose. ¯ h f and This follows from (40) by substituting g f; ¯ H F. For the formulation with Y nN as the transforG F; mation kernel, the equivalent expression is

where we have used the orthogonality relationship (20). This gives us another DHT pair:

¯ T Y nN f f¯ T Y nN T Y nN f: F¯ T F Y nN f

Y n;N k;m0 ⇔ δmmo :

(45)

Although it is obvious from Eq. (42) that if we define the “scaled” vector f Scaled k

fk ; J n 1 j nk

F Scaled p

Fp ; J n 1 j np

(46)

then by straightforward substitution of scaled vectors and their conjugates for g and h functions and their transforms, it follows that ¯ f Scaled : FScaled FScaled f Scaled T

T

(47)

k 1

k 1

(51)

(52)

C. Generalized-Shift Operator For a one-dimensional Fourier transform, one of the known transform rules is the shift rule, which states that f x − a F−1 fe−iaω fˆ ω g

1 2π

Z

∞ −∞

fe−iaω fˆ ω geiωt dω:

(53)

In Eq. (53), fˆ ω is the Fourier transform of f x , F−1 denotes an inverse Fourier transform, and e−iaω is the kernel of the Fourier transform operator. Motivated by this result, we define a generalized-shift operator by finding the inverse



DHT of the DHT of the function multiplied by the DHT kernel. This is a discretized version of the definition of a generalized shift operator as proposed by Levitan [51]. (He suggested the complex conjugate of the Fourier operator, which for Fourier transforms is the inverse transform operator.) We thus propose the definition of the generalized-shifted function to be given by f shift k;ko

N−1 X

Y nN p;ko F p | {z }

Y nN k;p

p 1

;

f shift k;ko

N−1 X p 1

that rely on Hankel transforms [53,54] and of generalized Hankel convolutions [55–57]. D. Transform of the Generalized Shift We now consider the forward DHT transform of the shifted function f shift k;ko . From the definition, the DHT of the shifted function can be found from N−1 X

(54)

k 1

shift in Hankel domain

where 1 ≤ k; ko ≤ N − 1. For a single, fixed value of ko , then shift f shift k;ko is another N − 1 vector, with the notation f k;ko implying a k0 -shifted version of f k . This generalizes the notion of the shift, usually denoted f k−ko , which inevitably encounters difficulty when the subscript k − ko falls outside the range 1; N − 1 . We note that, if all possible shifts ko are considered, then f shift k;ko is an N − 1 square matrix (in other words, a twodimensional structure), whereas the original unshifted f k is an N − 1 vector. For the DFT, when the shifted subscript k − ko falls outside the range of the indices, it is usually interpreted modulo the size of the DFT. However, the kernel of the Fourier transform is periodic so this does not create difficulties for the DFT. The Bessel functions are not periodic so the same trick cannot be used with the Hankel transform. In fact, this lack of periodicity and lack of simple relationship between J n x − y and J n x is the reason that the continuous Hankel transform does not have a convolution-multiplication rule [52]. Therefore, the notation f k−ko would not make mathematical sense when used with the DHT. With the definition given by Eq. (54), no such confusion arises since the definition is unambiguous for all allowable values of k and ko . The shifted function f shift k;ko can also be expressed in terms of the original unshifted function f k . Using the definition of F m from Eq. (33) and a dummy change of variable, then Eq. (54) becomes nN Y nN k;p Y p;ko F p

N−1 X p 1

nN Y nN k;p Y p;ko

N−1 X

Y nN p;m f m :

N−1 X p 1

nN Y nN k;p Y p;ko F p

N−1 X N−1 X

N−1 X N−1 X

m 1 p 1

p 1 k 1

nN nN Y nN k;p Y p;ko Y p;m :

f shift k;ko

m 1

nN Y nN k;p Y p;ko F p :

(59)

N−1 X p 1

nN δmp Y nN p;ko F p Y m;ko F m : (60)

This yields another transform pair and is the shift-modulation rule. This rule is analogous to the shift-modulation rule for regular Fourier transforms, whereby a shift in the spatial domain is equivalent to modulation in the frequency domain: nN f shift k;ko ⇔ Y m;ko F m :

(61)

Note that Eq. (61) does not imply a summation over the m index. For a fixed value of ko on the left-hand side, the corresponding transformed value of F m is multiplied by the m; ko th entry of the Y nN matrix. E. Modulation We consider the forward DHT of a function “modulated” in the space domain f k Y nN k;ko gk . Here, the interpretation of f k Y nN g is that the kth entry of the vector g is multiplied by k k;ko the k; ko th entry of Y nN for a fixed value of ko . No summation is implied, so this is not a dot product; both f k and Y nN k;ko gk are N − 1 vectors. Again, we implement the definition of the forward transform, N−1 X
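As an illustrative numerical check of the generalized shift (a sketch under the same inline-construction assumptions as the earlier sketches, not the Appendix A code), the k0-shifted vector of Eq. (54) can be formed and its DHT compared with the modulated transform of Eq. (61):

% Minimal sketch: build the generalized shift of Eq. (54) for a fixed k0 and
% verify the shift-modulation pair of Eq. (61).
n = 0; N = 32; k0 = 5;
jn = zeros(N,1);
for k = 1:N
    jn(k) = fzero(@(x) besselj(n,x), (k + n/2 - 0.25)*pi);
end
jnN = jn(N);
[K, M] = meshgrid(jn(1:N-1));
Y = 2*besselj(n, M.*K/jnN) ./ (jnN*besselj(n+1, K).^2);   % Eq. (19)
f = rand(N-1,1);              % arbitrary test vector
F = Y*f;                      % forward DHT, Eq. (33)
fshift = Y*(Y(:,k0).*F);      % Eq. (54): k0-shifted version of f
norm(Y*fshift - Y(:,k0).*F)   % Eq. (61): DHT of the shift equals modulation,
                              % zero up to the discrete orthogonality error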

Y nN m;k f k

k 1

N−1 X k 1

nN Y nN m;k Y k;ko gk ;

(62)

and write gk in terms of its inverse transform: (56) gk

N−1 X

Y nN k;p Gp :

(63)

p 1

(57)

Then Eq. (62) becomes N−1 X k 1

Y nN m;k f k

N−1 X k 1

nN Y nN m;k Y k;ko gk

N−1 X k 1

nN Y nN m;k Y k;ko

N−1 X

Y nN k;p Gp : (64)

p 1

Interchanging the order of summation gives

It then follows that Eq. (56) can be written as N −1 X

p 1

(55)

As indicated in Eq. (56), the quantity in brackets can be considered to be a type of shift operator acting on the original unshifted function. We can define this as

p 1

N−1 X

δmp

shift operator

S nN k;ko ;m

Y nN m;k

k 1

| {z }

| {z }

N −1 X

N−1 X

nN nN Y nN m;k Y k;p Y p;ko F p

m 1

nN nN Y nN k;p Y p;ko Y p;m f m :

shift Y nN m;k f k;ko

Changing the order of summation gives

Changing the order of summation gives f shift k;ko


S nN k;ko ;m f m :

N−1 X N−1 X

(58)

This triple-product shift operator is similar to previous definitions of shift operators for multidimensional Fourier transforms

p 1 k 1

nN nN shift Y nN m;k Y k;ko Y k;p Gp Gm;ko :

(65)

| {z } shift operator

By comparing Eq. (65) with Eqs. (56)–(57), we recognize the shift operator as indicated in (65). This produces a



modulation-shift rule as would be expected, so that the forward DHT of a modulated function is equivalent to a generalized shift in the frequency domain. This yields another transform pair: shift Y nN k;ko gk ⇔ Gm;ko :

(66)

In other words, Eq. (66) says that modulation in the space domain is equivalent to a shift in the frequency domain, as would be expected for a (generalized) Fourier transform. F. Convolution We consider the convolution using the generalized shifted function previously defined. The convolution of two functions is defined as N−1 X

f k g h k

k0 1

k0 1

gko hshift k;ko

N−1 X N−1 X k0 1 q 1

Y nN ko ;q Gq

h g k

(67)

N−1 X k0 1

N−1 X

Y nN m;k gk hk

k 1

N−1 X p 1

nN Y nN k;p Y p;ko H p

o

nN Y nN p;ko Y ko ;q

Y nN k;p H p Gq :

(68)

Y nN m;k

N−1 X q 1

However, from the orthogonality relationship (20), the summation over k0 gives the Kronecker-delta function, so that Eq. (68) becomes

N−1 X

gko hshift k;ko

N−1 X N−1 X

δpq Y nN k;p H p Gq

q 1 p 1

Y nN k;p H p Gp :

(69)

p 1

The right-hand side of Eq. (69) is clearly the inverse transform of the product of the transforms H p F p . This gives us another transform pair:

g h k

N−1 X k0 1

gko hshift k;ko

⇔ H m Gm :

(70)
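The convolution rule can be checked numerically in the same spirit (again an inline illustrative sketch rather than the Appendix A scripts): the DHT of the generalized convolution of Eq. (67) should equal the entrywise product of the individual transforms, Eq. (70).

% Minimal sketch: verify the convolution-multiplication pair of Eqs. (67)-(70).
n = 0; N = 32;
jn = zeros(N,1);
for k = 1:N
    jn(k) = fzero(@(x) besselj(n,x), (k + n/2 - 0.25)*pi);
end
jnN = jn(N);
[K, M] = meshgrid(jn(1:N-1));
Y = 2*besselj(n, M.*K/jnN) ./ (jnN*besselj(n+1, K).^2);   % Eq. (19)
g = rand(N-1,1);  h = rand(N-1,1);
G = Y*g;  H = Y*h;                       % forward DHTs, Eq. (33)
hshift = Y*(Y.*repmat(H,1,N-1));         % column k0 holds h^shift(:,k0), Eq. (54)
convgh = hshift*g;                       % Eq. (67): (g*h)_k = sum over k0 of g(k0) h^shift(k,k0)
norm(Y*convgh - H.*G)                    % Eq. (70): DHT of convolution = H.*G (near zero)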

It follows from Eq. (69) that interchanging the roles of g and h will yield the same result, meaning N−1 X k0 1

gshift k;ko hko

N−1 X p 1

N−1 X

Y nN k;p H p :

(73)

p 1

| {z } | {z }

Y nN m;k gk hk

k 1

N−1 X q 1

Gq

N−1 X N−1 X

hk

nN nN Y nN m;k Y k;q Y k;p H p

p 1 k 1

| {z }

N −1 X

Gq H shift m;q G H m :

Y nN k;p Gp H p :

(74)

q 1

This gives us yet another transform pair that says that multiplication in the spatial domain is equivalent to convolution in the transform domain: N −1 X

Gq H shift m;q G H m :

(75)

Interchanging the roles of G and H in Eq. (75) demonstrates that convolution in the transform domain also commutes: N −1 X

Gq H shift m;q

q 1

k0 1

Y nN k;q Gq

Rearranging gives

G H m

δpq

g h k

(72)

q 1

| {z }

N−1 X

gko hshift k;ko g h k :

gk

gk hk ⇔

hshift k;k

q 1 p 1 k0 1

k0 1

shift operator

gko

N−1 X k 1

| {z } | {z }

N−1 X N−1 X N−1 X

N−1 X

gshift k;ko hko

G. Multiplication We now consider the forward transform of a product in the space domain f k gk hk so that

N−1 X

gko hshift k;ko :

The meaning of Eq. (67) follows from the traditional definition of a convolution: multiply one of the functions by a shifted version of a second function and then sum over all possible shifts. Subsequently, from the definition of the inverse transforms, we obtain N−1 X

Therefore, it follows that

(71)

N −1 X

Gshift m;q H q H G m :

(76)

q 1

10. USING THE DHT TO APPROXIMATE THE CONTINUOUS HANKEL TRANSFORM Equations (10) and (13), with their infinite summations, are exact for band-limited and space-limited functions, respectively. Here, we propose using the discrete Hankel transform with finite domains for both space and frequency to approximate the continuous transform (where it is understood that a function could be band-limited or space-limited, but not both). In other words, we propose the following N-dimensional discretization scheme in finite space 0; R and finite frequency space 0; W ρ given by 9 jnk jnk R = r nk W jnN ρ ρnk jRnk

jnk W ρ jnN

;

k 1…N − 1:

(77)

In Eq. (77), the order of the Bessel zeros must match the order of the discrete Hankel transform that is sought, in keeping with the derivation of the intuitive discretization scheme in Section 4. The relationship W ρ jnN R can be used to relate the finite frequency domain to the finite space domain.



If the function f(r) is defined between [0, R] and we want to calculate a forward transform, then it is proposed to use the space and frequency space discretization scheme of

r_{nk} = \frac{j_{nk} R}{j_{nN}}, \qquad \rho_{nm} = \frac{j_{nm}}{R}, \qquad 1 \leq m, k \leq N-1.    (78)

To compute the forward transform, sample the function f(r) on r_{nk} = j_{nk} R / j_{nN} and assign this to f_k so that f_k = f(r_{nk}) = f(j_{nk} R / j_{nN}). Then calculate F_m from the DHT:

F_m = \frac{R^2}{j_{nN}} \sum_{k=1}^{N-1} Y^{nN}_{m,k}\, f_k \qquad \text{or} \qquad F = \frac{R^2}{j_{nN}}\, Y^{nN} f.    (79)

The resulting values F_m are then an approximation to F(\rho_{nm}) = F(j_{nm}/R).

Conversely, if the Hankel transform F(\rho) is defined on [0, W_\rho] and it is desired to calculate the inverse transform, then use the discretization scheme

r_{nk} = \frac{j_{nk}}{W_\rho}, \qquad \rho_{nm} = \frac{j_{nm} W_\rho}{j_{nN}}, \qquad 1 \leq m, k \leq N-1.    (80)

Sample the function F(\rho_{nm}) = F(j_{nm} W_\rho / j_{nN}) and assign this to F_m so that F_m = F(\rho_{nm}). Calculate f_k from the IDHT:

f_k = \frac{W_\rho^2}{j_{nN}} \sum_{m=1}^{N-1} Y^{nN}_{k,m}\, F_m \qquad \text{or} \qquad f = \frac{W_\rho^2}{j_{nN}}\, Y^{nN} F.    (81)

Then the resulting values f_k calculated via the IDHT are an approximation to f(r_{nk}) = f(j_{nk}/W_\rho).

A. Numerical Tests
The calculations proposed to use the DHT to approximate the continuous Hankel transform were tested on two functions with known forward and inverse Hankel transforms. All simulations were performed using Matlab (Mathworks). The first function (modified Gaussian) is given by

f(r) = e^{-a^2 r^2} r^n.    (82)

Here, a is a real number and n is an integer. The continuous nth order Hankel transform of (82) [so that the order n of the Hankel transform is the same as the power of r in (82)] is known to be given by [49]

F(\rho) = \frac{\rho^n}{(2a^2)^{n+1}}\, e^{-\rho^2/4a^2}.    (83)

For the modified Gaussian, the closed form analytical expression for the nth order Hankel transform only exists when the order of the modification is the same as the order of the Hankel transform. The second function to be tested is the sinc function, which is given by

f(r) = \frac{\sin(ar)}{ar}.    (84)

Its nth order Hankel transform is also known to be given by [49]

F(\rho) = \frac{(\rho/a)^n \cos(n\pi/2)}{a^2 \sqrt{1 - \rho^2/a^2}\, \big(1 + \sqrt{1 - \rho^2/a^2}\big)^n} \quad \text{for } \rho < a, \qquad F(\rho) = \frac{\sin\!\big(n \arcsin(a/\rho)\big)}{a^2 \sqrt{\rho^2/a^2 - 1}} \quad \text{for } \rho > a.    (85)

For the purpose of testing the accuracy of the DHT and IDHT, the dynamic error is used, defined as [2]

e(v) = 20 \log_{10}\!\left(\frac{|f(v) - \tilde{f}(v)|}{\max |\tilde{f}(v)|}\right).    (86)

This error function compares the difference between the exact function values f(v) (evaluated from the continuous function) and the function values estimated via the discrete transform, \tilde{f}(v), scaled with the maximum value of the discretely estimated samples. Equation (86) can be used to evaluate the computation of either forward or inverse Hankel transform via the DHT/IDHT, and compared with known continuous Hankel relationships. The dynamic error uses the ratio of the absolute error to the maximum amplitude of the function on a log scale. Therefore, negative decibel errors imply an accurate discrete estimation of the true transform value. It is noted that -320 dB corresponds to floating point numerical accuracy. The transform is also tested for accuracy on itself. This is performed by consecutive forward and then inverse transformation to verify that the transforms themselves do not add errors. For this evaluation, the average absolute error \frac{1}{N}\sum_{i=1}^{N} |f_i - \tilde{f}_i| is used.

To maintain good sampling of the function for the discrete transform, it is important to maintain the W_\rho = j_{nN}/R relationship. Although the functions chosen for testing the transforms are not exactly space or band-limited, they can be considered to be approximately limited since the function or its transform approaches zero at some point (it can be made as close to zero as we like). Moreover, the two functions were chosen so that one is effectively space-limited and the other is effectively band-limited. Thus, for the purpose of the tests, the true function has been computed, and space and band-limits have been imposed. It is to be noted that the functions are evaluated at the (scaled) Bessel zeros in accordance with Eqs. (78) or (80), which implies that their evaluation does not start at zero on the plots.

1. Evaluation of the DHT of the Modified Gaussian
The first function, the modified Gaussian of Eq. (82), is tested with a = 5, and with two different orders of n, n = 1 and n = 11. The true functions are shown in Fig. 1.
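The forward-transform recipe of Eqs. (78)-(79) and the dynamic error of Eq. (86) can be condensed into a few lines of MATLAB. The sketch below builds the Bessel zeros and the Y matrix inline (unlike the full scripts of Appendix A) and mirrors the n = 1 modified Gaussian test described next; it is illustrative only.

% Minimal sketch: approximate the continuous HT of the n = 1 modified Gaussian
% (a = 5, R = 2, N = 64) with the DHT of Eq. (79) and compute the dynamic
% error of Eq. (86).
n = 1; N = 64; R = 2; a = 5;
jn = zeros(N,1);
for k = 1:N
    jn(k) = fzero(@(x) besselj(n,x), (k + n/2 - 0.25)*pi);
end
jnN = jn(N);
[K, M] = meshgrid(jn(1:N-1));
Y = 2*besselj(n, M.*K/jnN) ./ (jnN*besselj(n+1, K).^2);   % Eq. (19)
r   = jn(1:N-1)*R/jnN;                                    % r_nk,   Eq. (78)
rho = jn(1:N-1)/R;                                        % rho_nm, Eq. (78)
f = exp(-a^2*r.^2).*r.^n;                                 % samples of Eq. (82)
F = (R^2/jnN)*(Y*f);                                      % DHT,    Eq. (79)
Ftrue = rho.^n .* exp(-rho.^2/(4*a^2)) / (2*a^2)^(n+1);   % Eq. (83)
dynErr = 20*log10(abs(Ftrue - F)/max(abs(F)));            % Eq. (86), in dB
max(dynErr)    % strongly negative values indicate an accurate approximation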



Fig. 1. (a) and (b) Scaled Gaussian and its (c) first- and (d) eleventh-order Hankel transform.

From the graph of the function and its transform for n = 1, it can be assumed that f(r) is space-limited at r = 1 and F(\rho) is band-limited at \rho = 50. Taking a slightly larger domain for numerical tests, we choose R = 2 and W_\rho = 100. From W_\rho = j_{nN}/R, the closest Bessel function zero is j_{nN} = 201.8455, with N = 64. The same can be done with the n = 11 function. Consequently, values are taken as R = 2 and W_\rho = 110. This results in N = 64 and j_{nN} = 217.2774. Choosing the function as space-limited results in using the scaling factors R^2/j_{nN} for the DHT and j_{nN}/R^2 for the IDHT. The results of performing the finite DHT and IDHT calculations and comparing these with evaluating the continuous transforms at the same sampling points are shown in Figs. 2 and 3, respectively. In the figures, the solid lines are the continuous transforms and the dotted lines are the discrete transforms.

Fig. 2. (a) Continuous (solid line) and discrete (dotted line) n = 1 Hankel transform of the n = 1 modified Gaussian, and (b) the error between the continuous and discrete n = 1 HT. (c) Continuous (solid line) and discrete (dotted line) n = 11 Hankel transform of the n = 11 modified Gaussian, and (d) the error between the continuous and discrete n = 11 HT.

Fig. 3. (a) Continuous (solid line) and discrete (dotted line) n = 1 inverse Hankel transform of the n = 1 modified Gaussian, and (b) the error between the continuous and discrete n = 1 IHT. (c) Continuous (solid line) and discrete (dotted line) n = 11 inverse Hankel transform of the n = 11 modified Gaussian, and (d) the error between the continuous and discrete n = 11 IHT.

It can be noted that, for both the forward and inverse transforms, the computed error is very low, even for relatively small values of N. Performing the forward DHT and then the IDHT on the obtained result results in an average absolute error of 1.6926e-17 from the original function for n = 1 and 8.5249e-22 for n = 11, for N = 64 in both cases.

2. Evaluation of the DHT of the Sinc Function
For the sinc function of Eq. (84), the assumed limits are taken with respect to the true functions, as shown in Fig. 4. In both cases, from Fig. 4 it can be seen that the Hankel function is effectively band-limited at W_\rho = 30. Thus, taking a sample size of N = 256 gives j_{nN} = 805.0327 for n = 1 and j_{nN} = 820.6675 for n = 11, and we obtain R = 26.75 for n = 1 and R = 27.5 for n = 11. Since the function is band-limited, the approximation to the continuous transform is done by using the frequency scaling factor j_{nN}/W_\rho^2 for the DHT and W_\rho^2/j_{nN} for the IDHT. The results of performing the finite DHT and IDHT calculations and comparing

Fig. 4. (a) and (b) Sinc function and its (c) HT with n = 1 and (d) HT for n = 11.
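For the band-limited convention used with the sinc test, a corresponding sketch (again inline and illustrative; the full listing is A-5 in Appendix A) applies the j_{nN}/W_\rho^2 forward scaling and compares against Eq. (85):

% Minimal sketch: forward DHT of the n = 1 sinc of Eq. (84) treated as
% band-limited at W_rho = 30, with N = 256, compared against Eq. (85).
n = 1; N = 256; a = 5; Wp = 30;
jn = zeros(N,1);
for k = 1:N
    jn(k) = fzero(@(x) besselj(n,x), (k + n/2 - 0.25)*pi);
end
jnN = jn(N);                                   % about 805, so R = jnN/Wp
[K, M] = meshgrid(jn(1:N-1));
Y = 2*besselj(n, M.*K/jnN) ./ (jnN*besselj(n+1, K).^2);   % Eq. (19)
r   = jn(1:N-1)/Wp;                            % r_nk   = j_nk / W_rho, Eq. (80)
rho = jn(1:N-1)*Wp/jnN;                        % rho_nm = j_nm W_rho / j_nN
f = sin(a*r)./(a*r);                           % samples of Eq. (84)
F = (jnN/Wp^2)*(Y*f);                          % band-limited forward scaling
Ftrue = zeros(N-1,1);
lo = rho < a;  hi = rho > a;
Ftrue(lo) = (rho(lo)/a).^n*cos(n*pi/2) ./ ...
    (a^2*sqrt(1-rho(lo).^2/a^2).*(1+sqrt(1-rho(lo).^2/a^2)).^n);   % Eq. (85)
Ftrue(hi) = sin(n*asin(a./rho(hi))) ./ (a^2*sqrt(rho(hi).^2/a^2-1));
max(20*log10(abs(Ftrue - F)/max(abs(F))))      % dynamic error; Gibbs oscillations
                                               % near rho = a raise this value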



Fig. 6. (a) Comparison of continuous (solid line) and discrete (dotted line) n 1 Inverse Hankel transform of a sinc, (b) the error in part (a), (c) continuous and discrete n 11 IHT of a sinc, and (d) the error in part (c).

Fig. 5. (a) Comparison of continuous (solid line) and discrete (dotted line) n 1 Hankel transform of a sinc, (b) the error in part (a), (c) continuous and discrete n 11 HT of a sinc, and (d) the error in part (c).

these with evaluating the continuous transforms at the same sampling points are shown in Figs. 5 and 6, respectively. In the graphs, the continuous transform is displayed as a solid line and the discrete transform as the dots. The DHT suffers from Gibbs phenomenon at the discontinuity, as expected. However, the dynamic error for the rest of the function remains low. Furthermore, performing the forward DHT and then the IDHT on the obtained result results in an average absolute error of 5.2274e-15 from the original function for n = 1 and 6.1430e-13 for n = 11, for N = 256 in both cases.

Table 1. Summary of DHT Relationships (each row gives a transform pair f_k ⇔ F_m)

Definition of forward transform:   F_m = \sum_{k=1}^{N-1} Y^{nN}_{m,k} f_k
Definition of inverse transform:   f_k = \sum_{m=1}^{N-1} Y^{nN}_{k,m} F_m
Dirac-delta in space:              \delta_{k k_0}  ⇔  Y^{nN}_{m,k_0}
Dirac-delta in frequency:          Y^{nN}_{k,m_0}  ⇔  \delta_{m m_0}
Generalized shift in space:        f^{shift}_{k,k_0} = \sum_{p=1}^{N-1} Y^{nN}_{k,p} Y^{nN}_{p,k_0} F_p = \sum_{m=1}^{N-1} \big[\sum_{p=1}^{N-1} Y^{nN}_{k,p} Y^{nN}_{p,k_0} Y^{nN}_{p,m}\big] f_m  ⇔  Y^{nN}_{m,k_0} F_m
Generalized shift in frequency:    Y^{nN}_{k,k_0} g_k  ⇔  G^{shift}_{m,k_0}
Convolution in space:              (g * h)_k = \sum_{k_0=1}^{N-1} g_{k_0} h^{shift}_{k,k_0}  ⇔  H_m G_m
Multiplication in space:           g_k h_k  ⇔  \sum_{m_0=1}^{N-1} G_{m_0} H^{shift}_{m,m_0} = (G * H)_m

11. SUMMARY AND CONCLUSIONS
In summary, in this paper we have motivated, proposed, and evaluated the mathematical theory for the DHT, using the same ideas that have shown the DFT to be so useful in various disciplines. The standard set of shift, modulation, multiplication, and convolution rules were derived. The summary of these rules is shown in Table 1. In addition, we proposed the use of this DHT to approximate the continuous Hankel transform in the same manner that the DFT is known to be able to approximate the continuous Fourier transform, using specifically chosen sampling points in both spatial and frequency domains. The errors of using the DHT to approximate its continuous counterpart were shown to be low for the chosen effectively space-limited and effectively band-limited functions.

ACKNOWLEDGMENTS This work was financially supported by the Natural Sciences and Engineering Research Council of Canada.

REFERENCES 1. V. Magni, G. Cerullo, and S. De Silvestri, “High-accuracy fast Hankel transform for optical beam propagation,” J. Opt. Soc. Am. A 9, 2031–2033 (1992). 2. M. Guizar-Sicairos and J. C. Gutiérrez-Vega, “Computation of quasi-discrete Hankel transforms of integer order for propagating optical wave fields,” J. Opt. Soc. Am. A 21, 53–58 (2004). 3. H.-M. Kim, K. H. Ko, and K. H. Lee, “Real-time convolution method for generating light diffusion profiles of layered turbid media,” J. Opt. Soc. Am. A 28, 1276–1284 (2011). 4. S. Nakadate and H. Saito, “Particle-size-distribution measurement using a Hankel transform of a Fraunhofer diffraction spectrum,” Opt. Lett. 8, 578–580 (1983). 5. D. Zhang, X. Yuan, N. Ngo, and P. Shum, “Fast Hankel transform and its application for studying the propagation of cylindrical electromagnetic fields,” Opt. Express 10, 521–525 (2002).


622 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. 17. 18. 19. 20. 21. 22. 23. 24. 25.


