Spatial Compounding of Ultrasonic Diagnostic Images for Rotating Linear Probe with Geometric Parameter Error Compensation

  • Choi, Myoung Hwan (Dept. of Electronic and Communication Eng., Kangwon National University) ;
  • Bae, Moo Ho (Dept. of Electronic Eng., Hallym University)
  • Received : 2013.03.26
  • Accepted : 2014.01.27
  • Published : 2014.07.01

Abstract

In ultrasonic medical imaging, spatial compounding is a technique in which the ultrasonic beam is steered to examine patient tissue from multiple angles. In conventional ultrasonic diagnostic imaging, the steering of the ultrasonic beam is achieved electronically using the phased-array transducer elements. In this paper, a spatial compounding approach is presented in which the ultrasonic probe element is rotated so that the beam steering is achieved mechanically. In spatial compounding, the target position is computed from the position of the rotation axis and the angular position of the transducer array. However, the construction of the rotation mechanism and its control system introduces inevitable uncertainties in these values. These geometric parameter errors result in target position errors, and the consequence is a blurry compounded image. To reduce these target position errors, we present a spatial compounding scheme in which error-correcting transformation matrices are computed and applied to the raw images before spatial compounding to reduce the blurriness in the compounded image. The proposed scheme is illustrated using phantom images and live scan images of a human knee, and it is shown that the blurriness is effectively reduced.

1. Introduction

One of the characteristics of ultrasonic medical images is a granular, particle-like structure commonly known as speckle [1-3]. Speckle is a fundamental characteristic of ultrasound imaging arising from ultrasonic physics, and it is a major cause of image quality degradation. Speckle in ultrasonic images arises from the presence of closely spaced and randomly distributed microscopic scatterers [1, 4]. Techniques such as spatial compounding and frequency compounding have been proposed to reduce the effect of speckle. These compounding techniques have been widely investigated [5-15], and spatial compounding in the image space has been used to reduce speckle brightness variations. In the spatial compounding technique, images are created from ultrasonic echo signals gathered at a number of different ultrasonic beam angles, and these images are merged using an appropriate function, such as pixel intensity averaging, to form a spatially compounded image. In the resulting images, the structural targets show consistently strong echoes while the speckle brightness is reduced. Consequently, the structural targets in spatially compounded images are enhanced and variations in the soft tissues due to speckle noise are averaged out [16]. This has been demonstrated both in vitro [14, 17] and in vivo [11, 18-22]. These results indicate that spatial compounding can enhance the delineation of the boundaries and internal structure of lesions. In most conventional and commercial systems, the ultrasonic beam steering is achieved by electronically controlling the excitation of the individual elements in the ultrasonic transducer array. However, as the electronic steering angle increases, the undesirable properties of electronic beam steering also grow: the grating lobe artifact increases, which degrades contrast resolution; the effective aperture size decreases; and the increased obliquity factor of the elements reduces the transducer sensitivity.

A new experimental beam steering approach was proposed in Bae [23], where the ultrasonic beam steering is achieved by mechanically rotating the probe element. The goal is to avoid the problems associated with electronic beam steering described above. A linear transducer array is rotated about an axis in the plane of the image. The structure of the rotating linear probe is shown in Fig. 1. This structure was built to investigate the feasibility of a new concept, and has not been studied in other literature or used in commercial products. The linear probe is immersed in a water tank and rotated about an axis located above the phantom target, and the rectangular image area attached to the probe rotates with it. The images acquired from the ultrasonic beams at different rotation angles are used in the spatial compounding scheme. However, the construction of the rotation mechanism and the control system introduces inevitable uncertainties in the values of the rotation axis position and the rotation angle: the nominal rotation angle of the rotating probe mechanism may not be exactly the same as the actual value. These geometric parameter errors result in target position errors, and the consequence is a blurry compounded image. Since these errors are caused by imprecision in the construction of the probe mechanism and cannot be eliminated completely, they must be compensated by a calibration process.

Fig. 1. Rotating linear probe and image areas. Target numbers 1 to 6 are shown in (b).

In this paper, we present a spatial compounding technique for the rotating linear probe mechanism proposed in [23] that reduces the effect of the geometric parameter errors. The effect of uncertainty in the mechanical parameters is compensated by applying error compensation matrices to the input images before spatial compounding, and an efficient algorithm is developed to compute these matrices. The experimental results show the improvement in the quality of the compounded images. The major contribution of this work is a procedure that reduces the effect of the parametric errors of the probe in ultrasonic spatial compounding. A preliminary result of this work was presented in [26].

This paper is organized as follows. In Section 2, spatial compounding with geometric parameter error compensation is described: the proposed spatial compounding algorithm is compared with the conventional algorithm, and the procedure to compute the error compensation matrices is described. In Section 3, experimental results on phantom and live images are described, followed by discussion in Section 4 and conclusions in Section 5.

 

2. Spatial Compounding with Geometric Parameter Error Compensation

In the spatial compounding method, a compounded image is computed from a series of consecutive images obtained at different scan angles [10, 16]. A schematic diagram of the rotating linear probe and its image areas is shown in Fig. 1(a), and a sequence of images with varying rotation angle is shown in Fig. 1(b). Let Ik, k = −N, …, 0, …, N, denote the image obtained at rotation angle θ = kΔθ, where Δθ is the incremental angle of rotation. An example with N = 2 is shown in Fig. 1(b). Images acquired by the rotating linear probe are initially in the reference coordinate frame attached to the rotating probe, and these images need to be transformed to a universal coordinate frame prior to the spatial compounding. In this work, the coordinate frame attached to the image I0 is used as the universal coordinate frame. As the probe is rotated back and forth, a sequence of images such as I0, I1, I2, I1, I0, I−1, I−2, I−1, … is obtained in the universal coordinate frame. In this image sequence, the image index k represents the angle of the probe. The image sequence Ik is then rearranged in the order of acquisition time as Xi, where i is the time index. In conventional spatial compounding, the compounded image at time step n, Yn(x, y), is computed from the current and previous images using a suitable function F:

Yn(x, y) = F( Xn(x, y), Xn−1(x, y), …, Xn−K+1(x, y) )    (1)

where K is the number of images used in the spatial compounding. Examples of the function F used in the literature include linear averaging, median, mean-excluding-minimum, root mean square, etc. [24]. The function F of linear averaging, for example, computes the linear average of the pixel brightness at position (x, y) over the sequence of images. This conventional spatial compounding scheme is shown in Fig. 2(a). In this work, linear averaging was used for simplicity, since the specific function F used in the compounding is not relevant to reducing the effect of the mechanical parameter error.
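As an illustration, the sliding-window compounding of Fig. 2(a) with linear averaging as F can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' implementation; the function and variable names are ours:

```python
import numpy as np

def compound(frames, K=5, F=np.mean):
    """Sliding-window spatial compounding: Y_n = F(X_n, ..., X_{n-K+1}).

    `frames` is the time-ordered image sequence; `F` reduces a stack of
    K frames pixel-wise (linear averaging by default)."""
    out = []
    for n in range(K - 1, len(frames)):
        window = np.stack(frames[n - K + 1 : n + 1])  # K x H x W stack
        out.append(F(window, axis=0))                 # pixel-wise reduction
    return out

# toy sequence of eight 64x64 "frames"
rng = np.random.default_rng(0)
frames = [rng.random((64, 64)) for _ in range(8)]
compounded = compound(frames, K=5)
```

Averaging K partially decorrelated frames reduces the speckle brightness variation by up to a factor of sqrt(K) when the frames are fully independent.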

Fig. 2. Spatial compounding schemes.

In the transformation from the rotating probe coordinate frame to the universal frame, the position of the axis of rotation and the rotation angle of the probe are needed. If these two values contain errors, the same target point appears at different positions in different images, and the consequence is a blurry compounded image, as shown in Fig. 4(a). In this work, the input images are modified by applying error compensation matrices that correct the image positions before they are used in the spatial compounding, as shown in Fig. 2(b). The error compensation matrices are computed from images of the wire targets of a phantom using image registration. As shown in Fig. 1(b), the center image I0 undergoes no rotation and is therefore free from the influence of errors in the rotation axis and rotation angle; it is thus used as the reference image with respect to which all other wire target images are transformed for registration. Let Qk(•), k = −N, …, 0, …, N, denote the transformation function applied to the input image Ik, and let the transformed images be denoted Jk = Qk(Ik). The images must be transformed so that the position of a wire target in Jk is exactly aligned with the position of the same wire target in image I0, and the process is summarized below.

Fig. 4. Spatial compounding results of the phantom image: (a) before and (b) after error correction.

Step 1: For each image Ik, the wire target image is converted to a binary image using a gray-level threshold. The threshold value is selected so that clear binary images of the wire targets are obtained.

Step 2: Position vectors of the wire targets, uk,n = [xk,n, yk,n]T, n = 1, …, M, are obtained by computing the center of gravity of the wire targets in the binary images, where M is the number of wire targets, and xk,n and yk,n are the x and y coordinates of the position vector uk,n of the n-th wire target. The position vectors of the wire targets in the reference image I0 are denoted u0,n = [x0,n, y0,n]T, n = 1, …, M.
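Steps 1 and 2 can be sketched as below. This is a hypothetical NumPy sketch: the synthetic image, the per-target regions of interest, and the threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

def wire_target_centroids(image, rois, threshold):
    """Steps 1-2: threshold the image to a binary image, then take the
    center of gravity of the above-threshold pixels inside each target's
    region of interest (r0, r1, c0, c1)."""
    binary = image > threshold
    centroids = []
    for (r0, r1, c0, c1) in rois:
        ys, xs = np.nonzero(binary[r0:r1, c0:c1])
        # centroid as [x, y] in full-image coordinates
        centroids.append(np.array([c0 + xs.mean(), r0 + ys.mean()]))
    return centroids

# synthetic frame with two bright "wire" blobs on a dark background
img = np.zeros((100, 100))
img[18:21, 29:32] = 200.0   # blob centred at (x, y) = (30, 19)
img[60:63, 70:73] = 180.0   # blob centred at (x, y) = (71, 61)
cents = wire_target_centroids(img, [(10, 30, 20, 40), (50, 70, 60, 80)],
                              threshold=100.0)
```

A binary (unweighted) centroid is used here, matching the "center of gravity of wire targets in the binary images" description.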

Step 3: Let the position vectors be expressed in homogeneous coordinates [25]. Let Hk, k = −N, …, 0, …, N, denote the 4×4 homogeneous transformation matrices that transform uk,n to u0,n. Then, since the images are in the x-y plane, we can write

[x0,n, y0,n, 0, 1]T = Hk [xk,n, yk,n, 0, 1]T,  n = 1, …, M    (2)

where Hk is composed of a rotation about the axis normal to the image plane and a translation within the plane.

Thus, Hk is the transformation matrix that aligns the wire targets of Ik with the wire targets of the reference image I0, as shown in (2). Using the first, second and fourth rows of (2), the equation simplifies to

[x0,n, y0,n, 1]T = Tk [xk,n, yk,n, 1]T    (3)

where Tk is the 3×3 matrix obtained from Hk by deleting its third row and third column.

The matrix Tk effectively transforms the target positions in the input image Ik to the correct positions, and functions as the error compensation matrix. Applying Tk to all wire targets in the image Ik,

[u0,1 u0,2 … u0,M; 1 1 … 1] = Tk [uk,1 uk,2 … uk,M; 1 1 … 1]    (4)

where the columns are the target positions in homogeneous coordinates. Let eq. (4) be written in a simplified form as

U0 = Tk Uk    (5)

where U0 and Uk denote the 3×M matrices of (4). If the number of target points M is greater than three and the target points are not all located on a straight line, Uk UkT is nonsingular and the transformation matrix Tk can be computed from (5) as

Tk = U0 UkT (Uk UkT)−1    (6)
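The computation in (6) amounts to a least-squares fit of Tk from the point correspondences. A minimal NumPy sketch (the function and variable names, and the synthetic rotation-plus-translation example, are ours):

```python
import numpy as np

def compensation_matrix(u_k, u_0):
    """T_k = U0 Uk^T (Uk Uk^T)^{-1}, where Uk and U0 are 3 x M matrices of
    target positions in homogeneous coordinates (least-squares when M > 3)."""
    u_k, u_0 = np.asarray(u_k, float), np.asarray(u_0, float)
    Uk = np.vstack([u_k.T, np.ones(len(u_k))])  # 3 x M
    U0 = np.vstack([u_0.T, np.ones(len(u_0))])  # 3 x M
    return U0 @ Uk.T @ np.linalg.inv(Uk @ Uk.T)

# six non-collinear targets displaced by a known rotation + translation
theta = np.deg2rad(2.0)
T_true = np.array([[np.cos(theta), -np.sin(theta),  3.0],
                   [np.sin(theta),  np.cos(theta), -1.5],
                   [0.0,            0.0,            1.0]])
u_k = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 3], [2, 8]], float)
u_0 = (T_true @ np.vstack([u_k.T, np.ones(6)]))[:2].T
T_k = compensation_matrix(u_k, u_0)   # recovers T_true
```

With noise-free correspondences the fit recovers the true transformation exactly; with measured centroids it returns the least-squares estimate.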

Step 4: In the transformation Jk = Qk(Ik), let the pixel at position (u, v) of Ik be transformed to the pixel at position (x, y) of Jk. Then, from (3) we can write

[u, v, 1]T = Tk−1 [x, y, 1]T    (7)

Hence, for each pixel position (x, y) of Jk, the transformation Jk = Qk(Ik) is computed by

Jk(x, y) = Ik(u, v)    (8)

where u and v are given by (7). The sequence of images Jk is used in the spatial compounding as shown in Fig. 2(b). After rearranging the sequence of images Jk in the order of image acquisition time as Zi, the compounded image at time step n can be computed using (1) as

Yn(x, y) = F( Zn(x, y), Zn−1(x, y), …, Zn−K+1(x, y) )    (9)
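Step 4 is an inverse-mapping warp: for each output pixel of Jk, the source pixel of Ik is looked up through Tk−1. A nearest-neighbour sketch follows; the sampling scheme and all names are our assumptions, since the paper does not specify the interpolation:

```python
import numpy as np

def apply_compensation(I_k, T_k):
    """J_k(x, y) = I_k(u, v) with [u, v, 1]^T = T_k^{-1} [x, y, 1]^T,
    sampled with nearest-neighbour rounding; out-of-range pixels stay 0."""
    H, W = I_k.shape
    ys, xs = np.mgrid[0:H, 0:W]                       # output pixel grid
    src = np.linalg.inv(T_k) @ np.stack(
        [xs.ravel(), ys.ravel(), np.ones(H * W)])     # eq. (7) for all pixels
    u = np.rint(src[0]).astype(int)
    v = np.rint(src[1]).astype(int)
    ok = (u >= 0) & (u < W) & (v >= 0) & (v < H)      # keep in-bounds sources
    J_k = np.zeros_like(I_k)
    J_k.flat[np.flatnonzero(ok)] = I_k[v[ok], u[ok]]  # eq. (8)
    return J_k

# a pure translation by (+2, +3): the bright pixel at (u, v) = (4, 5)
# should land at (x, y) = (6, 8)
I = np.zeros((12, 12))
I[5, 4] = 7.0
T = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 3.0], [0.0, 0.0, 1.0]])
J = apply_compensation(I, T)
```

Iterating over output pixels and pulling from the source (rather than pushing source pixels forward) guarantees that every pixel of Jk is defined exactly once.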

 

3. Experimental Results

A linear probe was attached to the rotation mechanism so that the probe could be rotated as shown in Fig. 1. The probe and the phantom targets were immersed in a water tank, and measurements were obtained using an Accuvix XQ manufactured by Samsung Medison. The ATS Multipurpose Phantom Model 539 was used. The probe was rotated about the nominal axis point (x, y) = (−2.02 mm, 5.94 mm). The nominal step angle of rotation Δθ was 1.26 degrees, and the probe was rotated through N = 8 steps in both the clockwise and counterclockwise directions. Images were obtained as 640 × 480 gray-level images and used in the spatial compounding. Five consecutive images were used in the compounding computation (K = 5), as this number is commonly used in the ultrasonic imaging industry.

Input images of the wire targets in the phantom are denoted Ik, k = −8, …, 0, …, 8. Binary images of Ik were computed by selecting a threshold manually such that the wire target positions could be located clearly. In each image, six wire targets near the phantom surface were used (M = 6). The wire target positions of Ik, uk,n = [xk,n, yk,n]T, n = 1, …, 6, k = −8, …, 0, …, 8, were obtained by computing the center of gravity of each wire target in the binary images. The error compensation matrices Tk were computed using (6) and used in computing the transformed images in (8). Let the target positions in the images Ik and Jk be denoted Iuk,n and Juk,n, n = 1, …, 6, respectively. The errors between the wire target positions in image Ik and those in the reference image I0, denoted IE(k,n) = |Iuk,n − Iu0,n|, n = 1, …, 6, k = −8, …, 0, …, 8, were computed. These errors were compared with the corresponding errors in the transformed images Jk, JE(k,n) = |Juk,n − Ju0,n|. Since the wire targets do not move, each target position should ideally be constant across all images, and the errors in the target positions should be zero. The computed errors are shown in Fig. 3. In the first sub-figure of Fig. 3(a), the upper line represents the error of the input image, IE(−8,n) = |Iu−8,n − Iu0,n|, n = 1, …, 6, while the lower line represents the error of the transformed image, JE(−8,n) = |Ju−8,n − Ju0,n|. The position errors IE(k,n) increase with rotation angle, up to over 15 pixels in I−8, and decrease to 1 pixel in I1. The target position errors in the transformed images, JE(k,n), were reduced to one pixel or less in all images. Hence, the transformation of the images to remove the effect of the errors in the rotation axis and rotation angle is shown to be effective. The transformation matrices are computed only once in the calibration step, and they can be used repeatedly in the spatial compounding process. In Fig. 3(b), the position errors of the wire targets are shown as a function of frame number. For each target, the position error before the correction is largest for frame numbers −8 and +8, which correspond to the largest probe rotation angles, while the position error is zero for frame number 0, which corresponds to no probe rotation and is zero by definition. After the correction is applied, the position errors for all wire targets are reduced to below one pixel.
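The position-error metric IE(k,n) (and likewise JE(k,n)) is simply the per-target Euclidean distance to the reference position, which can be computed as follows (a trivial sketch with our names):

```python
import numpy as np

def position_errors(u_k, u_0):
    """E(k, n) = |u_{k,n} - u_{0,n}|: Euclidean distance between each
    target position in frame k and the matching target in the reference."""
    return np.linalg.norm(np.asarray(u_k, float) - np.asarray(u_0, float),
                          axis=1)

# two targets displaced by (3, 4) and (0, 1) pixels -> errors 5 and 1
errs = position_errors([[13, 24], [40, 51]], [[10, 20], [40, 50]])
```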

Fig. 3. (a) Position error of wire targets as a function of target number for 16 images, before and after the error correction. (b) Position error of wire targets as a function of frame number, before and after the error correction; the frame number corresponds to the probe rotation angle.

The spatially compounded image of the wire targets from the input images Ik is shown in Fig. 4(a), and the spatially compounded image from the transformed images Jk is shown in Fig. 4(b). It can be seen that the blurriness is reduced considerably when the transformed images Jk are used. Horizontal line plots through the center of gravity of wire target A of Fig. 4(a) in the five input images are shown in Fig. 5(a). The peaks of the line plots are not aligned exactly, and five individual peaks can be seen. When these images are used in the spatial compounding, the five curves are averaged, with the result shown in Fig. 5(b). When the transformed images Jk were used in the compounding, the alignment of the five individual peaks improved, as shown in Fig. 5(c), and the result is a stronger as well as sharper compounded image, as shown in Fig. 5(d).

Fig. 5. Horizontal line plots of the five images and the compounded image through target A in Fig. 4.

The proposed compounding method was applied to live scan images of a human knee, and an image is shown in Fig. 6. Comparison of the images in the three locations indicated by rectangular boxes shows that when the proposed error correction matrices were used, the skin surface line is sharper and the separation of the internal structures is improved.

Fig. 6. Spatial compounding result of the human knee image (a) before and (b) after parameter error correction.

 

4. Discussions

The reduced target position errors and better alignment of the images demonstrate the effectiveness of the error compensation matrices Tk in the proposed method. When the wire target positions were computed from the original images Ik, the error between the positions of a wire target in Ik and I0, IE(k,n) = |Iuk,n − Iu0,n|, varied from 1 pixel to 15 pixels, with the largest error occurring in I−8 and I8, as can be expected since these two images are acquired at the largest rotation angles. After the transformation of the images using the error compensation matrices, the errors between the positions of the wire targets in the transformed images Jk and J0, JE(k,n) = |Juk,n − Ju0,n|, were found to be less than 1 pixel.

The line plots in Fig. 5 show the horizontal line images through target A. The reason for the blurry wire target A in Fig. 4(a) is shown in Fig. 5(a): the five individual peaks do not overlap exactly, which causes the target image to spread out when the images are averaged in the compounding. Using the error compensation matrices Tk, the alignment of the peaks was improved, resulting in a sharper and stronger compounded image. The standard deviation of the positions of the five peaks was 5.7 before and 0.2 after the error compensation, and the peak pixel values of the compounded images were 112 before and 155 after the error compensation, respectively.

Since both the conventional method and the proposed method use spatial compounding, the speckle characteristics of the two results are expected to look similar, as shown in Fig. 6. It is the blurring of the targets that is reduced.

 

5. Conclusions

We presented a spatial compounding approach in which the input images are acquired by a mechanically rotated linear probe element. A linear transducer array is rotated about an axis in the plane of the image. The computation of the ultrasound image requires the position of the axis of rotation and the angular position of the transducer array. However, the construction of the rotation mechanism and the control system introduces inevitable uncertainties in these values. These geometric parameter errors result in target position errors, and the consequence is a blurry compounded image. We presented a spatial compounding scheme in which error-correcting transformation matrices are computed and applied to the input images before spatial compounding to reduce the blurriness in the compounded image. The proposed scheme was illustrated using phantom and live scan images, and it was shown that the blurriness is effectively reduced.

References

  1. J. G. Abbott and F. L. Thurstone: 'Acoustic speckle: Theory and experimental analysis', Ultrason. Imaging, 1979, 1, 303-324. https://doi.org/10.1177/016173467900100402
  2. C. B. Burckhardt: 'Speckle in ultrasound B scans', IEEE Trans. Sonics and Ultrason., 1978, SU-25, 1-6.
  3. P. N. T. Wells and M. Halliwell: 'Speckle in ultrasonic imaging', Ultrason., 1981, 19, 225-229. https://doi.org/10.1016/0041-624X(81)90007-X
  4. R. F. Wagner, S. W. Smith, J. M. Sandrik and H. Lopez: 'Statistics of speckle in ultrasound B scans', IEEE Trans. Sonics Ultrason., 1983, SU-30, 156-163.
  5. J. F. Krucker, G. L. LeCarpentier, J. B. Fowlkes and P. L. Carson, 'Rapid elastic image registration for 3-D ultrasound', IEEE Trans. Med. Imag., 2002, 21(11), 1384-1394. https://doi.org/10.1109/TMI.2002.806424
  6. P. C. Li and M. O'Donnell: 'Elevational spatial compounding', Ultrason. Imaging, 1994, 16(3), 176-189.
  7. P. A. Magnin, O. T. von Ramm and F. L. Thurstone: 'Frequency compounding for speckle contrast reduction in phased array images,' Ultrason. Imaging, 1982, 4(4), 267-281. https://doi.org/10.1177/016173468200400303
  8. H. E. Melton and P. A. Magnin: 'A-mode speckle reduction with compound frequencies and compound bandwidths', Ultrason. Imaging, 1984, 6(3), 159-173. https://doi.org/10.1177/016173468400600205
  9. J.-Y. Meuwly, J.-P. Thiran and F. Gudinchet: 'Application of adaptive image processing technique to real-time spatial compound ultrasound imaging improves image quality', Invest. Radiol., 2003, 38(5), 257-262.
  10. M. O'Donnell and S. D. Silverstein: 'Optimum displacement for compound image generation in medical ultrasound', IEEE Trans. Ultrason., Ferroelect., Freq. Contr., 1988, 35(4), 470-476. https://doi.org/10.1109/58.4184
  11. D. P. Shattuck and O. T. von Ramm: 'Compound scanning with a phased array', Ultrason. Imaging, 1982, 4(2), 93-107. https://doi.org/10.1177/016173468200400201
  12. S. D. Silverstein and M. O'Donnell: 'Speckle reduction using correlated mixed-integration techniques', in Proc. SPIE 768 Pattern Recognition and Acoust. Imaging 1987, 168-172.
  13. S. D. Silverstein and M. O'Donnell: 'Frequency and temporal compounding of partially correlated signals: Speckle suppression and image resolution', in Proc. SPIE 845 Visual Commun. Image Processing II, 1987, 188-194.
  14. G. E. Trahey, S. W. Smith and O. T. von Ramm: 'Speckle pattern correlation with lateral aperture translation: Experimental results and implications for spatial compounding', IEEE Trans. Ultrason., Ferroelect., Freq. Contr., 1986, 33(3), 257-264. https://doi.org/10.1109/T-UFFC.1986.26827
  15. G. E. Trahey, J. W. Allison, S. W. Smith and O. T. von Ramm: 'A quantitative approach to speckle reduction via frequency compounding', Ultrason. Imaging, 1986, 8(3), 151-164. https://doi.org/10.1177/016173468600800301
  16. P. M. Shankar: 'Speckle Reduction in Ultrasound B-Scans Using Weighted Averaging in Spatial Compounding', IEEE Trans. Ultrasonics, Ferroelectrics, and Frequency Control, 1986, 33(6), 754-758. https://doi.org/10.1109/T-UFFC.1986.26892
  17. S. K. Jespersen, J. E. Wilhjelm and H. Sillesen: 'Multi-angle compound imaging', Ultrason. Imag., 1998, 20, 81-102. https://doi.org/10.1177/016173469802000201
  18. M. Berson, A. Roncin and L. Pourcelot: 'Compound scanning with an electrically steered beam', Ultrason. Imaging, 1981, 3, 303-308. https://doi.org/10.1177/016173468100300306
  19. D. A. Carpenter, M. J. Dadd and G. Kossoff: 'A multimode real time scanner', Ultrasound in Med. and Biol., 1980, 6, 279-284. https://doi.org/10.1016/0301-5629(80)90024-1
  20. A. Hernandez, O. Basset, P. Chirossel and G. Gimenez: 'Spatial compounding in ultrasonic imaging using an articulated scan arm', Ultrasound Med. Biol, 1996, 22(2), 229-238. https://doi.org/10.1016/0301-5629(95)02038-1
  21. S. Huber, M. Wagner, M. Medl and H. Czembirek: 'Real-time spatial compound imaging in breast ultrasound', Ultrasound Med. Biol., 2002, 28(2), 155-163. https://doi.org/10.1016/S0301-5629(01)00490-2
  22. K. V. Jenderka: 'Resolution improved Ultrasound Attenuation Estimation based on RF-data of Spatial Compound Scans', IEEE Int. Ultrasonics, Ferro-electrics, and Frequency Control Conference, 2004, 2078-2081.
  23. Moo-Ho Bae, B-S. Kim, M-K. Jeong, S-J. Kwon, W-Y. Lee and J-C. Seo: 'Improvement of ultrasound image quality by spatial compounding of RF echo', The 11th World Congress in Ultrasound, WFUMB 2006, Ultrasound in Medicine and Biology, 2006, 32(5S), 275.
  24. J. E. Wilhjelm, M. S. Jensen, S.K. Jespersen, B. Sahl and E. Falk: 'Visual and Quantitative Evaluation of Selected Image Combination Schemes in Ultrasound Spatial Compound Scanning', IEEE Trans. on Medical Imaging, 2004, 23(2), 181-190. https://doi.org/10.1109/TMI.2003.822824
  25. Fu, Gonzales, and Lee: 'Robotics: Control, Sensing, Vision and Intelligence', 1987, McGraw-Hill.
  26. Myoung H. Choi: 'Correction of Parameter Error in Rotating Linear Probe Using Image Registration for Ultrasonic Diagnostic Imaging', Proc. International Conference on Control, Automation, and Systems, Seoul, Korea, Oct. 2008, 909-912.
