Original Article

Computer-Aided Diagnosis for 3-Dimensional Breast Ultrasonography

Dar-Ren Chen, MD; Ruey-Feng Chang, PhD; Wei-Ming Chen, MS; Woo-Kyung Moon, MD

From the Department of General Surgery, China Medical College and Hospital, Taichung, Taiwan (Dr D.-R. Chen); the Department of Computer Science and Information Engineering, National Chung Cheng University, Chiayi, Taiwan (Dr Chang and Ms W.-M. Chen); and the Department of Diagnostic Radiology, Seoul National University Hospital, Seoul, South Korea (Dr Moon).


Arch Surg. 2003;138(3):296-302. doi:10.1001/archsurg.138.3.296.

Hypothesis  Using 3-dimensional (3-D) over 2-dimensional (2-D) ultrasonographic (US) images of the breast represents a potentially significant advantage for computer-aided diagnosis (CAD).

Background  Although conventional 2-D US images of the breast are increasingly used in surgical clinical practice, 3-D US imaging of the breast, a newly introduced technique, can offer more information than 2-D US images do.

Design  This study deals with a CAD method for use with the proposed 3-D US images of the breast and compares its performance with conventional 2-D US versions.

Methods  The test databases included 3-D US images of 107 benign and 54 malignant breast tumors, for a total of 161 US images. All solid nodules at US belonged to category C3 (ie, probably benign) or higher. The 3-D US imaging was performed using a scanner (Voluson 530; Kretz Technik, Zipf, Austria). New 3-D autocorrelation coefficients, extended from the traditional 2-D autocorrelations, were developed to extract the texture characteristics of the 3-D US images. The extracted texture features of the 3-D US images were used to classify the tumor as benign or malignant using a neural network.

Results  In the receiver operating characteristic analysis, the 3-D and 2-D autocorrelation calculating schemes yielded Az values (ie, area under the receiver operating characteristic curve) of 0.97 and 0.85, respectively, in distinguishing between benign and malignant lesions. Accuracy, sensitivity, specificity, positive predictive value, and negative predictive value were statistically significantly improved using 3-D instead of 2-D US images for CAD.

Conclusions  The proposed system (for 3-D and 2-D CAD) is expected to be a useful computer-aided diagnostic tool for classifying benign and malignant tumors on ultrasonograms and can provide a second reading to help reduce misdiagnosis. Findings from this study suggest that using 3-D over 2-D US images for CAD represents a potentially significant advantage.


ULTRASONOGRAPHY (US) has been used in medicine since the Second World War and is recognized as a noninvasive, nonradioactive, real-time, inexpensive imaging modality. During the 1990s, significant technical advances were made in diagnostic US as higher-frequency linear transducers were introduced. The increasing computing power of US platforms allowed fully digital systems with improved image resolution and contrast. This digitization is a great help in the field of computer applications for image processing. "If you are still using ultrasonography only for the distinction between cystic and solid lesions of breast, you are missing the boat," said Raymond in 2000.1 Ultrasonography has a well-established role in breast imaging. Using US imaging of breast tumors, Stavros et al2 described several malignant features that can be used to differentiate malignant from benign lesions with promising results. However, it is unlikely that the radiologist's or surgeon's decisions will ever be based 100% on imaging alone, and computer-aided diagnosis (CAD) may improve the interpretation rate. In studies by Chen et al,3-5 benign and malignant tumors are classified in a diagnostic model by applying a 2-dimensional (2-D) normalized autocorrelation matrix to a multilayer feed-forward neural network. The normalized autocorrelation matrix is computed from 2-D US images. Because only the coefficients of a single 2-D US image are calculated and saved into the 2-D autocorrelation coefficient matrix, the characterization of the real tissue may be lost by such a 1-slice operation, and the accuracy of 2-D US image analysis may be affected by the transducer position during image acquisition. An experienced observer is needed to control the transducer position to identify the relevant structures and locations precisely, which is not easy to reproduce in clinical applications.

From the viewpoint of texture analysis, we did not know how many views of a tumor image should be used to represent the whole texture information of a tumor.4,6 The answer may be as many as possible; however, that is not feasible with conventional 2-D US imaging. Two-dimensional US imaging plays a major role in clinical US; however, it is gradually being extended to 3-dimensional (3-D) US imaging. Three-dimensional US is a newly developed technique that has been introduced into clinical use and can provide more information about a tumor. The texture characteristics of tumors can be captured morphologically by 3-D US images. However, the quality of 3-D US images of the breast is limited by the US scanning equipment owing to image reconstruction. In this article, to overcome this quality limitation, we present a new algorithm for the diagnosis of breast tumors that uses 3-D US images and accurately distinguishes between benign and malignant disease.

Conventional 2-D US image analysis of breast lesions includes the following: shape, width-depth ratio, lesion compressibility, margin characterization, echogenicity, and echotexture. Additional 3-D US information displayed in the multiplanar mode offers the new aspect of the coronal plane. It allows classification of breast masses by retracting and compressing patterns, as described by Rotten et al.7 Three-dimensional US imaging has the potential to be a reliable diagnostic tool because the US image is not a transmission image but a cross-sectional image, and the inner tissue of a tumor can be easily observed.8 Moreover, because a 2-D US image represents a thin slice of the patient's anatomy in a particular orientation and location, it is difficult to locate the same image plane in subsequent examinations.9 We propose that using 3-D US images to perform the breast examination will overcome the limitations of conventional 2-D US images. The aim of this study is to present a new approach to the diagnosis of breast cancer using 3-D US images and a neural network based on the coefficients of the 3-D autocorrelation matrix. A modified version of the neural network model was also developed to fit this multiple-coefficient structure.

DATA ACQUISITION

A radiologist (W.K.M.) supplied the entire database. Data were collected from January 1, 2001, to June 30, 2001. All supplied cases were used for analysis without selection. The test database contained 161 three-dimensional volumes of pathologically proven cases. There were 107 benign (ie, 6 tumors originally categorized as benign, 96 as indeterminate, and 5 as malignant) and 54 malignant (ie, 2 tumors originally categorized as benign, 13 as indeterminate, and 39 as malignant) US images of breast tumors in the database. The tumor sizes ranged from 0.81 to 3.54 cm in diameter for all 161 cases. One of us (W.K.M.) categorized all of the tumors in the database using the Breast Imaging Reporting and Data System of the American College of Radiology as follows: C3, probably benign, was used for the benign tumors; C4, suspicious, was used for all indeterminate cases; and C5, highly suggestive of malignancy, was used for the malignant tumors. All solid nodules at US belonged to category C3 or higher. Most cases were categorized as indeterminate or malignant because we included only histologically confirmed cases. Table 1 lists the number of tumors for the various specific subtypes of the 161 benign and malignant tumors in this study.

Table 1. Specific Subtypes of the 161 Benign and Malignant Tumors Studied

The subimages of the volume of interest (VOI) were manually selected by a breast surgeon (D.-R.C.) who was familiar with breast US interpretation and CAD applications but was masked to the tissue diagnosis and the radiologist's clinical categorization before the VOI selections. The selected VOIs were outlined, leaving a 1- to 2-mm border around the lesion. The VOI had to include the entire extent of the tumor margins.

Three-dimensional US imaging was performed using a scanner (Voluson 530; Kretz Technik, Zipf, Austria) and a small port transducer (Voluson S-VNW5-10; Kretz Technik). The transducer, a linear-array transducer with a frequency of 5 to 10 MHz, has a scan width of 40 mm (switchable in 3-mm steps) and a sweep angle of 20° to 25° to allow a 3-D volume scan to be performed. All of the US imaging was performed with the patient in a supine position with an arm extended overhead. No stand-off pad was used. The lesion of interest was refocused after the 2-D examination was completed; it was then analyzed using 3-D US imaging of the breast. The volume scan is performed automatically by a slow-tilt movement of a sectorial mechanical transducer. The process of acquiring the 2-D US images at regular angular intervals results in a set of 2-D image planes arranged in a fanlike geometric pattern, as shown in Figure 1A and B. Volume data were obtained through the reconstruction of these 2-D US images and were saved into a computer file on a magneto-optical disk. The magneto-optical file could be read and analyzed using a personal computer. We developed a program to obtain the 3-D US data directly from the 3-D volume file. Before using this program, the 3-D volume file must be saved in Cartesian coordinates using the Voluson 530D or 3D View 2000 program. In a 3-D volume file in Cartesian coordinates, the volume data set is a set of consecutive 2-D image planes, as shown in Figure 1C.

Figure 1.

A, A set of 2-dimensional (2-D) image planes arranged in a fanlike geometric pattern. B, The reconstructed 3-dimensional volume. C, The 3-dimensional ultrasonographic data in Cartesian coordinates.

3-D AUTOCORRELATION FUNCTION

Each 3-D US volume image consists of many pixels with different gray level intensity values. A breast tumor on US is usually hypoechoic, that is, its pixels have a lower intensity than the surrounding tissue.10 The correlation between neighboring pixels within the 3-D US images is a salient feature of a tumor. The normalized autocorrelation coefficients11 can be used to reflect the interpixel correlation within an image. In general, the 3-D autocorrelation coefficients are further modified into a mean-removed version so that images with different brightnesses but a similar texture yield similar autocorrelation features. This modified version is expressed as

γ(Δm, Δn, Δp) = A(Δm, Δn, Δp) / A(0, 0, 0),   (1)

where

A(Δm, Δn, Δp) = [1 / ((M − Δm)(N − Δn)(P − Δp))] Σx Σy Σz |f(x, y, z) − f̄| · |f(x + Δm, y + Δn, z + Δp) − f̄|,   (2)

with the sums running over x = 0, …, M − 1 − Δm; y = 0, …, N − 1 − Δn; and z = 0, …, P − 1 − Δp. Here γ(Δm, Δn, Δp) is the normalized autocorrelation coefficient between pixel (i, j, k) and pixel (i + Δm, j + Δn, k + Δp) in an image of size M × N × P, and f̄ is the mean value of f(x, y, z). The absolute value is adopted in the above equation because, when the gray level of a pixel is subtracted from the mean, a negative value may result. This study computed the 3-D autocorrelation coefficients for each breast tumor US image and used these coefficients as interpixel features to distinguish between benign and malignant tumors.
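
To make the computation concrete, the following is a minimal sketch (not the authors' code) of the mean-removed, normalized 3-D autocorrelation coefficient defined above, assuming the volume of interest is already available as a NumPy array of gray-level values; the function name and array layout are illustrative assumptions.

```python
import numpy as np

def autocorr_3d(f: np.ndarray, dm: int, dn: int, dp: int) -> float:
    """Mean-removed, normalized 3-D autocorrelation coefficient gamma(dm, dn, dp)
    for a gray-level volume f of shape (M, N, P)."""
    f = f.astype(np.float64)
    M, N, P = f.shape
    g = np.abs(f - f.mean())                                          # |f(x, y, z) - mean(f)|
    num = np.mean(g[:M - dm, :N - dn, :P - dp] * g[dm:, dn:, dp:])    # A(dm, dn, dp)
    den = np.mean(g * g)                                              # A(0, 0, 0)
    return float(num / den) if den > 0 else 0.0
```

By construction, autocorr_3d(f, 0, 0, 0) equals 1, as stated for the normalized matrix later in the text.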

Because the data acquisition system uses a 3-D volume transducer to scan over the breast, the volume scan is performed automatically by a slow-tilt movement that acquires sectorial 2-D planes. Each scanned plane is formatted as an image in a fanlike geometric pattern and is not parallel to the other planes. To form a 3-D volume from a series of these planes, several coordinate-system transformations are required. The data acquisition system reconstructs image coordinates into volume coordinates by interpolation and enlargement or scale-down. Therefore, the number of pixels per centimeter, called the "pixel rate," differs for each 3-D US image, ranging from 20 to 115 pixels per centimeter. Because this large variation in resolution would influence the diagnostic result, the proposed program recalculates Δm, Δn, and Δp in the 3-D autocorrelation expression to compensate for the variation. The calibrated expression is defined as

Δm = Δn = Δp = Pixel Rate × Autocorrelation Step,   (3)

where the unit of the pixel rate is pixels per centimeter and the unit of the autocorrelation step is centimeters. In this article, the value of the autocorrelation step is 0.018 cm.
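
As a small illustration of equation 3, a helper along the following lines could convert the fixed 0.018-cm autocorrelation step into a per-volume pixel lag; rounding to a whole number of pixels is our assumption, since the text does not state how fractional values are handled.

```python
def calibrated_step(pixel_rate_px_per_cm: float, step_cm: float = 0.018) -> int:
    """Equation 3: lag (in pixels) = pixel rate (pixels/cm) x autocorrelation step (cm)."""
    return max(1, round(pixel_rate_px_per_cm * step_cm))
```

For the reported pixel-rate range of 20 to 115 pixels per centimeter, this gives lags on the order of 1 to 2 pixels.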

3-D US IMAGES BASED ON THE 3-D AUTOCORRELATION METHOD

The proposed method consists of 2 stages, as shown in Figure 2. In the first stage, rough subimages of the VOI are determined and the autocorrelation coefficients are generated. Figure 3 illustrates the shape of the VOI in the 3-D US image, which is a cuboid. If a physician has already identified the tumor in the 3-D US image, the pixels of the VOI can be extracted. To extract the subimages of the VOI, the physician defines 3 frames, each as a rectangular zone that just includes the tumor border, as described earlier. They are the first, middle, and last frames. As shown in Figure 4, the first frame is a rectangular zone in which the tumor first appears; the middle frame is a rectangular zone that includes the largest diameter of the tumor; and the last frame is a rectangular zone in which the tumor is about to disappear.

Figure 2.

The structure of the neural network tissue classification model. 3-D indicates 3-dimensional; VOI, volume of interest. For a detailed description, see the "3-D US Images Based on the 3-D Autocorrelation Method" subsection of the "Methods" section.

Figure 3.

Volume of interest (VOI) in the 3-dimensional (3-D) ultrasonographic images.

Figure 4.

The program screen of the proposed 3-dimensional (3-D) autocorrelation scheme.


When the frames are defined, a 3-D VOI array is extracted from the 3-D volume file and the 3-D normalized autocorrelation matrix is then generated by the 3-D autocorrelation function. As shown in the second stage in Figure 2, we used the modified version of the 3-D normalized autocorrelation matrix as the input layer of the neural network. Each dimension of the autocorrelation matrix can be of any size. In our experiment, Δm, Δn, and Δp each take 3 values for all cases (to avoid an excessive number of coefficients), so processing each 3-D US image produces a 3 × 3 × 3 autocorrelation matrix (ie, 27 autocorrelation coefficients). Because the autocorrelation matrix is normalized, γ(0, 0, 0) is always 1. Thus, excluding the element γ(0, 0, 0), the remaining autocorrelation coefficients form a 26-dimensional characteristic vector.
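
A sketch of this feature-extraction step, reusing the autocorr_3d and calibrated_step sketches above (both our illustrative assumptions, not the authors' code): lags of 0, 1, and 2 calibrated steps along each axis give the 3 × 3 × 3 matrix, and dropping γ(0, 0, 0) leaves the 26-dimensional vector.

```python
import numpy as np

def feature_vector(voi: np.ndarray, pixel_rate_px_per_cm: float) -> np.ndarray:
    """26-dimensional characteristic vector from a 3-D VOI (assumes autocorr_3d
    and calibrated_step as sketched earlier)."""
    step = calibrated_step(pixel_rate_px_per_cm)
    coeffs = np.empty((3, 3, 3))
    for a in range(3):
        for b in range(3):
            for c in range(3):
                coeffs[a, b, c] = autocorr_3d(voi, a * step, b * step, c * step)
    return coeffs.ravel()[1:]          # gamma(0, 0, 0) == 1 is dropped
```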

These characteristic vectors were then fed into the source nodes in the input layer of the multilayer feed-forward neural network. The architecture of the neural network scheme is simple.12 Our proposed scheme uses only 1 type of neural network model, a multilayer feed-forward neural network with 27 input nodes, 10 hidden nodes, and 1 output node. The 26-dimensional characteristic vectors, in combination with a predefined threshold of the input layer, constitute the input signals of the neural network.

When the performance is suboptimal for new 3-D US images, these images are added to the original training set to produce a new set of synaptic weight vectors by adjusting the free parameters. The value produced by the output layer is used to decide whether the tumor is benign or malignant. When the output value for a 3-D US image is close enough to 1, the tumor in the image is categorized as malignant; conversely, when the output value is near 0, the system categorizes the tumor as benign.
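
The text specifies the topology (27 inputs, 10 hidden nodes, 1 output) and how the output is read, but not the implementation details, so the following is only a structural sketch with placeholder weights; in practice the weights come from training, and the decision threshold is tunable (Table 3 reports results for several values).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class FeedForwardNet:
    """27-input (26 features plus a fixed threshold node), 10-hidden, 1-output network."""

    def __init__(self, n_in: int = 27, n_hidden: int = 10, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (n_in, n_hidden))   # placeholder weights
        self.w2 = rng.normal(0.0, 0.1, (n_hidden, 1))      # (trained in practice)

    def forward(self, features26: np.ndarray) -> float:
        x = np.append(features26, 1.0)        # append the fixed threshold input
        h = sigmoid(x @ self.w1)
        return float(sigmoid(h @ self.w2)[0])

    def classify(self, features26: np.ndarray, threshold: float = 0.5) -> str:
        # Output near 1 -> malignant; near 0 -> benign.
        return "malignant" if self.forward(features26) >= threshold else "benign"
```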

SIMULATIONS (k-FOLD CROSS-VALIDATION)

The 161 three-dimensional US images in the database, as listed in Table 1, were randomly divided into m groups. The m − 1 groups, called "training groups," were used to train the neural network and determine the set of synaptic weight vectors, and the remaining group was set aside as the outside group. The training process was stopped when the improvement in error distortion was smaller than 0.1 or when the number of training iterations exceeded 10 000. Once the process stopped, the network was tested on the outside group and the result was recorded. The outside group was then exchanged with one of the training groups and training was repeated. This process continued until every one of the m groups had served as both a training group and the outside group. In this simulation, m is 5 and each group has about thirty-two 3-D US images.
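
A sketch of this m-fold scheme (m = 5), assuming a feature matrix X of shape (161, 26) and labels y (1 = malignant, 0 = benign). A scikit-learn MLPClassifier with 10 hidden nodes stands in for the network described above, and the stopping rule (error improvement < 0.1 or more than 10 000 iterations) is only approximated by the tol and max_iter parameters; these substitutions are our assumptions.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier

def cross_validate(X: np.ndarray, y: np.ndarray, m: int = 5, seed: int = 0) -> float:
    """Mean accuracy over m folds; each group serves once as the outside group."""
    accuracies = []
    for train_idx, test_idx in KFold(n_splits=m, shuffle=True, random_state=seed).split(X):
        clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=10_000, tol=1e-4,
                            random_state=seed)
        clf.fit(X[train_idx], y[train_idx])
        accuracies.append(clf.score(X[test_idx], y[test_idx]))
    return float(np.mean(accuracies))
```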

2-D US IMAGES

In this part, we used the correlation between neighboring pixels within the 2-D US images as the classifying features of the tumor. The normalized autocorrelation coefficients can be used to reflect the interpixel correlation within an image. Similar to the autocorrelation features for 3-D images, the 2-D normalized 5 × 5 autocorrelation coefficients were used for 2-D US image analysis, as described in detail by Chen et al3 in 1999. Because of the variation in pixel resolution in each volume data set, the 2-D autocorrelation coefficients also need to be modified by equation 3. The subimage of the region of interest is extracted from the middle frame of the VOI in the 3-D US images as the database for the 2-D CAD system, because the middle frame is the rectangular zone that contains the largest diameter of the tumor. The performance of the 2-D US image diagnostic method is compared with that of the proposed scheme for 3-D US images.
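
For comparison, a 2-D analogue of the autocorrelation sketch above, assuming the middle frame of the VOI is available as a 2-D NumPy array; the 5 × 5 size follows the text, and the per-volume lag from equation 3 is passed in as step.

```python
import numpy as np

def autocorr_2d_matrix(frame: np.ndarray, step: int = 1, size: int = 5) -> np.ndarray:
    """5 x 5 mean-removed, normalized 2-D autocorrelation matrix of a gray-level frame."""
    f = frame.astype(np.float64)
    g = np.abs(f - f.mean())
    den = np.mean(g * g)
    coeffs = np.empty((size, size))
    for a in range(size):
        for b in range(size):
            dm, dn = a * step, b * step            # frame must exceed the largest lag
            num = np.mean(g[:g.shape[0] - dm, :g.shape[1] - dn] * g[dm:, dn:])
            coeffs[a, b] = num / den if den > 0 else 0.0
    return coeffs
```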

The receiver operating characteristic (ROC) curve was used to represent the diagnostic performance and the χ2 test was used for statistical analysis. In this study, the software package LABROC1 by C. E. Metz, PhD, University of Chicago, Chicago, Ill, was used to fit the ROC curve.
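
LABROC1 fits a binormal ROC curve to the output values; as a rough, purely empirical stand-in (not the same fitting method), the area under the ROC curve can be computed from the network's continuous outputs, for example:

```python
from sklearn.metrics import roc_auc_score

def empirical_az(y_true, scores) -> float:
    """Empirical area under the ROC curve (an Az analogue, not a binormal fit)."""
    return float(roc_auc_score(y_true, scores))
```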

The overall performance of the process was evaluated on a total of 161 cases, that is, 107 benign and 54 malignant tumors. The overall performance of the neural network was examined with the ROC area index Az over the testing output values. In Figure 5, the diagnostic performances for 2-D and 3-D are compared. Three-dimensional US imaging (with an Az value of 0.9717) performed better than 2-D US imaging (with an Az value of 0.8457). Table 2 lists the number of training iterations and the error distortions in each training set for 3-D and 2-D, respectively. By comparing the numbers of iterations and error distortions in Table 2, the convergence of the neural network training can be clearly discerned. Convergence using the 3-D autocorrelation method is faster than with the conventional method, and the level of error distortion with the 3-D autocorrelation method is also lower than that found with the conventional method. These results confirm that the performance using 3-D interpixel correlations in 3-D US images is better than the 2-D version in classifying benign and malignant lesions. The distortion error is calculated as the absolute difference between the desired output and the actual output of the neural network. Table 3 lists the performance of the proposed 3-D and the conventional 2-D methods for different threshold values. In the 3-D set, with a threshold value of 0.1, the neural network correctly identified 98 (91%) of 107 benign tumors and 50 (92%) of 54 malignant tumors. In the 2-D set, with a threshold value of 0.2, the neural network correctly identified 86 (80%) of 107 benign tumors and 38 (70%) of 54 malignant tumors. The performance of the 3-D and 2-D methods is compared in Table 4. Accuracy for 3-D vs 2-D was 91.9% vs 77.0% (P<.001); sensitivity, 92.6% vs 70.4% (P<.005); specificity, 91.6% vs 80.4% (P<.02); positive predictive value, 84.7% vs 66.4% (P<.02); and negative predictive value, 96.1% vs 84.3% (P<.005). The performance of the 3-D method was statistically significantly better than that of the 2-D method by the χ2 test. Accuracy, sensitivity, specificity, positive predictive value, and negative predictive value for the 3-D and 2-D methods are shown in Figure 6. All diagnostic measurements of the proposed 3-D scheme are higher than those of the conventional 2-D one.
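
For reference, the summary statistics compared above follow directly from the confusion matrix, with malignant treated as the positive class; a minimal sketch (assuming 0/1 arrays and nonzero denominators):

```python
import numpy as np

def summary_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Accuracy, sensitivity, specificity, PPV, and NPV (1 = malignant)."""
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
    }
```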

Figure 5.

The receiver operating characteristic (ROC) curves for the neural network in the classification of malignant and benign tumors. The mean (SD) Az value for the ROC curve of our proposed 3-dimensional (3-D) ultrasonographic (US) method is 0.9717 (0.013) and the Az value of conventional 2-dimensional (2-D) US method is 0.8457 (0.03).

Table 2. The Number of Malignant and Benign Cases of Breast Tumors, Number of Iterations, and Error Distortions in Each Training Set for 3-Dimensional (3-D) and 2-Dimensional (2-D) Ultrasonographic Images
Table 3. Performance of Neural Network for Different Threshold Values for the Proposed 3-Dimensional (3-D) and Conventional 2-Dimensional (2-D) Ultrasonographic Imaging
Table 4. Classification and Number of Breast Nodules Comparing the Proposed 3-Dimensional (3-D) With the Conventional 2-Dimensional (2-D) Ultrasonographic Imaging Results
Figure 6.

Accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) for the diagnostic performance of the 2-dimensional (2-D) and 3-dimensional (3-D) methods. CAD indicates computer-aided diagnosis.


Correlation between neighboring pixels provides important information for CAD using medical US images. This article proposes a new neural network diagnostic system that adopts 3-D interpixel correlation features instead of our previous 2-D features to obtain a better diagnostic result. Our previous 2-D autocorrelation coefficients were calculated from 2-D US images with the same pixel resolution. However, 3-D US images, even from the same machine, vary widely in pixel resolution. Hence, the 3-D autocorrelation coefficients must be modified as noted in equation 3 to account for the resolution variation.

In clinical practice, frankly benign breast lesions (BI-RADS [Breast Imaging Reporting and Data System] category 2) seen on US are not confusing to an interpreter, and the CAD system has no role there. But for a lesion at US that belongs to category C3 or higher, seeking a second opinion to increase diagnostic confidence is helpful if the system providing that second opinion works well. This is especially true for an inexperienced US interpreter, and this study focused on that point. Further studies are under way using a larger test set of tumor images. This scheme deals with the classification part of the problem (malignant vs benign) and excludes the detection part. The diagnostic procedure, however, still requires the physician to identify the tumor. Sometimes this is inefficient if there are many such cases to deal with or if the US image is not sharp. Consequently, we must still strive to find efficient ways to fully computerize the diagnosis of tumors using US images. Enhancement of the accuracy of US technology is also needed.12 For example, we can improve the accuracy of the 2-D accumulative and multiple autocorrelation schemes by capturing the 2-D slices in another 2 directions. Moreover, an artificial intelligence technique called a "support vector machine"13 has recently been used for research purposes. Consequently, the idea of using a support vector machine to replace the judgment and learning functions of the neural network algorithm is one of the most important issues for our future work. We hope that better use of the judgment and learning functions of a support vector machine will further improve the precision of the diagnosis of breast cancer.

Giger et al14 at the University of Chicago, Chicago, Ill, have performed many studies of CAD for breast US. Recently, they performed an ROC study with and without the aid of a CAD system. Observers (6 experts and 6 nonexperts) gave their likelihood that the lesion was malignant and also their patient management recommendation (biopsy or follow-up). The output of their system shows the estimated probability of malignancy of the lesion (eg, 81%), and the computer also displays a variety of lesions that have characteristics similar to the one at hand and for which the diagnosis is known. This idea is similar to the one in our previous article.5

Conventional 2-D US of the breast is increasingly used in surgical clinical practice because it offers many benefits compared with other medical imaging techniques. Nevertheless, conventional 2-D US images cannot convey the entire US information of a solid breast lesion, whereas stored 3-D US can offer comprehensive information on all 2-D aspects of the lesion and, in addition, simultaneously provide the coronal plane. This additional information has proved helpful for both clinical applications15 and CAD using texture analysis. The proposed systems for 2-D CAD and 3-D CAD are expected to be useful computer-aided diagnostic tools for classifying benign and malignant tumors on ultrasonograms, and they can provide a second reading to help reduce misdiagnosis. They can be routinely established for office-based surgical US of the breast and can easily be implemented on existing commercially available diagnostic US machines; all that is required for the CAD system is a personal computer with CAD software. In this study, accuracy, sensitivity, specificity, positive predictive value, and negative predictive value for CAD were improved using 3-D instead of 2-D US imaging. This suggests that using 3-D US images for CAD represents a potentially significant advantage over 2-D US imaging. The artificial neural network ROC performance based on the 2-D US regions of interest is significantly lower in this study (Az = 0.85) than what we reported in a previous study (Az = 0.96).3 One of the most important reasons for this finding was that the difficulty level of this data set (C4 in most cases) was higher than before (mostly C3).

Corresponding author: Dar-Ren Chen, MD, Department of General Surgery, China Medical College and Hospital, 2 Yer-Der Rd, Taichung, Taiwan, Republic of China (e-mail: dlchen88@ms13.hinet.net).

Accepted for publication November 9, 2002.


REFERENCES

1. Raymond HW. Letter from the editor. Semin Ultrasound CT MR. 2000;21:285.
2. Stavros AT, Thickman D, Rapp CL, et al. Solid breast nodules: use of sonography to distinguish between benign and malignant lesions. Radiology. 1995;196:123-134.
3. Chen DR, Chang RF, Huang YL. Computer-aided diagnosis applied to US of solid breast nodules by using neural networks. Radiology. 1999;213:407-412.
4. Chen DR, Chang RF, Huang YL, et al. Texture analysis of breast tumors on sonograms. Semin Ultrasound CT MR. 2000;21:308-316.
5. Chen DR, Chang RF, Huang YL. Breast cancer diagnosis using self-organizing map for sonography. Ultrasound Med Biol. 2000;26:405-411.
6. Garra BS, Krasner BH, Horii SC, Ascher S, Muk SK, Zeman RK. Improving the distinction between benign and malignant breast lesions: the value of sonographic texture analysis. Ultrason Imaging. 1993;15:267-285.
7. Rotten D, Levaillant J-M, Zerat L. Use of three-dimensional ultrasound mammography to analyze normal breast tissue and solid breast masses. In: Merz E, ed. 3-D Ultrasonography in Obstetrics and Gynecology. Philadelphia, Pa: Lippincott Williams & Wilkins Inc; 1998:73-78.
8. Cheng XY, Akiyama I, Itoh K, et al. Breast tumor diagnosis system using three-dimensional ultrasonic echography. In: Proceedings of the 19th International Conference of the IEEE Engineering in Medicine and Biology Society; October 30-November 2, 1997; Chicago, Ill. Storrs, Conn: IEEE Standard Office; 1997:517-520.
9. Fenster A, Cardinal N, Tong S, Downey DB. Development and evaluation of a 3D ultrasound imaging system. In: Proceedings of the IEEE Instrumentation and Measurement Technology Conference; May 18-21, 1998; St Paul, Minn. Vol 1. Storrs, Conn: IEEE Standard Office; 1998:562-565.
10. Cheng XY, Akiyama I, Itoh K, Wang Y, Taniguchi N, Nakajima M. Automated detection of breast tumors in ultrasonic images using fuzzy reasoning. In: Proceedings of the IEEE International Conference on Image Processing; October 26-29, 1997; Washington, DC. Vol 3. Storrs, Conn: IEEE Standard Office; 1997:420-423.
11. Gonzalez RC, Woods RE. Digital Image Processing. Reading, Mass: Addison-Wesley Publishing Co; 1992:312-315.
12. Sahiner B, Chan HP, Petrick N, et al. Classification of mass and normal breast tissue: a convolution neural network classifier with spatial domain and texture images. IEEE Trans Med Imaging. 1996;15:598-610.
13. Cristianini N, Shawe-Taylor J. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge, England: Cambridge University Press; 2000.
14. Giger ML, Hallaq HA, Huo Z, et al. Computerized analysis of lesions in US images of the breast. Acad Radiol. 1999;6:665-674.
15. Rotten D, Levaillant JM, Zerat L. Analysis of normal breast tissue and of solid breast masses using three-dimensional ultrasound mammography. Ultrasound Obstet Gynecol. 1999;14:114-124.
