Original Article

Computer-Aided Diagnosis for Surgical Office-Based Breast Ultrasound

Ruey-Feng Chang, PhD; Wen-Jia Kuo, MS; Dar-Ren Chen, MD; Yu-Len Huang, PhD; Jau-Hong Lee, MD; Yi-Hong Chou, MD
Author Affiliations

From the Department of Computer Science and Information Engineering, National Chung Cheng University, Chiayi, Taiwan (Drs Chang and Huang and Mr Kuo); Department of General Surgery, China Medical College and Hospital, Taichung, Taiwan (Drs Chen and Lee); and the Department of Radiology, Veterans General Hospital, Taipei, Taiwan (Dr Chou).


Arch Surg. 2000;135(6):696-699. doi:10.1001/archsurg.135.6.696.

Hypothesis  The computer-aided diagnostic system is an intelligent system with great potential for categorizing solid breast nodules. It can be used conveniently for surgical office-based digital ultrasonography (US) of the breast.

Design  Retrospective, nonrandomized study.

Setting  University teaching hospital.

Patients  We retrospectively reviewed 243 medical records with digital US images of the breast of pathologically proved tumors: benign breast tumors from 161 patients (ie, 136 fibroadenomas and 25 fibrocystic nodules) and carcinomas from 82 patients (ie, 73 invasive duct carcinomas, 5 invasive lobular carcinomas, and 4 intraductal carcinomas). The digital US images were consecutively recorded from January 1, 1997, to December 31, 1998.

Intervention  The physician selected the region of interest on the digital US image. A learning vector quantization model with 24 autocorrelation texture features was then used to classify the tumor as benign or malignant. In the experiment, 153 cases were arbitrarily selected as the training set of the learning vector quantization model and the remaining 90 cases were used to evaluate its performance. One experienced radiologist who was completely blind to these cases was asked to classify the tumors in the test set.

Main Outcome Measure  Contribution of breast US to diagnosis.

Results  The performance comparison results illustrated the following: accuracy, 90%; sensitivity, 96.67%; specificity, 86.67%; positive predictive value, 78.38%; and negative predictive value, 98.11% for the computer-aided diagnostic (CAD) system and accuracy, 86.67%; sensitivity, 86.67%; specificity, 86.67%; positive predictive value, 76.47%; and negative predictive value, 92.86% for the radiologist.

Conclusion  The proposed CAD system provides an immediate second opinion, and an accurate preoperative diagnosis can be routinely established for surgical office-based digital US of the breast. The diagnostic rate was even better than the results of an experienced radiologist. The high negative predictive rate of the CAD system can avert biopsies of benign lesions. The system can be easily implemented on existing commercial diagnostic digital US machines; for most available machines, all that would be required is a personal computer loaded with the CAD software.


BREAST ULTRASOUND (US) has become an increasingly integral part of the evaluation, diagnosis, and treatment of breast disease. It is the most useful adjunctive technique to mammography, and it plays an important role in differentiating cystic from solid masses and in guiding interventional procedures. With US, the levels of diagnostic confidence and accuracy depend on the quality of the examination. Good-quality equipment must be used to produce high image resolution, and a skilled operator must properly mark the region of interest (ROI) to achieve the correct diagnosis and differential diagnosis. The rapid development of US has made it advisable to reconsider the clinical value of breast US, especially with high-resolution, real-time US and computer-aided diagnostic (CAD) systems. High-resolution probes, computer-enhanced imaging, and portable machinery have led to the widespread adoption of real-time US by breast surgeons. Surgeons achieve much greater clinical correlation than radiologists by performing the digital US studies themselves. Breast surgeons should not be excluded from the multidisciplinary care of a patient with breast disease; on the contrary, that care must include the surgeon, radiologist, pathologist, and patient. Ultrasonographic examination is painless, requires no roentgenographic exposure, and, with proper training, may be easily performed in a timely, convenient manner in a physician's office. Hieken and Velasco1 found that it took 3 to 4 months of learning breast US before the surgeon felt comfortable with the technique; achieving the same level of proficiency as an experienced radiologist would take a surgeon much longer. A CAD system can shorten this long learning curve and optimize performance.

Recently, technical advances in US have expanded the potential usefulness of this modality for the evaluation of breast lesions. Scientists were once less familiar with neural networks; the Joint Conference on Neural Networks, held in San Diego, Calif, in early 1989, helped this field grow considerably. A neural network comprises computer programs that learn to make diagnostic predictions, and its construction is simple. Neural networks learn by example and can deal with ambiguous data; they can provide an educated guess, something conventional computer programs could not do well, or at all. Previous articles2-5 have suggested that neural networks can assist physicians with diagnosis. They have been applied to the prediction of breast cancer6 and ovarian malignancy7 and are potentially useful for the differential diagnosis of interstitial lung disease.8 However, the aforementioned studies used databases of clinical findings. Texture features are helpful for classifying masses and normal tissue on mammograms,9 and the most useful features were those derived from co-occurrence matrices of the images.10 In this study, the physician located the ROI on digital US images, and the autocorrelation features of the tumor were used to make the diagnosis with learning vector quantization (LVQ) neural networks.

Improved classification performance can be achieved by combining the supervised learning rules of LVQ11 with the self-organizing map neural network.12 That is, input samples are presented to these neural network models along with their correct classification labels. In this study, the autocorrelation feature vector and the desired result are used as the input signals for the LVQ training process. The classification label produced by the best-match neuron is used to decide whether a tumor image is benign or malignant.
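
For illustration only, and not the authors' LVQ_PAK implementation, the best-match decision could be sketched as follows, assuming a trained codebook of 24-dimensional prototype vectors with benign/malignant labels (all names here are hypothetical):

```python
import numpy as np

def classify_tumor(feature_vec, codebook, codebook_labels):
    """Return the label (0 = benign, 1 = malignant) of the best-matching codeword.

    feature_vec     : (24,) autocorrelation feature vector of one tumor ROI
    codebook        : (K, 24) trained LVQ prototype vectors
    codebook_labels : (K,) class label of each prototype
    """
    distances = np.linalg.norm(codebook - feature_vec, axis=1)  # Euclidean distance to every codeword
    return codebook_labels[np.argmin(distances)]                # label of the nearest (best-match) neuron
```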

The US examination was performed using a handheld 7.5-MHz linear array transducer; no acoustic standoff pad was used. All solid masses, measuring 0.8 to 3.7 cm, were sampled percutaneously by fine-needle aspiration or open biopsy and then correlated with pathological features. We retrospectively reviewed 243 medical records with digital US images of the breast of pathologically proved tumors: benign breast tumors from 161 patients (ie, 136 fibroadenomas and 25 fibrocystic nodules) and carcinomas from 82 patients (ie, 73 invasive duct carcinomas, 5 invasive lobular carcinomas, and 4 intraductal carcinomas). The images were consecutively recorded from January 1, 1997, to December 31, 1998. One breast surgeon (D.-R.C.) captured all digitized US images and sampled all aspiration specimens. The patients' ages ranged from 17 to 64 years (average age, 42 years).

The ROI is manually selected as a region extending 1 to 2 mm beyond the lesion margins in all directions and is then saved as a file for later analysis by the proposed LVQ system. A real-time digitized US monochrome image and an ROI subimage of a tumor are shown in Figure 1. The texture correlation between neighboring pixels within an ROI subimage is used to classify the tumor. We adopted the normalized autocorrelation coefficients13 as the texture features of a tumor. The 2-dimensional normalized autocorrelation coefficient between pixel $(i, j)$ and pixel $(i + \Delta m, j + \Delta n)$ in an image of size $M \times N$ is defined as

$$\gamma(\Delta m, \Delta n) = \frac{A(\Delta m, \Delta n)}{A(0, 0)},$$

where

$$A(\Delta m, \Delta n) = \frac{1}{(M - \Delta m)(N - \Delta n)} \sum_{x=0}^{M-1-\Delta m} \; \sum_{y=0}^{N-1-\Delta n} \big[f(x, y) - \bar{f}\big]\big[f(x + \Delta m, y + \Delta n) - \bar{f}\big],$$

and $\bar{f}$ is the mean value of $f(x, y)$. To remove the influence of brightness on the US images, this mean value is subtracted from each pixel's value.
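
A minimal sketch of this feature computation (our own illustration, not code from the article), following the definition above and the 5 × 5 matrix of lags used later in this study:

```python
import numpy as np

def autocorrelation_features(roi, delta=5):
    """Compute the delta x delta normalized autocorrelation matrix of an ROI image.

    roi   : 2-D array of gray levels (M x N)
    delta : number of lags in each direction (5 gives a 5 x 5 matrix and 24 features
            after dropping gamma(0, 0), which is always 1)
    """
    f = roi.astype(float) - roi.mean()         # remove the mean to suppress brightness effects
    M, N = f.shape
    A = np.empty((delta, delta))
    for dm in range(delta):
        for dn in range(delta):
            # average product of pixel pairs separated by the lag (dm, dn)
            A[dm, dn] = np.mean(f[: M - dm, : N - dn] * f[dm:, dn:])
    gamma = A / A[0, 0]                        # normalize so that gamma(0, 0) = 1
    return np.delete(gamma.ravel(), 0)         # 24-dimensional feature vector
```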

Figure 1. A 736 × 556-pixel digital image is captured from the ultrasonographic scanner. In a 1 × 1-cm rectangle, there are 58 × 58 = 3364 pixels. The region-of-interest rectangle is 1.98 × 1.66 cm, or 115 × 96 pixels.

In this study, the LVQ1 learning algorithm14 was selected to train the proposed LVQ model. We used the LVQ Program Package (LVQ_PAK, Version 3.1; prepared by the LVQ Programming Team of the Helsinki University of Technology Laboratory of Computer and Information Science, Helsinki, Finland, April 7, 1995); detailed descriptions of the learning algorithms can be found in the LVQ_PAK program documentation. The 2-dimensional normalized autocorrelation matrix is used as the input of the LVQ model, and the dimension of the matrix can be fixed for any size of image. In this work, a US ROI image produces a 5 × 5 autocorrelation matrix as the texture features. Notice that the value of γ(0, 0) is always 1 for a normalized autocorrelation matrix; hence, the remaining autocorrelation coefficients form a 24-dimensional image feature vector. To diminish the occurrence of dead neurons in the LVQ model, we remove them according to the training set after a number of training iterations. Meanwhile, an ambiguous codeword is split, using the members of the training set in its cluster, to create 2 new codewords. These steps are repeated until the number of codewords reaches an acceptable percentage of the training set.
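
The core LVQ1 update can be sketched as follows; this is an illustrative outline under the usual formulation of reference 14, not the LVQ_PAK code, and it omits the codebook initialization, dead-neuron removal, and codeword splitting described above:

```python
import numpy as np

def lvq1_train(X, y, codebook, codebook_labels, alpha=0.05, iterations=10000, seed=0):
    """One-sample-at-a-time LVQ1 training.

    X, y            : training feature vectors (n x 24) and labels (n,)
    codebook        : initial prototype vectors (K x 24), updated in place
    codebook_labels : class label of each prototype (K,)
    alpha           : initial learning rate, decreased linearly to 0
    """
    rng = np.random.default_rng(seed)
    for t in range(iterations):
        i = rng.integers(len(X))                              # pick a random training sample
        x, label = X[i], y[i]
        c = np.argmin(np.linalg.norm(codebook - x, axis=1))   # best-matching codeword
        lr = alpha * (1.0 - t / iterations)                   # linearly decreasing learning rate
        if codebook_labels[c] == label:
            codebook[c] += lr * (x - codebook[c])             # pull toward a correctly classified sample
        else:
            codebook[c] -= lr * (x - codebook[c])             # push away from a misclassified sample
    return codebook
```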

The arbitrarily selected 153 digital US images (ie, 101 benign breast tumors and 52 carcinomas) are used to train the LVQ; the test set contains 30 carcinomas and 60 benign breast tumors. To compare the performance of the proposed CAD system with that of a radiologist, an experienced radiologist (Y.-H.C.), familiar with breast US examinations and completely blind to these test cases, was asked to classify the tumors on a 3-point scale (where 1 indicates benign; 2, indeterminate; and 3, malignant). The "not benign" category combines the malignant and indeterminate cases of the sonographic classification according to Stavros et al15 and represents the total number of lesions requiring biopsy according to their US classification. Table 1 lists the classification of breast nodules by the proposed LVQ and the experienced radiologist.

Table 1. Classification of Breast Nodules by the Radiologist and the Proposed LVQ*

For the LVQ learning algorithm, the maximal number of iterations is limited to 10,000, and initial codebook sizes of 10, 20, 30, 40, and 50 are tested. We find that effective and similar performance is achieved with codebook sizes in the range of 20 to 30. Using an initial codebook size of 20 and limiting the number of codewords after splitting to 10% of the training set, the LVQ correctly identifies 29 of 30 malignant tumors and 52 of 60 benign tumors. Table 2 summarizes the performance of the LVQ diagnostic system and the radiologist (Y.-H.C.). The results illustrated the following: accuracy, 90%; sensitivity, 96.67%; specificity, 86.67%; positive predictive value, 78.38%; and negative predictive value, 98.11% for the CAD system, and accuracy, 86.67%; sensitivity, 86.67%; specificity, 86.67%; positive predictive value, 76.47%; and negative predictive value, 92.86% for the experienced radiologist.
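
As a worked check of the CAD figures (using the test-set counts above: 29 of 30 malignant and 52 of 60 benign tumors correctly identified), the five measures can be recomputed directly:

```python
# Test-set outcome of the LVQ classifier reported above
TP, FN = 29, 1    # malignant tumors called malignant / called benign
TN, FP = 52, 8    # benign tumors called benign / called malignant

accuracy    = (TP + TN) / (TP + TN + FP + FN)   # 81/90 = 90.00%
sensitivity = TP / (TP + FN)                    # 29/30 = 96.67%
specificity = TN / (TN + FP)                    # 52/60 = 86.67%
ppv         = TP / (TP + FP)                    # 29/37 = 78.38%
npv         = TN / (TN + FN)                    # 52/53 = 98.11%

print(f"accuracy {accuracy:.2%}, sensitivity {sensitivity:.2%}, "
      f"specificity {specificity:.2%}, PPV {ppv:.2%}, NPV {npv:.2%}")
```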

Table 2. Summary of Performance Between the Radiologist and the LVQ* Diagnostic System

Interpretive criteria established for breast lesions have been accepted as appropriate descriptors for the interpretation of breast abnormalities. The experienced radiologists Stavros et al15 and Skaane and Engedal16 reported that the sensitivity of breast US for malignancy was 98.4% and 99.55%, the specificity was 67.8% and 29%, the positive predictive value was 38% and 66%, and the negative predictive value was 99.5% and 98%, respectively; the overall accuracy reported by Stavros et al was 72.9%. If they are well trained, surgeons can be expected to reach the same accuracy of interpretation as experienced radiologists. This is not a hard task because the surgeon is the patient's primary care provider, and surgeons learn to perform US easily: they perform the cytology or biopsy themselves and can therefore correlate the image with the abnormality. However, because some features overlap between benign and malignant criteria, surgeons may sometimes have difficulty interpreting findings with confidence. The proposed CAD program is an intelligent system with great potential for categorizing solid breast nodules, with a high accuracy rate that compares favorably with the results of an experienced radiologist. Computer-aided diagnosis of radiological images has become a rapidly expanding field of research, and methods of image texture analysis are undergoing great development and use within medical imaging. The combination of image texture analysis and the automated decision making of the LVQ model provides an immediate second opinion, and an accurate preoperative diagnosis can be routinely established for surgical office-based digital US of the breast. The system can be easily implemented on existing commercial diagnostic digital US machines; for most available machines, all that would be required is a personal computer loaded with the CAD software.

Corresponding author: Dar-Ren Chen, MD, Department of General Surgery, China Medical College and Hospital, 2 Yer-Der Rd, Taichung, Taiwan (e-mail: dlchen88@ms13.hinet.net).

References

1. Hieken TJ, Velasco JM. A prospective analysis of office-based breast ultrasound. Arch Surg. 1998;133:504-508.
2. Gurney JW. Neural networks at the crossroads: caution ahead. Radiology. 1994;193:27-28.
3. Boone JM. Neural networks at the crossroads. Radiology. 1993;189:357-359.
4. Astion ML, Wilding P. The application of backpropagation neural networks to problems in pathology and laboratory medicine. Arch Pathol Lab Med. 1992;116:995-1001.
5. Piraino DW, Amartur SC, Richmond BJ, et al. Application of an artificial neural network in radiographic diagnosis. J Digit Imaging. 1991;4:226-232.
6. Baker JA, Kornguth PJ, Lo JY, Williford ME, Floyd CE. Breast cancer: prediction with artificial neural network based on BI-RADS standardized lexicon. Radiology. 1995;196:817-822.
7. Biagiotti R, Desii C, Vanzi E, Gacci G. Predicting ovarian malignancy: application of artificial neural networks to transvaginal and color Doppler flow. Radiology. 1999;210:399-403.
8. Asada N, Doi K, MacMahon H, et al. Potential usefulness of an artificial neural network for differential diagnosis of interstitial lung disease: pilot study. Radiology. 1990;177:857-860.
9. Sahiner B, Chan HP, Petrick N, et al. Classification of mass and normal breast tissue: a convolution neural network classifier with spatial domain and texture images. IEEE Trans Med Imaging. 1996;15:598-610.
10. Garra BS, Krasner BH, Horii SC, Ascher SM, Mun SK, Zeman RK. Improving the distinction between benign and malignant breast lesions: the value of sonographic texture analysis. J Ultrasound Med. 1993;13:267-285.
11. Kohonen T. Improved versions of learning vector quantization. Proc Int Joint Conference on Neural Networks. 1990;1:545-550.
12. Kohonen T, Oja E, Simula O, Visa A, Kangas J. Engineering applications of the self-organizing map. Proc IEEE. 1996;84:1358-1384.
13. Gonzalez RC, Woods RE. Image compression. In: Digital Image Processing. New York, NY: Addison-Wesley Longman Inc; 1992:312-315.
14. Kohonen T. The self-organizing map. Proc IEEE. 1990;78:1464-1480.
15. Stavros AT, Thickman D, Rapp CL, Dennis MA, Parker SH, Sisney GA. Solid breast nodules: use of sonography to distinguish between benign and malignant lesions. Radiology. 1995;196:123-134.
16. Skaane P, Engedal K. Analysis of sonographic features in the differentiation of fibroadenoma and invasive ductal carcinoma. AJR Am J Roentgenol. 1998;170:109-114.
