Int J CARS (2006) 1:461–485 DOI 10.1007/s11548-006-0031-y
CAR Posters
3D digital brain atlas construction using Talairach and Tournoux 88
Dingguo Chen (a) · Vipin Chaudhary (b) · Ishwar K. Sethi (a)
a Department of Computer Science and Engineering, Oakland University, Rochester, MI, USA
b Institute for Scientific Computing, Wayne State University, Detroit, MI, USA
Keywords 3D digital brain atlas · Talairach and Tournoux 88 · Nuages interpolation · Volume rendering
1 Introduction
In this poster, a 3D reconstruction and rendering algorithm using the classical Talairach and Tournoux 88 brain atlas slices (TT88) is proposed.
2 Methods
The steps to create 3D models and volume rendering from the 2D atlas images are:
Step 1: Open a slice sequence of TT88 [1]. Define objects and create boundary lines for each object. Segment the slices by color code to automatically detect object boundaries; finally, refine the boundaries manually by tracing them interactively. Boundary lines are organized by object groups for more effective management and more flexible use by the rendering functions.
Step 2: Interpolate additional cross-sections between each pair of adjacent original plates using the Nuages algorithm proposed by Geiger [2], and intersect the shell surface of the resulting solid at half the slice distance.
Step 3: Post-processing, including smoothing and polygon reduction [3], is applied for further improvement.
3 Experimental results
The experimental environment is an SGI workstation running IRIX 5.3 with 512 MB memory. The GLUT 2.0 and OpenGL 2.0 graphics libraries are used. Once resampled slices are generated as in Section 2, 3D volumetric rendering is used to create 3D models with an isotropic voxel dimension of no more than 1 mm along each axis, as shown in Fig. 1.
Fig. 1 Models created for the atlas structure. a TT88 coronal plane rendering, b labeling with color information
References
1. Nowinski WL, Fang A, Nguyen BT et al (1997) Multiple brain atlas database and atlas-based neuroimaging system. Comput Aided Surg 2(1):42–66
2. Geiger B (1993) Three-dimensional modeling of human organs and its application to diagnosis and surgical planning. Tech Rep RR No. 2105, Institut National de Recherche en Informatique et en Automatique (INRIA)
3. Melax S (1998) A simple, fast, and effective polygon reduction algorithm. Game Developers Mag 11:44–49

Quantitative evaluation of osteoporosis likelihood using multi-slice CT images
Masahiro Uehara (a) · Shinsuke Saita (a) · Mitsuru Kubo (a) · Yoshiki Kawata (a) · Noboru Niki (a) · Masako Ito (b) · Hiromu Nishitani (c) · Keigo Tominaga (d) · Noriyuki Moriyama (e)
a Department of Optical Science, University of Tokushima, Japan
b School of Medicine, Nagasaki University, Japan
c School of Medicine, Institute of Health Biosciences, the University of Tokushima Graduate School, Japan
d Tochigi Public Health Service Association, Japan
e National Cancer Center Research Center for Cancer Prevention and Screening, Chiba, Japan
Keywords Multi-slice CT image · Osteoporosis · Vertebra
1 Introduction
In Japan, about 10 million people are suffering from osteoporosis. We developed an algorithm to support the diagnosis of osteoporosis, using images taken with high-dose and low-dose multi-slice CT. From these, the density of the inner part of the cancellous bone region was classified.
2 Methods
First, the bone and vertebral regions were extracted. To classify the vertebral body, the endplate and intervertebral disc region was used. From this, the cancellous bone region in each vertebral body was obtained. The extracted cancellous bone was analysed.
3 Result
The algorithm which extracts the thoracic vertebra cancellous bone could classify 42 cases out of 42 cases (100%) with high-dose CT and 192 cases out of 229 cases (84%) with low-dose CT.
4 Conclusion
An algorithm was developed which classified the vertebral body by applying high-dose and low-dose multi-slice CT. The result of the cancellous bone region extraction and the decrease of the average CT value with increasing age were quantitatively verified (Figs. 1, 2).
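A minimal sketch of the kind of quantitative check described above (average CT value of the extracted cancellous bone region versus age). It assumes a CT volume and a binary mask as numpy arrays; the function names and the simple linear trend are illustrative, not the authors' implementation.

```python
import numpy as np

def mean_cancellous_ct_value(ct_volume, cancellous_mask):
    """Mean CT value (HU) inside an extracted cancellous bone mask."""
    return float(ct_volume[cancellous_mask].mean())

def age_trend(ages, mean_values):
    """Least-squares line mean_value = slope * age + intercept, a simple way to
    quantify the age-related decrease of the average CT value noted above."""
    slope, intercept = np.polyfit(ages, mean_values, 1)
    return slope, intercept
```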
Fig. 1 Cancellous bone region
Fig. 2 Analytical result of cancellous
References
1. Shiomi N, Kubo M, Niki N et al (2004) A computer aided diagnosis for osteoporosis using multi-slice CT images. Technical report of IEICE, vol 103, no 96, pp 31–36
2. Nishihara S, Fujita H et al (2005) Evaluation of osteoporosis in X-ray CT examination. Comput Med Imaging Graph 29:259–266

Vascular tree extraction from 3D images
Juan F. Carrillo (a) · Marcela Hernández Hoyos (a) · Eduardo E. Dávila (b) · Maciej Orkisz (b)
a Grupo Imagine – Grupo de Ing. Biomédica, Ing. de Sistemas y Computación, Univ. de los Andes, Bogota, Colombia
b CREATIS, CNRS 5515 and INSERM U630 Research Unit, Lyon, France
Keywords Segmentation · Clustering · K-means · Skeleton · Bifurcations
1 Introduction
Our work deals with the segmentation of branching tubular structures in 3D medical images. Its focus is the extraction of the vascular tree skeleton. The method has to cope with low signal-to-noise ratio, low contrast and with bright structures near the vessel, e.g. calcifications, fat, bones or heart cavities as in images of coronary arteries.
2 Methods
The method uses an iterative framework [1] which adds centerline points and detects bifurcations by analyzing the contents of a sphere sliding along the centerline. At each iteration, a local segmentation is performed within the sphere. Two innovations are proposed to reject unwanted structures while enforcing the vessels' tubular shape. (1) The local segmentation uses a K-means clustering algorithm with different metrics for different clusters. The metric used for the "vessel" cluster is the distance from a voxel to the cluster's 3D-space principal inertia axis, while the intensity difference with the cluster's centroid is used for the "background" cluster. This leads to the minimization of a weighted sum of the clusters' variances and of the "vessel" cluster's inertia moment. (2) A vesselness measure, based on the comparison of the segmented volume against a constructed vessel model, is used as the stopping criterion for the process.
3 Results and conclusion
The method was applied to 16 MRA and 12 CTA 3D images of different anatomic regions (Fig. 1). It efficiently detected and managed the bifurcations. Each image was processed in less than 5 min, which is fast enough to be used in daily physician practice. The method is a reasonable trade-off between processing time and required human interaction. Nevertheless, it is sensitive to the value of the weighting parameter.
Acknowledgments This work was partly supported by Colciencias project No. 12040416468.
References
1. Carrillo JF, Orkisz M, Hernández Hoyos M (2005) Extraction of 3D vascular tree skeletons based on the analysis of connected components evolution. In: 11th CAIP, France, Springer, pp 604–611
Fig. 1 Examples of segmentation results, from left to right: carotid MRA image, pulmonary CT image, coronary CT image and aortoiliac MRA image
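The per-sphere clustering with two different metrics can be sketched as follows. This is an illustrative toy version, not the authors' code: the initialisation (brightest half of the voxels) and the weight `w` that balances the spatial and intensity metrics are assumptions.

```python
import numpy as np

def two_metric_clustering(coords, intensities, n_iter=10, w=1.0):
    """K-means-like labelling of the voxels inside the sliding sphere, using the
    distance to the principal inertia axis for the "vessel" cluster and the
    intensity difference to the centroid for the "background" cluster."""
    labels = intensities > np.median(intensities)        # bright voxels start as "vessel"
    for _ in range(n_iter):
        vessel = coords[labels]
        if vessel.shape[0] < 3 or (~labels).sum() == 0:
            break
        centre = vessel.mean(axis=0)
        _, _, vt = np.linalg.svd(vessel - centre, full_matrices=False)
        axis = vt[0]                                      # principal inertia axis of the vessel cluster
        rel = coords - centre
        d_vessel = np.linalg.norm(rel - np.outer(rel @ axis, axis), axis=1)
        d_background = w * np.abs(intensities - intensities[~labels].mean())
        new_labels = d_vessel < d_background
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels
```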
A visualization tool for Geant4-based medical physics applications
Akinori Kimura (a) · Satoshi Tanaka (b) · Takashi Sasaki (c,d)
a Ashikaga Institute of Technology, Japan
b Ritsumeikan University, Shiga, Japan
c High Energy Accelerator Research Organization, Ibaraki, Japan
d CREST, JST, Saitama, Japan
Keywords Visualization · Dose · Modality · Ray-casting · Geant4 simulation
1 Introduction
Geant4 [1] is a toolkit for the simulation of the passage of particles through matter. It has recently been used for many medical physics applications [2]. The Geant4 toolkit includes graphics systems to render the simulated geometric definitions of the matter and the simulated particle trajectories. However, it is not suitable for displaying complicated modality images and does not support the display of the calculated dose distribution. Therefore, we have been developing a visualization tool (GRAPE) which is able to display both complicated modality images and the calculated dose distribution. GRAPE helps us to confirm simulation results intuitively and easily.
2 Visualization tool
The GRAPE display consists of one 3D image, three 2D images and two setting histograms, as shown in Fig. 1. For the 3D image, we adopt the ray-casting method with early ray termination for speedup. It is possible to visualize the 3D image of both the modality image and the dose distribution with high accuracy. The 2D images consist of three axial sections using MPR (Multi-Planar Reformat). Opacity curves and color maps for the modality image and the dose distribution can be edited individually in the histograms. Figure 2 shows a complex structure of a human head visualized with an opacity curve and color map different from those of Fig. 1.
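A minimal illustration of front-to-back ray casting with early ray termination as used above. Nothing here is taken from GRAPE itself: `opacity_tf` and `color_tf` stand in for the editable opacity curve and color map (functions mapping a sample value to an opacity in [0, 1] and to an RGB triple), and nearest-neighbour sampling is used for brevity.

```python
import numpy as np

def cast_ray(volume, opacity_tf, color_tf, origin, direction,
             step=1.0, termination=0.98):
    """Front-to-back compositing along one ray; stops early once the
    accumulated opacity reaches `termination` (early ray termination)."""
    accumulated_color = np.zeros(3)
    accumulated_alpha = 0.0
    pos = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    shape = np.array(volume.shape)
    while np.all(pos >= 0) and np.all(pos < shape - 1):
        sample = volume[tuple(np.floor(pos).astype(int))]      # nearest-neighbour sample
        alpha = opacity_tf(sample)
        accumulated_color += (1.0 - accumulated_alpha) * alpha * color_tf(sample)
        accumulated_alpha += (1.0 - accumulated_alpha) * alpha
        if accumulated_alpha >= termination:                    # early ray termination
            break
        pos += step * direction
    return accumulated_color, accumulated_alpha
```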
Fig. 1 Our visualization tool (GRAPE) displaying a calculated dose distribution with a modality image
For displaying a simulated result with GRAPE, it is necessary to store the calculated data in a dedicated format. We prepared a class library in order to retrieve/store data from/to the dedicated format data file.
3 Conclusion
We have been developing a visualization tool, GRAPE, for Geant4-based medical physics applications. GRAPE supports overlapped display of a modality image and a dose distribution calculated through a Geant4 simulation. GRAPE runs on personal computers with the Linux or Windows 2000/XP operating system. We plan to release GRAPE freely; it will be available at the web site (http://www.geant4.kek.jp/GRAPE/).
Fig. 2 3D image of a result of simulated proton irradiation of a human head
References
1. Agostinelli S, Allison J, Amako K et al (GEANT4 Collaboration) (2003) GEANT4: a simulation toolkit. Nucl Instrum Meth A506:250–303
2. Archambault L, Beaulieu L, Carrier JF et al (2003) Overview of Geant4 applications in medical physics. IEEE Nuclear Science Symposium conference record, vol 3, pp 1743–1745

Estimation of correlation between respiratory signals and organ motion for sensor based respiratory gated radiotherapy
Jin-Bum Chung (a) · Jeong-Woo Lee (a) · Tae-Suk Suh (a)
a Department of Biomedical Engineering, School of Medicine, The Catholic University of Korea, Seoul, South Korea
b Department of Radiation Oncology, College of Medicine, Konkuk University, Seoul, South Korea
Keywords Respiratory motion · Diaphragm motion · External respiratory signal measurement system · Respiration gated radiotherapy
1 Introduction
The purpose of this study was to quantify the correlation of respiratory signals, measured by a sensor system, to internal anatomy motion. Currently, this subject has not yet been adequately explored [1].
2 Materials and methods
In order to measure the respiration motion signals, a respiratory signal measurement system (RSMS) was used in this study. The RSMS was composed of a mobile computer, a four-channel data acquisition system, and three sensors (spirometer, thermistor, belt transducer) which determine the respiratory signals. The fluoroscopic video signals, typically 15 s each, were digitally recorded at 10 frames per second from the three patients who participated in this study. To evaluate the relationship between diaphragm motion and the external respiratory signals, a linear least-squares fit was performed, and R2 values (the coefficient of determination) were quantified.
3 Results and conclusion
Averaged over all patients, a diaphragm motion of 19.3 mm was seen and an average respiratory period of 4.1 s was determined during free-breathing. The correlation coefficients of the respiratory signals to diaphragm motion show that the ratio of diaphragm motion to abdominal displacement measured by the belt transducer exhibited a high correlation of 0.92 with a standard deviation of about 0.06, whereas the respiratory volume and respiratory temperature were also correlated with diaphragm motion.
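The linear least-squares fit with R2 from the Methods, and the cross-correlation lag analysis discussed in the following paragraph, can be sketched as below. This is an illustrative sketch, not the authors' code; `dt = 0.1` s follows from the 10 frames per second fluoroscopy mentioned above.

```python
import numpy as np

def fit_r2(signal, diaphragm):
    """Linear least-squares fit diaphragm ~ a*signal + b and its R^2."""
    a, b = np.polyfit(signal, diaphragm, 1)
    residuals = diaphragm - (a * signal + b)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((diaphragm - diaphragm.mean()) ** 2)
    return a, b, 1.0 - ss_res / ss_tot

def lag_seconds(signal, diaphragm, dt=0.1):
    """Time delay (in seconds) maximizing the cross-correlation between the
    external respiratory signal and the diaphragm motion trace."""
    s = (signal - signal.mean()) / signal.std()
    d = (diaphragm - diaphragm.mean()) / diaphragm.std()
    corr = np.correlate(d, s, mode="full")
    lag = np.argmax(corr) - (len(s) - 1)
    return lag * dt
```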
The correlations of respiratory temperature and respiratory volume to diaphragm motion were measured with phase shifts derived from time delays of −0.2 and −0.4 s. A systematic time delay was detected in the correlation data by cross-correlation analysis. This means that diaphragm motion is 0.2–0.4 s ahead of the respiratory volume and respiratory temperature. With cross-correlation analysis, the correlation coefficient of diaphragm motion to respiratory volume ranged from 0.85 to 0.98 and that of diaphragm motion to respiratory temperature ranged from 0.83 to 0.98. After correcting for the time delay, the external respiratory signals obtained by the spirometer and thermistor sensors correlated generally well with diaphragm motion. This correlation can be used to predict diaphragm motion based on the respiratory signals and may be combined with the development of respiration-correlated 4D CT methods. Hence, using the respiratory sensor system in respiratory gated radiotherapy would improve the accuracy of treatment of moving tumors. Thus it is expected that this method should come into wider use.
References
1. Yoshikazu T, Takeji S, Yoshiyuki S et al (2004) Correlation between the respiratory waveform measured using a respiratory sensor and 3D tumor motion in gated radiotherapy. Int J Radiat Oncol Biol Phys 60:951–958

Development of visual integration function in the multi-vendor hospital information system – evaluation of IHE IT infrastructure
Yutaka Ando (a) · Masami Mukai (a) · Koji Uemura (a) · Shinichiro Sato (a) · Hirohiko Tsujii (a) · Nobuhiko Tsukamoto (b) · Takashi Nakashima (c) · Noriaki Daito (d)
a National Institute of Radiological Sciences, Chiba, Japan
b Dokkyo University School of Medicine, Tochigi, Japan
c Hitachi Medical Corporation, Tokyo, Japan
d TechMatrix Corporation, Tokyo, Japan
Keywords IHE IT infrastructure · Enterprise user authentication · Patient synchronized application
1 Introduction
A hospital information system generally consists of electronic medical record (EMR) systems, several departmental systems and PACSs. To retrieve the patient data stored among these systems easily and effectively, methods that allow multi-vendor systems to be operated simultaneously have been proposed. These methods are called visual integration, for example, the CCOW [1] (Clinical Context Object Workgroup) standard of HL7. This standard is used in the IHE (Integrating the Healthcare Enterprise) technical framework [2]. We developed a visual integration function expandable to several monitors in accordance with the IHE ITI [3] guideline. We planned to check the feasibility of the visual integration function and to promote visual integration.
2 Methods
We designed four types of implementations, including applet, servlet, Active-X and JAVA. We divided the function into actors compliant with the IHE documentation (EUA: Enterprise User Authentication and PSA: Patient Synchronized Application). We combined this middleware and our existing applications. Our clinical information system consists of four systems: a computerized physician order entry (CPOE) system, a clinical patient database, a heavy particle therapy schedule management system and PACS. By modifying these systems with the library, we could use the single sign-on function and the synchronized patient selection function.
3 Results
In the multi-vendor systems we realized the single-sign-on and synchronized-patient-selection functions by using the middleware. The systems were improved and the EUA/PSA function was added to each system. There were hard problems to solve. We had to change the program flow because the system was not designed to use the visual integration.
We had to define the synchronization scope in the use case. Our PACS used the Windows login function, so we had to implement the EUA function outside the PACS. We now think that the promotion of the EUA/PSA library is very important and that the expansion of the IHE ITI integration profile is useful for the efficiency and accuracy of healthcare. We are planning to make these middlewares open to the public.
4 Conclusion
We could build the visual integration function by using our middleware in the multi-vendor systems and believe these functions will go a long way toward solving the operational difficulties.
References
1. The Health Level Seven context management specification, Version 1.4, ANSI/CMS V1.4–2002
2. IHE, Integrating the Healthcare Enterprise, http://www.ihe.net/
3. ACC/HIMSS/RSNA: IT Infrastructure technical framework, Revision 2.0

Workflow analysis of pathology department in Japanese hospitals
Ikuo Tofukuji (a) · Takashi Okuno (b) · Hidemi Yamamoto (c) · Masato Tanaka (d) · Takashi Nakashima (e)
a Takasaki University of Health and Welfare, Japan
b Olympus Medical Systems Co., Ltd., Tokyo, Japan
c Atsugi Municipal Hospital, Japan
d Hamamatsu Medical Imaging Center, Japan
e Hitachi Medical Corporation, Tokyo, Japan
Keywords IHE · Pathology · Workflow · DICOM
1 Introduction
IHE activities, which originated in the radiology department, are now expanding into many other clinical departments of hospital information systems. The IHE-J pathology WG was founded in 2003 to apply the IHE methodology to the pathology department and to make it easy, economical and quick to connect pathology department systems with hospital information systems. The above authors are members of this WG.
2 Methods
This WG investigated the workflows of the routine pathology services in eight hospitals. These workflows told us that standardization in the pathology department will be fruitful. We then defined the scope of our work as follows: (1) routine histopathology services, (2) routine cytopathology services, (3) routine intra-operative quick frozen section services, (4) autopsy.
3 Results
Procedures of the pathology department are based on a specimen-driven methodology. For example, histology services contain procedure steps such as (1) specimen registration, (2) gross picture, (3) fixation, (4) cutting, (5) embedding, (6) sectioning, (7) staining, (8) labelling, (9) observation and reporting. These procedure steps also generate a hierarchical structure of information. The information of the radiology department is structured with a simple three-level hierarchy, STUDY-SERIES-IMAGE, but the information structure of histology for an operation specimen has six levels of hierarchy, PATIENT-SPECIMEN-BLOCK-SLIDE-OBSERVATION-IMAGE. The information structure of biopsy histology may have another six-level hierarchy, PATIENT-BLOCK-SLIDE-OBSERVATION-SPECIMEN-IMAGE, and cytopathology may have another four-level structure, PATIENT-SPECIMEN-SLIDE-OBSERVATION-IMAGE. We found very large differences between the information structures of radiology and pathology.
4 Considerations and conclusion
We have concluded that the effective way to store and archive pathology images in practical medicine is to use the DICOM standard. At the beginning of DICOM application to pathology images, we should solve the hierarchy problem. Fortunately, DICOM WG-26 was established in December 2005, and together with people from other countries concerned with pathology we have just started to discuss and solve these problems.
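One way to picture the deeper histology hierarchy described in the Results above is as nested records. The sketch below is purely illustrative: the field names are invented for the example and are not part of any DICOM or IHE definition.

```python
from dataclasses import dataclass, field
from typing import List

# Nested view of the six-level operation-specimen hierarchy
# PATIENT-SPECIMEN-BLOCK-SLIDE-OBSERVATION-IMAGE (illustration only).

@dataclass
class Image:
    uid: str

@dataclass
class Observation:
    finding: str
    images: List[Image] = field(default_factory=list)

@dataclass
class Slide:
    stain: str
    observations: List[Observation] = field(default_factory=list)

@dataclass
class Block:
    slides: List[Slide] = field(default_factory=list)

@dataclass
class Specimen:
    blocks: List[Block] = field(default_factory=list)

@dataclass
class Patient:
    patient_id: str
    specimens: List[Specimen] = field(default_factory=list)
```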
After mapping the hierarchical structure of pathology images to the DICOM world, we will define the technical framework of pathology services in collaboration with the international IHE pathology WG.

Accuracy analysis of thickness measurement for two adjacent sheet structures in CT images
Y. Cheng (a) · Y. Sato (a) · H. Tanaka (a) · T. Nishii (a) · N. Sugano (a) · H. Nakamura (a) · H. Yoshikawa (a) · S. Wang (b) · S. Tamura (a)
a Division of Interdisciplinary Image Analysis, Osaka University Graduate School of Medicine, Japan
b Robot Research Institute, Harbin Institute of Technology, P.R. of China
Keywords Articular cartilage · Numerical simulation · Accuracy validation
1 Introduction
The results of a previous study [1] concerning the accuracy of thickness measurement of sheet structures are valid under the restriction that the sheet structure is not influenced by its peripheral structures. In such cases, we call this sheet structure a "single sheet structure". However, the sheet structures in actual medical images often violate this restriction, such as the femoral and acetabular cartilages in the hip joint. In this case, the two cartilages (two sheet structures) may influence each other. Therefore, we present a theoretical simulation method to determine whether the accuracy of thickness measurement of one sheet structure is affected by the other sheet structure in CT images.
2 Methods
We introduce a 3D model of two adjacent sheet structures separated by a small distance s0, modeling the two cartilages in the hip joint. The two adjacent sheet structures, the PSF of the scanner, and the scanning process are modeled and simulated, and post-processing is then performed for thickness measurement. To validate the theoretical simulations, four phantoms of sheet-like objects with known thickness were scanned, one for a single sheet and the other three for two adjacent sheets with three different distances s0 = 0.5, 1.0, and 1.5 mm, respectively, and their thickness was then determined.
3 Results and conclusion
The simulation method was validated by phantom experiments with actual measurement. We compared the measured thickness of a single sheet with that of the sheet influenced by the adjacent sheet and confirmed that considerable underestimation in thickness measurement occurred due to the influence of the adjacent sheet. Although the previous study [1] shows that the thickness of the thicker structure can be accurately measured, the results of our study indicate that: (1) the thickness of the thicker structure can no longer be measured accurately when the distance s0 is less than 1.5 mm, and (2) as s0 decreases, the error in measuring the thickness of the sheet structure increases.
References
1. Sato Y et al (2003) Limits on the accuracy of 3-D thickness measurement in magnetic resonance images: effects of voxel anisotropy. IEEE Trans Med Imag 22(9):1076–1088

Fully automatic localization of the articular space in MR images of the hip joint
Reza A. Zoroofi (a) · Yoshinobu Sato (b) · Mahdieh Khanmohammadi (a) · Takashi Nishii (c) · Katsuyuki Nakanishi (c) · Hisashi Tanaka (c) · Nobuhiko Sugano (c) · Hideki Yoshikawa (c) · Hironobu Nakamura (d) · Shinichi Tamura (b)
a Control and Intelligent Processing Center of Excellence, Electrical and Computer Engineering Department, Faculty of Engineering, University of Tehran, Iran
b Division of Image Analysis, Osaka University Graduate School of Medicine, Japan
c Department of Orthopaedic Surgery, Osaka University Graduate School of Medicine, Japan
d Department of Radiology, Osaka University Graduate School of Medicine, Japan
Keywords Hough transform · Directional derivative filter · Cartilage segmentation · Adaptive thresholding
1 Introduction
MRI is considered one of the most effective choices for the diagnosis of cartilage diseases [1]. In previous work [2], a fast SPGR sequence under continuous leg traction was proposed for effective separation of the acetabular and femoral cartilages in the hip joint. By investigating a previous work [3], we found that accurate localization of the articular space in the hip is a key step for cartilage quantification; this was therefore the goal of this research.
2 Methods
We developed a multi-step algorithm for accurate localization of the articular space as follows. The center of the femoral head was automatically approximated by a Hough transform. The cartilage and articular lesions inside the hip joint were enhanced by first and second directional derivative filters along radial directions originating from the sphere center. The Canny filter was then applied to the images of the above steps. The detected edges were combined in a heuristic manner. Bony tissues of the images were segmented by the adaptive thresholding technique proposed by Otsu [4]. Unwanted edges were removed by using the bone masks.
3 Results
Data sets of five volunteers (10 hips) and 25 patients (50 hips) were available in the experiments. Data sets of five healthy hips and fifteen clinical hips were randomly selected for evaluation. The results confirmed the effectiveness of the technique.
4 Conclusion
We have developed a fully automatic method for accurate localization of the articular space in the hip joint based on directional derivative filters, the Canny operator, and Otsu's adaptive thresholding. On the available data sets the proposed method was effective for 90% of the cases. Our future work is to extend the research and develop a clinically feasible software package for quantification of the acetabular and femoral cartilages of the hip.
References
1. Kauffmann C, Gravel P, Godbout B et al (2003) IEEE Trans Biomed Eng 978–988
2. Nishii T, Sugano N, Sato Y et al (2004) Osteoarthritis and Cartilage 650–657
3. Sato Y, Nakanishi K, Tanaka H et al (2001) Proc CARS, vol 1230, pp 352–358
4. Otsu N (1979) IEEE Trans Systems, Man and Cybernetics, vol 9, pp 62–66

Wavelet denoising in a chest digital X-ray image by discriminant analysis
M. Katagiri (a) · T. Kobayashi (a) · H. Kubota (a)
a Graduate School of Health Science, Suzuka University of Medical Science, Japan
Keywords Wavelet denoising · Discriminant analysis
1 Introduction
For the noise reduction of a digital image by wavelet shrinkage, the optimum threshold must be chosen so that the result is as close as possible to the unknown noise-free image. Donoho's threshold is widely known [1]. Its equation includes noise information, but in most cases the noise information contained in a medical image is unknown. Our purpose is to obtain the optimal threshold without noise information by using discriminant analysis (DA) in the wavelet domain.
2 Methods
White Gaussian noise was added to the noise-free image (a digital chest X-ray image), varying its signal-to-noise ratio (SNR) from −6.6
to 40.2 dB. Wavelet coefficients of the noise-added image were calculated. In this calculation, we applied a Daubechies wavelet of sequence length 4. Wavelet components were obtained up to resolution levels −2 (low frequency side), −4 and −6 (high frequency side). We treated all wavelet components (vertical, horizontal and diagonal, for all resolution levels) separately and obtained the histograms of all components. Then, DA processing was performed on each histogram and the optimal threshold value was calculated. Finally, shrinkage was done by soft thresholding. In order to investigate the performance of our DA method, we also compared it with Donoho's method. The performance of noise reduction was measured by the mean square error with respect to the original image.
3 Results and conclusion
For resolution level −1 (high frequency side), the optimal threshold indicated by the DA method was a small absolute value of the wavelet coefficient. On the contrary, for resolution level −4 (low frequency side), the DA method indicated the wavelet coefficient at which the curve of the histogram fell. By using the DA method, some improvement was found for noised images of 20 dB or less (noisy images). For example, we obtained a 2.2 dB improvement for a 7.4 dB noised image when the resolution level was up to −4; for Donoho's method, a 4.5 dB improvement was obtained under the same condition. However, the SNR of the denoised image decreased when the SNR of the noised image exceeded 20 dB for both methods. For noised images of 20 dB or less, the magnitude of noise reduction was hardly related to the resolution level in the DA method. When the SNR exceeded 30 dB, the effect was reversed (artifacts appeared). The degree of artifact was larger for the resolution levels on the lower frequency side. The opposite effect was also seen with Donoho's method for noised images of more than 30 dB.
References
1. Donoho DL (1995) De-noising by soft-thresholding. IEEE Trans Information Theory 41:613–627

Segmentation on multi-region image of liver based on level set method
Huiyan Jiang (a) · Yuejun Li (a) · Hiroshi Fujita (b)
a Northeastern University, Shenyang, P.R. of China
b Gifu University, Gifu, Japan
Keywords Image segmentation · MRI · Level set method · Narrow band method · Wavelet transform
1 Introduction
Medical image segmentation is a key process for CAD. In abdominal MRI, the liver may appear in disjoint regions due to pathological changes or the view of the slice presented. Common segmentation methods usually obtain only the largest region of the liver. This paper presents a new segmentation method for multi-region liver images.
2 Methods
Image denoising and localization of the aorta in abdominal MRI were first carried out based on the Meyer wavelet transform and an improved Otsu algorithm, respectively. An improved region growing method based on multiple seeds and a voting mechanism was developed for multi-region liver segmentation. In the method, N seeds were set in the region to the left of the aorta. The region growing method was employed and N results were obtained. A voting mechanism was employed to obtain one segmentation region. Next, another N seeds were set near the above segmented region based on distance and grey level values. The region growing and voting processes were repeated to obtain another candidate region. The new candidate was matched and merged with the previous one to obtain a new segmentation region. The process was iterated until the length or width of the segmented liver region reached a pre-set value and a result was obtained (Fig. 1). In order to improve the segmentation accuracy, the narrow band level set method was then used to obtain the final result based on the above-obtained region.
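A minimal sketch of the multi-seed region growing with voting described above, assuming a single 2D slice and a simple intensity-difference growing criterion. The tolerance `tol` and the majority-vote rule are assumptions chosen for the illustration, not the authors' settings.

```python
import numpy as np
from scipy import ndimage

def grow(image, seed, tol):
    """Plain intensity-based region growing from one seed: threshold around the
    seed value, then keep the connected component containing the seed."""
    mask = np.abs(image - image[seed]) <= tol
    labels, _ = ndimage.label(mask)
    return labels == labels[seed]

def vote_region(image, seeds, tol=15, min_votes=None):
    """Multi-seed growing with voting: keep pixels selected by a majority of
    the N seed-wise results (sketch of the voting mechanism above)."""
    votes = sum(grow(image, s, tol).astype(int) for s in seeds)
    if min_votes is None:
        min_votes = len(seeds) // 2 + 1
    return votes >= min_votes
```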
Fig. 1 Crude segmentation by the region growing method
3 Results
In the experiment based on the narrow band level set method, we selected the following parameters: speed term 2, curvature force term 0.05, attraction force term 0.033, length step 1, time step 0.01, narrow band width 9 pixels, and 30 iterations, and obtained the segmentation result (Fig. 2) with good precision.
Fig. 2 Fine segmentation by the level set method
4 Conclusion
Based on the combination of the improved region growing method and a narrow band level set method, a liver segmentation image with precise edges is obtained from abdominal MRI.
Acknowledgment Supported by the Nature Science Foundation of Liaoning Province, China, Grant No. 20042020.

Soft copy radiology – how do I know which monitor to use?
L.H. Sim (a) · K.L. Manthey (a) · B. Keir (b)
a PACS Support, Princess Alexandra Hospital, Brisbane (PAH), Australia
b Biomedical Technology Services (BTS), Queensland Health, Brisbane, Australia
Keywords Monitor · Radiological diagnosis · Conformance criteria
1 Introduction
A set of conformance criteria based on TG18 [1] has been developed as a relatively simple scientific basis for monitor selection for radiological applications.
2 Methods
• Viewing angle: assessed qualitatively. Acceptance criterion: no adverse angular responses for an individual observer seated at a typical viewing distance of 600 mm.
• Contrast modulation: assessed qualitatively "in field" using the technique described by Leachtenauer [2] and the Cm.tif test pattern. Acceptance criterion: Cm ≥ 50%.
• Available shades of gray: from monitor/display card subsystem specifications. Acceptance criterion: 10-bit LUT output for monochrome display and 32-bit colour.
• Maximum luminance and contrast ratio: measured using VeriLum [3]. Acceptance criterion: minimum absolute luminance of 175 cd/m²; minimum contrast ratio of 250:1 (ACR [4]).
• Luminance conformance: measured using VeriLum. Acceptance criterion: monitor able to achieve GSDF [5] conformance at the "good" level as per VeriLum.
• Luminance linearity: measured using VeriLum. Acceptance criterion: less than 15% deviation from the centre value.
Spatial resolution is not included in these criteria as it is a function of monitor design and of the viewing application (i.e. magnify, zoom and pan functions).
3 Results
One high-brightness monochrome monitor and three commercial grade colour monitors were tested. Two colour monitors failed to meet the contrast ratio acceptance criterion, and one of these two also failed to satisfy the contrast modulation acceptance criterion. The remaining two monitors achieved acceptance of all listed criteria.
4 Conclusion
Current trends for monitors in radiological diagnosis are towards high quality, high resolution colour flat panels that may provide cost savings over the traditional high resolution, high brightness monochrome monitors. A set of criteria has been developed to provide a degree of scientifically based guidance in the selection of monitor technology for radiological diagnostic displays.
References
1. AAPM TG18 (2004) Assessment of display performance for medical imaging systems. Task Group 18 Report, preprint draft version 10
2. Leachtenauer JC (2004) Electronic image display: equipment selection and operation. SPIE Press, p 174
3. http://www.image-smiths.com (2005) Image Smiths Inc, VeriLum display management software, accessed 25 Nov
4. ACR (2001) ACR standard for digital image data management. American College of Radiology
5. NEMA (2003) Digital imaging and communications in medicine (DICOM) Part 14: Grayscale Standard Display Function, PS 3.14

Flow segmentation in phase contrast MRA image series using autoregressive modeling and generalized EM algorithm
Ali Gooya (a) · Hongen Liao (a) · Kiyoshi Matsumiya (a) · Ken Masamune (a) · Takeyoshi Dohi (a)
a Graduate School of Information Science and Technology, The University of Tokyo, Japan
Keywords Autoregressive (AR) process · Linear prediction coefficients (LPC) · Expectation-maximization
1 Introduction
Segmentation of CSF and pulsatile arteries in PC-MRA based on a single image can produce improper classifications, since the flow cannot be identified by a unique intensity. We present a novel automated bimodal flow segmentation method using retrospective gated phase contrast MRA (PC-MRA) image series.
2 Method
The intensity time series of each pixel is modelled as an AR process, and features including the LPC [1], the covariance matrix of the LPC and the variance of the prediction error are extracted from each profile. Bayesian classification of the feature space is then achieved using a non-Gaussian likelihood probability function, and the unknown parameters of the likelihood function are estimated by a generalized EM algorithm [2].
3 Result
A male volunteer was scanned using a 1.0 T MR system and 25 images were obtained within one heart cycle, as shown in Fig. 1 (slice thickness = 10 mm, FOV = 25 cm, matrix = 180 × 256, NEX = 2, TE = 6.6 ms, flip angle = 10°, VENC = 10.0 cm/s). CSF and blood vessels are identified successfully, as indicated in Fig. 1d.
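The per-pixel AR features can be estimated by ordinary least squares. The sketch below fits linear prediction coefficients to one intensity profile and returns the LPC vector and the prediction-error variance; the model order p = 3 is an arbitrary choice for illustration and this is not the authors' implementation.

```python
import numpy as np

def ar_features(profile, p=3):
    """Fit an order-p autoregressive model x[t] ~ sum_k a_k * x[t-k] by least
    squares; return the linear prediction coefficients and the variance of the
    prediction error for one pixel's intensity time series."""
    x = np.asarray(profile, dtype=float)
    # design matrix of lagged samples: row t holds x[t-1], ..., x[t-p]
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    lpc, *_ = np.linalg.lstsq(X, y, rcond=None)
    err = y - X @ lpc
    return lpc, err.var()
```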
Fig. 1 Cine retrospective gated PC-MRA images indicating the pulsatile behavior of CSF and cerebrovascular structures (a, b). c Initial segmentation obtained from the K-means algorithm. d Segmentation obtained after ten iterations
4 Conclusion
We observe that with this method the identification of extremely low flow rates is possible. This highlights its future application to the identification of capillaries and small vessels.
Acknowledgments The work was supported in part by Grants-in-Aid for Scientific Research (17100008, 17680037) of the JSPS and MEXT, Japan.
References
1. Kashyap RL (1978) Optimal feature selection and decision rules in classification problems with time series. IEEE Trans Inform Theory 24(3):281–288
2. Dempster AP, Laird NM, Rubin DB (1977) Maximum likelihood from incomplete data via the EM algorithm. J R Stat Soc Ser B 39:1–38

Computer-aided diagnosis for acute stroke in nonenhanced CT
Noriyuki Takahashi (a) · Yongbum Lee (b) · Du-Yih Tsai (b) · Kiyoshi Ishii (a)
a Department of Radiology, Sendai City Hospital, Japan
b School of Health Sciences, Faculty of Medicine, Niigata University, Japan
Keywords Computed tomography · Cerebral infarction · Computer-aided diagnosis · Adaptive smoothing filter
1 Introduction
Computed tomography (CT) is still the most commonly used imaging examination in the diagnosis of acute cerebral infarction [1]. An early CT sign such as the loss of the gray–white matter interface (GWMI) due to acute stroke is considered difficult to detect because of the presence of quantum noise in CT images. In this study we present a novel image processing technique acting as a noise reduction filter to improve the detection of this early sign, which is considered the first step for computer-aided diagnosis. We propose an adaptive partial median filter (APMF) aiming to improve the visibility of the normal GWMI. The purpose of this study is to apply the APMF to nonenhanced CT images and to evaluate the effect of using the APMF on the detection of the early CT sign.
2 Methods
The APMF is a specially designed median filter used to perform local smoothing with edges preserved. The size and the shape of the APMF are variable and are determined according to the distribution of pixel values of contours or edges in the region of interest. By adjusting three major parameters of the APMF, an optimal condition for image enhancement can be obtained. These parameters control the degree of noise reduction and edge blurring in an image. To determine the optimal parameter values and evaluate the performance of the APMF, a simulation experiment with computer-simulated gray–white matter of a brain CT image was performed. Our result showed a 76% reduction of noise without significantly blurring the structures in the image. Qualitative analysis was conducted using nonenhanced images. The APMF was applied to CT images obtained in 20 patients with acute (<5 h) middle cerebral infarction and 40 control patients. Four radiologists indicated their confidence level regarding the presence or absence of the early CT sign
on each case. Receiver operating characteristic (ROC) curve analysis was performed.
3 Results
The average area under the ROC curve (Az) was improved from 0.840 to 0.940 for all radiologists by applying the APMF to the original images. The difference in Az values with and without the APMF was statistically significant, with a P value of .002 for all radiologists. In conclusion, the APMF strongly reduced image noise while preserving the normal gray–white matter interface in the nonenhanced CT images. The APMF has the potential to improve the detection of the early CT sign due to acute stroke.
References
1. Adams HP Jr, Adams RJ, del Brott T et al (2003) Guidelines for the early management of patients with ischemic stroke: a scientific statement from the Stroke Council of the American Stroke Association. Stroke 34:1056–1083

Statistical modeling of white matter fiber tract based on diffusion MRI and its application for automated tractography
Y. Masutani (a) · S. Aoki (a) · O. Abe (a) · K. Ohtomo (a)
a Image Computing and Analysis Lab., Radiology, University of Tokyo Hospital, Japan
Keywords Diffusion tensor · White matter · Fiber modelling · Statistical atlas
1 Introduction
Diffusion tensor tractography (DTT) based on MR diffusion tensor imaging (DTI) is widely used as a powerful tool for visualizing and analysing white matter fiber tracts. In this presentation, we briefly introduce our novel statistical atlas of specific fiber tracts and its application to automated tractography.
2 Methods
Our fiber modelling method is based on partial reconstruction of the vector/tensor field that represents fiber orientation [1]; it relies on interactive selection of control voxels by ROI setting, except for crossing areas, and calculation of diffusion tensors. For building the statistical atlas of a specific tract structure, we performed normalization of each DTI data set based on T2WI registration to a template brain, and re-orientation of tensors based on the principal direction preservation scheme. A probability density map is created to describe the statistical shape of each structure. Then, to describe the statistics of the diffusion tensor, we adopted the mean tensor and the covariance matrix of the sample tensors in the reconstructed tensor field [2]. In performing atlas-based DTT with a new data set, the tensors used for tensor field reconstruction in tracking the fiber tract structure are determined based on the Mahalanobis distance (MD), picking up locally similar tensors in the atlas [3].
3 Results and summary
Using ten data sets of normal volunteers, we have constructed the first version of our statistical atlas. The original DTI data sets have a 128 × 128 × 91 matrix size with 30 directions of motion probing gradient. The left and right CBT and CST from the motor areas were successfully modelled by using our interactive method. Then, we performed atlas-based DTT using a new DTI data set that is not included in the atlas. Automated DTT was performed successfully for the left CBT after capturing tensors with the probability density and MD thresholds adjusted interactively, though several undesired trajectories, such as ones reaching the sensory areas, were manually removed. In summary, we have developed a novel fiber tract reconstruction technique based on a statistical atlas of fiber tracts from ten DTI data sets. Using more DTI data sets is expected to make our method more reliable.
References
1. Masutani Y, Aoki S, Abe O et al (2004) RBF-based reconstruction of fiber orientation vector field for white matter fiber tract modeling. In: Proceedings of the ISMRM '04 (CD-ROM), May, Kyoto
2. Masutani Y, Aoki S, Abe O et al (2005) Building statistical atlas of white matter fiber tract based on vector/tensor field reconstruction in diffusion tensor MRI. Lecture Notes in Computer Science, vol 3804. In: Proceedings of the ISVC'05, Dec, Lake Tahoe, pp 84–91
3. Masutani Y, Aoki S, Abe O et al (2006) Brain white matter fiber tractography based on statistical atlas for specific fiber tract structure. In: Proceedings of the IEEE ISBI'06, April, Arlington (in press)

Effects of surface correspondence methods in statistical shape modelling of the proximal femur on approximation accuracy
T. Okada (a) · M. Nakamoto (a) · Y. Sato (a) · N. Sugano (a) · T. Asaka (b) · Y. Chen (b) · H. Yoshikawa (a) · S. Tamura (a)
a Osaka University Graduate School of Medicine, Japan
b Ritsumeikan University College of Information Science and Engineering, Kusatsu, Japan
Keywords Nonrigid registration · Hip joint · 3D CT images · Shape approximation · Accuracy analysis
1 Introduction
The purpose of this work is to investigate the effects of surface correspondence methods in statistical shape modeling of the proximal femur on approximation accuracy.
2 Methods
We used a surface correspondence method using surface parameterization based on anatomical features such as the femoral head center, femoral neck axis, lesser trochanter, and greater trochanter (method A), and a method based on point-based nonrigid registration [1] (method B). In method B, the experiments were performed using different numbers of points.
3 Results
A total of 51 CT data sets were used for construction of the statistical shape models. In terms of approximation accuracy by linear projection, method A was more accurate than method B, and the accuracy of method A was 0.64 mm (Fig. 1a). In terms of approximation accuracy by minimizing distances between control points and original shape surfaces, there was no significant difference between methods A and B when more than 700 points were used in method B (Fig. 1b). The accuracy was 0.47 mm in both methods A and B.
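Approximation by linear projection, as evaluated above, amounts to projecting a corresponded shape onto the leading principal components of the training shapes and measuring the residual. The sketch below assumes shapes already in point correspondence and flattened to coordinate vectors; it is not the authors' implementation, and the number of modes is arbitrary.

```python
import numpy as np

def build_ssm(training_shapes, n_modes=10):
    """training_shapes: (M, 3N) array of M corresponded shapes, each flattened
    to 3N coordinates. Returns the mean shape and the leading principal modes."""
    mean = training_shapes.mean(axis=0)
    _, _, vt = np.linalg.svd(training_shapes - mean, full_matrices=False)
    return mean, vt[:n_modes]

def approximation_error(shape, mean, modes):
    """Reconstruct a shape by linear projection onto the modes and return the
    mean point-to-point distance between the original and its projection."""
    coeffs = modes @ (shape - mean)
    recon = mean + modes.T @ coeffs
    return np.linalg.norm((shape - recon).reshape(-1, 3), axis=1).mean()
```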
Fig. 1 Approximation accuracy of the statistical shape model of the proximal femur. a Approximation accuracy by linear projection, b approximation accuracy by minimizing distances
4 Conclusion
As a correspondence method in statistical shape modeling of the proximal femur, point-based nonrigid registration using 700 points shows accuracy comparable to a method based on anatomical features; the accuracy was 0.47 mm.
References
1. Chui H, Rangarajan A (2003) A new point matching algorithm for non-rigid registration. Comput Vis Image Und 89:114–141

Temporal subtraction on 3D CT images by using nonlinear image warping technique
Takayuki Ishida (a) · Shigehiko Katsuragawa (b) · Ikuo Kawashita (a) · Hyounseop Kim (c) · Yoshinori Itai (c) · Kazuo Awai (b) · Qiang Li (d) · Kunio Doi (d)
a Department of Clinical Radiology, Hiroshima International University, Japan
b School of Health Sciences, Kumamoto University, Japan
c Department of Mechanical and Control Engineering, Kyushu Institute of Technology, Japan
d Department of Radiology, The University of Chicago, USA
Keywords Temporal subtraction · Lung CT · 3D image warping · Interval changes
1 Introduction
The detection of very subtle lesions and/or lesions overlapped with vessels on 3D CT images is a time-consuming and difficult task for radiologists. Therefore, we have developed a 3D temporal subtraction technique based on a nonlinear image warping technique.
2 Methods
For each current section image, we selected the corresponding previous section image by using 2D cross-correlation between a blurred low-resolution current CT image and a blurred previous CT image. To achieve accurate registration between the current and previous CT images, we applied local matching with a nonlinear image warping technique to the previous CT image based on 3D cross-correlation. The local shift vectors (X–Y–Z shift vectors with rotations) for all VOI pairs, which were selected in the current and the previous CT images, were determined when the cross-correlation value became maximum. The voxel-by-voxel local shift vectors were then used for conversion of the 3D coordinates of the previous CT image, i.e., for 3D image warping of the CT image. The warped previous CT image was then subtracted from the current CT image.
3 Results and conclusion
With the 3D temporal subtraction technique applied to 19 clinical cases, the majority of normal background structures, such as vessels, ribs, and heart, were removed without large misregistration artifacts. Thus, interval changes due to lung diseases were clearly enhanced as dark shadows on the subtraction images. The temporal subtraction image will be useful for the detection of interval changes on CT images, and thus could assist radiologists in the early detection of lung cancer.
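The local matching step can be pictured as an exhaustive search for the shift that maximizes the normalized cross-correlation between a current VOI and candidate previous VOIs. The sketch below covers translation only (the rotational part mentioned above is omitted), the search radius is a guess, and it is not the authors' implementation.

```python
import numpy as np

def local_shift(current_voi, previous, center, search=5):
    """Integer X-Y-Z shift (within +/-search voxels) maximizing the normalized
    cross-correlation between `current_voi` and patches of the previous CT
    volume around `center`."""
    sz = np.array(current_voi.shape)
    cur = (current_voi - current_voi.mean()) / (current_voi.std() + 1e-6)
    best_shift, best_cc = (0, 0, 0), -np.inf
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                start = np.array(center) + (dz, dy, dx) - sz // 2
                stop = start + sz
                if np.any(start < 0) or np.any(stop > previous.shape):
                    continue
                cand = previous[start[0]:stop[0], start[1]:stop[1], start[2]:stop[2]]
                cand = (cand - cand.mean()) / (cand.std() + 1e-6)
                cc = float((cur * cand).mean())
                if cc > best_cc:
                    best_cc, best_shift = cc, (dz, dy, dx)
    return best_shift, best_cc
```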
Acknowledgment This work was supported by a Grant-in-Aid for Scientific Research on Priority Areas (17032009) from the Ministry of Education, Culture, Sports, Science and Technology in Japan and by USPHS Grants CA62625 and CA98119.

Automated feature extraction for breast cancer grading with Bloom–Richardson scheme
Ł. Jeleń (a) · A. Krzyżak (a) · T. Fevens (a)
a Department of Computer Science and Software Engineering, Concordia University, Montréal, Canada
Keywords Bloom–Richardson · Breast cancer grading · Cytology grading
1 Introduction
Among women, breast cancer is one of the most often diagnosed cancers. Precise diagnosis is crucial in determining the course of effective treatment. Precision is obtained by determining the degree of malignancy of the tumor.
2 Methods
In this work we use a Hough transform (HT) [2] to extract ellipses from cytological smears. Ellipses are good descriptors of the size and position of cell nuclei. Bloom and Richardson [1] proposed features allowing grading of breast cancers into three malignancy categories. We propose the following features for the Bloom–Richardson grading scheme:
1. Degree of structural differentiation: this factor reflects the tendency of cells in the image to form clusters or to spread loosely around the image. We are able to calculate the dispersion of the cells in the image using the centers extracted by the HT.
2. Pleomorphism: this takes into account the geometrical features of the nuclei. For this factor, we extract the following features:
• Area: the total number of nuclei pixels.
• Texture: more malignant nuclei stain more intensively, giving a darker shade.
• Eccentricity: deviation of the nuclei in the image from a healthy nucleus.
3. Frequency of hyperchromatic and mitotic figures: we define the HMF factor as the total number of mitoses. Mitotic nuclei are those that stain most intensively.
3 Results
Our experiments show that even with a simple representation of nuclei one can perform grading with the features described above; the level of accuracy depends on the threshold-to-noise tradeoff. We performed tests on a database of breast cancer fine needle aspirate biopsy smears. The gradings of over 50% of the test images agree with the gradings of a pathologist.
4 Conclusions
According to pathologists, the features described above closely reflect what they look for during the examination of such slides [3]. The automated extraction of these features will not only aid the pathologist in determining their diagnosis but, more importantly, assist in making the diagnosis more precise and objective.
References
1. Bloom HJG, Richardson WW (1957) Histological grading and prognosis in breast cancer. Br J Can 11:359–377
2. Ballard DH (1981) Generalizing the Hough transform to detect arbitrary shapes. Pat Rec 13(2):111–122
3. Hodak I, Kardum-Skelin I, Sustercic D et al (2005) Image analysis of breast cancer. Cytopathology 16(2):8

The methodology of multi-level watershed 3D medical image segmentation
Feifeng Yang (a) · Lixu Gu (a) · Jianfeng Xu (a) · Jian Yang (a)
a Computer Science, Shanghai Jiaotong University, People's Republic of China
Keywords Watershed · Medical image segmentation · Multi-level immersion
1 Introduction
The watershed transform [1] is a popular and effective segmentation method derived from the field of mathematical morphology. However, over-segmentation is still its major drawback. In this paper, we propose a novel multi-level (ML) watershed algorithm to decrease the over-segmentation and apply it to 3D medical images.
2 Methods
Vincent and Soille [2] simulated water flooding coming up from the ground and defined catchment basins without predetermining the regional minima. Since the pixels are immersed level by level in the immersion process, we can modify the way of immersion to multi-level immersion covering more pixels each time, and the over-segmentation problem can be relieved accordingly. With respect to the different kinds of multi-level implementation, we can classify our ML immersion based watershed into two categories: Static ML and Dynamic ML watershed. In the Static ML watershed, the multi-level can be evaluated before the immersion process and its values are related only to the original image data. In contrast, in the Dynamic ML watershed, the multi-level is evaluated during the level-by-level immersion process and its values are related to the current level number.
3 Results
We employed the "headsq" MRI data and a brain MRI data set downloaded from VTK and BrainWeb (http://www.bic.mni.mcgill.ca/cgi/brainweb1), respectively. Each image or data set was first processed to obtain the gradient magnitude, which was employed in the ML immersion. As shown in Table 1, the resulting numbers of watershed regions obtained using ML are significantly reduced compared with the traditional watershed.
4 Conclusion
The proposed ML immersion algorithm can significantly resist over-segmentation with time complexity and memory consumption similar to the traditional algorithm.
Table 1 The number of watershed regions by applying different methods

Data set    Traditional    Static ML    Dynamic ML
Headsq      16,965         3,344        2,898
Brain       185,298        71,018       94,134
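A crude illustration of the Static ML idea: quantize the gradient magnitude into coarser immersion levels before flooding, so that each immersion step covers more pixels and fewer catchment basins survive. This uses skimage's standard watershed as a stand-in for the authors' implementation, and `level_step` is an arbitrary choice, not a value from the paper.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def static_ml_watershed(image, level_step=8):
    """Quantize the gradient magnitude into fixed-width levels ("static"
    multi-level immersion) and run an immersion watershed on the result."""
    gradient = sobel(image.astype(float))
    levels = np.floor(gradient / (gradient.max() + 1e-12) * 255 / level_step)
    labels = watershed(levels)        # markers default to the local minima of `levels`
    return labels, int(labels.max())
```

Used on a slice, larger `level_step` values are expected to yield fewer regions, mirroring the reduction reported in Table 1.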
References
1. Roerdink JBTM, Meijster A (2000) The watershed transform: definitions, algorithms, and parallelization strategies. Fundam Inform 41:187–228
2. Vincent L, Soille P (1991) Watersheds in digital spaces: an efficient algorithm based on immersion simulations. IEEE Trans Pattern Anal Mach Intell 13(6)

Automatic segmentation of pulmonary structures in chest CT images
Yeny Yim (a) · Helen Hong (b)
a School of Computer Science and Engineering, Seoul National University, South Korea
b Division of Multimedia Engineering, College of Information and Media, Seoul Women's University, South Korea
Keywords CT · Segmentation · Lungs · Airways · Vessels · Region-growing
1 Introduction
We propose an automatic segmentation method to extract lung boundaries, airways, and pulmonary vessels in chest CT images.
2 Methods
We first segment the thorax from the background air, and then the lungs from the thorax, using the inverse operation of 2D region growing (iRG) [1]. To select a threshold value automatically, we use optimal thresholding. After applying 2D iRG, 3D connected component analysis is applied to ensure that non-pulmonary structures are not erroneously identified as lung regions. Second, pulmonary vessels are extracted from the result of the first step by gray-level thresholding: all pixels with a gray level larger than the threshold value are identified as pulmonary vessels. Third, the trachea and the left and right mainstem bronchi are delineated from the lungs by our branch-based 3D region growing. To increase the robustness of branch-based 3D region growing, we apply pre-filtering and post-processing. Finally, we extract the lungs by subtracting the trachea and the left and right mainstem bronchi from the result of the first step.
3 Results
Our segmentation method has been applied to ten patient data sets with pulmonary nodules or embolism. The performance of our method is evaluated in terms of visual inspection, accuracy and processing time. To evaluate the accuracy, we compare the lung contours automatically detected by our method to lung borders manually traced by two radiologists. Experimental results show that our method extracts lung boundaries, airways, and pulmonary vessels accurately.
4 Conclusion
Using 2D iRG and connected component analysis, the airways and the lungs can be accurately extracted without hole-filling. In particular, connected component analysis at low resolution can reduce the memory and computation time. Branch-based region growing can segment the trachea and the left and right mainstem bronchi without leaks into the lung parenchyma.
References
1. Yim Y, Hong H, Shin YG (2005) Hybrid lung segmentation in chest CT images for computer-aided diagnosis. In: Proceedings of the 7th international workshop on enterprise networking and computing in healthcare industry 2005, IEEE, Busan, Korea, pp 378–383
2. Gonzalez RC, Woods RE (2002) Digital image processing, 2nd edn. Prentice Hall, New Jersey
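Optimal thresholding, mentioned in the Methods above, is commonly implemented as the iterative mean-of-means scheme; the sketch below is one standard form of it and not necessarily the exact variant used by the authors.

```python
import numpy as np

def optimal_threshold(volume, tol=0.5):
    """Iterative optimal thresholding: start from the global mean and repeat
    t = (mean(values below t) + mean(values above t)) / 2 until convergence."""
    t = float(volume.mean())
    while True:
        low, high = volume[volume <= t], volume[volume > t]
        new_t = 0.5 * (low.mean() + high.mean())
        if abs(new_t - t) < tol:
            return new_t
        t = new_t
```

In a chest CT, voxels below the returned threshold are candidate air/lung and voxels above are body and vessels; connected component analysis then removes non-pulmonary structures, as described above.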
Pulmonary vessel and parenchyma registration using contrast enhanced CT images for blood flow assessment
Hidenori Shikata (a) · Yoshitaka Tamura (b) · Kazuo Awai (b) · Yasuyuki Yamashita (b) · Shoji Kido (a)
a Applied Medical Engineering Science, Yamaguchi University Graduate School of Medicine, Ube, Japan
b Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto University, Japan
Keywords Pulmonary vessel · Contrast enhanced CT · Blood flow · Registration · Subtraction image
1 Introduction
Contrast enhanced (CE) CT images are widely used for blood flow assessment. Digital subtraction images between non-CE CT and CE CT images help in evaluating the blood flow intuitively. In this paper, we present an automated system that generates subtraction images between non-CE and CE CT images using non-rigid registration techniques.
2 Methods
The lung region is automatically segmented and identified as left and right based on the vessel segmentation results. A vessel enhancement filter is applied in the lung region to increase the significance of the vessels during the registration process. Registration is performed separately for the left and the right lung without subsampling. In the registration, a free-form deformation model [1] is used with mutual information [2] as the cost function.
3 Results
The system was applied to five sets of non-CE and CE CT scans obtained from five patients. The system successfully generated subtraction images that are similar to the corresponding blood flow scintigraphy SPECT images.
4 Conclusion
The subtraction images produced by the system show high potential for use in the clinical assessment of regional blood flow.
References
1. Rueckert D, Sonoda LI, Hayes C, Hill DLG, Leach MO, Hawkes DJ (1999) Nonrigid registration using free-form deformations: application to breast MR images. IEEE Trans Med Imaging 18(8):712–721
2. Mattes D, Haynor DR, Vesselle H, Lewellen TK, Eubank W (2003) PET-CT image registration in the chest using free form deformations. IEEE Trans Med Imaging 22(1):120–128

X-ray restoration with adaptive PDE filter for an accurate calibration of the acquisition system for the 3D reconstruction of the human spine
S. Kadoury (a) · F. Cheriet (a,b)
a Ecole Polytechnique de Montreal, Canada
b Sainte-Justine Hospital Research Centre, Montreal, Canada
Keywords 3D reconstruction · X-ray calibration · Adaptive PDE restoration filter · Spinal deformities
1 Introduction
Three-dimensional reconstruction of the spine from X-ray images, used for the evaluation of spinal deformities, is a procedure which requires a great level of precision as well as X-rays of high quality for automatic detection of the landmarks required for the calibration process. However, the model for X-ray image formation involves physical phenomena which introduce non-negligible distortions and smeared images. The objective of this study was to implement an adaptive shock-diffusion PDE filter specifically for X-rays, so as to attenuate the distortion and noise inherited from the imaging pipeline, as well as to automatically detect the steel pellets used for the calibration process, which is crucial for obtaining a high level of precision and reproducibility.
2 Methods
One frontal and one lateral X-ray were acquired from an acquisition system for the 3D reconstruction of 20 scoliotic spines. In order to
Int J CARS (2006) 1:461–485 eliminate small structure components and noise, a PDE-based filter [1] was implemented in this study specifically for radiographic image regions containing calibration markers. This filter was composed of an anisotropic diffusion filter [2] and a shock filter [3], both using a measure of local coherence on image space. Because both filters are based on PDE, these were combined into a single equation where the optimal filter parameters were determined. 3 Results Using a paired Student‘s t test for the 20 scoliotic spines, significant differences in the accuracy of the calibration process were analyzed. The evaluation of the calibration accuracy was made by computing the root-mean-squared 3D reconstruction error of an object placed in lumbar region of the patient’s spine, where each pellet of the object was precisely measured by a coordinate measuring machine. Results show that using an automated method to detect markers on PDE restored images diminished the average RMS error of 0.14 mm, when compared to a manual method using raw X-rays. 4 Conclusion The results confirm that using a PDE-based filter to restore calibration X-rays for an automatic detection and matching of markers, is a viable and accurate procedure for the calibration process used in the 3D reconstruction of the spine. Even though the improvement may seem minor, these errors were calculated on easily visible circular steel pellets. This is however not the case of anatomical structures, where the identification error is about 2 mm [4] and the corresponding 3D reconstruction can reach 5 mm. Therefore future work will look to adapt this filter to restore landmarks such as vertebral endplates and pedicles to enhance the 3D accuracy of the spine. References 1. Hurtut T (2003) Enhancement and segmentation of scar images after a scoliosis surgery, DICTA 2. Weickert J (1999) Coherence-enhancing diffusion filtering. Int J Comp Vis 31: 3. Remaki L, Cheriet M (2003) Numerical schemes of shock filter models for image enhancement and restoration. J Math Imaging Vis 2: 4. Dansereau J et al (1992) Effect of radiographic landmark identification errors on the accuracy of three-dimensional reconstruction of the human spine. Med Phys Imaging, IFMBE A method for fast and automatic segmentation of soft organs from CT and MR images S. Casciaroa Æ L. Massoptiera Æ E. Samsetb Æ E. Casciaroc Æ A. Distantea a Bioengineering Division, IFC-CNR Lecce, Italy b University of Oslo, Faculty of Medicine and Department of Informatics, Norway c Bioengineering Division, ISBEM, Brindisi, Italy Keywords Active contour Æ GVF snake Æ Real-time segmentation 1 Introduction In recent years the number of patients undergoing to radiotherapical and complex surgical treatments increased. These treatments have to be carefully planned to make sure that the pathological regions are spatially localized [1]. This important planning can be accurately supported by computerized segmentation and registration tools. Our algorithm is a first step to get a reliable, automatic and fast segmentation tool. 2 Methods With this algorithm, we propose an automatic soft organ segmentation scheme by using a coarse-to-fine approach in two main steps. The coarse segmentation is obtained by using several classical 3D techniques connected in a precise way. After a median filtering procedure, the user draws a simple rectangle containing only targeted soft organ’s pixels. 
It enables to compute simple statistics (mean, standard deviation) in order to do an adaptive threshold. Holes are
Int J CARS (2006) 1:461–485 filled and the biggest 16-connected region is kept. To refine this first result, we used an active contour method based on a gradient vector flow as external forces described in [2]. Then, we reach the segmentation of the targeted soft organ. 3 Results The quality of the results is assessed by comparing the automatic segmentation and a hand-made segmentation of two MRI and CT volumes. Besides a visual comparison, the Dice similarity coefficient (DSC) is used to qualify both results. A DSC index superior at 0.93 signifies that our algorithm reaches good results for both image types. However, its performances of time processing are not optimal because it takes 15 min to get the final result. 4 Conclusion Automatic soft organ extraction is a challenging task due to the ambiguity of organs boundary and the complexity of nearby organs [3]. The future work will focus on decreasing the processing time by investigating parallelization. A second objective will be to segment automatically tumors inside the soft organ. References 1. Ogawa Y, Nemoto K, Kakuto Y et al (2005) Construction of a remote radiotherapy planning system. Int J Clin Oncol 10(1):26–29 2. Xu C, Prince JL (1998) Snakes, shapes, and gradient vector flow. IEEE Trans Image Process 7(3):359–369 3. Gao J, Kosaka A, Kak A (2005) A deformable model for automatic CT liver extraction. Acad Radiol 12(9):1178–1189 Fast DRR generation based on GPU Ji Gua Æ Lixu Gua a School of Software Engineering, Shanghai Jiao Tong University, People’s Republic of China Keywords 2D/3D registration Æ DRR Æ Ray-casting Æ GPU based operation 1 Introduction The registration of pre-operative volumetric datasets (e.g. CT data) to intra-operative two-dimensional projections (e.g. X-rays) of the represented object is a common problem in a variety of medical applications. In this paper, we present an improved algorithm for generating the Digitally Reconstructed Radiograph (DRR) using Ray Casting algorithm based on Graphics Processing Unit (GPU). Our ray-casting scheme depends on hardware interpolation instead of software one and it can achieve a satisfying DRR generation with higher accuracy and lower time cost. Since it significantly accelerates the speed of DRR generation, the time of the whole 2D/3D registration procedure will be dramatically reduced. 2 Methods The DRR generation process is organized in three parts. In Part I, firstly we obtain the raw CT data and create a 3D texture out of it. Then, we set the texture sampling interpolation style to bilinear the clipping coordinate style to no clipping. In Part II, we start by building two planes according to the nearest and farthest point in the volume cube. Then we define the projection rectangle and calculate four vertexes’ world coordinates and texture coordinates so as to calculate the raycasting function and the texture size. Finally, we transfer these parameters to GPU and draw the cube according to the four vertexes of the rectangle. In Part III, all the computation is running in pixel programs on GPU, where two-step procedure is designed to carry out the ray-casting. One is to initiate the ray terminals and another is to cast ray to calculate scales for that pixel. Firstly, we determine two terminals of the ray for each pixel. Secondly, we cast the ray from the beginning of texture coordinate to the last one. After the casting process, the program outputs the pixel scale. 
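The GPU pipeline above casts one ray per detector pixel through the 3D texture and accumulates samples between the two ray terminals. A rough CPU analogue of that ray-casting step is sketched below, using trilinear interpolation; the function and parameter names, and the source/detector geometry, are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def drr(volume, spacing, src, det_center, det_u, det_v, size=(256, 256), n_samples=256):
    """Minimal CPU digitally reconstructed radiograph: integrate attenuation along
    rays from the X-ray source `src` to each detector pixel. Positions use the
    same units as `spacing`, with components ordered like the volume axes."""
    h, w = size
    us = (np.arange(w) - w / 2.0)[None, :, None]
    vs = (np.arange(h) - h / 2.0)[:, None, None]
    pix = det_center + us * det_u + vs * det_v             # (h, w, 3) detector pixel positions
    t = np.linspace(0.0, 1.0, n_samples)[None, None, :, None]
    pts = src + t * (pix[:, :, None, :] - src)              # sample points along every ray
    coords = (pts / spacing).reshape(-1, 3).T                # fractional voxel indices
    samples = map_coordinates(volume, coords, order=1, mode='constant', cval=0.0)
    step = np.linalg.norm(pix - src, axis=-1) / n_samples    # path length per sample
    return samples.reshape(h, w, n_samples).sum(axis=2) * step
```

On the GPU this per-pixel loop is what the pixel program performs in parallel, with the 3D texture hardware providing the trilinear interpolation.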
3 Results The experiment employs an NVIDIA NV40 GPU and is programmed in a Visual Studio 6.0 environment on a Windows XP operating system, with the GPU programs coded in the Cg language. The OpenGL and VTK libraries are used in the software and the rendering is
directed to a 512 · 512 view port. Two CT data sets were employed, with sizes of 64 · 64 · 93 and 512 · 512 · 38. For the first data set, the DRR generation costs 2.45 ms compared to 50 ms using the traditional ray-casting method. For the second one, it costs only 10.34 ms compared to 278 ms using the old method. 4 Conclusion This paper presented a novel DRR generation algorithm for intensity-based 2D/3D registration using the NV40 GPU. Based on the flexible programming model of the NV40 pixel shader, we build up a new data pipeline to implement DRR generation completely on the GPU. Unlike the traditional ray-casting method, this algorithm writes the frame buffer only once and uses a mathematical calculation to determine the ray terminals, avoiding a redundant texture to store them. So both time and memory are reduced, which contributes greatly to 2D/3D registration. Interactive volume visualization on a stereoscopic display for multiple users Y. Kitamuraa Æ T. Johkohb Æ F. Kishinoa a Human Interface Engineering Lab. Graduate School of Information Science and Technology, Osaka University, Japan b Department of Medical Physics and Radiology, Osaka University Graduate School of Medicine, Japan Keywords 3D image Æ Virtual reality Æ User interface Æ Cooperative work Æ Diagnosis Æ Surgery Æ Simulation 1 Introduction A system for interactive volume data visualization and manipulation on a stereoscopic display for cooperative work with multiple users is proposed. It allows multiple moving users to simultaneously observe a stereoscopic image of volume data from their individual viewpoints. All users observe the 3D image at exactly the same position. Therefore, it is useful for applications in which several people work together to perform tasks with a multiplier effect. 2 Methods The system is developed based on the IllusionHole [1]. In order to display delicate tones of volume data images, a DLP projector is used. The head position of each user is tracked by a 3D positional tracker, which is attached to the LCD shutter glasses used for field-sequential stereo viewing. The stereoscopic image of volume data for each user is generated by volume rendering software according to the measured head position. Two different types of user interfaces are developed for manipulation of volume data. One is a graphical user interface on a PC for selection of volume data and general adjustment of image parameters, as shown in Fig. 1. The other is a 3D user interface on the IllusionHole that allows users to directly manipulate volume data intuitively.
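The per-user stereo pairs described above are driven by the tracked head positions. Below is a minimal sketch of that step, assuming a fixed interocular distance and made-up coordinate conventions; it is an illustration of the principle, not the IllusionHole implementation.

```python
import numpy as np

def eye_positions(head_pos, right_dir, ipd=0.065):
    """Left/right eye positions (metres) from a tracked head position;
    right_dir is the unit vector pointing to the viewer's right."""
    right_dir = right_dir / np.linalg.norm(right_dir)
    return head_pos - 0.5 * ipd * right_dir, head_pos + 0.5 * ipd * right_dir

def look_at(eye, target, up=np.array([0.0, 0.0, 1.0])):
    """4x4 view matrix looking from `eye` towards `target` (OpenGL-style)."""
    f = target - eye; f = f / np.linalg.norm(f)
    s = np.cross(f, up); s = s / np.linalg.norm(s)
    u = np.cross(s, f)
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f
    m[:3, 3] = -m[:3, :3] @ eye
    return m

# For each tracked user: render the volume twice per frame, once per eye,
# and present the pair on the field-sequential stereo display.
# left, right = eye_positions(np.array([0.3, -0.6, 0.5]), np.array([1.0, 0.0, 0.0]))
```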
Fig. 1 GUI on PC monitor 3 Results Figure 2 shows an example of a volume data of a human body displayed on the proposed system shared by four users as observed from
Fig. 2 A stereoscopic image of a volume data of a human body displayed on the proposed system shared by four users. the fourth user’s viewpoint. All users observe the same object but from different directions according to their viewpoints. 4 Conclusion A cooperative volume visualization and interaction environment is proposed. A volume data displayed on this system is shared by multiple users, and is interactively manipulated through discussions in a face-to-face environment. Future work includes development of software and interactive systems to support diagnosis or surgical simulations of specific targets. References 1. Kitamura Y et al (2001) Interactive stereoscopic display for three or more users. In: Computer graphics annual conference series (Proceedings of ACM SIGGRAPH), pp 231–239 Deformable model based segmentation scheme for fast and accurate segmentation of thrombus from CT datasets Yogish Mallyaa Æ Bipul Dasa Æ Srikanth Suryanarayanana Æ Ravikanth Malladia a Imaging Technologies Lab, GE Global Research, Bangalore, India Keywords Aortic aneurysms Æ Deformable model Æ Level set Æ Thrombus segmentation 1 Introduction We present a fast and minimally interactive algorithm for segmentation of thrombus from CT images. Pre-surgical planning of aortic aneurysm treatment involves creating patient specific aneurysm model and performing various measurements. Segmentation of inner boundary of the aorta and thrombus is a necessary step for model creation. Accurate boundary delineation of thrombus for creating patient specific model is challenging due to indefinite morphology of the thrombus, bridging to adjacent structures and presence of metal artifacts from interventional devices such as stents. 2 Methods The deformable model used in our method follows the level set work by Malladi et al. [1] and geodesic active contour by Caselles et al. [2]. The user initializes the model by drawing a 2D contour close to the final thrombus boundary. The contour evolves only under the influence of curvature and advection terms. The curvature force is used to prevent encroachment into adjacent structures and the advection force enables the contour to lock onto the thrombus boundary. The deformed contour is propagated to next slice as an initial condition for the model in that slice. The deformation and propagation of contour model is continued for entire length of the thrombus. 3 Results We have tested the algorithm under challenging scenarios like weak bridging with adjacent structures, metal artifacts, and varying thrombus morphology. The volume rendered images of the segmented 3D thrombus structures are shown in the Fig. 1. The algorithm took nearly 45 s for execution on 2 GHz and single processor system for 200 · 200 · 200 sub volume. The mean spatial overlap
using DSC between the ground truth and the algorithm output was 0.94, with a mean segmentation error of 5.3%. Fig. 1 From left to right, patient presented without stent and patient presented with stent 4 Conclusion The deformable model based algorithm for segmentation of thrombus presented in this paper is fast and requires minimal interaction. The algorithm was able to segment the thrombus boundary with an accuracy of over 90% compared to manually segmented models by experts. References 1. Malladi R, Sethian JA, Vemuri BC (1995) Shape modeling with front propagation: a level set approach. IEEE Trans Pattern Anal Mach Intell 17(2):158–175 2. Caselles V, Kimmel R, Sapiro G (1997) Geodesic active contours. IJCV 22(1):61–79 4D ultrasound-enhanced diagnostic capabilities and improved image presentation Bhargava Ravindraa a Sanjay Medical Center, Radiology and Imaging, Sharjah, United Arab Emirates 1 Introduction 4D ultrasonography (USG) has played an immense role in increasing diagnostic capabilities, particularly regarding fetal morphology and solid/cystic lesions in the abdomen/extremities surrounded by fluid echoes. 2 Method Ultrasound equipment loaded with special probes and software programmes for 4D archiving and display of images was used. The surface rendering process, with simultaneous image display in three-dimensional chroma, increases the visual acceptability/perceptibility for the radiologist and attending clinicians. The images can be understood much better than conventional gray scale 2D images. 3 Results I found it particularly useful in diagnosing and localizing lesions within the gall bladder, particularly those fixed to the wall. GB calculi, floating or fixed, can be better demonstrated. An inflamed appendix with a thin layer of perifocal fluid can be well delineated with the obstructing faecolith. Uterine myomas/polyps, whether invading the endometrial stripe, can be confirmed. Developmental morphological uterine abnormalities can be better presented. Intrauterine contraceptive devices can be well demonstrated. Uterine adnexal masses of cystic/mixed and solid origin can be well displayed. Testicular and peritesticular lesions, particularly those associated with a hydrocoele, can be precisely diagnosed and presented, including location/displacement/compression/wasting of the testis if any. Fetal morphology can be well studied (limited by fetal positions and amount of liquor) in obstetric patients; this is particularly useful in fetuses with chromosomal defects and multifetal gestations. Morphology of the placenta, with its location, size, etc., and also umbilical cord insertions/entanglements/prolapse/length/coiling index can be well confirmed.
Int J CARS (2006) 1:461–485 It is also helpful in demonstrating cystic/fluid filled areas in the breast, ganglion cyst/bakers cyst in the limbs. 4 Conclusion 4D Sonography equipments have helped sonologists in improved image representation and have dramatically helped in early detection of many abnormalities. Digital radiography in the analysis of artwork J.E. Morrisa Æ M.L. Goldmana Æ P.J. Crossa Æ J.G. Smitha a Creighton University Medical Center, Omaha, NE, USA Keywords Digital radiography Æ Digital radiology Æ Artwork Æ Paintings Æ Art research 1 Introduction Radiography has undeniably benefited the analysis of artwork [1–3]. Nevertheless, digital imaging and conventional screen-film imaging can only produce a radiograph of a pre-set size [4]. To obtain a radiograph of larger artwork, multiple smaller images must be pieced together to form a final composite image. This process is tedious and compromises the final radiograph quality. A flexible format digital imaging system (Lodox StatsanTM) has been developed to scan trauma patients and produce a full body digital image in under 25 s [5]. We propose that this system will also provide unique advantages to the radiographic analysis of artwork. 2 Methods Twenty-eight paintings were imaged using the flexible format digital imaging system (Lodox StatsanTM). The digital radiographic images were viewed on a diagnostic work station. Radiographs were analyzed for additional images under the superficial paint layers, alternately referred to as internal compositions. 3 Results Utilizing this system, we obtained a radiograph of an entire painting in less than 25 s. This eliminated the need to create composite images in 27 of 28 paintings. All of the final radiographic images had excellent resolution. Of the 28 paintings, 4 radiographs revealed the signature of the artist, 5 radiographs revealed the artist’s initials, and 4 revealed additional images. 4 Conclusion This flexible format digital radiography system (Lodox StatsanTM) produces a high-quality digital radiograph of an entire painting in 25 s. The process of creating time-consuming and laborious composites is essentially eliminated. This system provides art historians with a more efficient means of producing high-quality radiographic images of artwork. References 1. James AP, Gibbs SJ, Sloan M et al (1983) Radiographic techniques to evaluate paintings. Am J Radiol 140:215–220 2. Reiner BI, Seigel EL, French KJ et al (1997) Use of computed radiography in the study of an historic painting. Radiographics 17:1487–1495 3. Padfield S, Saunders D, Cupitt J et al (2002) Improvements in the acquisition and processing of X-ray images of paintings. The National Gallery Technical Bulletin 23:62–75 4. James AE, Gibbs SJ, Sloan M et al (1982) Digital radiography in the analysis of paintings: a new and promising technique. J Am Inst Conserv 22:41–48 5. Lodox Critical Imaging System (StatscanTM). Lodox Systems North America LLC, 8751 Soft Maple Estates, Crogan, NY 13327. Additional information available at: http://www.lodox.com Offset and gain maps compression for digital radiographic sensors I. Frosioa Æ N.A. Borghesea a AIS Laboratory, Computer Science Department, University of Milan, Italy Keywords Digital radiography Æ Linear correction Æ Data compression
473 1 Introduction Linear correction is often adopted to guarantee that a digital imaging sensor outputs a constant value when the sensor is reached by a uniform radiation [1]. We propose here a method which allows compressing the offset and gain maps used in the linear correction [2], such that calibration data can be registered in the limited RAM embedded in the sensor. The method was successfully tested on a typical endooral radiographic sensor. 2 Methods Offset and gain map are computed using a set of five calibration radiographies, taken at different radiation doses, and clustered using a Quad Tree Decomposition (QTD) approach, plus a smart coefficient encoding. This allows to store a compressed version of the calibration data. Any image taken with the sensor is corrected through the offset and gain maps, which are reconstructed expanding the compressed data by upsampling and smoothing them with a radial basis function (RBF) approach. 3 Results Figure 1 shows the correction of a portion of an endooral radiographic sensor image (Untreated), adopting offset and gain maps compressed with different strategies. The proposed filter (QTD + RBF) achieves good image correction, comparable with the one obtained with uncompressed data (UDF). The dimension of the offset and gain maps is reduced at 1/200 of the original data: 10 kB versus the original 2 MB. Classical JPEG compression of the gain and offset maps performs similarly only when no JPEG coefficient is discarded (JPG100F); however, in this case compression ratio is only 1/10.
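A rough sketch of the decompression-and-correction path described above is given below; Gaussian smoothing stands in for the paper's RBF expansion, and all names and factors are chosen for illustration only.

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def expand_map(compressed, factor, sigma=2.0):
    """Rebuild a full-resolution offset or gain map from its compressed
    (block-averaged) version: upsample, then smooth. Gaussian smoothing is a
    simple stand-in for the RBF reconstruction described in the abstract."""
    full = zoom(compressed, factor, order=1)
    return gaussian_filter(full, sigma)

def linear_correction(raw, offset, gain, target=1.0):
    """Flat-field style correction so that a uniform exposure maps to `target`."""
    return target * (raw - offset) / np.clip(gain, 1e-6, None)

# offset_small, gain_small: compressed maps stored in the sensor's limited RAM
# offset = expand_map(offset_small, factor=16)
# gain   = expand_map(gain_small,  factor=16)
# corrected = linear_correction(raw_frame, offset, gain)
```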
Fig. 1 A uniform portion of image with applied linear correction, after offset and gain data compression with different algorithms. Compressed data size are also reported here. 4 Conclusion We have proposed an algorithm for compression of linear calibration data in digital radiography. The proposed approach strongly reduces the dimension of the calibration data, whereas it preserves the image quality. References 1. Jin B, Park N, George KM, Choi M, Yeary MB (2003) Modeling and analysis of soft-test/repair CCD-based digital X-ray systems. IEEE Trans Instrum Meas 52(6):1713–1721 2. Taubman DS, Marcellin MW (2002) JPEG2000: standard for interactive imaging. Proc IEEE 90(8):1336–1357 Pulmonary perfusion imaging with a dynamic Flat-Panel Detector (FPD) Rie Tanakaa Æ Shigeru Sanadaa Æ Nobuo Okazakib Æ Takeshi Kobayashic Æ Takeshi Matsuic Æ Kazuya Nakayamaa Æ Osamu Matsuic a Graduate School of Medical Science, Kanazawa University, Japan b Department of Internal medicine, Ohtsuka Ryougoku Clinic, Tokyo, Japan c Department of Radiology, Kanazawa University Hospital, Japan Keywords FPD Æ Perfusion Æ Pulmonary blood flow Æ Quantification Æ Functional imaging 1 Introduction Dynamic chest radiography with a dynamic flat-panel detector (FPD) can allow assessment of pulmonary blood flow by quantifying changes in pixel values by cardiac pumping. The present study was performed to investigate the relationship between cardiac cycle and changes in pixel values on dynamic chest radiographs, and to develop a method for visualizing the results as relative pulmonary blood flow.
474 2 Methods Dynamic chest radiographs of nine normal subjects were obtained using a dynamic FPD, in combination with electrocardiograms (ECG) recorded using a Holter monitor. The ECG was analyzed to determine the R wave. The X-ray pulse wave was also analyzed to determine the frame just before the R wave as a base frame, which had the least blood in the pulmonary vessels in one cardiac cycle. The base frame was renewed every cardiac cycle. We measured the average pixel value in each lung and investigated the correlation between cardiac cycle and changes in average pixel values. We also calculated the rate of change in average pixel value in each region of interest (ROI) located on the pulmonary arteries, pulmonary vein, and peripheral lung blood vessels. On the other hand, temporal subtraction was performed between each frame and the base frame throughout all frames. Perfusion images were then created by superimposing the difference values on dynamic chest radiographs in the form of color display using a color table in which positive changes (lower X-ray translucency) were shown in warm colors and negative changes (higher X-ray translucency) were shown in cold colors. 3 Results There was a strong correlation between cardiac cycle and changes in pixel value in the lung. The rates of change in average pixel value were 3.0 ± 0.5% in the left pulmonary artery and 1.5 ± 0.6% in the right pulmonary artery, and those in the other vessels ranged from 1.0 to 1.5%. We also succeeded in visualizing the changes in pixel value in the lung as perfusion images and as blood flow distribution maps. These images were very useful for quantifying slight changes in pixel values and to understand the relative pulmonary blood flow in each lung area. 4 Conclusion Dynamic chest radiography using FPD was capable of quantifying relative local blood flow in the lung, and detecting regional differences in blood flow. This is a rapid and simple method for evaluation of pulmonary blood flow. Further studies are required to evaluate subjects with blood flow obstruction and to investigate the ability of our method to detect abnormalities, such as lung embolism and general heart disease. Reduction of a grid moire´ pattern by integrating a carbon-interspaced X-ray grid with a digital radiographic detector Jai-Woong Yoona Æ Do-Il Kima Æ Chun-Joo Parka Æ Bo-Young Choea Æ Tae-Suk Suha Æ Young-Guk Parkb Æ Jin-Ho Leeb Æ Nag-Kun Chungb Æ Hyoung-Koo Leea a Department of Biomedical Engineering, College of Medicine, The Catholic University of Korea, Seoul, South Korea b Jungwon Precision Ind. Co., Seoul, South Korea Keywords Anti-scatter grid Æ Moire´ pattern Æ Aliasing 1 Introduction A fixed usage of an X-ray grid in a digital X-ray detector caused a moire´ interference pattern due to an inadequate sampling of grid shades by a radio-opaque material. We propose a new method to remove moire´ patterns with a carbon-interspaced X-ray grid, which showed a superior uniformity of grid line to those of lead strip grids, integrated with a digital X-ray detector. 2 Methods A carbon-interspaced grid (Jungwon Precision Ind. Co., Korea) was processed to have the line frequency of 185 line pairs per inch, the grid ratio of 10:1 and the septa width of 28 lm. We acquired the grid images from an amorphous selenium DR of the pixel pitch of 139 lm. The detector underneath of the X-ray grid was translated and rotated with the help of a micro-controlled jig. 
A height of the grid from the detector was varied by 4 lm to magnify the shades of the grid lines at the detector and, hence, to exactly match with the sampling frequency of the detector.
3 Results and conclusion The angular displacement of the detector caused a frequency difference that produced a higher-frequency moiré [1]. The horizontal translation did not change the moiré frequency but only its phase. As the frequency difference between the grid and the DR was decreased, low-frequency moiré patterns were observed and finally disappeared at complete matching, as shown in Fig. 1.
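The matching condition can be illustrated with a small calculation: the moiré (beat) frequency is the difference between the projected grid frequency and the detector sampling frequency. The grid frequency and pixel pitch below are taken from the text; the source-to-detector distance and the grid heights are illustrative assumptions.

```python
# Spatial beat (moire) frequency between the projected grid lines and the
# detector sampling grid. SDD and grid heights are assumed values.
IN = 25.4  # mm per inch

def moire_frequency(grid_lp_per_inch, pixel_pitch_mm, sdd_mm=1000.0, grid_height_mm=0.0):
    """Approximate moire frequency (lp/mm) at the detector for a grid placed
    grid_height_mm above the detector, assuming a point source at sdd_mm."""
    magnification = sdd_mm / (sdd_mm - grid_height_mm)
    f_grid = grid_lp_per_inch / IN / magnification   # projected grid frequency at the detector
    f_sample = 1.0 / pixel_pitch_mm                  # detector sampling frequency
    return abs(f_grid - f_sample)

for h_mm in (0.0, 2.0, 4.0):   # grid lifted by 0, 2, 4 mm above the detector
    f = moire_frequency(185, 0.139, grid_height_mm=h_mm)
    print(f"grid height {h_mm:.1f} mm -> moire frequency {f:.3f} lp/mm")
```

Raising the grid slightly magnifies the grid shadow, pulling the projected grid frequency toward the sampling frequency until the beat pattern vanishes, which is the matching behaviour reported above.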
Fig. 1 Moiré pattern acquired in DR (pixel pitch: 139 µm) with the grid of line frequency 185 lp/in., with varying distances from the grid to the detector surface The high straightness and uniformity of the grid lines of the carbon-interspaced grid and the micro-controlled alignment method enable matching the frequencies to remove moiré patterns without software filtering or a moving grid. References 1. Wang J, Huang HK (1994) Film digitization aliasing artifacts caused by grid line patterns. IEEE Trans Med Imaging 13:375–385 Defect correction using a priori information in digital radiography I. Frosioa Æ N.A. Borghesea a AIS Laboratory, Computer Science Department, University of Milan, Italy Keywords Digital radiography Æ Image interpolation Æ Defect correction 1 Introduction Image interpolation is one of the most widely used methods to correct local sensor defects (blemishes) [1]. This solution has the strong drawback that all the features inside the corrupted area can be eliminated from the image, and artifacts (e.g. signal oscillations) can be introduced [2]. We propose here a new method to correct blemishes in digital radiography, which uses a priori information on the sensor defect to obtain a reliable solution. 2 Methods The locally corrupted output of a digital radiographic sensor can be represented as:

I(x, y) = i(x, y) + o\,W(x, y) + g\,W(x, y)\,i(x, y) \qquad (1)

where I is the corrupted image, W is a modulating function which represents the a priori information on the blemish, i is the uncorrupted image, and o and g are two unknown parameters. The map W is estimated through an appropriate sensor calibration procedure. o and g are estimated by minimizing the following cost function:

E = \iint \frac{i_x^2(x, y) + i_y^2(x, y)}{i^2(x, y)}\, dx\, dy + \lambda\, o^2 \qquad (2)
where i_x and i_y are the image gradients. Inverting (1), the uncorrupted image is obtained. 3 Results Figure 1 shows a blemish treated with a traditional interpolation algorithm and with the proposed approach. The superiority of the latter is evident.
Fig. 1 A blemish (left), corrected with image interpolation (middle) and with the proposed filter (right)
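A minimal numerical sketch of the model in Eqs. (1)–(2) follows, assuming the blemish map W is available from calibration and using a general-purpose optimizer for (o, g); it illustrates the formulation and is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def invert_model(I, W, o, g):
    """Invert Eq. (1): I = i + o*W + g*W*i  ->  i = (I - o*W) / (1 + g*W)."""
    return (I - o * W) / (1.0 + g * W)

def cost(params, I, W, lam=1e-3, eps=1e-6):
    """Discrete analogue of Eq. (2): penalise relative gradients of the
    restored image, plus a small penalty on the offset term o."""
    o, g = params
    i = invert_model(I, W, o, g)
    ix, iy = np.gradient(i)
    return np.sum((ix**2 + iy**2) / (i**2 + eps)) + lam * o**2

def correct_blemish(I, W):
    res = minimize(cost, x0=[0.0, 0.0], args=(I, W), method="Nelder-Mead")
    o, g = res.x
    return invert_model(I, W, o, g)

# I: acquired image, W: blemish map from sensor calibration (both 2D float arrays)
# restored = correct_blemish(I, W)
```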
Int J CARS (2006) 1:461–485 4 Conclusion We have proposed a novel algorithm for the correction of local sensor defects in digital radiography. Since a priori information is used, results are more reliable than the ones obtained with traditional interpolation algorithms. References 1. Jin B et al (2003) Modeling and analysis of soft-test/repair CCDbased digital X-ray systems. IEEE Trans Instrum Meas 52(6):1713–1721 2. Chen G, de Figueriredo RJP (1993) A unified approach to optimal image interpolation problems based on linear partial differential equation models. IEEE TransImage Proc2(1):41–49 Simultaneous generation of two-DRRs using hardware acceleration for 2D/3D registration Sungmin Kima,b Æ Young Soo Kima,c a Center for Intelligent Surgery System, Hanyang University, Seoul, South Korea b Department of Biomedical Engineering, Hanyang University, Seoul, South Korea c Department of Neurosurgery, Hanyang University, Seoul, South Korea Keywords Digitally reconstructed radiography (DRR) Æ Hardware accelerated rendering 1 Introduction In this paper, to solve 2D/3D registration problem, hardware acceleration, 3D texture mapping algorithm, is used. In our research, generating DRRs is focused. Furthermore, generating two-DRRs (AP View DRR and Lateral View DRR) simultaneously is most important purpose to use biplanar fluoroscopy images, which is used to register between two DRRs and two fluoroscopy images. Comparing similarity and finding accurate position and orientation is remained for future work. 2 Methods Using 3D texture mapping algorithm, we can get proper DRRs which are similar to X-ray fluoroscopy images. When fluoroscopy images are acquired (anterior–posterior (AP) view and lateral view images are acquired simultaneously), DRRs of two views are generated by hardware accelerating method at the same time. To apply these two DRRs to the 2D/3D registration, GUI of six views are created, which is able to compute two DRRs simultaneously. 3 Results and conclusion The following image is GUI to compute two DRRs at the same time. (Fig. 1) In this paper, generating DRRs using hardware acceleration, 3D texture mapping algorithm is focused. There are several advantages when this method is applied to compute DRRs. It takes less time to compute DRR and the whole procedure time is also reduced comparing to any other conventional methods. The image quality, moreover, is improved, too. Finding position and orientation of 3D volume in simulated intraoperative imaging device is remained for future work. To accomplish
475 this procedure, multi-directional searching algorithm such as Powell method will be applied. Parametric analysis of high angular resolution diffusion data—for complex nerve structure K. Hikishimaa Æ K. Yagib Æ T.Numanob Æ K. Hommaa a National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan b Radiological Science, Tokyo Metropolitan University, Japan Keywords MRI Æ DWI Æ Parametric analysis Æ High angular resolution diffusion (HARD) ÆCrossing fiber 1 Introduction Diffusion MRI is one of the noninvasive methods that visualizing microstructure and expanded to explore fiber bundles by Tractography. High angular resolution diffusion (HARD) is recently used for complex nerve structure (e.g. crossing, twisting, diverging), but little is known about how diffusion shapes concern to actual complex nerve bundles. The purpose of this study is to show classification of HARD data to each fiber region by parametric analysis. 2 Methods We assumed different two strength of b-value (b1, b2) measured DW signals S1 and S2. Apparent diffusion coefficients (ADC) data were calculated by ()/(b1)b2)) log(S1/S2). In HARD, diffusion orientation distribution function (ODF) is generated by spherical radon transform of ADC data sets that measured along large number of directions. Next we obtained fiber ODF by performing the spherical deconvolution of diffusion response function R from diffusion ODF [1]. For parametric analysis, two formalisms are introduced for the multiple fiber orientations, definition, and comparison of measures applied to fiber ODF: (1) normalized space, and (2) parametric transformation plots. Two series of 81 diffusion weighted images (total 162 images) was acquired on a 1.5 T scanner in a single coronal slice from a normal volunteer. The scan parameters were: TR/TE: 3,000/120 ms, FOV: 38 cm, 3 mm slice thickness, 64 · 64 pixel matrix, and diffusion weighting 250 s/mm2 and 2,000. An icosahedral gradient encoding scheme was used. 3 Results and conclusion Parametric transformation plots could be used to choose between various fiber bundles by demonstrating how the measure changes relative to the fiber ODF data under study. The future direction of this parametric technique can be used for applications such as HARD Tractography mask and detection of more complex neurodegeneration. References 1. Tournier J-D, Calamante F, Gadian DG, Connelly A (2004) Direct estimation of the fiber orientation density function from diffusionweighted MRI data using spherical deconvolution. Neuroimage 23:1176–1185 2. Bahn MM (1999) Comparison of scalar measures used in magnetic resonance diffusion tensor imaging. J Magn Reson 139:1–7 Precise diffusion anisotropy measurement at crossing fiber from sparsely sampled diffusion MRI data set by using RBF based tensor field reconstruction Hiroyuki Kabasawaa Æ Yoshitaka Masutania Æ Osamu Abea Æ Shigeki Aokia Æ Kuni Ohtomoa a Department of Radiology, Graduate School of Medicine, The University of Tokyo, Japan Keywords Diffusion tensor imaging Æ Radial basis function Æ Crossing fiber 1 Introduction Typical quantitative diffusion anisotropy indices, such as fractional anisotropy (FA) are based on diffusion tensor model and only accurate at single fiber but these indices have its intrinsic limitation at crossing fiber. It has been shown that q-space based methods were
able to provide accurate measurement of diffusion characteristics at voxels that include more than two fibers. However, these methods require a huge amount of data to decompose crossing fibers, which makes the scan time extremely long. In this study, diffusion properties are precisely estimated from a sparse DTI data set by using fiber orientation information reconstructed by a novel image processing method. 2 Methods High angular resolution diffusion-weighted images were analyzed by a conventional tensor-based method to estimate the spatial distribution of fiber orientation at single-fiber voxels. Assuming that the neural fiber direction is smoothly curved, a radial basis function based tensor field interpolation technique (RBF) was used to reconstruct the fiber direction at crossing-fiber voxels. As the given data set, at least two ROIs were placed on both sides of the crossing-fiber volume in the fiber bundle to be reconstructed. The DTI data were fitted to a two-tensor ellipsoid model using the reconstructed fiber orientations as prior information. A downhill simplex method was used to estimate the volume fraction and the diffusion coefficient along the estimated fiber orientation. The fiber reconstruction was performed with the dTV-II software developed by one of the authors. 3 Results The total image acquisition time was around 10 min. The synthetic phantom experiment showed that both accuracy and precision of the FA measurement increased when the number of MPG orientations increased. The crossing-fiber separation process worked stably when more than 20 MPG orientations were used. The human image processing result showed the measured FA value at crossing voxels of the CST and CC. This result suggests that the pseudo decrease of anisotropy could be recovered by the proposed method in both the CC and the CST. 4 Conclusion The proposed method effectively estimated the diffusion characteristics of crossing fiber bundles from a sparse diffusion measurement data set. References 1. http://www.ut-radiology.umin.jp/people/masutani/dTV.htm Improvement of MR image resolution by multiple image formation technique N. Liua Æ S. Itoa Æ Y. Yamadaa Æ M. Kasugaa Æ K. Tanakab a Graduate School of Engineering, Utsunomiya University, Tochigi, Japan b Asahikawa Medical College, Asahikawa, Hokkaido, Japan Keywords Fresnel transform Æ Spatial resolution Æ Aliasing Æ Phase-scrambling Æ Fourier transform imaging 1 Introduction This paper presents a new MR high-resolution imaging technique using the phase-scrambling Fourier imaging technique (PSFT). Reconstruction of the MR image can be performed not only by the inverse Fourier transform (FT) but also by the inverse Fresnel transform (FRT) using the PSFT signal. By combining these two reconstructed images, having high resolution and standard resolution, respectively, we can obtain an MR image with higher resolution than the standard FT imaging technique. 2 Methods The signal obtained in PSFT is given by the following equation [1]:

v(k_x, k_y) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} \rho(x, y)\, \exp\{-j\gamma(x^2 + y^2)\}\, \exp\{-j(k_x x + k_y y)\}\, dx\, dy \qquad (1)

v(k_x, k_y) = \exp\{j\gamma(x'^2 + y'^2)\} \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} \rho(x, y)\, \exp\{-j\gamma[(x' - x)^2 + (y' - y)^2]\}\, dx\, dy \qquad (2)

where ρ(x, y) represents the spin density distribution of the subject and γ is an imaging parameter. We can obtain images by the inverse FT of the signal (1). Equation 1 can be rewritten in the form of an FRT, as shown in Eq. 2, by using the variable substitutions x′ = −k_x/(2γ) and y′ = −k_y/(2γ) [2]. Therefore we can obtain another image by the inverse FRT. In general, the spatial resolutions of these two reconstructed images are different. So, we can obtain two images, one of which has high resolution but contains an aliasing artifact, and the other of which has standard resolution with no aliasing artifact. By synthesizing these two images, we can obtain an image that has high resolution in the centre of the image. The factor of resolution improvement h (= Δx′/Δx) can take an arbitrary value within the range from 1.0 to 2.0 by arranging the Fresnel parameter γ. 3 Results Experiments were performed using a hand-made 0.0187 T MRI scanner. Imaging parameters are Δx = 0.075 cm, Δx′ = 0.12 cm, γ = 7.8 rad/cm², h = 1.6. The matrix size of the reconstructed images was enlarged from 64 · 64 to 128 · 128 to secure the resolution improvement (Fig. 1).
Fig. 1 Results 4 Conclusion A new MR high-resolution imaging technique is presented and demonstrated. It was shown that proposed reconstruction technique had a capability to produce images having higher resolution compared to standard Fourier transform imaging technique. References 1. Maudsley AA (1988) Dynamic range improvement in NMR imaging using phase scrambling. J Magn Reson 76:287–305 2. Ito S et al (2002) Optical on-line running reconstruction of MR images in the phase scrambling-Fourier imaging technique. Appl Opt IP.41(26):5527–5537 Optimization of iterative image reconstruction method in PET/SPECT using channelized hotelling observer Wataru Mukaia Æ Kenya Murasea Æ Shunsuke Fukamia Æ Keiichi Matsumotoa Æ Keiji Shimizub Æ Xiaomei Yanga Æ Keishi Kitamurac Æ Michio Sendab a Department of Medical Physics and Engineering, Graduate School of Medicine, Osaka University, Japan b Department of Image-Based Medicine, Institute of Biomedical Research and Innovation, Kobe, Japan c R & D Department, Medical Systems Division, Shimadzu Corporation, Kyoto, Japan Keywords OS-EM algorithm Æ Channelized hotelling observer 1 Introduction Iterative image reconstruction method has been successfully used for image reconstruction in PET/SPECT. However, there is no theory for determining the optimal image reconstruction parameters such as the numbers of subsets and iterations in the ordered subsets expec-
Int J CARS (2006) 1:461–485 tation-maximization (OS-EM) algorithm [1]. Then, the purpose of this study was to apply channelized hotelling observer (CHO) [2] to determination of the optimal values of the numbers of subsets and iterations in the OS-EM algorithm, and to investigate its usefulness. 2 Methods To investigate the optimal values of the numbers of subsets and iterations in the OS-EM algorithm, a PET scanner was used to scan images of two types of phantoms, a hot spot phantom and a cylindrical phantom with a diameter of 20 cm filled with 18F-fluorodeoxy glucose (FDG). The gated emission scan data were acquired using an electrocardiogram simulator. The OS-EM algorithm was used for 2D image reconstruction of each dataset. The CHO was applied to 50 slices from the images of each type to compute detectability index (dCHO). To investigate the optimal values of the numbers of subsets and iterations in the OS-EM algorithm for SPECT, an elliptical numerical phantom containing a hot cylinder of different sizes and contrasts was generated using a simulation tool [3]. Fifty lesion-absent and lesionpresent images were generated for each condition, and the dCHO value was calculated from a total of 100 images for each condition. 3 Results In PET, the dCHO value increased with an increase of the number of iterations when the number of subsets was 1, 2, 4, 8 and 16. However, when the number of subsets was 32, the dCHO value decreased with an increase of the number of iterations. The optimum numbers of subsets and iterations were 16 and 3. In SPECT, the number of iterations at which the dCHO value became the maximum changed depending on the projection counts and lesion contrast. 4 Conclusion The optimal image reconstruction parameters of the OS-EM algorithm are dependent on the statistical noise of projection data, lesion contrast, and so on, the optimal image reconstruction parameters can be evaluated using CHO. Furthermore, CHO will reduce the need for tedious and time-consuming studies with human observers. References 1. Hudson HM, Larkin R (1994) Accelerated image reconstruction using ordered subsets of projection data. IEEE Trans Med Imaging 13:601–609 2. Barrett HH, Yao J, Rolland JP, Myers K (1993) Model observers for assessment of image quality. Proc Natl Acad Sci USA 90:9758–9765 3. Yamaki N, Natsume T, Maeda H, Mukai T, Murase K (2005) Development of a software package for nuclear medicine image processing and analysis (prominence processor). Jpn J Nucl Med 42:315 [in Japanese]
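For reference, the CHO detectability index used above can be computed from channel outputs of lesion-present and lesion-absent image ensembles. The compact sketch below uses difference-of-Gaussians channels; the channel parameters are arbitrary choices, not those of the study.

```python
import numpy as np

def dog_channels(size, n_channels=4, sigma0=2.0, alpha=1.4):
    """Difference-of-Gaussians channel profiles, one per column (size*size x n)."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r2 = x**2 + y**2
    chans = []
    for j in range(n_channels):
        s1, s2 = sigma0 * alpha**j, sigma0 * alpha**(j + 1)
        chans.append(np.exp(-r2 / (2 * s2**2)) - np.exp(-r2 / (2 * s1**2)))
    return np.stack([c.ravel() for c in chans], axis=1)

def cho_detectability(present, absent, channels):
    """Channelized Hotelling observer index d_CHO from two image ensembles,
    each given as an array of shape (n_images, H, W)."""
    vp = present.reshape(len(present), -1) @ channels
    va = absent.reshape(len(absent), -1) @ channels
    dv = vp.mean(0) - va.mean(0)
    S = 0.5 * (np.cov(vp, rowvar=False) + np.cov(va, rowvar=False))
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))

# d = cho_detectability(lesion_present_stack, lesion_absent_stack, dog_channels(64))
```

Evaluating d_CHO over a grid of subset/iteration settings is the kind of sweep the abstract describes for selecting the OS-EM parameters.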
Simultaneous three-dimensional resolution correction in I-123 IMP brain SPECT using the OS-EM algorithm Y. Takahashia Æ H. Otakeb Æ K. Murasec Æ T. Mochizukid Æ H. Maedae Æ A. Kindaf a Department of Nuclear Medicine Technology, Gunma Prefectural College of Health Science, Japan b Department of Radiological Technology, Gunma University School of Medicine, Japan c Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Japan d Department of Radiology, Ehime University School of Medicine, Japan e Department of Medical Physics and Engineering, Fujita Health University Graduate School of Health Science, Aichi, Japan f Toshiba Medical Engineering Laboratory, Tochigi, Japan Keywords Three-dimensional distance-dependent resolution correction Æ Ordered subsets-expectation maximization algorithm Æ Transmission computed tomography 1 Introduction Collimators are used to improve information about the position of sources by limiting the incident direction of γ-rays and characteristic X-rays to the detectors. However, if the source–detector distance is long, the spatial resolution is reduced by scattered radiation. In this study, we attempted to improve the spatial resolution in I-123 N-isopropyl-p-iodoamphetamine (IMP) brain SPECT using simultaneous three-dimensional distance-dependent resolution correction (DRC) incorporated into the ordered subsets-expectation maximization (OS-EM) algorithm. 2 Methods The same amount of I-123 radioactivity was filled in a phantom. The contrast in the SPECT images was evaluated using profile curves. The effect of DRC was compared between the high-resolution (HR) and general-purpose (GP) collimators, and the σ value of the Gaussian response function was varied to quantify the FWHM. The optimal response function for this phantom was also estimated. Brain SPECT of a healthy 57-year-old man was examined 30 min after intravenous injection of 222 MBq of I-123 IMP at rest. 3 Results A difference in the effect of DRC was observed between the HR and GP collimators; the result with the GP collimator was better than with the HR collimator. Regarding the response function, a σ value of 1.0 was better in terms of repeatability. In the human subject, when DRC was used, the SPECT images demonstrated the cerebellum, basal ganglia and thalamus more clearly than without DRC. In addition, the gray-to-white matter count ratio increased after DRC. Detection of deep vein thrombosis in high risk surgical patients by compression ultrasound and color Doppler sonography N. Nayaka Æ Manavjit Sandhub Æ G. Vermaa Æ L. Kamana a Department of General Surgery, Postgraduate Institute of Medical Education and Research, Chandigarh, India b Department of Radiodiagnosis, Postgraduate Institute of Medical Education and Research, Chandigarh, India 1 Introduction We determined the prevalence of sub-clinical deep vein thrombosis (DVT) in high-risk patients and compared compression US (CUS) with color Doppler (CD) in the diagnosis of lower limb DVT.
2 Methods One hundred high-risk patients fulfilling a minimum of two criteria—abdominal surgery, surgery for advanced malignancy, age > 40 with associated risk factors, traumatic injuries of the lower limb, duration of surgery > 3 h—were included in this study. Patients with pre-existent DVT, on anticoagulant drugs, with a previous history of DVT or pulmonary embolism, with congestive cardiac failure or with a possible cause of extrinsic venous compression such as a tumor mass or enlarged lymph nodes were excluded. Patients were evaluated for DVT by CUS and CD on the 1st, 3rd and 5th postoperative day. 3 Results Mean age was 51 years (64 males and 36 females); 62 had a malignancy and 38 had a swelling in the lower limbs. None of the 100 patients had DVT on CUS or CD. 4 Conclusions The prevalence of DVT in high-risk patients undergoing surgery is low in Indians compared to Western populations. This may be due to Asian diets, limb exercises started immediately post-operatively, awareness among our staff of the importance of early ambulation for patients, use of epidural anaesthesia that is protective against thromboembolism, and the cultural practice of massaging calves, which may prevent venous stasis and reduce the occurrence of venous thrombosis. Subharmonic imaging using amplitude-modulated ultrasound wave Norihide Maikusaa Æ Tadanori Fukamia Æ Tetsuya Yuasaa Æ Yasutaka Tamuraa Æ Takao Akatsukab a Yamagata University, Japan b Yamagata College of Industry and Technology, Japan Keywords Subharmonic imaging Æ Ultrasound Æ Modulated wave Æ Contrast agents 1 Introduction It is well known that the subharmonic component included in the ultrasound echo from a contrast agent is superior in image contrast between tissue and blood. However, the subharmonic signal power from contrast agents is relatively low. In this research, we propose a new imaging method using an amplitude-modulated wave as the transmitted wave and the pulse inversion method to enhance the subharmonic echo signal. 2 Methods We employed an amplitude-modulated wave, y(t), described by

y(t) = \sin(2\pi f_1 t)\,\cos(2\pi f_2 t), \qquad 0 \le t \le n/f_1 \qquad (1)
where f_1 and f_2 are the frequencies of the carrier and the modulation, respectively. The modulated wave, y(t), consists of two frequency components, f_1 ± f_2. We set f_1 and f_2 to 2.125 and 1.275 MHz, respectively, so that the subharmonic frequency of f_1 + f_2 and the second harmonic frequency of f_1 − f_2 both correspond to 1.7 MHz. In other words, the received pulse combines the second harmonic and subharmonic components. Moreover, we used the pulse inversion method to remove the frequency component at f_1 − f_2 (0.85 MHz), because we were concerned that its spectrum would overlap the region around 1.7 MHz. We constructed an agar phantom, as shown in Fig. 1, for a simulated experiment. The phantom was penetrated by 5.0 mm holes at 20.0 and 45.0 mm depth; polyvinyl chloride was injected into the upper hole as imitated biological tissue, and contrast agent (Optison), diluted to 100 µl/ml, into the lower hole. Fig. 1 Structure of phantom We compared the results of ultrasound B-mode imaging between the proposed and conventional methods. 3 Results and conclusion The results of imaging using the proposed and conventional methods are shown in Fig. 2. The polyvinyl chloride layer could not be recognized in Fig. 2a, b; these results show that subharmonic imaging is superior in image contrast between tissue and blood. Moreover, the contrast agent layer could be seen more clearly in Fig. 2a than in Fig. 2c. Finally, we calculated the S/N of subharmonic imaging with the conventional and proposed methods. As a result, the proposed method showed a 7.6 dB S/N improvement over the conventional method.
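A small sketch of the transmit waveform of Eq. (1) and the pulse-inversion summation follows; the sampling rate, pulse length and the toy quadratic scatterer model are assumptions, and the sketch only illustrates cancellation of the linear echo, not the bubble dynamics that actually generate the subharmonic.

```python
import numpy as np

f1, f2 = 2.125e6, 1.275e6          # carrier and modulation frequencies (Hz), from the text
fs = 50e6                          # sampling rate (assumed)
n_cycles = 8                       # 'n' in Eq. (1), assumed
t = np.arange(0, n_cycles / f1, 1 / fs)

def transmit(sign=+1.0):
    """Amplitude-modulated transmit pulse of Eq. (1); its spectrum peaks at f1 +/- f2."""
    return sign * np.sin(2 * np.pi * f1 * t) * np.cos(2 * np.pi * f2 * t)

def echo(tx):
    """Toy scatterer response: a linear term plus a weak quadratic nonlinearity."""
    return tx + 0.1 * tx**2

# Pulse inversion: sum the echoes of the pulse and its inverted copy;
# the linear (odd-order) parts cancel and only the nonlinear residue remains.
pi_sum = echo(transmit(+1.0)) + echo(transmit(-1.0))
spectrum = np.abs(np.fft.rfft(pi_sum))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
print("strongest components (MHz):", np.sort(freqs[spectrum.argsort()[-4:]]) / 1e6)
```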
Fig. 2 Results of subharmonic imaging (a and b are with proposed and conventional subharmonic imaging, c and d are with fundamental imaging) Digitally integrated teleradiology network—2 years experience W. Auffermanna Æ J. Stettinb a Hamburg University of Applied Sciences, HANSERAD Radiology Center, Germany b Hamburg University of Applied Sciences, Germany Keywords PACS Æ RIS Æ DICOM Æ Virtual private network Æ Teleradiology Æ Wide area network ÆWorkflow 1 Introduction The realization of a completely integrated radiology network is a fascinating challenge on many levels of a radiology network enterprise. Implementing PACS is a process, not an event. The PACS process has a life cycle replete with key decision points. Learning how to handle these events in a holistic fashion can be a crucial factor in the success of a digital image management system. The process comprises many factors on many levels. Issues can be resolved even after your PACS is up and running. 2 Methods For the realization the question is important how the radiology appointments are set, who retrieves the patient, how patients are managed in radiology with and without film, and how quickly radiology releases patients to other caregivers. The wide area network (WAN) connecting the four hospital sites is based on 2–4 mbps leased line connections. Since the costs of leased lines are basically defined by performance and distance it is required to find a compromise between cost and performance. Network security is one more essential topic: To keep the network secure, all border entry points are provided with firewall appliances. Secure network access for teleradiology users—home offices, external and internal
Int J CARS (2006) 1:461–485 referrers—is based on web-based virtual private network (VPN) technology. 3 Results Initially we had progressed well into the PACS deployment process with a tactical approach before realizing that no provision had been made for providing PACS training to physicians and other staff members, or for managing workflow changes. Optimization of workflow is the key for productivity gain on each step of the process. Radiology departments should firstly, prior to the implementation of an IT-system, develop good workflow and management, then improve the efficiency of its imaging equipment, upgrade the comprehensive quality of its staff and advance the level of medical treatment and education and scientific research. Conversion from traditional radiology to digital radiology should be an essential step in the process. 3 Conclusion This is to our knowledge the first completely digitally integrated and web-connected radiology department in Germany. Our goal for the system is, that it should at some point in the near future reflect and materialize the physical process chain as well as the intellectual workflow in a way that is adapted to the individual preferences and tastes of each physician as well as the other co-workers. Because PACS affects such a broad spectrum of professionals in numerous ways, it demands a holistic, strategic approach. Providers who adhere to this basic wisdom will avoid the need for remediation. And those in need of remediation can best do so by honouring the underlying wisdom of a holistic approach, applied at key decision points and throughout the PACS life cycle. OSIRIX: an open source platform for multimodality image processing and peer-to-peer image communication Antoine Rosseta Æ Osman Ratiba Æ Joris Heubergera a Department of Radiology and Medical Informatics, University Hospital of Geneva, Switzerland Abstract A multidimensional image navigation and display software was designed for large sets of multidimensional and multi-modality images such as combined PET-CT studies that require three-dimensional image fusion and volume rendering We have elected to explore peer-to-peer technology as an alternative to centralized PACS architecture for the increasing requirements for wide access to images across the enterprise. The infrastructure implemented allows fast and efficient access independently from the actual physical location of the data with a performance 10–20 times faster than central PACS archive. It is particularly suitable for large hospitals and academic environments where clinical conferences, interdisciplinary discussions and successive sessions of image processing are often part of complex workflow or patient management and decision making. Keywords PACS workstation Æ Peer-to-peer technology Æ Workflow Æ Image navigation 1 Introduction The rapid evolution of digital imaging techniques and the increasing number of multidimensional and multimodality studies constitute a challenge for PACS workstations and image display programs. While web-based solutions have emerged as simple ways for wide distribution of images they often lack the necessary tools for advanced image processing and 3D visualization. The goal of our project is to allow users across the enterprise to access any study anytime without the need for prefetching or routing of images from central archive servers. 
2 Methods We implemented a new peer-to-peer and remote file access technology developed by Apple Computer called ''Bonjour'' that is embedded in the latest UNIX-based OS X operating system, version 10.4. Bonjour allows applications to share data and files remotely with
479 optimized data access and data transfer. Our Open-source image display platform called OsiriX was adapted to allow sharing of local DICOM images through direct access of a local SQL database to be accessible from any other OsiriX workstation over the network. A server version of Osirix Core Data database also allows to access distributed archives servers in the same way. 3 Results The infrastructure that we implemented provides a seamless integration of multiple workstations and archive servers allowing image data to be shared across the network without the need of a central database or archive. The performance of peer-to-peer access to the images was found to be 10–20 times faster that accessing the same date from the central PACS archive. The convenience and high performance of the system allows multiple users to share data more efficiently and perform advanced image processing and analysis in a distributed environment. 4 Discussion and conclusion Open-Source software combined with peer-to-peer architecture connecting multiple workstations and temporary storage servers can provided an alternative system that can complement traditional PACS infrastructure and allow rapid and easy exchange of image data among large number of user and image processing workstations. It offers an attractive solutions for the increasing demand for multidimensional and multimodality image visualization outside the traditional radiology diagnostic interpretation schema, allowing for multidisciplinary clinical users to rely on image processing tools for clinical decision making and therapeutic interventions. Software interoperability of heterogeneous medical information systems Ke Lia Æ Dezhong Yaoa a School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, P.R. of China Keywords Medical information system Æ Interoperability Æ Heterogeneous Æ CSCW 1 Introduction This paper presents a framework architecture of cooperative work in heterogeneous platform, and simulates a scenario of cooperative work using XML. 2 Design principles 1. Ensuring interoperability with the legacy and incompatible system 2. Basing on the existing communication Infrastructure 3. Layered design 4. Using HL7 5. XML-based model. 3 Architecture description This paper defines a framework to cooperative work between heterogeneous medical information system. The system framework consists of four layers. 1. User interface layer: in the architecture, Web is used for the integration of user interface. User associate with each other using integrative user interface. 2. Cooperation layer: the users work independently on their own task. When negotiation is needed, they enter into the shared workspace and interact with shared data with the tools provided by shared workspace [1]. 3. Information exchange layer: various healthcare information systems have exchange information in XML format. Using Health Level 7 (HL7) standard between the various information system, we were able to develop application based on XML [2]. 4. Common communication layer: the medical content and storage management system must support the communication protocols used within the various organizations.
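As an illustration of the XML-based information exchange layer, a toy message passed between an HIS and a LIS might look as follows; the element names are invented for this sketch and do not follow a real HL7 schema.

```python
import xml.etree.ElementTree as ET

def build_lab_order(patient_id, test_code):
    """Illustrative XML message exchanged between an HIS and a LIS.
    Element names are hypothetical placeholders."""
    msg = ET.Element("LabOrder")
    ET.SubElement(msg, "PatientID").text = patient_id
    ET.SubElement(msg, "TestCode").text = test_code
    return ET.tostring(msg, encoding="unicode")

def parse_lab_order(xml_text):
    msg = ET.fromstring(xml_text)
    return {child.tag: child.text for child in msg}

payload = build_lab_order("P-0001", "CBC")
print(parse_lab_order(payload))   # {'PatientID': 'P-0001', 'TestCode': 'CBC'}
```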
4 A simple scenario of cooperative work We also address a simple scenario of heterogeneous HIS and LIS in a hospital, using Web Services technology. The sample scenario demonstrated the following features: (1) cooperative work between the heterogeneous medical information systems; (2) compatibility with several heterogeneous telematic means; (3) adaptability of the graphical interfaces and their compliance with user needs; (4) flexibility and extensibility through the XML-based HL7 standard. 5 Conclusion Based on the proposed four-layer architecture, medical information was exchanged using XML, enabling cooperative work among heterogeneous traditional medical systems. References 1. Ganguly P, Ray P (2000) Software interoperability of telemedicine systems: a CSCW perspective. Seventh international conference on parallel and distributed systems (ICPADS’00), pp 349–355 2. Ahn C, Nah Y et al (2001) An integrated medical information system using XML. Human Society Internet 2001, pp 307–322 Disaster recovery planning of picture archiving and communications systems C.K.S. Tonga Æ K.K. Chana a Medical Physics Department, Pamela Youde Nethersole Eastern Hospital, Hospital Authority, Hong Kong, P.R. of China Keywords Disaster recovery planning Æ PACS Æ ISO 27000 Æ Information security management system 1 Introduction Disaster recovery (DR) (Toigo et al. 2002) is the ability of an infrastructure to restart operations after a disaster, and it is one of the essential requirements of the ISO 27000 information security management system (BSI et al. 2005). A picture archiving and communications system (PACS) is no different from any other computer system: disasters cannot be avoided. The objective of this paper is to discuss the planning of the recovery of a PACS after a disaster. 2 Methods The proposed disaster recovery planning (DRP) of a PACS includes six steps: definition of the core business model, business impact analysis, backup planning, design of disaster recovery procedures, maintenance planning, and documentation of the maintenance plan. 3 Results Before detailed planning, the core business model of the PACS
should be identified, as shown in Fig. 1. From the core business model, applying the above steps, the DR plan of the PACS was developed as shown in Fig. 2. 4 Conclusion The importance of DR cannot be overstated. These vital activities require careful planning by PACS professionals to establish systems that function smoothly in the near term while providing for any eventuality in the future.
Fig. 1 Core business model of a PACS
Fig. 2 Disaster plan of a PACS (steps 1–13; excerpt)
Step  Recovering sub-process                      Responsible person      Process location
1     Image distribution to clinicians            PACS team, contractor   Web servers
2     Image distribution through Cisco switches   PACS team, contractor   Cisco switches
3     Image online storage                        PACS team, contractor   PACS servers, SAN
…     …                                           …                       …
13    Remote maintenance                          PACS team               RAS server and Cisco router
References 1. Toigo JW (2002) Disaster recovery planning: strategies for protecting critical information assets, 3rd edn. Prentice Hall PTR, Englewood Cliffs 2. BSI (2005) BS ISO/IEC 27001:2005 (BS 7799-2:2002) Region of interest encoding method in JPEG scheme for normal JPEG viewers Y. Okuraa a Department of Clinical Sciences, Faculty of Health Sciences, Hiroshima International University, Japan Keywords PACS Æ Image processing Æ Image compression 1 Introduction JPEG-compressed images are widely employed. In electronic patient record systems they serve as reference images for physicians, not for primary diagnosis. Radiologists can often define regions of interest (ROIs) that are more significant than the rest of the image; higher image quality is required inside the ROIs but not outside them. However, the JPEG scheme does not include the concept of an ‘‘ROI’’. The purpose of this study is to develop a new image compression method that applies different compression ratios inside and outside an ROI while remaining compatible with the normal JPEG scheme. 2 Materials and methods In our method, two quantization matrices are used in the encoding phase. As in normal JPEG encoding, a quantization matrix with relatively large values is used to quantize the region outside the ROI. Another quantization matrix with relatively small values is employed to quantize the region inside the ROI. However, only one quantization table can be recorded in the JPEG file format. Therefore, the inside of the ROI is decoded correctly, whereas the outside region is de-quantized with a different quantization matrix. We carefully designed the quantization matrix for the outside of the ROI to minimize the degradation of image quality. 3 Results and discussion The average file size of images compressed by the normal JPEG method was 23,107.3 B. With our method, the images were compressed to 21,401.3 B while maintaining image quality inside the ROI equivalent to that of the normal JPEG images. This size is 7.4% smaller than normal JPEG. The advantage of our method is that a normal JPEG decoder or viewer can be used to decode images compressed by our method (Figs. 1, 2).
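The dual-quantization idea can be illustrated with a short sketch. The following Python fragment is not the authors' implementation: entropy coding and JPEG file writing are omitted, and the matrix values are illustrative only. It quantizes each 8 × 8 DCT block with a fine matrix inside the ROI and a coarse matrix outside, then decodes every block with the single stored (fine) matrix, as a normal JPEG viewer would.

```python
# Sketch of the ROI-dependent quantisation idea (not the authors' implementation).
# Assumes a greyscale image whose dimensions are multiples of 8.
import numpy as np
from scipy.fft import dctn, idctn

Q_ROI = np.full((8, 8), 4.0)    # small values: fine quantisation, recorded in the file
Q_OUT = np.full((8, 8), 32.0)   # large values: coarse quantisation, never recorded

def encode_blocks(image, roi_mask):
    """Quantise each 8x8 DCT block with Q_ROI inside the ROI and Q_OUT outside."""
    h, w = image.shape
    coeffs = np.zeros((h, w), dtype=np.int32)
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            block = image[y:y+8, x:x+8].astype(np.float64) - 128.0
            q = Q_ROI if roi_mask[y:y+8, x:x+8].any() else Q_OUT
            coeffs[y:y+8, x:x+8] = np.round(dctn(block, norm="ortho") / q)
    return coeffs

def decode_blocks(coeffs):
    """Decode as a normal JPEG viewer would: only Q_ROI is recorded in the file,
    so every block, including the coarsely quantised ones, is de-quantised with it.
    The mismatch outside the ROI is why that matrix must be designed carefully."""
    h, w = coeffs.shape
    out = np.zeros((h, w))
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            out[y:y+8, x:x+8] = idctn(coeffs[y:y+8, x:x+8] * Q_ROI, norm="ortho") + 128.0
    return np.clip(out, 0, 255).astype(np.uint8)
```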
Fig. 1 Compressed image by the normal JPEG method (10:1). The file size was 25,720 B
Fig. 2 Chest image compressed by our method. The file size was 23,018 B. The right pulmonary region was selected as an ROI for a lower compression ratio than the rest of the image
Survey research on the volume of medical images generated in Japan: analysis by medical staff and the number of radiological devices K. Ogasawara Department of Health Sciences, School of Medicine, Hokkaido University, Sapporo, Japan Keywords Medical image Æ Data volume Æ Survey research 1 Purpose In this study, we investigated the volume of medical images generated in medical facilities in Hokkaido prefecture, Japan, which could be used as basic data for the introduction of PACS. 2 Methods The questionnaire was sent by postal mail to the chief radiographers of 661 medical facilities. It covered the number of physicians, the number of beds, the number of diagnostic machines such as CT and MRI, the daily number of images and the number of images per film. The following seven items were calculated per day by facility type and scale: (1) the data volume of all images, (2) the data volume of the images ordered by one physician, (3) the data volume of the images generated by one radiographer, (4) the data volume of the images per bed, and the data volume of the images generated by one (5) plain X-ray system, (6) CT and (7) MRI. In the study, the medical facilities were classified into five types. To investigate the relationship between hospital scale and the data volume of the images generated by each physician, each radiographer and each plain X-ray system, CT and MRI, the Kruskal–Wallis test was performed using SPSS. Stepwise multiple regression analysis was performed to find the related factors by considering the ten variables. 3 Results The valid response rate of the survey was 57.6% (sent: 661, received: 387). The average data volume generated daily by the medical facilities was 1.7 GB. The average data volume of images generated by examinations ordered by one physician in a day was 202 MB. The average data volume of images generated daily by one radiographer was 422.5 MB. The average data volume of images generated daily per bed, in hospitals with more than 20 beds, was 10.2 MB. No significant difference was found between hospital scale and the data volume of images generated by one radiographer (p = 0.886) or by one bed (p = 0.768). The following result was obtained by stepwise multiple regression analysis between the data volume of the generated images and the ten variables in hospitals with more than 20 beds: [Data volume of medical images per day] = 344.6 × [Number of MRI units] + 310.3 × [Number of CT units] + 66.1 × [Number of X-ray TV units] + 183.7 × [Number of radiographers] + 2.8 × [Number of outpatients] − 2.0 × [Number of beds] − 8.0 × [Number of physicians] − 204.17 (MB); adjusted R² = 0.738 (**p < 0.01, *p < 0.05).
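As a worked example, the regression model above can be evaluated for a hypothetical facility. The coefficients below are those reported in the survey results, whereas the facility figures are invented purely for illustration.

```python
# Worked example of the reported regression model (coefficients from the survey
# results above); the facility figures below are hypothetical.
def predicted_daily_image_volume_mb(mri_units, ct_units, xray_tv_units,
                                    radiographers, outpatients, beds, physicians):
    return (344.6 * mri_units + 310.3 * ct_units + 66.1 * xray_tv_units
            + 183.7 * radiographers + 2.8 * outpatients
            - 2.0 * beds - 8.0 * physicians - 204.17)

# Hypothetical 300-bed hospital: 1 MRI, 2 CT, 3 X-ray TV units, 8 radiographers,
# 500 outpatients per day, 40 physicians.
volume = predicted_daily_image_volume_mb(1, 2, 3, 8, 500, 300, 40)
print(f"{volume:.1f} MB/day")   # about 2,909 MB, i.e. roughly 2.9 GB per day
```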
Effectiveness of the cardiology workflow with the cardiology PACS and the catheterisation reporting system at Juntendo University Hospital in Japan T. Ozekia,d Æ M. Oea Æ K. Miyauchib Æ M. Zuchowskic Æ Y. Takebayashid Æ D. Aoshimad a Toshiba Medical Systems Corporation, Otawara, Japan b Juntendo University Hospital, Tokyo, Japan c Heartlab, an Agfa Company, Westerly, RI, USA d Shizuoka University, Hamamatsu, Japan Keywords Cardiology workflow Æ Cardiology PACS Æ Catheterisation report 1 Purpose In this paper, we focus on the information technology applied to the catheterisation report creation system within the cardiology PACS and evaluate its effect on the cardiology workflow at Juntendo University Hospital. 2 Methods We compared the workflow before and after installing the cardiology PACS, including its digital reporting module. 3 Results Before the implementation of the cardiology PACS, the average time to create a report after the completion of the catheterisation exam was 5 days. This process included not only film development, but also the creation of an original text report on a word processor and the manual input of the patient’s data into a FileMaker Pro database. Using the PACS and its digital reporting module, this time has been reduced to less than 1 day. First, the clinical images are now accessible from a PC immediately after the exam. Second, the patient demographics are automatically entered into the database and into an electronic report template that the clinician can complete at the same workstation. Furthermore, the average time to search for and retrieve a patient’s previous study from the film library
was 45 min. The average time to query and retrieve a patient’s previous study using the cardiology PACS is less than 1 min. 4 Conclusion The use of the PACS has improved the workflow. First, it has eliminated the lengthy process of film development. Second, the report process has been streamlined. Finally, the patient’s historical images, and now reports, are available nearly instantaneously. References 1. Miyauchi K (2004) Usefulness of cardiovascular information network. Luncheon seminar #13 of the 68th Japan Cardiology Society Conference, 27 Mar 2004, presentation slides 2. Elion J (2004) Information systems for cardiac catheterization. Luncheon seminar #13 of the 68th Japan Cardiovascular Society Meeting, 27 Mar 2004, presentation slides 3. Oe M (2001) The multi-modality cardiac network system. Toshiba Med Rev 80, 2001/2 4. Elion J, Oe M, Ozeki T (2002) Development of Toshiba cardiac network system (CardioAgentTM). Toshiba Med Rev Cardiol Special Issue, 2002/2 Capacity planning of web-based picture archiving and communications systems C.K.S. Tonga Æ K.K. Chana a Medical Physics Department, Pamela Youde Nethersole Eastern Hospital, Hospital Authority, Hong Kong, P.R. of China Keywords Capacity planning Æ PACS Æ ISO 27000 Æ Information security management system 1 Introduction As for other Internet services, speed, around-the-clock availability and security are the most common indicators of the quality of service of a PACS (Dreyer et al. 2005). The ISO 27000 standard (BSI et al. 2005) requires a capacity plan for a quality information system. To determine the most cost-effective architecture and system, a quantitative capacity planning approach is needed. In this paper, a feasibility study of implementing a quantitative capacity planning technique (Menasce et al. 2001) for a Web-based PACS is presented. 2 Methods Capacity planning of a Web-based PACS service requires systematically following a series of steps: definition of business models and measurable goals, understanding the service architecture, characterizing the workload, obtaining model parameters, forecasting workload evolution, developing a performance model, calibrating and validating the model, predicting service performance, and analyzing cost-performance tradeoffs. 3 Results These steps were applied to our Web-based PACS: business models and measurable goals were defined (MR, CT, CR and other images); the service architecture (Web, PACS and database servers) was analysed; the workload (user interactions such as key presses, cursor movements and button clicks) was characterised; model parameters (CPU speed, RAM size, storage capacity) were obtained; workload evolution (film volume, user numbers, peak hours) was forecast; a performance model was developed and calibrated until its output matched the measured output; service performance was predicted for anticipated changes; and cost-performance tradeoffs were analysed to find a cost-effective solution.
Comparison of internet-based and mobile network videoconferencing systems for emergency teleconsultations in orthopedics, traumatology and radiology W. Glinkowskia Æ A. Wojciechowskic Æ K. Ma˛kosad Æ K. Maraseke Æ M. Gilf Æ A. Go´reckia a Department of Orthopedics and Traumatology of Locomotor System, Center of Excellence ‘‘TeleOrto’’, Medical University of Warsaw, Poland b Department of Anatomy, Center of Biostructure, Medical University of Warsaw, Poland c Department of Radiology, Medical University of Warsaw, Poland d Entropia, Warsaw, Poland e Multimedia Department, Polish-Japanese Institute of Information Technology, Warsaw, Poland f Department of Integrated Teleinformatic Projects, Polkomtel S.A., Warsaw, Poland Keywords Mobile videoconferencing Æ Internet Æ Teleconsultation Æ Real time messaging protocol 1 Introduction Videoconferencing and data sharing systems are needed when a facility has separately located departments (Medical Imaging and Orthopedic Trauma) and no PACS installed. The aim of the study was to continue previous research [1] and evaluate the capabilities of available systems (Internet-based and mobile) for videoconferencing between departments. 2 Materials and methods Macromedia Flash Player was installed on both sides of the videoconferencing system. We used web-based or mobile-device videoconferencing to provide on-line or store-and-forward capabilities for the daily remote review of all kinds of medical images and simultaneous direct teleconsultation. Conventional X-ray images and CT scans were sent between the consultant radiologist and the trauma surgeon via the Flash-enhanced videoconferencing system, an MDA device and a mobile phone video call. 3 Results The quality of the image depended mostly on screen size and resolution. The best resolution was achieved on a PC screen and on the radiology workstation. Medical images evaluated on a PC screen (Fig. 1) provided reliable and diagnostically sufficient image resolution, whereas MDA (Fig. 1) and mobile phone images were informative rather than diagnostically reliable. The Internet-based Flash-enhanced videoconference was superior to MDA and video call teleconsultation. 4 Conclusion Internet-based Flash-enhanced videoconference on a PC, MDA and video call teleconsultations for orthopedic trauma patients may
4 Conclusion The bottom line in managing a Web-based PACS service is guaranteeing performance, availability and the return on improvement of the PACS service. This is possible only if the PACS infrastructure is ready to provide customers with high-quality service. A Web-based PACS infrastructure is complex enough to preclude any guesswork when it comes to capacity planning.
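As an illustration of the kind of quantitative model such capacity planning leads to, the sketch below applies the operational utilisation law (utilisation = throughput × service demand) to a hypothetical workload forecast. The request rate, resource names and service demands are assumptions for illustration only, not measurements from the hospital's PACS.

```python
# Minimal capacity check using the utilisation law (utilisation = throughput x
# service demand).  All workload and service-demand figures are hypothetical.
forecast_requests_per_hour = 1800          # predicted peak image requests per hour
service_demand_sec = {                     # seconds of work per request on each resource
    "web_server": 0.15,
    "pacs_database": 0.40,
    "archive_storage": 0.90,
}

throughput_per_sec = forecast_requests_per_hour / 3600.0
for resource, demand in service_demand_sec.items():
    utilisation = throughput_per_sec * demand
    status = "OK" if utilisation < 0.7 else "upgrade or add capacity"
    print(f"{resource}: {utilisation:.0%} utilised -> {status}")
```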
References 1. Dreyer KJ, Hirschorn DS, Thrall JH, Mehta A (2005) PACS: a guide to the digital revolution, 2nd edn. Springer, Berlin Heidelberg New York 2. BSI (2005) BS ISO/IEC 27001:2005 (BS 7799-2:2002) 3. Menasce DA, Almeida VAF (2001) Capacity planning for Web services: metrics, models, and methods, 2nd edn. Prentice Hall PTR, Englewood Cliffs
Fig. 1 PC versus Pocket PC videoconference
speed up surgical procedures during emergency duties in a facility without PACS. References 1. Glinkowski W, Ciszek B, Gil M (2005) Teleconsultations in orthopedic trauma and neurotrauma utilizing a mobile digital assistant. International Congress Series 1268:1292 Telemedical cooperation in a virtual reality environment Steffen Ma¨rklea Æ Marion Schmidta a Technische Universita¨t Berlin, Germany Keywords Telemedicine Æ Virtual reality Æ Medical application 1 Purpose The presented platform for telemedical cooperation allows collaborative examination of text and image data from the electronic patient record, presentation of three-dimensional models of patients’ anatomy, communication with remote partners via audio- and videoconference components and even meetings with virtual representatives of the other users. 2 Methods The system is realized as a client-server system, which provides a central server as the entry point for several clients. The implementation is based on Java and several freely available libraries (Java3D, JSDT [1], JMF [2], ITK [3], VTK [4], ...) and can be run on all popular computers independently of the platform. Medical information systems are connected via HL7 and DICOM. Administration of the users’ settings is carried out in a MySQL database running on the server. Anatomic objects are created from medical image data as follows: images are read from the PACS, preprocessed, filtered, segmented and visualized in the Java3D scenegraph. A toolbox with predefined processing pipelines, depending on modality and anatomic organ, is provided. For this purpose, the visualization toolkit VTK and the National Library of Medicine’s image processing toolkit ITK were included. As they are implemented in C++, they were embedded through additional Java interfaces generated with SWIG. In each processing step, the user may interactively modify the parameters, for example threshold values or ranges, where appropriate. 3 Results and conclusion The prototype has been implemented at the TU Berlin by students as part of teaching projects and theses. Almost all the required functions have now been realized and can be presented as a proof of concept in our demonstrator environment. Users can work collaboratively with all types of data from the underlying electronic patient record. To provide the most comprehensive representation of medical data, anatomic models can be created from diagnostic images. Inside the virtual universe, users can move about, select objects, interact, meet other users and establish contact with them, talk to them in an audio- or video-conference and collaboratively examine data or work on objects. A haptic force-feedback device, the Phantom, has been connected to the system as one of the possible interaction devices; users can therefore not only look at objects in the virtual world but also touch and feel them. References 1. Java Shared Data Toolkit. http://www.java.sun.com/products/javamedia/jsdt/index.jsp 2. Java Media Framework API. http://www.java.sun.com/products/java-media/jmf/index.jsp 3. NLM Insight Segmentation and Registration Toolkit. http://www.itk.org/ 4. The Visualization ToolKit. http://www.vtk.org/ Application of telemedicine for pregraduate student training and postgraduate education in neurosciences Nova´k Z.a Æ Rˇı´ha I.b Æ Chrastina J.a Æ Pohanka M.a
a Neurosurgical Clinic, Medical Faculty, Masaryk University, St. Anne’s Hospital, Brno, Czech Republic b Institute of Biomedical Engineering, FEEC BUT, Brno, Czech Republic Keywords Telemedicine Æ Neurosurgery Æ Pregradual education Æ Postgradual training 1 Introduction After the reconstruction of the Neurosurgical Clinic MF MU, a telemedicine education project was initiated. The aim was the transfer of image data from the operating theatre or surgical environment to the teleconferencing room or the MU Congress Centre. Optical cables are used because of the isolated systems in the operating theatres and for flexibility of future connections. 2 Methods Data are transferred by means of Polycom (LAN transmission, H.323 standard). Video works in the PAL and NTSC standards. The data transfer speed ranges from 56 kbps to 2 Mbps, and the H.263+ standard is used for quality augmentation. The networks based on optical connections have a transfer bandwidth of 1 Gbps, sufficient for full DV stream transmission considering future requirements. The system contains protected circuits for fully bidirectional sound transmission. Variability of interconnections is the main advantage, together with the integration of surgical modalities (BrainLab VectorVision Sky, endoscope with OR1 archiving, Zeiss microscope) and imaging modalities for teleconferencing. 3 Results Regarding educational possibilities, the concept followed direct mutual communication between the operating surgeon and the students. The teacher in the teleconference room presents the details to the students. Questions asked by the students can be answered both by the surgeon and by the teacher. Communication in foreign languages can easily be listened to in the operating theatre and the conference room. Both students and postgraduates express positive feelings regarding the surgical details. 4 Conclusion The capacity of the operating theatre is limited, and economic aspects and safety measures cannot be neglected. The amount of information in the neurosciences is continuously growing. The concept of multimodality education augments the effectiveness of the teaching process.
Telemedicine cockpit with intuitive interface for integrated control of communication and presentation T. Tsukasaa Æ M. Morib Æ K. Horic Æ T. Kurodad Æ H. Yoshiharad a Graduate School of Informatics, Kyoto University, Japan b Innovation Plaza Kyoto, Japan Science and Technology Agency, Japan c School of Radiological Technology, Gunma Prefectural College of Health Sciences, Japan d Department of Medical Informatics, Kyoto University Hospital, Japan Keywords Telemedicine Æ Multimodal interface Æ Application-level QoS control 1 Purpose Many studies and trials of telemedicine have been performed. However, most of them require setting up many devices and network components before telemedicine can start. To promote telemedicine, it should work ‘‘plug & play’’; an integrated information environment that simplifies preparation and operation is therefore indispensable. To enable telemedicine, various medical data must be exchanged via the network. A doctor requires multiple items of information about a patient, but because network bandwidth is not unlimited, damaged information could lead to medical mishaps. On the other hand, a doctor needs to glance over various information to grasp a patient’s condition and, at the same time, needs to watch certain information carefully to diagnose or treat the patient. This research proposes a ‘‘telemedicine cockpit’’ with an intuitive interface for integrated control of
communication and presentation, as mentioned above. The telemedicine cockpit lets users control arrangement and communication quality at once via intuitive commands based on voice and action. 2 Methods The telemedicine cockpit consists of a presentation control part, a QoS control part and an input interface part. The presentation control part arranges various medical data on a display setup consisting of small displays close to the doctor and a large-scale screen behind them. It places information of high importance in the centre of the doctor’s sight (important images) and the other information in the surroundings (reference images). The QoS control part transmits important images at maximum quality and reference images at minimum quality within the bandwidth limit. The input interface part lets the doctor intuitively select important images from the presented catalogue, using commands based on voice and action such as ‘‘put that there’’. The telemedicine cockpit thus lets a doctor control the communication and presentation of the referenced medical information through a single intuitive action. 3 Results The aim of the experiment was to confirm that the telemedicine cockpit simplifies operation. In the experiment, test subjects operated a master-slave robot through the telemedicine cockpit via a LAN. The transmitted information consisted of three MPEG-2 streams: two image streams showing the target point from different viewpoints and one long-distance view of the slave robot. The test subjects could control the communication and presentation of the three image streams with a single ‘‘put that there’’ action. The result shows that a user can manipulate two different parameters in a single action, whereas without the proposed telemedicine cockpit an additional engineer would be required to handle one of them. Feasibility of MPEG compression of CT and MRI image series: improving the mobile access to teaching files and diagnostic cases Reponen J.a,b,c Æ Niinima¨ki J.a a Department of Diagnostic Radiology, Oulu University Hospital, Finland b FinnTelemedicum, University of Oulu, Finland c Raahe Hospital, Finland Keywords Mobile phone Æ Image compression Æ Computed tomography Æ DICOM 1 Introduction There is a need for image compression in teleradiology in order to gain faster transmission speeds [1]. We have previously used JPEG and wavelet compression for individual CT or MRI slices [2]. Multislice CT poses a new challenge, with several hundred slices in a typical examination. In the present work, we compressed series of CT images with motion video compression algorithms. 2 Methods Toshiba multislice CT studies (one bone, one abdominal and one whole-body scan) were transformed into MPEG-1 video with a DICOM workstation. Subsequent conversion to MPEG-2 video format was done with TMPGEnc v.1.2 software. The video sequences were stored on VideoCD (VCD) disks, SuperVideoCD (SVCD) disks and DVD-compatible SVCD disks with Ahead Nero software. File sizes and image quality were recorded. 3 Results The preliminary results show that there is a benefit in file size compared to DICOM files and to JPEG frame-by-frame compression (Table 1). In MPEG-1 format the compression ratio relative to DICOM was on average 1:24 and in MPEG-2 format 1:40, both depending on the image content. Quality was evaluated visually and was acceptable for typical emergency situations.
Table 1 File sizes (in kB) for different formats
Series     Original DICOM   JPEG frame   MPEG-1   MPEG-2 SVCD
Bone       70,656           5,520        2,315    1,789
Abdomen    67,072           5,240        3,537    1,793
Body       121,344          9,480        5,531    2,738
4 Conclusion The preliminary results suggest that there is a benefit in file size compared to frame-by-frame compression and that the image quality
can be good enough for the evaluation of emergency cases. Also, the current consumer market provides several devices suitable for downloading and displaying video. The main drawback of the method is that windowing or measurements are not possible with standard DVD players or software players. However, with preset windowing the images are comparable to film prints. The results suggest that compressed video can be a suitable form of transmission for longer image series over a slow mobile connection. It could also be utilised for teaching archives. References 1. Yamamoto LG (1995) Using JPEG image compression to facilitate telemedicine. Am J Emerg Med 13:55–57 2. Reponen J, Niinima¨ki J, Holopainen A et al (2000) MOMEDA, a mobile smartphone terminal for DICOM images and web-based electronic patient data. In: Proceedings of EuroPACS 2000, Graz, Austria, 2000, pp 274–277 Optimisation of workflow and data flow in radiology E. M. Robertsona Æ G. A. McKenziea a Department of Clinical Radiology, Aberdeen Royal Infirmary, United Kingdom Keywords Dataset Æ Capacity Æ Demand Æ Workforce Æ Planning 1 Introduction The need to reduce patient waiting times for diagnostics to comply with Scottish Health Department targets, and to sustain improvements against a background of workforce shortages, has created an imperative to understand and optimise workflow through the use of robust, standardised data collection. This will provide a sound basis for efficient diagnostics delivery, electronic health records, trend analysis, demand prediction, workforce planning and matching with education and training needs. 2 Methods A detailed understanding of the current service is being achieved through rigorous analysis of datasets derived from capacity and demand audits. Process mapping at institutional level then allows exploration of role development and skill mix for professionals, thereby improving workflow and addressing clinical governance and the delivery of waiting time targets. Identification of local improvement opportunities forms part of a process that will create transferable lessons between institutions while providing cumulative information for regional resource allocation and strategic planning. Detailed engagement of key stakeholders is resulting in a universal core dataset available for individual stakeholder analysis by their own parameters, thereby obviating the need for retrospective ad hoc calls for information in non-standard form. Scottish national dataset programmes dovetail with this work, ensuring an accredited approach. 3 Results Agreed core datasets are being achieved through comprehensive engagement of stakeholders. The outcomes are building blocks for improved workforce planning, enhanced education and training opportunities, and a regional approach to infrastructure investment in imaging equipment. Engagement with public health information allows major disease trends and diagnostic profiles to be established, enabling improved service planning and ensuring timely, clinically appropriate and equitable service provision.
4 Conclusion Long waits for diagnostics in Scotland have led to government-imposed waiting time targets. The shortage of professional staff provides a further drive to engage in service re-design. This necessitates analysis of universal, robust data and using ‘‘information for improvement’’. To enable this, the software functionality of radiology information systems is being specified at a national level. This work in progress provides a sound basis for national service planning, funding allocation and engagement with higher education institutions to underpin the education and training requirements. Implementing a digital image acquisition system in a new hospital—first step for a surgical pacs? Umberto Noccoa Æ Antonio Mezzacapob Æ Roberto Castellib Æ Emanuele Lettierib Æ Raffaele Novarioc Æ Fabio Tanzid a Clinical Engineering Department, Varese Town Hospital, Italy b Politecnico di Milano School of Engineering, Italy c Department of Clinical Science, University of Insubria, Italy d Medical Physics Department, Varese Town Hospital, Italy Keywords OR Æ HTA Æ Surgical PACS 1 Introduction As part of the process of building 20 ORs, a study was developed to evaluate the need for a system capable of both acquiring and broadcasting images within or outside the ORs. 2 Methods In keeping with the HTA philosophy, a structured multiple-choice questionnaire was handed to 20 surgeons and 12 staff nurses. Both groups of professionals were questioned on many topics, such as the importance given to devices in the OR and their layout, the best way
to display radiological images, the best way to acquire and record images, the need for real-time bidirectional image transmission for teaching purposes, and the need for videoconferencing and tele-consulting. 3 Results The results can be summarised as follows. With regard to the OR layout, image digitisation received the highest ranking and video broadcasting the lowest. A non-sterile on-field display was considered the highest-rated way of displaying radiological images, the common image viewer the lowest. The main reason for acquiring videos was the availability of video for future viewing or analysis. Both surgeons and staff nurses consider the possibility of broadcasting surgery for teaching purposes very valuable. 4 Conclusions The results show that surgeons (even if affiliated to different specialties) are very interested in the digital acquisition of images within the operating theatre. This need is mainly related to teaching, because such a system, combined with video broadcasting, allows students to view a greater number of surgical procedures with better points of view than could be obtained within the OR. Certain relationships were found in the staff nurses’ answers. Special attention should be paid to the high ranking (first place) given to the possibility of acquiring digital images for future reference, viewing or editing, with no particular attention paid to the media used to store the images. This last aspect opens a new area of study: which is the best medium for storing the images? The answers given by the surgeons allow many solutions. A surgical PACS might be a very good solution, although it might be difficult to calculate the storage dimensions, since they depend on data definition and length.
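To illustrate the storage dimensioning question raised above, a rough back-of-the-envelope estimate can be made from assumed recording parameters. Every figure in the sketch below (bit rate, procedure length, OR utilisation) is a hypothetical assumption for illustration, not data from the study.

```python
# Rough, back-of-the-envelope estimate of surgical-video storage needs;
# every parameter below is an assumption for illustration only.
procedure_minutes = 90
video_bitrate_mbps = 8            # assumed compressed HD stream, Mbit/s
procedures_per_or_per_day = 4
operating_rooms = 20
working_days_per_year = 250

gb_per_procedure = video_bitrate_mbps * 60 * procedure_minutes / 8 / 1024
gb_per_year = (gb_per_procedure * procedures_per_or_per_day
               * operating_rooms * working_days_per_year)
print(f"{gb_per_procedure:.1f} GB per procedure, "
      f"about {gb_per_year / 1024:.1f} TB per year")
```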