Histochem Cell Biol (1996) 105:333-355
© Springer-Verlag 1996
Martin Oberholzer · Marc Östreicher · Heinz Christen · Marcel Brühlmann
Methods in quantitative image analysis
Accepted: 12 January 1996

M. Oberholzer (✉) · M. Östreicher · M. Brühlmann
Department of Pathology of the University of Basel, Schönbeinstrasse 40, CH-4003 Basel, Switzerland
Tel. +41-61-265-25-25; Fax +41-61-265-31-94

H. Christen
Computer Center and Institute of Informatics and Calculation of the University of Basel, Basel, Switzerland
Abstract The main steps of image analysis are image
capturing, image storage (compression), correcting imaging defects (e.g. non-uniform illumination, electronic noise, glare effect), image enhancement, segmentation of objects in the image and image measurements. Digitisation is performed by a camera. The most modern types include a frame-grabber, which converts the analogue signal into digital (numerical) information. The numerical information consists of the grey values describing the brightness of every point within the image, named a pixel. The information is stored in bits. Eight bits are summarised in one byte. Therefore, grey values can range between 0 and 255 (2⁸ = 256 values). The human eye seems to be quite content with a display of 6-bit images (corresponding to 64 different grey values). In a digitised image, the pixel grey values can vary within regions that are uniform in the original scene: the image is noisy. The noise is mainly manifested in the background of the image. For an optimal discrimination between different objects or features in an image, uniformity of illumination in the whole image is required. These defects can be minimised by shading correction [subtraction of a background (white) image from the original image, pixel per pixel, or division of the original image by the background image]. The brightness of an image represented by its grey values can be analysed for every single pixel or for a group of pixels. The most frequently used pixel-based image descriptors are optical density, integrated optical density, the histogram of the grey values, mean grey value and entropy. The distribution of the grey values existing within an image is one of the most important characteristics of the image. However, the histogram gives no information about the texture of the image. The simplest way to improve the contrast of an image is to
expand the brightness scale by spreading the histogram out to the full available range. Rules for transforming the grey value histogram of an existing image (input image) into a new grey value histogram (output image) are most quickly handled by a look-up table (LUT). The histogram of an image can be influenced by gain, offset and gamma of the camera. Gain defines the voltage range, offset defines the reference voltage and gamma the slope of the regression line between the light intensity and the voltage of the camera. A very important descriptor of neighbourhood relations in an image is the co-occurrence matrix. The distance between the pixels (original pixel and its neighbouring pixel) can influence the various parameters calculated from the co-occurrence matrix. The main goals of image enhancement are elimination of surface roughness in an image (smoothing), correction of defects (e.g. noise), extraction of edges, identification of points, strengthening texture elements and improving contrast. In enhancement, two types of operations can be distinguished: pixel-based (point operations) and neighbourhood-based (matrix operations). The most important pixel-based operations are linear stretching of grey values, application of pre-stored LUTs and histogram equalisation. The neighbourhood-based operations work with so-called filters. These are organising elements with an original or initial point in their centre. Filters can be used to accentuate or to suppress specific structures within the image. Filters can work either in the spatial or in the frequency domain. The method used for analysing alterations of grey value intensities in the frequency domain is the Hartley transform. Filter operations in the spatial domain can be based on averaging or ranking the grey values occurring in the organising element. The most important filters, which are usually applied, are the Gaussian filter and the Laplace filter (both averaging filters), and the median filter, the top hat filter and the range operator (all ranking filters). The principal advantage of ranking filters over averaging operators is that they do not reduce the brightness difference across steps. The edges remain in place and well-defined. Very important prerequisites for extracting quantitative infor-
mation from digitised images are clearly identifiable segmented objects and knowledge about instrumental and technical influences on the results (glare effect and thickness of histological slides). Segmentation of objects is traditionally based on threshold grey values. The grey value histogram of the original or enhanced image is an important tool for setting threshold levels. For determining the threshold grey value, bidimensional histograms can be applied. Quantitative information can be extracted from images by mathematical operations on binary images or on grey scale images. Boolean operations on binary images are applied when one desires to combine the information contained in several binary images. The morphological operations on binary images mainly include erosion and dilation, and modifications of these operations. There are many methods allowing direct quantitation of segmented objects within grey scale images. They use different sets of parameters: planimetric, histogram-derived, densitometric and co-occurrence matrix-derived parameters, invariant moments, run lengths, parameters of quantitative immunohistochemistry and immunocytochemistry, parameters of silver-stained nucleolar organiser regions (AgNORs) and parameters of cellular sociology. Digital image analysis requires a distinction between two phases of the evaluation procedure: generation of fundamental data (x- and y-coordinates and grey values of the pixels, immediately after object segmentation) and calculation of parameters from these data. The data generated during segmentation must always remain available, and these data must be conceptually separated from the parameters deduced from them. With such a data organisation it is no longer necessary to repeat the object segmentation if new algorithms are to be applied to objects which were segmented earlier. In dealing with methods or instruments for digital image analysis, it is always essential to know precisely the characteristics of both of them.
Introduction

Quantitative image analysis started in the early 1960s, applying stereological axioms by point counting (Weibel 1979). The application of stereological methods was based on two pillars: (1) identification of the objects to be measured; and (2) the actual measurements (counting points lying over object profiles, crossing boundaries, or counting the objects themselves) and the calculation of stereological parameters derived from these primary data. Since then, manual point counting has been replaced by computers, and segmentation is carried out with electronic aid, either by interactive tracing of the objects or by automatic procedures. The absolute prerequisite for quantifying the information contained in images by computer is a digitised image. Therefore, a presentation of what quantitative image analysis is must mainly deal with fundamental aspects, hurdles and limits in handling digitised images.

Digital image analysis is now a frequently used method in many medical and biological disciplines (Russ 1992; Shotton 1995). The main steps of image analysis are (Russ 1992) image capturing, image storage (compression), correcting imaging defects (e.g. non-uniform illumination, electronic noise, glare effect), image enhancement, segmentation of objects in the image and image measurements (processing binary images and measurements on grey scale images). Such image processing is used for two somewhat different purposes: (1) improving the visual appearance of images to a human viewer; and (2) preparing images for measurements. Image processing rearranges the data present in an initial image. The present paper is mainly oriented towards the most important tools of the analysis of digitised images (digital image analysis) and the most important definitions in this context.
Main tools of image processing

In image processing, two standard tools are used: image digitisation and shading correction.
Image digitisation

Cameras scanning an image produce analogue voltage signals corresponding to the brightness at different points in the image (Shotton 1995). The analogue voltage signal is converted by an analog-to-digital converter (frame-grabber) into digital (numerical) information. The converter is a chip which produces a numerical value between 0 and 255 at fixed time intervals (of the order of nanoseconds) along the scan line. A brightness of 0 corresponds to black and a brightness of 255 to white. The brightness is usually represented by grey values. If the image contains only points with either a grey value of 0 or 255, the image is designated a binary image. The points in an image are named pixels (picture elements). Pixels are comparable to silver grains in a film. They have an approximately square area. Since the standard video image is not square, the digitised image should represent only a portion of the entire video image in order to keep the pixels square.

How are the grey values of pixels stored in a computer? The computer memory is organised into units named bytes, each of which contains 8 bits. One bit is a binary information unit containing the information ON or OFF, or the values 0 or 1. Because one bit can represent two states, the basis of the numerical binary system is 2. Using computers demands working with information packages. Such an information package is a byte. A very convenient system is one working with packages which contain 8 bits. If the condition is defined that the grey value of a pixel should be stored in 1 byte, then 256 grey values can be distinguished, because 2⁸ = 256. An image with 256 grey values is named an 8-bit image. It is possible to represent an image by fewer than 256 grey values. A 7-bit image, for example, contains only 2⁷ (128) grey values. The human eye seems to be quite content with a display of 6-bit images (corresponding to 64 different grey values).

Digitised images can be very large and vary from 2 × 10⁵ to several million bytes. One of the consequences of this large size is the time needed in digital image analysis to address each pixel. Another is the requirement for storage and transmission. Compaction schemes attempt to reduce the size of images. The time needed to accomplish compression and decompression is one criterion for judging this approach. The other is the degree of image preservation. Methods of current interest include the JPEG (Joint Photographic Experts Group) algorithm based on a discrete cosine transform. It offers a compression ratio of more than 50.

In a digitised image, the pixel grey values can vary within regions that are uniform in the original scene: the image is noisy. This can be caused by electronic noise in the amplifiers, by instability in the light source or by the cabling. The noise is mainly manifested in the background of the image. The ratio between the contrast of a scene, which is due to structural differences in the scene, and the noise intensity of the background is indicated as the signal-to-noise ratio (Russ 1992). The noise can be successfully reduced by image processing operations if the pixels in the image are much smaller than any of the important details and if most of the pixels' neighbours represent the same structure.
Shading correction

For an optimal discrimination between different objects or features in an image, stringent requirements for uniformity of illumination in the whole image exist. However, lenses or cameras may cause an inconstant illumination: the corners of the image can be darker than the centre because the light tends to be more absorbed in the periphery (Fig. 1). The overall absorption is mainly influenced when the specimen thickness varies. Most of these illumination defects can be minimised by setting up the imaging conditions carefully. Additionally, they should always be corrected by image processing, i.e. by shading correction.

Fig. 1a, b Effect of shading and its correction. a Original image (stained for the Ki-67 antigen). The image is not uniformly illuminated, being darker at the edges than in the centre. b The same image is demonstrated after shading correction by subtraction

Shading correction demands a background image in which a uniform reference area (white area) is inserted. This white background image is recorded and its mean grey value is calculated. In the next step, the light stream in the microscope is interrupted. In the resulting black image, the mean grey value is evaluated. The correction consists of a subtraction of the background image from the original image, pixel per pixel (Eq. 1a), or of a division of the original image by the background image (Eq. 1b) (Abmayr 1994):

IMAGE(x,y)corr = IMAGE(x,y) − W(x,y) + [W(mean) − B(mean)]   (1a)

or

IMAGE(x,y)corr = IMAGE(x,y) · W(mean) / [W(x,y) − B(mean)]   (1b)

where W(x,y) = white (background) image, W(mean) = mean grey value of the white image and B(mean) = mean grey value of the black image. Whether a subtraction or a division is carried out depends upon the camera characteristics (Russ 1992). In the
process of subtracting or dividing one image by another, some of the data in the original image will be lost (Russ 1992). This fact argues that all practical steps should be taken to make the illumination uniform while acquiring the images, before resorting to processing methods.
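As an illustration of Eqs. 1a and 1b, the following sketch applies both correction variants with NumPy; the array names and the assumption that the images are 8-bit grey value arrays are ours, not part of the original method description:

import numpy as np

def shading_correction_subtract(image, white, black_mean):
    # Eq. 1a: IMAGE(x,y)corr = IMAGE(x,y) - W(x,y) + [W(mean) - B(mean)]
    corr = image.astype(np.float64) - white + (white.mean() - black_mean)
    return np.clip(corr, 0, 255).astype(np.uint8)

def shading_correction_divide(image, white, black_mean):
    # Eq. 1b: IMAGE(x,y)corr = IMAGE(x,y) * W(mean) / [W(x,y) - B(mean)]
    denom = np.maximum(white.astype(np.float64) - black_mean, 1.0)  # avoid division by zero
    corr = image.astype(np.float64) * white.mean() / denom
    return np.clip(corr, 0, 255).astype(np.uint8)

Which of the two sketches is appropriate depends, as stated above, on the camera characteristics.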
Analysis of grey values

Digitised images are characterised by the brightness of their pixels (z) and by the position (x- and y-coordinates) of the pixels in relation to the whole image or to neighbouring pixels. The brightness of a pixel is represented by its grey value. Rough information about the effect of an operation applied to an image can be obtained by analysing the grey values of the resulting image. Such an analysis can be carried out with the data of each individual pixel or with the data of groups (neighbourhoods) or classes of pixels within the new image.

Pixel-based analysis of grey values

The most frequently used pixel-based image descriptors are optical density, integrated optical density, the histogram of the grey values, the mean grey value, the mean square deviation of the grey value and the entropy. These descriptors are not influenced by the topography (x- and y-coordinates) of the individual pixels within the image.

Optical density (extinction or absorption)

Light can be reflected from or can pass through the specimen. The portion of the incident light which passes through the specimen is equivalent to the transmission. The grey value of a pixel is proportional to its transmission. The values of transmission are usually standardised both for the difference between the minimal and maximal grey value of the measuring system and for the grey value of the background of the actual image. The intensity of the light hitting the pixel [I(0)] is:

I(0) = W − B   (2)

where W = maximal grey value of the system (white in the system) and B = minimal grey value of the system (black in the system). The theoretical intensity of the light leaving the pixel [I(i)] depends on the minimal grey value of the system:

I(i) = I(i,d) − B   (3)

where I(i,d) = directly measured light intensity leaving the individual pixel i. The transmission [T(i)] is now defined as follows:

T(i) = I(i) / I(0) = [I(i,d) − B] / (W − B)   (4)

The transmission is proportional to the grey value of the pixel. From the transmission, the extinction (absorption) [E(i)] can be calculated using Beer-Lambert's law (Erler and Marchevsky 1994):

E(i) = −log T(i) = log [1/T(i)]   (5)

The measurement unit of the extinction of an object (containing n pixels) is the optical density (OD):

OD = [Σ(i=1…n) E(i)] / n   (6a)

OD is the key parameter used in densitometry. Another frequently used parameter for quantifying the brightness of an object is its integrated optical density (IOD):

IOD = Σ(i=1…n) E(i)   (6b)
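A minimal sketch of the densitometric chain of Eqs. 2-6b, assuming the directly measured grey values of the n pixels of one segmented object are available as a NumPy array and that W and B are the white and black reference grey values of the system; all names are illustrative:

import numpy as np

def od_and_iod(pixel_values, white, black):
    # Eq. 4: T(i) = [I(i,d) - B] / (W - B)
    t = (pixel_values.astype(np.float64) - black) / (white - black)
    t = np.clip(t, 1e-6, 1.0)        # keep the logarithm defined for fully absorbing pixels
    e = -np.log10(t)                 # Eq. 5: E(i) = -log T(i)
    od = e.mean()                    # Eq. 6a: optical density of the object
    iod = e.sum()                    # Eq. 6b: integrated optical density of the object
    return od, iod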
Whenever a digitised image is saved, its grey values (z) and the position of each pixel represented by the x- and y-coordinates are stored. However, for showing the image on a screen, the grey values have to be retransformed into transmission values.
Histogram of the grey values

The distribution of the grey values existing within an image allows a general orientation about what information is contained in the image. Thus, the histogram of an image with a high contrast has a broad distribution of the grey values and shows only low and flat peaks. Mathematically, the contrast is defined as follows:

C = [GV(max) − GV(min)] / [GV(max) + GV(min)]   (7)

where C = contrast, GV(max) = maximal grey value of the image and GV(min) = minimal grey value of the image. However, the contrast is not only influenced by the maximal and minimal grey values, but also by the distance between dark and bright elements in the image. A histogram of grey levels gives no information about the texture of the image. Texture is the result of a local variation in brightness from one pixel to the next, or within a small region of the image. If an image is interpreted as a surface and the brightness as elevation, then the texture is a measure of the surface roughness (Abmayr 1994).

Many images do not have a brightness range that covers the full dynamic range of 256 grey values because many grey values are not used by any of the pixels in the image. Expanding the brightness scale by spreading the histogram out to the full available range may improve the visibility of objects and is the simplest way to improve the contrast of the image (Haberäcker 1989). The general formula for a linear transfer function that transforms an original image into a new image is:

IMAGE(x,y) = c(2) · IMAGE(A)(x,y) + [c(1) · c(2)]   (8)

where IMAGE(x,y) = new image, IMAGE(A)(x,y) = original image, c(1) = constant for changing the grey values of the image (if c(1) < 0, the image becomes darker) and c(2) = constant for changing the contrast of the image (if |c(2)| > 1, the histogram becomes flatter and the contrast of the image higher) (Haberäcker 1989).

The determination of the two constants, c(1) and c(2), needs a careful analysis of the original histogram. The constants can be derived from the minimal and maximal measured grey values and from the actual histogram (Haberäcker 1989), for example:

c(1) = −I(A,min)   (9)

c(2) = 255 / [I(A,max) − I(A,min)]   (10)

where I(A,min) = minimal measured grey value and I(A,max) = maximal measured grey value. c(1) and c(2) can also be defined by calculated parameters, i.e. the mean grey value of the original image, the mean square deviation of the original image, the mean grey value of the new image and the mean square deviation of the new image. Between these parameters and c(1) and c(2), the following relationships result (Haberäcker 1989):

c(1) = I(N,mean) / c(2) − I(A,mean)   (11)

where I(N,mean) = mean grey value of the new image and I(A,mean) = mean grey value of the original image.

c(2) = DEV / DEV(A)   (12)

where DEV(A) = mean square deviation of the original image and DEV = mean square deviation of the new image.

The histogram of an original image can be influenced by gain, offset and gamma of the camera. Gain defines the amplitude or voltage range with which the camera should work. The gain of the camera should be adjusted so that the grey value histogram of a specimen image is spread over the widest number of grey values without clipping pixels to a value of 0 or 255 (Erler and Marchevsky 1994). The camera offset defines the reference voltage. It allows centring the grey value histogram peak along the axis of the grey value distribution. The response of a light sensor can be described by gamma. Gamma represents the slope of the regression line between the light intensity and the voltage of the camera. Rules for transforming the grey value histogram of an input image into the grey value histogram of a new output image are most quickly handled by a look-up table (LUT) (Abmayr 1994):

OUTPUT_IMAGE(x,y) = LUT[INPUT_IMAGE(x,y)]   (13)

where INPUT_IMAGE(x,y) = input or original image and OUTPUT_IMAGE(x,y) = output or newly created image. An LUT corresponds to a file in which the grey values of the original image are stored under x and the grey values of the newly created image under y:

y = LUT(x)   (14)

For example, a square transformation can be defined by y = x²/255. In this case, a pixel with a grey value of 100 (x = 100) in the original image has a value of 39 in the new image, because 100²/255 = 39.2. LUTs with a non-linear relationship can expand one portion of the grey value histogram while compressing another.

Mean grey value

The mean grey value is the simplest numerical descriptor of a digitised image:

I(A,mean) = [Σ(x=0…L−1) Σ(y=0…R−1) I(A)(x,y)] / (L · R)   (15)

where I(A,mean) = directly measured mean grey value, L = number of pixel lines of the image, R = number of pixel columns of the image and I(A)(x,y) = directly measured grey value of the individual pixel at the position x,y in the image. I(A,mean) does not contain any information about the image contrast.

Mean square deviation of the grey value

In contrast to the mean grey value, the mean square deviation (DEV) contains information about the contrast of the image (Haberäcker 1989). This parameter is calculated as follows:

DEV = [Σ(x=0…L−1) Σ(y=0…R−1) (I(A)(x,y) − I(A,mean))²] / (L · R)   (16)

The contrast of an image depends as much upon the existence of different dark and light image segments in the image as upon the distance between two different types of image segments (Abmayr 1994). Therefore, the contrast is influenced not only by pixel-based but also by neighbourhood-based characteristics, also called the local frequency of pixels. From this fact, it follows that the contrast may be very well represented by the ratio between the standard deviation and the mean value of the grey values within an image. Another parameter for measuring contrast was introduced by Gonzales and Wintz (1987):

R = 1 − 1 / (1 + s²)   (17)

where s² = variance of the grey values. R tends to 0 in a poorly contrasted image, while R tends to 1 in a well-contrasted image.

Entropy
The entropy of an image is a measure of its information content. A person who always says 'a a a a', does not
transmit information. However, if all characters of the alphabet are used, with the same probability for each to be employed, a lot of information will be exchanged (Abmayr 1994). In information theory the information content (INFORM) is defined as:

INFORM = c · log [1/p(i)] = −c · log p(i)   (18)

where c = constant defining the information unit and p(i) = probability that the message n(i) can be found among n messages. The grey values between 0 and 255 can be compared with the characters of the alphabet and an individual pixel with a single message. Therefore, the information content of an image can be estimated from the relative frequency of the grey values occurring in the image, according to Eq. 18. If the logarithm to the base 2 is used, then the unit for the information content is 1 bit (2¹ = 2).

INFORM = −Σ(GV=0…255) f(GV) · log₂[f(GV)]   (19)

where log₂ = logarithm to the base 2 and f(GV) = relative frequency of the grey values. INFORM in Eq. 19 is equal to the parameter termed entropy (ENT). A high value of the entropy represents a high contrast; a low value is an indicator of a homogeneous image with only a few different grey values.
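The pixel-based descriptors of Eqs. 15, 16 and 19, together with the linear stretch of Eqs. 8-10 implemented through a look-up table (Eq. 14), can be sketched as follows for an 8-bit image; this is an illustration of the formulae above, not a description of a particular instrument:

import numpy as np

def pixel_based_descriptors(image):
    img = image.astype(np.float64)
    mean = img.mean()                                   # Eq. 15: mean grey value
    dev = ((img - mean) ** 2).mean()                    # Eq. 16: mean square deviation
    f = np.bincount(image.ravel(), minlength=256) / image.size
    f = f[f > 0]                                        # relative frequencies f(GV)
    entropy = -(f * np.log2(f)).sum()                   # Eq. 19: entropy (ENT)
    return mean, dev, entropy

def linear_stretch(image):
    # Eqs. 8-10 realised as a look-up table y = LUT(x) (Eq. 14)
    i_min, i_max = int(image.min()), int(image.max())
    x = np.arange(256, dtype=np.float64)
    lut = np.clip(255.0 * (x - i_min) / max(i_max - i_min, 1), 0, 255).astype(np.uint8)
    return lut[image]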
Neighbourhood (or matrix)-based analysis of grey values

A very important descriptor of neighbourhood relations in an image is the co-occurrence matrix (Haberäcker 1989; Haralick et al. 1973). The elaboration of the co-occurrence matrix is based on a relationship, ρ, among pairs of pixels [(x1,y1) and (x2,y2)] (Haberäcker 1989). The scale of the total 256 grey values can be divided into eight classes of the same width (32 grey values each). Each pixel within the original image is allocated to one of these eight classes. There are two reasons for this procedure: (1) the handling of an 8×8 co-occurrence matrix is simpler than the handling of a bigger matrix; and (2) the essential information of an image can be sufficiently presented by 16 grey values. In the second step the relationship, ρ, is defined. ρ can be the neighbour at the right or at the left of the initial pixel, or below or above the initial pixel. In the last step, for each pixel which belongs to one of the eight classes one counts how many neighbouring pixels belong to the same class or to one of the other ones.

The procedure is explained in the following example. In Fig. 2, four classes (0, 1, 2 and 3) of grey values occur; 0 stands for the darkest, 3 for the brightest grey value. The pixel at the right side of the initial pixel is defined as neighbour (ρ). The resulting co-occurrence matrix is presented in Table 1. The grey value classes of the neighbouring pixels are indicated on the x-axis, those of the initial pixels on the y-axis. A neighbouring pixel with a grey value corresponding to class 0 (dark) can be observed once if the original pixel has a grey value belonging to the same class (Table 1). The values in the main diagonal axis (bold in Table 1) approximate the size of the homogeneous image scenes. For an image with a low contrast or relatively homogeneous areas, the main diagonal axis shows high values. A high contrast is declared in high values in the two corners of the matrix, either above right or below left.

Fig. 2 Model image containing pixels with four different grey values as an example for the evaluation of the co-occurrence matrix. The corresponding grey values are presented on the right

3 3 3 2 3 3
2 3 2 1 2 3
3 1 1 1 0 2
3 2 1 0 0 3
0 2 1 2 3 3
3 3 2 3 2 2

Table 1 Co-occurrence matrix resulting from the model image shown in Fig. 2 if the pixel at the right side of the initial pixel is defined as neighbour

                   Neighbouring pixels
Initial pixels     Class 0   Class 1   Class 2   Class 3
Class 0            1         0         2         1
Class 1            2         2         2         0
Class 2            0         3         1         5
Class 3            0         1         5         5

The values of the co-occurrence matrix can be standardised according to Eq. 20:

f(x,y) = C(x,y) / n · 100   (20)

where f(x,y) = standardised frequency, C(x,y) = observed frequency and n = number of pixels in the image. The neighbouring pixel can lie directly beside the initial pixel or at a certain distance from it. This distance between the pixels can influence the various parameters calculated from the co-occurrence matrix. It should be chosen according to the resolution of the optical system (Lefkovits 1993; Moldenhauer 1993; Pauli 1993). If in our system a 40× objective is used, the pixel side length is 0.192771 µm and the optical resolution is 0.84 µm. Therefore, the ratio between optical resolution and pixel side length is 4.358. This value means that only those pixels which are at least five pixels away from the original can be considered as real neighbours. Consequently, it makes no sense to determine the co-occurrence matrix with pixels lying directly beside the original pixel if a 40× objective is used.
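The construction of the co-occurrence matrix can be sketched as follows; applied to the model image of Fig. 2 with the right-hand neighbour (displacement of one pixel), the sketch reproduces Table 1. Function and variable names are illustrative:

import numpy as np

def cooccurrence_matrix(classes, dy=0, dx=1, n_classes=4):
    # classes: image of grey value classes; neighbour displaced by (dy, dx)
    m = np.zeros((n_classes, n_classes), dtype=int)
    rows, cols = classes.shape
    for y in range(rows):
        for x in range(cols):
            ny, nx = y + dy, x + dx
            if 0 <= ny < rows and 0 <= nx < cols:
                m[classes[y, x], classes[ny, nx]] += 1   # initial class -> neighbour class
    return m

fig2 = np.array([[3, 3, 3, 2, 3, 3],
                 [2, 3, 2, 1, 2, 3],
                 [3, 1, 1, 1, 0, 2],
                 [3, 2, 1, 0, 0, 3],
                 [0, 2, 1, 2, 3, 3],
                 [3, 3, 2, 3, 2, 2]])
c = cooccurrence_matrix(fig2)       # Table 1
f = c / fig2.size * 100             # standardised frequencies, Eq. 20

A larger displacement (e.g. dx=5 for the 40× objective discussed above) is obtained simply by changing dx and dy.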
Important methods for image enhancement
The main goals of image enhancement are: (1) elimination of surface roughness in an image (smoothing); (2) correction of defects (e.g. noise); (3) extraction of edges; (4) identification of points; (5) strengthening texture elements; and (6) improving contrast. These operations are essential elements for optimally preparing the segmentation of objects for automatic measurements. Changing the pixel grey values of an original image according to certain rules can simply be described in terms of operations. Each operation is performed on each pixel in the original (also called input) image. The result is a new image (also called output image). In enhancement, two types of operations can be distinguished: pixel-based operations and neighbourhood-based operations. The pixel-based operations are called point operations, the neighbourhood-based ones matrix operations. Which of the two operations is used depends on the problem to be solved (Abmayr 1994; Haberäcker 1989).
Pixel-based enhancement operations
Fig. 3 a Grey value histogram of a model image with 42 pixels and 16 various grey values (top). b Grey value histogram after equalisation of the model image (bottom)
The most important pixel-based operations are linear stretching of grey values, application of pre-stored look-up tables and histogram equalisation. The first two methods are described above.

Fig. 4a, b Effect of image enhancement by equalisation. Variations within regions that appeared nearly uniform in the original image can be better seen. a Original image stained for AgNORs (nucleolar organiser regions). b Enhanced image. ×80

Histogram equalisation (Abmayr 1994; Haberäcker 1989; Russ 1992) reassigns the grey values of pixels based on the histogram of the original image. The grey
values in the peak areas of the histogram of the original image are selectively spread out, while the same or very close grey values are assigned to those pixels that are few in number (Fig. 3a): the pixels lying in the valleys of the histogram are compressed (Russ 1992). Histogram equalisation makes it possible to see variations within regions that appeared nearly uniform in the original image (Fig. 4). For histogram equalisation, two formulae (Eqs. 21 and 22) are used, one for the calculation of the sum frequencies [k(i)] of pixels for each grey value, the other for the calculation of the new grey value by which the original grey value is replaced [GV(i,new)]:

k(i) = Σ(j=0…i) N(j)   (21)

where i = current index of the grey value class, j = running index over the grey value classes up to i and N(j) = number of pixels of the original image with grey values falling into the grey value class j.

GV(i,new) = k(i) · [n(GV) − 1] / T   (22)

where T = total number of pixels in the image and n(GV) = number of grey value classes (identical in the original and in the new image).

The procedure is explained in the following example. A model image contains in its original form 42 pixels and 16 grey values. Each grey value corresponds to a grey value class. The histogram of the original image is shown in Fig. 3a. The calculation of k(i) and GV(i,new) is demonstrated in Table 2. For class 3, the original grey value is GV(3,old) = 3. The grey value class 3 of the original corresponds to the grey value class 2 in the equalised (new) image, because GV(3,new) = 5/42 · (16 − 1) = 1.79, corresponding to an integer value of 2 (according to Eq. 22), and grey value class 4 turns into class 3 in the new image [GV(4,new) = 8/42 · (16 − 1) = 2.86, corresponding to an integer value of 3]. Therefore, the 5 pixels belonging to the grey value class 3 in the original image are newly allocated to the grey value class 2, and the 3 pixels of the grey value class 4 in the original image to the grey value class 3. In the equalised image, the grey value classes 1, 7, 9, 11 and 12 no longer occur. All the pixels which belonged to the grey value classes 12-15 in the old image are assigned to class 15 in the equalised image. The grey value histogram of the equalised model image is presented in Fig. 3b.

Table 2 Calculation of the key parameters for an equalisation of the model image [k(i) sum frequency of pixels in the class i, GV(i,old) original grey value of the class i, GV(i,new) new grey value belonging to the grey value class i in the new (enhanced) image]

Grey value   GV(i,old)   Number of pixels   k(i)   GV(i,new)   Number of pixels
class                    (old image)                            (new image)
0            0           0                  0      0           0 = 0+0+0
1            1           0                  0      0           -
2            2           0                  0      0           -
3            3           5                  5      2 a         5
4            4           3                  8      3           3
5            5           2                  10     4           2
6            6           3                  13     5           3
7            7           4                  17     6           4
8            8           4                  21     8           4
9            9           6                  27     10          6
10           10          8                  35     13          8
11           11          5                  40     14          5
12           12          1                  41     15          2 = 1+0+1+0
13           13          0                  41     15          -
14           14          1                  42     15          -
15           15          0                  42     15          -

a Values are rounded
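The calculation of Table 2 can be reproduced with a few lines of code; the sketch below works directly on the histogram N(i) of the model image and rounds GV(i,new) to the nearest integer, as in the table (names are illustrative):

import numpy as np

def equalisation_lut(hist):
    # hist[i] = N(i), number of pixels in grey value class i
    k = np.cumsum(hist)                                     # Eq. 21: sum frequencies k(i)
    t = k[-1]                                               # T, total number of pixels
    n_gv = len(hist)                                        # n(GV), number of grey value classes
    return np.floor(k * (n_gv - 1) / t + 0.5).astype(int)   # Eq. 22, rounded

n_old = np.array([0, 0, 0, 5, 3, 2, 3, 4, 4, 6, 8, 5, 1, 0, 1, 0])  # old image of Table 2
lut = equalisation_lut(n_old)        # lut[3] == 2, lut[4] == 3, ..., lut[15] == 15
# equalised_image = lut[original_image] remaps every pixel through the LUT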
Neighbourhood-based enhancement operations

Neighbourhood-based operations work with so-called filters. Filters can be designated as organising elements (Haberäcker 1989) or neighbourhood patterns (Russ 1992). They consist of an original or initial point in the centre and of neighbouring points in its environment. Usually, the organising elements are square regions with a side length of 2m+1, which is an odd number (Russ 1992). Organising elements or filters can also be named operators. Filters are used to accentuate or to suppress specific structures in the image. One distinguishes between highpass and lowpass, linear and non-linear, averaging and ranking, and symmetrical and asymmetrical filters.

The terms highpass and lowpass are associated with the idea of frequencies and need further explanation. In acoustics, high frequency means a high tone and low frequency a low tone. Frequencies result from alterations of a physical value within a time interval. An image can be interpreted as a two-dimensional distribution of intensity (grey) values altering in the x- and y-directions. Generally, changes of intensities can occur either in the time domain [represented by the function V(t)] or in the frequency domain [represented by the function V(n)]. The classical method used for analysing alterations of intensities is the Fourier transform (Abmayr 1994; Haberäcker 1989; Russ 1992), developed in 1807 by Baron Jean Baptiste Joseph Fourier (Reeves 1990). Fourier's theorem states that it is possible to form any one-dimensional function V(x) as a summation (integration) of a series of sine and cosine terms. The classical Fourier transform works with continuous variables and decomposes a time- or space-varying signal into a spectrum of its constituent frequencies. In Eq. 23 these frequencies are represented by the term k. A close cousin of the Fourier transform is the Hartley transform, which is applied to images:

H(k) = Σ(n=0…N−1) V(n) · cas(2πkn/N)   (23)

where n = current index of the single pixels within an image, N = size of the one-dimensional vector of grey values (number of different grey values in the image), V(n) = grey value of the pixel n, and k = frequency of the cas function per interval.

cas(2πkn/N) = sin(2πkn/N) + cos(2πkn/N)   (24)

The Hartley transform is characterised by the fact that the continuous one-dimensional function of time [V(t)], as in the Fourier transform, has been sampled at equispaced points corresponding to pixels, in this example along a column of an image. The term V(t) used in the original Fourier transform is replaced by the term V(n); as demonstrated in Eq. 23, the index describing time (t) is replaced by the index describing space (n). By this transformation, a discrete function of n is generated. [Variables are designated as discrete if their values can be calculated only for defined x values (i.e. for x = 1.2, or x = 1.3, or x = 1.4).] An example of the calculation of the H values, based on a simple model, is given in Table 3. Note that the index of the pixels (n) in the presented column (of an image) starts at zero.

Table 3 Example of the calculation of H(k) for an image column consisting of four pixels with the grey values 110, 120, 60 and 50 (N=4). The Hartley transform is carried out with four different frequencies (k): 0, 1, 2 and 3. H(k) describes the distribution of the frequencies in the image. High frequencies indicate that many jumps between pixels with high grey values and pixels with small ones occur. The fictitious image presented here contains more low frequencies than high ones, which is represented by the high values of H (about 340) resulting for low frequencies compared with those (about 120) for high frequencies. That can be concluded from the high H values resulting for low k values

n     V      k = 0     k = 1     k = 2     k = 3
0     110    110.0     110.0     110.0     110.0
1     120    120.0     169.7     120.0     0.0
2     60     60.0      60.0      -60.0     -60.0
3     50     50.0      0.0       -50.0     70.7
H            340.0     339.7     120.0     120.7

The difference between the space and the time domain is obvious if one looks at images. In images, every pixel can be accessed simultaneously, whereas a time series is inherently sequential. Since images are two-dimensional space objects, the two-dimensional Hartley transform must be applied. It is expressed in Eq. 25:

H(k1,k2) = Σ(n2=0…N2−1) Σ(n1=0…N1−1) V(n1,n2) · cas[2π(k1n1/N1 + k2n2/N2)]   (25)

The term {cas[2π(k1n1/N1 + k2n2/N2)]} corresponds to a kernel for an N1×N2 matrix dimension (for the term "kernel" see below).

In an image, high frequencies result from sharply defined edges with fast local grey value changes or from small objects with frequent local grey value changes (i.e. noise). Edges can be defined as scenes in which height differences of grey values in a small region exist (Russ
1992). They can be accentuated by improving the difference of grey values between the object and its neighbourhood or by increasing the local contrast at boundaries. In contrast to this, low frequencies are established when gradual or infrequent grey value changes occur, as in shading or with large objects. High frequencies are amplified by highpass filters, low frequencies by lowpass filters. Highpass filters attenuate lower frequencies, allowing the accentuation of small structures, enhancement of edges, or extraction of contours. However, noise and other high frequency disturbances are also amplified. Lowpass filters have the opposite effect. They attenuate noise and other small objects. Therefore, lowpass filters are mainly used for shading correction and separating image structures. The most important lowpass filters are the median filter and other rank operators; the most important highpass filters are the two difference operators, Laplace (Rosenfeld and Kak 1982) and Sobel (Abmayr 1994; Haberäcker 1989; Russ 1992). By a filter operation, the neighbourhood of a pixel can be summarised and expressed in the form of a single value. This value can be calculated either by averaging or by ranking the grey values occurring in the organising element (neighbourhood). Filter operations are continuously applied to all of the pixels in the image. The most important filters or operators are summarised in Table 4. In neighbourhood averaging, each original pixel is replaced with the average of its neighbours and of itself. This operation is carried out with "kernels" (Russ 1992). A kernel is a two-dimensional array of integers. These integers correspond to weights. Each pixel in the organising element is multiplied by its weight defined in the kernel before averaging. Such a general kernel can appear as shown in Table 5. A very useful set of kernels are the Gaussian kernels (Russ 1992). This set approximates the profile of a Gaussian function along the rows, columns or diagonals with the value of the original pixel as peak or centre. The Gaussian kernel is a lowpass filter. Its main function is smoothing or noise correction, by equilibrating local differences in brightness (Fig. 5). It is easy to understand that choosing a suitable kernel or filter is something of an art, at least for larger kernels. A very important variable in the use of a kernel is the size of the organising element or neighbourhood. If edges should be detected, kernels with negative weight values are applied. Such a kernel containing negative values is the Laplacian operator, a difference operator (Table 4). The Laplacian is invariant to rotation and, hence, insensitive to the direction of the image (Russ 1992), in contrast to the Sobel and Kirsch operators (Table 4). The Laplacian highlights the points, lines and edges in the image and suppresses uniform and smoothly varying regions. With a one-dimensional Laplacian kernel it is possible to visualise bands in electrophoretic or chromatographic preparations more clearly by increasing the contrast at the regions where the brightness between bands and background changes (Abmayr 1994).
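A minimal sketch of neighbourhood averaging with a 3×3 kernel is given below; the Gaussian-like weights are those of Table 5, while the Laplacian kernel is shown in one commonly used variant (the paper does not prescribe specific coefficients), and the border pixels are simply left unchanged:

import numpy as np

def apply_kernel(image, kernel):
    img = image.astype(np.float64)
    out = img.copy()
    norm = kernel.sum() if kernel.sum() != 0 else 1.0    # difference operators sum to zero
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = (img[y - 1:y + 2, x - 1:x + 2] * kernel).sum() / norm
    return out

gauss_3x3 = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float)        # weights of Table 5
laplace_3x3 = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)  # one common Laplacian variant

Applied to the 3×3 neighbourhood listed in Table 5, the Gaussian-like kernel replaces the central grey value of 90 by approximately 98, as calculated in the table caption.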
Table 4 Characteristics of the most important averaging and ranking operators. The Laplace, Sobel and Kirsch operators are also called difference operators because the kernels of these operators also contain negative values. In contrast to the Sobel and Kirsch operators, the Laplace operator is rotation-invariant. A weighting in the organising element is only possible if an averaging of the neighbourhood is carried out

Filter (operator)                Type       Main functions                                                        Reference

Averaging operators
Mean                             Lowpass    Smoothing, noise correction                                           Russ (1992)
Gaussian                         Lowpass    Smoothing, noise correction                                           Abmayr (1994)
Laplace (difference operator)    Highpass   Sharpening, edge detection, contrast improvement,                     Abmayr (1994)
                                            texture analysis, noise correction
Sobel                            Highpass   Sharpening, edge detection                                            Abmayr (1994)
Kirsch                           Highpass   Edge detection
Mexican hat                                 Smoothing, edge detection                                             Russ (1992)

Ranking operators
Median                           Lowpass    Smoothing, noise correction, rounding corners, contouring,            Bright and Steel (1986), Zamperoni (1989)
                                            posterisation (reduction of the number of grey values)
Top hat (suppression operator)   Lowpass    Point detection
Ridge finder                                Line detection, skeletonisation                                       Pavlidis and Heath (1980)
Range operator                   Lowpass    Texture analysis, edge detection, smoothing, noise correction         Russ (1992)
Hurst operator                              Texture analysis                                                      Feder (1988), Hurst et al. (1965), Russ (1990)
Variance operator                           Texture analysis, boundary demarcation                                Russ (1992)

Table 5 The usually square organising element consists of the original pixel in the centre and its neighbours. The side length of the whole element in the example presented is 3 (m=3). In the kernel the weights of each pixel lying under the organising element are indicated. In the single boxes, the weights are shown on the left, the underlying pixels on the right. The sum of all weights used in the example is 16, the weight of the original pixel is 4. If the kernel is applied, the grey value of 90 of the original pixel is replaced by the grey value of 98, because (1×110 + 2×120 + 1×90 + 2×110 + 4×90 + 2×100 + 1×70 + 2×100 + 1×80)/16 = 98.1

Weight, pixel     Weight, pixel     Weight, pixel
1, 110            2, 120            1, 90
2, 110            4, 90             2, 100
1, 70             2, 100            1, 80

A filter similar to the Laplacian is the Sobel operator. The Sobel operator belongs to the highpass and gradient filters. The Sobel technique works with eight kernels (one for each direction) because the method is not invariant to rotation.
Larger kernels may combine a certain amount of smoothing and edge finding by having positive weights for pixels in a small region near the original pixel, surrounded by negative weights for pixels further away. These kernels are named sombrero filters or Mexican hats (Russ 1992). The value representing the neighbourhood of the pixel designated as the original pixel can also be calculated by ranking procedures. So the original pixel can be replaced by the median value of the grey values within the organising element. A median filter (Figs. 5, 6) is an excellent rejector of noise. There are two principal advantages of the median filter compared to averaging operators: (1) the method does not reduce the brightness difference across steps, so the edges remain in place and well-defined; and (2) median filtering does not shift boundaries. The minimal degradation to edges in median filtering allows the method to be repeatedly applied. A repeated application leads to a reduction of the grey values occurring within the image, also named posterisation.
Fig. 5a, b Effect of a median rank operator compared with the effect of a Gaussian operator on an image stained for AgNORs. Local differences in brightness are equilibrated. a Enhanced image after the application of a median rank operator using an organising element with a side length of seven pixels (m=7, five iterations). b Enhanced image after a Gaussian operation using an organising element with a side length of seven pixels (m=7, five iterations). The original image is presented in Fig. 7a
Fig. 6 Effect of a repeated application of a median rank operator (m=3, five iterations) (posterisation). Fine details are erased, edges remain in place. The original image is presented on the left, the result of the operation with the median (lowpass) filter on the right. ×40 000 (by courtesy of G. Hillje, KONTRON Image Analysis, Eching near Munich)

Edge sharpening can be improved even more with a mode filter (Davies 1988), another ranking operator. If the goal of the operator should be to find bright points in the image, then the maximum grey value in a small central region of the filter is compared to the maximum grey value in a larger region. The top hat operator has this demanded characteristic. The large region can be compared with the brim of the hat. The top hat method finds the maximum grey value in the larger region and
subtracts that from the brightest point in the interior region. If the difference exceeds a threshold (the height of the hat's crown), the original pixel is kept. The top hat filter is an example of a suppression operator. Another operator of this type, which locates lines rather than points, is the ridge-finding operator or grey scale skeletonisation (Pavlidis and Heath 1980). The ridge-finding algorithm suppresses pixels if they are not part of the centre ridge of a line. Rank operations are also used to detect texture in images. One of the simplest texture operators is the range operator. Range means the difference between maximum and minimum grey values in the organising element. For a flat or uniform region, the range is small. Larger values of the range correspond to surfaces with a larger roughness. The range operator responds to the boundaries between regions that are of different average brightness and is sometimes also used as an edge-defining algorithm. Another, more complex rank operator for detecting textures is the Hurst algorithm (Feder 1988; Hurst et al. 1965; Russ 1990). This method is based on the fractal dimension of brightness patterns in images.
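Ranking filters can be sketched in the same way as the averaging filters above, the only difference being that the neighbourhood is summarised by a rank statistic instead of a weighted average; the median and the range operator are shown, border pixels again being left unchanged (an illustration, not the implementation used by the authors):

import numpy as np

def rank_filter(image, m=3, statistic=np.median):
    # Replace each pixel by a rank statistic of its m x m organising element
    r = m // 2
    img = image.astype(np.float64)
    out = img.copy()
    for y in range(r, img.shape[0] - r):
        for x in range(r, img.shape[1] - r):
            out[y, x] = statistic(img[y - r:y + r + 1, x - r:x + r + 1])
    return out

def grey_value_range(window):
    # Range operator: maximum minus minimum grey value in the organising element
    return window.max() - window.min()

# median filtering (noise rejection):  rank_filter(img, m=3, statistic=np.median)
# texture / edge response:             rank_filter(img, m=3, statistic=grey_value_range)

Repeating the median call several times corresponds to the iterated application (posterisation) illustrated in Figs. 5 and 6.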
Image arithmetic

The operations discussed above operate on one single image. The image arithmetic operations use two images; they include addition, subtraction, division and multiplication of images, and are performed pixel by pixel. The addition operation is straightforward. A decision is required about how to deal with the result, since the resulting value can range from 0 to 510 (=255+255). To overcome this problem, the results can be rescaled by using Eq. 26:

IMAGE(x,y) = IMAGE(A)(x,y) / n   (26)

where IMAGE(x,y) = new image, IMAGE(A)(x,y) = result of the arithmetic operation and n = number of images added together. Another way is to rescale each pixel-related grey value by scaling it linearly into the range of grey values available for the whole image:

GV(N)(x,y) = 255 · [GV(A)(x,y) − GV(min)] / [GV(max) − GV(min)]   (27)

where GV(N)(x,y) = grey value of the pixel (x,y) within the resulting image, GV(A)(x,y) = grey value of the pixel resulting from the operation, GV(min) = minimal grey value in the operated images and GV(max) = maximal grey value in the operated images.

The technique of image subtraction is often used in quality control and reconnaissance of photographs to watch for the appearance or disappearance of targets in a complex scene (Russ 1992). Division or multiplication of two images can be useful for the removal of background (Fig. 7) or for examining fluorescent probes in the light microscope.
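A short sketch of the two rescaling strategies of Eqs. 26 and 27, assuming the input images are NumPy arrays of equal size:

import numpy as np

def add_and_average(images):
    # Add n images pixel by pixel and divide the result by n (Eq. 26)
    total = np.zeros(images[0].shape, dtype=np.float64)
    for img in images:
        total += img
    return total / len(images)

def rescale_to_grey_range(result):
    # Eq. 27: linear rescaling of an arithmetic result into the range 0-255
    gv_min, gv_max = float(result.min()), float(result.max())
    return 255.0 * (result - gv_min) / max(gv_max - gv_min, 1e-9)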
Prerequisites for measurements on digitised images

There are two main prerequisites for extracting quantitative information from digitised images. First, the objects or features to be measured must be clearly identifiable and then segmented. Second, instrumental and technical influences must be analysed and respected. In this context, two methodological biases should be taken into consideration if densitometric methods are used: the glare effect, which directly influences the integrated optical density (IOD); and the thickness of the histological slide, which may give rise to false data if the results of the measurements are not adequately corrected.
Fig. 7a, b Effect of image arithmetic. Multiplication can be useful for the removal of background. a Original image (stained for AgNORs). b Result of a ×5 multiplication of the original image with itself. ×80
Segmentation of objects

One of the most important steps in automatic digital image analysis is segmentation. Selecting objects of interest in an image divides the image into two domains, the objects (or foreground) and the rest (or background). This operation is traditionally based on a threshold grey value. A range of grey values can be defined in either the original or the enhanced image. The pixels with grey values lying in this range are selected as belonging to the object of interest (foreground); all other pixels of the image are rejected to the background. The result of this procedure is a two-level image, designated a binary image (Fig. 8). Binary images can be used either as original images for Boolean operations (logical rules) (Abmayr 1994; Russ 1992) or as masks to modify an original grey scale image for subsequent feature measurements. Masks can be laid over the original image for specifically selecting the regions of interest within the image. The mask restricts which pixels in the original image are to be used for quantitative evaluation and which are not to be used. In practice, masks select regions of interest by multiplying the original grey scale image by the binary image. In the binary image, the pixels belonging to the mask have the value 1, the pixels belonging to the background the value 0. The grey value histogram of the original or enhanced image is an important tool for setting threshold levels.
Fig. 8 Binary images made from the image which resulted after filtering the original image (Fig. 6, left) with a median filter (five iterations, m=3) (Fig. 6, right). The definitive binary image is shown on the right. In this image the small objects were removed by an additional size-related threshold. Notice the defects remaining in object 2. ×40 000
Peaks in the histogram often identify the various homogeneous regions corresponding to particular structures. The goal of enhancement with regard to preparing segmentation should be to construct a histogram with clearly defined peaks and with valleys as deep as possible. Consequently, a grey value lying in the valley of the histogram should be chosen as threshold. However, setting a threshold value at the minimum between the peaks can cause some pixels in each region to be misclassified. Interactively setting a threshold is not always consistent from one operator to another, or even for the same person over a period of time. The variability of thresholding represents a serious source of error for image analysis. That is why there are two requirements for setting threshold values: (1) reproducibility should be achieved, so that variations due to the operator, i.e. lighting, do not affect the results; and (2) accurate boundary delineation should be obtained. One of the shortcomings of thresholding is that the pixels are selected mainly by outlining regions of interest (objects, features). However, this method can lead to a bias, too. Most people tend to outline just outside the actual boundary of the region, making dimensions larger than they should be. Finally, the amount of this error is a function of the contrast at the edge. For finding out the suitable threshold value, various main characteristics of an image can be used, for example the brightness and the texture. This can be realised by combining the grey value histograms of the original image with the one after image enhancement (e.g. after applying a range operator detecting texture characteristics). A combination of histograms often works with a grey value histogram and a texture histogram (Fig. 9), or with a grey value histogram and a binary image, or with grey value histograms of two of the three channels of a colour image (red, blue and green channel). An example is demonstrated in Figs 9 and 10. A grey scale image has
Fig. 9 a Grey value (brightness) histogram of a fictitious grey scale image. b Grey value histogram of the fictitious grey scale image (presented in Fig. 9a), enhanced by a range operator
Fig. 10 Combined, bidimensional plot of the two histograms shown in Fig. 9. This plot allows the threshold grey values for segmenting all the four objects really existing in the fictitious image to be defined
a grey value histogram, as shown in Fig. 9a. It seems that the image contains three features because three peaks can be observed (peaks at the grey values of 15, 52 and 87). This original image is enhanced by a texture-sensitive operator, i.e. a range operator. The grey value histogram of the enhanced image is presented in Fig. 9b. It shows two peaks. When the two histograms are combined, four clusters can be observed in the resulting plot (Fig. 10). From this plot, the grey value thresholds can be determined for segmenting the four objects really existing in the image. Other more complex segmentation methods in addition to those based on thresholding grey values discussed above are described and discussed in Abmayr (1994) and Russ (1992).
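Threshold-based segmentation and the use of the resulting binary image as a mask can be sketched as follows; the threshold values themselves must of course be chosen from the histogram (or from a bidimensional plot such as Fig. 10), which this fragment does not attempt to automate:

import numpy as np

def threshold_segmentation(image, low, high):
    # Pixels with grey values in [low, high] become foreground (1), all others background (0)
    return ((image >= low) & (image <= high)).astype(np.uint8)

def apply_mask(image, binary):
    # Mask the original grey scale image: grey values are kept only inside the segmented objects
    return image * binary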
OD(true) = log
I
1-glare
W(meas) - glare
Analysis of nuclear deoxyribonucleic acid (DNA) content and quantitative immunohistochemistry are the two major applications of densitometry, also designated microphotometry (Erler and Marchevsky 1994). In DNA analysis, the concentration of a stoichiometrically stained substance is measured via the integrated optical density of the object in which the substance occurs. This procedure is based on Beer-Lambert's law (Erler and Marchevsky 1994): (28)
Instrumental and technical influences on densitometrical results

C = OD / (e · th)

where C=concentration, OD=optical density, th=thickness of the object measured and e=molar extinction coefficient of the dye. However, this law is strictly valid only if the nuclear DNA is dispersed homogeneously with constant thickness. Fluorescence microscopy differs from densitometry in that light energy absorbed by a fluorochrome dye is re-emitted at a specific wavelength. The intensity of the emitted light is directly proportional to the number of dye molecules present. Densitometric measurements are affected adversely by two optical factors: glare and shading. Glare is an instrument error due to multiple reflections in the microscope lenses as well as to stray light, which is caused mainly by dust. Glare was first described by Naora (1951, 1955) and is identical to the Schwarzschild-Villiger effect. A significant increase in light transmittance due to the glare effect was reported by Jarvis (1980). The amount of glare depends on the size of the objects analysed and on the staining intensity. Today, it is generally accepted that the glare effect influences the DNA histogram. The consensus report of the European Society of Analytical Cellular Pathology (Böcking et al. 1995) emphasises that it is very important to acknowledge this fact. The authors point out that two main test criteria should be observed for achieving an optimal technical accuracy in DNA cytometry: (1) the variation coefficient within the diploid class should not exceed 4%; and (2) the plot between the DNA content (indicated on the x-axis) and the diameter of the nuclei (indicated on the y-axis) should not show a deviation from the vertical. Goldstein (1970) introduced an algorithm for correcting the glare effect on optical density:

OD(true) = log [1 / (T(meas) - glare)]    (29)

where OD(true)=corrected optical density, T(meas)=measured transmittance and glare=amount of the glare. Kindermann and Hilgers (1994) have simplified this algorithm for a glare-related correction of the measured transmittance for each pixel in an image. They used the following three formulae:

T(glare) = T(bar) - T(background)    (30)

where T(glare)=glare effect on the transmittance, T(bar)=transmittance of standardised bars on a micrometric glass and T(background)=transmittance of the background of a micrometric glass.

T(meas,corr) = T(meas) - T(glare)    (31)

where T(meas,corr)=corrected transmittance of a pixel(x,y) and T(meas)=transmittance of a pixel(x,y).

T(norm) = T(meas,corr) · 1 / (1 - T(glare))    (32)

where T(norm)=corrected transmittance of a pixel(x,y), related to the background. The other factor affecting densitometrical parameters is non-uniform illumination, also called shading. The possibilities for overcoming this artefact are described in the second section "Main tools of image processing". There is a controversy about whether it is possible to carry out densitometric measurements on histological sections or not. Such measurements would have the advantage that cytological preparation from tissue blocks could be omitted. Densitometry on histological sections is, however, far from easy. The most important problem is to correct the defects in cell nuclei created by random sectioning through the nuclei when producing approximately 5 µm-thick tissue slides (cap defects). Haroske et al. (1984, 1993) have developed an algorithm which allows a correction for these cap defects. The procedure is based on a model with the following four assumptions (Haroske et al. 1993): (1) nuclei are spherical objects; (2) they have a homogeneous chromatin pattern; (3) nuclei are cut in the equatorial plane; and (4) the section thickness is known at each measurement site. The measured integrated optical density (IOD) can be corrected by the following formulae (Haroske et al. 1984):

IOD(i,corr) = IOD(i) · F    (33)
where IOD(i,corr)=corrected IOD of the object(i), IOD(i)=directly measured IOD of the object and F=correction factor. F can be calculated by:
F = 1 / [1 - (3/8) · (R/d)]    for R < d    (34)

and

F = 1 / [(3/4) · (d/R) - (1/8) · (d/R)³]    for R > d    (35)
where R=radius of the nucleus and d=section thickness. The authors point out that the most serious drawback of the correction procedure seems to be that it is purely theoretical. Bins and coworkers (Bins and Poppema 1986; Bins and Takens 1985) use another algorithm for correcting IOD values obtained from histological sections: the directly measured IOD values are multiplied by the square root of the nuclear area. If densitometry is applied to cytological preparations, a standardisation of the IOD in relation to the optical density (OD) of the background is indicated. For doing this, Tucker (1979) has proposed the following algorithm:

IOD(corr) = IOD(meas) - [A(meas) · OD(back)]    (36)

where IOD(corr)=corrected IOD, IOD(meas)=measured IOD of the nucleus, A(meas)=measured area of the nucleus and OD(back)=mean optical density of the background. We routinely apply this algorithm in every cytometrical measurement.
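These corrections reduce to a few array operations. The following sketch is a minimal illustration of the glare correction of Eqs. 30-32 and the background standardisation of Eq. 36, not the authors' implementation; NumPy is assumed and all function and variable names are our own.

```python
import numpy as np

def glare_corrected_transmittance(t_meas, t_bar, t_background):
    """Pixel-wise glare correction in the sense of Eqs. 30-32.

    t_meas       -- measured transmittance image (values in 0..1)
    t_bar        -- transmittance of the standardised bars on a micrometric glass
    t_background -- transmittance of the background of the micrometric glass
    """
    t_glare = t_bar - t_background       # Eq. 30: glare contribution
    t_corr = t_meas - t_glare            # Eq. 31: subtract the glare per pixel
    t_norm = t_corr / (1.0 - t_glare)    # Eq. 32: relate the result to the background
    return t_norm

def tucker_corrected_iod(iod_meas, area_meas, od_background):
    """Background standardisation of the IOD in the sense of Eq. 36."""
    return iod_meas - area_meas * od_background

if __name__ == "__main__":
    # Toy example with invented values: a 4x4 transmittance image and 2% glare.
    t = np.full((4, 4), 0.55)
    print(glare_corrected_transmittance(t, t_bar=0.04, t_background=0.02))
    print(tucker_corrected_iod(iod_meas=1250.0, area_meas=480.0, od_background=0.02))
```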
Methods for quantifying digitised images

Many mathematical operations exist for extracting quantitative information from images in a broad sense. They can be carried out on binary images or on grey scale images.

Operations on binary images

The result of segmentation operations is rarely perfect because even the most elaborate segmentation routines misclassify some pixels as foreground or background.
Therefore, additional operations on binary images are necessary for eliminating these errors. The major tools for working with binary images fall broadly into two groups: Boolean operations (Russ 1992), for combining images; and morphological operations, which modify individual pixels within images. In binary images, only two groups of pixels occur: (1) pixels which are turned ON (pixels that are part of the selected objects or of the foreground); and (2) pixels marked as OFF (the remaining pixels, which are part of the background). Boolean operations are applied when one wishes to combine the information from several binary images. Four Boolean operations exist: AND, OR, EXOR and NOT. AND, OR and EXOR require two input images, whereas NOT requires only a single input image (Abmayr 1994; Haberäcker 1989; Russ 1992). The functions of the operations are summarised in Table 6.

Table 6 The characteristics of the four most important Boolean operations on binary images

Boolean operation   Pixels in Image 1   Pixels in Image 2   Pixels in the resulting image
AND                 ON                  ON                  ON
AND                 ON                  OFF                 OFF
AND                 OFF                 OFF                 OFF
OR                  ON                  ON                  ON
OR                  ON                  OFF                 ON
OR                  OFF                 OFF                 OFF
EXOR                ON                  ON                  OFF
EXOR                ON                  OFF                 ON
EXOR                OFF                 OFF                 OFF
NOT                 ON                  -                   OFF
NOT                 OFF                 -                   ON

The morphological operations on binary images mainly include erosion and dilation (Coster and Chermant 1985; Serra 1982) and modifications of these operations. All are fundamentally neighbourhood operations. Because the values of pixels in the binary image are restricted to 0 and 1, the operations are simple rank operations. Erosion turns pixels that were originally ON to OFF. During erosion, the reference or initial point is replaced by the minimum value in the neighbourhood: the initial point remains ON only if all the pixels in its neighbourhood are ON. Erosion leads to a shrinking of features. A special form of erosion is skeletonisation (Pavlidis and Heath 1980). A complementary operation to erosion is dilation (sometimes also called dilatation). The classical dilation rule is to add (set to ON) any background pixel that touches another pixel already part of a region of interest to this region. This operation adds a layer of pixels around the periphery of features or objects. By dilation the size of objects increases. The combination of an erosion followed by a dilation is called an opening, referring to the ability of this combination to open up spaces between just-touching features. It is one of the most commonly used operations for removing pixel noise from binary images. If a dilation is followed by an erosion, the operation is called closing. Instead of removing isolated pixels that are ON, the result is to fill in places where isolated pixels are OFF (i.e. missing pixels within features or objects; Figs. 8, 11).

Fig. 11 The defect in object 2 shown in Fig. 8 could be closed by a dilation with two iterations followed by an erosion. ×40 000

Erosion, dilation and opening can be adjusted to actual problems by changing the neighbourhood pattern and the number of iterations. Additionally, morphological operations can also consist of analysing geometrical characteristics of objects in binary images. Such criteria can be size, shape or the ratio of the caliper diameters. The objects in the binary image shown in Fig. 8 (on the right) were finally selected by a size threshold: objects with an area <500 pixels were removed.
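As an illustration of these binary operations, the sketch below is our own minimal example, assuming SciPy; the 3×3 organising element is an arbitrary choice, while the 500-pixel threshold follows the value mentioned in the text. It applies a closing to fill small defects, an opening to remove pixel noise, and then discards labelled objects that are too small.

```python
import numpy as np
from scipy import ndimage

def clean_binary_mask(mask, min_area=500):
    """Morphological clean-up of a segmented binary mask.

    mask     -- 2-D boolean array (True = ON / foreground)
    min_area -- objects with fewer ON pixels than this are removed
    """
    structure = np.ones((3, 3), bool)  # 8-connected organising element

    # Closing (dilation followed by erosion) fills missing pixels inside objects.
    closed = ndimage.binary_closing(mask, structure=structure, iterations=2)

    # Opening (erosion followed by dilation) removes isolated noise pixels.
    opened = ndimage.binary_opening(closed, structure=structure)

    # Size threshold: label connected objects and keep only the large ones.
    labels, n = ndimage.label(opened, structure=structure)
    areas = ndimage.sum(opened, labels, index=np.arange(1, n + 1))
    return np.isin(labels, np.flatnonzero(areas >= min_area) + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noisy = rng.random((256, 256)) > 0.97      # sparse pixel noise
    noisy[40:120, 60:160] = True               # one large object
    print(clean_binary_mask(noisy).sum())      # area of the surviving object
```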
Operations on grey scale images (quantitative feature extraction)

There are many methods allowing direct quantitation of segmented objects. They can be used for analyses on light microscopical (cytological and histological) or electron microscopical preparations (Shotton 1995). Measurements on cytological preparations or of cytological features in histological preparations can generally be called cytometry. Histometry deals with a quantitation of histological structures, recently also called cellular sociology (Marcelpoil and Usson 1992). The intrinsic value of digital image analysis lies in the fact that, in contrast to other morphological methods, it permits the presentation of findings by numbers. Numbers are objective facts. They can be judged by testing for reproducibility and are an essential prerequisite for evaluating and classifying findings by statistical methods. Nevertheless, the importance of quantitative results as objective findings for diagnosis should not be overestimated. Quantitative data are no more than important stones in the mosaic of the entire morphological and clinical diagnosis. The set of parameters which can be generated by digital image analysis in our laboratory is summarised in Table 7. In densitometry we use two sets of parameters: (1) describing standardised DNA content; and (2) the relative integrated optical density. For measuring DNA content, we work with two reference cell populations. If ploidy classes are demanded, the mean width of the classes can be calculated according to Eq. 37:

CW = [IOD(b) - IOD(a)] / (b - a)    (37)

where CW=width of ploidy classes, IOD(b)=median value of the IOD of the reference population b, IOD(a)=median value of the IOD of the reference population a, b=ploidy value of the reference population b (referred to the human ploidy) and a=ploidy value of the reference population a (referred to the human ploidy; b>a). Comparisons between the IOD of tumour cells and the IOD of a reference population are independent of the staining used. If the measurements are made on Feulgen-stained nuclei (Feulgen and Rossenbeck 1924), the results also contain information about the DNA content.

Table 7 Overview of the sets of parameters which can be calculated in quantitative image analysis in our laboratory

Parameters                                                    Authors
Planimetric: area, mean diameter, Feret diameters,            Gschwind et al. (1986), Haberäcker (1989),
  shape descriptors, parameters derived from the                Gonzales and Wintz (1987)
  histogram of the grey values
Densitometric: standardised DNA content,
  relative integrated optical density
Densitometry of the normalised co-occurrence matrix:          Haralick et al. (1973), Van Gool et al. (1985)
  anisotropy, contrast, entropy, correlation, sum of the
  squares, inverse difference moment, sum average
Invariant moments                                             Christen et al. (1989), Hu (1962)
Run lengths                                                   Galloway (1975)
Parameters of quantitative immunohistochemistry               Weinberg (1994)
Parameters of silver-stained nucleolar organiser regions      Aubele et al. (1994a, b), Chen and Wu (1992),
                                                                Rüschoff et al. (1992, 1993)
Parameters of cellular sociology                              Marcelpoil and Usson (1992)
From the IOD histogram, an IOD index can be calculated, analogously to the known DNA index:

IOD index = Mode(IOD) / IOD(ref)    (38)

where Mode(IOD)=mode value of the IOD in the tumour cell population and IOD(ref)=mean value of the IOD in the reference cell population. Since for the calculation of the relative IOD only one reference population is used, the parameter Mode(IOD) can be influenced by the width of the histogram classes. Therefore, the IOD index cannot be interpreted as a direct measure of ploidy. However, the IOD index gives valuable general information about the deviation of the chromatin content of a tumour population from a reference population. The relative IOD is a good estimator for chromatin content when preparation artefacts must be taken into consideration. The invariant moments can be calculated, in an analogous manner to magnetic moments, from the distance separating two or more pixels of the same grey value or from the distance of two or more pixels with a defined difference of grey values. Invariant moments combine information about the structural pattern (texture) and the grey value intensity (Christen et al. 1989). So far, only a few papers concerning the quantitation of images with fractals have been published (Feder 1988; Russ 1992; Sanders and Crocker 1993).
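A minimal numerical sketch of Eqs. 37 and 38 follows; it is our own example, with invented IOD values and an arbitrary bin count, assuming NumPy.

```python
import numpy as np

def ploidy_class_width(iod_a, iod_b, ploidy_a, ploidy_b):
    """Eq. 37: width of ploidy classes from two reference populations (b > a)."""
    return (np.median(iod_b) - np.median(iod_a)) / (ploidy_b - ploidy_a)

def iod_index(iod_tumour, iod_reference, n_bins=64):
    """Eq. 38: mode of the tumour IOD histogram divided by the mean reference IOD."""
    counts, edges = np.histogram(iod_tumour, bins=n_bins)
    mode = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
    return mode / np.mean(iod_reference)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ref_a = rng.normal(100.0, 4.0, 200)    # hypothetical reference population, ploidy a = 2
    ref_b = rng.normal(200.0, 8.0, 200)    # hypothetical reference population, ploidy b = 4
    tumour = rng.normal(160.0, 25.0, 500)  # hypothetical tumour population
    print(ploidy_class_width(ref_a, ref_b, 2, 4))  # about 50 IOD units per ploidy class
    print(iod_index(tumour, ref_a))                # about 1.6
```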
An image analysis procedure for evaluating the Ki-67 index by machine and further techniques to use colour information

Our experience over the past year has demonstrated that the colour image processing applied for the evaluation of the Ki-67 index can now be used practically in routine work. The procedure is easy to implement on standard hardware together with commercial image analysis software. Working with filters and using mathematical operations and spectral information are valuable tools for optimising colour-based segmentation. In every case, the histological or cytological preparation of the specimens and its standardisation are additional, very important elements for quantifying staining results in immunohistology and immunocytology, and they may have a direct and substantial influence on the final results.
Practical measurement of the Ki-67 index by machine

In the first step, the camera settings are adjusted by balancing its black value (no light is transmitted) and its white value (maximum light transmission in an empty field). For controlling the settings, a white area is captured and the histogram of the grey values of all three channels of the RGB (red, green, blue) image is shown. The histogram allows one to check whether there is enough light in the measuring field. This is the case if the peaks of each of the channels lie in the upper third of the grey value scale
and if the maximum transmission in each channel is practically identical. The RGB image is acquired by a three-chip CCD (charge-coupled device) camera. Gain and offset of the camera are set automatically. In the second step, a shading image is acquired and filtered by a Gaussian operator (m=2) for partly correcting the illumination non-uniformity. In the third step, the first measuring field is selected. Within this field a region of interest can be interactively segmented by a cursor. By default, the whole image shown on the screen is selected as region of interest. The shading correction is performed in each of the three channel images of the RGB image. Additionally, in all three channels, the grey value histogram is equalised to amplify the contrast (Fig. 12a). The red, green and blue images are enhanced by application of a median rank operator (m=3, five iterations). The grey images of the three channels of the RGB image and the binary mask of the selected region of interest are stored on an optical disk. Further regions of interest can now be selected either automatically or interactively. The contrast of all the nuclei in relation to the background of the preparation is then amplified by a multiplication of the red with the green channel image. The result of this operation is shown in Fig. 13a. In the fourth step, a square Gaussian (lowpass) filter is applied (m=25, one iteration; Fig. 13b). The side length of the organising element should lie within the range 1.0-1.2 times the mean diameter of all the nuclei to be analysed. After this operation, the newly created image is subtracted from the image which was generated in the third step (Fig. 13c). Within this image, all nuclei are interactively segmented with a grey value threshold in the fifth step (Fig. 13d). In the resulting binary mask, small particles and nuclei touching the margins of the image are erased by morphological operations. In the sixth step, the corrected binary mask is connected to the blue channel of the original image by a Boolean AND operation (Fig. 13e). This operation leads to an image containing positively stained epitopes shown as dark regions. These regions can easily be segmented by a grey value threshold. Nuclei within the original image are designated positive if they include a positive epitope with a minimal size of 16 pixels (Fig. 13f). The above procedure leads to an image in which the positively stained nuclei and the unstained nuclei can be counted automatically. From these data, the Ki-67 index can easily be calculated. In addition to the Ki-67 index, many other parameters are determined, such as the sum of the area of all nuclei in both classes, the mean optical density of each nucleus and the area of the positively stained epitopes per nucleus in relation to the whole area of the nucleus designated as positive. The quotient between the area of the epitopes within the nuclear area and the area of nuclei designated as positive is a very good estimate of the staining intensity. The procedure described above is repeated for all selected measuring fields.
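The sketch below condenses the central steps of this sequence (contrast amplification by multiplying the red and green channels, background flattening with a large Gaussian lowpass, thresholding, and the AND combination with the blue channel). It is our own minimal illustration with NumPy/SciPy, not the commercial software used in the laboratory; the 16-pixel epitope size comes from the text, whereas the function names, the Gaussian sigma and the thresholds are assumptions.

```python
import numpy as np
from scipy import ndimage

def ki67_masks(red, green, blue, nucleus_threshold, epitope_threshold,
               gauss_sigma=12.0, min_epitope_area=16):
    """Rough sketch of the nuclear and positive-epitope segmentation.

    red, green, blue -- float images of the three RGB channels (0..1)
    Returns a boolean nucleus mask and a boolean mask of the positive nuclei.
    """
    # Amplify nuclear contrast by multiplying the red with the green channel.
    contrast = red * green

    # A large Gaussian lowpass estimates the slowly varying background ...
    background = ndimage.gaussian_filter(contrast, sigma=gauss_sigma)
    # ... which is subtracted so that all nuclei reach a similar grey level.
    flattened = contrast - background

    # Grey value threshold for the (dark) nuclei; chosen interactively in the original procedure.
    nuclei = flattened < nucleus_threshold

    # Positive epitopes: dark regions of the blue channel inside nuclei (Boolean AND).
    epitopes = nuclei & (blue < epitope_threshold)

    # A nucleus counts as positive if it contains an epitope of at least 16 pixels.
    labels, n = ndimage.label(nuclei)
    epitope_area = ndimage.sum(epitopes, labels, index=np.arange(1, n + 1))
    positive_ids = np.flatnonzero(epitope_area >= min_epitope_area) + 1
    positive = np.isin(labels, positive_ids)
    return nuclei, positive

# The Ki-67 index is then the number of positive nuclei divided by the number of all nuclei,
# counted on the two labelled masks.
```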
The data resulting from each measuring field and from each sample unit are automatically stored on an optical disk together with the measured fields and the masks of the regions of interest. It is obvious that the most sophisticated step of the whole procedure is the segmentation of all nuclei from the background. Recent experience shows that the segmentation can probably be improved by using some additional tools, such as other techniques for colour image processing and other algorithms for separating touching nuclei and for eliminating small particles which do not belong to one of the two classes of nuclei. In the current algorithm, the touching nuclei within the binary image are separated from one another by opening operations. These operations also allow small particles to be erased. A new algorithm for better segmenting objects starts with the calculation of a Euclidean distance map (Russ 1992) followed by a watershed operation (Beucher and Lantejoul 1979; Lantejoul and Beucher 1981), as explained in Fig. 14 (Russ and Russ 1988). The original colour image corresponding to the binary image on which the effect of the Euclidean distance map and the watershed algorithm is demonstrated is shown in Fig. 12e. By the Euclidean distance map algorithm, each pixel, in either the object or the background, is assigned a grey value equal to its distance from the nearest boundary (background or object). The resulting distance map image (Fig. 14b) can be taken as an input image on which a watershed segmentation can be carried out. For understanding what a watershed algorithm is, the grey value of each pixel in an object can be compared with a physical elevation, and the object itself with a mountain. The ultimate eroded pixels arising in a Euclidean distance map are the peaks of the mountains (corresponding to the local maxima in the map). The watersheds of these mountains are the lines which are used as boundaries for the segmentation procedure. The placement of these lines according to the relative height of the mountains (size of the objects) gives the best estimate of the separation lines between the single objects. The resulting boundaries are composed of the spots remaining if an ultimate erosion on the binary mask is performed (Fig. 14c). A separation of touching objects can finally be reached (Fig. 14d) if a Boolean AND operation is carried out with the binary image (Fig. 14a) and the image containing the above-mentioned boundaries (Fig. 14c).
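A compact sketch of this distance-map/watershed separation follows; it illustrates the general technique with SciPy and scikit-image and is not the implementation of the cited authors. The minimum peak distance is an arbitrary assumption.

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def separate_touching_objects(mask, min_peak_distance=10):
    """Split touching objects in a binary mask using a Euclidean distance map and watershed."""
    # Euclidean distance map: each foreground pixel receives its distance to the background.
    distance = ndimage.distance_transform_edt(mask)

    # Local maxima of the distance map (the "mountain peaks") serve as watershed markers.
    peaks = peak_local_max(distance, min_distance=min_peak_distance, labels=mask)
    markers = np.zeros_like(mask, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

    # Flooding the inverted distance map from the markers places watershed lines between
    # touching objects; the mask restricts the flooding to the foreground.
    return watershed(-distance, markers, mask=mask)
```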
Fig. 12a-f Filters and colorimetric methods in colour image analysis. a Original colour image after equalisation of each of the three colour channel images (red, green, blue). b Result after application of a broadband blue filter to the original image. c Result after application of an interference filter with 500 nm wavelength to the original image presented in Fig. 12a. d Result after application of an interference filter with 650 nm wavelength to the original image. e Original RGB (red, green, blue) colour image on which the effect of other segmentation methods will be explained in Fig. 14. f Corresponding RGB image after equalisation of each of the three colour channel images (red, green, blue). ×40
More complex methods in digital colour image analysis

HSI space

In digital colour image analysis, two different systems, also called spaces, are distinguished: the RGB space containing the red, green and blue channels and the HSI space with the hue, saturation and intensity channels (Russ 1992). A frequently used simple method is the conversion of an RGB image into an HSI image. The grey value histograms of the hue channel and the intensity channel can, for example, be combined. The clusters appearing in the bidimensional histogram allow a grey value threshold for an object segmentation to be chosen. Furthermore, classifications on these HSI channels can also be carried out with LUTs (Garbay et al. 1986; Julis and Mikes 1992; Lutz et al. 1991; MacAulay et al. 1989; Toet 1992). However, operations using elements of the HSI space can have some disadvantages. In the green channel of the RGB image, all nuclei of both classes (stained and unstained) can easily be discriminated by eye. Conversely, in the hue channel of the HSI image, the unstained nuclei cannot clearly be separated from the background because the grey values are broadly distributed. This phenomenon can be explained by the fact that the information contained in the hue channel corresponds to a circular data type. A circular data type, however, cannot be handled by conventional statistical methods. A further problem observed in hue channel images of preparations stained with two different reagents is that the reaction products appear as a single new colour and not as two colours, one marking the epitopes, the other marking counterstained regions. The HSI image can partly be used for improving the visual appearance of images to a human viewer; however, it is not suitable for preparing images for segmentations and measurements.
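A small sketch of the hue/intensity histogram idea is given below, assuming scikit-image. We use the closely related HSV transform that scikit-image provides rather than a true HSI conversion, and the bin count is our own choice.

```python
import numpy as np
from skimage.color import rgb2hsv

def hue_intensity_histogram(rgb, bins=32):
    """Bidimensional histogram of hue versus intensity (value) for an RGB image.

    Clusters in this histogram can suggest thresholds for colour-based segmentation.
    """
    hsv = rgb2hsv(rgb)                  # hue, saturation, value, each in [0, 1]
    hue = hsv[..., 0].ravel()
    value = hsv[..., 2].ravel()
    hist, hue_edges, value_edges = np.histogram2d(
        hue, value, bins=bins, range=[[0, 1], [0, 1]])
    return hist, hue_edges, value_edges
```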
Statistical methods

In addition to grey value thresholds, statistical methods can be applied for segmenting objects within an image. The most important statistical methods are cluster analysis in the RGB space or in other n-dimensional vector spaces, and topological map analysis (Schacter et al. 1976; Souchier 1992). Some algorithms require a priori knowledge, which is acquired from learning regions chosen by the operator. Other methods exist which are not supervised by the operator and are therefore free of a possible individual bias. Experimentally, we have obtained some very good classification results with a topological map analysis without any a priori knowledge.
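As one unsupervised example of cluster analysis in RGB space, the sketch below groups pixels into a small number of colour classes with k-means (scikit-learn assumed). This is not the topological map method mentioned above, and the number of classes is an arbitrary choice.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_rgb_pixels(rgb, n_classes=3, random_state=0):
    """Unsupervised segmentation by k-means clustering of the RGB pixel vectors.

    rgb -- array of shape (height, width, 3); returns a label image of shape (height, width).
    """
    pixels = rgb.reshape(-1, 3).astype(float)
    kmeans = KMeans(n_clusters=n_classes, n_init=10, random_state=random_state)
    labels = kmeans.fit_predict(pixels)
    return labels.reshape(rgb.shape[:2])
```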
Colorimetry

Colorimetrical methods (Fig. 12) lead to qualitatively very good results. However, the methods cannot yet be introduced into routine work because some complicated additional software elements are required. Today, colorimetry mainly serves for analysing staining results with respect to their suitability for automatic segmentation procedures.

Fig. 13a-f Explanation of the algorithm used in our laboratory for determining the Ki-67 index by machine. a Result of the following image enhancement operations: equalisation of both the green and red channel images, application of a median lowpass filter (m=3, five iterations) and multiplication of the green channel image with the red channel image. b Result of the enhancement of the image presented in a by a Gaussian operator (m=25, one iteration). c Result of the difference between the images in a and b. All the nuclei contain pixels with practically the same grey value. d Result after an interactive application of a grey value threshold within the image presented in c. e Result of the Boolean AND operation (Table 6) of the image shown in d with the blue channel image. Within this image, the epitopes are interactively segmented by a grey value threshold. f Presentation of the positive nuclei after the segmentation procedure. A nucleus was classified as positive if the area of its epitopes was ≥16 pixels. ×40

Fig. 14a-d Effect of other segmentation methods: Euclidean distance map and watershed algorithm. a Resulting binary mask after application of all the operations explained in Fig. 13. b Result after application of the Euclidean distance map algorithm on a. c Result after application of the watershed algorithm on a. d Result of the Boolean AND operation with the two images shown in a and c. Compare this image with the result achieved after application of the standard algorithm for detecting all nuclei in Ki-67-stained slides (Fig. 14a). ×40
Data handling

Quantitative evaluation of digital images produces a large quantity of numerical data. It seems to be essential and efficient that two phases in the evaluation procedure are distinguished: (1) generation of fundamental data (x- and y-coordinates and grey values of the pixels immediately after object segmentation); and (2) calculation of parameters from these data. In quantitative image analysis, the development of new evaluation methods should also be anticipated. It may even be that, in time, old methods are completely replaced by new ones. With this background it is desirable that objects evaluated previously can simply be reanalysed with new algorithms. For such repeated evaluations, two essential conditions should be fulfilled: (1) the data generated during segmentation must always remain available; and (2) these data must be conceptually separated from the parameters deduced from them.

Primary and secondary data

In this context we differentiate between two types of data produced during quantitative image analysis: (1) primary or fundamental data; and (2) secondary or calculated data. With such a data organisation it is no longer necessary to repeat the object segmentation if new calculations are to be carried out on objects which were selected earlier. Incidentally, the general situation in quantitative image analysis can be compared with that in immunohistochemistry or immunocytochemistry: if tissue blocks have been stored, re-examination with new antibodies can be undertaken. If the primary data remain stored, newly developed algorithms can simply be applied to the old, still-available objects.

Relational data bank

The data, either primary or secondary, can be stored according to three different models: the hierarchical, network or table-related (relational) model. We successfully use the relational model. Tables are the key elements of a relational data model. They are composed of data and attributes belonging to the data. One of these attributes is used as the identification key. In our department we work with several relational data banks, each assigned to one of the organisational units of the department. The identification key used in every data bank is the task number of the specimen analysed. Each specimen is characterised by the following four numbers: patient, job, task and, if necessary, as in telepathology (Oberholzer et al. 1995), the image number. Over a period of time, many jobs may be requested for one patient. A job may include many tasks, and, in connection with a task, many images may be made. Each object that is segmented receives the task number of the specimen as identification key. Additionally,
the user can interactively assign a further number, a grouping code, to the individual object. This can be very helpful if the data are to be available for allocation or other statistical procedures, since the task number and the group code are automatically stored in the record of the object. If calculations are carried out and secondary data are generated, the resulting parameter values are stored both for the individual object and as mean or median values together with other descriptors of the data histogram for each sample unit. Usually, the sample unit corresponds to a patient. The identification key is automatically introduced into the record where these data are stored.
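A minimal sketch of such a relational organisation is shown below, using Python's built-in sqlite3 module. The table and column names are our own invention for illustration; the original work uses its own laboratory data banks.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Primary (fundamental) data: one row per segmented pixel, keyed by the task number.
CREATE TABLE primary_data (
    task_no    TEXT,     -- identification key of the specimen
    object_id  INTEGER,  -- segmented object within the specimen
    group_code INTEGER,  -- optional grouping code assigned by the user
    x INTEGER, y INTEGER,
    grey_value INTEGER
);
-- Secondary (calculated) data: parameter values derived later from the primary data.
CREATE TABLE secondary_data (
    task_no   TEXT,
    object_id INTEGER,
    parameter TEXT,      -- e.g. 'area' or 'IOD'
    value     REAL
);
""")

# New algorithms can be applied to previously segmented objects by reading primary_data
# and writing fresh rows into secondary_data, without repeating the segmentation.
```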
Conclusion

Quantitative image analysis needs extensive knowledge in computer technology, evaluation methodology and information theory. The practical experience collected during recent years demonstrates how essential it is to base quantitative image analysis on a clear concept. Such a well-tried concept includes two main elements: (1) a distinction between primary data, corresponding to the key data of the segmented objects of interest (grey value, x- and y-coordinates of every pixel in the segmented object), and secondary data, corresponding to the parameter values calculated from the primary data; and (2) a clear distinction between image enhancement procedures and segmentation of objects within images. Whatever method for enhancing images is applied, the user should be aware of whether a pixel-based or a neighbourhood-based method, and an averaging or a ranking operator, is being used. In dealing with methods or instruments, it is always essential to understand the instrument exactly, to know what one really does when using it and, finally, if correction algorithms must be taken into consideration, to be aware of the model and its limitations.
References

Abmayr W (1994) Einführung in die digitale Bildverarbeitung. Teubner, Stuttgart
Aubele M, Auer G, Gais P, Jütting U, Rodenacker K, Voss A (1994a) Nucleolus organiser regions (AgNORs) in ductal mammary carcinoma: comparison with classifications and prognosis. Pathol Res Pract 190:129-137
Aubele M, Auer G, Jütting U, Falkmer U, Gais P (1994b) Prognostic value of quantitatively measured AgNORs in ductal mammary carcinoma. Anal Quant Cytol Histol 16:211-218
Beucher S, Lantejoul C (1979) Use of watersheds in contour detection. Proceedings of the international workshop on image processing. CCETT, Rennes, France
Bins M, Poppema S (1986) Tetraploidy of centroblasts in lymphomas demonstrated in thin tissue sections. In: Mary JV, Rigaut JP (eds) Quantitative image analysis in cancer cytology and histology. Elsevier Science Publishers (Biomedical Division), Amsterdam, pp 321-323
Bins M, Takens F (1985) A method to estimate the DNA content of whole nuclei from measurements made on thin tissue sections. Cytometry 6:234-237
Böcking A, Giroud F, Reith A (1995) Consensus report of the ESACP task force on standardisation of diagnostic DNA image cytometry. Anal Cell Pathol 8:67-74
Bright DS, Steel EB (1986) Bright-field image correction with various image-processing tools. In: Romig AD, Chambers WF (eds) Microbeam analysis 1986. San Francisco Press, San Francisco, pp 517-520
Chen MH, Wu MY (1992) Morphometric study on AgNORs of esophageal squamous cell carcinoma and paraneoplastic epithelial cells. Acta Stereol 11:205-209
Christen H, Oberholzer M, Buser M, Lötscher R, Gschwind R, Rösel F, Ettlin R, Feess A, Dalquen P (1989) Digital image analysis in cytological diagnosis: a morphometric analysis on pleural mesotheliomas. Anal Cell Pathol 1:105-122
Coster M, Chermant JL (1985) Précis d'analyse d'images. Editions du Centre National de la Recherche Scientifique, Paris
Davies ER (1988) On the noise suppression and image enhancement characteristics of the median, truncated median and mode filters. Pattern Recogn Lett 7:87-97
Erler BS, Marchevsky AM (1994) Microphotometry in pathology. In: Marchevsky AM, Bartels PH (eds) Image analysis. A primer for pathologists. Raven Press, New York, pp 181-206
Feder J (1988) Fractals. Plenum Press, New York
Feulgen R, Rossenbeck H (1924) Mikroskopisch-chemischer Nachweis einer Nucleinsäure vom Typus der Thymonukleinsäure und die darauf beruhende elektive Färbung von Zellkernen in mikroskopischen Präparaten. Physiol Chem 135:203-249
Galloway MM (1975) Texture analysis using grey level run lengths. Comput Graph Image Process 4:172-179
Garbay C, Chassery JM, Brugal G (1986) An iterative region-growing process for cell image segmentation based on local color similarity and global shape criteria. Anal Quant Cytol Histol 8:25-34
Goldstein GJ (1970) Aspects of scanning microdensitometry. I. Stray light (glare). J Microsc 92:1-16
Gonzales R, Wintz P (1987) Digital image processing. Addison-Wesley, Reading
Gschwind R, Umbricht CB, Torhorst J, Oberholzer M (1986) Evaluation of shape descriptors for the morphometric analysis of cell nuclei. Pathol Res Pract 181:213-222
Haberäcker P (1989) Digitale Bildverarbeitung. Grundlagen und Anwendungen. Carl Hanser, Munich Vienna
Haralick RM, Shanmugam K, Dinstein I (1973) Textural features for image classification. IEEE Trans Syst Man Cybern 3:610-621
Haroske G, Dimmer V, Herrmann WR, Kunze KD, Meyer W (1984) Metastasizing APUD cell tumours of the human gastrointestinal tract: light microscopic and karyometric studies. Pathol Res Pract 178:63-368
Haroske G, Meyer W, Dimmer V, Kunze KD, Theissig F (1993) Feasibility and limitations of a cytometric DNA ploidy analysis procedure in tissue sections. Zentralbl Pathol 139:407-417
Hu MK (1962) Visual pattern recognition by moment invariants. IRE Trans Inf Theory 8:179-187
Hurst HE, Black RP, Simaika YM (1965) Long term storage: an experimental study. Constable, London
Jarvis LR (1980) Microdensitometry with image analyser video scanners. J Microsc 121:337-346
Julis I, Mikes J (1992) True colour image analysis and histopathology. Eur Microsc Anal 19:1-13
Kindermann D, Hilgers CH (1994) Glare-correction in DNA image cytometry. Anal Cell Pathol 6:165-180
Lantejoul C, Beucher S (1981) On the use of the geodesic metric in image analysis. J Microsc 121:39-43
Lefkovits M (1993) Differenzierung von dysplastischen Kolonadenomen mittels statischer Morphometrie. Inaugural dissertation, Medical Faculty, University of Basel
Lutz RW, Pun T, Pellegrini C (1991) Colour displays and look-up tables: real time modification of digital images. Comput Med Imaging Graph 15:73-84
MacAulay C, Tezcan H, Palcic B (1989) Adaptive color basis transformation. Anal Quant Cytol Histol 11:53-58
Marcelpoil R, Usson Y (1992) Methods for the study of cellular sociology: Voronoi diagrams and parametrization of the spatial relationships. J Theor Biol 154:359-369
Moldenhauer JH (1993) Die prognostische Bedeutung bildanalytischer Kernparameter von Prostatakarzinomen. Inaugural dissertation, Medical Faculty, University of Basel
Naora H (1951) Microspectrophotometry and cytochemical analysis of nucleic acids. Science 114:279-280
Naora H (1955) Microspectrophotometry of cell nucleus stained by Feulgen reaction. I. Microspectrophotometric apparatus without Schwarzschild-Villiger effect. Exp Cell Res 8:259-264
Oberholzer M, Fischer HR, Christen H, Gerber S, Brühlmann M, Mihatsch MJ, Gahm T, Famos M, Winkler C, Fehr P, Hosch HJ, Bächtold L (1995) A perspective to telepathology: frozen section diagnosis at a distance. Virchows Arch 426:3-9
Pauli C (1993) Morphometrische Untersuchungen an Lungenmakrophagen bei Patienten mit chronischer Bronchitis, AIDS und Karzinomen in der Lunge. Inaugural dissertation, Medical Faculty, University of Basel
Pavlidis LD, Heath JP (1980) Reconstruction from stereo and multiple electron microscopy images of thick sections of embedded biological specimens using computer graphics methods. J Microsc 153:193-204
Reeves AA (1990) Optimized fast Hartley transform for the MC68000 with applications in image processing. Thesis, Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire
Rosenfeld A, Kak AC (1982) Digital image processing. Academic Press, New York
Rüschoff J, Bittinger A, Gogolok A, Von Keitz A, Ulshöfer B (1992) Quantitation of AgNORs in urothelial cancer - evaluation of diagnostic parameters in histology and cytology. Acta Stereol 11:109-117
Rüschoff J, Prasser C, Cortez T, Höhne HM, Hohenberger W, Hofstädter F (1993) Diagnostic value of AgNOR staining in follicular cell neoplasms of the thyroid: comparison of evaluation methods and nucleolar features. Am J Surg Pathol 17:1281-1288
Russ JC (1990) Processing images with a local Hurst operator to reveal texture differences. J Comput Assist Microsc 4:249-257
Russ JC (1992) The image processing handbook. CRC Press, Boca Raton
Russ JC, Russ JC (1988) Improved implementation of a convex segmentation algorithm. Acta Stereol 7:33-40
Sanders H, Crocker J (1993) A simple technique for the measurement of fractal dimension in histopathological specimens. J Pathol 169:383-385
Schacter BJ, Davis LS, Rosenfeld A (1976) Scene segmentation by cluster detection in color spaces. SIGART Newslett 58:16-17
Serra J (1982) Image analysis and mathematical morphology. Academic Press, London
Shotton DM (1995) Electronic light microscopy: present capabilities and future prospects. Histochem Cell Biol 104:97-137
Souchier C (1992) Stepwise discriminant analysis: an aid for color image segmentation. Acta Stereol 11:191-197
Toet A (1992) Multiscale color image enhancement. Pattern Recogn Lett 13:167-174
Tucker JH (1979) An image analysis system for cervical cytology automation using nuclear DNA content. J Histochem Cytochem 27:613-620
Van Gool L, Dewaele P, Oosterlinck A (1985) Texture analysis anno 1983. Comput Vision Graph Image Process 29:336-357
Weibel ER (1979) Stereological methods. I. Practical methods for biological morphometry. Academic Press, London
Weinberg DS (1994) Quantitative immunocytochemistry in pathology. In: Marchevsky AM, Bartels PH (eds) Image analysis. A primer for pathologists. Raven Press, New York, pp 235-260
Zamperoni P (1989) Methoden der digitalen Bildsignalverarbeitung. Vieweg, Braunschweig