Fig. 1

Image Segmentation

Image segmentation is the first and most important step in developing 3D models from a stack of image slices, since it dictates the accuracy of the resulting 3D model. There are various techniques and tools for automatic or semiautomatic image segmentation; however, they are not as accurate as manual segmentation. The accuracy of manual segmentation comes at the cost of considerable labour and time. We resorted to manual segmentation since an accurate representation of the complex anatomical structure of the head is vital for the intended applications.

The image segmentation was performed using the tools of Adobe Photoshop [32, 33] and Matlab. First, the color cross-sectional image slices of the VHP data set (222 head slices) were conditioned in Adobe Photoshop by cropping the relevant area and adjusting the brightness level. Cropping the relevant area reduces the image size significantly, so the computation time and storage requirements were also reduced. Then, the boundaries of the tissues were traced manually using the pencil tool with the help of the Atlas of the Visible Human Male [34], which completely catalogs the VHP male anatomy. The CT and MRI images were also cross-referenced for boundaries that are not clearly visible in the color images. For example, the boundary of cortical bone is more visible in the CT images, while the boundaries of soft tissues (such as grey and white matter) and fluids (such as cerebrospinal fluid (CSF)) are more visible in the MRI images. The traced boundaries were projected onto different layers and masked with specific colors, one for each segmented anatomical structure, on a black background. The image slices were segmented into 33 different boundaries or layers. Finally, the masked layers were exported as portable network graphics (PNG) image files with indices identifying the slices. Figure 1 shows an example of a segmented slice.

In the next stage, the masked layers were imported into Matlab and converted to greyscale, with a unique ID (from 0 to 255) assigned to each anatomical structure layer. To create uniform 3D matrix image data, the masked layers were also resized to 1 pixel/mm using nearest-neighbor interpolation. Finally, the layers of each structure were combined into a 3D matrix and saved as 8-bit unsigned integer raw data. Normally, the voxel model could be created at this stage, provided that there were no overlaps or gaps between adjacent structure boundaries. However, gaps and overlaps were introduced during the manual image segmentation stage due to human error. Therefore, we adopted a different approach of generating the voxel data from the surface mesh model, which is discussed next. This approach results in a high correlation between the voxel- and mesh-based models, which is important in simulation problems that employ both formats. Moreover, the surface mesh model is important for designing 3D components that are in contact with the head model, since Boolean operations are possible. For example, an accurate 3D model of skin-surface electrodes attached to the surface of the head model can easily be built using the surface mesh model, but this is very difficult to do with the voxel model.
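The conversion just described can be sketched as follows. The original pipeline used Matlab; this is an illustrative NumPy equivalent, and the colour-to-ID map and slice sizes are placeholders, since the actual colours of the 33 structures are not given in the text:

```python
import numpy as np

# Hypothetical colour-to-ID map; the real 33 structure colours are not
# given in the text, so these entries are placeholders.
COLOR_TO_ID = {
    (255, 0, 0): 1,   # e.g. grey matter
    (0, 255, 0): 2,   # e.g. white matter
    (0, 0, 255): 3,   # e.g. CSF
}

def mask_to_ids(rgb):
    """Convert an RGB-masked slice (H, W, 3) to a uint8 ID image (0 = background)."""
    ids = np.zeros(rgb.shape[:2], dtype=np.uint8)
    for color, tissue_id in COLOR_TO_ID.items():
        ids[np.all(rgb == color, axis=-1)] = tissue_id
    return ids

def resize_nearest(ids, out_shape):
    """Nearest-neighbour resize; preserves discrete tissue IDs (no blending)."""
    rows = np.arange(out_shape[0]) * ids.shape[0] // out_shape[0]
    cols = np.arange(out_shape[1]) * ids.shape[1] // out_shape[1]
    return ids[rows][:, cols]

# Synthetic stand-ins for the masked slices: a red square on a black background.
rgb_slices = [np.zeros((300, 300, 3), dtype=np.uint8) for _ in range(4)]
for s in rgb_slices:
    s[50:150, 50:150] = (255, 0, 0)

# Stack the resized ID slices into a 3D matrix and save as raw uint8 data.
volume = np.stack([resize_nearest(mask_to_ids(s), (256, 256)) for s in rgb_slices])
volume.tofile("head_model.raw")
```

Nearest-neighbour interpolation matters here: any averaging interpolation would blend the integer tissue IDs at boundaries, creating spurious labels.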

Fig. 2

Surface Construction

The 3D matrix raw data of each anatomical structure was imported into Mango (Multi-Image Analysis GUI) software for the initial surface construction. A triangular surface mesh of each tissue was constructed by the built-in marching cubes algorithm. Gaussian image smoothing was also applied to the slices to minimise the noise introduced during image segmentation. The surface mesh constructed at this stage has a high degree of detail, represented by a large number of coordinate points and triangular faces. Even though a high level of detail might be desirable for visualization, a large surface mesh incurs high computational overhead in computational applications. In addition, the high level of detail complicates the mesh error correction procedure that is inherent in preparing surface mesh models for computation. For example, the white matter surface mesh shown in fig. 2(d) has 193,476 coordinate points and 387,119 triangular faces. With this large number of surface elements, generating 3D volumetric tetrahedral meshes becomes problematic. Consequently, the surface mesh of each tissue was processed further by systematically reducing the number of elements without significantly affecting the geometrical details.
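As an illustration of this step (the text used Mango's built-in implementation; here scikit-image's `marching_cubes` stands in, applied to a synthetic sphere rather than real tissue data):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import measure

# Synthetic binary volume: a sphere standing in for one tissue's 3D matrix.
z, y, x = np.mgrid[-16:16, -16:16, -16:16]
volume = (x**2 + y**2 + z**2 < 12**2).astype(np.float32)

# Gaussian smoothing suppresses segmentation noise before surface extraction.
volume = gaussian_filter(volume, sigma=1.0)

# Marching cubes extracts a triangular surface mesh at the 0.5 iso-level.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(len(verts), "vertices,", len(faces), "triangular faces")
```

Even this small 32×32×32 volume yields thousands of faces, which illustrates why meshes extracted from full-resolution anatomical data become very large.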


Surface simplification, smoothing, and cleaning

In the next stage, the detailed surface meshes were further processed by simplification, smoothing, and cleaning in MeshLab, an open-source mesh processing tool. The simplification was performed with the quadric edge collapse decimation tool, which merges several adjacent small faces into a larger triangle, thereby reducing the number of faces. However, the decimation process introduced sharp edges and closely spaced vertices, which cause noise and poor face quality. For example, fig. 3(a) shows the detailed surface mesh of the grey matter, which has 1,320,356 triangular faces and an average triangular face quality of 0.857 (measured as the ratio between the radii of the incircle and circumcircle of each face). After two iterations of quadric edge collapse decimation, the number of faces was reduced to 330,094 and the average face quality dropped to 0.702, as shown in fig. 3(b). The face quality was then increased to 0.7813 by applying Taubin smoothing [35], a linear low-pass filter, as shown in fig. 3(c). Taubin smoothing performs better than other methods, such as Laplacian smoothing, because it can smooth meshes without collapsing faces to singular points.
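The per-face quality metric can be computed directly from the vertex coordinates. A minimal sketch, assuming the ratio is normalised so that an equilateral triangle scores exactly 1 (i.e. 2r/R, twice the inradius over the circumradius):

```python
import numpy as np

def face_quality(a, b, c):
    """Triangle quality as the inradius/circumradius ratio, scaled by 2 so an
    equilateral triangle scores 1 (an assumed normalisation, not stated in the text)."""
    a, b, c = map(np.asarray, (a, b, c))
    la = np.linalg.norm(b - c)                    # edge lengths
    lb = np.linalg.norm(a - c)
    lc = np.linalg.norm(a - b)
    s = (la + lb + lc) / 2.0                      # semi-perimeter
    area = np.sqrt(max(s * (s - la) * (s - lb) * (s - lc), 0.0))  # Heron's formula
    r_in = area / s                               # inradius
    r_circ = la * lb * lc / (4.0 * area)          # circumradius
    return 2.0 * r_in / r_circ

# An equilateral triangle has quality 1; a near-degenerate sliver scores near 0.
print(face_quality((0, 0, 0), (1, 0, 0), (0.5, np.sqrt(3) / 2, 0)))  # ≈ 1.0
print(face_quality((0, 0, 0), (1, 0, 0), (2, 0.01, 0)))              # ≈ 0
```

Averaging this value over all faces gives a mesh-level quality figure comparable to the 0.857, 0.702, and 0.7813 values quoted above.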

The simplified and smoothed meshes were further cleaned and healed in MeshLab by removing non-manifold edges and vertices, self-intersecting faces, holes, and isolated and unreferenced vertices. Such cleaning ensures a watertight, manifold, and non-self-intersecting surface mesh, which is necessary for constructing volumetric meshes for finite element method (FEM) or boundary element method (BEM) computations.
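The watertight, manifold condition can be verified with a simple edge count. A sketch (not MeshLab's implementation), assuming triangle faces are given as vertex-index triples:

```python
from collections import Counter

def is_watertight_manifold(faces):
    """A closed, 2-manifold triangle mesh has every undirected edge shared by
    exactly two faces. `faces` is an iterable of vertex-index triples."""
    edge_count = Counter()
    for v0, v1, v2 in faces:
        for edge in ((v0, v1), (v1, v2), (v2, v0)):
            edge_count[tuple(sorted(edge))] += 1
    return all(n == 2 for n in edge_count.values())

# A tetrahedron is closed and manifold...
tetra = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(is_watertight_manifold(tetra))        # True
# ...while a lone triangle has boundary edges, each seen only once.
print(is_watertight_manifold([(0, 1, 2)]))  # False
```

Edges counted once indicate holes, and edges counted more than twice indicate non-manifold geometry; both break volumetric mesh generation.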

Resolving boundary overlaps and gaps

Surface mesh boundary intersections or gaps between adjacent structures arise as a result of errors in surface construction and the subsequent mesh processing. Consequently, further processing of the meshes was carried out in CST Studio (CST, Darmstadt, Germany). The surface meshes were imported in STL (StereoLithography) format and automatically converted to ACIS models, which use a higher accuracy (1e-6) for coordinate point representation. The overlapping boundaries were resolved by simple Boolean operations. The gaps were resolved by subtracting all solid parts from a solid head volume (the solid skin). The resulting volume was separated into small elements that represent the gaps between boundaries. Next, the small elements were merged with adjacent structures, which eliminated the gaps. Finally, each part and the assembled head model were exported in the SAT and STL file formats. The SAT file format is the native data format of the ACIS kernel of CST Studio, so it ensures that the resolved gaps and overlaps are preserved.

Voxel data generation

It is important to generate a voxel model that matches the surface mesh model, since this gives the flexibility of using both formats of a single model. It would have been possible to generate the voxel model at the initial image segmentation stage. However, the final model is not identical to the voxel model generated from image segmentation, since the model has passed through a long process of surface construction and mesh processing. Thus, the voxel model was built from the final CAD model (SAT format).

The voxelization of the CAD model was implemented with a simple but accurate technique. First, slices of the color-rendered CAD model in CST Studio were captured along the stack axis at 0.5 mm intervals and exported as PNG image files. Using custom Matlab code, the color images were converted to greyscale, with each part assigned a unique grey level between 0 and 255. The greyscale slice images were resized to a pixel size of 0.5 × 0.5 mm², combined into a 3D matrix of size 396 × 514 × 442, and saved as an 8-bit unsigned integer binary file.
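Because a raw binary file carries no header, reading the saved matrix back requires the shape and data type to be known in advance. A minimal round-trip sketch (NumPy here instead of the Matlab used in the text; the file name, the small shape, and the slices-first axis order are placeholders):

```python
import numpy as np

# Hypothetical small volume standing in for the 396 x 514 x 442 head matrix.
shape = (8, 16, 12)                      # (slices, rows, cols) -- an assumption
volume = np.random.randint(0, 256, size=shape, dtype=np.uint8)
volume.tofile("head.raw")                # headerless 8-bit unsigned integer file

# Reading back: the flat byte stream must be reshaped with the known dimensions.
restored = np.fromfile("head.raw", dtype=np.uint8).reshape(shape)
assert np.array_equal(volume, restored)
```

Any consumer of the raw data must therefore agree on the matrix dimensions, the voxel size (0.5 mm), and the grey-level-to-structure mapping.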

The custom Matlab scripts used to convert the captured PNG images to voxel 3D matrix can be accessed from our Figshare data collection.