Segmentation

Each dataset of 3D greyscale images was then segmented in a binary fashion to separate the pre-existing pores and the evolving deformation-induced cracks from the rest of the rock matrix. Herein we use the term ‘porosity’ to include all the segmented void space in the sample, whether pre-existing (and therefore associated with the igneous history of the rock) or deformation-induced. We use the term ‘void’ to describe an individual segmented object.
Although easily distinguishable by the human eye, narrow planar features such as fractures are difficult to extract automatically from large 3D image datasets. This is because fractures of different apertures span a range of greyscale values, and these grey-values become increasingly similar to those of the surrounding rock matrix as the aperture decreases. The main cause is the partial volume effect, whereby voxels containing both air and rock matrix appear brighter than voxels containing air alone; fracture surface roughness and narrow apertures exacerbate this effect.
To meet these challenges while retaining an automated approach, we used a multiscale Hessian fracture filter (MSHFF) to segment the micro-cracks from the image data. This technique, developed and described in detail by Voorn et al. (2013), uses the Hessian matrix (the matrix of second-order partial derivatives of the input image data) to represent the local curvature of the intensity variation around each voxel in a 3D volume (e.g., Descoteaux et al., 2005). Attributes of this local curvature can be used to distinguish planar features in the dataset (Text S1a in our Supporting Information, SI). The analysis is conducted over a range of scales spanning the observed crack apertures, and the single-scale responses are combined to produce the final multiscale output: narrow fractures of varying apertures detected within the 3D image data. The analysis was carried out using the macros for FIJI (Schindelin et al., 2012) published by Voorn et al. (2013), utilizing the FeatureJ plugin (Meijering, 2010) to calculate the Hessian matrices, with input parameters given in Table 2.
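To illustrate the principle, the following Python sketch (written with NumPy and scikit-image, not the FIJI/FeatureJ macros actually used in this study) computes a single-scale Hessian response that enhances dark, sheet-like features and combines several scales by a per-voxel maximum. The function names, scale values, and the simple sheet-likeness measure are assumptions for illustration only; the exact filter response and input parameters are those defined in Voorn et al. (2013) and Table 2.

    # Illustrative sketch of multiscale Hessian-based planar-feature enhancement.
    # Assumes air-filled cracks appear darker than the surrounding rock matrix.
    import numpy as np
    from skimage.feature import hessian_matrix, hessian_matrix_eigvals

    def planar_response(volume, sigma):
        # Hessian at Gaussian smoothing scale `sigma` (in voxels).
        H = hessian_matrix(volume.astype(np.float32), sigma=sigma)
        # Eigenvalues sorted in decreasing order, shape (3, nz, ny, nx).
        eigvals = hessian_matrix_eigvals(H)
        lam1 = eigvals[0]
        # A dark planar feature has one large positive eigenvalue (across the
        # plane) and two near-zero eigenvalues (within the plane); keep only
        # positive curvature and apply sigma**2 scale normalisation.
        return np.where(lam1 > 0, sigma**2 * lam1, 0.0)

    def multiscale_planar_filter(volume, sigmas=(1.0, 2.0, 4.0)):
        # Combine single-scale responses by taking the per-voxel maximum, so
        # that fractures of different apertures are enhanced in one output.
        response = np.zeros(volume.shape, dtype=np.float32)
        for sigma in sigmas:
            response = np.maximum(response, planar_response(volume, sigma))
        return response

In this simplified sketch, the multiscale response volume would then be thresholded to yield the binary crack segmentation; the published MSHFF macros perform the equivalent steps with the parameters listed in Table 2.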
Table 2: Input parameters for the segmentation code. Definitions are given in Voorn et al. (2013).