A free, user-friendly toolbox for 3D image processing of X-ray CT imagery of porous rock and sediment is a valuable contribution to the community. The work put into this Cobweb project should definitely be rewarded with a stand-alone paper in GMD that can be cited whenever the toolbox is used in upcoming projects. This manuscript has already undergone one round of referee comments and revisions. The revised manuscript can still be improved in several places, but in general I agree with the present structure of the paper. The Cobweb toolbox itself can also be improved in many of its routines, which I list below, but it would be too harsh to reject the paper for that. I would still suggest one more round of revisions to at least discuss these shortcomings more explicitly and to remove the remaining grammar and spelling mistakes.
1. I would tone down the novelty claims for your dual filtering and dual segmentation approach for the gas hydrate data set (e.g. in the abstract, P17L3, etc.). To me it rather sounds like a drawback to first filter with an anisotropic diffusion (AD) filter, obtain unsatisfactory noise removal, and then require a second non-local means (NLM) filter (or, vice versa, that NLM, which should also be edge-preserving when its parameters are set properly, apparently cannot do a good job without preconditioning by AD). Dual filtering does not sound like something to aspire to, but rather like extra time spent adjusting a larger set of parameters for a satisfactory result. The same holds for dual segmentation: first you have to run unsupervised K-means with many classes, only to have an expert regroup them by indexing into meaningful material classes through user interaction later on.
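For readers unfamiliar with the dual-segmentation idea criticized above, a minimal sketch of over-clustering with K-means followed by regrouping the clusters into material classes may help. All names, parameters and the automatic merge rule below are hypothetical illustrations (in Cobweb the regrouping is done interactively by an expert), not the paper's actual implementation:

```python
import numpy as np

def kmeans_1d(values, k, iters=50, seed=0):
    """Plain Lloyd's algorithm on 1D gray values; returns labels and centers."""
    rng = np.random.default_rng(seed)
    centers = np.sort(rng.choice(values, size=k, replace=False))
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for i in range(k):
            if np.any(labels == i):
                centers[i] = values[labels == i].mean()
    return labels, centers

# Synthetic gray values for three materials at roughly 40, 110 and 200.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(40, 8, 3000),
                      rng.normal(110, 8, 3000),
                      rng.normal(200, 8, 3000)])

# Step 1: unsupervised over-clustering with more classes than materials.
labels6, centers6 = kmeans_1d(img, k=6)

# Step 2: regroup the six clusters into three material classes.  Here the
# mapping simply pairs consecutive sorted centers; in Cobweb this indexing
# would be chosen interactively by the user.
lut = np.empty(6, dtype=int)
for material, group in enumerate(np.array_split(np.argsort(centers6), 3)):
    lut[group] = material
material_labels = lut[labels6]
```

The sketch makes the reviewer's point tangible: the "dual" step is essentially a user-supplied lookup table on top of an ordinary K-means result.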
2. I’m not happy with this paragraph on FCM (P6L15-24). First of all, it is hard to follow for a reader who is not already familiar with K-means and fuzzy c-means. Secondly, you are basically describing that FCM is incapable of preventing partial-volume voxels at material boundaries from being misclassified into the intermediate class (and thus producing a too low volume fraction of the darkest class, i.e. porosity), since FCM operates only in a feature space, i.e. the histogram, and cannot account for spatial features, i.e. partial-volume voxels sitting on an edge vs. real intermediate material patches. So all you can do is tweak the FCM settings such that the partial-volume problems disappear, but then so do the real intermediate material voxels elsewhere in the image. This drawback seems to be carefully neglected in this paragraph. Maybe remove this paragraph and replace it with a more general statement on why FCM can be superior to KM.
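To make the FCM behavior described above concrete, here is a minimal fuzzy c-means sketch on 1D gray values (numpy only; data and parameters are synthetic illustrations, not taken from the paper). Unlike K-means' hard labels, each voxel receives a membership degree in every class, and a voxel halfway between two class centers gets ~0.5/0.5 membership regardless of whether it is a partial-volume edge voxel or genuine intermediate material, because the algorithm sees only the histogram:

```python
import numpy as np

def fuzzy_cmeans_1d(x, c=2, m=2.0, iters=100):
    """Fuzzy c-means on 1D gray values: soft memberships instead of hard
    labels.  Operates purely in feature (histogram) space."""
    centers = np.percentile(x, np.linspace(25, 75, c))   # deterministic init
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        u = d ** (-2.0 / (m - 1.0))                      # membership update
        u /= u.sum(axis=1, keepdims=True)
        centers = (u ** m).T @ x / (u ** m).sum(axis=0)  # center update
    order = np.argsort(centers)
    return u[:, order], centers[order]

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(50, 10, 2000), rng.normal(150, 10, 2000)])
u, centers = fuzzy_cmeans_1d(x)

# Membership of a gray value exactly between the classes is ~0.5/0.5,
# whether it comes from an edge or from a real intermediate material.
d_mid = np.abs(100.0 - centers) + 1e-9
u_mid = d_mid ** -2.0
u_mid /= u_mid.sum()
```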
3. It is next to impossible to digest this paragraph on SVM without prior knowledge (P7L19-27). Take the first sentence, for instance. How can a training dataset be non-linear; what does that actually mean? The training dataset would be a set of gray values that you obtained by clicking into the image and assigning those locations to a certain material class. Those gray values make up a 1D frequency distribution for each material class, and these distributions can overlap substantially. Where do the second or even third dimensions come from that help to remove this overlap? See, I’m not even sure whether this 2D or 3D coordinate system and the associated hyperplanes live in a feature space or in the spatial domain of the XCT image. Probably I’m on the wrong track here, but so will be most of the readers.
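One plausible reading of the "extra dimensions" (an assumption on my part, not a statement about how Cobweb actually builds its feature vectors) is that each training pixel is described not only by its gray value but also by local texture features such as a neighborhood gradient magnitude; classes whose 1D gray-value histograms overlap can then become separable in the higher-dimensional feature space, which is where the SVM hyperplanes would live. A toy demonstration with a linear nearest-centroid rule standing in for a linear SVM:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Two pixel classes with IDENTICAL gray-value statistics ...
gray_a = rng.normal(100, 10, n)      # e.g. a genuine intermediate material
gray_b = rng.normal(100, 10, n)      # e.g. partial-volume edge pixels
# ... but different local gradient magnitude (a second feature dimension).
grad_a = rng.normal(5, 1, n)
grad_b = rng.normal(20, 1, n)

X1 = np.concatenate([gray_a, gray_b])[:, None]             # 1D feature space
X2 = np.stack([np.concatenate([gray_a, gray_b]),
               np.concatenate([grad_a, grad_b])], axis=1)  # 2D feature space
y = np.concatenate([np.zeros(n), np.ones(n)])

def nearest_centroid_acc(X, y):
    """Accuracy of a linear nearest-centroid rule (stand-in for linear SVM)."""
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    pred = np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)
    return (pred == y).mean()

acc_1d = nearest_centroid_acc(X1, y)   # ~0.5: histograms fully overlap
acc_2d = nearest_centroid_acc(X2, y)   # ~1.0: gradient feature separates them
```

If this is indeed what the authors mean, a figure of the 2D feature space with the separating hyperplane would resolve the paragraph's ambiguity.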
4. The training is pretty restrictive (P12L1-21). If I understand correctly, you can only click once for each material, and the class statistics are constructed from the 6x6 pixels around the coordinate where you clicked. My experience with Ilastik, https://www.ilastik.org/, another free machine-learning based segmentation toolkit, is that you can draw multiple lines of any thickness for each material, and all covered pixels/voxels contribute to the class statistics. In addition, a whole set of samples can be segmented at once by drawing training data in only a small number of samples (even in live mode, i.e. the segmentation results are updated on the fly with every additional line). It may be unfair to criticize this somewhat inflexible training mode in Cobweb too harshly; please take this as an encouragement for further development, and add Ilastik to the software survey at the beginning.
5. I do not understand the paragraph on 2D slice-by-slice segmentation (P17L14-26). Do different area fractions of each material (i.e. spatial variability of the rock) or vertical intensity variations (due to hardware shortcomings) mess up the slice-by-slice approach? If it is the former, you need to explain why different area fractions in each slice (e.g. a change in porosity) affect the segmentation results if the average gray values of pores, rock and matrix do not change. Also, calling the Z coordinate direction in XCT data “temporal information” is very strange.
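To make the question above concrete: one mechanism by which area fractions alone can shift a slice-wise segmentation is that the K-means decision threshold depends on the cluster populations whenever the gray-value distributions overlap, even when class means and spreads are identical in every slice. A small numpy sketch with synthetic data (not from the paper) shows the effect:

```python
import numpy as np

def kmeans_threshold(values, iters=100):
    """1D two-class Lloyd's algorithm; returns the decision threshold
    (midpoint of the two converged cluster centers)."""
    lo, hi = np.percentile(values, [10, 90])
    for _ in range(iters):
        t = 0.5 * (lo + hi)
        lo = values[values < t].mean()
        hi = values[values >= t].mean()
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
pore, rock = (80.0, 20.0), (120.0, 20.0)   # identical (mean, sigma) per slice

# Slice A: 20% pore area fraction;  slice B: 50% pore area fraction.
slice_a = np.concatenate([rng.normal(*pore, 40_000), rng.normal(*rock, 160_000)])
slice_b = np.concatenate([rng.normal(*pore, 100_000), rng.normal(*rock, 100_000)])

t_a = kmeans_threshold(slice_a)
t_b = kmeans_threshold(slice_b)
# t_a and t_b differ by several gray levels, although only the area
# fractions changed -- hence slice-wise porosity estimates drift.
```

If this population effect (rather than vertical intensity drift) is what the authors mean, the paragraph should say so explicitly.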
6. Chapter 5.3 on Multi-Phase Segmentation: Is this section about all three datasets? If so, what is actually the third material besides pores and rock in the Berea sandstone (and the Grosmont carbonate rock)? What does it mean that the intermediate class has a Poisson distribution (P18L9)? It is a bit discouraging to read that the supervised methods did not result in better segmentations than unsupervised K-means (which has been around for many decades and, in one of the three showcase datasets, needs to be cleaned up with your supervised dual segmentation strategy). So my take-home message is that you ‘sell’ Cobweb as the first ML-only segmentation toolbox for multi-phase segmentation (P1L19-20 in the abstract), only to use K-means throughout the paper, which is essentially available in all other commercial and non-commercial toolboxes. I think this is shooting yourself in the foot, especially since one reason against using the other ML methods in Cobweb is that they are apparently too slow at the moment. Don’t you think it would be better to show the LSSVM or ensemble classifier results instead?
7. More info on the watershed method is required (P18L31-P19L9). The x-axis in Fig. 6c (pore radius) suggests a maximum inscribed sphere method to me, but the traditional watershed transform on binary data creates irregularly shaped fragments. How can an irregularly shaped object have a single pore radius?
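For reference, two common conventions for assigning a single radius to an irregular fragment are the maximum inscribed sphere radius (the peak of the Euclidean distance transform inside the fragment) and the equivalent-volume sphere radius. The authors should state which one they use. A scipy sketch of both conventions (an illustration only, not Cobweb's actual implementation):

```python
import numpy as np
from scipy import ndimage

def pore_radii(binary):
    """Per-fragment inscribed radius (max of the EDT) and equivalent radius."""
    labels, n = ndimage.label(binary)
    edt = ndimage.distance_transform_edt(binary)
    out = {}
    for i in range(1, n + 1):
        mask = labels == i
        r_inscribed = edt[mask].max()
        r_equiv = np.sqrt(mask.sum() / np.pi)   # 2D; use a cube root in 3D
        out[i] = (r_inscribed, r_equiv)
    return out

# Sanity check: a single disk of radius 5 px -> both radii should be ~5 px.
yy, xx = np.mgrid[-16:16, -16:16]
disk = (xx**2 + yy**2) <= 25
radii = pore_radii(disk)
```

For an irregular watershed fragment the two radii diverge, which is exactly why the convention behind Fig. 6c needs to be spelled out.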
8. You simply have not reached an REV for PSD histograms within a single slice (P19L12). My educated guess is that PSD requires an even larger REV than porosity, and yet (against your own advice) you show the PSD of each individual slice in an overloaded figure instead of a single PSD for the entire 3D REV. What do you learn from such a figure? This needs to be changed.
P2L31: tackle with this -> tackle this
P3L6: Why is fspecial struck through?
P3L9: This sentence sounds incomplete. Remove ‘Despite’?
P5L4: masking in -> filtering is. ‘Masking’ is the wrong word here: you convolve the image with a filter kernel (or simply apply a filter). Also, I could not follow why two Laplace filters are required and how exactly they are implemented. Is it a Laplacian-of-Gaussian with two different sigmas, first a large Gaussian sigma for thick edges followed by a smaller Gaussian sigma for thin edges? Is the second applied to the result of the first (which would make no sense, since all small features are gone by then) or to the original image, with the two outcomes somehow combined?
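One plausible interpretation of the two Laplace filters asked about above (purely an assumption for illustration, not a claim about the paper's implementation): Laplacian-of-Gaussian responses at two sigmas, each computed on the original image and then combined, e.g. by keeping the per-pixel response of larger magnitude. In scipy:

```python
import numpy as np
from scipy import ndimage

# Synthetic image with a vertical step edge at column 32.
img = np.zeros((64, 64))
img[:, 32:] = 100.0

# LoG at two scales, BOTH applied to the original image (not chained,
# which would make little sense, as noted above).
log_thick = ndimage.gaussian_laplace(img, sigma=3.0)  # responds to thick edges
log_thin = ndimage.gaussian_laplace(img, sigma=1.0)   # responds to thin edges

# One way to combine: keep the per-pixel response of larger magnitude.
combined = np.where(np.abs(log_thick) >= np.abs(log_thin), log_thick, log_thin)

# The edge shows up as extrema flanking a zero crossing near column 32.
edge_col = np.argmax(np.abs(combined[32]))
```

Whatever the authors actually do, the revised text should answer exactly these two questions: which image each filter operates on, and how the two outputs are combined.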
P5L9: Remove ‘Whereas,’
P5L20: comprises of pixels -> comprises pixels
P6L30: models -> model’s
P7L15: unknow -> unknown
P7L19: Remove ‘Now,’
P9L3: atleast -> at least
P9L22: Mix of present tense and past tense.
P10L21: where -> were
P11L13: The meaning of ‘densely nested function’ is unclear.
P11L23: The term ‘back-end’ might be uncommon outside the computer science world.
P11L25-26: Combine the two sentences into one
P11L26: addition -> additional
P13L10: The beginning of chapter 4 is rather abrupt. I would write a short summary like: “This concludes the description of the graphical user interface; for more information, the user manual, available as supporting information, can be consulted. In the following, the Cobweb toolbox is demonstrated by means of three showcase examples, which are briefly introduced in terms of the underlying research question, imaging settings and challenges for image processing.” - or something similar.
P13L22: Write out ED at first occurrence of the abbreviation
P15L8: Remove ‘Now,’
P15L28: 10243 -> 1024^3
P18L9: Do you really mean ‘low sample size’, or rather ‘low volume fraction’? This is not necessarily the case. A counterexample would be a mud-rock with a large volume fraction of matrix (the intermediate class), low porosity and a few inherent dense rocks.
P19L4: An explanation is required of why this is so much lower than the porosity values given previously in the material description (mostly due to sub-resolution pores, I guess).
P19L29: These discrepancies in modelling and transport simulation have not been addressed in the main paper. It is therefore not appropriate to mention them all of a sudden in the conclusions.
Fig3: What do the different blue colors for different boxes stand for?
Fig5: The color bar is misleading: there are only three materials, not five. You might need to create your own color legend if the built-in MATLAB functionality is not flexible enough to do that. I’m also not sure what “rescaled” (panel b) means in this context; this should be explained in the caption.