1. Introduction

Convolutional neural networks (CNNs) have revolutionized image analysis and are considered the de facto standard for image classification tasks.1–5 Their main strength is the ability to learn complex spatial patterns directly from labeled examples without the need for prior knowledge or manual feature extraction. The practical application of CNNs, however, depends on several factors, the size of the input images being one of the most limiting. As an example, ImageNet, a widespread academic resource for annotated image data,6 contains images that are less than a megapixel in size on average and are cropped and resized to even smaller dimensions before training CNNs. In contrast, medical images, especially whole-slide imaging (WSI) of tissue sections, can be many orders of magnitude larger, ranging from multimegapixel image tiles or regions of interest (ROIs) to gigapixels for WSI,7 making their direct analysis using CNNs unfeasible due to computational constraints. Even though conventional image rescaling is frequently used in other imaging domains,1 it is rarely applied to histologic images because small details of individual cells might be lost, eliminating the advantages of microscope magnification. Therefore, most biomedical applications using CNNs on histologic images derived from hematoxylin and eosin (H&E)8–10 and fluorescent multiplex immunohistochemistry (fm-IHC)11–17 staining have relied on tiling or patching18–20 as the preferred image preprocessing step. However, extensive image tiling leads to new challenges, including loss of large-scale image context, weakly supervised labels, and multi-instance learning settings. Researchers are actively proposing new solutions to these problems,21–23 but they are not yet general enough to be applied systematically. An alternative and more classical approach to using histologic images is to focus on individual cells instead.15,16,24–27 Cell segmentation algorithms take a histologic image as input and generate a list of identified cells with their coordinates in the image and other features (e.g., cell size, shape, texture, antibody marker expression levels, and others). Many cell segmentation algorithms are already implemented in open-source and commercial software [e.g., CellProfiler,28 ImageJ,29 inForm (Akoya Biosciences, Inc.), or HALO (Indica Labs)], and new ones are published every year.30,31 Cell segmentation allows one to perform more traditional hypothesis-driven spatial analysis, including cell phenotype quantification in different tissue regions, nearest-neighbor analysis, identification of touching cells and cell clusters, and phenotype colocalization.32–35 In fact, several biomarkers have been identified this way, indicating the importance of cell segmentation in tissue imaging.36–39 However, in contrast to feeding histologic images to CNNs directly, this kind of analysis requires a substantial amount of a priori domain knowledge to extract meaningful spatial features. Most importantly, some biological questions remain elusive to such hypothesis-driven approaches, as is the case for colorectal cancer relapse and survival prediction.40–44 In such cases, machine learning methods such as random forests, support vector machines, or neural networks could be helpful, but these algorithms cannot directly consume the output of cell segmentation algorithms (one table per sample).
To overcome this limitation, graph neural networks (GNNs),45,46 which first convert the list of identified cells into a so-called cell graph that represents cell–cell interactions in the data,47 have been proposed. Recent research explored the design space of GNNs for graph classification, highlighting the complexity of choosing an appropriate GNN for any given task.48 Additionally, several hyperparameters for the construction of cell graphs (usually the parameter k for the k-nearest-neighbor connections of cells and a maximum edge length cutoff46) require careful tuning. The identification of best practices for constructing cell graphs and the selection of a suitable GNN architecture therefore remain active fields of research.48–50 Compared with these graph-based methods, conventional pixel-based CNNs are a mature technology that is well established in biomedical tissue analysis.51,52 In this paper, we introduce Cell2Grid, an algorithm for efficient representation of histologic images that transforms cell segmentation data into low-resolution images suitable for training conventional CNNs. To the best of our knowledge, this is a novel approach that has not been explored in depth. For evaluation, we present a case study on colorectal cancer relapse prediction in which we show that Cell2Grid images maintain cell spatial information while providing better performance than conventional image rescaling when training CNNs.

2. Methods

Cell2Grid transforms the table-style output of cell segmentation methods into spatial cell-based data representations (referred to as "Cell2Grid images") that are orders of magnitude smaller than the original input images. The entire image-to-image pipeline consists of three steps (see Fig. 1): (1) cell segmentation and cell feature extraction, (2) assignment of cells to a target grid, and (3) image creation. Step 2 is the main contribution of this work and depends on two parameters: the target grid spacing $g$ and the maximum local conflict resolution window size $w$, which controls how cell assignment conflicts are resolved. Even though our method is more closely related to image rescaling than to image compression, we use the term compression ratio for the ratio between the high-resolution input image size and the Cell2Grid output image size that results from the change in pixel resolution.

2.1. Step 1: Cell Segmentation and Feature Extraction

Individual biological cells are identified in the input image using a cell segmentation method suited for the tissue type, see, e.g., Refs. 24, 53, and 54. For each identified cell, the location of its nucleus in the image is stored together with extracted cell features, e.g., the average marker intensity for each fluorescent color channel, the cell's size and shape, etc. The final output of the cell segmentation step is a list of cells with their coordinates and features, as shown in Table 1.

Table 1. Example output of the cell identification (Step 1).
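To make the structure of this Step-1 output concrete, the following minimal Python sketch assembles such a cell table with pandas. The column names and numeric values are illustrative assumptions only, not the exact output format of any particular segmentation tool.

import pandas as pd

# Hypothetical Step-1 output: one row per identified cell, with nucleus
# coordinates and per-cell features such as mean marker intensities.
# All column names and values below are illustrative only.
cells = pd.DataFrame(
    {
        "cell_id": [0, 1, 2],
        "x_um": [12.4, 13.1, 88.7],           # nucleus x coordinate (micrometers)
        "y_um": [40.2, 41.0, 15.3],           # nucleus y coordinate (micrometers)
        "mean_CD3": [0.71, 0.02, 0.45],       # mean marker intensity per cell
        "mean_CD8": [0.05, 0.01, 0.62],
        "cell_area_um2": [55.0, 48.3, 61.2],  # optional shape feature
    }
)
print(cells.head())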
Cell segmentation in an fm-IHC laboratory setting is normally performed using established software, e.g., CellProfiler,28 ImageJ,29 inForm (Akoya Biosciences, Inc.), or HALO (Indica Labs), with supervision from an expert in the field, as the process depends on multiple parameters, such as tissue type and tissue stains, and could require manual annotation as training data. This was the setting and scope of our study, but it is worth mentioning that recently published state-of-the-art methods for cell segmentation30,31 might soon become widespread in such a setting, providing even better results (see Sec. 4.1).

2.2. Step 2: Assignment of Cells to Target Grid

After choosing a target grid spacing $g$, all identified cells are assigned to the square target grid by binning their original coordinates $(x, y)$ to the nodes of the grid using $(i, j) = (\mathrm{round}(x/g), \mathrm{round}(y/g))$, where $\mathrm{round}(\cdot)$ denotes conventional rounding to the nearest integer and $(i, j)$ corresponds to the coordinates of the target grid nodes (GNs), i.e., indices of pixel locations in the final output image. While a single default value of $g$ works well for most biological tissue types, we provide guidelines for how to systematically choose the target grid spacing in Supplementary Material S.3. Using this method, any two cells within a distance of up to $\sqrt{2}\,g$ may be assigned to the same GN. To achieve a one-to-one relation of cells to output pixels, we require that each cell be assigned uniquely to a GN and that any GN can hold at most one biological cell. This one-to-one relation is the essential property of the Cell2Grid data representation. Assignment conflicts are resolved successively by applying Munkres' algorithm55 to all cells within the local $3 \times 3$ subgrid around a GN with conflicts. If the number of cells within this window is higher than the number of GNs in the subgrid, the next larger ($5 \times 5$) subgrid is considered. This process is continued until the maximum window size $w \times w$ is reached. If the number of cells still exceeds the number of available GNs, $w^2$, then cells are deleted from the data. This process is explained in more detail in Supplementary Material S.1. After conflict resolution, every cell has either been uniquely assigned to a GN or was deleted due to unresolvable conflicts. Similarly, every target GN holds either exactly one cell or none.
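A compact Python sketch of this assignment step is shown below. It follows the notation above and resolves conflicts with the Munkres (Hungarian) algorithm via scipy.optimize.linear_sum_assignment; for simplicity it uses a single fixed window instead of the growing window of the full procedure, so it is an illustrative approximation rather than the reference implementation.

import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian/Munkres solver

def assign_cells_to_grid(xy_um, g, window=3):
    """Assign cells to grid nodes (Step 2), returning {(i, j): cell_index}.

    Coordinates (micrometers) are binned to a square grid of spacing g by
    rounding. Cells that round to the same grid node (GN) are in conflict and
    are re-matched to free nodes in a window x window neighborhood with the
    Munkres algorithm, minimizing total displacement. Simplified sketch: a
    single fixed window is used instead of the growing window of the full
    method, and cells that cannot be placed are dropped.
    """
    xy_um = np.asarray(xy_um, dtype=float)
    nodes = [tuple(ij) for ij in np.rint(xy_um / g).astype(int)]
    groups = {}
    for idx, ij in enumerate(nodes):
        groups.setdefault(ij, []).append(idx)

    # Conflict-free cells are assigned directly.
    assignment = {ij: m[0] for ij, m in groups.items() if len(m) == 1}
    half = window // 2
    for ij, members in groups.items():
        if len(members) == 1:
            continue
        # Free candidate nodes in the local neighborhood of the conflict.
        cand = [(ij[0] + di, ij[1] + dj)
                for di in range(-half, half + 1)
                for dj in range(-half, half + 1)
                if (ij[0] + di, ij[1] + dj) not in assignment]
        if not cand:
            continue  # unresolvable here: these cells are lost
        # Cost matrix: distance of every conflicting cell to every free node.
        cost = np.linalg.norm(xy_um[members][:, None, :]
                              - g * np.asarray(cand, dtype=float)[None, :, :],
                              axis=2)
        rows, cols = linear_sum_assignment(cost)  # optimal one-to-one matching
        for r, c in zip(rows, cols):              # unmatched cells are deleted
            assignment[cand[c]] = members[r]
    return assignment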
2.2.1. Grid Spacing and Assignment Conflicts – Theoretical Analysis

In this section, we present analytical expressions for the expected number of GNs with assignment conflicts and the total expected cell loss after local conflict resolution. These expressions assume a random uniform distribution of cell locations over the entire image and depend on the overall cell density $\rho$, the target grid spacing $g$, and the local conflict resolution window size $w$. Using results from the balls-into-bins model that describes the occupancy problem,56 the random allocation of $n$ cells into $N$ nodes is described by the Poisson distribution
$$P(k; \lambda) = \frac{\lambda^k e^{-\lambda}}{k!},$$
which is the probability mass function for a single bin (GN) containing $k$ balls (cells), with $\lambda = n/N$ and $N$ being the number of GNs of the target grid. Therefore, $P(k \geq 2; \lambda)$ estimates the fraction of GNs that hold two or more cells (i.e., assignment conflicts) and depends only on $\lambda$. For our purpose, $\lambda$ can be defined using the cell density $\rho$ of $n$ cells in an image area $A$ and the target grid spacing $g$ as $\lambda = \rho g^2$.

Therefore, the estimated fraction of GNs with assigned cells is written as
$$f_{\geq 1}(\lambda) = 1 - P(0; \lambda) = 1 - e^{-\lambda}. \tag{1}$$
Using this result, we express the fraction of GNs that hold two or more cells (i.e., nodes with assignment conflicts) as follows:
$$f_{\geq 2}(\lambda) = 1 - e^{-\lambda} - \lambda e^{-\lambda}. \tag{2}$$
The fraction of cells in conflict follows from the number of cells that are not allocated to one-cell GNs,
$$c_{\mathrm{conf}}(\lambda) = 1 - \frac{N\, P(1; \lambda)}{n} = 1 - e^{-\lambda}. \tag{3}$$
Next, we estimate the number of unresolvable conflicts when a local conflict resolution window of size $w \times w$ is used ($w$ being an odd integer $\geq 3$). Using a modified target grid spacing $w g$, i.e., $\lambda_w = \rho (w g)^2 = w^2 \lambda$, we can estimate the number of cells in a $w \times w$ subgrid around a GN with conflicts. Subgrids (or "neighborhoods") that contain more than $w^2$ cells constitute unresolvable conflicts. Because taking the local neighborhood into account is only relevant when the central node actually contains a conflict, we multiply with $f_{\geq 2}(\lambda)$. By neglecting conditional probabilities for when there are already cells at the central node, we arrive at
$$f_{\mathrm{unres}}(\lambda, w) \approx f_{\geq 2}(\lambda)\, P(k > w^2; \lambda_w). \tag{4}$$
Finally, we estimate the cell-loss fraction due to unresolvable conflicts by considering the cells that have been assigned successfully, using $n_{\mathrm{assigned}} = n_1 + n_2$. Here, $n_1$ is the number of cells located in "unsaturated" local neighborhoods (all cells are assigned if an area of size $(w g)^2$ contains at most $w^2$ cells):
$$n_1 = N \sum_{k=0}^{w^2} \frac{k}{w^2}\, P(k; \lambda_w). \tag{5}$$
Note that the summation term contains $k/w^2$, which is the average number of cells per GN when $k$ cells are in an area of $w^2$ GNs. In contrast, $n_2$ is the number of cells that are successfully assigned during conflict resolution in "saturated" areas [in areas of size $(w g)^2$ containing $k > w^2$ cells, only $w^2$ cells are assigned and $k - w^2$ cells need to be deleted]:
$$n_2 = N \sum_{k=w^2+1}^{\infty} P(k; \lambda_w). \tag{6}$$
In $n_2$, we simply count the number of GNs that are part of oversaturated areas because each of them will contain exactly one cell after conflict resolution. Put together and simplified, the estimated cell loss after conflict resolution is
$$L(\lambda, w) = 1 - \frac{n_1 + n_2}{n} = 1 - \frac{1}{\lambda} \left[ \sum_{k=0}^{w^2} \frac{k}{w^2}\, P(k; \lambda_w) + \sum_{k=w^2+1}^{\infty} P(k; \lambda_w) \right]. \tag{7}$$
In Fig. 2, we visualize $L$ for different values of $g$, $w$, and cell densities $\rho$. Note that for comparison with empirical data that use an adaptive local conflict resolution window (see Supplementary Material S.1), a suitably chosen fixed $w$ provides the best approximation.

2.3. Step 3: Image Creation

In the last step of Cell2Grid, a low-resolution output image is created. Each node of the target grid stores the extracted features of its assigned cell, just as pixels in conventional RGB images store three color values. This final data structure is a tensor of size $N_x \times N_y \times C$, where $N_x$ and $N_y$ are the numbers of GNs in the $x$ and $y$ dimensions and $C$ is the number of cell features. The GNs without cells are assigned zero as a default value for every cell feature. Because a square grid was used, this tensor can be converted directly into an image with $C$ color channels, e.g., using the tagged image file format (TIFF), which supports multipage images. This multipage image, which we term Cell2Grid image, is the final output of the image creation step and the whole Cell2Grid algorithm (see Figs. 1 and 8 for examples).
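A minimal sketch of this image-creation step is shown below: assigned cell features are written into a zero-initialized array and saved as a multipage TIFF with one page per feature channel. It assumes an assignment mapping like the one produced in the previous sketch and uses the tifffile package for writing; the function and parameter names are illustrative, not the reference implementation.

import numpy as np
import tifffile  # for writing multipage TIFF files

def create_cell2grid_image(assignment, features, grid_shape):
    """Build the Cell2Grid output tensor (Step 3).

    assignment: dict mapping grid nodes (i, j) to cell indices (Step 2 output).
    features:   array of shape (n_cells, C), one feature vector per cell,
                e.g., mean marker intensities.
    grid_shape: (n_x, n_y), numbers of grid nodes in each spatial dimension.
    Grid nodes without an assigned cell keep the default value of zero.
    """
    n_x, n_y = grid_shape
    image = np.zeros((n_x, n_y, features.shape[1]), dtype=np.float32)
    for (i, j), cell_idx in assignment.items():
        image[i, j, :] = features[cell_idx]  # one-to-one: one cell per pixel
    return image

# Write one TIFF page per feature channel (channel-first order).
# image = create_cell2grid_image(assignment, features, (128, 128))
# tifffile.imwrite("cell2grid.tif", np.moveaxis(image, -1, 0))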
3. Experiments and Results

To evaluate the utility of Cell2Grid in practice, we selected an in-house dataset of histologic images from stage II colon cancer patients. In our experiments, we aim to investigate (a) the data processing time and cell loss of Cell2Grid for different target grid spacings, (b) how conventional cell-based features are influenced by the spatial approximation of cell locations, and (c) how CNNs perform on the task of predicting the patients' 5-year tumor recurrence from histologic images using our method compared with conventional image rescaling.

3.1. Data for Experiments

Surgically removed tumor tissue samples from 54 patients were collected from the local biobank affiliated with the university hospital where the patients had been treated (Ethics Committee number: 28-342 ex 15/16). Each patient's tumor recurrence status after 5 years was labeled as either relapse (17 patients) or healthy (37 patients), serving as the variable to be predicted by classification CNNs. To obtain fm-IHC images (see Fig. 1 for an example), one formalin-fixed, paraffin-embedded tumor tissue sample per patient was stained with fluorescence-conjugated antibodies targeting CD3, CD8, CD45RO, PD-L1, FoxP3, Her2, and DAPI. Of each whole-slide scan, several ROIs were selected by a human expert and recorded using a Vectra 3 microscope (PerkinElmer, Inc.). Spectral unmixing of the raw images was performed using inForm software (Akoya Biosciences, Inc.). In total, 1353 images (on average 25 per patient) were recorded.

3.2. Grid Spacing, Assignment Conflicts, and Processing Time

We first investigated the empirical algorithm runtime and compression ratio of Cell2Grid and whether real cell loss numbers follow our analytical expressions. Cell segmentation (Step 1) was performed using inForm software (Akoya Biosciences, Inc.) with parameters set by an IHC expert. Cell nuclei were identified using the DAPI stain, and cell membranes and cytoplasm were subsequently identified using the remaining markers. On average, 4131 cells per image were identified. For each cell, we extracted features using the average marker intensity across the entire cell area of each fluorescent color channel (excluding DAPI). We then investigated how different target grid spacing values influenced Cell2Grid, using a fixed maximum local conflict resolution window size $w$ (see Figs. 3 and 4). Algorithm runtime remained flat for small target grid spacings and increased steeply for larger ones. As expected, the image compression ratio increased with the square of the target grid spacing, $(g/r)^2$, with $r$ being the original image resolution. Figure 4 shows that the percentage of GNs with conflicts and the cell loss due to unresolvable conflicts followed the expected behavior [Eqs. (2), (3), and (7)]. However, the percentage of cells in conflict before conflict resolution was lower than expected for small $g$ and higher for larger values of $g$ due to the non-uniform distribution of cells in real biological tissue. More importantly, cell loss remained negligible for small grid spacings and increased for higher values. Based on these findings, we fixed the target grid spacing $g$ for our following experiments. Using our dataset, this value provided a compression ratio of 100, an average cell loss of 2.06 cells (0.05%) per image, and a processing time of 1.56 s per image (excluding cell segmentation). To put the processing time of our method into perspective, we compared it with three conventional image rescaling methods and with the creation of a cell graph from the same cell segmentation data. Cell graphs were created using a k-nearest-neighbor algorithm [see Fig. 8(c) for an example]. Figure 5 shows the mean data processing time for each method but does not include the time for cell segmentation required for cell graph creation and Cell2Grid. The comparably high variance for Cell2Grid is a result of the variance in cell densities and the subsequent conflict resolutions per image, the latter being the main driver of computation time in Cell2Grid.
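For reference, the kind of k-nearest-neighbor cell graph used in this timing comparison can be constructed in a few lines with scikit-learn, as sketched below. The value of k, the edge cutoff, and the library choice are illustrative assumptions rather than the exact settings used here.

import numpy as np
from sklearn.neighbors import kneighbors_graph

def build_cell_graph(xy_um, k=5, max_edge_um=50.0):
    """Build a k-nearest-neighbor cell graph with a maximum edge length.

    xy_um: (n_cells, 2) array of cell coordinates in micrometers.
    Returns a sparse adjacency matrix; edges longer than max_edge_um are removed.
    k and max_edge_um are illustrative defaults, not the values used in the paper.
    """
    adj = kneighbors_graph(xy_um, n_neighbors=k, mode="distance")
    adj.data[adj.data > max_edge_um] = 0.0  # drop edges above the cutoff
    adj.eliminate_zeros()
    adj = adj.maximum(adj.T)                # make the graph undirected
    return adj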
3.3. Phenotype Counts and Spatial Features

Next, we investigated whether conventional spatial features calculated from cell segmentation output tables were altered due to the local spatial approximations performed in Cell2Grid. For every individual image of our colon cancer relapse dataset, we calculated the total number of T cells (CD3+) and regulatory T cells (CD3+FoxP3+CD8−) to investigate the potential change in a dense and a rare cell population, respectively. Additionally, we investigated a potential change in the average distance between cells and their nearest PD-L1 positive cell, as well as in the number of cells around PD-L1 positive cells within a fixed radius, using the phenoptr package.34 Marker thresholds for these phenotypes were set manually. Figure 6 compares these feature values calculated with the original cell segmentation coordinates and with the coordinates as approximated by Cell2Grid. Regression plots showed strong agreement between both methods (high Pearson correlation), which confirms that the small number of deleted cells introduces only insignificant changes in the data. Additionally, Bland–Altman plots, offering an alternative way to test the same hypothesis, show that at least 95% of the method differences fall within the predefined interval of ±1.96 times the standard deviation (SD), which is a sign of good agreement between the two measurements.57 In particular, no more than three cells were deleted in any given image. Notably, no regulatory T cell was deleted in any image, indicating that rare cell populations are less likely to be affected by cell loss simply due to their lower abundance. Furthermore, the mean distance between cells and their nearest PD-L1 positive cell was only marginally altered for 95% of images, and the count of cells within the fixed radius around PD-L1 positive cells changed only by a small number in the large majority of images. Results for additional features, including cell marker distributions within the cells and nuclei and cell shape properties, are shown in Supplementary Material S.4.

3.4. Neural Network Image Classification

Finally, we investigated how different CNN architectures performed on Cell2Grid images and whether, given a fixed image size, the Cell2Grid data representation improved performance compared with standard image rescaling. For the retrospective colon cancer dataset introduced in Sec. 3.1, the goal was to predict the patients' 5-year tumor recurrence status (relapse/healthy) from histologic images in a supervised training setting. For that purpose, we trained two different CNN architectures [VGG and a small interpretable network (SIN), explained below] on Cell2Grid images and on raw images rescaled with three conventional image rescaling methods [bilinear (bil) interpolation, bicubic (cub) interpolation, and Lanczos (lanc) sampling],58 all using the same final image resolution. To investigate the effects of additional cell shape features in Cell2Grid, we also trained a CNN on Cell2Grid images that contain, apart from the six default mean marker channels, three additional image channels describing the cell shapes (nucleus size, cell size, and cell axis ratio). We refer to this image set as c2gShape. Additionally, we trained a GNN on a cell graph created from the same cell segmentation data used for Cell2Grid. Figure 7 shows the entire experiment setup, and a side-by-side comparison of all data modalities is shown in Fig. 8. A description of the interpolation-free data augmentation schemes required for Cell2Grid images is presented in Supplementary Material S.5, and a simplified example is sketched below.
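The following snippet illustrates what interpolation-free augmentation can look like: only transformations that move whole pixels are applied, so no pixel values are mixed. It is a simplified assumption of the scheme described in Supplementary Material S.5, not a reproduction of it; the shift range in particular is arbitrary.

import numpy as np

def augment_cell2grid(image, rng):
    """Interpolation-free augmentation of a Cell2Grid image of shape (H, W, C).

    Only whole-pixel operations are used (flips, 90-degree rotations, and a
    cyclic integer shift), so the one-to-one relation between pixels and
    biological cells is preserved. Simplified sketch of the scheme described
    in Supplementary Material S.5; the shift range is an assumption.
    """
    if rng.random() < 0.5:
        image = image[:, ::-1, :]                        # horizontal flip
    if rng.random() < 0.5:
        image = image[::-1, :, :]                        # vertical flip
    image = np.rot90(image, k=int(rng.integers(0, 4)))   # rotate by k * 90 degrees
    shift = rng.integers(-4, 5, size=2)                  # integer translation,
    image = np.roll(image, tuple(shift), axis=(0, 1))    # implemented as cyclic shift
    return np.ascontiguousarray(image)

# rng = np.random.default_rng(0)
# augmented = augment_cell2grid(cell2grid_image, rng)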
3.4.1. Modified VGG-16 Architecture

As a baseline for CNN image classification, we chose the well-established VGG-16 architecture,3 which, despite its age and simplicity, has been used frequently with biological tissue images in recent research.59–64 To suit the six-color-channel image data format of our images, we modified the architecture using a different first convolutional layer. The fully connected layers at the top of the model varied in the number of neurons compared with the original VGG-16 due to the change in input image size and output dimensions. We simply refer to this modified architecture as VGG in the following. All of its parameters were initialized randomly. Results for model runs with network weights pretrained on the ImageNet dataset6,65,66 are shown in Supplementary Material S.6.

3.4.2. Custom Network Architecture SIN

As a second model, we use a custom CNN architecture in the style of VGG-16 that uses fewer layers and parameters. We refer to this architecture as SIN in the following. For the first layer of SIN, we used a convolution with a $1 \times 1$ kernel inspired by the network-in-network approach of Lin et al.67 The weights of this layer were subjected to L1 regularization. This was followed by four pairs of convolutions with 16 feature maps each and subsequent max-pooling layers. On top of the network, we placed one fully connected layer with 16 nodes, followed by a dropout layer (33%) and a final softmax layer with two outputs. This architecture uses far fewer trainable parameters than VGG and has at most 16 feature maps per layer. No pretraining was applied to this model. Table 2 summarizes the SIN architecture.

Table 2. SIN architecture.
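The following Keras sketch assembles a network along the lines of this description. The 3x3 kernel size of the convolution pairs, the input resolution, the number of filters in the first layer, and the L1 factor are assumptions, so it should be read as an approximation of the SIN architecture rather than its exact definition.

from tensorflow import keras
from tensorflow.keras import layers, regularizers

def build_sin(input_shape=(128, 128, 6), l1_factor=1e-4):
    """Approximate SIN architecture as described in the text.

    A 1x1 convolution with L1-regularized weights is followed by four pairs of
    convolutions with 16 feature maps each (3x3 kernels assumed) and max
    pooling, then a 16-unit dense layer, 33% dropout, and a 2-way softmax.
    Input resolution, kernel sizes, and the L1 factor are assumptions.
    """
    inputs = keras.Input(shape=input_shape)
    x = layers.Conv2D(16, 1, activation="relu",
                      kernel_regularizer=regularizers.l1(l1_factor))(inputs)
    for _ in range(4):
        x = layers.Conv2D(16, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(16, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(16, activation="relu")(x)
    x = layers.Dropout(0.33)(x)
    outputs = layers.Dense(2, activation="softmax")(x)
    return keras.Model(inputs, outputs, name="SIN")

# model = build_sin()
# model.summary()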
3.4.3. Graph Neural Network

As a reference, we compared all methods with a GNN trained on cell graphs built from the same cell segmentation data that were used for Cell2Grid. We created cell graphs from the cell segmentation data using k-nearest neighbors and an edge cutoff length following Jaume et al.46 [see Fig. 8(c) for an example]. Each node was described with the same cell features as in the Cell2Grid data. We used the best GNN architecture for graph classification as identified by You et al.,48 implemented using the Spektral Python package.68

3.4.4. Training Setup

From the total of 1353 recorded images, we used the same hold-out test set of 247 images (176 healthy and 71 relapse) for all models. The remaining 1106 images were randomly split into a training and a validation set (70%/30%), stratified by class label, using the same split for each model. Oversampling of the minority class was used during training to account for class imbalance. We used a class weight of 3:1 in favor of the relapse class to emphasize the clinical importance of predicting relapse cases. Early stopping was used to end training when the validation set error rate stopped decreasing. We used a batch size of 64, binary cross-entropy as the loss function, and the Adam optimizer.69 The learning rate and layer initialization of VGG were subjected to hyperparameter optimization (see Supplementary Material S.6). All models were trained on an off-the-shelf desktop workstation with a single GPU (Nvidia RTX 2070). Models were implemented in Keras65 with a TensorFlow backend.70 We trained each model ten times and evaluated the prediction performance of each run on the validation and test sets using the balanced error rate, a metric equivalent to the mean of the false-positive and false-negative rates of the model predictions.

3.4.5. Results

To assess model performance, we report the balanced classification error rate calculated from the balanced image classification accuracy of model predictions compared with the known ground-truth label (5-year tumor recurrence status). Results of all models and ten repeated runs each are summarized in Table 3 and Fig. 9. VGG models showed higher variance in error rates compared with SIN, regardless of the image processing method. As shown in Supplementary Material S.6, different training settings for VGG did not improve its performance. We found that the conventional rescaling methods bil and cub resulted in models with comparable performance on the test set, with lanc falling slightly behind. Despite similar validation set error rates, Cell2Grid models maintained lower test set error rates compared with the bil, cub, and lanc models, especially for the SIN architecture, indicating better generalization properties and less overfitting. Figure 10 shows a comparison of the training curves of the Cell2Grid-SIN and bil-SIN models and the test set receiver operating characteristic (ROC) curves for all ten repeated model runs.

Table 3. Balanced error rates for all models and ten repeated runs. See Fig. 9 for a visual representation.
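For clarity, the balanced error rates reported in Table 3 can be computed as in the following short sketch (for two classes this is equivalent to one minus the balanced accuracy).

import numpy as np

def balanced_error_rate(y_true, y_pred):
    """Mean of false-positive and false-negative rates for binary labels (0/1)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    fpr = np.mean(y_pred[y_true == 0] == 1)  # false-positive rate
    fnr = np.mean(y_pred[y_true == 1] == 0)  # false-negative rate
    return 0.5 * (fpr + fnr)

# Example: balanced_error_rate([0, 0, 1, 1], [0, 1, 1, 1]) == 0.25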
Additional cell shape features, as used in c2gShape-SIN, led to the best validation set error rates but did not improve upon Cell2Grid-SIN (which uses only fluorescent marker features) on the test set across multiple model runs. We attribute this to c2gShape containing more information but also being more prone to overfitting due to its additional input features. Nevertheless, the single best-performing model run was a c2gShape-SIN model (3.8% and 4.2% balanced error rate on the validation and test set, respectively). The GNN model used in our experiment showed higher test set error rates than all CNN models. As shown in Table 3 and Fig. 9, models trained on Cell2Grid data led to an 18.1% and 25.3% relative reduction of the test set error rate for VGG and SIN models, respectively, compared with conventional bil rescaling. As an additional experiment, we investigated whether a higher input image resolution in the bil-SIN models could improve model performance compared with Cell2Grid-SIN. We therefore rescaled our data to two higher resolutions using bil and used the same training setup as before. Figure 11 shows that the test set error rates of bil-SIN decreased with increasing image resolution but remained above those of Cell2Grid-SIN, even at the highest tested resolution. However, the larger image sizes at higher resolutions led to longer training times due to the increased number of network parameters in the top layers and slower data augmentation. Specifically, the per-epoch training time of Cell2Grid-SIN was decreased by 85% compared with bil-SIN at the highest resolution while simultaneously achieving lower test set error rates.

4. Discussion

CNN architectures have become increasingly complex, requiring long training times and powerful hardware, particularly when using large images for training.1–3,5 In some domains, rescaling images to a much lower resolution still allows for efficient image classification. In the biomedical domain, however, this is rarely applicable, as often not only the whole picture but also small details can be crucial for classification, such as in images containing cells. In this paper, we presented an alternative to rescaling that allows for efficient training of CNNs without altering the essential morphological properties of these images. Cell2Grid images are different from conventional images in a number of ways: (1) they contain no natural gradients, because every pixel in the image is an independent object (a biological cell); (2) they might contain many empty pixels (numerically zero) in areas without assigned cells; and (3) they cannot be rotated, zoomed, sheared, or transformed for data augmentation with methods that include pixel interpolation, because the value and integrity of individual pixels matter (see Supplementary Material S.5). As shown in Fig. 8(d), final Cell2Grid images are sparse, containing many empty (black) pixels. In our example, Cell2Grid images had ten times fewer pixels in each spatial dimension than the original fm-IHC images, corresponding to an image-to-image compression ratio of 100. In contrast, in conventional image downscaling methods, pixels that belong to different biological cells may be used to obtain a pixel value of the new target resolution through interpolation. This makes it difficult or sometimes even impossible to identify individual cells and their true marker expression in strongly downscaled images [Fig. 8(b)].
Due to the cell segmentation step prior to Cell2Grid, which uses the original high-resolution images, the concept of individual cells is retained in its output, and the cell features, as extracted during cell segmentation, are left unaltered. Keeping cell information intact is key in many applications, such as when studying colon cancer,71 where the presence of just a few cells with a distinct phenotype has been shown to be an important image feature. In our case study, we mainly focused on the average fluorescent marker intensities as features for each cell, as this leads to images that are most similar to conventionally rescaled images. Depending on the scientific question, other features may be relevant, including the size and shape of nuclei and cells and marker heterogeneity within the cells. Our experiment on the inclusion of cell shape and size properties as additional image color channels revealed that these features may contribute to better-performing models but tend to be more prone to overfitting and therefore need to be used with care. Additional image color channels encoding marker distribution properties (see Supplementary Material S.4) may be used in the same fashion. We used a SIN with orders of magnitude fewer parameters than our modified VGG architecture. We found that the smaller SIN model was more stable during training compared with VGG and performed, on average, better than VGG on unseen test samples. As smaller networks require less GPU memory, they allow for larger batch sizes or larger input images during training. This is particularly relevant toward training on whole-slide images to capture the entire tissue heterogeneity, a feature commonly observed in tumors.72 For evaluation, we compared Cell2Grid with conventional image rescaling using bil and cub interpolation as well as lanc sampling. Instead of merging pixels based on their proximity to the pixels of the target resolution, our method merges only pixels that belong to the same biological cell, a process related to the concept of superpixels.73,74 This cell-based approach of Cell2Grid enables subsequent analysis, such as cell counting, phenotyping, and cell-based interpretation of a trained neural network.75,76 Graph-based methods, including GNNs,45,76–78 also rely on the output of cell segmentation or nuclei identification in histologic images, but they require a cell graph based on the segmented cells and their coordinates. Our method follows a different approach by converting the list of segmented cells into a low-resolution image suited for training a CNN. While the input data for our method are the same as for cell graph creation, the output of Cell2Grid allows for the training of conventional, well-established CNNs, as compared with the more recent GNN architectures. In the last decade, higher standards for privacy protection have complicated the sharing of data obtained from human samples. Cell2Grid provides a way to share pathology data in a compact format that does not contain the original high-resolution histologic image. We therefore hope that this technique reduces concerns about data protection and encourages researchers to release datasets in this format.

4.1. Limitations and Future Work

The research scope of this work focused on the investigation of an alternative data representation of high-resolution histologic images to facilitate their use with CNNs.
Therefore, the first step of our algorithm directly relied on the output provided by commercial software available in most laboratories, which provided a standardized scenario on which to experiment. Moreover, in our current implementation, an IHC expert had to choose which method and parameters were best suited for the biological tissue. Because our method strongly depends on this cell segmentation step, this set of constraints constitutes an important limitation. Our research scope did not include an investigation of the effects of different cell segmentation algorithms, but fully automated cell segmentation is an active field of research24,30,31,53,54 that we intend to explore in future work, with recently published methods showing promising results.30,31 Additionally, Cell2Grid requires extra processing time compared with conventional image rescaling. As shown in Sec. 3.2, data preprocessing using Cell2Grid is slower than conventional image rescaling even without accounting for the cell segmentation time. Furthermore, cell graph creation was significantly faster than our method, with both starting from cell segmentation data. This additional data processing time needs to be weighed against any potential gains of our method for downstream applications, such as a potential reduction in CNN training time or an improvement in classification accuracy. In our experiments, we focused on a small set of cell features to construct Cell2Grid images, i.e., the mean marker expression within each cell. This enables a direct comparison with conventional image rescaling methods, as the number and type of image channels remain the same. To add more information to Cell2Grid images, other features, such as marker distribution properties across the cell and cell shape features, may be used as additional image channels in the Cell2Grid output, as demonstrated by our experiments (see the results for c2gShape and Sec. S.4 in the Supplementary Material). However, extracellular information or cell-based information that is either not extracted during cell segmentation or not used as a dedicated image channel in Cell2Grid is inevitably lost from the data. This is a limiting factor for our method, especially for applications in which a pathologist (or an image classification model) requires this information for an accurate assessment of the images. Regarding the assignment of cells to the target grid (Step 2 of Cell2Grid), care needs to be taken, as a large target grid spacing may lead to the deletion of individual cells. Even if in small numbers, cell deletion may be undesirable if tissue areas with high cell density are of particular interest, e.g., immune cell clusters in insulitis.79 In this case, a smaller target grid spacing $g$, a larger local conflict resolution window size $w$, or both are required, potentially increasing processing time. In our method, the precise location of individual cells is approximated to fit the target grid during cell assignment. However, because cells move in live tissue, their exact position in the final tissue section depends on the time of sampling as well as the location and angle of tissue sectioning. We estimate that the net effect of these factors is greater than the local shift in cell positions introduced by Cell2Grid, which for most cells is smaller than the target grid spacing $g$. Even though we demonstrated the benefits of Cell2Grid using multimegapixel fm-IHC image tiles, a WSI of a tissue section can be up to 30 times bigger in each spatial dimension.
In our experiments, we did not yet investigate the utility of Cell2Grid when applied to WSI data. Additionally, we only considered image rescaling and not image compression methods (such as JPEG200080) as reference data processing methods for CNN training, because the latter only reduce image storage size and not the memory size once the images are decompressed and loaded into memory for model training.8 Our study is limited by using only a single fm-IHC dataset of colorectal cancer patients that comprises only 54 patients and 1353 images. These numbers are small compared with the large number of CNN parameters in both the VGG and SIN architectures. For a more comprehensive analysis of our method, future experiments on different datasets with larger cohorts are required to identify potential weaknesses and demonstrate the robustness of our algorithm for different tissue types. Additionally, the utility of our method when applied to other image modalities, such as conventional H&E-stained images, needs to be investigated in future work. In addition, we want to build on our current work and further develop two areas of interest.
5. Conclusions

In this paper, we introduced Cell2Grid, an algorithm for efficient representation of histologic images whose output can be used directly to train CNNs. As a case study for evaluation, we used fm-IHC images to predict the 5-year relapse risk of stage II colon cancer patients. Our results showed that Cell2Grid preserves small-scale, cell-level information and that this information can reduce the test set error rate of a neural network by 25% relative to conventional bil rescaling at the same resolution, while also reducing training time by 85% compared with higher-resolution reference models. However, future experiments are required to investigate the performance of our method when applied to other image types, such as H&E-stained images and conventional chromogenic IHC. Compared with conventional image rescaling methods, images transformed with Cell2Grid are directly suited for cell-based analysis and, in addition, might simplify the interpretation of trained CNNs. Cell2Grid therefore opens the door for further exploring the use of histologic images with deep learning architectures and in a wider field of applications, such as volumetric images and synthetic image creation.

Disclosures

A patent for Cell2Grid has been filed by CBmed, Center for Biomarker Research in Medicine GmbH, with L.H. as inventor.

Acknowledgments

This work was supported by the Österreichische Forschungsförderungsgesellschaft FFG (Grant No. 865836). Use of samples from human subjects: the use of data obtained from human samples stored in a biobank was approved by the Ethics Committee of the Medical University of Graz ("Spatial distribution of immune cells as a prognostic and predictive biomarker in colon cancer - a retrospective study"; Ethics Committee number: 28-342 ex 15/16).

References
A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Adv. Neural Inf. Process. Syst. 25, 1097–1105 (2012).
C. Szegedy et al., "Going deeper with convolutions," in IEEE Conf. Comput. Vision and Pattern Recognit. (CVPR), 1–9 (2015), https://doi.org/10.1109/CVPR.2015.7298594.
K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," (2014).
O. Russakovsky et al., "ImageNet large scale visual recognition challenge," Int. J. Comput. Vision 115(3), 211–252 (2015), https://doi.org/10.1007/s11263-015-0816-y.
K. He et al., "Deep residual learning for image recognition," in IEEE Conf. Comput. Vision and Pattern Recognit. (CVPR), 770–778 (2015), https://doi.org/10.1109/CVPR.2016.90.
J. Deng et al., "ImageNet: a large-scale hierarchical image database," in IEEE Conf. Comput. Vision and Pattern Recognit., 248–255 (2009), https://doi.org/10.1109/CVPR.2009.5206848.
G. Romero et al., "Digital pathology consultations-a new era in digital imaging, challenges and practical applications," J. Digital Imaging 26(4), 668–677 (2013), https://doi.org/10.1007/s10278-013-9572-0.
J. Konsti et al., "Effect of image compression and scaling on automated scoring of immunohistochemical stainings and segmentation of tumor epithelium," Diagn. Pathol. 7, 29 (2012), https://doi.org/10.1186/1746-1596-7-29.
P. Mobadersany et al., "Predicting cancer outcomes from histology and genomics using convolutional networks," Proc. Natl. Acad. Sci. U. S. A. 115(13), E2970–E2979 (2018), https://doi.org/10.1073/pnas.1717139115.
J. N. Kather et al., "Predicting survival from colorectal cancer histology slides using deep learning: a retrospective multicenter study," PLoS Med. 16(1), e1002730 (2019), https://doi.org/10.1371/journal.pmed.1002730.
J. Xu et al., "A deep convolutional neural network for segmenting and classifying epithelial and stromal regions in histopathological images," Neurocomputing 191, 214–223 (2016), https://doi.org/10.1016/j.neucom.2016.01.034.
P. Khosravi et al., "Deep convolutional neural networks enable discrimination of heterogeneous digital pathology images," EBioMedicine 27, 317–328 (2018), https://doi.org/10.1016/j.ebiom.2017.12.026.
A. Gertych et al., "Convolutional neural networks can accurately distinguish four histologic growth patterns of lung adenocarcinoma in digital slides," Sci. Rep. 9, 1483 (2019), https://doi.org/10.1038/s41598-018-37638-9.
W. Bulten et al., "Epithelium segmentation using deep learning in H&E-stained prostate specimens with immunohistochemistry as reference standard," Sci. Rep. 9, 864 (2019), https://doi.org/10.1038/s41598-018-37257-4.
W. C. C. Tan et al., "Overview of multiplex immunohistochemistry/immunofluorescence techniques in the era of cancer immunotherapy," Cancer Commun. 40(4), 135–153 (2020), https://doi.org/10.1002/cac2.12023.
D. J. Fassler et al., "Deep learning-based image analysis methods for brightfield-acquired multiplex immunohistochemistry images," Diagn. Pathol. 15(1), 100 (2020), https://doi.org/10.1186/s13000-020-01003-0.
D. Maric et al., "Whole-brain tissue mapping toolkit using large-scale highly multiplexed immunofluorescence imaging and deep neural networks," Nat. Commun. 12, 1550 (2021), https://doi.org/10.1038/s41467-021-21735-x.
L. Hou et al., "Patch-based convolutional neural network for whole slide tissue image classification," in Proc. IEEE Comput. Soc. Conf. Comput. Vision Pattern Recognit., 2424–2433 (2016), https://doi.org/10.1109/CVPR.2016.266.
B. Smith et al., "Developing image analysis pipelines of whole-slide images: pre- and post-processing," J. Clin. Transl. Sci. 5(1), e38 (2020), https://doi.org/10.1017/cts.2020.531.
A. Pirovano et al., "Automatic feature selection for improved interpretability on whole slide imaging," Mach. Learn. Knowl. Extract. 3(1), 243–262 (2021), https://doi.org/10.3390/make3010012.
M. Y. Lu et al., "Data-efficient and weakly supervised computational pathology on whole-slide images," Nat. Biomed. Eng. 5, 555–570 (2021), https://doi.org/10.1038/s41551-020-00682-w.
R. J. Chen et al., "Whole slide images are 2D point clouds: context-aware survival prediction using patch-based graph convolutional networks," Lect. Notes Comput. Sci. 12908, 339–349 (2021), https://doi.org/10.1007/978-3-030-87237-3_33.
M. Lerousseau et al., "SparseConvMIL: sparse convolutional context-aware multiple instance learning for whole slide image classification," in Proc. MICCAI Workshop on Comput. Pathol., 129–139 (2021).
Q. D. Vu et al., "Methods for segmentation and classification of digital microscopy tissue images," Front. Bioeng. Biotechnol. 7, 53 (2019), https://doi.org/10.3389/fbioe.2019.00053.
M. Abdolhoseini et al., "Segmentation of heavily clustered nuclei from histopathological images," Sci. Rep. 9, 4551 (2019), https://doi.org/10.1038/s41598-019-38813-2.
X. Xiao et al., "Dice-XMBD: deep learning-based cell segmentation for imaging mass cytometry," Front. Genet. 12, 721229 (2021), https://doi.org/10.3389/fgene.2021.721229.
M. Y. Lee et al., "CellSeg: a robust, pre-trained nucleus segmentation and pixel quantification software for highly multiplexed fluorescence images," BMC Bioinf. 23(1), 46 (2022), https://doi.org/10.1186/s12859-022-04570-9.
C. McQuin et al., "CellProfiler 3.0: next-generation image processing for biology," PLoS Biol. 16(7), e2005970 (2018), https://doi.org/10.1371/journal.pbio.2005970.
C. T. Rueden et al., "ImageJ2: ImageJ for the next generation of scientific image data," BMC Bioinf. 18(1), 529 (2017), https://doi.org/10.1186/s12859-017-1934-z.
W. Han et al., "Cell segmentation for immunofluorescence multiplexed images using two-stage domain adaptation and weakly labeled data for pre-training," Sci. Rep. 12, 4399 (2022), https://doi.org/10.1038/s41598-022-08355-1.
N. F. Greenwald et al., "Whole-cell segmentation of tissue images with human-level performance using large-scale data annotation and deep learning," Nat. Biotechnol. 40(4), 555–565 (2022), https://doi.org/10.1038/s41587-021-01094-0.
A. Baddeley and R. Turner, "SPATSTAT: an R package for analyzing spatial point patterns," J. Stat. Software 12, 1–42 (2005), https://doi.org/10.18637/jss.v012.i06.
D. Schapiro et al., "histoCAT: analysis of cell phenotypes and interactions in multiplex image cytometry data," Nat. Methods 14(9), 873–876 (2017), https://doi.org/10.1038/nmeth.4391.
K. S. Johnson, "Phenoptr: InForm helper functions. R package," https://github.com/PerkinElmer/phenoptr (2018).
C. M. Schürch et al., "Coordinated cellular neighborhoods orchestrate antitumoral immunity at the colorectal cancer invasive front," Cell 182(5), 1341–1359.e19 (2020), https://doi.org/10.1016/j.cell.2020.07.005.
M. Angelova et al., "Evolution of metastases in space and time under immune selection," Cell 175(3), 751–765.e16 (2018), https://doi.org/10.1016/j.cell.2018.09.018.
A. Lu et al., "Comparison of biomarker modalities for predicting response to PD-1/PD-L1 checkpoint blockade: a systematic review and meta-analysis," JAMA Oncol. 5(8), 1195–1204 (2019), https://doi.org/10.1001/jamaoncol.2019.1549.
I. P. Nearchou et al., "Automated analysis of lymphocytic infiltration, tumor budding, and their spatial relationship improves prognostic accuracy in colorectal cancer," Cancer Immunol. Res. 7(4), 609–620 (2019), https://doi.org/10.1158/2326-6066.CIR-18-0377.
Y. K. Huang et al., "Macrophage spatial heterogeneity in gastric cancer defined by multiplex immunohistochemistry," Nat. Commun. 10, 3928 (2019), https://doi.org/10.1038/s41467-019-11788-4.
C. Lewis, P. Xun, and K. He, "Effects of adjuvant chemotherapy on recurrence, survival, and quality of life in stage II colon cancer patients: a 24-month follow-up," Support Care Cancer 24(4), 1463–1471 (2016), https://doi.org/10.1007/s00520-015-2931-2.
P. Dalerba et al., "CDX2 as a prognostic biomarker in Stage II and Stage III colon cancer," N. Engl. J. Med. 374(3), 211–222 (2016), https://doi.org/10.1056/NEJMoa1506597.
S. E. Rebuzzi et al., "Adjuvant chemotherapy for Stage II colon cancer," Cancers 12(9), 2584 (2020), https://doi.org/10.3390/cancers12092584.
R. Caso, A. Fabrizio, and M. Sosin, "Prolonged follow-up of colorectal cancer patients after 5 years: to follow or not to follow, that is the question (and how)!," Ann. Transl. Med. 8(5), 164 (2020), https://doi.org/10.21037/atm.2019.11.40.
E. Wulczyn et al., "Interpretable survival prediction for colorectal cancer using deep learning," NPJ Digital Med. 4(1), 71 (2021), https://doi.org/10.1038/s41746-021-00427-2.
Y. Zhou et al., "CGC-net: cell graph convolutional network for grading of colorectal cancer histology images," in IEEE/CVF Int. Conf. Comput. Vision Workshop (ICCVW) (2019), https://doi.org/10.1109/ICCVW.2019.00050.
G. Jaume et al., "Towards explainable graph representations in digital pathology," in ICML 2020 Workshop on Comput. Biol. (2020).
C. Gunduz, B. Yener, and S. H. Gultekin, "The cell graphs of cancer," Bioinformatics 20(Suppl. 1), i145–i151 (2004), https://doi.org/10.1093/bioinformatics/bth933.
J. You, R. Ying, and J. Leskovec, "Design space for graph neural networks," in Proc. 34th Int. Conf. Neural Inf. Process. Syst. (2020).
J. Zhou et al., "Graph neural networks: a review of methods and applications," AI Open 1, 57–81 (2018), https://doi.org/10.1016/j.aiopen.2021.01.001.
D. Ahmedt-Aristizabal et al., "A survey on graph-based deep learning for computational histopathology," Comput. Med. Imaging Graphics 95, 102027 (2022), https://doi.org/10.1016/j.compmedimag.2021.102027.
S. Deng et al., "Deep learning in digital pathology image analysis: a survey," Front. Med. 14(4), 470–487 (2020), https://doi.org/10.1007/s11684-020-0782-9.
J. van der Laak, G. Litjens, and F. Ciompi, "Deep learning in histopathology: the path to the clinic," Nat. Med. 27(5), 775–784 (2021), https://doi.org/10.1038/s41591-021-01343-4.
R. M. Thomas and J. John, "A review on cell detection and segmentation in microscopic images," in Int. Conf. Circuit, Power and Comput. Technol. (ICCPCT), 1–5 (2017).
S. Graham et al., "Hover-Net: simultaneous segmentation and classification of nuclei in multi-tissue histology images," Med. Image Anal. 58, 101563 (2018), https://doi.org/10.1016/j.media.2019.101563.
J. Munkres, "Algorithms for the assignment and transportation problems," J. Soc. Ind. Appl. Math. 5(1), 32–38 (1957), https://doi.org/10.1137/0105003.
N. L. Johnson, Univariate Discrete Distributions, Wiley (2005).
D. Giavarina, "Understanding Bland Altman analysis," Biochem. Med. 25(2), 141–151 (2015), https://doi.org/10.11613/BM.2015.015.
K. Turkowski, "Filters for common resampling tasks," https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.116.7898 (1990).
J. Qu et al., "Gastric pathology image classification using stepwise fine-tuning for deep neural networks," J. Healthcare Eng. 2018, 8961781 (2018), https://doi.org/10.1155/2018/8961781.
P. Bandi et al., "From detection of individual metastases to classification of lymph node status at the patient level: the CAMELYON17 challenge," IEEE Trans. Med. Imaging 38(2), 550–560 (2019), https://doi.org/10.1109/TMI.2018.2867350.
Q. Guan et al., "Deep convolutional neural network VGG-16 model for differential diagnosing of papillary thyroid carcinomas in cytological images: a pilot study," J. Cancer 10(20), 4876–4882 (2019), https://doi.org/10.7150/jca.28769.
Y. Wang et al., "Using deep convolutional neural networks for multi-classification of thyroid tumor by histopathology: a large-scale pilot study," Ann. Transl. Med. 7(18), 468 (2019), https://doi.org/10.21037/atm.2019.08.54.
S. Pang et al., "VGG16-T: a novel deep convolutional neural network with boosting to identify pathological type of lung cancer in early stage by CT images," in Proc. Int. Joint Conf. Bioinforma. Syst. Biol. Intell. Comput., 771 (2020).
G. S. Ioannidis et al., "Pathomics and deep learning classification of a heterogeneous fluorescence histology image dataset," Appl. Sci. 11(9), 3796 (2021), https://doi.org/10.3390/app11093796.
B. Kieffer et al., "Convolutional neural networks for histopathology image classification: training vs. using pre-trained networks," in Seventh Int. Conf. Image Process. Theory, Tools and Appl. (IPTA), 1–6 (2017).
M. Lin, Q. Chen, and S. Yan, "Network in network," (2013).
D. Grattarola and C. Alippi, "Graph neural networks in TensorFlow and Keras with Spektral," (2020).
D. P. Kingma and J. Ba, "Adam: a method for stochastic optimization," in Int. Conf. Learn. Represent. (ICLR) (2015).
M. Abadi et al., "TensorFlow: a system for large-scale machine learning," (2016), https://www.usenix.org/system/files/conference/osdi16/osdi16-abadi.pdf.
J. Galon et al., "Towards the introduction of the 'Immunoscore' in the classification of malignant tumours," J. Pathol. 232(2), 199–209 (2014), https://doi.org/10.1002/path.4287.
O. M. Zlatian et al., "Histochemical and immunohistochemical evidence of tumor heterogeneity in colorectal cancer," Rom. J. Morphol. Embryol. 56(1), 175–181 (2015).
S. Akbar et al., "Tumor localization in tissue microarrays using rotation invariant superpixel pyramids," in IEEE 12th Int. Symp. Biomed. Imaging (ISBI), 1292–1295 (2015).
M. E. A. Bechar et al., "Influence of normalization and color features on super-pixel classification: application to cytological image segmentation," Aust. Phys. Eng. Sci. Med. 42(2), 427–441 (2019), https://doi.org/10.1007/s13246-019-00735-8.
M. T. Ribeiro, S. Singh, and C. Guestrin, "Why should I trust you?: explaining the predictions of any classifier," 1135–1144 (2016).
G. Jaume et al., "Quantifying explainers of graph neural networks in computational pathology," in IEEE/CVF Conf. Comput. Vision and Pattern Recognit. (CVPR), 8102–8112 (2020), https://doi.org/10.1109/CVPR46437.2021.00801.
T. N. Kipf and M. Welling, "Semi-supervised classification with graph convolutional networks," in 5th Int. Conf. Learn. Represent., 24–26 (2017).
K. Xu et al., "How powerful are graph neural networks?," in Int. Conf. Learn. Represent. (2019).
M. A. Atkinson et al., "Organisation of the human pancreas in health and in diabetes," Diabetologia 63(10), 1966–1973 (2020), https://doi.org/10.1007/s00125-020-05203-7.
D. S. Taubman and M. W. Marcellin, JPEG2000 Image Compression Fundamentals, Standards and Practice, Springer US (2002).
S. Libard, D. Cerjan, and I. Alafuzoff, "Characteristics of the tissue section that influence the staining outcome in immunohistochemistry," Histochem. Cell Biol. 151(1), 91–96 (2019), https://doi.org/10.1007/s00418-018-1742-1.
T. Kurc et al., "Segmentation and classification in digital pathology for glioma research: challenges and deep learning approaches," Front. Neurosci. 14, 27 (2020), https://doi.org/10.3389/fnins.2020.00027.
F. Pagès et al., "International validation of the consensus Immunoscore for the classification of colon cancer: a prognostic and accuracy study," Lancet 391(10135), 2128–2139 (2018), https://doi.org/10.1016/S0140-6736(18)30789-X.
J. Galon and D. Bruni, "Approaches to treat immune hot, altered and cold tumours with combination immunotherapies," Nat. Rev. Drug Discov. 18(3), 197–218 (2019), https://doi.org/10.1038/s41573-018-0007-y.
L. Herbsthofer et al., "Procedural generation of synthetic multiplex immunohistochemistry images using cell-based image compression and conditional generative adversarial networks," Proc. SPIE 12039, 120390N (2022), https://doi.org/10.1117/12.2606365.
M. D. Zeiler and R. Fergus, "Visualizing and understanding convolutional networks," in Comput. Vision – ECCV 2014, 818–833 (2014).
A. Mahendran and A. Vedaldi, "Understanding deep image representations by inverting them," in Proc. IEEE Conf. Comput. Vision and Pattern Recognit., 5188–5196 (2015), https://doi.org/10.1109/CVPR.2015.7299155.
Q. Zhang, Y. Nian Wu, and S. C. Zhu, "Interpretable convolutional neural networks," in Proc. IEEE Conf. Comput. Vision and Pattern Recognit., 8827–8836 (2018), https://doi.org/10.1109/CVPR.2018.00920.
Y. Kaya, S. Hong, and T. Dumitras, "Shallow-deep networks: understanding and mitigating network overthinking," in Proc. 2019 Int. Conf. Mach. Learn. (ICML) (2019).
H. W. Kuhn, "The Hungarian method for the assignment problem," Nav. Res. Logist. Q. 2(1–2), 83–97 (1955), https://doi.org/10.1002/nav.3800020109.
F. Bourgeois and J. C. Lassalle, "An extension of the Munkres algorithm for the assignment problem to rectangular matrices," Commun. ACM 14(12), 802–804 (1971), https://doi.org/10.1145/362919.362945.
N. Tomizawa, "On some techniques useful for solution of transportation network problems," Networks 1(2), 173–194 (1971), https://doi.org/10.1002/net.3230010206.
E. J. Wood, "Cellular and molecular immunology (5th ed.): Abbas, A. K., and Lichtman, A. H.," Biochem. Mol. Biol. Educ. 32(1), 65–66 (2004), https://doi.org/10.1002/bmb.2004.494032019997.
H. Süße and E. Rodner, Bildverarbeitung und Objekterkennung: Computer Vision in Industrie und Medizin, Springer Vieweg, Wiesbaden (2014).
A. W. Paeth, "A fast algorithm for general raster rotation," in Proc. Graphics Interface '86/Vision Interface '86, 77–81 (1986).
Biography

Laurin Herbsthofer is a data scientist at a biomedical research company. In 2017, he received his MSc degree in theoretical and computational physics at the University of Graz, Austria. He is currently enrolled in a PhD program at the Medical University of Graz, Austria. His main focus is the analysis of histologic multiplex images for research and clinical applications, especially using cell-based data representations and machine learning tools.