Artificial intelligence enables the identification and quantification of arbuscular mycorrhizal fungi in plant roots

1 The Sainsbury Laboratory, Cambridge University (SLCU). Cambridge, UK. 2 Department of Applied Mathematics and Theoretical Physics (DAMTP), University of Cambridge. Cambridge, UK. 3 Present address: Department of Biochemistry, University of Cambridge. Cambridge, UK. 4 Department of Plant Sciences, University of Cambridge. Cambridge, UK. * Correspondence: edouard.evangelisti@slcu.cam.ac.uk, sebastian.schornack@slcu.cam.ac.uk


Introduction
Soil fungi establish mutualistic interactions with the roots of more than 85% of vascular land plants (Brundrett & Tedersoo, 2018). These interactions, termed mycorrhizae, lead either to the formation of a dense hyphal sheath surrounding the root surface (ectomycorrhizae) or to fungal hyphae penetrating host tissues (endomycorrhizae) (Brundrett, 2002). The best-characterized type of endomycorrhiza, called arbuscular mycorrhiza (AM), involves species from the subphylum Glomeromycotina (Schüßler et al., 2001; Spatafora et al., 2016). AM fungal hyphae grow toward plant roots following the exchange of diffusible chemical cues (Luginbuehl & Oldroyd, 2017). At root surface penetration points, hyphae differentiate into swollen or branched structures termed hyphopodia. Following entry and crossing of the root epidermis, hyphae spread either between cortical cells (Arum-type colonization) or via intracellular passages of cortical cells (Paris-type colonization) (Dickson, 2004). The differentiation of highly branched intracellular exchange structures, the arbuscules, accompanies hyphal growth and enables reciprocal transfer of nutrients (Luginbuehl & Oldroyd, 2017).
Post-arbuscular development includes the differentiation of vesicles and sporulation. While these successive differentiation events reflect a precise morphogenetic program, the whole hyphal network is not synchronized. As a result, the various types of intraradical hyphal structures occur simultaneously inside plant roots (Montero et al., 2019). The extent of fungal colonisation and the abundance of intraradical hyphal structures in roots are essential parameters for characterising host genes that underlie mycorrhiza establishment and accommodation (Montero et al., 2019). Mycorrhiza-responsive host genes facilitate the molecular quantification of fungal colonisation. For instance, expression of the Medicago truncatula Phosphate transporter 4 (MtPT4) gene is limited to the root tip in the absence of mycorrhiza (Volpe et al., 2016), while cells with arbuscules express MtPT4 to enable plant acquisition of inorganic phosphate (Harrison et al., 2002; Maeda et al., 2006; Javot et al., 2007). Likewise, the abundance of transcripts encoding M. truncatula Blue Copper-Binding Protein 1 and Lotus japonicus apoplastic subtilase SbtM correlates with stage transitions during arbuscule development (Hohnjec et al., 2005; Takeda et al., 2009; Parádi et al., 2010). Complementary to molecular methods and independent of gene sequence knowledge is the visual diagnosis of AM fungal colonisation. It consists of differential staining of fungal cell walls (Vierheilig et al., 1998, 2005; Hulse, 2018) followed by random sampling and counting using a grid-intersect method (Giovannetti & Mosse, 1980). This method is considered a standard in mycorrhiza research (Sun & Tang, 2012).
Deep learning encompasses an extensive class of computational models that learn to extract information from raw data at multiple levels of abstraction, thereby mimicking how the human brain perceives and understands information (Voulodimos et al., 2018). In supervised learning problems, where example data labelled with correct outputs are available, these models can be iteratively improved to minimise discrepancies between correct and model-predicted outputs considering all possible interfering factors (O'Mahony et al., 2020).
With the increase in computing power over recent years, deep learning has fostered tremendous advances in data analysis. Computer vision is one of the most iconic examples, with the development of convolutional neural networks (CNNs), a class of deep learning methods inspired by models of the visual system's structure (LeCun et al., 1998). A typical CNN architecture comprises three types of processing (or neural) layers. First, a convolutional layer uses a set of local receptive fields called filters or kernels to extract elementary visual features (e.g. edges or arbitrary shapes) from a group of neighbouring pixels. The resulting feature map then feeds a pooling layer. Pooling layers are down-sampling steps aiming to reduce feature map width and height for the next convolutional layer. A typical CNN allows for detecting high-order features through several rounds of convolutional and pooling layers (Voulodimos et al., 2018; Dhillon & Verma, 2020). Fully connected layers follow to enable high-level reasoning and decision-making. As their name implies, these layers contain neurons that have full connections to the previous layer. Fully connected layers convert two-dimensional feature maps into a one-dimensional feature vector that can be fed forward into categories for classification (Krizhevsky et al., 2017) or used for further processing (Girshick et al., 2014). CNNs underlie breakthrough advances in diverse technological and biomedical domains including face recognition, object detection, diagnostic imaging, and self-driving cars (Matsugu et al., 2003; Szarvas et al., 2005; Bojarski et al., 2016; Yamashita et al., 2018).
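The convolution-and-pooling principle can be sketched in a few lines of NumPy. The 8×8 "image", the edge-detection kernel, and the 2×2 pooling size below are arbitrary illustrations of the mechanism, not parameters of any particular network:

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid 2D convolution (cross-correlation, as used in CNNs)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Down-sample a feature map by taking the max of each size x size block."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

# A toy 8x8 'image' with a vertical edge, and a 3x3 vertical-edge kernel.
image = np.zeros((8, 8))
image[:, 4:] = 1.0
kernel = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]])

fmap = convolve2d(image, kernel)   # 6x6 feature map: strong response along the edge
pooled = max_pool(fmap)            # 3x3 map after 2x2 max pooling
```

The feature map responds only where the kernel's pattern (here, a vertical edge) occurs, and pooling halves each spatial dimension, which is exactly the width/height reduction described above.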
We took advantage of CNNs to develop the Automatic Mycorrhiza Finder (AMFinder), an automatic, user-supervised tool suite for in silico analysis of AM fungal colonisation and recognition of intraradical hyphal structures. Using AMFinder, we quantified fungal colonisation dynamics on whole Nicotiana benthamiana root systems using low-resolution, flatbed scanner-acquired scans of ink-stained roots. Moreover, AMFinder robustly identified colonised root sections and intraradical hyphal structures on several model species used in mycorrhiza research, including Medicago truncatula, Lotus japonicus, and Oryza sativa. We developed a standalone graphical browser to enable efficient browsing of large images and manual curation of computer predictions. Overall, our work provides a framework for reproducible automated phenotyping of AM fungal colonisation of plant roots.

Plant material
Nicotiana benthamiana is a laboratory cultivar obtained from The Sainsbury Laboratory, Norwich, UK, originating from Australia (Bally et al., 2018). Medicago truncatula R108 seeds were provided by Giles Oldroyd (The Sainsbury Laboratory, UK). Lotus japonicus Gifu seeds were provided by Simona Radutoiu (Aarhus University, Denmark). Rice (Oryza sativa subsp. japonica) plant material, growth conditions and AM colonisation conditions were described elsewhere (Choi et al., 2020).

Seed germination
N. benthamiana seeds were germinated on Levington F2 compost (ICL, Ipswich, UK) for one week at 24°C with a 16-h photoperiod. M. truncatula seeds were scarified in sulphuric acid for 5 min, rinsed in sterile water and surface-sterilized in bleach for 5 min. Seeds were then soaked in water for 30 min and stratified for 3 days at 4°C in the dark. L. japonicus seeds were scarified with sandpaper, surface-sterilized in bleach for 15 min and soaked overnight in water at 4°C. Germination was induced at 20°C.

Growth conditions for AM colonisation
One-week-old seedlings were transferred to 6×5 cellular trays containing silver sand supplemented with a 1:10 volume of R. irregularis crude inoculum (PlantWorks, Sittingbourne, UK) and grown at 24°C with a 16-h photoperiod. N. benthamiana plants were watered with a low-phosphate Long Ashton nutrient solution (Hewitt, 1966), while milliQ water was used for L. japonicus and M. truncatula plants. Plant roots were harvested at either 4 or 6 weeks post-inoculation and directly used for staining or total mRNA extraction.

Scanning and bright field imaging
Low-magnification images of ink-stained roots were acquired with an Epson Perfection flatbed scanner (Epson UK, Hemel Hempstead, UK) using default settings and a resolution of 3200 dots per inch. High-magnification images were acquired with a VHX-5000 digital microscope (Keyence, Milton Keynes, UK) equipped with a 50-200× zoom lens set to 200× magnification, using transillumination mode.

Generation of modified image datasets
Image modifications were achieved using the batch processing tool convert from the ImageMagick suite (https://imagemagick.org). Sharpening was achieved using -unsharp 5.
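ImageMagick's `-unsharp` operator implements unsharp masking, i.e. boosting the difference between an image and a blurred copy of itself. The NumPy sketch below illustrates the principle only: it uses a simple box blur for brevity, whereas ImageMagick uses a Gaussian blur and accepts additional radius, sigma, amount, and threshold parameters:

```python
import numpy as np

def box_blur(img, radius=1):
    """Mean filter with edge replication (a stand-in for a Gaussian blur)."""
    padded = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img, dtype=float)
    k = 2 * radius + 1
    for di in range(k):
        for dj in range(k):
            out += padded[di:di+img.shape[0], dj:dj+img.shape[1]]
    return out / (k * k)

def unsharp(img, amount=1.0, radius=1):
    """Unsharp masking: add back the detail lost by blurring."""
    img = img.astype(float)
    return np.clip(img + amount * (img - box_blur(img, radius)), 0, 255)

# A soft intensity ramp becomes steeper (higher local contrast) after sharpening.
row = np.array([[100, 100, 120, 140, 160, 180, 180, 180]], dtype=float)
sharp = unsharp(np.repeat(row, 8, axis=0), amount=1.0)
```

Sharpening creates characteristic overshoot on both sides of an edge: pixels just below the ramp are pushed darker and pixels just above it brighter.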

AMFinder uses computer vision for in silico analysis of roots colonised by AM fungi
The extent of fungal root colonisation is an important parameter used to characterise mutualistic relationships between AM fungi and plants. We developed the artificial intelligence-based software Automatic Mycorrhiza Finder (AMFinder) to enable a straightforward, automated, and reproducible estimation of this parameter. AMFinder uses a computer vision approach to quantify fungal colonisation and intraradical hyphal structures in root pictures. It comprises a command-line program (amf) for automatic root image analysis and a standalone interface (amfbrowser) for user supervision of computer predictions (Fig. 1, S1).
The AMFinder prediction pipeline consists of five steps. During the initial pre-processing step, root images are divided into tiles by amf using a user-defined tile size depending on image magnification and resolution (Fig. 1a). The first round of predictions follows. A convolutional neural network (CNN1) labels colonised root segments by analysing tiles individually (Fig. 1a, S1). CNN1 comprises four blocks of 3×3 filters (convolutions) interleaved with size-reduction layers (maximum pooling) (Fig. S1a), followed by a classifier made of three fully connected layers which compute the probabilities of each tile belonging to the mutually exclusive classes 'colonised root section' (M+), 'non-colonised root section' (M-), and 'background/not a root/other' (Fig. 1a, Fig. S1b).
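The behaviour of the CNN1 classifier can be illustrated with a minimal NumPy sketch: a softmax maps raw classifier scores to probabilities over the three mutually exclusive tile classes, and tiles whose probabilities all sit near 1/3 can be flagged for user review. The raw scores and the 0.05 flagging margin below are made-up illustrations, not AMFinder internals:

```python
import numpy as np

# Mutually exclusive tile classes predicted by CNN1.
CLASSES = ('M+', 'M-', 'background')

def softmax(logits):
    """Convert raw classifier scores into probabilities summing to 1."""
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

# Hypothetical raw scores for one tile; only their relative values matter.
probs = softmax(np.array([2.1, 0.3, -1.5]))
predicted = CLASSES[int(np.argmax(probs))]

# A tile is 'low confidence' when all three probabilities are close to 1/3.
low_confidence = bool(np.all(np.abs(probs - 1/3) < 0.05))
```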
The third step consists of the user-supervised conversion of CNN1 predictions to annotations using amfbrowser (Fig. 1a). A dedicated toolbox allows users to specifically review tiles with low-confidence predictions (i.e. those having all probabilities close to 1/3) (Fig. 1b, toolbox 6). A magnified view of the active tile surrounded by its eight neighbours facilitates manual inspection and identification of fungal structures located at tile edges (Fig. 1b). A fixed-size sliding window (13×12 tiles) gives an overview of a larger image area and ensures optimal tile display irrespective of the overall image dimensions (Fig. 1b). For immediate visual distinction, final annotations are displayed as squares (Fig. 1a, b). After converting CNN1 predictions to annotations and upon user request, amf can proceed with a more detailed analysis of AM fungal hyphal structures (Fig. 1a). An independent convolutional neural network (CNN2) then predicts the presence of arbuscules (A), vesicles (V), hyphopodia (H), and intraradical hyphae (IH) on M+ tiles only (Fig. 1a, b).
The CNN2 architecture is essentially the same as that of CNN1 (Fig. S1a). However, the probability that each different type of intraradical hyphal structure is present is computed by a separate stack of three fully connected layers atop the convolutional and pooling layers (Fig. S1c). Therefore, each classifier returns a single probability. As for CNN1, CNN2 predictions can be displayed in their image context using amfbrowser for manual inspection (Fig. 1b).
Since each fungal structure receives an independent probability, CNN2 scores are displayed as radar charts. Radar charts consist of three concentric circles, with the outermost circle corresponding to highest confidence (Fig. 1a, b), overlaid with coloured dots corresponding to individual hyphal structures. Dot positioning reflects prediction confidence. Automatic conversion to annotations (using 0.5 as probability threshold between presence and absence) is also available.
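The contrast with CNN1 can be sketched as follows: each CNN2 head outputs one independent sigmoid probability, and the 0.5 threshold converts these probabilities to presence/absence annotations. The raw head scores below are hypothetical:

```python
import numpy as np

# Structures scored independently by CNN2 on M+ tiles:
# arbuscules, vesicles, hyphopodia, intraradical hyphae.
STRUCTURES = ('A', 'V', 'H', 'IH')

def sigmoid(x):
    """Each classifier head returns one independent probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical raw scores from the four classifier heads for one tile.
scores = np.array([1.8, -0.4, -3.0, 0.9])
probs = sigmoid(scores)

# Automatic conversion to annotations, using 0.5 as the presence threshold.
annotations = {s: bool(p >= 0.5) for s, p in zip(STRUCTURES, probs)}
```

Because the probabilities are independent, any combination of structures can be annotated on the same tile, which is not possible with a single softmax output.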
amf and amfbrowser communicate through a standard ZIP archive file that stores amf probabilities, user annotations, and image settings. Together, the AMFinder pipeline consisting of probability-scoring using amf and visual inspection and analysis of results in amfbrowser enables semi-automated, user-supervised, high-precision analysis of AM fungal colonisation in silico.
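A minimal sketch of this exchange using Python's standard `zipfile` module is shown below; the member names (`predictions.tsv`, `settings.json`) and file layout are hypothetical illustrations, not the actual AMFinder archive format:

```python
import io
import zipfile

# Hypothetical archive members: per-tile probabilities and image settings.
predictions_tsv = "row\tcol\tM+\tM-\tbackground\n0\t0\t0.91\t0.06\t0.03\n"
settings_json = '{"tile_size": 126}'

# The producer (an amf-like tool) writes both members into one ZIP archive.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as archive:
    archive.writestr("predictions.tsv", predictions_tsv)
    archive.writestr("settings.json", settings_json)

# A consumer (an amfbrowser-like tool) reads the same archive back.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as archive:
    members = sorted(archive.namelist())
    restored = archive.read("predictions.tsv").decode()
```

Using a plain ZIP container keeps the interchange file inspectable with any standard archive tool.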

Figure 2. AMFinder training pipeline.
A set of manually annotated images (green) is pre-processed to generate training (black) and validation (grey) tile sets. N training cycles (epochs) follow, during which CNN internal parameters are adjusted based on the training set. The validation set allows for independent monitoring of CNN performance by assessing model misfit (loss). An early stopping mechanism (blue) terminates training if the loss does not decrease further over 12 consecutive epochs. Best-performing parameters (blue) are saved for further use.

AMFinder CNNs can be trained to maximise versatility
A key prerequisite of AMFinder is having CNNs trained to recognise the desired structures (Fig. 2, 3). The amf training pipeline uses a set of images that are split into tiles and manually annotated within amfbrowser. First, amf randomly assigns annotated tiles to two groups termed training (Fig. 2, black) and validation (Fig. 2, grey) tile subsets. For optimal training, amf compensates for background over-representation by randomly removing excess background tiles and assigns training weights based on tile count in each annotation class to account for any residual imbalance. One hundred training cycles (epochs) follow where the training subset is used to adjust CNN model parameters (Fig. 2). The validation set is used at the end of each epoch to estimate model accuracy on independent data and detect overfitting, i.e. a performance decrease due to specialisation toward the training dataset. Model performance assessment relies on two evaluation metrics: the accuracy, which is the ratio between the number of correct predictions and the total number of predictions, and the loss, which is a measure of the distance between the correct annotations and the model's predictions. Consistent with their respective outputs, CNN1 uses categorical cross-entropy as its loss function, while each CNN2 classifier uses binary cross-entropy (Gordon-Rodriguez et al., 2020). To prevent overfitting, an early stopping mechanism prematurely terminates training and restores the best-performing model parameters if the loss does not decrease for twelve training cycles in a row (Fig. 2). These steps result in CNNs best trained to recognise the desired structures and ensure the versatility of AMFinder across a range of different types of user-provided images.
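The imbalance-handling and early-stopping ideas can be sketched in plain Python. The tile counts, loss curve, and inverse-frequency weighting formula below are illustrative assumptions, not the exact computations performed by amf:

```python
def class_weights(tile_counts):
    """Inverse-frequency weights to offset residual class imbalance:
    rarer annotation classes receive a larger training weight."""
    total = sum(tile_counts.values())
    n = len(tile_counts)
    return {cls: total / (n * count) for cls, count in tile_counts.items()}

def train_with_early_stopping(losses_per_epoch, patience=12):
    """Stop once the validation loss has not improved for `patience`
    consecutive epochs, and report the best epoch (whose parameters
    would be restored)."""
    best_loss, best_epoch, waited = float('inf'), -1, 0
    for epoch, loss in enumerate(losses_per_epoch):
        if loss < best_loss:
            best_loss, best_epoch, waited = loss, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                break
    return best_epoch, best_loss

# Hypothetical tile counts after background down-sampling: M+ is rarest.
weights = class_weights({'M+': 800, 'M-': 1600, 'background': 1600})

# A validation-loss curve that bottoms out at epoch 3 then plateaus,
# so training stops well before the 100-epoch limit.
losses = [0.9, 0.6, 0.5, 0.42] + [0.45] * 20
best_epoch, best_loss = train_with_early_stopping(losses, patience=12)
```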
Consistent with the multiple tile sizes used for training, changes in tile size did not significantly affect CNN1's performance (Fig. S6). Thus, CNN1 consistently labels fungal colonisation of N. benthamiana roots irrespective of the image resolution, suggesting it may be compatible with a wide range of acquisition devices.

Brightness, contrast, and colour are instrumental to pre-trained network performance
Neural network accuracy is affected by discrepancies between the images used for predictions and the training dataset. To provide guidelines about the image settings required for optimal performance of our pre-trained models, we assessed CNN1 pre-trained model accuracy on high-resolution images following modification of either edge sharpness, noise levels, brightness, or colours (Fig. 4, S10). The pre-trained model was robust to sharpening and blurring (Fig. 4a), likely due to the smooth edges of the low-resolution images used for training.
Similarly, an increase in Gaussian noise did not significantly alter model performance (Fig. 4a). By contrast, low brightness or contrast, changes in hue or saturation, and colour inversion significantly increased the misprediction rate, up to 73% (Fig. 4a). To assess whether these limitations are due to the training dataset, we trained CNN1 de novo using high-resolution images bearing the previously tested modifications (Fig. 4b, c).
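Such image modifications can be generated programmatically. The NumPy sketch below shows plausible implementations of the brightness, contrast, inversion, and Gaussian-noise perturbations; the factors and sigma are arbitrary, and the modifications applied in this study were produced with ImageMagick rather than this code:

```python
import numpy as np

rng = np.random.default_rng(0)

def adjust_brightness(img, factor):
    """Scale pixel intensities (factor < 1 darkens the image)."""
    return np.clip(img * factor, 0, 255)

def adjust_contrast(img, factor):
    """Scale intensities around the mean (factor < 1 flattens contrast)."""
    mean = img.mean()
    return np.clip(mean + (img - mean) * factor, 0, 255)

def invert(img):
    """Colour inversion on an 8-bit intensity scale."""
    return 255 - img

def add_gaussian_noise(img, sigma):
    """Add zero-mean Gaussian noise of standard deviation sigma."""
    return np.clip(img + rng.normal(0, sigma, img.shape), 0, 255)

# A random 16x16 'image' and three perturbed variants.
img = rng.uniform(0, 255, (16, 16))
dark = adjust_brightness(img, 0.4)
flat = adjust_contrast(img, 0.3)
noisy = add_gaussian_noise(img, 10)
```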
The loss function reached a minimum at epoch 16, leading to an accuracy of 92% (Fig. 4c).
We then used the new model to predict AM fungal colonisation on independent images (Fig. 4d, S10). By contrast with the original pre-trained model, the new version accurately labelled colonised root segments, roots and background in all the tested conditions (Fig. 4d, S10).
Hence, adjusting general image parameters is essential to achieve optimal AMFinder performance without the need for computer-intensive calculations. A guide is provided to troubleshoot the most frequent prediction issues (Table S1). Training further allows AMFinder to analyse highly dissimilar datasets, suggesting our software can adapt to any image type independent of fungal staining method and imaging system.

AMFinder performs consistently on multiple host model species.
A wide range of plants is used in endomycorrhiza research, including legumes and monocot species with various root sizes and morphologies. We assessed the suitability of CNN models pre-trained on N. benthamiana root images to predict AM fungal colonisation and intraradical hyphal structures on colonised root images from Lotus japonicus cv. Gifu (Fig. 5a), Medicago truncatula ecotype R108 (Fig. 5b), and Oryza sativa cv. Nipponbare (Fig. 5c).
The image contrast of ClearSee-treated L. japonicus and M. truncatula roots was similar to that of N. benthamiana. Conversely, large lateral roots of O. sativa showed a higher background (Fig. 5c). AMFinder correctly identified roots and background in all tested images, with colonised and non-colonised root areas being accurately resolved (Fig. 5a-c), including in cases where colonisation was restricted to inner cortical cell files (Fig. 5a). All types of intraradical hyphal structures were accurately recognised (Fig. 5a-c). In addition, CNN2 identified hyphopodia in four tiles, although only one was a bona fide prediction (Fig. 5b-c).

AMFinder enables in silico quantification of AM fungal colonisation dynamics
We next investigated whether AMFinder could be used to reliably quantify changes in AM colonisation of plant roots over time. To that end, we assessed the extent of AM fungal colonisation on N. benthamiana roots harvested after a 4- or 6-week co-cultivation with R. irregularis (Fig. 6). First, we monitored the accumulation of transcripts encoding a N. benthamiana ortholog of the mycorrhiza-responsive gene MtBCP1b (Parádi et al., 2010) (Fig. 6a) and quantified fungal biomass by monitoring R. irregularis EF1α transcript levels (Fig. 6b). Both methods showed a significant, two- to three-fold increase in fungal content at 6 wpi compared to 4 wpi (Fig. 6a, b). Then, using the grid-line intersect method (Giovannetti & Mosse, 1980), we studied the colonisation extent within randomly sampled root fragments (Fig. 6c). Consistent with the molecular analysis, more colonisation was observed at 6 wpi. The analysis of the same samples with AMFinder gave similar results (Fig. 6d). We then tested AMFinder's ability to predict fungal colonisation on low-resolution flatbed scanner pictures of whole root systems (Fig. 6e). Consistent with random sampling and molecular data, AM fungal colonisation levels were significantly higher at 6 wpi, although the colonisation extent values were lower than those obtained through random sampling (Fig. 6e). Thus, AMFinder allows for in silico quantification of AM fungal colonisation of plant roots over time, including in whole root systems.

(e) Quantification of AM fungal colonisation on whole root systems using AMFinder, and representative images of computer-generated maps featuring colonised (M+, magenta) and non-colonised (M−, grey) root areas at 4 and 6 wpi. Dots correspond to biological replicates. Bars represent standard error. Statistical significance was assessed using Student's t-test (*: p < 0.05; **: p < 0.01).
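As an illustration, a colonisation percentage can be derived from tile annotation counts as the fraction of root tiles labelled M+, with background tiles excluded, mirroring how grid-intersect scoring only considers points that fall on a root. The counts below are made up, and the exact formula used by amf is not reproduced here:

```python
def colonisation_percentage(tile_counts):
    """Per-image colonisation: M+ tiles as a percentage of all root tiles.
    Background tiles are excluded from the denominator."""
    root_tiles = tile_counts['M+'] + tile_counts['M-']
    if root_tiles == 0:
        return 0.0
    return 100.0 * tile_counts['M+'] / root_tiles

# Hypothetical annotation counts for two harvest time points.
at_4wpi = {'M+': 120, 'M-': 480, 'background': 2400}
at_6wpi = {'M+': 310, 'M-': 290, 'background': 2400}

pct_4wpi = colonisation_percentage(at_4wpi)
pct_6wpi = colonisation_percentage(at_6wpi)
```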

Discussion
We developed the software AMFinder, which uses two convolutional neural networks to annotate and quantify AM fungi in plant roots. AMFinder performs consistently well on root images of several model plant species used in endosymbiosis research. AMFinder-mediated quantification of AM fungal colonisation gives similar results to those obtained using current standard counting methods. We further show that AMFinder can process whole root systems using low-resolution flatbed scans obtained from an optimised ink-staining protocol which relies on ClearSee as a contrast enhancer. We illustrate the usefulness of this approach to study fungal colonisation dynamics over time in wild-type and mutant plants.
AMFinder deploys on Microsoft Windows, macOS and GNU/Linux and is compatible with installation in a virtual environment. The training and prediction tool amf is implemented in Python (van Rossum & Drake, 2009; Srinath, 2017) to benefit from widely used machine learning libraries (Chollet et al., 2015; Abadi et al., 2016). Its command-line interface is suitable for batch processing and makes it compatible with high-performance computing clusters. The graphical interface amfbrowser is implemented in OCaml for enhanced language expressiveness and performance (Leroy et al., 2020). These tools constitute a highly flexible tool suite that fits many computing systems and experimental setups.
AMFinder's design adequately addresses limitations arising from its computer vision approach while still enabling a low- to medium-throughput workflow. Specifically, we implemented a semi-automatic pipeline that requires user supervision of computer predictions. High-throughput AMFinder analyses, such as large-scale field experiments, would require the entire prediction pipeline to be fully automatic, including the conversion to annotations. However, input image parameters can influence pre-trained model accuracy and may require user adjustments. Automatic analyses assuming image suitability without quality control may overestimate CNN model accuracy. Besides, CNN2 predictions on mispredicted M+ tiles without intraradical hyphal structures have not been investigated in this study. This AMFinder implementation does not discriminate between multiple root types. Therefore, the quantification of AM fungal colonisation may be affected by highly colonised contaminants.
Image data from experiments relying on crude inoculum (Habte & Byappanhalli, 1998) or nurse root systems such as chives (Demchenko et al., 2004) as an inoculation method may pose problems when contaminating root fragments remain in root images. We have trained and tested AMFinder on ink-vinegar stained N. benthamiana roots. Ink-vinegar is an inexpensive, non-toxic fungal staining method compatible with various plant and mycobiont species (Vierheilig et al., 1998). Thus, pre-trained CNNs generated from ink-stained roots ensure immediate workability of AMFinder for most endosymbiosis host systems without the need to generate manually annotated training datasets. However, AMFinder can be trained using datasets obtained using other dyes and fluorophores for fungal staining (Vierheilig et al., 2005) or for the annotation of other tissues colonised by fungi such as liverwort thalli (Ligrone et al., 2007; Carella & Schornack, 2018; Kobae et al., 2019). Computer-intensive computations required for ab initio training can be avoided by refining the existing pre-trained networks. Thus, this software is highly versatile and can be adapted to a wide range of fungal colonisation studies; it may also be of interest to researchers studying pathogenic fungi.
AMFinder's precision is similar on all intraradical hyphal structures. However, specificity is best on vesicles, likely because AM fungal vesicles are fairly invariant, globular-shaped structures surrounded by a thick, multilayered wall (Jabaji-Hare et al., 1990) that result in high contrast signals within the surrounding plant tissues. By contrast, the arbuscular shape is more diverse, with branching extent and cell volume occupancy increasing during the initial development stages (Toth & Miller, 1984) and a size that is ultimately defined by host cell size. Intraradical hyphae show different diameters, orientations, and staining intensities, and occasionally overlay other intraradical structures. Besides, the limited pixel information of a single tile may not always discriminate between intraradical and extraradical hyphae. An approach using information from a wider area of the original image, rather than treating each tile in isolation, may help address this issue. In particular, it would be interesting to apply deep learning image segmentation techniques (Ghosh et al., 2019) to this problem, as researchers have often found success with this approach in other types of biological imaging.
Another possible issue is that convolutional neural networks do not retain relative spatial information (Patrick et al., 2019). Solutions to overcome this limitation include the combination of convolutional neural networks and multi-layer perceptrons (Haldekar et al., 2017), and capsule networks (CapsNets) (Sabour et al., 2017;Patrick et al., 2019). Future work will explore the usefulness of such approaches to achieve even higher prediction accuracy.
Obtaining contrasted fungal structures within root tissues is pivotal for accurate AMFinder predictions. The first report of ink-vinegar staining of AM fungi suggests that black and blue inks allow for high-contrast images in at least four plant species (Vierheilig et al., 1998). Background destaining in tap water with a few vinegar droplets requires at least 20 min of incubation and is only effective against excess ink (Vierheilig et al., 1998). By contrast, ClearSee treatment works in seconds and allows for both destaining and clearing (Kurihara et al., 2015). Such a feature is of particular interest for thick or pigmented roots, and for soil samples. Also, ClearSee preserves fluorescence (Kurihara et al., 2015) and is thus compatible with immunohistochemical fungal labelling techniques such as wheat germ agglutinin-fluorophore conjugates (Bonfante-Fasolo et al., 1990).
AMFinder can improve the robustness and reproducibility of AM fungal quantification. In the gridline-intersect method, gridlines have been primarily used as guides for the systematic selection of observation points (Giovannetti & Mosse, 1980), and the distance between adjacent lines has been studied to estimate the total root length, but not to improve quantification accuracy (Newman, 1966; Marsh, 1971; Giovannetti & Mosse, 1980).
As a result, a low number of root fragments is considered prejudicial to quantification accuracy (Giovannetti & Mosse, 1980). Also, the shape of the area surrounding the grid-root intersection used for visual scoring has not been formally described and may account for variations between experimenters. By contrast, AMFinder analyses well-defined tiles, and tile size can adjust to image resolution without impairing prediction accuracy.
Intraradical hyphal structures cannot be identified from flatbed scans due to limited resolution. However, machine learning-based algorithms have been recently developed to achieve data-driven image super-resolution (Park et al., 2003; Wang et al., 2019). By contrast with standard image interpolation techniques, super-resolution algorithms predict missing details by learning common patterns from training datasets. Future AMFinder development will investigate whether these algorithms can enable a detailed analysis of AM fungal hyphal structures from flatbed scans.

Conclusions
We have demonstrated that AMFinder adapts to different plant models, fungal staining methods and acquisition devices. Its design ensures user control over the annotation process and facilitates data visualisation in the context of the root images. As such, it will support better documentation and reproducibility of AM fungal colonisation analyses.

Figure S1. Schematic representation of AMFinder ConvNet architecture.
Figure S2. ClearSee enhances the contrast of ink-stained roots.
Figure S3. Optimisation of flatbed scanner resolution for imaging of ink-stained roots.
Figure S4. AMFinder enables a detailed analysis of trained model performance.
Figure S5. AMFinder prediction error rates.
Figure S6. Tile size does not affect AMFinder prediction accuracy.
Figure S7. Low-resolution image dataset used to assess CNN1 prediction accuracy.
Figure S8. High-resolution image dataset used to assess CNN1 prediction accuracy.
Figure S9. High-resolution image dataset used to assess CNN2 prediction accuracy.
Figure S10. AMFinder can label AM fungal colonisation on a wide range of input images.
Table S1. AMFinder troubleshooting guide.