We can also envision that the automated model will become more robust to systematic issues when trained with more micrographs of varying brightness, contrast, focus, and defect size scale, produced by different researchers for a range of materials, instruments, and imaging modes. We note that the model has been trained and evaluated on only one material.
For materials with similar-looking micrographs, we would expect good performance from the model. For materials whose defects only superficially resemble loops, however, performance would likely degrade without retraining on representative data. Nevertheless, we expect that, as models like this are further developed and exposed to orders of magnitude more data, transferability to new materials, microscopes, and irradiation conditions will improve. Speeds could potentially be increased enough to allow real-time image recognition, which could be embedded in the electron microscope system and provide in situ analysis directly on the monitor screen.
Such a system would enable researchers to adjust their characterizations in real time in response to the data. Although our approach was built primarily for loop defect recognition, creating new models to detect other types of defects or patterns in micrographs is worth exploring, provided that images containing those defects can be obtained experimentally or simulated with high fidelity.
Automated defect analysis in electron microscopic images
We propose that an online system in which any researcher can contribute annotated images for training the model could be an effective way to develop such models. We envision that automated image recognition can dramatically change the current microscopic characterization workflow by allowing orders of magnitude more images to be fully analyzed automatically and nearly instantly.
Data set collection was completed as part of a large-scale effort to characterize iron-chromium-aluminum (FeCrAl) materials neutron-irradiated in the High Flux Isotope Reactor at Oak Ridge National Laboratory. The data set comprises a series of published 40,41,42,43 and unpublished data. Collection was completed over 3 years and spans a range of FeCrAl alloys, including model, commercial, and engineering-grade alloys irradiated to light water reactor-relevant conditions.
Various techniques can image the elliptical loops of interest in this study, including two-beam and weak-beam dark-field imaging, 1,44 but these more conventional techniques are prone to background variation due to elastic contrast. Images were taken with varying magnification, detector resolution, and pixel dwell time to limit instrument bias in the data set. Negative images were taken in the same manner on unirradiated reference samples. For the data set used to train the cascade object detector, we used the open-source software ImageJ 39 to manually annotate the positions of loop defects in the microscope images.
We generated labeled files containing the bounding boxes of loops for each corresponding image. For the data set used to train the CNN, we applied the cascade object detector to the micrographs in the augmented training set and compared the detector predictions with the ground-truth labels.
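The comparison between detector predictions and ground-truth boxes can be sketched with an intersection-over-union (IoU) test. The `(x, y, width, height)` box format, the helper names, and the 0.5 threshold below are illustrative assumptions, not values taken from our pipeline:

```python
def iou(box_a, box_b):
    # Boxes as (x, y, width, height) -- a hypothetical convention for
    # this sketch, not necessarily the exact label-file layout we used.
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def split_predictions(predicted, ground_truth, threshold=0.5):
    """Split detector outputs into true positives (loop crops) and
    false positives (non-loop crops) by overlap with manual labels."""
    positives, negatives = [], []
    for box in predicted:
        if any(iou(box, gt) >= threshold for gt in ground_truth):
            positives.append(box)
        else:
            negatives.append(box)
    return positives, negatives
```

Crops collected this way give the CNN both classes of training examples without any extra manual annotation.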
Each micrograph image contained 10– loops. We grouped the correct predictions into a set of cropped images with loops and the false predictions into a set of cropped images without loops. Typically, a CNN takes images with red, green, and blue channels through an input layer of width by height by number of channels; the purpose of adding two more layers to each image was to provide more information about varying contrast levels or blurring. The cascade object detector used an integral image representation for fast feature extraction 16 from the original images, and it trained simple, efficient classifiers using the adaptive boosting (AdaBoost) method 8 to select a small number of important features.
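The integral image representation mentioned above reduces any rectangular pixel sum to four table lookups, which is what makes feature evaluation fast. A minimal pure-Python sketch, with `integral_image` and `box_sum` as hypothetical helper names:

```python
def integral_image(img):
    # img: 2-D list of pixel intensities.
    # ii[r][c] holds the sum of img[0..r-1][0..c-1] (one-pixel border of
    # zeros simplifies the lookups below).
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for r in range(h):
        row_sum = 0
        for c in range(w):
            row_sum += img[r][c]
            ii[r + 1][c + 1] = ii[r][c + 1] + row_sum
    return ii

def box_sum(ii, top, left, height, width):
    # Sum over any rectangle in O(1) via four lookups.
    b, r = top + height, left + width
    return ii[b][r] - ii[top][r] - ii[b][left] + ii[top][left]
```

After one O(width x height) pass to build the table, every candidate feature costs constant time regardless of its size.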
Successively more complex classifiers were then combined in a cascade structure, which dramatically increased detection speed by focusing attention on promising regions of the image. For feature extraction, the local binary pattern (LBP) 47 was used as the feature type to encode local edge information into feature vectors. LBP features were selected as a compromise between training time and detector performance, compared with the slower Haar 16 and the faster but less accurate histogram of oriented gradients (HOG) 6 feature extraction methods.
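The LBP encoding can be sketched in its common 8-neighbor form: each neighbor at least as bright as the center pixel sets one bit of an 8-bit code. This is a simplified illustration; the toolbox implementation differs in detail (it histograms such codes over cells rather than using raw per-pixel values):

```python
def lbp_code(img, r, c):
    """8-neighbor local binary pattern at pixel (r, c): each neighbor
    brighter than or equal to the center contributes one bit."""
    center = img[r][c]
    # Clockwise from top-left -- one common ordering; implementations vary.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code
```

Because the code depends only on intensity ordering, it is largely insensitive to the brightness and contrast variations present across our micrographs, which is part of why it suits this data set.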
We trained a multi-stage cascade object detector with the augmented training set and the negative images. We set the maximum false positive rate to 0. We used the cascade object detector training function provided by the MATLAB Computer Vision Toolbox. With the augmented training data set, training the cascade object detector on a 6-core Intel Xeon E CPU took about 6 days. The CNN was trained with the deep learning package in MATLAB, and we determined the final CNN structure from its performance on the validation set.
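Because a candidate window must pass every stage of the cascade, per-stage rates multiply across stages, which is what makes a cascade of weak stages strong overall. A small arithmetic sketch with purely illustrative numbers (the actual per-stage targets and stage count are not given here):

```python
def cascade_rates(per_stage_fpr, per_stage_tpr, n_stages):
    # A window is accepted only if it passes all stages,
    # so both rates compound multiplicatively.
    return per_stage_fpr ** n_stages, per_stage_tpr ** n_stages

# Illustrative values only: 20 stages, each passing half of the
# non-loops but 99.5% of the true loops.
fpr, tpr = cascade_rates(0.5, 0.995, 20)
```

Even a permissive 0.5 false positive rate per stage compounds to under one false alarm per million windows after 20 stages, while the true positive rate stays above 90%.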
We constructed our CNN from one input layer, three convolution layers, four nonlinear layers, three maximum pooling layers, two fully connected layers, one softmax layer, and one classification output layer. We repeated the block of convolution, ReLU, and maximum pooling layers twice.
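The spatial size shrinks at each convolution and pooling step; a small sketch that traces it through the repeated blocks, assuming an illustrative 64-pixel input, 3x3 kernels, and 2x2 non-overlapping pooling (none of which are specified above):

```python
def conv2d_out(size, kernel, stride=1, pad=0):
    # Standard output-size formula for a convolution layer.
    return (size + 2 * pad - kernel) // stride + 1

def trace_shapes(input_size, blocks, kernel=3, pool=2):
    """Trace the spatial size through repeated
    convolution -> ReLU -> max-pooling blocks.
    Kernel and pool sizes here are assumptions for illustration."""
    sizes = [input_size]
    s = input_size
    for _ in range(blocks):
        s = conv2d_out(s, kernel)  # convolution (ReLU preserves shape)
        s = s // pool              # non-overlapping max pooling
        sizes.append(s)
    return sizes
```

Tracing shapes like this before training is a quick sanity check that the feature map has not collapsed to zero size before reaching the fully connected layers.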
After three blocks of convolution-ReLU-max-pooling layers, we used a fully connected layer of 64 nodes followed by a ReLU layer, and then another fully connected layer of 2 nodes as the classification layer, with the classes loop and non-loop. For the final output node, we used the softmax function to give the output category for each input image.
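The softmax step can be sketched as follows; the logit values are made-up scores for the two classes, not outputs of our trained network:

```python
import math

def softmax(logits):
    # Subtract the max before exponentiating for numerical stability;
    # this shift leaves the resulting probabilities unchanged.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Two output nodes: hypothetical raw scores for "loop" vs. "non-loop".
probs = softmax([2.0, 0.5])
```

The two outputs sum to one, so they can be read directly as class probabilities, and the predicted category is simply the larger of the two.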
We started training the CNN from randomly initialized weights, feeding the data to the network in mini-batches. The cross-entropy loss for binary classification was minimized with stochastic gradient descent with momentum (SGDM), with the momentum set to 0.
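One SGDM update can be sketched on a toy one-parameter problem; the learning rate and momentum values below are placeholders for illustration, not the values used in our training:

```python
def sgdm_step(w, grad, velocity, lr=0.05, momentum=0.9):
    """One stochastic-gradient-descent-with-momentum update:
    the velocity accumulates a decaying sum of past gradients."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w, v = 0.0, 0.0
for _ in range(200):
    w, v = sgdm_step(w, 2 * (w - 3), v)
```

The velocity term smooths noisy mini-batch gradients and speeds progress along consistent descent directions, which is why SGDM is a common default for CNN training.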
The initial learning rate was 0. Considering human mistakes in the training-set labeling, we set the training accuracy stopping threshold to 0. The training images and the source code that support the findings of this study are available in the Materials Data Facility with the identifier doi.

References

- Jenkins, M.
- Zinkle, S. Mater. Today 12, 12–19.
- Jesse, S. Big data analytics for scanning transmission electron microscopy ptychography.
- Lowe, D. Object recognition from local scale-invariant features. In Proc.
- Zhao, J. A method for detection and classification of glass defects in low resolution images.
- Dalal, N. Histograms of oriented gradients for human detection.
- Yu, K. Deep learning: yesterday, today, and tomorrow.
- Viola, P. Rapid object detection using a boosted cascade of simple features.
- Belongie, S. Shape matching and object recognition using shape contexts. IEEE Trans. Pattern Anal.
- Felzenszwalb, P. Object detection with discriminatively trained part-based models.
- Ren, S. In Neural Information Processing Systems.
- LeCun, Y. Deep learning. Nature.
- Robust real-time face detection.
- Handwritten digit recognition with a back-propagation network.
- Yang, Z.
- Sharifara, A. A general review of human face detection including a study of neural networks and Haar feature-based cascade classifier in face detection.
- Jiang, H. Face detection with the faster R-CNN.
- Goodfellow, I. Deep Learning.
- Zeiler, M. Visualizing and understanding convolutional networks. In European Conference on Computer Vision.
- Simonyan, K. Very deep convolutional networks for large-scale image recognition.
- Szegedy, C. Rethinking the Inception architecture for computer vision.
- Long, J. Fully convolutional networks for semantic segmentation.
- He, K. Mask R-CNN.
- Kreshuk, A. Automated detection and segmentation of synaptic contacts in nearly isotropic serial electron microscopy images.
- Ciresan, D. Deep neural networks segment neuronal membranes in electron microscopy images.
- Mitosis detection in breast cancer histology images with deep neural networks.
- Lubbers, N. Inferring low-dimensional microstructure representations using convolutional neural networks.