Classification of white blood cells using a neural network


White blood cells (leukocytes) in the human immune system protect against infection. They include neutrophils, eosinophils, basophils, monocytes, and lymphocytes, each making up a characteristic share of the total and performing specific functions. Traditionally, the clinical laboratory procedure of counting the specific types of white blood cells has been an integral part of a general blood test used to monitor a person's health.

Thanks to advances in deep learning, blood cell images can be classified quickly and with high accuracy using various algorithms. In 2022, Thinam Tamang, Sushish Baral and May Phu Paing, scientists from the US, conducted a comparative study on the performance of different convolutional neural network (CNN) architectures.

Which architecture performed best in the comparison, and what made it possible to use it?

Convolutional neural networks are considered among the most advanced models for various computer vision applications, including medical ones. Compared to other networks, CNNs have shown higher gains (according to a 2014-2020 survey by Russian scientists).

CNNs have a degree of feature invariance that allows them to interpret images at multiple levels, from concrete local features to abstract characteristics, so that, for example, an image with scattered facial features is still recognized by a CNN as a person. Convolution is a feature extraction operation that uses a kernel of a fixed size. The kernel is slid across the image in steps (strides) that are set when the architecture is implemented, producing a feature map.
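As a rough sketch of the convolution step described above, the following minimal NumPy example slides a 3×3 kernel over a 6×6 image with stride 1, producing a 4×4 feature map (the image and kernel values are arbitrary, chosen only for illustration):

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Slide a kernel over the image in steps of `stride`, producing a feature map."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh = (ih - kh) // stride + 1   # output height
    ow = (iw - kw) // stride + 1   # output width
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)  # elementwise multiply, then sum
    return out

image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[1., 0., -1.]] * 3)  # a simple vertical-edge detector
fmap = conv2d(image, kernel, stride=1)
print(fmap.shape)  # (4, 4)
```

With a larger stride the kernel skips positions, so the resulting feature map shrinks accordingly.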

Once the feature map is extracted, a pooling operation is used to reduce its size. The result is then flattened, and a fully connected layer of the convolutional network is formed. The image is finally categorized using an output layer that assigns the probability of the image belonging to each of several classes.
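The pooling, flattening, and classification steps can be sketched in the same minimal NumPy style (the layer sizes and the five-class output are illustrative assumptions, not values from the study):

```python
import numpy as np

def max_pool2d(fmap, size=2):
    """Downsample the feature map by taking the max in each size x size window."""
    h, w = fmap.shape
    cropped = fmap[:h // size * size, :w // size * size]
    return cropped.reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    """Turn raw scores into probabilities that sum to 1."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
fmap = rng.standard_normal((4, 4))   # feature map from a convolution layer
pooled = max_pool2d(fmap)            # (2, 2): smaller map, salient features kept
flat = pooled.reshape(-1)            # flatten to a vector of length 4
W = rng.standard_normal((5, 4))      # fully connected layer for 5 cell classes
probs = softmax(W @ flat)            # per-class probabilities
print(pooled.shape, probs.shape)     # (2, 2) (5,)
```

The class with the highest probability in `probs` is taken as the predicted cell type.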

To deal with blood composition images, 10 CNN architectures were taken for "retraining" by transfer learning (transferring knowledge learned on one dataset to another, in this case to blood cell data): AlexNet, DenseNet-121, DenseNet-161, ResNet-18, ResNet-34, ResNet-50, SqueezeNet 1.0, SqueezeNet 1.1, VGG-11, and VGG-13.

The models were compared with each other on trainable parameter count, average time taken, and accuracy. Compared to the other architectures, DenseNet-161 performed best on the white blood cell recognition task, achieving significantly higher accuracy (1.0) while processing 28,744,896 trainable parameters at a time cost of 4:24 minutes.