Figure 1. a) Schematic of the CAE. The CAE receives an IR spectrum as
input and tries to reproduce this input data. The pink convolutional
layers have a kernel size of 10 with 32, 64, and 32 filters. An
additional Dropout layer (20% dropout rate) was added to prevent
overfitting. The green layer describes a fully connected dense layer
with 24 neurons. The blue part of the network consists of three
upscaling convolutional layers with kernel sizes of 3, 5, and 4,
respectively, and 64, 128, and 55 filters. In total, the network has
800,863 parameters. b) The green layers represent
the two added dense layers with 10 neurons for the classification tasks.
The grey squares represent output neurons. The number of output neurons
depends on the number of classes to be distinguished. The number of
trainable parameters is 261.
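To make the architecture in Figure 1a concrete, the following is a minimal Keras sketch that follows the layer sizes given in the caption. The spectrum length, activations, padding, strides and the final projection layer are assumptions and will not reproduce the exact parameter count of the original model.

# Minimal sketch of the CAE in Figure 1a; layer widths follow the caption,
# everything else (input length, activations, padding, strides) is assumed.
import tensorflow as tf
from tensorflow.keras import layers, models

n_points = 450  # assumed number of wavenumber points per IR spectrum

inputs = layers.Input(shape=(n_points, 1))

# Encoder: three Conv1D layers, kernel size 10, with 32, 64 and 32 filters
x = layers.Conv1D(32, 10, activation="relu", padding="same")(inputs)
x = layers.Conv1D(64, 10, activation="relu", padding="same")(x)
x = layers.Conv1D(32, 10, activation="relu", padding="same")(x)
x = layers.Dropout(0.2)(x)      # 20% dropout against overfitting
x = layers.Flatten()(x)
bottleneck = layers.Dense(24, activation="relu", name="bottleneck")(x)

# Decoder: three transposed ("upscaling") Conv1D layers with kernel sizes
# 3, 5 and 4 and 64, 128 and 55 filters, followed by a 1x1 projection back
# to a single channel so the output matches the input spectrum.
y = layers.Dense(n_points, activation="relu")(bottleneck)
y = layers.Reshape((n_points, 1))(y)
y = layers.Conv1DTranspose(64, 3, activation="relu", padding="same")(y)
y = layers.Conv1DTranspose(128, 5, activation="relu", padding="same")(y)
y = layers.Conv1DTranspose(55, 4, activation="relu", padding="same")(y)
outputs = layers.Conv1D(1, 1, activation="linear")(y)

cae = models.Model(inputs, outputs, name="cae")
cae.compile(optimizer="adam", loss="mse")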
This final part of the network (Figure 1b) will be trained to classify
the different lymphoma subtypes and the normal (reactive) control. It
uses the pre-trained feature detection of the first part of the
autoencoder. With that, the number of parameters to be fitted is reduced
to 253 parameters.
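A corresponding sketch of the classifier part in Figure 1b, continuing the code above: the pre-trained encoder (input through the 24-neuron bottleneck) is reused and frozen, so only the newly added dense layers are fitted. The number of output neurons, the distribution of the 10 neurons over the two added dense layers, and the loss function are assumptions.

# Classifier head on top of the frozen, pre-trained encoder (Figure 1b);
# reuses `cae` and `n_points` from the sketch above.
from tensorflow.keras import layers, models

encoder = models.Model(cae.input, cae.get_layer("bottleneck").output)
encoder.trainable = False   # keep the pre-trained feature detection fixed

n_classes = 4               # e.g. rLN, FL, DLBCL non-GCB, DLBCL GCB

clf_in = layers.Input(shape=(n_points, 1))
features = encoder(clf_in, training=False)
h = layers.Dense(10, activation="relu")(features)   # added dense layers
h = layers.Dense(10, activation="relu")(h)
clf_out = layers.Dense(n_classes, activation="softmax")(h)  # output neurons

classifier = models.Model(clf_in, clf_out, name="cae_classifier")
classifier.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])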
Labelled training and test data are required to train the classifier
part of the neural network. Here, areas of interest in lymphoma tissue
(FL = follicular and intrafollicular area; DLBCL, non-GCB and DLBCL, GCB
subtype) and rLN were labelled, and the corresponding spectra were
extracted. One sample served as a training set, and the other as an
evaluation set. The labelling procedure of the training data is depicted
in Figure 2.
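As a hypothetical illustration of this training/evaluation split, continuing the sketches above, the labelled spectra of one sample fit the network and the spectra of the second sample serve for evaluation; the array names, shapes and training settings below are placeholders, not values from the study.

# Placeholder arrays standing in for the extracted, labelled spectra
import numpy as np

X_train = np.random.rand(1000, n_points, 1)      # spectra of the training sample
y_train = np.random.randint(0, n_classes, 1000)  # class label per spectrum
X_eval = np.random.rand(800, n_points, 1)        # spectra of the evaluation sample
y_eval = np.random.randint(0, n_classes, 800)

# 1) unsupervised pre-training: the CAE learns to reconstruct the spectra
cae.fit(X_train, X_train, epochs=10, batch_size=64)

# 2) supervised training of the classifier head on the labelled spectra,
#    evaluated on the spectra of the second sample
classifier.fit(X_train, y_train, epochs=20, batch_size=64,
               validation_data=(X_eval, y_eval))
loss, accuracy = classifier.evaluate(X_eval, y_eval)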