The selection of which layers to fine-tune during transfer learning strongly influences a pre-trained model's accuracy and adaptation in a new target domain. However, layer selection is still performed manually and without clearly defined criteria. Selecting the wrong layers of a neural network can lead to poor accuracy and weak model generalisation in the target domain. This paper introduces the use of Kullback-Leibler (KL) divergence over the weight correlations of the model's convolutional neural network layers. The approach identifies the positive and negative weights in the ImageNet initial weights and selects the layers of the network best suited for fine-tuning according to their correlation divergence. We experiment on four publicly available datasets and four ImageNet pre-trained models that have been used in past studies, allowing direct comparison of results. The proposed approach yields higher accuracy than standard fine-tuning baselines, by margins of 10.8% to 24%, thereby leading to better model adaptation for target transfer learning tasks.
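To make the idea concrete, the sketch below shows one plausible reading of the procedure described above: for each convolutional layer of an ImageNet-pretrained backbone, split the pairwise filter-weight correlations into positive and negative parts, measure the KL divergence between their distributions, and fine-tune only the layers with the largest divergence. The choice of ResNet-50, the histogram binning, and the top-k selection rule are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch (not the authors' reference implementation) of ranking conv
# layers by the KL divergence between positive and negative weight correlations.
import numpy as np
import torch
import torchvision.models as models

def correlation_divergence(weight: torch.Tensor, bins: int = 50) -> float:
    """KL divergence between histograms of positive and negative pairwise
    filter correlations of a conv layer (assumed formulation)."""
    # Flatten each output filter into a vector and correlate filters pairwise.
    w = weight.detach().reshape(weight.shape[0], -1).numpy()
    corr = np.corrcoef(w)
    off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]

    pos = off_diag[off_diag > 0]
    neg = -off_diag[off_diag < 0]  # compare magnitudes on a common support
    if pos.size == 0 or neg.size == 0:
        return 0.0

    edges = np.linspace(0.0, 1.0, bins + 1)
    p, _ = np.histogram(pos, bins=edges)
    q, _ = np.histogram(neg, bins=edges)
    eps = 1e-12
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))  # KL(P || Q)

# Score every conv layer of an ImageNet-pretrained backbone (ResNet-50 here
# is only an example; the paper evaluates four pre-trained models).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
scores = {
    name: correlation_divergence(m.weight)
    for name, m in model.named_modules()
    if isinstance(m, torch.nn.Conv2d)
}

# Fine-tune the layers with the largest divergence and freeze the rest
# (the top-10 cut-off is an assumption made for illustration).
to_finetune = {n for n, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:10]}
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Conv2d):
        module.weight.requires_grad_(name in to_finetune)
```

In this reading, a large divergence between the positive and negative correlation distributions marks a layer whose ImageNet weights are least aligned with a generic feature profile and therefore a candidate for fine-tuning; the actual selection criterion is defined in the paper itself.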