Title: Visualizing Each and Every Layer in a CNN Trained for Receiver Classification
Author(s): Xin Zhang, Xingqun Zhan, Ruopu Liu, Hang Guo
Published in: Proceedings of the 2018 International Technical Meeting of The Institute of Navigation
January 29 - February 1, 2018
Hyatt Regency Reston
Reston, Virginia
Pages: 283 - 291
Cite this article: Zhang, Xin, Zhan, Xingqun, Liu, Ruopu, Guo, Hang, "Visualizing Each and Every Layer in a CNN Trained for Receiver Classification," Proceedings of the 2018 International Technical Meeting of The Institute of Navigation, Reston, Virginia, January 2018, pp. 283-291.
Abstract: A feature selection method is proposed here. The selected feature set is used to train a convolutional neural network (CNN) to distinguish different GNSS receivers, where ‘different’ means ‘produced by different manufacturers’, ‘with different accuracy and precision performance’, or even ‘with different firmware versions for two products under the same designation’. The most effective feature sets are those that produce the highest classification accuracy. We then visualize the output of each and every layer of the trained model. This is meaningful because what the individual layers of a CNN are doing usually cannot be deliberately designed, i.e. CNN-based methods are often treated as a ‘black box’. If, however, we can visualize what each layer is actually doing, we can adjust the network structure to minimize computation, facilitating deployment on computation-constrained platforms. The results show that test accuracy reaches up to 87.1% after 40 training epochs using LeNet-5, while on the same dataset a linear support vector machine (SVM) reaches only 70% test accuracy.
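The paper's LeNet-5 configuration and feature sets are not reproduced on this page, but the per-layer visualization idea can be sketched in a minimal, framework-free way: run a forward pass and record every intermediate activation map so each can later be rendered as an image. The function names (`conv2d`, `forward_with_taps`), the toy input size, and the random filters below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def conv2d(x, kernels):
    """Valid-mode 2-D convolution of a single-channel image with a filter bank.

    x: (H, W) input; kernels: (K, kh, kw).
    Returns K activation maps of shape (H - kh + 1, W - kw + 1).
    """
    K, kh, kw = kernels.shape
    H, W = x.shape
    out = np.zeros((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(x[i:i + kh, j:j + kw] * kernels[k])
    return out

def forward_with_taps(x, kernels):
    """Toy conv -> ReLU pipeline that records each layer's output
    ("taps") so the activation maps can be inspected or plotted
    (e.g. with matplotlib's imshow), one panel per filter."""
    taps = {}
    taps["conv1"] = conv2d(x, kernels)           # raw convolution responses
    taps["relu1"] = np.maximum(taps["conv1"], 0)  # rectified activations
    return taps

# Hypothetical example: an 8x8 feature image and two 3x3 filters.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
kernels = rng.standard_normal((2, 3, 3))
taps = forward_with_taps(x, kernels)
print({name: t.shape for name, t in taps.items()})
```

In a real framework the same effect is obtained without rewriting the forward pass, e.g. by registering forward hooks on each layer, but the principle is identical: capture every intermediate tensor, then display it to see which filters respond to which input features.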