HyTra: Hyperclass Transformer for WiFi Fingerprinting-based Indoor Localization
DOI: https://doi.org/10.32397/tesea.vol5.n1.542

Keywords: Deep learning, Transformer Neural Networks, WiFi Fingerprinting, Indoor Localization

Abstract
The emerging demand for a variety of novel Location-based Services (LBS) by consumers and industrial users is driven by the rapid and extensive proliferation of mobile smart devices. Sensors embedded in smart devices or machines provide wireless connectivity and Global Positioning System (GPS) capability, and are co-utilized to acquire location-linked data which are algorithmically transformed into reliable and accurate location estimates. GPS is a mature and reliable technology for outdoor localization, but indoor localization in a complex multi-storey building environment remains challenging due to fluctuations in wireless signal strength arising from multipath fading. Location-linked data from wireless access points (WAPs), such as received signal strength (RSS), are acquired as numerical sequences. By conceptualizing a fixed-order sequence of WAP measurements as a sentence in which the RSS value from each WAP is a word, we can leverage recent advances in artificial intelligence for natural language processing (NLP) to enhance localization accuracy and improve robustness against signal fluctuations. We propose the Hyperclass Transformer (HyTra), an encoder-only Transformer neural network which learns the relative positions of wireless access points (WAPs) through multiple learnable embeddings. We propose a second network, HyTra-HF, which improves upon HyTra by applying a hierarchical relationship between location classes. We test our proposed networks on public and private datasets of varying sizes. HyTra-HF outperforms existing deep learning solutions, obtaining 96.7% accuracy on the floor classification task on the UJIIndoorLoc dataset. HyTra-HF is amenable to deep model compression: using Sparsity Aware Orthogonal (SAO) initialization, it achieves 95.95% accuracy with an over ten-fold reduction in model size, the best-in-class accuracy among sparse models.
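The "RSS sequence as a sentence" idea from the abstract can be illustrated with a minimal sketch: each WAP plays the role of a word position, and its scalar RSS reading is embedded and summed with a learnable per-WAP embedding before entering a Transformer encoder. All names, dimensions, and values below are illustrative assumptions, not details taken from the HyTra implementation.

```python
import numpy as np

# Hypothetical sketch of fingerprint tokenization (NOT the paper's code).
rng = np.random.default_rng(0)

n_waps = 5    # number of wireless access points in the fingerprint
d_model = 8   # embedding dimension fed to the Transformer encoder

# One fingerprint: RSS (dBm) measured from each WAP, in a fixed order.
rss = np.array([-45.0, -67.0, -80.0, -52.0, -71.0])

# Learnable per-WAP identity embedding (randomly initialized here),
# analogous to a positional embedding in NLP.
wap_embedding = rng.standard_normal((n_waps, d_model)) * 0.02

# Learned linear map projecting each scalar RSS value into d_model
# dimensions (random weights stand in for trained ones).
value_proj = rng.standard_normal((1, d_model)) * 0.02

# Token sequence = value embedding + WAP embedding, one token per WAP,
# ready for an encoder-only Transformer followed by a classifier head.
tokens = rss[:, None] @ value_proj + wap_embedding

print(tokens.shape)  # (5, 8): one d_model-dimensional token per WAP
```

In this framing, signal fluctuations perturb only the value component of each token, while the learned WAP embeddings preserve the identity and relative arrangement of the access points.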
License
Copyright (c) 2024 Muneeb, Kiara, Ibrahima Faye, Tong Boon Tang, Mazlaini Yahya, Afidalina Tumian, Eric Tatt Wei Ho
This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution 4.0 International License, which allows others to share the work with an acknowledgment of the work's authorship and initial publication in this journal.