A Lightweight Transformer Model With High-Throughput for Image Compression in 6G-Enabled Intelligent Transportation Systems
Xingchi Chen et al.
Abstract
In 6G-enabled intelligent transportation systems (ITS), every intelligent transportation terminal must perform long-distance, low-latency image interaction to ensure real-time information exchange, including real-time vehicular environmental images and various vehicular media images. However, owing to their high computational cost and heavy resource consumption, many learning-driven image compression models are difficult to deploy on intelligent transportation terminals, such as edge devices and connected-vehicle terminals, where they would otherwise reduce the transmission resources consumed by the massive volume of ITS image data. To address these problems, this paper proposes a high-throughput, lightweight Transformer model for image compression on ITS intelligent transportation terminals. The model's computational cost is reduced by constructing a lightweight Transformer combination and by interleaving convolutional blocks to reduce the number of Transformer blocks needed. Furthermore, this paper proposes a lightweight Swin Transformer module that further cuts the computation required by directly connecting window multi-head self-attention (W-MSA) and shifted-window multi-head self-attention (SW-MSA). In addition, this paper designs a simplified entropy model that shortens the entropy model's execution time and improves the throughput of the overall network by directly fusing the latent representations of the images. Experimental results on two public ITS datasets show that the proposed model achieves significantly better throughput and execution time than state-of-the-art models.
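To make the "directly connecting W-MSA and SW-MSA" idea concrete, the following is a minimal single-head NumPy sketch of such a block. It is an illustrative assumption, not the paper's implementation: the function names, the omission of the intermediate MLP, and the retained residual connections are all hypothetical, and the shifted windows are realized with a cyclic roll as in the original Swin Transformer design.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_attention(x, win, Wq, Wk, Wv):
    # x: (H, W, C). Partition into non-overlapping win x win windows and
    # run single-head self-attention independently inside each window.
    H, W, C = x.shape
    out = np.empty_like(x)
    for i in range(0, H, win):
        for j in range(0, W, win):
            patch = x[i:i + win, j:j + win].reshape(win * win, C)
            q, k, v = patch @ Wq, patch @ Wk, patch @ Wv
            attn = softmax(q @ k.T / np.sqrt(C))
            out[i:i + win, j:j + win] = (attn @ v).reshape(win, win, C)
    return out

def lightweight_swin_block(x, win, params):
    # Hypothetical lightweight block: W-MSA feeds SW-MSA directly,
    # with no MLP sub-layer in between (one reading of the abstract's
    # "directly connecting" W-MSA and SW-MSA). Residuals are kept.
    x = x + window_attention(x, win, *params["wmsa"])
    # Cyclic shift so the second attention pass sees shifted windows.
    shifted = np.roll(x, shift=(-win // 2, -win // 2), axis=(0, 1))
    shifted = shifted + window_attention(shifted, win, *params["swmsa"])
    # Undo the shift to restore spatial alignment.
    return np.roll(shifted, shift=(win // 2, win // 2), axis=(0, 1))
```

Dropping the per-window MLP halves the sub-layers per block, which is one plausible source of the reduced computational cost the abstract claims; the real model's layer layout may differ.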