Visual Encoding Method for Semantic Mapping with Federated Learning Concept
| Title | Visual Encoding Method for Semantic Mapping with Federated Learning Concept |
| Publication Type | Conference Proceedings |
| Year of Publication | 2025 |
| Authors | Sobczak Ł, Biernacki P, Domańska J |
| Conference Name | MobiHoc '25: Proceedings of the Twenty-sixth International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing |
| Pagination | 428–435 |
| Publisher | ACM |
| Abstract | We present a visual encoding method for semantic mapping in indoor environments, designed to minimize redundancy in image data and support federated learning across a fleet of service robots. Our pipeline combines 2D LiDAR-based segmentation with RGB image filtering based on geometric orientation, distance, visibility, and uniqueness. The result is a compact set of representative visual samples suitable for downstream semantic tasks such as object recognition or language grounding. We evaluate our method in a Gazebo simulation using a TurtleBot platform and compare it against a naive odometry-based sampling strategy. Our approach achieves up to 57.5% reduction in collected images while preserving scene coverage. Additionally, we demonstrate how multiple robots can collaboratively improve the visual map in a federated setup, reducing collection time and enabling model generalization across diverse environments. The proposed method offers an efficient and scalable solution for semantic mapping under bandwidth and computation constraints. |
| URL | https://doi.org/10.1145/3704413.3765513 |
| DOI | 10.1145/3704413.3765513 |
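
The abstract describes filtering RGB frames by geometric orientation, distance, visibility, and uniqueness to reduce redundancy before federated training. The sketch below illustrates the general idea of such a redundancy filter; the class name, thresholds, and histogram-based uniqueness test are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of a frame-redundancy filter in the spirit of the
# abstract: keep an RGB frame only if the robot's pose and the image's
# appearance differ enough from previously accepted samples.
import numpy as np


class FrameFilter:
    def __init__(self, min_dist=0.5, min_yaw=0.35, min_uniqueness=0.15):
        self.min_dist = min_dist              # metres between accepted poses (assumed)
        self.min_yaw = min_yaw                # radians of heading change (assumed)
        self.min_uniqueness = min_uniqueness  # appearance-distance threshold (assumed)
        self.kept = []                        # list of (pose, histogram) tuples

    @staticmethod
    def _histogram(image):
        # Coarse grayscale histogram as a cheap appearance descriptor.
        hist, _ = np.histogram(image, bins=32, range=(0, 255))
        return hist / max(hist.sum(), 1)

    def accept(self, pose, image):
        """pose = (x, y, yaw); image = HxW uint8 array. Returns True if kept."""
        hist = self._histogram(image)
        for (px, py, pyaw), phist in self.kept:
            moved = np.hypot(pose[0] - px, pose[1] - py)
            # Wrap the heading difference into [-pi, pi] before comparing.
            turned = abs(np.arctan2(np.sin(pose[2] - pyaw),
                                    np.cos(pose[2] - pyaw)))
            # Total-variation distance between normalized histograms, in [0, 1].
            similar = 0.5 * np.abs(hist - phist).sum() < self.min_uniqueness
            # Reject frames that are nearby, similarly oriented, and look alike.
            if moved < self.min_dist and turned < self.min_yaw and similar:
                return False
        self.kept.append((pose, hist))
        return True
```

Each robot in a fleet could run such a filter locally and contribute only accepted frames to shared model updates, which matches the bandwidth- and computation-constrained federated setting the abstract targets.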
