Visual Encoding Method for Semantic Mapping with Federated Learning Concept

Title: Visual Encoding Method for Semantic Mapping with Federated Learning Concept
Publication Type: Conference Proceedings
Year of Publication: In Press
Authors: Sobczak Ł, Biernacki P, Domańska J
Conference Name: 26th International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing
Abstract

We present a visual encoding method for semantic mapping in indoor environments, designed to minimize redundancy in image data and support federated learning across a fleet of service robots. Our pipeline combines 2D LiDAR-based segmentation with RGB image filtering based on geometric orientation, distance, visibility, and uniqueness. The result is a compact set of representative visual samples suitable for downstream semantic tasks such as object recognition or language grounding. We evaluate our method in a Gazebo simulation using a TurtleBot platform and compare it against a naive odometry-based sampling strategy. Our approach achieves up to 57.5% reduction in collected images while preserving scene coverage. Additionally, we demonstrate how multiple robots can collaboratively improve the visual map in a federated setup, reducing collection time and enabling model generalization across diverse environments. The proposed method offers an efficient and scalable solution for semantic mapping under bandwidth and computation constraints.
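To make the filtering idea in the abstract concrete, the following is a minimal Python sketch of one way such a filter could look. It is not the paper's implementation: the Sample fields, all threshold names, and every numeric value are illustrative assumptions; the paper's actual criteria for orientation, distance, visibility, and uniqueness are defined in the full text.

import math
from dataclasses import dataclass

# Hypothetical thresholds; the paper does not state exact values here.
MAX_RANGE_M = 3.0                      # discard samples taken too far from the segment
MAX_INCIDENCE_RAD = math.radians(45)   # discard overly oblique views
MIN_POSE_DELTA_M = 0.5                 # "uniqueness": minimum translation between kept samples
MIN_YAW_DELTA_RAD = math.radians(20)   # ...or minimum rotation between kept samples

@dataclass
class Sample:
    x: float             # robot position in the map frame, metres
    y: float
    yaw: float           # robot heading, radians
    target_range: float  # LiDAR range to the observed segment, metres
    incidence: float     # angle between view ray and segment normal, radians
    visible: bool        # segment unoccluded within the camera frustum

def is_unique(s: Sample, kept: list[Sample]) -> bool:
    """A sample is unique if no already-kept sample was taken from a similar pose."""
    for k in kept:
        trans = math.hypot(s.x - k.x, s.y - k.y)
        rot = abs(math.atan2(math.sin(s.yaw - k.yaw), math.cos(s.yaw - k.yaw)))
        if trans < MIN_POSE_DELTA_M and rot < MIN_YAW_DELTA_RAD:
            return False
    return True

def filter_samples(candidates: list[Sample]) -> list[Sample]:
    """Keep only close, frontal, visible, and pose-unique samples."""
    kept: list[Sample] = []
    for s in candidates:
        if (s.visible
                and s.target_range <= MAX_RANGE_M
                and s.incidence <= MAX_INCIDENCE_RAD
                and is_unique(s, kept)):
            kept.append(s)
    return kept

if __name__ == "__main__":
    demo = [
        Sample(0.0, 0.0, 0.00, 2.0, math.radians(10), True),  # kept
        Sample(0.1, 0.0, 0.05, 2.0, math.radians(12), True),  # near-duplicate pose
        Sample(3.0, 1.0, 1.20, 4.5, math.radians(10), True),  # too far
        Sample(1.5, 1.0, 0.80, 2.2, math.radians(70), True),  # too oblique
        Sample(2.0, 2.0, 1.00, 1.8, math.radians(5),  True),  # kept
    ]
    print(f"kept {len(filter_samples(demo))} of {len(demo)} samples")

Checking uniqueness against the set of already-kept samples, rather than sampling at fixed odometry intervals, is what distinguishes this style of filtering from the naive odometry-based baseline the abstract compares against.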
