Semantic Orientation in Space System – Speech Processing and Voice Interaction Subsystem

Speaker: Mgr Marzena Halama

Date: 15/09/2025 - 13:30

The seminar will present the results of a research project aimed at developing a subsystem for human–robot voice interaction. The implemented system integrates state-of-the-art technologies for speech processing, recognition, and response generation using large language models (LLMs, e.g. GPT-4). The talk will address, among other topics, the applied methods of automatic speech recognition and their effectiveness under acoustic conditions with varying noise levels, as well as the research question of whether LLMs can maintain response accuracy despite incomplete understanding of the user's utterance.
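The effectiveness of automatic speech recognition under noise is conventionally reported as word error rate (WER). As a minimal sketch (not the project's actual evaluation code), WER can be computed as the word-level edit distance between a reference transcript and the recognizer's hypothesis, normalized by the reference length:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One missing word out of a four-word reference gives WER = 0.25.
print(wer("turn to the left", "turn to left"))
```

Evaluating the same utterances at several signal-to-noise ratios and comparing the resulting WER values is one common way to characterize an ASR front end's noise robustness.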

Change history

Updated: 04/09/2025 - 10:53; author: Marzena Halama (mhalama@iitis.pl)
