Abstract

Human-robot interaction has become a significant area of research with the widespread use of social robots. Interaction can be achieved through many modalities, including vision. For each modality, numerous methodologies have been proposed, with varying degrees of effectiveness and efficiency in terms of the computational power required. The varied nature of these algorithms makes data fusion a complex and application-specific task. This paper introduces a novel Lattice Computing (LC)-based methodology for interpreting visual stimuli for head pose estimation. An investigation of the parameters involved and initial results are presented. The aim is to estimate head pose in robot-assisted therapy settings and use the estimate in decision making. This work is part of a broader effort to establish the LC paradigm as a unified methodology for sensory data interpretation in human-robot interaction.
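Fuzzy lattice reasoning (FLR) typically represents data as lattice intervals compared by an inclusion measure. The following is a minimal illustrative sketch of those primitives, assuming intervals on [0, 1] and the positive valuation v([a, b]) = (1 - a) + b; the function names and the choice of valuation are assumptions for illustration, not the paper's implementation.

```python
def join(x, u):
    """Lattice join of intervals: the smallest interval containing both."""
    return (min(x[0], u[0]), max(x[1], u[1]))

def valuation(x):
    """Positive valuation v([a, b]) = (1 - a) + b for intervals in [0, 1]."""
    a, b = x
    return (1.0 - a) + b

def inclusion(x, u):
    """Inclusion measure k(x <= u) = v(u) / v(x join u), in [0, 1].

    Equals 1 when x lies inside u, and decreases as x extends beyond u.
    """
    return valuation(u) / valuation(join(x, u))
```

For example, `inclusion((0.2, 0.4), (0.1, 0.5))` is 1.0 because the first interval is contained in the second, while reversing the arguments yields a value below 1. In an FLR classifier, such a measure would rank how well a measured feature interval matches stored class intervals.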

Citation

V. G. Kaburlasos, C. Lytridis, C. Bazinas, G. A. Papakostas, A. Naji, M. Hicham Zaggaf, K. Mansouri, M. Qbadou, M. Mestari, “Structured human-head pose representation for estimation using fuzzy lattice reasoning (FLR)”, The Fourth International Conference on Intelligent Computing in Data Sciences (ICDS 2020), Fez, Morocco, 21-23 October 2020 (accepted).