The FORTIS project envisions a future in which humans and robots work together efficiently and safely, interacting naturally and comprehensively on both physical and non-physical levels. The project aims to develop a complete Human-Robot Interaction (HRI) solution that combines multimodal communication, insights from the social sciences, multi-aspect interaction, and AI.
The main goal is to understand human interaction, which relies on a complex interplay of signals. We interpret visual signals such as facial expressions and gestures, vocal signals such as tone of voice and speech, and haptic signals such as touch. By processing these signals together, we make sense of the context and the message behind them. Our reactions are also shaped by factors such as personality, experience, and cultural background.
FORTIS translates this human-to-human interaction model into the world of HRI.
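As a loose illustration of what that translation can look like in code (a sketch of our own, not FORTIS software; the cue names, weights, and thresholds below are invented for the example), consider a robot fusing visual, vocal, and haptic cues into one coarse reading of its human partner:

```python
from dataclasses import dataclass

@dataclass
class Signals:
    """One observation window of multimodal cues, each scored in [0, 1]."""
    facial_valence: float   # visual: how positive the facial expression is
    gesture_urgency: float  # visual: how urgent the gestures appear
    vocal_stress: float     # vocal: stress detected in tone of voice
    touch_pressure: float   # haptic: pressure of physical contact

def interpret(signals: Signals, weights=(0.3, 0.3, 0.25, 0.15)) -> str:
    """Fuse the cues into a coarse estimate of the human's comfort level.

    Simple weighted late fusion: each modality is scored on its own and
    the scores are then combined. A real system would learn the weights,
    and learn them per person: personality, experience, and cultural
    background all shift how the same raw signal should be read.
    """
    cues = (
        1.0 - signals.facial_valence,  # negative expression -> discomfort
        signals.gesture_urgency,
        signals.vocal_stress,
        signals.touch_pressure,
    )
    discomfort = sum(w * c for w, c in zip(weights, cues))
    if discomfort > 0.7:
        return "stop and yield"         # human is stressed: back off
    if discomfort > 0.4:
        return "slow down and confirm"  # ambiguous: ask before acting
    return "continue collaboration"     # human is comfortable

# Example: calm face, unhurried gestures, relaxed voice, light guiding touch.
print(interpret(Signals(0.8, 0.2, 0.1, 0.3)))  # -> "continue collaboration"
```

In a deployed system each cue score would come from a dedicated perception model, and adapting the fusion to the individual is exactly where factors like personality and cultural background enter.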
To provide complete human-robot interaction, the solution must interpret these visual, vocal, and haptic signals in context, adapt to the individual behind them, and keep learning from every exchange. This focus on continuous learning and improvement, for humans and robots alike, ensures ongoing optimization of their collaboration.
Over a four-year period, FORTIS will be tested in real-world industrial environments such as construction, infrastructure services, and manufacturing. Its impact is expected to reach beyond these core sectors and contribute to advances in healthcare, social inclusion, and environmental sustainability.
XLAB’s role in the project is to further improve human-centric computer vision (CV) solutions and apply them to new manufacturing and industrial applications in FORTIS. We are using our state-of-the-art methods and algorithms to add value to large-scale digital transformation in the manufacturing sector, and we are investigating new uses of our natural language processing (NLP) technologies to improve human-machine interaction in construction, maintenance, and logistics.