THÖR-MAGNI: A Large-scale Indoor Motion Capture Recording of Human Movement and Interaction

Description

THÖR-MAGNI is a novel dataset of accurate human and robot navigation and interaction in diverse indoor contexts, building on the protocol of the previous THÖR dataset. We provide position and head-orientation motion capture data, 3D LiDAR scans and gaze tracking. In total, THÖR-MAGNI captures 3.5 hours of motion from 40 participants over 5 recording days. The data collection is designed around systematic variation of factors in the environment, to allow building cue-conditioned models of human motion and verifying hypotheses on factor impact. To that end, THÖR-MAGNI encompasses 5 scenarios, some of which have different conditions (i.e., we vary some factor):

Scenario 1 (conditions A and B): Participants move in groups and individually; the robot acts as a static obstacle; the environment contains 3 obstacles, with lane markings on the floor in condition B.

Scenario 2: Participants move in groups, individually, and transport objects of variable difficulty (a bucket, boxes and a poster stand); the robot acts as a static obstacle; the environment contains 3 obstacles.

Scenario 3 (conditions A and B): Participants move in groups, individually, and transport objects of variable difficulty (a bucket, boxes and a poster stand). We denote the roles as Visitors-Alone, Visitors-Group 2, Visitors-Group 3, Carrier-Bucket, Carrier-Box and Carrier-Large Object. A teleoperated robot acts as a moving agent: in condition A it moves with differential drive, in condition B with omni-directional drive. The environment contains 2 obstacles.

Scenario 4 (conditions A and B): All participants, denoted Visitors-Alone HRI, interacted with the teleoperated mobile robot. The robot interacted in two ways: in condition A (Verbal-Only), the Anthropomorphic Robot Mock Driver (ARMoD), a small humanoid NAO robot mounted on the mobile platform, used only speech to communicate the next goal point to the participant; in condition B, the ARMoD used speech, gestures and robotic gaze to convey the same message. Free-space environment.

Scenario 5: Participants move alone (Visitors-Alone) and one participant, denoted Visitors-Alone HRI, transports objects and interacts with the robot. The ARMoD is remotely controlled by an experimenter and proactively offers help. Free-space environment.

Tutorials for working with the dataset are available (THÖR-MAGNI Dataset Tutorials).
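The description above does not prescribe a file format. As a minimal sketch only, assuming per-scenario CSV exports with illustrative column names (time, agent_id, role, x, y; these names and the file name are hypothetical, not the dataset's documented schema, which is described in the tutorials), trajectories for a given role could be loaded with pandas roughly as follows:

import pandas as pd

# Hypothetical file name for illustration; see the THÖR-MAGNI tutorials
# for the actual file layout and column names.
CSV_PATH = "Scenario_3A_trajectories.csv"

def load_role_trajectories(csv_path: str, role: str) -> dict:
    """Return one time-ordered (time, x, y) array per agent with the given role."""
    df = pd.read_csv(csv_path)
    # Assumed columns: 'time' [s], 'agent_id', 'role', 'x' and 'y' [m].
    role_df = df[df["role"] == role].sort_values("time")
    return {
        agent_id: agent_df[["time", "x", "y"]].to_numpy()
        for agent_id, agent_df in role_df.groupby("agent_id")
    }

if __name__ == "__main__":
    carriers = load_role_trajectories(CSV_PATH, role="Carrier-Bucket")
    print(f"Loaded {len(carriers)} Carrier-Bucket trajectories")

The role labels passed to the filter would be the ones defined in the description (e.g., Visitors-Alone, Carrier-Bucket, Visitors-Alone HRI).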

Publication year

2023

Type of data

Authors

Department of Electrical Engineering and Automation

Achim J. Lilienthal - Other contributor

Kai O. Arras - Other contributor

Luigi Palmieri - Other contributor

Martin Magnusson - Other contributor

Tomasz P. Kucner - Other contributor

Andrey Rudenko - Author

Eduardo Gutierrez Maestro - Author

Tiago Rodrigues de Almeida - Author

Tim Schreiter - Author

Yufei Zhu - Author

Robert Bosch GmbH - Other contributor

Technical University of Munich - Other contributor

University of Stuttgart - Other contributor

Zenodo - Publisher

Örebro University - Other contributor

Project

Other information

Fields of science

Electronic, automation and communications engineering, electronics

Language

Availability

Open

License

Creative Commons Attribution 4.0 International (CC BY 4.0)

Keywords

Subject headings

Temporal coverage


Related datasets