Pose estimation

This dataset is designed to train and evaluate pose estimation models from images. The task is to build a model that predicts the rotation and translation of the object in the scene.

Three datasets (cube, cylinder, and sphere) were generated with Blender 2.82. Each scene contains the object, randomly translated and rotated within a bounded working space, and 14 perspective cameras spaced equidistantly over a sphere. Uniform background lighting was used to avoid shadows and reflections that could leak pose information into viewpoints that should otherwise be uninformative.

Each simulated capture consists of 14 RGBA images of 512x512 pixels (one per camera) and a single ground-truth rotation and translation. From each image, the square whose center is the image's center of mass and which contains the object is cropped and resized to 128x128 pixels. The normalized image coordinates (u, v) and the scaling factor (the side of the original square divided by 128) are stored for each image.
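The cropping step described above can be sketched as follows. This is a minimal illustration, not the dataset's actual preprocessing code: the function name, the use of the alpha channel as an object mask, and the nearest-neighbour resize are all assumptions made for the example.

```python
import numpy as np

def crop_and_resize(img, out_size=128):
    """Crop the square around the object's center of mass and resize it.

    img: HxWx4 RGBA array (512x512 in this dataset); the object is
    assumed to be wherever alpha > 0. Returns the resized crop, the
    normalized image coordinates (u, v) of the crop center, and the
    scaling factor (original square side / out_size).
    """
    h, w = img.shape[:2]
    ys, xs = np.nonzero(img[..., 3] > 0)        # object mask from alpha
    cy, cx = ys.mean(), xs.mean()               # center of mass
    side = max(np.ptp(ys), np.ptp(xs)) + 1      # square side covering the object
    y0 = int(np.clip(round(cy - side / 2), 0, h - side))
    x0 = int(np.clip(round(cx - side / 2), 0, w - side))
    crop = img[y0:y0 + side, x0:x0 + side]
    # Nearest-neighbour resize to out_size x out_size (no external deps).
    idx = (np.arange(out_size) * side / out_size).astype(int)
    resized = crop[idx][:, idx]
    u, v = cx / w, cy / h                       # normalized image coordinates
    scale = side / out_size                     # original side divided by 128
    return resized, (u, v), scale
```

A real pipeline would likely use an anti-aliased resize (e.g. from Pillow or OpenCV) rather than nearest-neighbour indexing; the bookkeeping of (u, v) and the scale factor is the relevant part here.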

Data and Resources

This dataset has no data

Additional Info

Field: Value

Source: https://datasets.datahub.iti.es/
Author: ITI
Last Updated: October 24, 2023, 20:12 (UTC)
Created: October 24, 2023, 20:12 (UTC)
Issued:
Modified:
creator: ITI
id_euhubs4data: 044657129be0c2bb971b677859396c91871537d715b868bb468731f9a95ba085_UCSGN7FPQCVD6T6KNRQMX62CSTYFCIOASFCYQBECROSS32EDTPIXRE5K
idsExtraInfo: https://euhub4data-graphs.itainnova.es/dataset/dcat#Dataset_0ddf2a1c-61b2-4114-80e5-4740e72cce73
is_repo: 0
landing_page: https://datasets.datahub.iti.es/
privacy: No personal data
rdf_url:
spatial:
team: ITI