Creative Commons Attribution 4.0 (CC-BY-4.0)
Agriculture, Fisheries, Forestry and Food, Science and Technology
PERSONAL DATA PROTECTION
No personal data
* Please note that the classification is taken from the original source
Dataset designed to train and evaluate pose estimation models from images. The task is to design a model that predicts the rotation and translation of the object in the scene. Three datasets (cube, cylinder and sphere) were generated using Blender 2.82. Each scene contains the object, randomly translated and rotated within a bounded working space, and 14 perspective cameras spaced equidistantly over a sphere. Background lighting was used to avoid shadow casting and reflections that could add information to perspectives that should otherwise be uninformative. Each simulated capture consists of 14 512x512 RGBA images (one per camera) and a single ground-truth rotation and translation. For each image, the square that contains the object and whose center is the image's center of mass is cropped and resized to 128x128 pixels. The normalized image coordinates u and v and the scaling factor (the side of the original square divided by 128) are stored for each image.
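The crop-and-resize step described above can be sketched as follows. This is a minimal illustration, not the dataset's own tooling: the function name, the nearest-neighbour resize, and the exact rule for sizing the square (smallest centered square enclosing all object pixels) are assumptions; the dataset only specifies that the square is centered on the image's center of mass and resized to 128x128, with u, v and the scaling factor stored.

```python
import numpy as np

def crop_and_normalize(rgba, out_size=128):
    """Crop the object-centered square from an RGBA capture and resize it.

    rgba: (H, W, 4) uint8 array; the alpha channel marks object pixels.
    Returns the resized crop plus the normalized center coordinates (u, v)
    and the scaling factor (original square side / out_size).
    Hypothetical helper written to match this dataset's description.
    """
    alpha = rgba[..., 3].astype(np.float64)
    total = alpha.sum()
    if total == 0:
        raise ValueError("image contains no object (alpha is all zero)")

    # Center of mass of the image (alpha-weighted pixel coordinates).
    ys, xs = np.indices(alpha.shape)
    cy = (ys * alpha).sum() / total
    cx = (xs * alpha).sum() / total

    # Assumed rule: smallest square centered on (cx, cy) that still
    # contains every object pixel, with a 1-pixel margin.
    obj_ys, obj_xs = np.nonzero(alpha)
    half = int(np.ceil(max(np.abs(obj_ys - cy).max(),
                           np.abs(obj_xs - cx).max()))) + 1
    side = 2 * half
    top, left = int(round(cy)) - half, int(round(cx)) - half

    # Pad so the square can be cut even if it overhangs the borders.
    padded = np.pad(rgba, ((side, side), (side, side), (0, 0)))
    crop = padded[top + side: top + 2 * side, left + side: left + 2 * side]

    # Nearest-neighbour resize to out_size x out_size.
    idx = (np.arange(out_size) * side / out_size).astype(int)
    resized = crop[idx][:, idx]

    h, w = alpha.shape
    u, v = cx / w, cy / h          # normalized image coordinates
    scale = side / out_size        # original square side / 128
    return resized, u, v, scale
```

A model trained on these crops can then use u, v and the scaling factor to map its prediction back to the full 512x512 frame.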
Disclaimer: This data is provided by a third party. The DIH identifying this data has no responsibility for its content. Please check the provided link to the data for license terms and potential usage restrictions. In case personal data is included in the dataset, the third party who provides the dataset is the data controller of such personal data. Please note that if you use the datasets for your own purposes, you become an independent data controller and are solely responsible for your compliance with relevant data protection laws relating to the processing and security of personal data, with particular reference, but not limited to, the provisions of the General Data Protection Regulation (GDPR), as applicable to the personal data included in the data.