Wood anomaly detection: learning exclusively from normal wooden textures

This dataset addresses the anomaly detection problem by capturing several views of wooden textured objects from the same class. The anomalous regions of these images were manually labelled at the pixel level. Furthermore, each image was labelled by two different teams, and the final labels were merged to increase confidence. The objects contain four possible anomalies: crack, stain, porosity, and knot. These four are combined into a single label: anomalous.

In summary, the goal is to develop an algorithm that learns what "normal" is from normal wooden textures alone, and then infers whether a wooden texture under inspection is anomalous. The algorithm therefore takes an image and outputs an error map (the same size as the input image) containing an anomaly score for each pixel.

The dataset is divided into three partitions: train, validation, and test. The train and validation partitions are built exclusively from "normal" wooden textures, while the test partition includes both "normal" and anomalous textures. The "normal" textures are split 60% into train, 20% into validation, and 20% into test. All anomalous textures are placed in the test partition.

The dataset consists of three main folders: "train", "val", and "test". The "train" and "val" directories contain only the images of each partition, since no label masks are needed for them. The "test" directory contains two folders: "images" and "masks". The "images" directory holds the test images, and "masks" holds masks for the anomalous images only; if an image name from "images" does not appear in "masks", that image is not anomalous. Moreover, the "masks" directory contains two subdirectories: "masksAND" and "masksXOR".
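The folder layout above can be indexed with a short script. This is a minimal sketch, assuming the directory names described above ("test/images" and "test/masks/masksAND") and assuming that mask files share the file name of the image they annotate; the function name and return format are illustrative, not part of the dataset.

```python
from pathlib import Path

def index_test_set(root: str) -> dict:
    """Map each test image name to its masksAND path, or None if normal.

    Assumes the layout described in the dataset: root/test/images and
    root/test/masks/masksAND, with mask files named after their image.
    """
    images_dir = Path(root) / "test" / "images"
    and_dir = Path(root) / "test" / "masks" / "masksAND"
    index = {}
    for img in sorted(images_dir.iterdir()):
        mask = and_dir / img.name
        # An image with no mask in "masks" is, by construction, not anomalous.
        index[img.name] = mask if mask.exists() else None
    return index
```

A value of `None` then marks a "normal" test image, so ground-truth image-level labels can be derived without any extra metadata file.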
These directories separate the regions that were labelled unanimously by both teams (masksAND) from the regions that were labelled by only one team (masksXOR). This division allows a fairer evaluation of the algorithms: we cannot require an algorithm to detect areas that not even human annotators agree on. Conversely, if the algorithm marks any pixel within such a region as anomalous, it should not be penalized. Hence, these regions should be excluded from the evaluation metrics. For this reason, we propose to ignore the masksXOR regions and to treat only the masksAND labels as mandatory anomalous pixels in the evaluation metrics.

Acknowledgments

Images were acquired using a ZG3D device. Please use the following reference when citing:

Perez-Cortes, J. C., Perez, A. J., Saez-Barona, S., Guardiola, J. L., & Salvador, I. (2018). A System for In-Line 3D Inspection without Hidden Surfaces. Sensors, 18(9), 2993.
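The proposed evaluation rule (ignore masksXOR pixels, count masksAND pixels as anomalous) can be sketched as a pre-processing step before computing any pixel-level metric. This is an illustrative sketch, not the official evaluation code: the function name and array conventions (2-D float score map, boolean masks of the same shape) are assumptions.

```python
import numpy as np

def masked_pixel_labels(score_map, mask_and, mask_xor):
    """Return (scores, labels) over the pixels kept for evaluation.

    Pixels inside masksXOR are dropped entirely; pixels inside masksAND
    are labelled anomalous (1); all remaining pixels are normal (0).
    All inputs are assumed to be 2-D arrays of identical shape, with
    mask_and and mask_xor boolean.
    """
    keep = ~mask_xor                      # exclude ambiguous regions
    labels = mask_and[keep].astype(np.uint8)
    scores = score_map[keep]
    return scores, labels
```

The flattened `scores` and `labels` pairs can then be fed to any standard pixel-wise metric (e.g. a ROC curve), so disputed pixels neither reward nor penalize the algorithm.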