True 2D-to-3D Reconstruction of Heterogeneous Porous Media via Deep Generative Adversarial Networks (GANs)
[The latest version of this dataset is accessible via the following DOI: https://doi.org/10.24416/UU01-AFP38O]

We imaged samples of Berea sandstone from Ohio (USA) using two 2D imaging techniques, backscattered electron (BSE) microscopy and optical microscopy, and one 3D technique, X-ray micro-computed tomography (XCT). The goal is to employ a deep-learning-based generative model, a generative adversarial network (GAN), to reconstruct statistically equivalent 3D microstructures from exclusively 2D training images. To evaluate reconstruction accuracy, we conduct a visual and statistical analysis comparing the reconstructions with a 3D X-ray tomography scan of the same sample. Unlike previous research, our method trains the model on true 2D images from three orthogonally oriented planes.

The data are organized into 10 folders: 3 contain the original segmented (binary) images of the Berea sandstone samples, and the other 7 contain the data and individual figures used to create the figures in the main publication. The code is available on GitHub: https://github.com/hamediut/2D-to3D-recon
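A common descriptor for the statistical comparison of binary microstructures such as these is the two-point correlation function S2(r). The sketch below (an illustration, not the code from the linked repository) estimates a radially averaged S2 for a segmented 2D image via FFT autocorrelation, assuming periodic boundaries; the function name and the random test image are my own.

```python
import numpy as np

def two_point_correlation(img):
    """Radially averaged two-point correlation S2(r) of a binary image.

    img: 2D array of 0s (grain) and 1s (pore). S2(0) equals the
    sample porosity; S2(r) decays toward porosity**2 at large r.
    Uses the Wiener-Khinchin theorem, so boundaries are periodic.
    """
    img = np.asarray(img, dtype=float)
    n = img.size
    # Autocorrelation via FFT, normalized by the number of pixels.
    f = np.fft.fftn(img)
    corr = np.fft.ifftn(f * np.conj(f)).real / n
    corr = np.fft.fftshift(corr)  # move zero lag to the centre
    # Radially average around the centre pixel.
    cy, cx = np.array(corr.shape) // 2
    y, x = np.indices(corr.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    return np.bincount(r.ravel(), weights=corr.ravel()) / np.bincount(r.ravel())

# Example: uncorrelated random binary field with ~30% porosity.
rng = np.random.default_rng(0)
img = (rng.random((128, 128)) < 0.3).astype(int)
s2 = two_point_correlation(img)
print(s2[0])  # equals the sample porosity, ~0.3
```

Curves like this, computed for the GAN reconstructions and for slices of the XCT volume, can be overlaid to judge statistical equivalence.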
Versions
https://doi.org/10.24416/UU01-2L689L | Jul 16, 2024
https://doi.org/10.24416/UU01-DO6LT4 | Feb 16, 2024
This DOI represents all versions of this publication and will resolve to the latest publication.