---
pretty_name: MicroGen3D
tags:
  - GenAI
  - LDM
  - 3d
  - microstructure
  - diffusion-model
  - materials-science
  - synthetic-data
  - voxel
license: mit
datasets:
  - microgen3D
language:
  - en
---


# microgen3D

[![Code](https://img.shields.io/badge/GitHub-Code-black?logo=github)](https://github.com/baskargroup/MicroGen3D)

## Dataset Summary

**microgen3D** is a dataset of 3D voxelized microstructures designed for training, evaluating, and benchmarking generative models, especially Conditional Latent Diffusion Models (LDMs). It includes both synthetic (Cahn-Hilliard) and experimental microstructures with two or three phases. The voxel grids range from `64³` up to `128×128×64`.

The dataset consists of three microstructure types:
- **Experimental microstructures**
- **2-phase Cahn-Hilliard microstructures**
- **3-phase Cahn-Hilliard microstructures**

The two Cahn-Hilliard datasets are thresholded versions of the same simulation source. For each dataset type, we also provide pretrained generative model weights, comprising:
- `vae.ckpt` – Variational Autoencoder
- `fp.ckpt` – Feature Predictor
- `ddpm.ckpt` – Denoising Diffusion Probabilistic Model

---

## 📁 Repository Structure

```
microgen3D/
├── data/
│   └── sample_data.h5                 # Experimental or synthetic HDF5 microstructure file
├── models/
│   └── weights/
│       ├── experimental/
│       │   ├── vae.ckpt
│       │   ├── fp.ckpt
│       │   └── ddpm.ckpt
│       ├── two_phase/
│       └── three_phase/
└── ...
```
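The internal layout of `sample_data.h5` is not documented here, so a quick way to discover it is to walk the file and list every dataset with its shape. The helper below is a minimal sketch using `h5py`; the dataset names inside the file are unknown, which is exactly why the function enumerates them rather than assuming any.

```python
# Minimal sketch: list every dataset (and its shape) in an HDF5 file.
# Useful for discovering the layout of sample_data.h5 before training.
import h5py

def inspect_h5(path):
    """Return a {dataset_name: shape} mapping for all datasets in the file."""
    shapes = {}

    def visit(name, obj):
        # visititems walks groups recursively; keep only actual datasets.
        if isinstance(obj, h5py.Dataset):
            shapes[name] = obj.shape

    with h5py.File(path, "r") as f:
        f.visititems(visit)
    return shapes
```

For example, `inspect_h5("data/sample_data.h5")` prints each stored volume and its voxel dimensions, so you can confirm the `[C, D, H, W]` shape expected by `image_shape` in the config.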

---

## 🚀 Quick Start

### 🔧 Setup Instructions

```bash
# 1. Clone the repo
git clone https://github.com/baskargroup/MicroGen3D.git
cd MicroGen3D

# 2. Set up environment
python -m venv venv
source venv/bin/activate  # On Windows use: venv\Scripts\activate

# 3. Install dependencies
pip install -r requirements.txt

# 4. Download dataset and weights (Hugging Face)
# Make sure HF CLI is installed and you're logged in: `huggingface-cli login`
```

```python
from huggingface_hub import hf_hub_download

# Download sample data
hf_hub_download(repo_id="BGLab/microgen3D", filename="sample_data.h5", repo_type="dataset", local_dir="data")

# Download model weights
hf_hub_download(repo_id="BGLab/microgen3D", filename="vae.ckpt", local_dir="models/weights/experimental")
hf_hub_download(repo_id="BGLab/microgen3D", filename="fp.ckpt", local_dir="models/weights/experimental")
hf_hub_download(repo_id="BGLab/microgen3D", filename="ddpm.ckpt", local_dir="models/weights/experimental")
```

## ⚙️ Configuration

### Training Config (`config.yaml`)
- **task**: Auto-generated if left null  
- **data_path**: Path to training dataset (`../data/sample_train.h5`)  
- **model_dir**: Directory to save model weights  
- **batch_size**: Batch size for training  
- **image_shape**: Shape of the 3D images `[C, D, H, W]`  

#### VAE Settings:
- `latent_dim_channels`: Number of latent-space channels  
- `kld_loss_weight`: Weight of KL divergence loss  
- `max_epochs`: Training epochs  
- `pretrained`: Whether to use pretrained VAE  
- `pretrained_path`: Path to pretrained VAE model  

#### FP Settings:
- `dropout`: Dropout rate  
- `max_epochs`: Training epochs  
- `pretrained`: Whether to use pretrained FP  
- `pretrained_path`: Path to pretrained FP model  

#### DDPM Settings:
- `timesteps`: Number of diffusion timesteps  
- `n_feat`: Number of feature channels in the U-Net; more channels mean more model capacity  
- `learning_rate`: Learning rate  
- `max_epochs`: Training epochs  
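
Putting the options above together, a `config.yaml` might look like the sketch below. The key names follow the descriptions in this section, but the exact nesting and the values shown are illustrative assumptions; consult the `config.yaml` shipped in the repository for the authoritative layout.

```yaml
task: null                      # auto-generated when left null
data_path: ../data/sample_train.h5
model_dir: ./models/weights/experimental
batch_size: 4
image_shape: [1, 64, 64, 64]    # [C, D, H, W]

vae:
  latent_dim_channels: 4
  kld_loss_weight: 1.0e-6
  max_epochs: 100
  pretrained: false
  pretrained_path: null

fp:
  dropout: 0.1
  max_epochs: 100
  pretrained: false
  pretrained_path: null

ddpm:
  timesteps: 1000
  n_feat: 64
  learning_rate: 1.0e-4
  max_epochs: 500
```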

### Inference Parameters (`params.yaml`)
- **data_path**: Path to inference/test dataset (`../data/sample_test.h5`)  

#### Training (for model init only):
- `batch_size`, `num_batches`, `num_timesteps`, `learning_rate`, `max_epochs`: Optional parameters used only to initialize the model  

#### Model:
- `latent_dim_channels`: Number of latent-space channels  
- `n_feat`: Number of feature channels in the U-Net  
- `image_shape`: Expected image input shape  

#### Attributes:
- List of features/targets to predict:
  - `ABS_f_D`
  - `CT_f_D_tort1`
  - `CT_f_A_tort1`

#### Paths:
- `ddpm_path`: Path to trained DDPM model  
- `vae_path`: Path to trained VAE model  
- `fc_path`: Path to trained FP model  
- `output_dir`: Where to store inference results  
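
Combining these fields, a `params.yaml` could be sketched as below. As with the training config, the nesting and values are illustrative assumptions drawn from the descriptions above, not a verbatim copy of the repository file.

```yaml
data_path: ../data/sample_test.h5

training:                       # used for model initialization only
  batch_size: 4
  num_batches: 1
  num_timesteps: 1000
  learning_rate: 1.0e-4
  max_epochs: 1

model:
  latent_dim_channels: 4
  n_feat: 64
  image_shape: [1, 64, 64, 64]

attributes:
  - ABS_f_D
  - CT_f_D_tort1
  - CT_f_A_tort1

ddpm_path: ../models/weights/experimental/ddpm.ckpt
vae_path: ../models/weights/experimental/vae.ckpt
fc_path: ../models/weights/experimental/fp.ckpt
output_dir: ./results
```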

## 🏋️ Training

Navigate to the training folder and run:
```bash
cd training
python training.py
```

## 🧠 Inference

After training, switch to the inference folder and run:
```bash
cd ../inference
python inference.py
```

---

## 📜 Citation

If you use this dataset or models, please cite:

```bibtex
@article{baishnab2025microgen3d,
  title={3D Multiphase Heterogeneous Microstructure Generation Using Conditional Latent Diffusion Models},
  author={Baishnab, Nirmal and Herron, Ethan and Balu, Aditya and Sarkar, Soumik and Krishnamurthy, Adarsh and Ganapathysubramanian, Baskar},
  journal={arXiv preprint arXiv:2503.10711},
  year={2025}
}
```

---

## ⚖️ License

This project is licensed under the **MIT License**.

---