BGLab committed
Commit 10cf102 · verified · Parent: 42996e8

Update README.md

Files changed (1): README.md (+68 -1)
README.md CHANGED
@@ -17,7 +17,7 @@ language:
 - en
 ---
 
- ```markdown
+
 # microgen3D
 
 [![Code](https://img.shields.io/badge/GitHub-Code-black?logo=github)](https://github.com/baskargroup/MicroGen3D)
@@ -89,6 +89,73 @@ hf_hub_download(repo_id="BGLab/microgen3D", filename="fp.ckpt", local_dir="model
 hf_hub_download(repo_id="BGLab/microgen3D", filename="ddpm.ckpt", local_dir="models/weights/experimental")
 ```
 
+ ## ⚙️ Configuration
+ 
+ ### Training Config (`config.yaml`)
+ A sketch of the full file follows the key descriptions below.
+ - **task**: Auto-generated if left null
+ - **data_path**: Path to the training dataset (`../data/sample_train.h5`)
+ - **model_dir**: Directory in which to save model weights
+ - **batch_size**: Batch size for training
+ - **image_shape**: Shape of the 3D images, `[C, D, H, W]`
+ 
+ #### VAE Settings:
+ - `latent_dim_channels`: Number of channels in the latent space
+ - `kld_loss_weight`: Weight of the KL-divergence loss
+ - `max_epochs`: Training epochs
+ - `pretrained`: Whether to use a pretrained VAE
+ - `pretrained_path`: Path to the pretrained VAE model
+ 
+ #### FP Settings:
+ - `dropout`: Dropout rate
+ - `max_epochs`: Training epochs
+ - `pretrained`: Whether to use a pretrained FP
+ - `pretrained_path`: Path to the pretrained FP model
+ 
+ #### DDPM Settings:
+ - `timesteps`: Number of diffusion timesteps
+ - `n_feat`: Number of feature channels in the U-Net; more channels means more model capacity
+ - `learning_rate`: Learning rate
+ - `max_epochs`: Training epochs
+ 
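+ A minimal sketch of what `config.yaml` could look like, assuming only the keys described above. The grouping under `vae`, `fp`, and `ddpm` mirrors the headings here, and every value is an illustrative placeholder rather than a shipped default:
+ ```yaml
+ task: null                        # auto-generated when left null
+ data_path: ../data/sample_train.h5
+ model_dir: ../models/weights/     # placeholder output directory
+ batch_size: 8                     # illustrative value
+ image_shape: [1, 64, 64, 64]      # [C, D, H, W]
+ 
+ vae:
+   latent_dim_channels: 4
+   kld_loss_weight: 1.0e-4
+   max_epochs: 100
+   pretrained: false
+   pretrained_path: null
+ 
+ fp:
+   dropout: 0.1
+   max_epochs: 100
+   pretrained: false
+   pretrained_path: null
+ 
+ ddpm:
+   timesteps: 1000
+   n_feat: 64
+   learning_rate: 1.0e-4
+   max_epochs: 100
+ ```
+ 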
+ ### Inference Parameters (`params.yaml`)
+ A sketch of this file follows the key descriptions below.
+ - **data_path**: Path to the inference/test dataset (`../data/sample_test.h5`)
+ 
+ #### Training (for model init only):
+ - `batch_size`, `num_batches`, `num_timesteps`, `learning_rate`, `max_epochs`: optional parameters
+ 
+ #### Model:
+ - `latent_dim_channels`: Number of channels in the latent space
+ - `n_feat`: Number of feature channels in the U-Net
+ - `image_shape`: Expected input image shape
+ 
+ #### Attributes:
+ - List of features/targets to predict:
+   - `ABS_f_D`
+   - `CT_f_D_tort1`
+   - `CT_f_A_tort1`
+ 
+ #### Paths:
+ - `ddpm_path`: Path to the trained DDPM model
+ - `vae_path`: Path to the trained VAE model
+ - `fc_path`: Path to the trained FP model
+ - `output_dir`: Directory in which to store inference results
+ 
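+ A minimal sketch of a matching `params.yaml`, again with assumed nesting and illustrative values only; adjust the checkpoint paths (including the VAE filename, which is assumed here) to wherever the weights were actually downloaded:
+ ```yaml
+ data_path: ../data/sample_test.h5
+ 
+ training:                         # used only to initialize the models
+   batch_size: 8
+   num_batches: 10
+   num_timesteps: 1000
+   learning_rate: 1.0e-4
+   max_epochs: 1
+ 
+ model:
+   latent_dim_channels: 4
+   n_feat: 64
+   image_shape: [1, 64, 64, 64]
+ 
+ attributes:
+   - ABS_f_D
+   - CT_f_D_tort1
+   - CT_f_A_tort1
+ 
+ paths:
+   ddpm_path: ../models/weights/experimental/ddpm.ckpt   # placeholder path
+   vae_path: ../models/weights/experimental/vae.ckpt     # placeholder path
+   fc_path: ../models/weights/experimental/fp.ckpt       # placeholder path
+   output_dir: ../results/
+ ```
+ 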
+ ## 🏋️ Training
+ 
+ Navigate to the training folder and run:
+ ```bash
+ cd training
+ python training.py
+ ```
+ 
+ ## 🧠 Inference
+ 
+ After training, switch to the inference folder and run:
+ ```bash
+ cd ../inference
+ python inference.py
+ ```
+ 
 ---
 
 ## 📜 Citation