Update README.md
README.md CHANGED
@@ -27,11 +27,11 @@ metrics:

This repository provides all the necessary tools to perform audio source separation with a [SepFormer](https://arxiv.org/abs/2010.13154v2)
model, implemented with SpeechBrain, and pretrained on Libri3Mix dataset. For a better experience we encourage you to learn more about
-[SpeechBrain](https://speechbrain.github.io). The model performance is
+[SpeechBrain](https://speechbrain.github.io). The model performance is 8.88 dB SI-SNRi on the test set of the Libri4Mix 48k dataset.

| Release | Test-Set SI-SNRi | Test-Set SDRi |
|:-------------:|:--------------:|:--------------:|
-|
+| 29-01-24 | 8.88 dB | 9.44 dB |


## Install SpeechBrain
@@ -39,7 +39,10 @@ model, implemented with SpeechBrain, and pretrained on Libri3Mix dataset. For a
First of all, please install SpeechBrain with the following command:

```
-
+!git clone https://github.com/hahmadraza/speechbrain_48k.git
+%cd speechbrain_48k
+!pip install -r requirements.txt
+!pip install --editable .
```

Please notice that we encourage you to read our tutorials and learn more about
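After the editable install, a quick sanity check can confirm that the package being imported is the freshly installed one and that the separation interface used later in this card resolves; nothing below is specific to this model:

```
# Sanity check: the import should resolve to the editable install, and the
# pretrained separation interface used in this card should be importable.
import speechbrain
from speechbrain.pretrained import SepformerSeparation

print(speechbrain.__version__)
print(speechbrain.__file__)  # should point inside the speechbrain_48k clone
```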
@@ -51,17 +54,18 @@ Please notice that we encourage you to read our tutorials and learn more about
from speechbrain.pretrained import SepformerSeparation as separator
import torchaudio

-model = separator.from_hparams(source="
+model = separator.from_hparams(source="hahmadraz/sepformer-libri4mix", savedir='pretrained_models/sepformer-libri4mix-48k/')

est_sources = model.separate_file(path='speechbrain/sepformer-wsj03mix/test_mixture_3spks.wav')

-torchaudio.save("source1hat.wav", est_sources[:, :, 0].detach().cpu(),
-torchaudio.save("source2hat.wav", est_sources[:, :, 1].detach().cpu(),
-torchaudio.save("source3hat.wav", est_sources[:, :, 2].detach().cpu(),
+torchaudio.save("source1hat.wav", est_sources[:, :, 0].detach().cpu(), 48000)
+torchaudio.save("source2hat.wav", est_sources[:, :, 1].detach().cpu(), 48000)
+torchaudio.save("source3hat.wav", est_sources[:, :, 2].detach().cpu(), 48000)
+torchaudio.save("source4hat.wav", est_sources[:, :, 3].detach().cpu(), 48000)

```

-The system expects input recordings sampled at
+The system expects input recordings sampled at 48kHz (single channel).
If your signal has a different sample rate, resample it (e.g., using torchaudio or sox) before using the interface.
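For the resampling case mentioned above, a minimal sketch using torchaudio alone: the input filename is illustrative, and `separate_batch` is the batch-level counterpart of `separate_file` in SpeechBrain's pretrained interface.

```
import torchaudio
from speechbrain.pretrained import SepformerSeparation as separator

model = separator.from_hparams(source="hahmadraz/sepformer-libri4mix",
                               savedir='pretrained_models/sepformer-libri4mix-48k/')

# Load an arbitrary recording (path is illustrative) and bring it to the
# 48kHz, single-channel format the model expects.
waveform, sr = torchaudio.load("my_mixture.wav")              # [channels, time]
if sr != 48000:
    waveform = torchaudio.functional.resample(waveform, orig_freq=sr, new_freq=48000)
mix = waveform[0:1, :]                                        # keep one channel -> [batch, time]

est_sources = model.separate_batch(mix)                       # [batch, time, n_src]
for i in range(est_sources.shape[-1]):
    torchaudio.save(f"source{i + 1}hat.wav",
                    est_sources[0, :, i].detach().cpu().unsqueeze(0), 48000)
```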

### Inference on GPU
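For GPU inference, SpeechBrain's `from_hparams` accepts a `run_opts` dictionary; a minimal sketch reusing the snippet above, assuming a CUDA device is available:

```
from speechbrain.pretrained import SepformerSeparation as separator

# Same pretrained model as above, but loaded onto the GPU.
model = separator.from_hparams(source="hahmadraz/sepformer-libri4mix",
                               savedir='pretrained_models/sepformer-libri4mix-48k/',
                               run_opts={"device": "cuda"})

est_sources = model.separate_file(path='speechbrain/sepformer-wsj03mix/test_mixture_3spks.wav')
```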
@@ -86,7 +90,7 @@ pip install -e .
cd recipes/LibriMix/separation
python train.py hparams/sepformer.yaml --data_folder=your_data_folder
```
-Note: change num_spks to
+Note: change num_spks to 4 in the yaml file.
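Since `--data_folder` is already passed to `train.py` as a command-line override, `num_spks` can likely be overridden the same way instead of editing the file, e.g. `python train.py hparams/sepformer.yaml --data_folder=your_data_folder --num_spks=4`; this assumes the recipe exposes `num_spks` as a top-level hyperparameter, as the note above suggests.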


You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1DN49LtAs6cq1X0jZ8tRMlh2Pj6AecClz).