# L3AC: Towards a Lightweight and Lossless Audio Codec
This repository contains the implementation of L3AC, a lightweight neural audio codec introduced in the paper titled "L3AC: Towards a Lightweight and Lossless Audio Codec".
Neural audio codecs have recently gained traction for their ability to compress high-fidelity audio and provide discrete tokens for generative modeling. However, leading approaches often rely on resource-intensive models and complex multi-quantizer architectures, limiting their practicality in real-world applications. In this work, we introduce L3AC, a lightweight neural audio codec that addresses these challenges by leveraging a single quantizer and a highly efficient architecture. To enhance reconstruction fidelity while minimizing model complexity, L3AC explores streamlined convolutional networks and local Transformer modules, alongside TConv, a novel structure designed to capture acoustic variations across multiple temporal scales. Despite its compact design, extensive experiments across diverse datasets demonstrate that L3AC matches or exceeds the reconstruction quality of leading codecs while reducing computational overhead by an order of magnitude. The single-quantizer design further enhances its adaptability for downstream tasks.
Paper: L3AC: Towards a Lightweight and Lossless Audio Codec

Official GitHub Repository: https://github.com/zhai-lw/L3AC
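The details of TConv are in the paper itself, but to make the idea of capturing acoustic variation across multiple temporal scales concrete, here is a hypothetical sketch of a multi-scale dilated convolution block. The module name, dilation rates, and residual wiring below are illustrative assumptions, not L3AC's actual TConv implementation:

```python
import torch
import torch.nn as nn

class MultiScaleConvBlock(nn.Module):
    """Illustrative multi-scale temporal convolution.

    NOT the actual TConv from L3AC; a generic sketch of one way to
    aggregate context over several temporal scales cheaply.
    """

    def __init__(self, channels: int, dilations=(1, 3, 9)):
        super().__init__()
        # One depthwise conv per temporal scale; a larger dilation sees
        # longer-range acoustic context at the same parameter cost.
        self.branches = nn.ModuleList(
            nn.Conv1d(channels, channels, kernel_size=3,
                      dilation=d, padding=d, groups=channels)
            for d in dilations
        )
        self.mix = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, x):  # x: (batch, channels, time)
        # Sum the per-scale responses, mix across channels, add residual.
        y = sum(branch(x) for branch in self.branches)
        return x + self.mix(y)
```

Stacking blocks like this lets a small model cover both short- and long-range acoustic context; see the paper for the actual TConv design.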
## Installation

You can install the `l3ac` library using pip:

```bash
pip install l3ac
```
## Demo

First, make sure you have the `librosa` package installed to load the example audio file:

```bash
pip install librosa
```
Then, you can use the following code to load a sample audio file, encode it using the L3AC model, and decode it back to audio. The code also calculates the mean squared error (MSE) between the original and generated audio.
```python
import librosa
import torch

import l3ac

# List the available model configurations and load one of them.
all_models = l3ac.list_models()
print(f"Available models: {all_models}")
MODEL_USED = '1kbps'
codec = l3ac.get_model(MODEL_USED)
print(f"loaded codec({MODEL_USED}), codec sample rate: {codec.config.sample_rate}")

# Load an example utterance and resample it to the codec's sample rate.
sample_audio, sample_rate = librosa.load(librosa.example("libri1"))
sample_audio = sample_audio[None, :]  # add a batch dimension: (1, time)
print(f"loaded sample audio, sample rate: {sample_rate}")
sample_audio = librosa.resample(sample_audio, orig_sr=sample_rate, target_sr=codec.config.sample_rate)

codec.network.cuda()
codec.network.eval()
with torch.inference_mode():
    audio_in = torch.tensor(sample_audio, dtype=torch.float32, device='cuda')
    _, audio_length = audio_in.shape
    print(f"{audio_in.shape=}")
    # Encode to quantized features and discrete token indices, then decode.
    q_feature, indices = codec.encode_audio(audio_in)
    audio_out = codec.decode_audio(q_feature)
    # or: audio_out = codec.decode_audio(indices=indices['indices'])
    generated_audio = audio_out[:, :audio_length].detach().cpu().numpy()

# Reconstruction error between the (resampled) input and the codec output.
mse = ((sample_audio - generated_audio) ** 2).mean().item()
print(f"codec({MODEL_USED}) mse: {mse}")
```
## Available Models

| config_name | Sample rate (Hz) | Tokens/s | Codebook size | Bitrate (bps) |
|---|---|---|---|---|
| 0k75bps | 16,000 | 44.44 | 117,649 | 748.6 |
| 1kbps | 16,000 | 59.26 | 117,649 | 998.2 |
| 1k5bps | 16,000 | 88.89 | 117,649 | 1497.3 |
| 3kbps | 16,000 | 166.67 | 250,047 | 2988.6 |
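The bitrate column follows from the other two: each token indexes one codebook entry and therefore carries log2(codebook size) bits, so bitrate = tokens/s × log2(codebook size). A quick check that reproduces the table (up to rounding of the listed token rates):

```python
import math

# bitrate (bps) = tokens per second * bits per token,
# where one token carries log2(codebook_size) bits.
for name, tokens_per_s, codebook_size in [
    ("0k75bps", 44.44, 117_649),
    ("1kbps", 59.26, 117_649),
    ("1k5bps", 88.89, 117_649),
    ("3kbps", 166.67, 250_047),
]:
    print(f"{name}: {tokens_per_s * math.log2(codebook_size):.1f} bps")
```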