DiffusionLM is a novel approach to language modeling that combines transformer architectures with diffusion processes for high-quality text generation. This package provides a flexible and efficient implementation of diffusion-based language models.
## Features

- **Advanced Architecture**
  - Transformer-based backbone with diffusion capabilities
  - Configurable model sizes (small, medium, large)
  - Time-step conditioning
  - Attention mechanisms optimized for text
- **Multiple Generation Strategies** (usage sketch below)
  - Auto-regressive generation
  - Parallel generation
  - Confidence-based masking
  - Semi-autoregressive generation
  - Top-p (nucleus) sampling
  - Beam search
- **Training Features** (see the mixed-precision sketch under Training)
  - Distributed training support
  - Mixed-precision training
  - Gradient checkpointing
  - Early stopping
  - Model checkpointing
  - Learning-rate scheduling
- **Utilities** (streaming sketch below)
  - Real-time token generation streaming
  - Model saving and loading
  - HuggingFace Hub integration
  - Comprehensive logging
  - Error handling
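The generation interface is sketched below to illustrate how the strategies above might be selected. The `strategy` and `top_p` keyword names are assumptions for illustration, not the confirmed API; `model` and `tokenizer` are assumed to be set up as in the Quick Start section further down.

```python
# Hypothetical sketch: keyword names are illustrative, not the confirmed API.
prompt_ids = tokenizer("Diffusion models can", return_tensors="pt").input_ids

output_ids = model.generate(
    prompt_ids,
    strategy="semi-autoregressive",  # or "parallel", "confidence", "auto-regressive"
    top_p=0.9,                       # nucleus-sampling threshold
    max_length=128,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```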
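Likewise, a hypothetical loop for the real-time streaming utility; the `stream_generate` name is an assumption for illustration:

```python
# Hypothetical: assumes a generator-style streaming API that yields token IDs.
for token_id in model.stream_generate(prompt_ids, max_length=64):
    print(tokenizer.decode([token_id]), end="", flush=True)
```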
## Installation

```bash
pip install diffusionLM
```

For a development installation:

```bash
git clone https://github.com/codewithdark-git/DiffusionLM.git
cd DiffusionLM
pip install -e .
```
## Quick Start

```python
from transformers import AutoTokenizer

from diffusionLM.model import DiffusionConfig, DiffusionLLM
from diffusionLM.utils import prepare_dataset

# Load the tokenizer and prepare the dataset
tokenizer = AutoTokenizer.from_pretrained("gpt2")
train_dataset, val_dataset, _ = prepare_dataset(
    dataset_name="wikitext/wikitext-103-v1",
    tokenizer_name="gpt2",
)

# GPT-2 ships without pad/mask tokens; add them so the IDs below are defined
# (skip this if prepare_dataset already registers them).
tokenizer.add_special_tokens({"pad_token": "[PAD]", "mask_token": "[MASK]"})

# Initialize the model
config = DiffusionConfig(
    vocab_size=len(tokenizer),
    max_position_embeddings=256,
    num_timesteps=50,
    pad_token_id=tokenizer.pad_token_id,
    mask_token_id=tokenizer.mask_token_id,
    # additional keyword arguments can be passed here
)
model = DiffusionLLM(config)
```
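Before training, a quick sanity check in plain PyTorch (no package-specific API) moves the model to the available device and reports its size:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

num_params = sum(p.numel() for p in model.parameters())
print(f"{num_params / 1e6:.1f}M parameters on {device}")
```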
## Training

```python
from diffusionLM import trainer

trained_model = trainer(
    model=model,
    train_dataset=train_dataset,
    val_dataset=val_dataset,
    batch_size=16,          # example values; tune for your hardware
    num_epochs=3,
    learning_rate=1e-4,
    num_timesteps=50,       # should match the value in DiffusionConfig
    save_path="./checkpoints",
    device=device,
)
```
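Mixed-precision support like the trainer's is typically built on `torch.cuda.amp` under the hood; the standalone sketch below illustrates that standard PyTorch mechanism (it is not DiffusionLM-specific, and assumes a `model`, `optimizer`, and `dataloader` already exist):

```python
import torch

# Standard PyTorch mixed-precision loop (illustrative, not the package trainer).
scaler = torch.cuda.amp.GradScaler()  # scales losses to avoid fp16 underflow

for batch in dataloader:
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():   # forward pass runs in reduced precision
        loss = model(**batch).loss    # assumes a HuggingFace-style output object
    scaler.scale(loss).backward()     # backprop through the scaled loss
    scaler.step(optimizer)            # unscale gradients, then optimizer step
    scaler.update()                   # adjust the loss scale for the next step
```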
## Push to the Hugging Face Hub

```python
from diffusionLM import registerANDpush

registerANDpush(
    model=trained_model,
    tokenizer=tokenizer,
    model_type="diffusionLM",
    repo_id="your-username/model-name",
)
```
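Publishing requires Hub authentication; if you have not logged in before, the standard `huggingface_hub` login prompts for an access token:

```python
from huggingface_hub import login

login()  # prompts for a Hugging Face access token with write permission
```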
## Error Handling

The package includes comprehensive error handling:

```python
from diffusionLM import DiffusionLMError, handle_errors

@handle_errors()
def your_function():
    # Your code here
    pass
```
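Since `DiffusionLMError` is exported alongside the decorator, callers can catch failures explicitly; a minimal sketch, assuming `handle_errors` surfaces internal failures as this exception type:

```python
try:
    your_function()
except DiffusionLMError as e:
    print(f"DiffusionLM operation failed: {e}")
```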
## Sponsorship

If you find DiffusionLM useful for your project or research, please consider supporting its development through GitHub Sponsors. Your sponsorship helps maintain the project and develop new features:
- Support ongoing development and maintenance
- Priority bug fixes and feature requests
- Recognition in our documentation
- Help make DiffusionLM better for everyone
Click the "Sponsor" button at the top of the repository or visit our GitHub Sponsors page.
## Contributing

- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
## Requirements

- Python ≥ 3.8
- PyTorch ≥ 1.9.0
- Transformers ≥ 4.21.0
- For the full list, see `requirements.txt`
## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Citation

```bibtex
@article{diffusionllm2025,
  title={DiffusionLM: Large Language Models with Diffusion},
  author={Dark Coder},
  journal={GitHub Repository},
  year={2025},
  publisher={GitHub},
  url={https://github.com/codewithdark-git/DiffusionLM}
}
```
## Contact

- GitHub: @codewithdark-git
- Email: codewithdark90@gmail.com