Hyperparameter Tuning Kit
$29
Optuna/Ray Tune configs, search space definitions, pruning strategies, and distributed tuning setups.
Tags: Markdown, YAML, JSON, MLflow
📁 File Structure (8 files)
hyperparameter-tuning-kit/
├── LICENSE
├── README.md
├── config.example.yaml
├── docs/
│   ├── checklists/
│   │   └── pre-deployment.md
│   ├── overview.md
│   └── patterns/
│       └── pattern-01-multi-stage-tuning.md
└── templates/
    └── config.yaml
📖 Documentation Preview (README excerpt)
Hyperparameter Tuning Kit
Production-ready hyperparameter optimization configs using Optuna and Ray Tune. Includes search space definitions, pruning strategies, distributed tuning setups, and integration with experiment tracking.
What's Included
- Optuna study configurations with various samplers
- Ray Tune distributed hyperparameter search setups
- Search space definition templates for common model types
- Pruning strategy configs (MedianPruner, HyperbandPruner)
- Distributed tuning on multi-node clusters
- Integration with MLflow and Weights & Biases
- Results analysis and visualization templates
Quick Start
# 1. Copy the example config
cp config.example.yaml config.yaml
# 2. Install dependencies
pip install optuna "ray[tune]"
# 3. Run the example tuning study
python -m tuning.run --config config.yaml
Prerequisites
- Python 3.9+
- Optuna 3.x or Ray Tune 2.x
- (Optional) Distributed compute cluster for parallel trials
Contents
hyperparameter-tuning-kit/
  config.example.yaml
  docs/
    overview.md
    patterns/
      pattern-01-*.md
    checklists/
      pre-deployment.md
  templates/
    config.yaml
Support
For questions or issues, contact: megafolder122122@hotmail.com
License
MIT License - Copyright 2026 Jesse Mikkola. See LICENSE for details.
📄 Code Sample (.yaml preview)
config.example.yaml
# Hyperparameter Tuning Kit - Example Configuration
# Copy this file to config.yaml and update values for your environment

tuning:
  framework: "optuna"        # optuna or ray_tune
  n_trials: 50
  timeout_seconds: 3600
  direction: "maximize"      # maximize or minimize
  metric: "val_accuracy"

search_space:
  learning_rate:
    type: "log_uniform"
    low: 1.0e-5              # dotted exponent form so YAML 1.1 parsers read a float
    high: 1.0e-1
  batch_size:
    type: "categorical"
    choices: [16, 32, 64, 128]
  n_layers:
    type: "int"
    low: 1
    high: 5
  dropout:
    type: "uniform"
    low: 0.1
    high: 0.5

pruning:
  enabled: true
  strategy: "median"         # median, hyperband, percentile
  n_warmup_steps: 5

sampler: "tpe"               # tpe, random, cmaes

distributed:
  enabled: false
  n_workers: 4
  backend: "optuna"          # optuna (with storage) or ray

logging:
  level: "INFO"
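One way a `search_space` section like the one above could be translated into Optuna trial suggestions is sketched below. The helper name, the dict shape, and the type dispatch are assumptions for illustration, not the kit's actual API; the helper only calls the standard `trial.suggest_*` methods.

```python
# Hypothetical helper: map search-space entries shaped like the YAML above
# onto Optuna's trial.suggest_* calls. Works with any object that exposes
# suggest_float / suggest_int / suggest_categorical.
def suggest_params(trial, space):
    """Return a dict of sampled values, one per search-space entry."""
    params = {}
    for name, spec in space.items():
        kind = spec["type"]
        if kind == "log_uniform":
            params[name] = trial.suggest_float(name, spec["low"], spec["high"], log=True)
        elif kind == "uniform":
            params[name] = trial.suggest_float(name, spec["low"], spec["high"])
        elif kind == "int":
            params[name] = trial.suggest_int(name, spec["low"], spec["high"])
        elif kind == "categorical":
            params[name] = trial.suggest_categorical(name, spec["choices"])
        else:
            raise ValueError(f"unsupported search-space type: {kind!r}")
    return params


# Mirrors the search_space block in config.example.yaml
SEARCH_SPACE = {
    "learning_rate": {"type": "log_uniform", "low": 1e-5, "high": 1e-1},
    "batch_size": {"type": "categorical", "choices": [16, 32, 64, 128]},
    "n_layers": {"type": "int", "low": 1, "high": 5},
    "dropout": {"type": "uniform", "low": 0.1, "high": 0.5},
}
```

Inside an objective function you would call `params = suggest_params(trial, SEARCH_SPACE)` and build the model from the returned dict, keeping the YAML file as the single source of truth for the search space.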