pytorch-lightning
PyTorch Lightning is the lightweight PyTorch wrapper for ML researchers. Scale your models. Write less boilerplate.
Description
<div align="center">

<img src="https://pl-public-data.s3.amazonaws.com/assets_lightning/pytorch-lightning.png" width="400px">

**The lightweight PyTorch wrapper for high-performance AI research. Scale your models, not the boilerplate.**

______________________________________________________________________

<p align="center">
  <a href="https://www.pytorchlightning.ai/">Website</a> •
  <a href="#how-to-use">How To Use</a> •
  <a href="https://lightning.ai/docs/pytorch/stable/">Docs</a> •
  <a href="#examples">Examples</a> •
  <a href="#community">Community</a> •
  <a href="https://lightning.ai/">Lightning AI</a> •
  <a href="https://github.com/Lightning-AI/pytorch-lightning/blob/master/LICENSE">License</a>
</p>

</div>

______________________________________________________________________

## PyTorch Lightning is just organized PyTorch

Lightning disentangles PyTorch code to decouple the science from the engineering.
______________________________________________________________________

## Lightning Design Philosophy

Lightning structures PyTorch code with these principles:

<div align="center">
<img src="https://pl-bolts-doc-images.s3.us-east-2.amazonaws.com/philosophies.jpg" max-height="250px">
</div>

Lightning enforces the following structure on your code, which makes it reusable and shareable:

- Research code (the LightningModule).
- Engineering code (you delete this; it is handled by the Trainer).
- Non-essential research code (logging, etc.; this goes in Callbacks).
- Data (use PyTorch DataLoaders or organize them into a LightningDataModule).

Once you do this, you can train on multiple GPUs, TPUs, CPUs, HPUs and even in 16-bit precision without changing your code!

[Get started in just 15 minutes](https://lightning.ai/docs/pytorch/latest/starter/introduction.html)

______________________________________________________________________

## Continuous Integration

Lightning is rigorously tested across multiple CPUs, GPUs and TPUs and against major Python and PyTorch versions.
Release History
| Version | Changes | Urgency | Date |
|---|---|---|---|
| 2.6.1 | Imported from PyPI (2.6.1) | Low | 4/21/2026 |
| 2.6.0 | **Added:** a `WeightAveraging` callback that wraps the PyTorch `AveragedModel` class ([#20545](https://github.com/Lightning-AI/pytorch-lightning/pull/20545)); a Torch-TensorRT integration with `LightningModule` ([#20808](https://github.com/Lightning-AI/pytorch-lightning/pull/20808)); time-based validation support through `val_check_interval` (#21071) … | Low | 11/28/2025 |
| 2.5.6 | **Changed:** added a `name()` function to the accelerator interface ([#21325](https://github.com/Lightning-AI/pytorch-lightning/pull/21325)). **Removed:** support for the deprecated and archived lightning-habana package ([#21327](https://github.com/Lightning-AI/pytorch-lightning/pull/21327)). | Low | 11/5/2025 |
| 2.5.5 | **Changed:** added `exclude_frozen_parameters` to `DeepSpeedStrategy` ([#21060](https://github.com/Lightning-AI/pytorch-lightning/pull/21060)); added a `PossibleUserWarning` that is raised if modules are in eval mode when training starts ([#21146](https://github.com/Lightning-AI/pytorch-lightning/pull/21146)). **Fixed:** … | Low | 9/5/2025 |
| 2.5.4 | **Fixed:** `AsyncCheckpointIO` now snapshots tensors to avoid a race with parameter mutation ([#21079](https://github.com/Lightning-AI/pytorch-lightning/pull/21079)); an `AsyncCheckpointIO` threadpool exception when calling fit or validate more than once ([#20952](https://github.com/Lightning-AI/pytorch-lightning/pull/20952)); the learning rate not being correctly set after … | Low | 8/29/2025 |
| 2.5.3 | **Changed:** added a `save_on_exception` option to the `ModelCheckpoint` callback ([#20916](https://github.com/Lightning-AI/pytorch-lightning/pull/20916)); allow `dataloader_idx_` in log names when `add_dataloader_idx=False` ([#20987](https://github.com/Lightning-AI/pytorch-lightning/pull/20987)); allow returning `ONNXProgram` when calling `to_onnx(dynamo=True)` … | Low | 8/13/2025 |
| 2.5.2 | **Changed:** added a `toggled_optimizer(optimizer)` method to the LightningModule, a context-manager version of `toggle_optimizer` and `untoggle_optimizer` ([#20771](https://github.com/Lightning-AI/pytorch-lightning/pull/20771)); for cross-device local checkpoints, instruct users to install `fsspec>=2025.5.0` if unavailable (#20780) … | Low | 6/20/2025 |
| 2.5.1.post0 | **Full Changelog**: https://github.com/Lightning-AI/pytorch-lightning/compare/2.5.1...2.5.1.post0 | Low | 4/25/2025 |
| 2.5.1 | **Changed:** allow LightningCLI to use a customized argument parser class ([#20596](https://github.com/Lightning-AI/pytorch-lightning/pull/20596)); change the `wandb` default x-axis to `tensorboard`'s `global_step` when `sync_tensorboard=True` ([#20611](https://github.com/Lightning-AI/pytorch-lightning/pull/20611)); added a new `checkpoint_path_prefix` parameter to the MLflow logger … | Low | 3/19/2025 |
| 2.5.0.post0 | **Full Changelog**: https://github.com/Lightning-AI/pytorch-lightning/compare/2.5.0...2.5.0.post0 | Low | 12/21/2024 |
| 2.5.0 | [Lightning AI](https://lightning.ai) :zap: is excited to announce the release of Lightning 2.5, which comes with improvements on several fronts and **zero** API changes. Our users love it stable, we keep it stable :smile:. The `lightning`, `pytorch-lightning` and `lightning-fabric` packages are collectively getting more than **10M downloads per month**, for a total of over **180M downloads** since the early days … | Low | 12/20/2024 |
| 2.5.0rc0 | Release 2.5.0rc0 | Low | 12/12/2024 |
| 2.4.0 | [Lightning AI](https://lightning.ai) :zap: is excited to announce the release of Lightning 2.4. This is mainly a compatibility upgrade for PyTorch 2.4 and Python 3.12, with a sprinkle of a few features and bug fixes. **Did you know?** The Lightning philosophy extends beyond a boilerplate-free deep learning framework: We've been hard at work bringing you [Lightning Studio](https://lightning.ai/). Code together, prototype, train, deploy, host AI web apps. All from your browser, with zero setup. | Low | 8/7/2024 |
| 2.3.3 | This release removes the code from the main `lightning` package that was reported in [CVE-2024-5980](https://github.com/advisories/GHSA-mr7h-w2qc-ffc2). | Low | 7/8/2024 |
| 2.3.2 | Includes a minor bugfix that avoids a conflict between the entrypoint command and another package ([#20041](https://github.com/Lightning-AI/pytorch-lightning/pull/20041)). | Low | 7/4/2024 |
| 2.3.1 | Includes minor bugfixes and stability improvements. **Full Changelog**: https://github.com/Lightning-AI/pytorch-lightning/compare/2.3.0...2.3.1 | Low | 6/27/2024 |
| 2.3.0 | [Lightning AI](https://lightning.ai) is excited to announce the release of Lightning 2.3 :zap: **Did you know?** The Lightning philosophy extends beyond a boilerplate-free deep learning framework: We've been hard at work bringing you [Lightning Studio](https://lightning.ai/). Code together, prototype, train, deploy, host AI web apps. All from your browser, with zero setup. This release introduces experimental support for Tensor Parallelism and 2D Parallelism, PyTorch 2.3 … | Low | 6/13/2024 |
| 2.2.5 | ## PyTorch Lightning + Fabric ### Fixed - Fixed a matrix shape mismatch issue when running a model loaded from a quantized checkpoint (bitsandbytes) ([#19886](https://github.com/Lightning-AI/lightning/pull/19886)) ---- **Full Changelog**: https://github.com/Lightning-AI/pytorch-lightning/compare/2.2.4...2.2.5 | Low | 5/22/2024 |
| 2.2.4 | ## App ### Fixed - Fixed HTTPClient retry for flow/work queue ([#19837](https://github.com/Lightning-AI/pytorch-lightning/pull/19837)) ## PyTorch No Changes. ## Fabric No Changes. **Full Changelog**: https://github.com/Lightning-AI/pytorch-lightning/compare/2.2.3...2.2.4 | Low | 5/1/2024 |
| 2.2.3 | ## PyTorch ### Fixed - Fixed `WandbLogger.log_hyperparameters()` raising an error if hyperparameters are not JSON serializable ([#19769](https://github.com/Lightning-AI/pytorch-lightning/pull/19769)) ## Fabric No Changes. **Full Changelog**: https://github.com/Lightning-AI/pytorch-lightning/compare/2.2.2...2.2.3 | Low | 4/23/2024 |
| 2.2.2 | **PyTorch (fixed):** a TypeError when using `torch.compile` as a decorator ([#19627](https://github.com/Lightning-AI/pytorch-lightning/pull/19627)); a KeyError when saving an FSDP sharded checkpoint with `save_weights_only=True` ([#19524](https://github.com/Lightning-AI/pytorch-lightning/pull/19524)). **Fabric (fixed):** a TypeError when using `torch.compile` as a decorator (#19627) … | Low | 4/11/2024 |
| 2.2.1 | **PyTorch (fixed):** an issue with CSVLogger trying to append to a file from a previous run when the version is set manually ([#19446](https://github.com/Lightning-AI/lightning/pull/19446)); the divisibility check for `Trainer.accumulate_grad_batches` and `Trainer.log_every_n_steps` in ThroughputMonitor ([#19470](https://github.com/Lightning-AI/lightning/pull/19470)); support for Remote Stop and Remote Abort with NeptuneLogger (#19130) … | Low | 3/4/2024 |
| 2.2.0.post0 | **Full Changelog**: https://github.com/Lightning-AI/pytorch-lightning/compare/2.2.0...2.2.0.post0 | Low | 2/12/2024 |
| 2.2.0 | [Lightning AI](https://lightning.ai) is excited to announce the release of Lightning 2.2 :zap: **Did you know?** The Lightning philosophy extends beyond a boilerplate-free deep learning framework: We've been hard at work bringing you [Lightning Studio](https://lightning.ai/). Code together, prototype, train, deploy, host AI web apps. All from your browser, with zero setup. While our previous release was packed with many big new features, this time around we're rolling out mainly improvements … | Low | 2/7/2024 |
| 2.2.0.rc0 | This is a preview release for Lightning 2.2.0. | Low | 2/1/2024 |
| 2.1.4 | **Fabric (fixed):** an issue preventing Fabric from running on CPU when the system's CUDA driver is outdated or broken ([#19234](https://github.com/Lightning-AI/lightning/pull/19234)); a typo in a kwarg in SpikeDetection ([#19282](https://github.com/Lightning-AI/lightning/pull/19282)). **PyTorch (fixed):** `Trainer` not expanding the `default_root_dir` if it has the `~` (home) prefix ([#19179](https://github.com/Lightning-AI/lightning/pull/19179)) … | Low | 2/1/2024 |
| 2.1.3 | **App (changed):** use the batch get endpoint (#19180); drop starsessions from App's requirements (#18470); optimize loading time for chunks (#19109). **Data (added):** fault tolerance for the `StreamingDataset` (#19052, #19049); numpy support for the `StreamingDataset` (#19050); direct s3 support for the `StreamingDataset` (#19044); a disk usage check before downloading files … | Low | 12/21/2023 |
| 2.1.2 | **App (changed):** forced the plugin server to use localhost (#18976); enabled bundling additional files into the app source (#18980); limited the rate of requests to the http queue (#18981). **Fabric (fixed):** the precision default from the environment (#18928). **PyTorch (fixed):** an issue causing permission errors on Windows when attempting to create a symlink for the "last" checkpoint (#18942); an issue where Metric instances from `torchmetrics` … | Low | 11/15/2023 |
| 2.1.1 | **App:** added flow `fail()` (#18883); fixed the failing lightning cli entry point (#18821). **Fabric (changed):** calling a method other than `forward` that invokes submodules is now an error when the model is wrapped (e.g., with DDP) (#18819). **Fabric (fixed):** false-positive warnings about method calls on the Fabric-wrapped module (#18819); refined the FSDP saving logic and error messaging when the path exists (#18884); fixed layer conversion … | Low | 11/6/2023 |
| 2.1.0 | [Lightning AI](https://lightning.ai) is excited to announce the release of Lightning 2.1 :zap: It's the culmination of work from 79 contributors who have contributed features, bug fixes, and documentation over more than 750 commits since v2.0. The theme of 2.1 is "bigger, better, faster": **bigger** because training large multi-billion-parameter models has gotten even more efficient thanks to FSDP, efficient initialization and sharded checkpointing improvements; **better** because … | Low | 10/12/2023 |
| 2.1.0.rc1 | :rabbit: | Low | 10/10/2023 |
| 2.0.9.post0 | Release 2.0.9.post0 | Low | 9/28/2023 |
| 2.0.9 | **App (fixed):** replace LightningClient with an import from lightning_cloud (#18544). **Fabric (fixed):** an issue causing the `_FabricOptimizer.state` to remain outdated after loading with `load_state_dict` (#18488). **PyTorch (fixed):** an issue that wouldn't prevent the user from setting the `log_model` parameter in `WandbLogger` via the LightningCLI (#18458); the display of `v_num` in the progress bar when running with … | Low | 9/14/2023 |
| 2.0.8 | **App (changed):** change the top folder (#18212); remove `_handle_is_headless` calls in the app run loop (#18362). **App (fixed):** refactor the path to root, preventing a circular import (#18357). **Fabric (changed):** on XLA, avoid setting the global rank before processes have been launched, as this would initialize the PJRT computation client in the main process (#16966). **Fabric (fixed):** model parameters getting shared between processes when running with `strategy="ddp_spawn"` … | Low | 8/30/2023 |
| 2.0.7 | **App (changed):** removed the top-level import `lightning.pdb`; import `lightning.app.pdb` instead (#18177); the client now retries forever (#18065). **App (fixed):** an issue that prevented the user from setting the multiprocessing start method after importing lightning (#18177). **Fabric (changed):** disabled the auto-detection of the Kubeflow environment (#18137). **Fabric (fixed):** an issue where DDP subprocesses that used Hydra would set hydra's working directory to … | Low | 8/16/2023 |
| 2.0.6 | **App:** fixed handling a `None` request in the file orchestration queue (#18111). **Fabric:** fixed `TensorBoardLogger.log_graph` not unwrapping the `_FabricModule` (#17844). **PyTorch:** fixed `LightningCLI` not correctly saving `seed_everything` when `run=True` and `seed_everything=True` (#18056); fixed validation of non-PyTorch LR schedulers in manual optimization mode (#18092); fixed an attribute error for `_FaultTolerantMode` when loading an old checkpoint … | Low | 7/24/2023 |
| 2.0.5 | **App (added):** plugin: store source app (#17892); a colocation identifier (#16796); exponential backoff to HTTPQueue put (#18013); content for plugins (#17243). **App (changed):** save a reference to created tasks, to avoid tasks disappearing (#17946). **Fabric (added):** validation against misconfigured device selection when using the DeepSpeed strategy (#17952). **Fabric (changed):** avoid an info message when loading 0 entry point callbacks (#17990) … | Low | 7/10/2023 |
| 2.0.4 | **App (fixed):** bumped several dependencies to address security vulnerabilities. **Fabric (fixed):** validation of parameters of `plugins.precision.MixedPrecision` (#17687); an issue with HPU imports leading to performance degradation (#17788). **PyTorch (changed):** changes to the `NeptuneLogger` (#16761): it now supports neptune-client 0.16.16 and neptune >=1.0, and the `log()` method has been replaced with `append()` and `extend()` … | Low | 6/22/2023 |
| 2.0.3 | **App (added):** the property `LightningWork.public_ip`, which exposes the public IP of the `LightningWork` instance (#17742); the missing python-multipart dependency (#17244). **App (changed):** made type hints public (#17100). **App (fixed):** `LightningWork.internal_ip`, which was mistakenly exposing the public IP instead; it now exposes the private/internal IP address (#17742); resolution of the latest version in the CLI (#17351); property raised instead of returned … | Low | 6/7/2023 |
| 2.0.2 | **App (fixed):** resolved Lightning App with remote storage (#17426); fixed `AppState` and the streamlit example (#17452). **Fabric (changed):** enable precision autocast for LightningModule step methods in Fabric (#17439). **Fabric (fixed):** an issue with `LightningModule.*_step` methods bypassing the DDP/FSDP wrapper (#17424); device handling in `Fabric.setup()` when the model has no parameters (#17441). **PyTorch (fixed):** … | Low | 4/24/2023 |
| 1.9.5 | **App (changed):** added a `healthz` endpoint to the plugin server (#16882); system customization syncing for job runs (#16932). **Fabric (changed):** let `TorchCollective` work on the `torch.distributed` WORLD process group by default (#16995). **Fabric (fixed):** `_cuda_clearCublasWorkspaces` on teardown (#16907); improved the error message for installing tensorboard or tensorboardx (#17053). **PyTorch (changed):** … | Low | 4/12/2023 |
| 2.0.1.post0 | **App (fixed):** frontend hosts when running with multi-process in the cloud ([#17324](https://github.com/Lightning-AI/lightning/pull/17324)). **Fabric:** no changes. **PyTorch (fixed):** made the `is_picklable` function more robust ([#17270](https://github.com/Lightning-AI/lightning/pull/17270)). **Contributors:** @eng-yue @ethanwharris @Borda @awaelchli @carmocca | Low | 4/11/2023 |
| 2.0.1 | **App:** no changes. **Fabric (changed):** generalized `Optimizer` validation to accommodate both FSDP 1.x and 2.x ([#16733](https://github.com/Lightning-AI/lightning/pull/16733)). **PyTorch (changed):** pickling the `LightningModule` no longer pickles the `Trainer` ([#17133](https://github.com/Lightning-AI/lightning/pull/17133)); generalized `Optimizer` validation to accommodate both FSDP 1.x and 2.x (#16733) | Low | 3/30/2023 |
| 2.0.0 | [Lightning AI](https://lightning.ai) is excited to announce the release of Lightning 2.0 :zap: Over the last couple of years PyTorch Lightning has become the preferred deep learning … | Low | 3/15/2023 |
| 1.9.4 | **App (removed):** implicit ui testing with `testing.run_app_in_cloud` in favor of headless login and app selection ([#16741](https://github.com/Lightning-AI/lightning/pull/16741)). **Fabric (added):** `Fabric(strategy="auto")` support ([#16916](https://github.com/Lightning-AI/lightning/pull/16916)). **Fabric (fixed):** edge cases in parsing device ids using NVML ([#16795](https://github.com/Lightning-AI/lightning/pull/16795)); DDP spawn hang on … | Low | 3/1/2023 |
| 2.0.0rc0 | **Full Changelog**: https://github.com/Lightning-AI/lightning/compare/1.9.0...2.0.0rc0 | Low | 2/23/2023 |
| 1.9.3 | **App (fixed):** the `lightning open` command and improved redirects ([#16794](https://github.com/Lightning-AI/lightning/pull/16794)). **Fabric (fixed):** an issue causing a wrong environment plugin to be selected when `accelerator=tpu` and `devices > 1` ([#16806](https://github.com/Lightning-AI/lightning/pull/16806)); parsing of defaults for `--accelerator` and `--precision` in the Fabric CLI when `accelerator` and `precision` are set to non-default values … | Low | 2/21/2023 |
| 1.9.2 | **App (added):** Storage Commands ([#16740](https://github.com/Lightning-AI/lightning/pull/16740)), including `rm` to delete files from your Cloud Platform Filesystem; `lightning connect data` to register data connections to private s3 buckets ([#16738](https://github.com/Lightning-AI/lightning/pull/16738)). **Fabric (fixed):** an attribute error and improved input validation for invalid strategy types being passed to Fabric (#16693) … | Low | 2/15/2023 |
| 1.9.1 | **App (added):** the `lightning open` command (#16482); experimental support for interruptable GPU in the cloud (#16399); a FileSystem abstraction to simply manipulate files (#16581); Storage Commands (#16606): `ls` to list files from your Cloud Platform Filesystem, `cd` to change the current directory within your Cloud Platform filesystem (terminal-session based), `pwd` to return the current folder in your Cloud Platform Filesystem, `cp` to copy files … | Low | 2/10/2023 |
| 1.9.0 | **App (added):** the possibility to set up basic authentication for Lightning apps (#16105). **App (changed):** the LoadBalancer now uses internal ip + port instead of the exposed URL (#16119); added support for logging in different trainer stages with `DeviceStatsMonitor` (#16002); changed `lightning_app.components.serve.gradio` to `lightning_app.components.serve.gradio_server` (#16201); made cluster creation/deletion async by default (#16185). **App (fixed):** … | Low | 1/17/2023 |
