
torchmetrics

PyTorch native Metrics

Description

<div align="center">

<img src="https://github.com/Lightning-AI/torchmetrics/raw/v1.9.0/docs/source/_static/images/logo.png" width="400px">

**Machine learning metrics for distributed, scalable PyTorch applications.**

______________________________________________________________________

<p align="center">
  <a href="#what-is-torchmetrics">What is TorchMetrics</a> •
  <a href="#implementing-your-own-module-metric">Implementing a metric</a> •
  <a href="#build-in-metrics">Built-in metrics</a> •
  <a href="https://lightning.ai/docs/torchmetrics/stable/">Docs</a> •
  <a href="#community">Community</a> •
  <a href="#license">License</a>
</p>

______________________________________________________________________

[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/torchmetrics)](https://pypi.org/project/torchmetrics/)
[![PyPI Status](https://badge.fury.io/py/torchmetrics.svg)](https://badge.fury.io/py/torchmetrics)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/torchmetrics)](https://pepy.tech/project/torchmetrics)
[![Conda](https://img.shields.io/conda/v/conda-forge/torchmetrics?label=conda&color=success)](https://anaconda.org/conda-forge/torchmetrics)
[![license](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/Lightning-AI/torchmetrics/blob/master/LICENSE)
[![CI testing | CPU](https://github.com/Lightning-AI/torchmetrics/actions/workflows/ci-tests.yml/badge.svg?event=push)](https://github.com/Lightning-AI/torchmetrics/actions/workflows/ci-tests.yml)
[![Build Status](https://dev.azure.com/Lightning-AI/Metrics/_apis/build/status%2FTM.unittests?branchName=refs%2Ftags%2Fv1.9.0)](https://dev.azure.com/Lightning-AI/Metrics/_build/latest?definitionId=2&branchName=refs%2Ftags%2Fv1.9.0)
[![codecov](https://codecov.io/gh/Lightning-AI/torchmetrics/release/v1.9.0/graph/badge.svg?token=NER6LPI3HS)](https://codecov.io/gh/Lightning-AI/torchmetrics)
[![pre-commit.ci status](https://results.pre-commit.ci/badge/github/Lightning-AI/torchmetrics/master.svg)](https://results.pre-commit.ci/latest/github/Lightning-AI/torchmetrics/master)
[![Documentation Status](https://readthedocs.org/projects/torchmetrics/badge/?version=latest)](https://torchmetrics.readthedocs.io/en/latest/?badge=latest)
[![Discord](https://img.shields.io/discord/1077906959069626439?style=plastic)](https://discord.gg/VptPCZkGNa)
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.5844769.svg)](https://doi.org/10.5281/zenodo.5844769)
[![JOSS status](https://joss.theoj.org/papers/561d9bb59b400158bc8204e2639dca43/status.svg)](https://joss.theoj.org/papers/561d9bb59b400158bc8204e2639dca43)

______________________________________________________________________

</div>

# Looking for GPUs?

Over 340,000 developers use [Lightning Cloud](https://lightning.ai/?utm_source=tm_readme&utm_medium=referral&utm_campaign=tm_readme) - purpose-built for PyTorch and PyTorch Lightning.

- [GPUs](https://lightning.ai/pricing?utm_source=tm_readme&utm_medium=referral&utm_campaign=tm_readme) from $0.19.
- [Clusters](https://lightning.ai/clusters?utm_source=tm_readme&utm_medium=referral&utm_campaign=tm_readme): frontier-grade training/inference clusters.
- [AI Studio (vibe train)](https://lightning.ai/studios?utm_source=tm_readme&utm_medium=referral&utm_campaign=tm_readme): workspaces where AI helps you debug, tune and vibe train.
- [AI Studio (vibe deploy)](https://lightning.ai/studios?utm_source=tm_readme&utm_medium=referral&utm_campaign=tm_readme): workspaces where AI helps you optimize and deploy models.
- [Notebooks](https://lightning.ai/notebooks?utm_source=tm_readme&utm_medium=referral&utm_campaign=tm_readme): persistent GPU workspaces where AI helps you code and analyze.
- [Inference](https://lightning.ai/deploy?utm_source=tm_readme&utm_medium=referral&utm_campaign=tm_readme): deploy models as inference APIs.
# Installation

Simple installation from PyPI

```bash
pip install torchmetrics
```

<details>
  <summary>Other installations</summary>

Install using conda

```bash
conda install -c conda-forge torchmetrics
```

Install using uv

```bash
uv add torchmetrics
```

Pip from source

```bash
# with git
pip install git+https://github.com/Lightning-AI/torchmetrics.git@release/stable
```

Pip from archive

```bash
pip install https://github.com/Lightning-AI/torchmetrics/archive/refs/heads/release/stable.zip
```

Extra dependencies for specialized metrics:

```bash
pip install torchmetrics[audio]
pip install torchmetrics[image]
pip install torchmetrics[text]
pip install torchmetrics[all]  # install all of the above
```

Install latest developer version

```bash
pip install https://github.com/Lightning-AI/torchmetrics/archive/master.zip
```

</details>

______________________________________________________________________

# What is TorchMetrics

TorchMetrics is a collection of 100+ PyTorch metric implementations and an easy-to-use API to create custom metrics. It offers:

- A standardized interface to increase reproducibility
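The custom-metric API mentioned above boils down to an accumulate-then-compute pattern: `update` folds each batch into internal states, and `compute` reduces those states to a final value. A dependency-free sketch of that pattern (hypothetical and simplified; the real `Metric` base class additionally handles tensors, devices, and distributed synchronization):

```python
class RunningAccuracy:
    """Accumulates correct/total counts across batches; compute()
    returns the global accuracy over everything seen so far."""

    def __init__(self):
        self.reset()

    def reset(self):
        # metric "states", accumulated across update() calls
        self.correct = 0
        self.total = 0

    def update(self, preds, targets):
        # called once per batch; only touches the internal states
        self.correct += sum(p == t for p, t in zip(preds, targets))
        self.total += len(targets)

    def compute(self):
        # called once at the end; reduces states to the metric value
        return self.correct / self.total if self.total else 0.0


metric = RunningAccuracy()
metric.update([0, 1, 1], [0, 1, 0])  # batch 1: 2 of 3 correct
metric.update([1, 1], [1, 1])        # batch 2: 2 of 2 correct
print(metric.compute())              # 4/5 = 0.8
```

Because the states are plain accumulators, the same `update`/`compute` split is what lets the real library merge states across processes in distributed training.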

Release History

_Each entry below lists the version, its changes, urgency, and date._
**1.9.0** · urgency: Low · 4/21/2026

Imported from PyPI (1.9.0)

**v1.9.0** · urgency: Low · 3/9/2026

## [1.9.0] - 2026-03-06

### Changed

- Defaulting Dice score `average="macro"` ([#3042](https://github.com/Lightning-AI/torchmetrics/pull/3042))
- Dropped Python 3.9 support, set 3.10 as minimum ([#3330](https://github.com/Lightning-AI/torchmetrics/pull/3330))
- Replaced `pkg_resources` with `packaging` ([#3329](https://github.com/Lightning-AI/torchmetrics/pull/3329))

### Fixed

- Fixed device mismatch in `Metric` base class ([#3316](https://github.com/Lightning-AI/torchmetrics/pull/3316))
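The `average="macro"` default mentioned above means: compute Dice per class, then take the unweighted mean, so rare classes count as much as frequent ones. A dependency-free sketch of that averaging rule (a hypothetical helper, not torchmetrics' `DiceScore` implementation):

```python
def dice_per_class(preds, targets, num_classes):
    """Per-class Dice: 2*|pred ∩ target| / (|pred| + |target|)."""
    scores = []
    for c in range(num_classes):
        pred_c = [p == c for p in preds]
        targ_c = [t == c for t in targets]
        intersection = sum(p and t for p, t in zip(pred_c, targ_c))
        denom = sum(pred_c) + sum(targ_c)
        scores.append(2 * intersection / denom if denom else 0.0)
    return scores

def macro_dice(preds, targets, num_classes):
    # "macro": every class contributes equally, regardless of frequency
    return sum(dice_per_class(preds, targets, num_classes)) / num_classes

preds   = [0, 0, 1, 1, 2]
targets = [0, 0, 1, 2, 2]
print(macro_dice(preds, targets, 3))
```

A `"micro"` average would instead pool all classes' intersections and denominators before dividing, which weights classes by their pixel counts.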
**v1.8.2** · urgency: Low · 9/3/2025

## [1.8.2] - 2025-09-03

### Fixed

- Fixed `BinaryPrecisionRecallCurve` so it now returns `NaN` for precision when no predictions meet a threshold ([#3227](https://github.com/Lightning-AI/torchmetrics/pull/3227))
- Fixed `precision_at_fixed_recall` and `recall_at_fixed_precision` to correctly return `NaN` thresholds when recall/precision conditions are not met ([#3226](https://github.com/Lightning-AI/torchmetrics/pull/3226))

---

### Key Contributors

@iamkulbhushansingh

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

**v1.8.1** · urgency: Low · 8/7/2025

## [1.8.1] - 2025-08-07

### Changed

- Added `reduction='none'` to `vif` metric (#3196)
- Float input support for segmentation metrics (#3198)

### Fixed

- Fixed unintended `sigmoid` normalization in `BinaryPrecisionRecallCurve` (#3182)

---

### Key Contributors

@iamkulbhushansingh, @PussyCat0700, @simonreise

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

---

**Full Changelog**: https://github.com/Lightning-AI/torchmetrics/compare/v1.8.0...v1.8.1

**v1.8.0** · urgency: Low · 7/23/2025

The upcoming TorchMetrics v1.8.0 release introduces three flagship metrics, each designed to address critical evaluation needs in real-world applications. Video Multi-Method Assessment Fusion (VMAF) brings a perceptual video-quality score that closely mirrors human judgment, powering streaming services such as Netflix and YouTube to optimize encoding ladders for consistent viewer experiences and enabling video-restoration labs to quantify improvements achieved by denoising and super-resolution…

**v1.7.4** · urgency: Low · 7/5/2025

## [1.7.4] - 2025-07-04

### Changed

- Improved numerical stability of Pearson's correlation coefficient (#3152)

### Fixed

- Fixed: ignore zero and negative predictions in retrieval metrics (#3160)
- Fixed SSIM `dist_reduce_fx` when `reduction=None` for distributed training (#3162, #3166)
- Fixed attribute error (#3154)
- Fixed incorrect shape in `_pearson_corrcoef_update` (#3168)

---

### Key Contributors

@AymenKallala, @gratus907, @Isalia20, @rittik9

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_
**v1.7.3** · urgency: Low · 6/13/2025

## [1.7.3] - 2025-06-13

### Fixed

- Fixed: ensure `WrapperMetric` resets `wrapped_metric` state (#3123)
- Fixed `top_k` in `multiclass_accuracy` (#3117)
- Fixed compatibility with COCO format for `pycocotools` 2.0.10 (#3131)

---

### Key Contributors

@rittik9

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

---

**Full Changelog**: https://github.com/Lightning-AI/torchmetrics/compare/v1.7.2...v1.7.3

**v1.7.2** · urgency: Low · 5/28/2025

## [1.7.2] - 2025-05-27

### Changed

- Enhance: improve performance of `_rank_data` (#3103)

### Fixed

- Fixed `UnboundLocalError` in `MatthewsCorrCoef` (#3059)
- Fixed MIFID incorrectly converting inputs to `byte` dtype with custom encoders (#3064)
- Fixed `ignore_index` in `MultilabelExactMatch` (#3085)
- Fixed: disable non-blocking on MPS (#3101)

---

### Key Contributors

@ahmedhshahin, @gratus907, @rittik9, @ZhiyuanChen

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

**v1.7.1** · urgency: Low · 4/7/2025

## [1.7.1] - 2025-04-06

### Changed

- Enhanced support for adding a `MetricCollection` to another `MetricCollection` in the `add_metrics` function (#3032)

### Fixed

- Fixed absent class `MeanIOU` (#2892)
- Fixed detection IoU ignoring predictions without ground truth (#3025)
- Fixed error raised in `MulticlassAccuracy` when `top_k>1` (#3039)

---

### Key Contributors

@Isalia20, @rittik9, @SkafteNicki

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

**v1.7.0** · urgency: Low · 3/20/2025

The upcoming release of TorchMetrics is set to deliver a range of innovative features and enhancements across multiple domains, further solidifying its position as a leading tool for machine learning metrics. In the image domain, significant additions include the ARNIQA and DeepImageStructureAndTextureSimilarity metrics, which provide new insights into image quality and similarity. Additionally, the CLIPScore metric now supports more models and processors, expanding its versatility in image-text…
**v1.6.3** · urgency: Low · 3/14/2025

## [1.6.3] - 2025-03-13

### Fixed

- Fixed logic in how metric state referencing is handled in `MetricCollection` (#2990)
- Fixed integration between class-wise wrapper and metric tracker (#3004)

---

### Key Contributors

@SkafteNicki

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

---

**Full Changelog**: https://github.com/Lightning-AI/torchmetrics/compare/v1.6.2...v1.6.3

**v1.6.2** · urgency: Low · 3/3/2025

## [1.6.2] - 2025-02-28

### Added

- Added `zero_division` argument to `DiceScore` in segmentation package (#2860)
- Added `cache_session` to `DNSMOS` metric to control caching behavior (#2974)
- Added `disable` option to `nan_strategy` in basic aggregation metrics (#2943)

### Changed

- Made `num_classes` optional for classification in case of micro averaging (#2841)
- Enhanced `Clip_Score` to calculate similarities between same modalities (#2875)

### Fixed

- Fixed `DiceScore`…

**v1.6.1** · urgency: Low · 12/25/2024

## [1.6.1] - 2024-12-25

### Changed

- Enabled specifying weights path for FID (#2867)
- Removed `Device2Host` transfer caused by communication between device and host (#2840)

### Fixed

- Fixed plotting of multilabel confusion matrix (#2858)
- Fixed issue with shared state in metric collection when using dice score (#2848)
- Fixed `top_k` for `multiclassf1score` with one-hot encoding (#2839)
- Fixed slow calculations of classification metrics with MPS (#2876)

---

### Key Contributors

@Isalia20…

**v1.6.0** · urgency: Low · 11/12/2024

The latest release of TorchMetrics introduces several significant enhancements and new features that will greatly benefit users across various domains. This update includes the addition of new metrics and methods that enhance the library's functionality and usability. One of the key additions is the `NISQA` audio metric, which provides advanced capabilities for evaluating audio quality. In the classification domain, the new `LogAUC` and `NegativePredictiveValue` metrics offer improved tools for…
**v1.5.2** · urgency: Low · 11/8/2024

## [1.5.2] - 2024-11-07

### Changed

- Re-added `numpy` 2+ support (#2804)

### Fixed

- Fixed IoU scores in detection for either empty predictions/targets leading to wrong scores (#2805)
- Fixed `MetricCollection` compatibility with `torch.jit.script` (#2813)
- Fixed assert in PIT (#2811)
- Patched `np.Inf` for `numpy` 2.0+ (#2826)

---

### Key Contributors

@adamjstewart, @Borda, @SkafteNicki, @StalkerShurik, @yurithefury

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

**v1.5.1** · urgency: Low · 10/23/2024

## [1.5.1] - 2024-10-22

### Fixed

- Handled the changed `_modules` dict type in PyTorch 2.5 that caused collection metrics to fail (#2793)

---

### Key Contributors

@bfolie

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

---

**Full Changelog**: https://github.com/Lightning-AI/torchmetrics/compare/v1.5.0...v1.5.1

**v1.5.0** · urgency: Low · 10/18/2024

Shape metrics are quantitative methods used to assess and compare the geometric properties of objects, often in datasets that represent shapes. One such metric is the Procrustes Disparity, which measures the sum of the squared differences between two datasets after applying a Procrustes transformation. This transformation involves scaling, rotating, and translating the datasets to achieve optimal alignment. The Procrustes Disparity is particularly useful when comparing datasets that are similar…
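The Procrustes Disparity described above can be sketched in 2D with only the standard library: center both point sets, scale each to unit norm, find the best-fitting rotation (closed-form in 2D), and report the remaining squared difference. This is a hypothetical helper for intuition, not torchmetrics' implementation, which works on batched tensors of arbitrary dimension:

```python
import math

def _standardize(points):
    """Center a 2D point set and scale it to unit Frobenius norm."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    centered = [(x - cx, y - cy) for x, y in points]
    norm = math.sqrt(sum(x * x + y * y for x, y in centered))
    return [(x / norm, y / norm) for x, y in centered]

def procrustes_disparity(a, b):
    a, b = _standardize(a), _standardize(b)
    # best-fitting rotation of b onto a, in closed form for 2D
    d = sum(ax * bx + ay * by for (ax, ay), (bx, by) in zip(a, b))
    n = sum(ay * bx - ax * by for (ax, ay), (bx, by) in zip(a, b))
    c = math.hypot(d, n)  # correlation after optimal rotation + scaling
    return 1.0 - c * c    # 0 means the shapes match exactly

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
# the same square rotated 90°, doubled in size, and shifted
moved = [(3 - 2 * y, 5 + 2 * x) for x, y in square]
print(procrustes_disparity(square, moved))  # ~0: identical shapes
```

Because translation, scale, and rotation are factored out first, only genuine shape differences contribute to the disparity.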
**v1.4.3** · urgency: Low · 10/10/2024

## [1.4.3] - 2024-10-10

### Fixed

- Fixed Pearson metric changing its inputs (#2765)
- Fixed bug in `PESQ` metric where `NoUtterancesError` prevented calculating on a batch of data (#2753)
- Fixed corner case in `MatthewsCorrCoef` (#2743)

---

### Key Contributors

@Borda, @SkafteNicki, @veera-puthiran-14082

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

---

**Full Changelog**: https://github.com/Lightning-AI/torchmetrics/compare/v1.4.2...v1.4.3

**v1.4.2** · urgency: Low · 9/13/2024

## [1.4.2] - 2024-09-12

### Added

- Re-added `Chrf` implementation (#2701)

### Fixed

- Fixed wrong aggregation in `segmentation.MeanIoU` (#2698)
- Fixed handling of zero-division error in binary IoU (Jaccard index) calculation (#2726)
- Corrected the padding-related calculation errors in SSIM (#2721)
- Fixed compatibility of audio domain with new `scipy` (#2733)
- Fixed how `prefix`/`postfix` works in `MultitaskWrapper` (#2722)
- Fixed flakiness in tests related to `torch.unique`…

**v1.4.1** · urgency: Low · 8/3/2024

## [1.4.1] - 2024-08-02

### Changed

- Calculate the text color of `ConfusionMatrix` plots based on luminance (#2590)
- Updated `_safe_divide` to allow `Accuracy` to run on the GPU (#2640)
- Improved error messages for intersection detection metrics on wrong user input (#2577)

### Removed

- Dropped `Chrf` implementation due to licensing issues with the upstream package (#2668)

### Fixed

- Fixed bug in `MetricCollection` when using compute groups and `compute` is called…

**v1.4.0.post0** · urgency: Low · 5/15/2024

**Full Changelog**: https://github.com/Lightning-AI/torchmetrics/compare/v1.4.0...v1.4.0.post0

**v1.4.0** · urgency: Low · 5/6/2024

In TorchMetrics v1.4, we are happy to introduce a new domain of metrics to the library: **segmentation metrics**. Segmentation metrics are used to evaluate how well segmentation algorithms perform, e.g., algorithms that take in an image and decide, pixel by pixel, what kind of object each one is. These kinds of algorithms are necessary in applications such as self-driving cars. Segmentation metrics are closely related to classification metrics, but for now, TorchMetrics expects the input to be formatted…
**v1.3.2** · urgency: Low · 3/18/2024

## [1.3.2] - 2024-03-18

### Fixed

- Fixed negative variance estimates in certain image metrics (#2378)
- Fixed dtype being changed by deepspeed for certain regression metrics (#2379)
- Fixed plotting of metric collection when prefix/postfix is set (#2429)
- Fixed bug when `top_k>1` and `average="macro"` for classification metrics (#2423)
- Fixed case where label prediction tensors in classification metrics were not validated correctly (#2427)
- Fixed how AUC scores are calculated in…

**v1.3.1** · urgency: Low · 2/12/2024

## [1.3.1] - 2024-02-12

### Fixed

- Fixed how backprop is handled in the `LPIPS` metric (#2326)
- Fixed `MultitaskWrapper` not being loggable in Lightning when using metric collections (#2349)
- Fixed high memory consumption in the `Perplexity` metric (#2346)
- Fixed cached network in `FeatureShare` not being moved to the correct device (#2348)
- Fixed naming of statistics in `MeanAveragePrecision` with custom max det thresholds (#2367)
- Fixed custom aggregation in retrieval metrics (…

**v1.3.0.post0** · urgency: Low · 1/30/2024

**Full Changelog**: https://github.com/Lightning-AI/torchmetrics/compare/v1.3.0...v1.3.0.post0

**v1.3.0** · urgency: Low · 1/11/2024

TorchMetrics v1.3 is out now! This release introduces seven new metrics across the different subdomains of TorchMetrics and adds some nice features to already established metrics. In this blog post, we present the new metrics with short code samples. We are happy to see the continued adoption of TorchMetrics in over [19,000 GitHub repositories](https://github.com/Lightning-AI/torchmetrics/network/dependents), and we are proud to report that we have passed 1,800 GitHub stars.

## New me…
**v1.2.1** · urgency: Low · 12/1/2023

## [1.2.1] - 2023-11-30

### Added

- Added an error if `NoTrainInceptionV3` is initialized without `torch-fidelity` being installed (#2143)
- Added support for PyTorch `v2.1` (#2142)

### Changed

- Changed default state of `SpectralAngleMapper` and `UniversalImageQualityIndex` to be tensors (#2089)
- Use `arange` and repeat for deterministic bincount (#2184)

### Removed

- Removed unused `lpips` third-party package as dependency of `LearnedPerceptualImagePatchSimilarity`…

**v1.2.0** · urgency: Low · 9/22/2023

TorchMetrics v1.2 is out now! The latest release includes 11 new metrics within a new subdomain: *Clustering*. In this blog post, we briefly explain what clustering is, why it's a useful measure, and the newly added metrics, with code samples.

## Clustering - what is it?

Clustering is an *unsupervised learning* technique. The term unsupervised here refers to the fact that we do not have ground-truth targets as we do in classification. **The primary goal of clustering is to disc…**
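Because there are no ground-truth class names in clustering, these metrics compare two labelings without requiring the labels to match by value. One of the simplest examples is the Rand index, sketched here dependency-free for intuition (a hypothetical helper; torchmetrics ships class-based clustering metrics, including a chance-corrected adjusted variant):

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Fraction of sample pairs on which two labelings agree:
    a pair agrees if both labelings put it in the same cluster,
    or both put it in different clusters."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = 0
    for i, j in pairs:
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        agree += same_a == same_b
    return agree / len(pairs)

# identical partitions under different label names still score 1.0
print(rand_index([0, 0, 1, 1], [5, 5, 9, 9]))  # -> 1.0
```

Note that only the *partition* matters: relabeling clusters leaves the score unchanged, which is exactly the property classification accuracy lacks.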
**v1.1.2** · urgency: Low · 9/11/2023

## [1.1.2] - 2023-09-11

### Fixed

- Fixed tie breaking in the NDCG metric (#2031)
- Fixed bug in `BootStrapper` where evaluating very few samples could lead to a crash (#2052)
- Fixed bug when creating multiple plots that led to not all plots being shown (#2060)
- Fixed performance issues in `RecallAtFixedPrecision` for large batch sizes (#2042)
- Fixed bug related to `MetricCollection` used with custom metrics having `prefix`/`postfix` attributes (#2070)

### Contributors

@Glavi…

**v1.1.1** · urgency: Low · 8/29/2023

## [1.1.1] - 2023-08-29

### Added

- Added `average` argument to `MeanAveragePrecision` (#2018)

### Fixed

- Fixed bug in `PearsonCorrCoef` when updated on single samples at a time (#2019)
- Fixed support for pixel-wise MSE (#2017)
- Fixed bug in `MetricCollection` when used with multiple metrics that return dicts with the same keys (#2027)
- Fixed bug in detection intersection metrics when `class_metrics=True`, resulting in wrong values (#1924)
- Fixed missing attributes `higher_is_bette`…

**v1.1.0** · urgency: Low · 8/22/2023

In version v1.1 of TorchMetrics, a total of five new metrics have been added, bringing the total number of metrics up to 128! In particular, we have two new exciting metrics for evaluating your favorite generative models for images.

### Perceptual Path Length

Introduced in the famous [StyleGAN paper](https://arxiv.org/abs/1812.04948) back in 2018, the Perceptual Path Length metric is used to quantify how smoothly a generator manages to interpolate between points in its latent space. Why does…

**v1.0.3** · urgency: Low · 8/8/2023

## [1.0.3] - 2023-08-08

### Added

- Added warning to `MeanAveragePrecision` if too many detections are observed (#1978)

### Fixed

- Fixed support for int input when `multidim_average="samplewise"` in classification metrics (#1977)
- Fixed x/y labels when plotting confusion matrices (#1976)
- Fixed IOU compute on CUDA (#1982)

### Contributors

@borda, @SkafteNicki, @Vivswan

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_
**v1.0.2** · urgency: Low · 8/3/2023

## [1.0.2] - 2023-08-03

### Added

- Added warning to `PearsonCorrCoeff` if input has a very small variance for its given dtype (#1926)

### Changed

- Changed all non-task-specific classification metrics to be true subtypes of `Metric` (#1963)

### Fixed

- Fixed bug in `CalibrationError` where calculations for double-precision input were performed in float precision (#1919)
- Fixed bug related to the `prefix`/`postfix` arguments in `MetricCollection` and `ClasswiseWrapper` being dup…

**v1.0.1** · urgency: Low · 7/13/2023

## [1.0.1] - 2023-07-13

### Fixed

- Fixed corner case when using `MetricCollection` together with aggregation metrics (#1896)
- Fixed the use of `max_fpr` in the `AUROC` metric when only one class is present (#1895)
- Fixed bug related to empty predictions for the `IntersectionOverUnion` metric (#1892)
- Fixed bug related to `MeanMetric` and broadcasting of weights when NaNs are present (#1898)
- Fixed bug related to the expected input format of pycoco in `MeanAveragePrecision` (#1913)

### Cont…

**v1.0.0** · urgency: Low · 7/5/2023

We are happy to announce that the first major release of TorchMetrics, version v1.0, is publicly available. We have worked hard on a couple of new features for this milestone release, but for v1.0.0 we have also managed to implement over **100** metrics in `torchmetrics`.

## Plotting

The big new feature of v1.0 is a built-in plotting feature. As the old saying goes: *"A picture is worth a thousand words."* Within machine learning, this is definitely also true for many things. Metrics a…

**v0.11.4** · urgency: Low · 3/10/2023

## [0.11.4] - 2023-03-10

### Fixed

- Fixed evaluation of `R2Score` with a near-constant target (#1576)
- Fixed `dtype` conversion when the metric is a submodule (#1583)
- Fixed bug related to `top_k>1` and `ignore_index!=None` in `StatScores`-based metrics (#1589)
- Fixed corner case for `PearsonCorrCoef` when running in DDP mode but only on a single device (#1587)
- Fixed overflow error for specific cases in `MAP` when big areas are calculated (#1607)

### Contributors

@borda, @F…
**v0.11.3** · urgency: Low · 2/28/2023

## [0.11.3] - 2023-02-28

### Fixed

- Fixed classification metrics for `byte` input (#1521)
- Fixed the use of `ignore_index` in `MulticlassJaccardIndex` (#1386)

### Contributors

@SkafteNicki, @vincentvaroquauxads

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

**Full Changelog**: https://github.com/Lightning-AI/metrics/compare/v0.11.2...v0.11.3

**v0.11.2** · urgency: Low · 2/21/2023

## [0.11.2] - 2023-02-21

### Fixed

- Fixed compatibility with XLA in the `_bincount` function (#1471)
- Fixed type hints in methods belonging to the `MetricTracker` wrapper (#1472)
- Fixed `multilabel` in `ExactMatch` (#1474)

### Contributors

@7shoe, @borda, @SkafteNicki, @ValerianRey

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

**Full Changelog**: https://github.com/Lightning-AI/metrics/compare/v0.11.1...v0.11.2

**v0.11.1** · urgency: Low · 1/31/2023

## [0.11.1] - 2023-01-30

### Fixed

- Fixed type checking on the `maximize` parameter at the initialization of `MetricTracker` (#1428)
- Fixed mixed-precision autocast for the `SSIM` metric (#1454)
- Fixed checking for `nltk.punkt` in `RougeScore` if a machine is not online (#1456)
- Fixed wrongly reset method in `MultioutputWrapper` (#1460)
- Fixed `dtype` checking in `PrecisionRecallCurve` for the `target` tensor (#1457)

### Contributors

@borda, @SkafteNicki, @stancld

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

**v0.11.0** · urgency: Low · 11/30/2022

We are happy to announce that TorchMetrics v0.11 is now publicly available. In TorchMetrics v0.11 we have primarily focused on cleaning up after the large classification refactor from v0.10 and on adding new metrics. With v0.11 we are crossing 90+ metrics, nearing the milestone of having 100+ metrics.

## New domains

In TorchMetrics we are not only looking to expand with new metrics in already established metric domains such as classification or regression, but also with new domains. We ar…
**v0.10.3** · urgency: Low · 11/16/2022

## [0.10.3] - 2022-11-16

### Fixed

- Fixed bug in `MetricTracker.best_metric` when `return_step=False` (#1306)
- Fixed bug to prevent users from going into an infinite loop when trying to iterate over a single metric (#1320)

### Contributors

@SkafteNicki

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

**v0.10.2** · urgency: Low · 10/31/2022

## [0.10.2] - 2022-10-31

### Changed

- Changed in-place operation to out-of-place operation in `pairwise_cosine_similarity` (#1288)

### Fixed

- Fixed high memory usage for certain classification metrics when `average='micro'` (#1286)
- Fixed precision problems when `structural_similarity_index_measure` was used with autocast (#1291)
- Fixed slow performance for confusion-matrix-based metrics (#1302)
- Fixed restrictive dtype checking in `spearman_corrcoef` when used with autocast…

**v0.10.1** · urgency: Low · 10/21/2022

## [0.10.1] - 2022-10-21

### Fixed

- Fixed broken clone method for classification metrics (#1250)
- Fixed unintentional downloading of `nltk.punkt` when `lsum` is not in `rouge_keys` (#1258)
- Fixed type casting in the `MAP` metric between `bool` and `float32` (#1150)

### Contributors

@dreaquil, @SkafteNicki, @stancld

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

**v0.10.0** · urgency: Low · 10/4/2022

TorchMetrics v0.10 is now out, significantly changing the whole classification package. This blog post will go over the reasons why the classification package needed to be refactored, what it means for our end users, and finally, what benefits it brings. A guide on how to upgrade your code to the recent changes can be found near the bottom.

## Why the classification metrics need to change

We have for a long time known that there were some underlying problems with how we initially structured…
**v0.9.3** · urgency: Low · 7/23/2022

## [0.9.3] - 2022-08-22

### Added

- Added global option `sync_on_compute` to disable automatic synchronization when `compute` is called (#1107)

### Fixed

- Fixed missing reset in `ClasswiseWrapper` (#1129)
- Fixed `JaccardIndex` multi-label compute (#1125)
- Fixed SSIM to propagate device if `gaussian_kernel` is False, and added a test (#1149)

### Contributors

@KeVoyer1, @krshrimali, @SkafteNicki

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

**v0.9.2** · urgency: Low · 6/29/2022

## [0.9.2] - 2022-06-29

### Fixed

- Fixed mAP calculation for areas with 0 predictions (#1080)
- Fixed bug where average precision state and AUROC state were not merged when using `MetricCollection`s (#1086)
- Skip box conversion if no boxes are present in `MeanAveragePrecision` (#1097)
- Fixed inconsistency in docs and code when setting `average="none"` in the `AveragePrecision` metric (#1116)

### Contributors

@23pointsNorth, @kouyk, @SkafteNicki

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

**v0.9.1** · urgency: Low · 6/8/2022

## [0.9.1] - 2022-06-08

### Added

- Added specific `RuntimeError` when metric object is on the wrong device (#1056)
- Added an option to specify custom n-gram weights for `BLEUScore` and `SacreBLEUScore` instead of using uniform weights only (#1075)

### Fixed

- Fixed aggregation metrics when input only contains zeros (#1070)
- Fixed `TypeError` when providing superclass arguments as `kwargs` (#1069)
- Fixed bug related to state references in metric collection when using compute groups…

**v0.9.0** · urgency: Low · 5/31/2022

## Highlights

TorchMetrics v0.9 is now out, and it brings significant changes to how the forward method works. This blog post goes over these improvements and how they affect both users of TorchMetrics and users who implement custom metrics. TorchMetrics v0.9 also includes several new metrics and bug fixes.

**Blog: [TorchMetrics v0.9 — Faster forward](https://medium.com/p/d595bb321e6d)**

### The Story of the Forward Method

Since the beginning of TorchMetrics, forward has served the…
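The forward method discussed above does double duty: it folds the batch into the metric's accumulated global state *and* returns the value for just that batch. A dependency-free sketch of that contract (hypothetical and simplified; the v0.9 release is precisely about computing the batch value more cheaply than the old run-update-twice approach):

```python
class RunningMean:
    """Toy mean metric illustrating the update/compute/forward split."""

    def __init__(self):
        self.total, self.count = 0.0, 0

    def update(self, values):
        # accumulate into the global states
        self.total += sum(values)
        self.count += len(values)

    def compute(self):
        # global mean over everything seen so far
        return self.total / self.count if self.count else 0.0

    def forward(self, values):
        # accumulate globally AND return this batch's own value
        self.update(values)
        return sum(values) / len(values)


m = RunningMean()
print(m.forward([1.0, 3.0]))  # batch value: 2.0
print(m.forward([5.0]))       # batch value: 5.0
print(m.compute())            # global mean: 3.0
```

The per-batch return is what lets training loops log a step-level metric while the same object still yields the epoch-level value from `compute()`.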
**v0.8.2** · urgency: Low · 5/6/2022

## [0.8.2] - 2022-05-06

### Fixed

- Fixed multi-device aggregation in `PearsonCorrCoef` (#998)
- Fixed MAP metric when using a custom list of thresholds (#995)
- Fixed compatibility between compute groups in `MetricCollection` and the prefix/postfix args (#1007)
- Fixed compatibility with future PyTorch 1.12 in `safe_matmul` (#1011, #1014)

### Contributors

@ben-davidson-6, @Borda, @SkafteNicki, @tanmoyio

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

**v0.8.1** · urgency: Low · 4/27/2022

## [0.8.1] - 2022-04-27

### Changed

- Reimplemented the `signal_distortion_ratio` metric, which removed the absolute requirement of `fast-bss-eval` (#964)

### Fixed

- Fixed "Sort currently does not support bool dtype on CUDA" error in MAP for empty preds (#983)
- Fixed `BinnedPrecisionRecallCurve` when the `thresholds` argument is not provided (#968)
- Fixed `CalibrationError` to work on logit input (#985)

### Contributors

@DuYicong515, @krshrimali, @quancs, @SkafteNicki

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

**v0.8.0** · urgency: Low · 4/15/2022

We are excited to announce that TorchMetrics v0.8 is now available. The release includes several new metrics in the classification and image domains and some performance improvements for those working with metric collections.

## Metric collections just got faster

Common wisdom dictates that you should never evaluate the performance of your models using only a single metric, but instead a collection of metrics. For example, it is common to simultaneously evaluate the accuracy, precision, r…


Similar Packages

- **pytorch-lightning** 2.6.1: PyTorch Lightning is the lightweight PyTorch wrapper for ML researchers. Scale your models. Write less boilerplate.
- **outlines-core** 0.2.14: Structured Text Generation in Rust
- **sagemaker** 3.8.0: Open source library for training and deploying models on Amazon SageMaker.
- **nvidia-cuda-cupti-cu12** 12.9.79: CUDA profiling tools runtime libs.
- **triton** 3.6.0: A language and compiler for custom Deep Learning operations