
async-lru

Simple LRU cache for asyncio

Description

async-lru
=========

:info: Simple lru cache for asyncio

.. image:: https://github.com/aio-libs/async-lru/actions/workflows/ci-cd.yml/badge.svg?event=push
   :target: https://github.com/aio-libs/async-lru/actions/workflows/ci-cd.yml?query=event:push
   :alt: GitHub Actions CI/CD workflows status

.. image:: https://img.shields.io/pypi/v/async-lru.svg?logo=Python&logoColor=white
   :target: https://pypi.org/project/async-lru
   :alt: async-lru @ PyPI

.. image:: https://codecov.io/gh/aio-libs/async-lru/branch/master/graph/badge.svg
   :target: https://codecov.io/gh/aio-libs/async-lru

.. image:: https://img.shields.io/matrix/aio-libs:matrix.org?label=Discuss%20on%20Matrix%20at%20%23aio-libs%3Amatrix.org&logo=matrix&server_fqdn=matrix.org&style=flat
   :target: https://matrix.to/#/%23aio-libs:matrix.org
   :alt: Matrix Room — #aio-libs:matrix.org

.. image:: https://img.shields.io/matrix/aio-libs-space:matrix.org?label=Discuss%20on%20Matrix%20at%20%23aio-libs-space%3Amatrix.org&logo=matrix&server_fqdn=matrix.org&style=flat
   :target: https://matrix.to/#/%23aio-libs-space:matrix.org
   :alt: Matrix Space — #aio-libs-space:matrix.org

Installation
------------

.. code-block:: shell

   pip install async-lru

Usage
-----

This package is a port of Python's built-in `functools.lru_cache <https://docs.python.org/3/library/functools.html#functools.lru_cache>`_ function for `asyncio <https://docs.python.org/3/library/asyncio.html>`_. To better handle async behaviour, it also ensures that multiple concurrent calls result in only one call to the wrapped function, with every ``await`` receiving the result of that call when it completes.
.. code-block:: python

   import asyncio

   import aiohttp

   from async_lru import alru_cache


   @alru_cache(maxsize=32)
   async def get_pep(num):
       resource = 'http://www.python.org/dev/peps/pep-%04d/' % num
       async with aiohttp.ClientSession() as session:
           try:
               async with session.get(resource) as s:
                   return await s.read()
           except aiohttp.ClientError:
               return 'Not Found'


   async def main():
       for n in 8, 290, 308, 320, 8, 218, 320, 279, 289, 320, 9991:
           pep = await get_pep(n)
           print(n, len(pep))
       print(get_pep.cache_info())
       # CacheInfo(hits=3, misses=8, maxsize=32, currsize=8)

       # closing is optional, but highly recommended
       await get_pep.cache_close()


   asyncio.run(main())

TTL (time-to-live in seconds, expiration on timeout) is supported via the ``ttl`` parameter (off by default):

.. code-block:: python

   @alru_cache(ttl=5)
   async def func(arg):
       return arg * 2

To prevent thundering-herd issues when many cache entries expire simultaneously, you can add ``jitter`` to randomize the TTL of each entry:

.. code-block:: python

   @alru_cache(ttl=3600, jitter=1800)
   async def func(arg):
       return arg * 2

With ``ttl=3600, jitter=1800``, each cache entry gets a random TTL between 3600 and 5400 seconds, spreading invalidations out over time.

The library supports explicit invalidation of a specific function call via ``cache_invalidate()``:

.. code-block:: python

   @alru_cache(ttl=5)
   async def func(arg1, arg2):
       return arg1 + arg2

   func.cache_invalidate(1, arg2=2)

The method returns ``True`` if the corresponding set of arguments was cached, ``False`` otherwise.

To check whether a specific set of arguments is present in the cache without affecting the hit/miss counters or LRU ordering, use ``cache_contains()``:

.. code-block:: python

   @alru_cache(maxsize=32)
   async def func(arg1, arg2):
       return arg1 + arg2

   await func(1, arg2=2)

   func.cache_contains(1, arg2=2)  # True
   func.cache_contains(3, arg2=4)  # False

The method returns ``True`` if the result for the given arguments is cached, ``False`` otherwise.
Limitations
-----------

**Event Loop Affinity**: ``alru_cache`` enforces that a cache instance is used with only one event loop. If you attempt to use a cached function from a different event loop than the one where it was first called, a ``RuntimeError`` is raised:

.. code-block:: text

   RuntimeError: alru_cache is not safe to use across event loops:
   this cache instance was first used with a different event loop.
   Use separate cache instances per event loop.

For typical asyncio applications using a single event loop, this is automatic and requires no configuration. If your application uses multiple event loops, create separate cache instances per loop:

.. code-block:: python

   import threading

   _local = threading.local()

   def get_cached_fetcher():
       if not hasattr(_local, 'fetcher'):
           @alru_cache(maxsize=100)
           async def fetch_data(key):
               ...
           _local.fetcher = fetch_data
       return _local.fetcher

You can also reuse the logic of an already decorated function in a new loop.

Release History

Version | Changes | Urgency | Date
2.3.0 | Imported from PyPI (2.3.0) | Low | 4/21/2026
v2.3.0 | Added ``cache_contains()`` for read-only key lookup; changed cross-loop cache access to auto-reset and rebind to the current event loop; added ``AlruCacheLoopResetWarning`` when an auto-reset happens due to an event loop change; forwarded ``cache_close(wait=...)`` for bound methods. | Low | 3/19/2026
v2.2.0 | Added a ``jitter`` parameter to randomise TTL; raise ``RuntimeError`` when the cache is used by a different loop. | Low | 2/20/2026
v2.1.0 | Fixed cancelling of a task when all tasks waiting on it have been cancelled; fixed a DeprecationWarning from ``asyncio.iscoroutinefunction``. | Low | 1/17/2026
v2.0.5 | Fixed a memory leak on exceptions and a minor performance improvement. | Low | 3/16/2025
v2.0.4 | Fixed an error when there are pending tasks while calling ``cache_clear()``. | Low | 7/27/2023
v2.0.3 | Fixed a ``KeyError`` that could occur when using ``ttl`` with ``maxsize``; dropped the ``typing-extensions`` dependency on Python 3.11+. | Low | 7/7/2023
v1.0.3 | Release v1.0.3 | Low | 5/7/2022


Similar Packages

python-socks (2.8.1): Proxy (SOCKS4, SOCKS5, HTTP CONNECT) client for Python
txaio (25.12.2): Compatibility API between asyncio/Twisted/Trollius
autobahn (25.12.2): WebSocket client & server library, WAMP real-time framework
greenback (1.3.0): Reenter an async event loop from synchronous code
aiolimiter (1.2.1): asyncio rate limiter, a leaky bucket implementation