Description
# 🗂️ LlamaIndex 🦙

[PyPI](https://pypi.org/project/llama-index/) · [Build](https://github.com/run-llama/llama_index/actions/workflows/build_package.yml) · [Contributors](https://github.com/jerryjliu/llama_index/graphs/contributors) · [Discord](https://discord.gg/dGcwcsnxhU) · [X](https://x.com/llama_index) · [Reddit](https://www.reddit.com/r/LlamaIndex/) · [Phorm](https://www.phorm.ai/query?projectId=c5863b56-6703-4a5d-87b6-7e6031bf16b6)

LlamaIndex OSS (by [LlamaIndex](https://llamaindex.ai?utm_medium=li_github&utm_source=github&utm_campaign=2026--)) is an open-source framework for building agentic applications. **[LlamaParse](https://cloud.llamaindex.ai?utm_medium=li_github&utm_source=github&utm_campaign=2026--)** is our enterprise platform for agentic OCR, parsing, extraction, indexing, and more. You can use LlamaParse with this framework or on its own; see [LlamaParse](#llamacloud-document-agent-platform) below for signup and product links.

> ### 📚 **Documentation:**
>
> - [LlamaParse](https://developers.llamaindex.ai/python/cloud/llamaparse/?utm_medium=li_github&utm_source=github&utm_campaign=2026--)
> - [LlamaIndex OSS](https://developers.llamaindex.ai/python/framework/?utm_medium=li_github&utm_source=github&utm_campaign=2026--)
> - [LlamaAgents](https://developers.llamaindex.ai/python/llamaagents/overview/?utm_medium=li_github&utm_source=github&utm_campaign=2026--)

Building with LlamaIndex typically involves working with LlamaIndex core and a chosen set of integrations (or plugins).
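
A minimal starter sketch of the agent API described above, assuming `pip install llama-index` and an `OPENAI_API_KEY` in the environment; the model name and the `multiply` tool are illustrative, not prescribed by this page:

```python
import asyncio

from llama_index.core.agent.workflow import FunctionAgent
from llama_index.llms.openai import OpenAI


def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b


# A workflow-based function-calling agent with a single tool.
agent = FunctionAgent(
    tools=[multiply],
    llm=OpenAI(model="gpt-4o-mini"),
    system_prompt="You are a helpful assistant that can multiply two numbers.",
)


async def main() -> None:
    response = await agent.run("What is 1234 * 4567?")
    print(str(response))


if __name__ == "__main__":
    asyncio.run(main())
```

`FunctionAgent` is one of the workflow-based agents that the v0.13.0 release notes below direct users to migrate to.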
Release History
| Version | Changes | Urgency | Date |
|---|---|---|---|
| 0.14.21 | Imported from PyPI (0.14.21) | Low | 4/21/2026 |
| v0.14.21 | **llama-index-callbacks-honeyhive [0.5.0]** - chore(deps): bump the pip group across 87 directories with 2 updates ([#21382](https://github.com/run-llama/llama_index/pull/21382)) - chore(deps): bump the pip group across 68 directories with 2 updates ([#21394](https://github.com/run-llama/llama_index/pull/21394)) **llama-index-core [0.14.21]** - fix(core): prevent `KeyError` in `DocumentSummaryIndex.delete_nodes` when invalid node ID is provided ([#21067](http | High | 4/21/2026 |
| v0.14.20 | **llama-index-agent-agentmesh [0.2.0]** - fix vulnerability with nltk ([#21275](https://github.com/run-llama/llama_index/pull/21275)) **llama-index-callbacks-agentops [0.5.0]** - chore(deps): bump the uv group across 50 directories with 2 updates ([#21164](https://github.com/run-llama/llama_index/pull/21164)) - chore(deps): bump the uv group across 24 directories with 1 update ([#21219](https://github.com/run-llama/llama_index/pull/21219)) - chore(deps): bump | Medium | 4/3/2026 |
| v0.14.19 | **llama-index-agent-agentmesh [0.2.0]** - chore(deps): bump the uv group across 49 directories with 1 update ([#21083](https://github.com/run-llama/llama_index/pull/21083)) **llama-index-callbacks-argilla [0.5.0]** - chore(deps): bump the uv group across 3 directories with 1 update ([#21069](https://github.com/run-llama/llama_index/pull/21069)) **llama-index-core [0.14.19]** - fix: pass `delete_from_docstore` parameter in `BaseIndex.delete_ref_doc` ([#20990 | Medium | 3/25/2026 |
| v0.14.18 | **llama-index-agent-agentmesh [0.2.0]** - chore: deprecate python 3.9 once and for all ([#20956](https://github.com/run-llama/llama_index/pull/20956)) **llama-index-agent-azure [0.3.0]** - chore: deprecate python 3.9 once and for all ([#20956](https://github.com/run-llama/llama_index/pull/20956)) **llama-index-callbacks-agentops [0.5.0]** - chore: deprecate python 3.9 once and for all ([#20956](https://github.com/run-llama/llama_index/pull/20956)) ### llam | Low | 3/16/2026 |
| v0.14.16 | **llama-index-core [0.14.16]** - Add token-bucket rate limiter for LLM and embedding API calls ([#20712](https://github.com/run-llama/llama_index/pull/20712)) - Fix/20706 chonkie init doc ([#20713](https://github.com/run-llama/llama_index/pull/20713)) - fix: pass tool_choice through FunctionCallingProgram ([#20740](https://github.com/run-llama/llama_index/pull/20740)) - feat: Multimodal LLMReranker ([#20743](https://github.com/run-llama/llama_index/pull/20743)) | Low | 3/10/2026 |
| v0.14.15 | **llama-index-agent-agentmesh [0.1.0]** - [Integration] AgentMesh: Trust Layer for LlamaIndex Agents ([#20644](https://github.com/run-llama/llama_index/pull/20644)) **llama-index-core [0.14.15]** - Support basic operations for multimodal types ([#20640](https://github.com/run-llama/llama_index/pull/20640)) - Feat recursive llm type support ([#20642](https://github.com/run-llama/llama_index/pull/20642)) - fix: remove redundant metadata_seperator field from Tex | Low | 2/18/2026 |
| v0.14.14 | **llama-index-callbacks-wandb [0.4.2]** - Fix potential crashes and improve security defaults in core components ([#20610](https://github.com/run-llama/llama_index/pull/20610)) **llama-index-core [0.14.14]** - fix: catch pydantic ValidationError in VectorStoreQueryOutputParser ([#20450](https://github.com/run-llama/llama_index/pull/20450)) - fix: distinguish empty string from None in MediaResource.hash ([#20451](https://github.com/run-llama/llama_index/pull/2 | Low | 2/10/2026 |
| v0.14.13 | **llama-index-core [0.14.13]** - feat: add early_stopping_method parameter to agent workflows ([#20389](https://github.com/run-llama/llama_index/pull/20389)) - feat: Add token-based code splitting support to CodeSplitter ([#20438](https://github.com/run-llama/llama_index/pull/20438)) - Add RayIngestionPipeline integration for distributed data ingestion ([#20443](https://github.com/run-llama/llama_index/pull/20443)) - Added the multi-modal version of the Condens | Low | 1/21/2026 |
| v0.14.12 | **llama-index-callbacks-agentops [0.4.1]** - Feat/async tool spec support ([#20338](https://github.com/run-llama/llama_index/pull/20338)) **llama-index-core [0.14.12]** - Feat/async tool spec support ([#20338](https://github.com/run-llama/llama_index/pull/20338)) - Improve `MockFunctionCallingLLM` ([#20356](https://github.com/run-llama/llama_index/pull/20356)) - fix(openai): sanitize generic Pydantic model schema names ([#20371](https://github.com/run-llama/l | Low | 12/30/2025 |
| v0.14.10 | **llama-index-core [0.14.10]** - feat: add mock function calling llm ([#20331](https://github.com/run-llama/llama_index/pull/20331)) **llama-index-llms-qianfan [0.4.1]** - test: fix typo 'reponse' to 'response' in variable names ([#20329](https://github.com/run-llama/llama_index/pull/20329)) **llama-index-tools-airweave [0.1.0]** - feat: add Airweave tool integration with advanced search features ([#20111](https://github.com/run-llama/llama_index/pull/20111 | Low | 12/4/2025 |
| v0.14.9 | **llama-index-agent-azure [0.2.1]** - fix: Pin azure-ai-projects version to prevent breaking changes ([#20255](https://github.com/run-llama/llama_index/pull/20255)) **llama-index-core [0.14.9]** - MultiModalVectorStoreIndex now returns a multi-modal ContextChatEngine. ([#20265](https://github.com/run-llama/llama_index/pull/20265)) - Ingestion to vector store now ensures that \_node-content is readable ([#20266](https://github.com/run-llama/llama_index/pull/20 | Low | 12/2/2025 |
| v0.14.8 | **llama-index-core [0.14.8]** - Fix ReActOutputParser getting stuck when "Answer:" contains "Action:" ([#20098](https://github.com/run-llama/llama_index/pull/20098)) - Add buffer to image, audio, video and document blocks ([#20153](https://github.com/run-llama/llama_index/pull/20153)) - fix(agent): Handle multi-block ChatMessage in ReActAgent ([#20196](https://github.com/run-llama/llama_index/pull/20196)) - Fix/20209 ([#20214](https://github.com/run-llama/llama | Low | 11/10/2025 |
| v0.14.7 | **llama-index-core [0.14.7]** - Feat/serpex tool integration ([#20141](https://github.com/run-llama/llama_index/pull/20141)) - Fix outdated error message about setting LLM ([#20157](https://github.com/run-llama/llama_index/pull/20157)) - Fixing some recently failing tests ([#20165](https://github.com/run-llama/llama_index/pull/20165)) - Fix: update lock to latest workflow and fix issues ([#20173](https://github.com/run-llama/llama_index/pull/20173)) - fix: ensu | Low | 10/30/2025 |
| v0.14.6 | **llama-index-core [0.14.6]** - Add allow_parallel_tool_calls for non-streaming ([#20117](https://github.com/run-llama/llama_index/pull/20117)) - Fix invalid use of field-specific metadata ([#20122](https://github.com/run-llama/llama_index/pull/20122)) - update doc for SemanticSplitterNodeParser ([#20125](https://github.com/run-llama/llama_index/pull/20125)) - fix rare cases when sentence splits are larger than chunk size ([#20147](https://github.com/run-llama/l | Low | 10/26/2025 |
| v0.14.5 | **llama-index-core [0.14.5]** - Remove debug print ([#20000](https://github.com/run-llama/llama_index/pull/20000)) - safely initialize RefDocInfo in Docstore ([#20031](https://github.com/run-llama/llama_index/pull/20031)) - Add progress bar for multiprocess loading ([#20048](https://github.com/run-llama/llama_index/pull/20048)) - Fix duplicate node positions when identical text appears multiple times in document ([#20050](https://github.com/run-llama/llama_inde | Low | 10/15/2025 |
| v0.14.4 | **llama-index-core [0.14.4]** - fix pre-release installs ([#20010](https://github.com/run-llama/llama_index/pull/20010)) **llama-index-embeddings-anyscale [0.4.2]** - fix llm deps for openai ([#19944](https://github.com/run-llama/llama_index/pull/19944)) **llama-index-embeddings-baseten [0.1.2]** - fix llm deps for openai ([#19944](https://github.com/run-llama/llama_index/pull/19944)) **llama-index-embeddings-fireworks [0.4.2]** - fix ll | Low | 10/3/2025 |
| v0.14.3 | **llama-index-core [0.14.3]** - Fix Gemini thought signature serialization ([#19891](https://github.com/run-llama/llama_index/pull/19891)) - Adding a ThinkingBlock among content blocks ([#19919](https://github.com/run-llama/llama_index/pull/19919)) **llama-index-llms-anthropic [0.9.0]** - Adding a ThinkingBlock among content blocks ([#19919](https://github.com/run-llama/llama_index/pull/19919)) **llama-index-llms-baseten [0.1.4]** - added kimik2 0905 and re | Low | 9/24/2025 |
| v0.14.2 | No release notes. | Low | 9/16/2025 |
| v0.14.1.post1 | No release notes. | Low | 9/15/2025 |
| v0.14.1 | No release notes. | Low | 9/15/2025 |
| v0.14.0 | **NOTE:** All packages have been bumped to handle the latest llama-index-core version. **`llama-index-core` [0.14.0]** - breaking: bumped `llama-index-workflows` dependency to 2.0 (see the workflow sketch after this table) - Improve stacktrace clarity by avoiding wrapping errors in WorkflowRuntimeError - Remove deprecated checkpointer feature - Remove deprecated sub-workflows feature - Remove deprecated `send_event` method from Workflow class (still existing on the Context class) - Remove de | Low | 9/8/2025 |
| v0.13.6 | No release notes. | Low | 9/7/2025 |
| v0.13.5 | **`llama-index-core` [0.13.5]** - feat: add thinking delta field to AgentStream events to expose from LLM responses (#19785) - fix: fix path handling in SimpleDirectoryReader and PDFReader (#19794) **`llama-index-llms-bedrock-converse` [0.9.0]** - feat: add system prompt and tool caching config kwargs to BedrockConverse (#19737) **`llama-index-llms-litellm` [0.6.2]** - fix: Handle missing tool call IDs with UUID fallback (#19789) - fix: Fix critica | Low | 9/4/2025 |
| v0.13.4 | **`llama-index-core` [0.13.4]** - feat: Add PostgreSQL schema support to Memory and SQLAlchemyChatStore (#19741) - feat: add missing sync wrapper of put_messages in memory (#19746) - feat: add option for an initial tool choice in FunctionAgent (#19738) - fix: Calling ContextChatEngine with a QueryBundle (instead of a string) (#19714) **`llama-index-embeddings-baseten` [0.1.0]** - feat: baseten integration (#19710) ### `llama-index-embeddings-i | Low | 9/2/2025 |
| v0.13.3.post1 | No release notes. | Low | 8/29/2025 |
| v0.13.3 | **`llama-index-core` [0.13.3]** - fix: add timeouts on image `.get()` requests (#19723) - fix: fix bug where StreamingAgentChatResponse loses messages (#19674) - fix: Fix crash when retrieving from an empty vector store index (#19706) - fix: Calling ContextChatEngine with a QueryBundle (instead of a string) (#19714) - fix: Fix faithfulness evaluate crash when no images provided (#19686) **`llama-index-embeddings-heroku` [0.1.0]** - feat: Adds support for HerokuE | Low | 8/22/2025 |
| v0.13.2.post1 | Docs fixes. | Low | 8/14/2025 |
| v0.13.2 | **`llama-index-core` [0.13.2]** - feat: allow streaming to be disabled in agents (#19668) - fix: respect the value of NLTK_DATA env var if present (#19664) - fix: Order preservation and fetching in batch non-cached embeddings in `a/get_text_embedding_batch()` (#19536) **`llama-index-embeddings-ollama` [0.8.1]** - fix: Access embedding output (#19635) - fix: use normalized embeddings (#19622) **`llama-index-graph-rag-cognee` [0.3.0]** - fix: Update and fix c | Low | 8/14/2025 |
| v0.13.1 | **`llama-index-core` [0.13.1]** - fix: safer token counting in messages (#19599) - fix: Fix Document truncation in `FunctionTool._parse_tool_output` (#19585) - feat: Enabled partially formatted system prompt for ReAct agent (#19598) **`llama-index-embeddings-ollama` [0.8.0]** - fix: use /embed instead of /embeddings for ollama (#19622) **`llama-index-embeddings-voyageai` [0.4.1]** - feat: Add support for voyage context embeddings (#19590) ### `llama-index- | Low | 8/8/2025 |
| v0.13.0.post3 | No release notes. | Low | 8/8/2025 |
| v0.13.0.post2 | No release notes. | Low | 8/5/2025 |
| v0.13.0.post1 | No release notes. | Low | 7/31/2025 |
| v0.13.0 | **NOTE:** All packages have been bumped to handle the latest llama-index-core version. **`llama-index-core` [0.13.0]** - breaking: removed deprecated agent classes, including `FunctionCallingAgent`, the older `ReActAgent` implementation, `AgentRunner`, all step workers, `StructuredAgentPlanner`, `OpenAIAgent`, and more. All users should migrate to the new workflow-based agents: `FunctionAgent`, `CodeActAgent`, `ReActAgent`, and `AgentWorkflow` (#19529) - breaking: remov | Low | 7/31/2025 |
| v0.12.52.post1 | No release notes. | Low | 7/28/2025 |
| v0.12.52 | **`llama-index-core` [0.12.52.post1]** - fix: do not write system prompt to memory in agents (#19512) **`llama-index-core` [0.12.52]** - fix: Fix missing prompt in async MultiModalLLMProgram calls (#19504) - fix: Properly raise errors from docstore, fixes Vector Index Retrieval for `stores_text=True/False` (#19501) **`llama-index-indices-managed-bge-m3` [0.5.0]** - feat: optimize memory usage for BGEM3Index persistence (#19496) ### `llama- | Low | 7/23/2025 |
| v0.12.51 | **`llama-index-core` [0.12.51]** - feat: Enhance FunctionTool with auto type conversion for basic Python types like date when using pydantic fields in functions (#19479) - fix: Fix retriever KeyError when using FAISS and other vector stores that do not store text (#19476) - fix: add system prompt to memory and use it also for structured generation (#19490) **`llama-index-readers-azstorage-blob` [0.3.2]** - fix: Fix metadata serialization issue in A | Low | 7/22/2025 |
| v0.12.50 | **`llama-index-core` [0.12.50]** - feat: support html table extraction in MarkdownElementNodeParser (#19449) - fix/slightly breaking: make `get_cache_dir()` function more secure by changing default location (#19415) - fix: resolve race condition in SQLAlchemyChatStore with precise timestamps (#19432) - fix: update document store import to use BaseDocumentStore in DocumentContextExtractor (#19466) - fix: improve empty retrieval check in vector index retriever (# | Low | 7/19/2025 |
| v0.12.49 | **`llama-index-core` [0.12.49]** - fix: skip tests on CI (#19416) - fix: fix structured output (#19414) - Fix: prevent duplicate triplets in SimpleGraphStore.upsert_triplet (#19404) - Add retry capability to workflow agents (#19393) - chore: modifying raptors dependencies with stricter rules to avoid test failures (#19394) - feat: adding a first implementation of structured output in agents (#19337) - Add tests for and fix issues with Vector Store n | Low | 7/14/2025 |
| v0.12.48 | **`llama-index-core` [0.12.48]** - fix: convert dict chat_history to ChatMessage objects in AgentWorkflowStartEvent (#19371) - fix: Replace ctx.get/set with ctx.store.get/set in Context (#19350) - Bump the pip group across 6 directories with 1 update (#19357) - Make fewer trips to KV store during Document Hash Checks (#19362) - Don't store Copy of document in metadata and properly return Nodes (#19343) - Bump llama-index-core from 0.12.8 to 0.12.41 i | Low | 7/9/2025 |
| v0.12.47 | **`llama-index-core` [0.12.47]** - feat: add default `max_iterations` arg to `.run()` of 20 for agents (#19035) - feat: set `tool_required` to `True` for `FunctionCallingProgram` and structured LLMs where supported (#19326) - fix: fix missing raw in agent workflow events (#19325) - fix: fixed parsing of empty list in parsing json output (#19318) - chore: Deprecate Multi Modal LLMs (#19115) - All existing multi-modal llms are now extensions of their base `LLM` counter | Low | 7/7/2025 |
| v0.12.46.post1 | No release notes. | Low | 7/3/2025 |
| v0.12.46 | **`llama-index-core` [0.12.46]** - feat: Add async delete and insert to vector store index (#19281) - fix: Fixing ChatMessage to str handling of empty inputs (#19302) - fix: fix function tool context detection with typed context (#19309) - fix: inconsistent ref node handling (#19286) - chore: simplify citation block schema (#19308) **`llama-index-embeddings-google-genai` [0.2.1]** - chore: bump min google-genai version (#19304) ### `llama-ind | Low | 7/3/2025 |
| v0.12.45 | **`llama-index-core` [0.12.45]** - feat: allow tools to output content blocks (#19265) - feat: Add chat UI events and models to core package (#19242) - fix: Support loading `Node` from ingestion cache (#19279) - fix: Fix SemanticDoubleMergingSplitterNodeParser not respecting max_chunk_size (#19235) - fix: replace `get_doc_id()` with `id_` in base index (#19266) - chore: remove usage and references to deprecated Context get/set API (#19275) - chore: | Low | 7/1/2025 |
| v0.12.44.post1 | No release notes. | Low | 6/30/2025 |
| v0.12.44 | **`llama-index-core` [0.12.44]** - feat: Adding a `CachePoint` content block for caching chat messages (#19193) - fix: fix react system header formatting in workflow agent (#19158) - fix: fix ReActOutputParser when no "Thought:" prefix is produced by the LLM (#19190) - fix: Fixed string stripping in react output parser (#19192) - fix: properly handle system prompt for CodeAct agent (#19191) - fix: Exclude raw field in AgentStream event to fix potential serialization iss | Low | 6/26/2025 |
| v0.12.43 | **`llama-index-core` [0.12.43]** - feat: Make BaseWorkflowAgent a workflow itself (#19052) - fix: make the progress bar of title extractor unified (#19131) - fix: Use `get_tqdm_iterable` in SimpleDirectoryReader (#18722) - chore: move out Workflows code to `llama-index-workflows` and keeping backward compatibility (#19043) - chore: move instrumentation code out to its own package `llama-index-instrumentation` (#19062) **`llama-index-llms-bedrock-converse` [0.7.2]** | Low | 6/19/2025 |
| v0.12.42 | **`llama-index-core` [0.12.42]** - fix: pass input message to memory get (#19054) - fix: use async memory operations within async functions (#19032) - fix: Using uuid instead of hashing for broader compatibility in SQLTableNodeMapping (#19011) **`llama-index-embeddings-bedrock` [0.5.1]** - feat: Update aioboto3 dependency (#19015) **`llama-index-indices-managed-llama-cloud` [0.7.7]** - feat: figure retrieval SDK integration (#19017) - fix: Return empty list w | Low | 6/12/2025 |
| v0.12.41 | **`llama-index-core` [0.12.41]** - feat: Add MutableMappingKVStore for easier caching (#18893) - fix: async functions in tool specs (#19000) - fix: properly apply file limit to SimpleDirectoryReader (#18983) - fix: overwriting of LLM callback manager from Settings (#18951) - fix: Adding warning in the docstring of JsonPickleSerializer for the user to deserialize only safe things, rename to PickleSerializer (#18943) - fix: ImageDocument path and url checking to ensure t | Low | 6/7/2025 |
| v0.12.40 | **`llama-index-core` [0.12.40]** - feat: Add StopEvent step validation so only one workflow step can handle StopEvent (#18932) - fix: Add compatibility check before providing `tool_required` to LLM args (#18922) **`llama-index-embeddings-cohere` [0.5.1]** - fix: add batch size validation with 96 limit for Cohere API (#18915) **`llama-index-llms-anthropic` [0.7.2]** - feat: Support passing static AWS credentials to Anthropic Bedrock (#18935) - fix: Handle untes | Low | 6/3/2025 |
| v0.12.39 | **`llama-index-core` [0.12.39]** - feat: Adding Resource to perform dependency injection in Workflows (docs coming soon!) (#18884) - feat: Add `tool_required` param to function calling LLMs (#18654) - fix: make prefix and response non-required for hitl events (#18896) - fix: SelectionOutputParser when LLM chooses no choices (#18886) **`llama-index-indices-managed-llama-cloud` [0.7.2]** - feat: add non persisted composite retrieval (#18908) | Low | 5/30/2025 |
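
The v0.14.0 and v0.12.43 rows above note that the workflow engine now lives in the separate `llama-index-workflows` package (2.0 as of v0.14.0), with backward-compatible imports kept under `llama_index.core.workflow`. A minimal sketch of that event-driven Workflow API follows; the class name, step name, and echo logic are illustrative, not taken from the release notes:

```python
import asyncio

from llama_index.core.workflow import StartEvent, StopEvent, Workflow, step


class EchoFlow(Workflow):
    """Single-step workflow: consumes the StartEvent and returns a StopEvent."""

    @step
    async def echo(self, ev: StartEvent) -> StopEvent:
        # Keyword arguments passed to .run() are exposed as StartEvent attributes.
        return StopEvent(result=f"echo: {ev.message}")


async def main() -> None:
    # timeout is in seconds; run() resolves to the StopEvent's result.
    result = await EchoFlow(timeout=10).run(message="hello")
    print(result)  # echo: hello


if __name__ == "__main__":
    asyncio.run(main())
```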
