
llama-parse

Parse files into RAG-optimized formats.

Description

# LlamaParse

> **⚠️ DEPRECATION NOTICE**
>
> This repository and its packages are deprecated and will be maintained until **May 1, 2026**.
>
> **Please migrate to the new packages:**
> - **Python**: `pip install llama-cloud>=1.0` ([GitHub](https://github.com/run-llama/llama-cloud-py))
> - **TypeScript**: `npm install @llamaindex/llama-cloud` ([GitHub](https://github.com/run-llama/llama-cloud-ts))
>
> The new packages provide the same functionality with improved performance, better support, and active development.

[![PyPI - Downloads](https://img.shields.io/pypi/dm/llama-parse)](https://pypi.org/project/llama-parse/) [![GitHub contributors](https://img.shields.io/github/contributors/run-llama/llama_parse)](https://github.com/run-llama/llama_parse/graphs/contributors) [![Discord](https://img.shields.io/discord/1059199217496772688)](https://discord.gg/dGcwcsnxhU)

LlamaParse is a **GenAI-native document parser** that can parse complex document data for any downstream LLM use case (RAG, agents). It excels at the following:

- ✅ **Broad file type support**: Parses a variety of unstructured file types (.pdf, .pptx, .docx, .xlsx, .html) with text, tables, visual elements, unusual layouts, and more.
- ✅ **Table recognition**: Parses embedded tables accurately into text and semi-structured representations.
- ✅ **Multimodal parsing and chunking**: Extracts visual elements (images/diagrams) into structured formats and returns image chunks using the latest multimodal models.
- ✅ **Custom parsing**: Accepts custom prompt instructions so you can tailor the output to your needs.

LlamaParse integrates directly with [LlamaIndex](https://github.com/run-llama/llama_index).

The free plan includes up to 1,000 pages per day. The paid plan includes 7,000 free pages per week, plus 0.3¢ per additional page by default.

There is a sandbox available to test the API: [**https://cloud.llamaindex.ai/parse ↗**](https://cloud.llamaindex.ai/parse).
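The custom-parsing feature above can be sketched as follows. This is a minimal example, assuming the `parsing_instruction` keyword argument exposed by `llama-parse` (check the docs for your installed version); `receipt.pdf` is a hypothetical input file:

```python
from llama_parse import LlamaParse

# Hypothetical example: steer the parser's output with a custom prompt.
# `parsing_instruction` is assumed here; consult the docs for your version.
parser = LlamaParse(
    api_key="llx-...",  # or set LLAMA_CLOUD_API_KEY in your environment
    result_type="markdown",
    parsing_instruction=(
        "This document is a receipt. Extract the vendor name, date, "
        "and a table of line items with prices."
    ),
)

documents = parser.load_data("./receipt.pdf")  # hypothetical input file
print(documents[0].text)
```

The instruction is passed to the parsing backend along with the document, so the returned markdown reflects the requested structure rather than a verbatim layout.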
Read below for some quickstart information, or see the [full documentation](https://docs.cloud.llamaindex.ai/).

If you're a company interested in enterprise RAG solutions, and/or high-volume or on-prem usage of LlamaParse, come [talk to us](https://www.llamaindex.ai/contact).

## Getting Started

First, log in and get an API key from [**https://cloud.llamaindex.ai/api-key ↗**](https://cloud.llamaindex.ai/api-key).

Then, make sure you have the latest LlamaIndex version installed.

**NOTE:** If you are upgrading from v0.9.X, we recommend following our [migration guide](https://pretty-sodium-5e0.notion.site/v0-10-0-Migration-Guide-6ede431dcb8841b09ea171e7f133bd77), as well as uninstalling your previous version first.

```
pip uninstall llama-index  # run this if upgrading from v0.9.x or older
pip install -U llama-index --upgrade --no-cache-dir --force-reinstall
```

Lastly, install the package:

`pip install llama-parse`

Now you can parse your first PDF file using the command line interface. Use the command `llama-parse [file_paths]`. See the help text with `llama-parse --help`.

```bash
export LLAMA_CLOUD_API_KEY='llx-...'

# output as text
llama-parse my_file.pdf --result-type text --output-file output.txt

# output as markdown
llama-parse my_file.pdf --result-type markdown --output-file output.md

# output as raw json
llama-parse my_file.pdf --output-raw-json --output-file output.json
```

You can also create simple scripts:

```python
import nest_asyncio

nest_asyncio.apply()

from llama_parse import LlamaParse

parser = LlamaParse(
    api_key="llx-...",  # can also be set in your env as LLAMA_CLOUD_API_KEY
    result_type="markdown",  # "markdown" and "text" are available
    num_workers=4,  # if multiple files passed, split into `num_workers` API calls
    verbose=True,
    language="en",  # optionally define a language; default=en
)

# sync
documents = parser.load_data("./my_file.pdf")

# sync batch
documents = parser.load_data(["./my_file1.pdf", "./my_file2.pdf"])

# async
documents = await parser.aload_data("./my_file.pdf")

# async batch
documents = await parser.aload_data(["./my_file1.pdf", "./my_file2.pdf"])
```

## Using with file object

You can parse a file object directly:

```python
import nest_asyncio

nest_asyncio.apply()

from llama_parse import LlamaParse

parser = LlamaParse(
    api_key="llx-...",  # can also be set in your env as LLAMA_CLOUD_API_KEY
    result_type="markdown",  # "markdown" and "text" are available
    num_workers=4,  # if multiple files passed, split into `num_workers` API calls
    verbose=True,
    language="en",  # optionally define a language; default=en
)

file_name = "my_file1.pdf"
extra_info = {"file_name": file_name}

with open(f"./{file_name}", "rb") as f:
    # must provide extra_info with file_name key when passing a file object
    documents = parser.load_data(f, extra_info=extra_info)

# you can also pass file bytes directly
with open(f"./{file_name}", "rb") as f:
    file_bytes = f.read()
    # must provide extra_info with file_name key when passing file bytes
    documents = parser.load_data(file_bytes, extra_info=extra_info)
```
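Because LlamaParse integrates with LlamaIndex, a common pattern is to register the parser as a file extractor for `SimpleDirectoryReader`. A minimal sketch, assuming `llama-index-core` is installed, a `./data` directory of PDFs exists, and an embedding/LLM provider (e.g. an OpenAI key) is configured for the index step:

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_parse import LlamaParse

parser = LlamaParse(result_type="markdown")  # reads LLAMA_CLOUD_API_KEY from env

# Route every .pdf in ./data through LlamaParse instead of the default reader.
file_extractor = {".pdf": parser}
documents = SimpleDirectoryReader(
    "./data", file_extractor=file_extractor
).load_data()

# Downstream RAG use: build a queryable index over the parsed documents.
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
print(query_engine.query("Summarize the key tables in these files."))
```

Mapping the extension to the parser means only PDFs go through the LlamaParse API; other file types in the directory fall back to LlamaIndex's built-in readers.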

Release History

| Version | Changes | Urgency | Date |
| --- | --- | --- | --- |
| 0.6.94 | Imported from PyPI (0.6.94) | Low | 4/21/2026 |


Similar Packages

- **azure-search-documents**: Microsoft Azure Cognitive Search Client Library for Python (azure-template_0.1.0b6187637)
- **apache-tvm-ffi**: tvm ffi (0.1.10)
- **luqum**: A Lucene query parser generating ElasticSearch queries and more! (1.0.0)
- **torchao**: Package for applying ao techniques to GPU models (0.17.0)
- **banks**: A prompt programming language (2.4.1)