
onnxruntime-java

A type-safe, lightweight, modern, and performant Java binding of Microsoft's ONNX Runtime


README

onnxruntime-java

by @yuzawa-san


This is a performant and modern Java binding to Microsoft's ONNX Runtime which uses Java's Foreign Function & Memory API (a.k.a. Project Panama).

This project's goal is to provide a type-safe, lightweight, and performant binding which abstracts the intricacies of the native C API away behind a Java-friendly interface. It is loosely coupled to the upstream project and built on the public (and stable) C API.

The minimum supported Java version is 23, since this project depends on the finalized Foreign Function & Memory API. There are other fine bindings which use JNI and are capable of supporting earlier Java versions.
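To illustrate what a Panama-style binding looks like under the hood, here is a self-contained FFM sketch (requires Java 22 or later) that binds the C standard library's strlen. It uses only java.lang.foreign and is unrelated to this binding's actual API; the class and method names are purely illustrative.

```java
import java.lang.foreign.Arena;
import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.Linker;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.MethodHandle;

public final class StrlenDemo {

    // Bind the C standard library's strlen(const char*) as a Java method handle.
    private static final MethodHandle STRLEN;
    static {
        Linker linker = Linker.nativeLinker();
        MemorySegment addr = linker.defaultLookup().find("strlen").orElseThrow();
        STRLEN = linker.downcallHandle(
                addr, FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));
    }

    // Copy the Java string into native memory as NUL-terminated UTF-8,
    // then call strlen on the native pointer.
    static long nativeStrlen(String s) throws Throwable {
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment cString = arena.allocateFrom(s);
            return (long) STRLEN.invokeExact(cString);
        }
    }

    public static void main(String[] args) throws Throwable {
        System.out.println(nativeStrlen("onnxruntime")); // prints 11
    }
}
```

A JNI binding would need a separate compiled C glue layer for the same call; with FFM, the downcall is declared entirely in Java.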

Usage

This project is released to Maven Central and can be used in your project.

Artifacts

The library is currently built for Linux, Windows, and macOS, on both arm64 and x86_64. These platforms were chosen because the upstream project publishes artifacts for these environments. The published artifacts are listed below. Snapshot releases are periodically published for testing and experimentation.

onnxruntime


The binding with no native libraries. For use as an implementation dependency.

The native library (from Microsoft) will need to be provided at runtime using one of the next two artifacts. Alternatively, if neither of those artifacts is provided, the Java library path (java.library.path) will be used. This allows users to "bring their own" shared library. The API validates that the shared library is minor-version compatible with this library.
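The README does not spell out the exact compatibility rule, but a "minor version compatible" check can be sketched as follows, assuming it means the same major version and a library minor version at least as new as the baseline. The class and method names here are hypothetical, not the binding's actual API.

```java
public final class OnnxVersionCheckSketch {

    // Hypothetical helper: true when the loaded shared library's version
    // (actual) is minor-version compatible with the baseline this binding
    // expects (expected): same major version, and a minor version that is
    // at least the baseline's.
    static boolean isCompatible(String expected, String actual) {
        String[] e = expected.split("\\.");
        String[] a = actual.split("\\.");
        return e[0].equals(a[0])
                && Integer.parseInt(a[1]) >= Integer.parseInt(e[1]);
    }

    public static void main(String[] args) {
        System.out.println(isCompatible("1.24.0", "1.24.3")); // true: newer patch/minor is fine
        System.out.println(isCompatible("1.24.0", "1.23.9")); // false: older minor
        System.out.println(isCompatible("1.24.0", "2.0.0"));  // false: different major
    }
}
```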

onnxruntime-cpu


A collection of native libraries with CPU support for several common OS/architecture combinations. For use as an optional runtime dependency. Include one of the OS/architecture classifiers, such as linux-x86_64, to select a specific platform.

onnxruntime-gpu


A collection of native libraries with GPU (CUDA) support for several common OS/architecture combinations. For use as an optional runtime dependency. Include one of the OS/architecture classifiers, such as linux-x86_64, to select a specific platform.

In your library

There is an example library in the onnxruntime-sample-library directory. The library should use onnxruntime as an implementation dependency. This puts the burden of providing a native library on your end user.

In your application

There is an example application in the onnxruntime-sample-application directory. The application should use onnxruntime as an implementation dependency. The application needs to have access to the native library. You have the option of providing it via a runtime dependency, using a classifier variant of either onnxruntime-cpu or onnxruntime-gpu. Otherwise, the Java library path will be used to load the native library.
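As a sketch, the dependency wiring described above might look like the following in a Gradle Kotlin build script. The group id (com.jyuzawa), version numbers, and classifier here are assumptions for illustration; verify the real coordinates on Maven Central.

```kotlin
// build.gradle.kts (sketch; coordinates below are assumptions,
// check Maven Central for the actual group id and latest version)
dependencies {
    // the binding itself, with no native code bundled
    implementation("com.jyuzawa:onnxruntime:2.1.0")
    // CPU native library for one platform, selected via a classifier;
    // swap in onnxruntime-gpu for CUDA support
    runtimeOnly("com.jyuzawa:onnxruntime-cpu:2.1.0:linux-x86_64")
}
```

Keeping the native library as a runtimeOnly dependency keeps it off the compile classpath, so library consumers remain free to supply a different platform variant.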

The example application can be run with:

./gradlew onnxruntime-sample-application:run

JVM Arguments

Since this uses a native library, the runtime must be given the --enable-native-access JVM option, most likely --enable-native-access=ALL-UNNAMED.
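In a Gradle application build, the flag can be wired into the JVM started by the run task; a minimal build.gradle.kts sketch, assuming the application plugin is applied:

```kotlin
// build.gradle.kts: pass the native-access flag to the JVM started by `run`
application {
    applicationDefaultJvmArgs = listOf("--enable-native-access=ALL-UNNAMED")
}
```

When launching with plain java, the same flag goes on the command line, e.g. java --enable-native-access=ALL-UNNAMED -jar app.jar.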

Execution Providers

Only the execution providers which are exposed in the C API are supported. If you wish to use an execution provider which is present in the C API but not in any of the upstream project's artifacts, you can choose to bring your own onnxruntime shared library to link against.

Versioning

The version of the upstream project used is reflected in the release notes. Semantic versioning is used: the major version is bumped when this API or the underlying C API has backward-incompatible changes, and upstream major version changes will typically be major version changes here. The minor version is bumped for smaller, compatible changes, and upstream minor version changes will typically be minor version changes here.

The minimum native library version (e.g. onnxruntime-cpu) for the onnxruntime artifact will be stated in the release notes.

Release History

v2.1.0 (Urgency: Medium, Date: 3/13/2026)

What's Changed (release notes generated using configuration in .github/release.yml at v2.1.0):

Required Minimum Version Changes
- Upgrade onnxruntime to 1.24.2 by @yuzawa-san in https://github.com/yuzawa-san/onnxruntime-java/pull/377
- Set onnxruntime baseline to v1.24.3 by @yuzawa-san in https://github.com/yuzawa-san/onnxruntime-java/pull/385

Dependency Upgrades
- Bump com.google.protobuf:protoc from 4.33.4 to 4.33.5 by @dependabot[bot] in https://github.com/yuzawa-san/


Similar Packages

onnxruntime (v1.25.0): ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator

modular-image-classification-framework (main@2026-04-20): A modular deep learning framework for training and evaluating image classification models on datasets like CIFAR-10 and MNIST. Supports configurable CNN architectures, automated training, and performa…

cONNXr (0.0.0): Pure C ONNX runtime with zero dependencies for embedded devices

APT (2.9.16.0): AI Productivity Tool - Free and open source, improve user productivity, and protect privacy and data security. Including but not limited to: built-in local exclusive ChatGPT, DeepSeek, Phi, Qwen and o…

fed-rag (v0.0.27): A framework for fine-tuning retrieval-augmented generation (RAG) systems.