inference-server

Deploy your AI/ML model to Amazon SageMaker for Real-Time Inference and Batch Transform using your own Docker container image.

Installation

To install this package, run the following:

Conda
$ conda install conda-forge::inference-server

Description

inference-server is a Python library for deploying an AI/ML model to Amazon SageMaker for real-time inference and batch transform. It simplifies serving a model from your own Docker container image.
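For context, Amazon SageMaker's bring-your-own-container contract (documented by AWS, and what a library like this manages for you) requires the serving container to answer `GET /ping` health checks and `POST /invocations` inference requests on port 8080. The sketch below shows that contract with only the Python standard library; `EchoModel` is a hypothetical stand-in for a real model, and this is not inference-server's own API.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class EchoModel:
    """Hypothetical model: echoes its input back as the 'prediction'."""

    def predict(self, payload):
        return {"prediction": payload}


MODEL = EchoModel()


class InferenceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # SageMaker probes /ping; any 200 response marks the container healthy.
        if self.path == "/ping":
            self.send_response(200)
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

    def do_POST(self):
        # SageMaker forwards each client request to /invocations.
        if self.path == "/invocations":
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length) or b"null")
            body = json.dumps(MODEL.predict(payload)).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the example quiet


# In a real container the server would listen on port 8080:
#   HTTPServer(("0.0.0.0", 8080), InferenceHandler).serve_forever()
```

A library such as inference-server handles this plumbing so you can focus on the model code rather than the HTTP contract.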

About

Last Updated

Nov 28, 2024 at 21:43

License

Apache-2.0

Supported Platforms

noarch