inference-server
Deploy your AI/ML model to Amazon SageMaker for Real-Time Inference and Batch Transform using your own Docker container image.
To install this package, run one of the following:
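The install commands themselves were not captured on this page. Assuming the package is published on PyPI under the name shown in the repository URL (an assumption, not confirmed by this page), a typical installation would be:

```shell
# Install inference-server from PyPI (assumes the PyPI package
# name matches the repository name "inference-server")
pip install inference-server
```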
inference-server is a Python library for deploying an AI/ML model to Amazon SageMaker for real-time inference and batch transform. The library simplifies deployment using your own Docker container image.
Last Updated
Nov 28, 2024 at 21:43
License
Apache-2.0
Supported Platforms
GitHub Repository
https://github.com/jpmorganchase/inference-server
Documentation
https://inference-server.readthedocs.io