# inference-server
Deploy your AI/ML model to Amazon SageMaker for Real-Time Inference and Batch Transform using your own Docker container image.
| Name | Type | Version | Platform | Labels | Updated | Size | Downloads |
|---|---|---|---|---|---|---|---|
| noarch/inference-server-1.3.2-pyhd8ed1ab_0.conda | conda | 1.3.2 | noarch | main | Nov 28, 2024, 09:43 PM | 18.68 KB | 1.1K |
| noarch/inference-server-1.3.1-pyhd8ed1ab_1.conda | conda | 1.3.1 | noarch | main | Oct 10, 2024, 08:06 PM | 18.64 KB | 1.2K |
| noarch/inference-server-1.3.1-pyhd8ed1ab_0.conda | conda | 1.3.1 | noarch | main | Oct 9, 2024, 04:55 PM | 18.47 KB | 1.2K |
| noarch/inference-server-1.3.0-pyhd8ed1ab_0.conda | conda | 1.3.0 | noarch | main | Oct 8, 2024, 02:17 PM | 18.47 KB | 1.1K |