TorchServe is a tool for serving neural net models for inference

Project description

TorchServe (PyTorch model server) is a flexible, easy-to-use tool for serving deep learning models exported from PyTorch.

Use the TorchServe CLI or the pre-configured Docker images to start a service that exposes HTTP endpoints for handling model inference requests.
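For example, the CLI workflow typically looks like the sketch below. The model archive name (densenet161.mar), the model-store directory, and the sample image are placeholders; substitute your own archive produced with torch-model-archiver:

```shell
# Start TorchServe, pointing it at a directory of .mar model archives
# (the archive name below is a placeholder for your own model)
torchserve --start --model-store model_store --models densenet161.mar

# Send an inference request to the default inference endpoint (port 8080)
curl http://127.0.0.1:8080/predictions/densenet161 -T kitten.jpg

# Stop the server when finished
torchserve --stop
```

By default the inference API listens on port 8080 and the management API on port 8081; both are configurable.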

Installation

Full installation instructions are in the project repo: https://github.com/pytorch/serve/blob/master/README.md

Source code

You can check out the latest source code as follows:

git clone https://github.com/pytorch/serve.git

Citation

If you use TorchServe in a publication or project, please cite TorchServe: https://github.com/pytorch/serve
