Pinferencia – Serving a model with REST API
Pinferencia tries to be the simplest machine learning inference server ever!
Three extra lines and your model goes online.
Serving a model with a REST API has never been so easy.
If you want to
- find a simple but robust way to serve your model
- write minimal code while maintaining control over your service
- stay compatible with other tools/platforms
You're at the right place.
- Fast to code, fast to go live. Minimal code and minimal transformation needed, building on what you already have.
- 100% Test Coverage: Both statement and branch coverage, no kidding. Have you ever known any model serving tool so seriously tested?
- Easy to use, easy to understand.
- Automatic API documentation page. All APIs are explained in detail, with an online try-out feature.
- Serve any model, even a single function can be served.
- Supports the Kserve API, compatible with Kubeflow, TF Serving, Triton, and TorchServe. Switching to or from them is painless, and Pinferencia is much faster for prototyping!
```bash
pip install "pinferencia[uvicorn]"
```
Serve Any Model
```python
from pinferencia import Server


class MyModel:
    def predict(self, data):
        return sum(data)


model = MyModel()
service = Server()
service.register(model_name="mymodel", model=model, entrypoint="predict")
```
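The registered entrypoint is just an ordinary Python method, so you can sanity-check it locally before starting any server:

```python
# The model from the example above, with no serving framework attached.
class MyModel:
    def predict(self, data):
        return sum(data)


model = MyModel()
# "predict" simply sums the input list.
print(model.predict([1, 2, 3]))  # prints 6
```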
Save the code above as `app.py`, then run:

```bash
uvicorn app:service --reload
```
Hooray, your service is alive. Go to http://127.0.0.1:8000/ and have fun.
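To call the model from a client, send a POST request with a JSON body. The sketch below assumes Pinferencia's default Kserve-style route for the model registered as `mymodel`; the exact paths for your service are listed on the automatic API documentation page.

```
POST http://127.0.0.1:8000/v1/models/mymodel/predict
Content-Type: application/json

{"data": [1, 2, 3]}
```

For this input, the prediction carried in the response is 6, the sum returned by `predict`.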