Ai-API Engine

Bridge Data Science and DevOps
Deliver Prediction Services Anywhere

Zeblok's Ai-API Engine makes moving trained Ai/ML models to production easy:

Getting completed machine learning models into production is challenging. Data scientists are rarely experts in building production services or in DevOps best practices, so the trained AI/ML models a data science team produces are hard to test and hard to deploy. This often leads to a time-consuming and error-prone workflow in which a pickled model or weights file is handed over to a software engineering team.

Our Ai-API Engine is a framework within Zeblok's Ai-MicroCloud® for serving, managing and deploying completed Ai/ML models. It bridges the gap between data science and DevOps, and enables teams to deliver prediction services in a fast, repeatable and scalable way.

  • Package models trained with any supported ML framework, then containerize the model server for production deployment
  • Deploy anywhere for online API serving endpoints or offline batch inference jobs
  • High-performance API model server with adaptive micro-batching support
  • Ai-API server handles high request volumes reliably and supports multi-model inference, API server Dockerization, a built-in Prometheus metrics endpoint, a Swagger/OpenAPI endpoint for API client library generation, serverless endpoint deployment, and more
  • Central hub for managing models and deployment process via web UI and APIs
  • Supports various ML frameworks including: Scikit-Learn, PyTorch, TensorFlow 2.0, Keras, FastAI v1 & v2, XGBoost, H2O, ONNX, Gluon and more
  • Supports API input data types including: DataframeInput, JsonInput, TfTensorInput, ImageInput, FileInput, MultiFileInput, StringInput, AnnotatedImageInput and more
  • Supports API output Adapters including: BaseOutputAdapter, DefaultOutput, DataframeOutput, TfTensorOutput and JsonOutput
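The adaptive micro-batching mentioned above groups concurrent requests so the model runs once per batch instead of once per request. A minimal sketch of the pattern in plain Python (a simplified illustration, not the engine's actual implementation; the batch size and wait time are assumed values):

```python
import queue
import threading

class MicroBatchingServer:
    """Collects individual prediction requests into batches so the
    underlying model is invoked once per batch, not once per request."""

    def __init__(self, predict_batch, max_batch_size=8, max_wait_s=0.01):
        self._predict_batch = predict_batch   # model's batch inference function
        self._max_batch_size = max_batch_size
        self._max_wait_s = max_wait_s
        self._queue = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def submit(self, features):
        """Called by each request handler; blocks until its result is ready."""
        done = threading.Event()
        item = {"features": features, "done": done, "result": None}
        self._queue.put(item)
        done.wait()
        return item["result"]

    def _run(self):
        while True:
            batch = [self._queue.get()]  # block until the first request arrives
            # gather more requests until the batch is full or the wait expires
            while len(batch) < self._max_batch_size:
                try:
                    batch.append(self._queue.get(timeout=self._max_wait_s))
                except queue.Empty:
                    break
            results = self._predict_batch([b["features"] for b in batch])
            for item, result in zip(batch, results):
                item["result"] = result
                item["done"].set()
```

A production server would additionally adapt the batch size and wait time to observed latency; this fixed-parameter sketch only shows the queuing pattern.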

Easy steps to API Deployment

1
List of APIs

A quick view of the APIs that have been successfully deployed

2
Select Model to Deploy

Select the completed Ai/ML model to deploy as an API

3
Select Namespace

Select the namespace in which the API will be deployed

4
Select Data Centers / Edge Locations

Select from the list of your data centers or Edge locations where the model is to be deployed.

5
Create and distribute API

Click the Create button to create the API; the Ai-API Engine does the rest, deploying Ai inference to all selected locations.
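Once created, each endpoint can be called like any HTTP prediction API. A minimal client sketch (the endpoint URL and JSON payload shape here are illustrative placeholders, not the engine's documented contract):

```python
import json
import urllib.request

def predict(endpoint_url, features):
    """POST a JSON payload to a deployed prediction endpoint
    and return the decoded JSON response."""
    body = json.dumps({"features": features}).encode("utf-8")
    req = urllib.request.Request(
        endpoint_url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Because the server exposes a Swagger/OpenAPI endpoint, a typed client library can also be generated instead of hand-writing calls like this.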

6
Voila! Done.

That’s it. It’s that easy to use our Ai-AppStore to create your APIs.

©️ Zeblok Computational Inc. 2022