Ai-MicroCloud®
Ai Platform-as-a-Service

Powering Business Innovation

Overview

Zeblok's enterprise-ready turnkey Ai Platform-as-a-Service (Ai PaaS, or Ai-MicroCloud®) helps you develop, customize and deploy Ai quickly, generate new insights and enhance decision-making. The Ai-MicroCloud® has been built from the ground up for collaboration, from composable foundational components that make it easy to deploy and that address the many challenges enterprises, data center operators and Edge operators face.

Zeblok Computational offers enterprise clients a shared utility platform. CIOs, CTOs and Innovation Offices have the opportunity to create a community of data scientists, data engineers and Ai engineers, as well as line-of-business analysts, to collaborate on Ai development in a single platform. Furthermore, Ai innovation labs outside the enterprise, including student interns from strategic academic partnerships, can be brought onto the same platform.

Zeblok Computational deploys its Ai-MicroCloud® wherever the data resides - enterprise data centers, public clouds and Edge locations. Users determine the appropriate mix of Ai-MicroCloud® composable foundational components based on their needs and the specific line-of-business applications and Ai/ML models they are developing.

Once implemented, an enterprise's Ai capabilities are truly multi-cloud - the flexibility of the Ai-MicroCloud® deployment architecture enables execution on numerous enterprise infrastructure topologies.

For example, if an enterprise has a deep relationship with AWS but application teams require some data to be kept within the enterprise data centers, the remaining data can be moved into AWS. The development team can then decide where to launch each foundational component - within the enterprise data center or in AWS - with comprehensive audit trails supporting any regulatory requirements.
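The placement decision described above can be sketched as a simple data-residency policy check. All dataset names, site labels and the policy table below are illustrative assumptions, not the platform's actual schema:

```python
# Illustrative sketch of a data-residency placement decision with an audit
# trail. Policy entries and site names are hypothetical examples.
RESIDENCY_POLICY = {
    "customer-records": "enterprise-dc",  # must remain on premises
    "clickstream": "aws",                 # may move to the public cloud
}

def placement_site(dataset: str, audit_log: list) -> str:
    """Pick a launch site for the component that processes `dataset`,
    recording the decision for audit purposes."""
    site = RESIDENCY_POLICY.get(dataset, "enterprise-dc")  # default: on-prem
    audit_log.append(f"{dataset} -> {site}")
    return site

log = []
print(placement_site("customer-records", log))  # enterprise-dc
print(placement_site("clickstream", log))       # aws
```

In this sketch, unknown datasets default to the enterprise data center, and every decision is appended to an audit log - mirroring the audit-trail support mentioned above.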

Architecture Diagram

Composable Foundational Components

SaaS Components
Ai-WorkStation & Ai-HPC-WorkStation

A productive web-based developer environment for Ai model trainers, with dynamically provisioned resources. The fastest way to create end-to-end Ai pipelines on heterogeneous chip architectures.

Data Science

A versatile web-based digital asset repository for ready-to-use Ai inference engines from various sources. Can launch pre-existing Ai-model training environments and inference engines.

Product Management

A scalable web-based MLOps environment for packaging and publishing Ai inference engines as secure network assets to the Edge, enabling deployment through a consistent publishing mechanism.

PaaS Components
Microservice Manager

Microservice Manager efficiently and seamlessly delivers Ai applications across hybrid-cloud and multi-cloud environments and Edge data centers, in compliance with customer policies.

Ai-WorkStation Manager

Ai-WorkStation Manager delivers open-source Ai frameworks, curated Ai applications and other end-to-end Ai pipelines as virtualized notebooks with zero learning curve, while increasing shareability.

Ai-HPC Manager

Ai-HPC Manager seamlessly scales to high-performance computing (HPC), crossing physical server boundaries for Ai model training and optimization, through a familiar and consistent user experience.

Ai-Database Manager

Ai-Database Manager improves productivity and data handling for model training and inference-engine optimization, while accelerating data-manipulation tasks when GPUs are available within the underlying Ai-MicroCloud® environment.

IaaS Components
RBAC Manager

Role-based access control (RBAC) restricts network access based on a person's role within an organization and/or Ai project team. It enables advanced access control for all SaaS and PaaS application components within the platform.
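The role-to-permission mapping at the heart of RBAC can be sketched in a few lines. The roles, actions and permission table below are hypothetical examples, not the RBAC Manager's actual schema:

```python
# Minimal RBAC sketch: each role maps to the set of actions it may perform.
# Roles and actions here are illustrative assumptions.
ROLE_PERMISSIONS = {
    "data-scientist": {"launch-workstation", "read-datasets"},
    "ai-engineer": {"launch-workstation", "publish-inference-engine"},
    "business-analyst": {"view-dashboards"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the given role grants the requested action."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("data-scientist", "launch-workstation"))        # True
print(is_allowed("business-analyst", "publish-inference-engine"))  # False
```

In a real deployment the mapping would come from the platform's identity store rather than a hard-coded table; unknown roles deny all actions by default.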

Security Manager

A comprehensive security capability maps users seamlessly and securely between the web domain and meta-scheduled infrastructure across hybrid-cloud environments. Integration with enterprise LDAP and/or Active Directory makes it easy to apply consistent user and group policies, including single sign-on and multi-factor authentication.

Multi-cloud Orchestration

Orchestrates Ai-MicroCloud® foundational components across enterprise data centers, public clouds and Edge locations, enabling execution on numerous enterprise infrastructure topologies in compliance with customer policies.

© Zeblok Computational Inc. 2022