SABZEH offers three processor families, each designed for a distinct deployment scenario. From data center inference to edge computing to large-scale model training, SABZEH delivers the right silicon for your AI requirements.
NOVA-1: Data Center AI Accelerator
The NOVA-1 represents a paradigm shift in AI silicon design. Built entirely on the open-source RISC-V instruction set architecture, this processor delivers unprecedented flexibility for machine learning inference workloads. Unlike proprietary alternatives locked into vendor-specific ecosystems, NOVA-1 provides complete architectural transparency.
| Specification | Value |
|---|---|
| Process Technology | 5nm FinFET |
| Tensor Cores | 256 AI Processing Units |
| On-Chip Memory | 128MB SRAM |
| HBM Capacity | 96GB HBM3 |
| Host Interface | PCIe 5.0 x16 |
| Precision Support | INT8, FP16, BF16, FP32 |
| Form Factor | Full-Height, Full-Length PCIe |
Native attention-mechanism acceleration optimized for transformer architectures, including GPT-class models, BERT variants, and multimodal systems.
Advanced sparsity support achieving up to 4x effective throughput improvement for pruned models commonly used in production deployments (a pruning example follows this feature list).
Dynamic batching engine that automatically optimizes for latency or throughput based on real-time workload characteristics (a host-side batching sketch also follows).
Native multi-model serving capability allowing different AI models to share processor resources efficiently.
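Producing a pruned model is framework-level work done before deployment. As a minimal example using PyTorch's standard pruning utility (the 50% ratio here is arbitrary, and the realized speedup depends on whether the resulting sparsity pattern matches what the hardware accelerates):

```python
import torch
import torch.nn.utils.prune as prune

layer = torch.nn.Linear(1024, 1024)

# Zero out the 50% of weights with the smallest magnitude.
prune.l1_unstructured(layer, name="weight", amount=0.5)
prune.remove(layer, "weight")  # bake the mask into the weight tensor

sparsity = (layer.weight == 0).float().mean().item()
print(f"weight sparsity: {sparsity:.0%}")  # ~50%
```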
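The latency/throughput trade-off that the dynamic batching engine manages can also be pictured in host software. The sketch below is a minimal illustration, not SABZEH SDK code: `run_inference` stands in for whatever call dispatches a batch to the accelerator, and `max_batch` and `max_wait_ms` are hypothetical tuning knobs.

```python
import queue
import time

def run_inference(batch):
    """Stand-in for the accelerator dispatch call (hypothetical)."""
    return [f"result:{req}" for req in batch]

def dynamic_batcher(requests: queue.Queue, max_batch=32, max_wait_ms=5.0):
    """Group requests until the batch fills or the deadline expires.

    A small max_wait_ms favors latency; a large one favors throughput.
    This is the trade-off the on-chip engine tunes from live workload
    statistics.
    """
    while True:
        batch = [requests.get()]  # block until at least one request arrives
        deadline = time.monotonic() + max_wait_ms / 1000.0
        while len(batch) < max_batch:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(requests.get(timeout=remaining))
            except queue.Empty:
                break
        yield run_inference(batch)
```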
NOVA-E: Edge AI Processor
NOVA-E is optimized for edge deployment scenarios where power efficiency and a compact form factor are paramount. It brings data center-class AI capabilities to manufacturing floors, retail environments, autonomous vehicles, and smart infrastructure. The processor features an innovative power management architecture that enables dynamic scaling between high-performance and ultra-low-power modes.
| Specification | Value |
|---|---|
| Process Technology | 7nm FinFET |
| Tensor Cores | 32 AI Processing Units |
| On-Chip Memory | 32MB SRAM |
| External Memory | LPDDR5 up to 32GB |
| Interfaces | PCIe 4.0 x4, MIPI CSI, USB 3.2 |
| Thermal Design | Fanless capable |
| Package | 27mm x 27mm BGA |
Real-time object detection and tracking supporting YOLO, SSD, and custom architectures at 60+ FPS on multiple HD video streams.
Integrated image signal processor with hardware acceleration for pre-processing directly from camera sensors.
Secure boot and hardware root of trust for deployment in security-sensitive edge environments.
Wake-on-AI capability for always-on monitoring with near-zero standby power consumption.
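The wake-on-AI flow can be pictured as a two-stage pipeline: a tiny always-on trigger network screens inputs, and the full model is powered up only when the trigger fires. The Python below is a control-flow sketch under that assumption; `trigger_score` and `run_full_detector` are hypothetical stand-ins, not SABZEH SDK calls.

```python
import random
import time

WAKE_THRESHOLD = 0.8  # hypothetical confidence needed to wake the main model

def trigger_score(frame) -> float:
    """Tiny always-on network asking "did anything happen?" (stand-in)."""
    return random.random()  # replace with the real micro-model

def run_full_detector(frame):
    """Full detection pipeline, powered up on demand (stand-in)."""
    print("main model woken for frame", frame)

def monitor(frames, poll_hz=2.0):
    """Low-power loop: score every frame cheaply, escalate only on a hit."""
    for frame in frames:
        if trigger_score(frame) >= WAKE_THRESHOLD:
            run_full_detector(frame)  # high-power path, taken rarely
        time.sleep(1.0 / poll_hz)     # processor idles between polls

monitor(range(10))
```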
NOVA-T: Training Accelerator
NOVA-T is purpose-built for AI model training at scale, with massive parallel processing capability and optimized interconnects for multi-chip configurations. The processor supports efficient distributed training across thousands of chips, enabling organizations to train foundation models without dependence on proprietary cloud infrastructure.
| Specification | Value |
|---|---|
| Process Technology | 4nm FinFET |
| Tensor Cores | 512 AI Processing Units |
| On-Chip Memory | 256MB SRAM |
| HBM Configuration | 8x HBM3 stacks, 6.4 TB/s bandwidth |
| Chip-to-Chip | 16x SerDes links, 800 Gbps each |
| Host Interface | PCIe 5.0 x16 |
| Cooling | Liquid cooling required |
Native support for distributed data parallel, model parallel, and pipeline parallel training strategies with automatic optimization.
Built-in collective communication primitives (AllReduce, AllGather, ReduceScatter) implemented in hardware for minimal overhead; the first sketch after this list shows the ring algorithm such primitives typically implement.
Dynamic loss scaling and mixed-precision training support with automatic precision management (see the second sketch after this list).
Checkpoint-free gradient accumulation using on-chip memory to reduce training time for large batch sizes.
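The collectives above are fixed-function hardware, but the ring algorithm such primitives commonly implement is easy to show in software. The NumPy sketch below sums gradients across N simulated workers in 2(N-1) neighbor-to-neighbor steps, the same communication pattern a chip-to-chip ring would carry; it illustrates the algorithm, not SABZEH's actual implementation.

```python
import numpy as np

def ring_allreduce(vectors):
    """Sum equal-length vectors across N simulated workers with a ring.

    Reduce-scatter (N-1 steps): each worker passes a partial-sum chunk to
    its right neighbor, so worker i ends up owning the fully reduced
    chunk (i + 1) mod N. All-gather (N-1 steps): the reduced chunks then
    circulate until every worker holds the complete sum.
    """
    n = len(vectors)
    chunks = [np.array_split(v.astype(float), n) for v in vectors]

    for step in range(n - 1):                       # reduce-scatter phase
        for i in range(n):
            c = (i - step) % n
            chunks[(i + 1) % n][c] += chunks[i][c]

    for step in range(n - 1):                       # all-gather phase
        for i in range(n):
            c = (i + 1 - step) % n
            chunks[(i + 1) % n][c] = chunks[i][c].copy()

    return [np.concatenate(ch) for ch in chunks]

grads = [np.arange(8) * (w + 1) for w in range(4)]  # 4 workers' gradients
out = ring_allreduce(grads)
assert all(np.allclose(o, sum(grads)) for o in out)
```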
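Mixed-precision training with dynamic loss scaling and gradient accumulation likewise have well-known host-side equivalents. The sketch below uses standard PyTorch AMP on a synthetic model to show the pattern; this is ordinary PyTorch, not NOVA-T-specific code, and it assumes a CUDA device is available.

```python
import torch
import torch.nn.functional as F

# Synthetic data; a real run would stream training batches.
def loader(n_batches=64, batch=16, dim=512):
    for _ in range(n_batches):
        yield (torch.randn(batch, dim, device="cuda"),
               torch.randn(batch, dim, device="cuda"))

model = torch.nn.Linear(512, 512).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()   # dynamic loss scaling
ACCUM_STEPS = 8                        # one optimizer step per 8 micro-batches

for step, (x, y) in enumerate(loader()):
    with torch.cuda.amp.autocast():    # run forward pass in reduced precision
        loss = F.mse_loss(model(x), y) / ACCUM_STEPS
    scaler.scale(loss).backward()      # scaled grads accumulate in .grad
    if (step + 1) % ACCUM_STEPS == 0:
        scaler.step(optimizer)         # unscales; skips the step on inf/NaN
        scaler.update()                # grows/shrinks the loss scale
        optimizer.zero_grad(set_to_none=True)
```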
| Specification | NOVA-1 | NOVA-E | NOVA-T |
|---|---|---|---|
| Primary Use Case | Data Center Inference | Edge AI | Model Training |
| Peak INT8 Performance | 512 TOPS | 64 TOPS | 1024 TOPS |
| Peak FP16 Performance | 256 TFLOPS | 32 TFLOPS | 1 PFLOPS |
| Memory Capacity | 96GB HBM3 | Up to 32GB LPDDR5 | 256GB HBM3 |
| Memory Bandwidth | 1.6 TB/s | 102 GB/s | 6.4 TB/s |
| Typical Power | 300W | 15W | 700W |
| Process Node | 5nm | 7nm | 4nm |
| Multi-Chip Support | Up to 8 | Single chip | Up to 4096 |
Contact our team to discuss your AI compute requirements and receive detailed technical documentation, benchmark data, and pricing information.
Request Product Information