
Modula Series - Programmable 3D Modeling Accelerator
Purpose-Built for Domain-Specific AI
General-purpose GPUs and large language models excel at broad tasks, but they fall short in specialized fields like CAD/CAE, scientific computing,
and medical imaging. Modula is designed from the ground up to accelerate 3D modeling, simulation and reconstruction with unmatched precision, efficiency, and security.
Why Modula?
Accuracy that Matters:
Optimized for niche applications such as CT/MRI reconstruction, finite element analysis and computational fluid dynamics (FEM/CFD), and protein folding.
Performance per Watt:
Balanced compute architecture eliminates the inefficiencies of oversized general-purpose GPUs.
Cost-Efficient:
Smaller models and optimized hardware lower both capital and operating costs.
Data Security:
On-premises or edge deployment ensures sensitive datasets never leave your environment.
Addressing Today's Challenges
Data Surge:
Scales with rapidly growing domain datasets.
Rising GPU Costs:
A sustainable alternative to large LLMs and expensive GPU clusters.
Hardware Limits:
Tailored algorithms outperform general-purpose accelerators in specialized workloads.
Domain-Specific Models bridge the gap between generic AI and your unique needs. Deploy on edge devices for real-time, cost-efficient results.

Our Solutions
Modula-X01
- Process / PEs: N16, 16 PEs (compute tiles + programmable logic)
- I/O: PCIe Gen4 x16 (down-train to x8/x4 supported)
- Memory: On-card DDR4/5, SRAM scratchpads per tile
- TDP (card): ~160–220 W (multi-TDP modes)
- Form factor: FHFL, passive (2U airflow) or active shroud
- Key value: Balanced perf/watt; one card covers 60–90% of customers; dual-card config supports heavier training (see the partitioning sketch below)
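
For illustration only, the sketch below shows how a host-side scheduler might partition a voxel volume into sixteen sub-blocks, one per compute tile, before dispatch. The 4×4 logical tile grid, the 256³ volume, and the NumPy partitioning are assumptions made for this example; they do not describe Modula's actual runtime or dispatch API.

    # Illustration only: split a voxel volume into 16 sub-blocks, one per PE.
    # The 4x4 logical tile grid and 256^3 volume are assumed example values.
    import numpy as np

    volume = np.random.rand(256, 256, 256).astype(np.float32)  # example dataset

    tiles = [
        block
        for slab in np.array_split(volume, 4, axis=0)    # 4 slabs along x
        for block in np.array_split(slab, 4, axis=1)     # 4 blocks per slab along y
    ]

    assert len(tiles) == 16                              # one sub-block per PE
    print(tiles[0].shape)                                # (64, 64, 256)
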
Modula-X01L
- Target use case: LLM inference + Domain-model inference (cost-down edge/department)
- Process / PEs: N16/12, 8/4 PEs
- I/O: PCIe Gen4 x8 (option: x4)
- Memory: Fewer DDR4/5 channels; smaller SRAM tile count
- TDP (card): ~80–140 W
- Form factor: FHHL/FHFL (active cooling preferred for tower/edge)
- Key value: BOM-reduced silicon; lower power; price point for volume deployments
Modula-X02
- Target use case: LLM + Domain training at scale; higher-throughput inference
- Process / PEs: N6, 32 PEs (≈2× X01) with larger programmable fabric
- I/O: PCIe Gen5/6 x16 (ready for CXL 3.0 / card-to-card fabric)
- Memory: More DDR5 channels and larger SRAM per tile
- TDP (card): ~220–300 W (data-center focus)
- Form factor: FHFL passive (data-center); OAM/SXM-class feasibility study
- Key value: ~2× compute vs Gen1; tighter CPU/memory coherence; multi-card scaling (see the scaling estimate below)
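
As a back-of-envelope check on the figures quoted above, the sketch below compares X01 and X02 using only their PE counts and TDP ranges. It assumes throughput scales linearly with PE count, which is an idealization rather than a measured result.

    # Rough comparison from the spec figures above; assumes throughput
    # scales linearly with PE count (an idealization, not a benchmark).
    cards = {"Modula-X01": (16, 160, 220), "Modula-X02": (32, 220, 300)}

    for name, (pes, tdp_lo, tdp_hi) in cards.items():
        tdp_mid = (tdp_lo + tdp_hi) / 2
        print(f"{name}: {pes} PEs, {pes / tdp_mid:.3f} PEs per watt at mid-TDP")

    # 32 / 16 = 2.0x relative compute (the "~2x vs Gen1" claim), while
    # mid-TDP grows only ~1.37x (260 W vs 190 W).
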
3D Graphic Modeling Accelerator Architecture
Modula-X01


Target Applications

Engineering / Industrial (CAE/Simulation)
• Finite Element Method (FEM) → stress/strain analysis, structural engineering (see the FEM sketch after this list)
• Computational Fluid Dynamics (CFD) → airflow, aerodynamics, thermal simulations
• Electromagnetic Simulation → antenna design, EMC/EMI testing
• Computer-Aided Design (CAD) optimizations → parametric modeling, mesh refinement
• Digital Twin training → real-time system replicas of factories, machines, or cities
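
As a concrete instance of the FEM workload referenced above, the sketch below assembles and solves a 1D axially loaded bar with linear two-node elements in NumPy. Element count, material constants, and load are arbitrary illustration values, and the result is checked against the analytic tip displacement.

    # Minimal 1D FEM: axially loaded bar, linear 2-node elements.
    # E, A, L, tip load, and element count are arbitrary illustration values.
    import numpy as np

    n_elem, E, A, L, tip_load = 8, 210e9, 1e-4, 1.0, 1000.0
    n_nodes = n_elem + 1
    le = L / n_elem                                        # element length
    k_e = (E * A / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness

    K = np.zeros((n_nodes, n_nodes))
    for e in range(n_elem):                                # assemble global stiffness
        K[e:e + 2, e:e + 2] += k_e

    F = np.zeros(n_nodes)
    F[-1] = tip_load                                       # point load at free end

    u = np.zeros(n_nodes)                                  # node 0 is fixed
    u[1:] = np.linalg.solve(K[1:, 1:], F[1:])
    print(u[-1], tip_load * L / (E * A))                   # FEM vs analytic tip displacement
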

Medical / Healthcare
• 3D Medical Imaging Reconstruction → CT/MRI → volumetric training models (see the reconstruction sketch after this list)
• Disease Classification / Detection → training models on medical images (X-ray, pathology slides)
• Protein Structure / Drug Modeling → molecular dynamics, docking simulations
• Biomechanics Simulation → bone, joint, and tissue stress models
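
To make the CT reconstruction item above concrete, the sketch below runs a standard filtered back-projection round trip with scikit-image on the Shepp-Logan phantom. It illustrates the class of workload only; it is not Modula's reconstruction pipeline, and the phantom size and angle count are example choices.

    # CT reconstruction round trip: simulate projections, then filtered
    # back-projection (scikit-image). Phantom and angle count are examples.
    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, rescale

    image = rescale(shepp_logan_phantom(), 0.5)            # 200x200 test phantom
    theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)

    sinogram = radon(image, theta=theta)                   # simulated CT projections
    reconstruction = iradon(sinogram, theta=theta)         # default ramp filter

    rms_error = np.sqrt(np.mean((reconstruction - image) ** 2))
    print("RMS reconstruction error:", rms_error)
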

Scientific Computing
• Physics Simulations → quantum chemistry, plasma, particle interactions (see the particle-interaction sketch after this list)
• Materials Science → nanomaterial property modeling, semiconductor device modeling
• Climate & Environmental Modeling → weather, pollution dispersion, geological surveys
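
As a minimal example of the particle-interaction simulations listed above, the sketch below takes one velocity-Verlet step for a small Lennard-Jones system in reduced units. Particle count, lattice spacing, and time step are illustration values, not a production molecular dynamics setup.

    # One velocity-Verlet step for a Lennard-Jones system (reduced units,
    # epsilon = sigma = 1). Lattice spacing and time step are example values.
    import numpy as np

    grid = np.arange(4) * 1.5                              # 4 x 4 x 4 lattice
    pos = np.array(np.meshgrid(grid, grid, grid)).reshape(3, -1).T
    vel = np.zeros_like(pos)
    dt = 1e-3

    def lj_forces(pos):
        diff = pos[:, None, :] - pos[None, :, :]           # pairwise displacements
        r2 = np.sum(diff ** 2, axis=-1)
        np.fill_diagonal(r2, np.inf)                       # ignore self-interaction
        inv_r6 = (1.0 / r2) ** 3
        coeff = 24.0 * (2.0 * inv_r6 ** 2 - inv_r6) / r2   # pairwise force scalar
        return np.sum(coeff[..., None] * diff, axis=1)     # net force per particle

    f = lj_forces(pos)
    pos = pos + vel * dt + 0.5 * f * dt ** 2               # position update
    vel = vel + 0.5 * (f + lj_forces(pos)) * dt            # velocity update
    print("max force magnitude:", np.linalg.norm(f, axis=1).max())
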

3D Graphics / Computer Vision
• 3D Object Recognition → training models on point clouds, CAD datasets (see the voxelization sketch after this list)
• Point Cloud Transformers (LIDAR/SLAM) → robotics, autonomous driving
• Neural Radiance Fields (NeRF) → training for 3D scene reconstruction
• Generative 3D Models (Diffusion → Mesh) → asset generation for AR/VR/XR
• 3D Morphology Analysis → design optimization, topological learning
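
Finally, as a small example of the point-cloud preprocessing implied by the 3D object recognition item above, the sketch below voxelizes a synthetic point cloud into a dense occupancy grid. The point cloud, coordinate bounds, and grid resolution are illustrative stand-ins for real LIDAR or CAD data.

    # Voxelize a point cloud into a dense occupancy grid, a common input
    # representation for 3D recognition models. All values are examples.
    import numpy as np

    rng = np.random.default_rng(1)
    points = rng.uniform(-1.0, 1.0, size=(10000, 3))       # stand-in for LIDAR/CAD points

    grid_size = 32
    # Map each point from [-1, 1]^3 to integer voxel indices in [0, grid_size).
    idx = np.clip(((points + 1.0) / 2.0 * grid_size).astype(int), 0, grid_size - 1)

    occupancy = np.zeros((grid_size,) * 3, dtype=np.uint8)
    occupancy[idx[:, 0], idx[:, 1], idx[:, 2]] = 1         # mark occupied voxels
    print("occupied voxels:", int(occupancy.sum()), "of", grid_size ** 3)
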

