
Secure, Scalable AI Factories

Data-Safe Multi-Tenant AI with Protopia SGT and NVIDIA NIM Microservices

AI factories represent a new class of purpose-built data centers: like manufacturing plants, but dedicated to AI. Instead of assembling physical products, these factories produce tokens – the basic units of AI-generated content. AI factories are specialized computing infrastructure that optimizes the entire AI lifecycle, from data ingestion to high-volume inference, delivering real-time intelligence at scale.

Rise of Multi-Tenant AI Factories

While some enterprises will build private AI factories for their exclusive use, a dominant pattern is emerging around multi-tenant, managed AI factories. In this model, a shared AI infrastructure is operated by a trusted organization—such as a telecom company, a national research university, or a systems integrator—and serves multiple organizations or departments. Across regions, providers and governments are standing up regional AI hubs that many businesses or agencies can leverage. The appeal is clear: scalable capacity on demand, professional centralized management, local sovereignty controls, and better economics from amortizing the investment across many users.

For example, telcos and national programs are building sovereign AI clouds and shared AI centers based on the NVIDIA AI Factory for Government reference design.

Data-safe model serving with Stained Glass Transform (SGT)

While AI hubs deliver scalable resources and favorable economics to enterprises, they have, by their very nature, a fundamental privacy Achilles' heel: tenants must send sensitive data to infrastructure they do not control. This is where Protopia AI’s privacy-enhancing technology comes in. Protopia Stained Glass Transform (SGT) is now compatible with NVIDIA NIM microservices. Together, SGT and NIM microservices enable organizations to run sensitive, high-value workloads on efficient shared infrastructure based on the NVIDIA AI Factory for Government reference design.

SGT is a data transformation layer that converts inputs (e.g., text, images, documents) into stochastic representations, allowing models to operate with full utility. At the same time, the raw data never appears in the clear to the infrastructure operator, co-tenants, or unauthorized users. Data owners maintain control of their plain-text or raw data, significantly reducing exposure risk and unlocking previously infeasible AI applications on sensitive data. Because SGT is lightweight and can run at the data source, organizations can transform locally and safely utilize multi-tenant AI infrastructure clusters anywhere for inference without exposing proprietary data.
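The deployment pattern looks roughly like the sketch below. This is a minimal illustration, not Protopia's actual SDK: the SGT client call is a hypothetical placeholder, and the endpoint URL, model name, and payload shape are assumptions. The point it shows is the trust boundary—raw data is transformed at the source, and only the stochastic representation ever reaches the shared, multi-tenant factory.

```python
# Illustrative sketch only. stained_glass_transform, the endpoint, and the payload
# shape are hypothetical placeholders, not Protopia's actual SDK or API.
import requests

SHARED_FACTORY_ENDPOINT = "https://ai-factory.example.com/v1/infer"  # placeholder endpoint


def stained_glass_transform(raw_text: str) -> list[float]:
    """Hypothetical stand-in for the SGT client. The real transform is learned per
    target model and produces a stochastic representation the model can consume;
    this placeholder only marks where that step sits in the pipeline."""
    raise NotImplementedError("Replace with the SGT client for your target model")


def query_shared_factory(raw_text: str) -> dict:
    # 1. Transform locally: raw_text never leaves this process in the clear.
    protected_repr = stained_glass_transform(raw_text)

    # 2. Ship only the transformed representation to the shared inference endpoint.
    response = requests.post(
        SHARED_FACTORY_ENDPOINT,
        json={"model": "example-llm", "input": protected_repr},  # illustrative payload
        timeout=60,
    )
    response.raise_for_status()
    return response.json()
```

Because the transform runs at the data source, the operator, co-tenants, and anyone inspecting traffic or memory on the shared cluster only ever see the stochastic representation.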

The net result is greater utilization of AI factories: more trusted data flows into workloads, yielding higher token throughput and improved ROI and TCO. In managed AI factories, where the operator runs and manages the compute infrastructure used by many customers, privacy controls like SGT are critical for adoption and for maximizing value realization.

SGT integrated into NVIDIA NIM Microservices

NVIDIA NIM, part of the NVIDIA AI Enterprise stack, provides containerized, enterprise-grade inference services with industry-standard, stable APIs and secure deployment across clouds and data centers. With SGT support for NIM, multi-tenant privacy and high-performance, accelerated AI inference become integral capabilities of the AI factory: data entering the inference pipeline is stochastically transformed, so each tenant’s data remains protected throughout inference. This approach aligns with the NVIDIA AI Factory for Government reference design and gives CIOs and CTOs a clear operational path to deploy more proprietary data sources on trusted, high-performance AI at scale.
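For a concrete sense of what "industry-standard, stable APIs" means in practice, the sketch below calls a NIM microservice through its OpenAI-compatible endpoint. The base URL and model identifier are placeholders for a particular deployment, and in an SGT-enabled pipeline the request would carry the transformed representation rather than plain text.

```python
# Minimal sketch of calling a NIM microservice via its OpenAI-compatible API.
# Base URL and model name are placeholders for a specific deployment.
from openai import OpenAI  # NIM exposes an OpenAI-compatible endpoint

client = OpenAI(
    base_url="http://nim.ai-factory.example.com:8000/v1",  # placeholder NIM endpoint
    api_key="not-required-for-many-self-hosted-deployments",
)

completion = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # example model served by a NIM container
    messages=[{"role": "user", "content": "Summarize this quarter's incident reports."}],
    max_tokens=256,
)
print(completion.choices[0].message.content)
```

Because NIM keeps the client-facing API stable, the same application code works whether the microservice runs in a private data center or a shared, multi-tenant AI factory.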

Transforming AI Use Cases in Sensitive Sectors 

Government & Public Sector: Agencies can share a common AI platform (e.g., a national secure AI cloud) to use language models or analytic agents with confidential data. Each department’s data remains protected via SGT, enabling collaboration and resource sharing without breaching sovereignty or classification rules.

Higher Education & Research: University-operated AI factories can serve multiple campuses or partner schools while protecting research IP and proprietary datasets. SGT allows labs and departments to retain plain-text ownership while using shared compute for secure RAG, research copilots, and model-assisted literature review across tenants.

Financial Services: Banks, insurers, and investment firms can leverage a shared platform (such as an industry consortium or provider-run hub) without exposing any inference data in plain-text or raw form on the shared compute. Each firm runs AI services on shared GPUs with SGT, ensuring that prompts and records are indecipherable to operators, peers, or unauthorized users—unlocking economies of scale without sacrificing confidentiality.

Telco & National AI Clouds: Telco-operated multi-tenant factories serve enterprises, startups, and agencies; SGT lets each tenant retain plaintext ownership while sharing compute. Inputs are transformed before they reach the telco hosting layer, meeting data-residency and latency needs while pooling GPUs for AI workloads.

White House AI Policy Alignment: AI Sandboxes, Testbeds, and Open Models 

This approach aligns with the White House’s “America’s AI Action Plan,” which emphasizes regulatory sandboxes, evaluation testbeds, and broad access to affordable compute and open-weight models. SGT-enabled multi-tenant AI factories function as safe sandboxes where multiple stakeholders can experiment on real data without exposure. They also let organizations confidently use open or proprietary models while preserving data control and governance—expanding access and lowering effective cost per outcome. Learn more at The White House OSTP AI policy hub.

Conclusion: Strategic, Secure AI at Scale 

As AI becomes a core utility, the infrastructure that serves it must deliver performance, efficiency, and trust. Multi-tenant AI factories operated by telcos, financial services consortia, and government partners offer a scalable model for delivering AI capabilities widely and in a sovereign manner. By integrating NVIDIA NIM microservices and Protopia SGT, these shared AI factories combine high-performance, accelerated AI inference with powerful multi-tenant data privacy capabilities. As a crucial unlock, data owners can more securely and confidently contribute to, and extract value from, shared compute infrastructure. At the same time, operators see higher utilization and faster innovation—all within validated NVIDIA designs.

References

  1. NVIDIA AI Factories Validated Design
  2. NVIDIA Technical Blog – Telcos Across Five Continents Are Building NVIDIA‑Powered Sovereign AI Infrastructure
  3. NVIDIA Newsroom – Europe Builds AI Infrastructure With NVIDIA
  4. NVIDIA NIM – Product Overview
  5. The White House – OSTP AI Policy Hub
