| Page 57 | Kisaco Research

The rapid evolution of high-performance computing (HPC) clusters has been instrumental in driving transformative advancements in AI research and applications. These sophisticated systems enable the processing of complex datasets and support groundbreaking innovation. However, as their adoption grows, so do the critical security challenges they face, particularly when handling sensitive data in multi-tenant environments where diverse users and workloads coexist. Organizations are increasingly turning to Confidential Computing as a framework for protecting AI workloads, which underscores the need for robust HPC architectures that incorporate runtime attestation capabilities to ensure trust and integrity.

In this session, we present an advanced HPC cluster architecture designed to address these challenges, focusing on how runtime attestation of critical components – such as the kernel, Trusted Execution Environments (TEEs), and eBPF layers – can effectively fortify HPC clusters for AI applications operating across disjoint tenants. This architecture leverages cutting-edge security practices, enabling real-time verification and anomaly detection without compromising the performance essential to HPC systems.

Through use cases and examples, we will illustrate how runtime attestation integrates seamlessly into HPC environments, offering a scalable and efficient solution for securing AI workloads. Participants will leave this session equipped with a deeper understanding of how to leverage runtime attestation and Confidential Computing principles to build secure, reliable, and high-performing HPC clusters tailored for AI innovations.
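As a rough illustration of the runtime-attestation idea described above, the sketch below compares fresh measurements of components such as the kernel or an eBPF policy against known-good values. The component names, byte strings, and hard-coded golden list are hypothetical; real systems derive measurements from TEE quotes and signed reference manifests rather than an in-process dict.

```python
import hashlib

# Hypothetical golden measurements for attested components; a production
# system would obtain these from a signed reference manifest, not code.
GOLDEN_MEASUREMENTS = {
    "kernel": hashlib.sha256(b"kernel-image-v1").hexdigest(),
    "ebpf_policy": hashlib.sha256(b"ebpf-policy-v1").hexdigest(),
}

def measure(component: bytes) -> str:
    """Hash the component's bytes (a stand-in for a hardware-backed quote)."""
    return hashlib.sha256(component).hexdigest()

def attest(name: str, component: bytes) -> bool:
    """Compare a fresh runtime measurement against its expected value."""
    return measure(component) == GOLDEN_MEASUREMENTS.get(name)

print(attest("kernel", b"kernel-image-v1"))  # measurement matches the golden value
print(attest("kernel", b"tampered-kernel"))  # runtime drift is detected
```

Re-running `attest` periodically, rather than only at boot, is what distinguishes runtime attestation from conventional measured boot.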

Location: Room 201

Duration: 1 hour

Author:

Jason Rogers

CEO
Invary

Jason Rogers is the Chief Executive Officer of Invary, a cybersecurity company that ensures the security and confidentiality of critical systems by verifying their Runtime Integrity. Leveraging NSA-licensed technology, Invary detects hidden threats and reinforces confidence in an existing security posture. Previously, Jason served as the Vice President of Platform at Matterport, successfully launched a consumer-facing IoT platform for Lowe's, and developed numerous IoT and network security software products for Motorola.

Author:

Ayal Yogev

CEO & Co-founder
Anjuna

Dive into a hands-on workshop designed exclusively for AI developers. Learn to leverage the power of Google Cloud TPUs, the custom accelerators behind Google Gemini, for highly efficient LLM inference using vLLM. In this trial run for Google Developer Experts (GDEs), you'll build and deploy Gemma 3 27B on Trillium TPUs with vLLM and Google Kubernetes Engine (GKE). Explore advanced tooling like Dynamic Workload Scheduler (DWS) for TPU provisioning, Google Cloud Storage (GCS) for model checkpoints, and essential observability and monitoring solutions. Your live feedback will directly shape the future of this workshop, and we encourage you to share your experience with the vLLM/TPU integration on your social channels.
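Once a model such as Gemma 3 27B is served with vLLM, clients can reach it through vLLM's OpenAI-compatible HTTP API. The sketch below builds such a chat-completion request body; the in-cluster service hostname and the model identifier are illustrative assumptions, not the workshop's actual endpoints.

```python
import json

# Hypothetical in-cluster endpoint for a vLLM server on GKE; the real
# service name and port depend on your Kubernetes manifests.
VLLM_ENDPOINT = "http://vllm-gemma.default.svc.cluster.local:8000/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "google/gemma-3-27b-it") -> str:
    """Build an OpenAI-compatible chat request body for a vLLM server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
        "temperature": 0.7,
    }
    return json.dumps(payload)

body = build_chat_request("Summarize what a TPU is in one sentence.")
print(body)
```

The same request shape works whether the backend runs on Trillium TPUs or GPUs, which is what makes vLLM's serving layer convenient for this kind of workshop.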

Location: Room 207

Duration: 1 hour

Author:

Niranjan Hira

Senior Product Manager
Google Cloud

As a Product Manager in our AI Infrastructure team, Hira looks out for how Google Cloud offerings can help customers and partners build more helpful AI experiences for users. With over 30 years of experience building applications and products across multiple industries, he likes to hog the whiteboard and tell developer tales.

Julianne Kur

Principal
Alliance Consumer Growth

DataBank, one of the nation’s leading data center operators, with more facilities in more markets than any other provider, has seen the future of enterprise AI infrastructure and knows how to help enterprises get there.   

With a customer base that spans 2500+ enterprises – in addition to hyperscalers and emerging AI service providers – DataBank has a unique perspective on the trends and lessons learned from customer AI deployments to date, which include some of the industry’s first NVL72/GB200 installations.   

In this 60-minute session, John Solensky, DataBank’s VP of Sales Engineering, and Mike Alvaro, DataBank’s Principal Solutions Architect, will share what DataBank has learned from its early GPU installations for hyperscalers and AI service providers, how those lessons were applied to later enterprise installations, the impact that next-generation GPUs are having on data center designs and solution costs, and the lessons for future enterprise deployments. 

Location: Room 206

Duration: 1 hour

Experience the future of GenAI inference architecture with NeuReality’s fully integrated, enterprise-ready NR1® Inference Appliance. In this hands-on workshop, you'll go from cold start to live GenAI applications in under 30 minutes using our AI-CPU-powered system. The NR1® Chip – the world’s first AI-CPU purpose-built for inference – pairs with any GPU or AI accelerator and optimizes any AI data workload. We’ll walk you through setup, deployment, and real-time inference using models like LLaMA, Mistral, and DeepSeek on our disaggregated architecture, built for smooth scalability, superior price/performance, and near 100% GPU utilization (vs. <50% with traditional CPU/NIC architectures). Join us to see how NeuReality eliminates infrastructure complexity and delivers enterprise-ready performance and ROI today.

Location: Room 201

Duration: 1 hour

Author:

Paul Piezzo

Enterprise Sales Director
NeuReality

Author:

Gaurav Shah

VP of Business Development
NeuReality

Author:

Naveh Grofi

Customer Success Engineer
NeuReality

Join this hands-on workshop to learn how to deploy and optimize large language models (LLMs) for scalable inference at enterprise scale. Participants will orchestrate distributed LLM serving with vLLM on Amazon EKS, enabling robust, flexible, and highly available deployments. The session demonstrates how to use AWS Trainium hardware within EKS to maximize throughput and cost efficiency, leveraging Kubernetes-native features for automated scaling, resource management, and seamless integration with AWS services.
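As a minimal illustration of the Kubernetes-native pattern the session covers, the sketch below generates a Deployment spec that requests Trainium devices via the Neuron device plugin's `aws.amazon.com/neuron` resource. The container image, model name, and device count are placeholder assumptions, not the session's reference configuration.

```python
def vllm_trainium_deployment(model: str, neuron_devices: int = 1) -> dict:
    """Build a minimal Kubernetes Deployment dict for a vLLM server on
    Trainium nodes. EKS exposes Trainium chips through the Neuron device
    plugin as the "aws.amazon.com/neuron" resource."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": "vllm-server"},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": {"app": "vllm"}},
            "template": {
                "metadata": {"labels": {"app": "vllm"}},
                "spec": {
                    "containers": [{
                        "name": "vllm",
                        "image": "vllm/vllm-openai:latest",  # placeholder image
                        "args": ["--model", model],
                        "resources": {
                            # Device-plugin resources are requested as strings.
                            "limits": {"aws.amazon.com/neuron": str(neuron_devices)},
                        },
                    }],
                },
            },
        },
    }

manifest = vllm_trainium_deployment("meta-llama/Llama-3.1-8B-Instruct")
print(manifest["spec"]["template"]["spec"]["containers"][0]["resources"])
```

Because the accelerator is declared as a schedulable resource, standard Kubernetes autoscaling and bin-packing apply to Trainium pods the same way they do to CPU workloads.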

Location: Room 206

Duration: 1 hour

Author:

Asheesh Goja

Principal GenAI Solutions Architect
AWS

Author:

Pinak Panigrahi

Sr. Machine Learning Architect - Annapurna ML
AWS

GIGABYTE AI TOP is a groundbreaking desktop solution that empowers developers to train their own AI models locally. Featuring advanced memory offloading technology and support for open-source LLMs, LMMs, and other machine learning models, it delivers enterprise-grade performance in a compact desktop form factor. This solution enables both AI beginners and professionals to build, fine-tune, and deploy state-of-the-art models with enhanced privacy, flexibility, and security.

Author:

Charles Le

CTO, Channel AI Solutions
GIGABYTE

Dr. Charles Le currently serves as Chief Technology Officer of Channel AI Solutions at GIGABYTE. He leads the AI software division and is the architect behind GIGABYTE’s flagship platform, AI TOP Utility, which empowers developers and enterprises to train and deploy large AI models with ease.

He is an expert in the training, fine-tuning, and inference of LLMs, LMMs, and other machine learning models, with deep knowledge across algorithm design, hardware acceleration, and system integration.

Before joining GIGABYTE, Dr. Le spent four years applying deep learning to the development of radiative cooling materials for marine robotics. He also has six years of experience in structural health monitoring and modal identification for infrastructure under dynamic loads such as earthquakes and wind. More recently, he has applied AI to enhance business intelligence, hardware R&D, and service AI assistants using tools like LangChain and LLM deployment.

As specifications grow to hundreds of pages, traditional verification workflows struggle to maintain consistency, traceability, and speed. This session demos Normal EDA, which replaces subjective, hand-written flows with NormML, a proprietary formal language that ingests raw specs, timing diagrams, and existing testbenches to build an auditable graph that auto-generates zero-to-one test plans, SystemVerilog/UVM stimulus, and traceable coverage links. The system reasons across multimodal data to flag inconsistencies before RTL reaches the simulator, slashing coverage closure time.

Today’s AI designs stress verification teams to an unprecedented extent. The compound complexity from software, hardware, interfaces, and architecture options leads to the challenge of running quadrillions of verification cycles across IP, sub-systems, SoCs, and multi-die designs. Learn how industry leaders like AMD, Arm, Nvidia, and others address these challenges with Synopsys’ latest family of Hardware-Assisted Verification products, modularity of verification, and mixed-fidelity execution setups using virtual prototyping, emulation, and FPGA-based prototyping.

Author:

Frank Schirrmeister

Executive Director, Strategic Programs, System Solutions
Synopsys

Frank Schirrmeister is Executive Director, Strategic Programs, System Solutions in Synopsys' System Design Group. He leads strategic activities across system software and hardware assisted development for industries like automotive, data center and 5G/6G communications, as well as for horizontals like Artificial Intelligence / Machine Learning. Prior to Synopsys, Frank held various senior leadership positions at Arteris, Cadence Design Systems, Imperas, Chipvision, and SICAN Microelectronics, focusing on product marketing and management, solutions, strategic ecosystem partner initiatives, and customer engagement. He holds an MSEE from the Technical University of Berlin and actively participates in cross-industry initiatives as Chair of the Design Automation Conference's Engineering Tracks.

MooresLabAI is redefining the semiconductor development lifecycle with its Agentic AI platform — purpose-built for silicon teams. In this live demo, we’ll showcase VerifAgent™, our flagship AI-powered verification agent that slashes engineering time by 85% and accelerates time-to-market by 7x. Seamlessly integrating with standard EDA tools, VerifAgent automates testbench creation, debugging, and coverage — without requiring prompt engineering or changes to your flows. Join us to see how MooresLabAI’s platform brings human-grade precision, machine speed, and real-world silicon expertise into one powerful development force.

Author:

Shelly Henry

CEO & Co-founder
MooresLabAI

Shelly Henry is the CEO and Co-Founder of MooresLabAI, a company pioneering Agentic AI for semiconductor design and verification. With over 25 years of experience in silicon engineering and AI, including leadership roles at Microsoft and ARM, Shelly is driven by a mission to transform chip development through intelligent automation. He has led teams building high-performance SoCs and has a deep understanding of the verification bottlenecks plaguing the industry. At MooresLabAI, Shelly combines his technical expertise and entrepreneurial vision to accelerate chip innovation and empower engineering teams worldwide.
