As AI-native cloud platforms scale to meet global demand, the data center infrastructure powering generative AI has become a prime target for attackers. In this session, Dr. Yuriy Bulygin, CEO of Eclypsium and former Chief Threat Researcher at Intel, shares a case study from one of the world's fastest-growing AI cloud providers, which supports OpenAI, Microsoft, and NVIDIA workloads, and explains how it delivers the secure AI infrastructure its customers demand.
This provider faced the challenge of protecting thousands of specialized servers, GPUs, and system components—without slowing growth. Learn how they implemented a turn-key approach to data center security, leveraging firmware integrity verification, automated vulnerability management, and continuous supply chain monitoring.
Attendees will gain insight into:
Why infrastructure threats in AI data centers are rising—and often invisible
What a secure AI cloud looks like and how to minimize risk
Lessons for building resilient, secure AI data centers without adding operational drag
If you're responsible for AI infrastructure, securing the data center isn’t just an IT concern—it’s foundational to model integrity and platform trust.

Yuriy Bulygin
Yuriy Bulygin is Co-Founder & CEO at Eclypsium. Prior to founding Eclypsium, Yuriy led the Advanced Threat Research team at Intel Security and the microprocessor security analysis team at Intel Corporation. He also created CHIPSEC, the open-source firmware and hardware security assessment framework.
California faces over 7,000 wildfires each year, with enormous costs to lives, communities, and ecosystems. Responding faster requires distributed sensing and intelligence that can act in the field, where traditional satellites and watchtowers fall short. Wywa.ai First Responder is an open-science initiative, led in collaboration with researchers from MIT and CMU together with industry leaders and policy experts, to design and deploy a scalable wildfire early-warning network. The system combines ultra-low-cost LoRa-enabled chemical sensors with edge AI and vision-language models. These distributed “artificial noses” continuously monitor the air for smoke and combustion signatures. When risk thresholds are crossed, the sensors activate nearby edge vision systems that confirm the presence of wildfire and generate real-time alerts for first responders and civic authorities. We will present results from early deployments, highlight the LoRa network architecture and AI model training that make such systems deployable at scale, and discuss how open collaboration across academia, industry, and government can accelerate resilience. The session will include a live demonstration of how edge intelligence can empower communities to act in the earliest, most critical moments of wildfire response.
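The two-stage trigger described above can be sketched roughly as follows. This is a minimal illustration only; the threshold value, field names, and the vision-confirmation function are hypothetical stand-ins, not details from the deployment.

```python
# Hypothetical sketch of the two-stage wildfire trigger: low-power chemical
# sensors run continuously, and an edge vision model is activated only when
# a smoke signature crosses a risk threshold.
from dataclasses import dataclass
from typing import Optional

SMOKE_THRESHOLD = 0.7  # assumed risk threshold in [0, 1]; not from the talk


@dataclass
class SensorReading:
    sensor_id: str
    smoke_score: float  # combustion-signature likelihood from the "artificial nose"


def confirm_with_vision(sensor_id: str) -> bool:
    """Placeholder for waking a nearby edge vision-language model.

    A real deployment would run an on-device detector; here we simply
    return True to illustrate the control flow.
    """
    return True


def process(reading: SensorReading) -> Optional[str]:
    """Return an alert message if smoke is detected and visually confirmed."""
    if reading.smoke_score < SMOKE_THRESHOLD:
        return None  # stay in low-power chemical-sensing mode
    if confirm_with_vision(reading.sensor_id):
        return f"ALERT: wildfire signature confirmed near {reading.sensor_id}"
    return None
```

The point of the structure is energy economy: the expensive vision stage only runs when the cheap, always-on chemical stage flags risk, which is what makes battery-powered, LoRa-connected field nodes plausible.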

Anirudh Sharma
Anirudh Sharma is a researcher and inventor whose work spans human factors, speech and vision interfaces, and system design. With a research background at the MIT Media Lab and now at Amazon Lab126, he builds novel computing interfaces that merge advanced sensing with real-world applications. His first venture developed and shipped gait-sensing haptic insoles that help elderly and visually impaired people navigate through tactile feedback, now used worldwide. He later co-founded Graviky Labs, which turns air pollution into usable materials. His contributions have earned recognition from MIT Technology Review (TR35), Forbes 30 Under 30, TIME 100, and TED Global.

Navya Veeturi
Navya Veeturi is the founder of Wywa.ai First Responder, an open initiative focused on protecting communities and forests from wildfires through the power of low-cost sensors, edge AI, and generative intelligence. With a background in leading AI and data engineering teams at NVIDIA, Navya combines technical expertise, product vision, and community impact to build scalable, AI-driven solutions that empower first responders, local leaders, and citizens.

Anusha Nerella
Anusha Nerella is an award-winning AI and fintech innovator known for her original contributions in transforming institutional trading and digital finance. She has pioneered AI-driven trading strategies, real-time big data systems, and automation frameworks that have redefined how financial institutions operate. Anusha’s innovations—from modernizing Barclaycard’s digital payments infrastructure during COVID-19 to architecting intelligent trading models—have driven measurable impact, earning her recognition as a thought leader shaping the future of AI-powered finance.
What does it take to run one of the world's largest AI supercomputers? As artificial intelligence workloads grow exponentially, operating a hyperscale AI cloud fleet demands new strategies for resilience, efficiency, and operational excellence. This session explores Microsoft’s approach to scaling infrastructure for 100X growth, focusing on the intersection of system innovation and advanced fleet management.

Dharmesh Patel
Dharmesh Patel serves as the General Manager and head of the Quality Engineering Organization at Microsoft. In this capacity, he oversees the AI Fleet Quality team to ensure AI capacity, stability, and reliability throughout the hardware supply chain from manufacturing to data centers. His responsibilities include enabling Microsoft to scale AI capacity while maintaining high hardware quality standards across all stages of product development from concept through mass production. With nearly twenty years of experience in managing complex products and promoting process excellence within data centers, Dharmesh is a recognized leader in his field.

Prabhat Ram
Prabhat leads the AI Customer Experience team within Microsoft Azure. He is responsible for operating AI Training supercomputers for OpenAI and other strategic customers. He holds a master’s in Computer Science from Brown University and a PhD from the Earth and Planetary Sciences department at U.C. Berkeley.
In addition to coauthoring more than 150 papers in computer and domain sciences, his work has been recognized across the industry, including the 2018 ACM Gordon Bell Prize for his team's work on exascale deep learning.

Prem Theivendran
Prem Theivendran is Director of Software Engineering at Expedera, where he leads the development and productization of Expedera's software toolchain and SDK. With expertise in deep learning, Prem has held hardware and software design roles at Intel, Cisco, Cavium, and Xpliant. He holds a Bachelor of Science in Electrical Engineering and Computer Sciences from the University of California, Berkeley.
Responsible AI is often framed in terms of ethical models and fair data—but the foundation for responsibility lies in infrastructure. In this talk, we’ll explore how platform-level capabilities like environment isolation, auditability, workload reproducibility, and resource-aware orchestration are essential to delivering AI that’s not just performant, but trustworthy. We’ll also examine how infrastructure decisions directly impact the quality and reliability of model evaluations—enabling teams to catch bias, ensure compliance, and meet evolving governance standards. If you’re building or scaling AI systems, this session will show how infrastructure becomes the enabler of responsible AI at every stage.

Taylor Smith
Taylor Smith is a Senior AI Advocate at Red Hat, where she champions open source innovation and the responsible adoption of AI at scale. With a background in software development, Kubernetes, Linux, and technical partnerships, she focuses on helping organizations build and operationalize AI using modern infrastructure. Taylor is passionate about making AI more accessible, trustworthy, and grounded in real-world use cases.

Euicheol Lim
Eui-cheol Lim is a Research Fellow and leader of the Solution Advanced Technology team at SK Hynix. He received his B.S. and M.S. degrees from Yonsei University, Seoul, Korea, in 1993 and 1995, and his Ph.D. from Sungkyunkwan University, Suwon, Korea, in 2006. Dr. Lim joined SK Hynix in 2016 as a system architect in memory system R&D. Before joining SK Hynix, he worked as an SoC architect at Samsung Electronics, where he led the architecture of most Exynos mobile SoCs. His current research interests are memory and storage system architectures built on new memory media and new memory solutions such as CXL memory and processing-in-memory (PIM). In particular, he is proposing a new PIM-based computing architecture, more efficient and flexible than existing AI accelerators, for processing generative AI and large language models (LLMs).