Tiered storage architectures that integrate parallel file systems and object storage combine the performance advantages of the former with the capacity and cost advantages of the latter. While this approach has been common in AI training infrastructure, it is now being applied to inference infrastructure to enable higher-performance inference workloads. Join this session as five experts from Supermicro, NVIDIA, WEKA, Scality and Kioxia unpack this new usage model and explain the latest technologies involved.
Agentic AI is the next wave in artificial intelligence, in which autonomous agents employ reasoning and planning to meet high-level objectives. Deploying agentic AI can rapidly increase AI processing demands, including storage workloads. Four industry leaders from Supermicro, AMD, DDN and Sandisk explain the impact of agentic AI on storage and how to plan for it.
Storage-as-a-Service (StaaS) enables service providers to generate new revenue from data management and storage. This session features special guest Iron Mountain, a pioneer in information lifecycle management solutions for 74 years, discussing its newly launched Iron Cloud data management service, built on Supermicro, Intel, Scality and Western Digital technology. Lightbits Labs also discusses how software-defined block storage enables CSPs to provision storage for compute workloads.
As inference processing of AI models grows, inference workloads, including the storage infrastructure, must scale accordingly. New distributed inference frameworks and new storage protocols developed by NVIDIA and supported by Supermicro, Cloudian, Hammerspace and Solidigm are enabling large-scale inference processing. Attend this session with the leaders in this area to learn about the latest technologies for successfully scaling inference workloads.
Custom enterprise applications benefit from modernization, which lowers costs, improves performance and adds new functionality, including AI capabilities. Modernization technologies include new hardware platforms, database re-platforming and migration from legacy storage to new software-defined storage architectures. This session includes experts in all of these areas from Supermicro, AMD, Enterprise DB and Lightbits Labs.
Generative AI, a form of AI that can create new content, has taken the AI field by storm. Applying generative AI to an enterprise’s needs requires a combination of planning and prototyping, model development and deployment software, data management capabilities, and compute and storage hardware resources. Breaking all of this down are experts from Supermicro, Intel, Nutanix and MinIO.
Data lakes, which aggregate enterprise data, and data lakehouses, which run analytics and AI transactions on that data, are key components of the enterprise data strategy needed to implement enterprise AI applications. This session features technologists from Supermicro, AMD, Enterprise DB and MinIO explaining how to implement data lakes and lakehouses for AI and analytics success.
Retrieval-Augmented Generation (RAG) is a popular method for adding context to inference queries. This session brings together experts in implementing RAG workflows and infrastructure from Supermicro, NVIDIA, VAST Data, Solidigm and Graid Technology. Also joining is a special guest from GPU service provider Voltage Park.
This special session packs three separate mini-sessions into one information-packed 30-minute slot. First, DDN, with guest NVIDIA, discusses joint solutions with Supermicro. Next, OSNexus describes the latest developments in QuantaStor software-defined storage. Finally, SteelDome discusses its HyperSERV hyperconverged platform with Supermicro.