“Demand for AI is leading to a massive transformation in the data center, and the industry is asking for choice in hardware, software, and developer tools,” said Justin Hotard, Intel Executive Vice-President and General Manager of the Data Center and Artificial Intelligence Group. “With our launch of Xeon 6 with P-cores and Gaudi 3 AI accelerators, Intel is enabling an open ecosystem that allows our customers to implement all of their workloads with greater performance, efficiency, and security.”
Intel Xeon 6 with P-cores, designed to handle compute-intensive workloads, is said to deliver twice the performance of its predecessor. It features increased core count, double the memory bandwidth, and AI acceleration capabilities embedded in every core. This processor is engineered to meet the performance demands of AI from edge to data center and cloud environments.
Optimised for large-scale GenAI, Gaudi 3 has 64 tensor processor cores (TPCs) and eight matrix multiplication engines (MMEs) to accelerate deep neural network computations. It includes 128 GB of HBM2e memory for training and inference, and 24 200-Gigabit Ethernet ports for scalable networking. Gaudi 3 also offers compatibility with the PyTorch framework and with advanced Hugging Face Transformers and Diffusers models.
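In practice, PyTorch compatibility means standard model code can target the accelerator with little more than a device change. The sketch below is a minimal illustration, not Intel's documented workflow: it assumes the Gaudi software stack's `habana_frameworks` package (present only on Gaudi systems) registers PyTorch's `hpu` device, and falls back to CPU everywhere else.

```python
import torch

def select_device() -> torch.device:
    # On Gaudi machines, importing habana_frameworks registers PyTorch's
    # "hpu" device; elsewhere the import fails and we fall back to CPU.
    try:
        import habana_frameworks.torch.core  # noqa: F401  (Gaudi-only dependency)
        return torch.device("hpu")
    except ImportError:
        return torch.device("cpu")

device = select_device()
model = torch.nn.Linear(128, 64).to(device)  # toy stand-in for a real model
x = torch.randn(8, 128, device=device)
out = model(x)
print(out.shape)  # torch.Size([8, 64])
```

The same pattern is how device-portable PyTorch code usually handles CUDA versus CPU; the accelerator changes, the model code does not.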
Intel collaborations
Intel also recently collaborated with IBM to deploy Intel Gaudi 3 AI accelerators as a service on IBM Cloud. Through this collaboration, Intel and IBM aim to lower the total cost of ownership (TCO) of leveraging and scaling AI.
About 73 per cent of GPU-accelerated servers use Intel Xeon as the host CPU, the company claims. Intel partners with OEMs like Dell Technologies and Supermicro to develop co-engineered systems tailored to specific customer needs for effective AI deployments. Dell Technologies is currently co-engineering RAG-based solutions leveraging Gaudi 3 and Xeon 6.
Intel addresses challenges such as real-time monitoring, error handling, logging, security, and scalability by co-engineering with OEMs and partners to deliver production-ready retrieval-augmented generation (RAG) solutions. These solutions, built on the Open Platform for Enterprise AI (OPEA), integrate OPEA-based microservices into a scalable RAG system optimised for Xeon and Gaudi AI systems. They are designed to let customers integrate applications via Kubernetes, Red Hat OpenShift AI, and Red Hat Enterprise Linux AI.
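The core idea behind RAG is simple: retrieve the documents most relevant to a query, then prepend them to the model's prompt as grounding context. The toy sketch below illustrates only the retrieval step, using a bag-of-words cosine similarity as a stand-in for the neural embedding models a production OPEA-style pipeline would use; all names here are illustrative, not part of any Intel or OPEA API.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; production RAG uses neural embeddings.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank the corpus by similarity to the query; keep the top-k documents.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Gaudi 3 carries 128 GB of HBM2e memory for training and inference.",
    "Xeon 6 with P-cores doubles the memory bandwidth of its predecessor.",
]
context = retrieve("How much memory does Gaudi 3 have?", docs)
# The retrieved passage is prepended to the generation prompt as grounding.
prompt = f"Using this context, answer the question:\n{context[0]}"
```

A real deployment replaces each piece with a hardened microservice (embedding, vector store, generation), which is exactly the kind of production concern (monitoring, scaling, security) the co-engineered solutions above target.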
Addressing challenges
The company said its Tiber portfolio offers business solutions to address challenges like access, cost, complexity, security, efficiency, and scalability across AI, cloud, and edge environments. The Intel Tiber Developer Cloud now provides preview systems of Intel Xeon 6 for tech evaluation and testing. Additionally, select customers will gain early access to Intel Gaudi 3 for validating AI model deployments, with Gaudi 3 clusters to begin rolling out next quarter for large-scale production deployments.
New service offerings include SeekrFlow, an end-to-end AI platform from Seekr for developing trusted AI applications. The latest updates feature Intel Gaudi software’s newest release and Jupyter notebooks loaded with PyTorch 2.4 and Intel oneAPI and AI Tools 2024.2, which include new AI acceleration capabilities and support for Xeon 6 processors.