Mistral AI and NVIDIA Unveil Mistral-NeMo 12B, a New Enterprise-grade AI Model

Updated on July 22, 2024

Mistral AI and NVIDIA have launched Mistral-NeMo 12B, a new language model for enterprise applications that brings powerful AI capabilities, such as chatbots, multilingual tasks, coding, and summarization, to business desktops.

Guillaume Lample, the cofounder and chief scientist of Mistral AI, said, “We are fortunate to collaborate with the NVIDIA team, leveraging their top-tier hardware and software. Together, we have developed a model with unprecedented accuracy, flexibility, high efficiency and enterprise-grade support and security thanks to NVIDIA AI Enterprise deployment.”

The new model combines Mistral AI’s expertise in training data with NVIDIA’s optimized hardware and software ecosystem, delivering high performance and improved efficiency across a range of applications.

The Mistral NeMo model was trained on the NVIDIA DGX Cloud AI platform, which offers dedicated, scalable access to the latest NVIDIA architecture. Development also drew on NVIDIA TensorRT-LLM and the NVIDIA NeMo development platform to optimize training and inference.

Released under the Apache 2.0 license, the 12-billion-parameter model is intended to encourage widespread AI adoption. It has a 128K context length for processing extensive information and uses the FP8 data format for model inference, which reduces memory footprint and speeds deployment.
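As a rough illustration of what the open release enables, the sketch below loads an instruction-tuned checkpoint with the Hugging Face Transformers library and generates a short completion. The repository name mistralai/Mistral-Nemo-Instruct-2407 and the bfloat16 setting are assumptions based on Mistral's usual release conventions, not details confirmed in this article; FP8 inference itself is typically handled by serving stacks such as TensorRT-LLM rather than this code path.

```python
# Minimal sketch: load Mistral-NeMo 12B's open weights and generate text.
# Assumptions: the Hugging Face repo id "mistralai/Mistral-Nemo-Instruct-2407"
# and a GPU with enough memory for bfloat16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Nemo-Instruct-2407"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # FP8 is served via TensorRT-LLM/NIM, not here
    device_map="auto",
)

prompt = "Summarize the benefits of a 128K context window in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```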

Mistral NeMo comes packaged as an NVIDIA NIM inference microservice, which allows for easy deployment and offers enhanced flexibility across applications.

NIM boasts enterprise-grade software with dedicated feature branches, a rigorous verification process, and enterprise-grade security. 
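NIM microservices expose an OpenAI-compatible HTTP API, so a deployed Mistral NeMo endpoint can be queried with standard client code. The sketch below assumes a self-hosted NIM container listening at http://localhost:8000/v1 and a model name of mistral-nemo-12b-instruct; both values are illustrative and should be taken from your actual deployment rather than from this article.

```python
# Minimal sketch: query a (hypothetical) self-hosted Mistral NeMo NIM endpoint
# through its OpenAI-compatible API. The base_url and model name below are
# assumptions for illustration; substitute the values from your own deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM address
    api_key="not-needed-for-local-nim",   # local NIMs typically ignore the key
)

response = client.chat.completions.create(
    model="mistral-nemo-12b-instruct",  # assumed model name
    messages=[
        {"role": "user", "content": "Draft a short multilingual greeting for a support chatbot."}
    ],
    max_tokens=200,
)
print(response.choices[0].message.content)
```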

Furthermore, it is designed to fit in the memory of a single NVIDIA L40S, NVIDIA GeForce RTX 4090, or NVIDIA RTX 4500 GPU, offering high efficiency and low computing costs.

Jemima Hunter

Tech Journalist