Microsoft’s collaboration with Mistral AI continues to accelerate AI innovation. After the successful launch of Mistral Large, Mistral AI’s flagship model, we’re thrilled to unveil Mistral Small – a compact yet powerful language model designed for efficiency.
Available in the Azure AI model catalog, Mistral Small joins our growing collection of LLMs. Developers can access it through Models as a Service (MaaS), enabling seamless API-based interactions.
Mistral Small
According to Mistral AI, Mistral Small is its smallest proprietary Large Language Model (LLM). It is suited to any language-based task that requires high efficiency and low latency.
Mistral Small is:
Get started with Mistral Small on Azure AI
Provision an API Endpoint: Create your Mistral Small API endpoint in seconds.
Experiment: Try it out in the Azure AI Studio playground or integrate it with popular LLM app development tools.
Build Safely: Leverage dual-layer safety mechanisms to create reliable and secure Generative AI applications.
Here are the prerequisites:
Next, you need to create a deployment to obtain the inference API and key:
The prerequisites and deployment steps are explained in the product documentation: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral.
You can use the API and key with a variety of clients. If you want to integrate the REST API into your own client, review the API schema: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral#reference-for-mistral.... Let’s review samples for some popular clients.
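As one illustration, here is a minimal Python sketch of calling a serverless (MaaS) endpoint with the `requests` library. The endpoint URL and key come from environment variables you set after deployment; the `/v1/chat/completions` path and the request/response shape are assumptions based on a typical OpenAI-style chat-completions schema, so check them against the API reference linked above before relying on this.

```python
import os
import requests


def build_chat_request(endpoint: str, api_key: str, prompt: str) -> dict:
    """Assemble the URL, headers, and JSON body for a chat-completions call.

    The /v1/chat/completions path and payload fields are assumptions here;
    verify them against the published API schema for your deployment.
    """
    return {
        "url": f"{endpoint.rstrip('/')}/v1/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "json": {
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 128,
            "temperature": 0.7,
        },
    }


if __name__ == "__main__":
    # AZURE_MISTRAL_ENDPOINT / AZURE_MISTRAL_KEY are placeholder variable
    # names for the endpoint URL and key obtained from your deployment.
    req = build_chat_request(
        os.environ["AZURE_MISTRAL_ENDPOINT"],
        os.environ["AZURE_MISTRAL_KEY"],
        "Summarize what a serverless API is in one sentence.",
    )
    resp = requests.post(req["url"], headers=req["headers"], json=req["json"], timeout=60)
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])
```

Separating request construction from the network call keeps the sketch easy to adapt to other HTTP clients or to an SDK.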
Explore the power of Mistral Small – where efficiency meets innovation!