Microsoft Boosts Phi-3 Family with Powerful New Models on Azure

Microsoft is expanding its Phi-3 family of small language models with the introduction of Phi-3-vision. This new multimodal model can understand and reason over text and images together, handling visual inputs such as documents, charts, and diagrams alongside natural language.

There are four models in the Phi-3 model family:

  •  Phi-3-vision is a 4.2B parameter multimodal model with language and vision capabilities.
  •  Phi-3-mini is a 3.8B parameter language model, available in two context lengths (128K and 4K).
  •  Phi-3-small is a 7B parameter language model, available in two context lengths (128K and 8K).
  •  Phi-3-medium is a 14B parameter language model, available in two context lengths (128K and 4K).

Find all Phi-3 models on Azure AI and Hugging Face.
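
As a quick illustration, the sketch below loads a Phi-3-mini checkpoint from Hugging Face with the transformers library and runs a single chat-style prompt. It is a minimal sketch, assuming the "microsoft/Phi-3-mini-4k-instruct" model ID, a PyTorch install, and enough memory for a 3.8B parameter model; adjust the model ID and generation settings for your environment.

# Minimal sketch: running Phi-3-mini locally via Hugging Face transformers.
# Assumptions: the "microsoft/Phi-3-mini-4k-instruct" checkpoint, PyTorch,
# and enough memory for a 3.8B parameter model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Build a chat-style prompt and generate a short reply.
messages = [{"role": "user", "content": "Explain what a small language model is in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))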

Here's a quick rundown of the key points:

  • Phi-3 models are small but mighty: They outperform similarly sized models, and even some larger ones, on benchmarks covering language understanding, coding, and math, and they are efficient to run.

  • Phi-3 models are safe and secure: Microsoft developed these models following strict safety guidelines and its responsible AI principles, with evaluation and testing to support responsible use.

  • New Phi-3-vision brings sight to the Phi-3 family: This new addition can analyze images and text together, making it well suited to tasks such as reading text in images (OCR) and interpreting charts and graphs (see the sketch below).
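
To give a feel for the multimodal workflow, here is a minimal sketch that asks Phi-3-vision about a chart image using the Hugging Face transformers library. The checkpoint name ("microsoft/Phi-3-vision-128k-instruct"), the <|image_1|> placeholder convention, and the local file "chart.png" are assumptions; check the model card for the exact prompt format.

# Minimal sketch: asking Phi-3-vision about a chart image via transformers.
# Assumptions: the "microsoft/Phi-3-vision-128k-instruct" checkpoint, the
# <|image_1|> image placeholder, and a local file named "chart.png".
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Phi-3-vision-128k-instruct"
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("chart.png")  # hypothetical local image of a chart
messages = [{"role": "user", "content": "<|image_1|>\nSummarize what this chart shows."}]
prompt = processor.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = processor(prompt, [image], return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=150)
print(processor.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))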

Get started with Phi-3 today

You can try out Phi-3 models on Microsoft's Azure AI Playground or learn more about building applications with them using Azure AI Studio.
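
Once a model is deployed from the Azure AI model catalog, it can usually be called over plain HTTPS as well. The sketch below assumes a serverless deployment exposing an OpenAI-style chat completions route; the endpoint URL, route, and API key are placeholders, so copy the real values from your deployment page in Azure AI Studio.

# Minimal sketch: calling a deployed Phi-3 endpoint over HTTPS.
# The endpoint URL, route, and key are placeholders, and the payload assumes
# an OpenAI-style chat completions API; check your deployment's details.
import requests

ENDPOINT = "https://<your-deployment>.inference.ai.azure.com/v1/chat/completions"  # placeholder
API_KEY = "<your-api-key>"  # placeholder

payload = {
    "messages": [
        {"role": "user", "content": "List three use cases for a small language model."}
    ],
    "max_tokens": 200,
    "temperature": 0.7,
}
headers = {"Content-Type": "application/json", "Authorization": f"Bearer {API_KEY}"}

response = requests.post(ENDPOINT, json=payload, headers=headers, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])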

