Hugging Face Introduces Open-Source SmolVLM Vision Language Model Focused on Efficiency
Hugging Face, the artificial intelligence (AI) and machine learning (ML) platform, introduced a new vision-focused AI model last week. Dubbed SmolVLM (where VLM stands for vision language model), it is a compact model focused on efficiency. The company claims that, due to its smaller size and high efficiency, it can be useful for enterprises and AI enthusiasts who want AI capabilities without investing heavily in infrastructure. Hugging Face has also open-sourced the SmolVLM vision model under the Apache 2.0 license for both personal and commercial use.
Hugging Face Introduces SmolVLM
In a blog post, Hugging Face detailed the new open-source vision model. The company called the AI model "state-of-the-art" for its efficient memory usage and fast inference. Highlighting the usefulness of a small vision model, the company noted the recent trend of AI firms scaling down models to make them more efficient and cost-effective.
[Image: Small vision model ecosystem. Photo Credit: Hugging Face]
The SmolVLM family has three AI model variants, each with two billion parameters. The first is SmolVLM-Base, the standard model. Apart from this, SmolVLM-Synthetic is a fine-tuned variant trained on synthetic data (data generated by AI rather than collected from the real world), and SmolVLM-Instruct is the instruction-tuned variant that can be used to build end-user-centric applications.
Coming to technical details, the vision model can operate with just 5.02GB of GPU RAM, significantly lower than Qwen2-VL 2B's requirement of 13.7GB and InternVL2 2B's 10.52GB. Due to this, Hugging Face claims that the AI model can run on-device on a laptop.
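For readers who want to try the model locally, the following is a minimal sketch of loading it with the Hugging Face transformers library. The repository name "HuggingFaceTB/SmolVLM-Instruct" and the half-precision settings are assumptions based on Hugging Face's usual conventions; the model card should be consulted for the exact identifier and recommended configuration.

```python
# Minimal sketch: load SmolVLM's instruction-tuned variant on a laptop-class GPU.
# The model ID below is an assumption; verify it against the Hugging Face model card.
import torch
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "HuggingFaceTB/SmolVLM-Instruct"  # assumed repository name

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half-precision weights keep GPU memory use low
    device_map="auto",           # place on GPU if available, otherwise fall back to CPU
)
```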
SmolVLM can accept a sequence of text and images in any order and analyse them to generate responses to user queries. It encodes 384 x 384 pixel image patches into 81 visual tokens. The company claimed that this enables the AI model to encode a text prompt and a single image in 1,200 tokens, as opposed to the 16,000 tokens required by Qwen2-VL.
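Building on the loading snippet above, this sketch illustrates how an interleaved image-and-text query could be passed to the model. The chat-message structure follows the generic transformers multimodal template, and the image filename is hypothetical; exact field names and prompting details may differ, so treat this as an illustration rather than the official recipe.

```python
# Minimal sketch: ask SmolVLM a question about one local image.
# Continues from the loading snippet above (processor and model already created).
from PIL import Image

image = Image.open("photo.jpg")  # hypothetical local image file

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},  # placeholder for the image passed to the processor below
            {"type": "text", "text": "Describe what is happening in this picture."},
        ],
    }
]

# Turn the chat messages into a text prompt, then tokenize text and image together.
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=200)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```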
With these specifications, Hugging Face highlights that SmolVLM can be easily used by smaller enterprises and AI enthusiasts and deployed on localised systems without requiring a major upgrade to the tech stack. Enterprises will also be able to run the AI model for text- and image-based inference without incurring significant costs.