NVIDIA Unveils Generative AI-Powered Visual AI Agents for Edge Deployment
An exciting development in AI technology—Vision Language Models (VLMs)—offers a more dynamic and flexible method for video analysis, according to the NVIDIA Technical Blog. VLMs let users interact with image and video input using natural language, making the technology more accessible and adaptable. These models can run on the NVIDIA Jetson Orin edge AI platform, or on discrete GPUs through NVIDIA NIM microservices.
What is a Visual AI Agent?
A visual AI agent is powered by a VLM, letting users ask a broad range of questions in natural language and get insights that reflect true intent and context from recorded or live video. These agents can be accessed through easy-to-use REST APIs and integrated with other services and mobile apps. This new generation of visual AI agents can summarize scenes, raise a wide range of alerts, and extract actionable insights from video using natural language.
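To make the REST interaction concrete, here is a minimal sketch of what a natural-language query to such an agent could look like. The endpoint path, port, and field names below are illustrative assumptions, not the documented Jetson Platform Services API; the testable part only builds the request payload.

```python
import json

# Hypothetical endpoint for a VLM-backed visual AI agent; the path and
# port are assumptions for illustration only.
AGENT_URL = "http://jetson.local:5010/api/v1/chat/completions"

def build_agent_query(question: str, stream_id: str) -> dict:
    """Build a natural-language query payload for the agent (assumed schema)."""
    return {
        "messages": [{"role": "user", "content": question}],
        "stream_id": stream_id,  # which camera or recording to reason over
        "max_tokens": 128,
    }

payload = build_agent_query("How many people are in the loading bay?", "cam-01")
body = json.dumps(payload)
# The serialized body would then be POSTed to AGENT_URL with any HTTP client,
# e.g. requests.post(AGENT_URL, data=body).
```

The key idea is that the question itself is free-form text; the agent's VLM, not a fixed detector, decides how to interpret it against the video.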
NVIDIA Metropolis provides visual AI agent workflows—reference solutions that accelerate the development of VLM-powered AI applications—to extract insights with contextual understanding from videos, whether deployed at the edge or in the cloud.
For cloud deployment, developers can use NVIDIA NIM, a set of inference microservices that include industry-standard APIs, domain-specific code, optimized inference engines, and an enterprise runtime, to power visual AI agents. Get started by visiting the API catalog to explore and try the foundation models directly from a browser.
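Since NIM microservices expose industry-standard (OpenAI-style) chat-completion APIs, calling a VLM NIM amounts to building a chat request whose user message embeds the image. The model identifier and image-embedding convention below are illustrative assumptions; consult the API catalog for the exact model names and request shape.

```python
import base64

def build_nim_request(image_bytes: bytes, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion request for a VLM NIM.

    The model id and the inline data-URI image convention are assumptions
    for illustration; check the API catalog entry for the real schema.
    """
    b64 = base64.b64encode(image_bytes).decode()
    return {
        "model": "nvidia/vila",  # assumed model id
        "messages": [{
            "role": "user",
            "content": f'{prompt} <img src="data:image/jpeg;base64,{b64}" />',
        }],
        "max_tokens": 256,
    }

# A few placeholder JPEG header bytes stand in for a real image here.
req = build_nim_request(b"\xff\xd8\xff\xe0", "Describe this scene.")
```

Because the API is OpenAI-compatible, the same request can be sent with any generic HTTP or OpenAI client, which is what makes the "industry-standard APIs" claim practically useful.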
Building Visual AI Agents for the Edge
Jetson Platform Services is a suite of prebuilt microservices that provide essential out-of-the-box functionality for building computer vision solutions on NVIDIA Jetson Orin. Included in these microservices are AI services with support for generative AI models such as zero-shot detection and state-of-the-art VLMs. VLMs combine a large language model with a vision transformer, enabling complex reasoning on text and visual input.
The VLM of choice on Jetson is VILA, given its state-of-the-art reasoning capabilities and its speed, achieved by optimizing the number of tokens per image. By combining VLMs with Jetson Platform Services, developers can create a VLM-based visual AI agent application that detects events on a live camera stream and sends notifications to the user through a mobile app.
Integration with Mobile App
The full end-to-end system can then be integrated with a mobile app to build the VLM-powered visual AI agent. To supply video input to the VLM, the Jetson Platform Services networking service and the Video Storage Toolkit (VST) automatically discover and serve IP cameras connected to the network. These streams are made available to the VLM service and the mobile app through the VST REST APIs.
From the app, users can set custom alerts in natural language, such as “Is there a fire?”, on their selected live stream. Once the alert rules are set, the VLM evaluates the live stream and notifies the user in real time through a WebSocket connected to the mobile app. This triggers a popup notification on the mobile device, and users can then ask follow-up questions in chat mode.
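The alert loop described above can be sketched as two small pieces: building a natural-language rule, and deciding from the VLM's answer whether to push a notification over the WebSocket. The rule schema and the yes/no answer convention are assumptions for illustration, not the documented behavior.

```python
def build_alert_rule(rule_id: str, question: str, stream_id: str) -> dict:
    """An alert rule is just a natural-language question bound to a stream
    (assumed schema, for illustration)."""
    return {"id": rule_id, "query": question, "stream_id": stream_id}

def alert_triggered(vlm_answer: str) -> bool:
    """The VLM periodically evaluates each rule against live frames; here we
    assume an answer beginning with 'yes' means the alert should fire and a
    notification should be pushed over the WebSocket to the mobile app."""
    return vlm_answer.strip().lower().startswith("yes")

rule = build_alert_rule("r0", "Is there a fire?", "cam-01")
print(alert_triggered("Yes, smoke and flames are visible."))  # True
print(alert_triggered("No fire is visible in the frame."))    # False
```

Because the rule is plain text, changing what the system watches for requires no retraining or redeployment, only a new question.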
Conclusion
This development highlights the potential of VLMs combined with Jetson Platform Services for building advanced visual AI agents. The full source code for the VLM AI service is available on GitHub, providing a reference for developers learning how to use VLMs and build their own microservices.
For more information, visit the NVIDIA Technical Blog.