Nvidia Releases Chat With RTX, an AI Chatbot That Runs Locally on Windows PC

Nvidia has released an artificial intelligence (AI)-powered chatbot called Chat with RTX that runs locally on a PC and does not need an Internet connection. The GPU maker has been at the forefront of the AI industry since the generative AI boom began, with its advanced AI chips powering AI products and services. Nvidia also operates an AI platform that provides end-to-end solutions for enterprises. The company is now building its own chatbots, and Chat with RTX is its first offering. The Nvidia chatbot is currently a demo app available for free.

Calling it a personalised AI chatbot, Nvidia released the tool on Tuesday (February 13). Users who want to download the software will need a Windows PC or workstation with an RTX 30- or 40-series GPU and a minimum of 8GB of VRAM. Once downloaded, the app can be installed with a few clicks and used right away.

Since it is a local chatbot, Chat with RTX has no knowledge of the outside world. However, users can feed it their own personal data, such as documents and files, and customise it to run queries on them. One use case is feeding it large volumes of work-related documents and asking it to summarise, analyse, or answer a specific question that could take hours to find manually. Similarly, it can be an effective research tool for skimming through multiple studies and papers. It supports text, PDF, doc/docx, and XML file formats. The AI bot also accepts YouTube video and playlist URLs: using the transcriptions of the videos, it can answer queries or summarise their content. This functionality does require Internet access.


As per the demo video, Chat with RTX is essentially a local web server paired with a Python instance, and it does not ship with a large language model (LLM) when freshly downloaded. Users can pick between the Mistral and Llama 2 models, and then run queries against their own data. The company states that the chatbot combines retrieval-augmented generation (RAG), its open-source TensorRT-LLM library, and RTX acceleration for its functionality.
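At a high level, RAG means retrieving the most relevant chunks of a user's documents and prepending them to the prompt sent to the LLM. The sketch below illustrates only the retrieval step, using plain word-overlap scoring in place of the GPU-accelerated embeddings Chat with RTX actually uses; the document snippets and query are illustrative stand-ins, not part of Nvidia's software.

```python
# Toy retrieval step of a RAG pipeline: score each document chunk by how
# many words it shares with the query, and return the best matches.
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercase bag-of-words representation of a string."""
    return Counter(text.lower().split())

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    """Return the k chunks with the highest word overlap with the query."""
    q = tokenize(query)
    scored = sorted(
        chunks,
        key=lambda c: sum((q & tokenize(c)).values()),  # shared-word count
        reverse=True,
    )
    return scored[:k]

# Stand-in "indexed documents" a user might have fed the chatbot.
docs = [
    "quarterly revenue grew 12 percent driven by data center sales",
    "the office will be closed on public holidays",
]
context = retrieve("how did revenue change this quarter", docs)
# In a real RAG system, `context` would be prepended to the prompt so the
# local LLM (e.g. Mistral or Llama 2) answers from the user's own files.
```

A production system would replace the word-overlap scorer with vector embeddings and an approximate nearest-neighbour index, but the control flow is the same: index once, retrieve per query, then generate.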

According to a report by The Verge, the app is approximately 40GB in size, and the Python instance can occupy up to 3GB of RAM. One particular issue pointed out by the publication is that the chatbot creates JSON files inside the folders it is asked to index. So, feeding it your entire document folder or a large parent folder might be troublesome.

