Nvidia has launched Chat with RTX, a new way to run a large language model (LLM) chatbot directly on your own device. The announcement comes shortly after Nvidia unveiled its newest workstation GPU, the RTX 2000 Ada.
Nvidia’s new Chat with RTX application lets users build a personalized chatbot tailored to their own needs, and it runs directly on their device, provided it has an RTX GPU. The customized GPT-style LLM is linked to the content stored on the device, including documents, notes, videos, and other data. Because everything runs locally, responses are fast and the data never leaves the machine, in contrast to cloud-based alternatives.
As with a typical chatbot, users simply type a query into Chat with RTX and receive a contextually relevant response. The added advantage is the ability to choose among open-source AI models, such as Mistral and Meta’s Llama.
The video below shows it in action.
When you search for information in your documents, Chat with RTX not only returns the answer but also cites the reference files it drew from in its responses. As with Google’s Gemini chatbot, users can also search for information inside YouTube videos.
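To illustrate the idea of retrieval with file citations, here is a minimal, hypothetical sketch. This is not Nvidia’s actual implementation: Chat with RTX uses embedding models and an LLM, while this toy version stands in plain word overlap for retrieval and simply returns the best-matching local file along with a snippet as the “citation”.

```python
# Toy sketch of local retrieval with file citations (illustrative only;
# not how Chat with RTX is actually implemented internally).
from collections import Counter

def tokenize(text):
    """Lowercase the text and strip trailing punctuation from each word."""
    return [w.lower().strip(".,!?") for w in text.split()]

def retrieve(query, docs):
    """Return (filename, snippet) of the local file best matching the query."""
    query_counts = Counter(tokenize(query))
    best_name, best_score = None, 0
    for name, text in docs.items():
        # Score by overlapping word counts between query and document.
        score = sum((Counter(tokenize(text)) & query_counts).values())
        if score > best_score:
            best_name, best_score = name, score
    if best_name is None:
        return None
    # The snippet acts as the "citation" pointing back at the source file.
    return best_name, docs[best_name][:80]

# Hypothetical local files standing in for a user's documents.
docs = {
    "notes.txt": "The GPU driver was updated on Tuesday.",
    "todo.txt": "Buy groceries and call the bank.",
}
print(retrieve("When was the GPU driver updated?", docs))
# → ('notes.txt', 'The GPU driver was updated on Tuesday.')
```

A real pipeline would replace the word-overlap score with vector similarity over embeddings and feed the retrieved snippet to an LLM as context, but the retrieve-then-cite flow is the same.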
Jensen Huang, Nvidia’s founder and CEO, remarked: “Generative AI is the single most significant platform transition in computing history and will transform every industry, including gaming. With over 100 million RTX AI PCs and workstations, NVIDIA is a massive installed base for developers and gamers to enjoy the magic of generative AI.”
Nevertheless, the tool has some inherent limitations. For example, it cannot recall previous conversations, so every chat starts from scratch. It can also be resource-intensive, and a few rough edges remain. Still, as Chat with RTX is under active development, we expect improvements in the near future.
Catch all the Sci-Tech News, Breaking News Events and Latest News Updates on The BOL News