Nvidia Debuts Fugatto AI Model That Can Generate Music, Voices and Sound Effects

Nvidia introduced a new artificial intelligence (AI) model on Monday that can generate a variety of audio and mix different types of sounds. The tech giant calls the foundation model Fugatto, which is short for Foundational Generative Audio Transformer Opus 1. While audio-focused AI platforms such as Beatoven and Suno exist, the company highlighted that Fugatto offers users granular control over the desired output. The AI model can generate or transform any mix of music, voices and sound based on specific prompts.

Nvidia Introduces AI Audio Model Fugatto

In a blog post, the tech giant detailed its new foundation model. Nvidia said Fugatto can generate music snippets, add or remove instruments in an existing song, change the accent or emotion of a voice, and “even let people produce sounds never heard before.”

The AI model accepts both text and audio files as input, and users can combine the two to fine-tune their requests. Under the hood, the foundation model's architecture is based on the company's previous work in speech modelling, audio vocoding, and audio understanding. Its full version uses 2.5 billion parameters and was trained using Nvidia DGX systems.
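Fugatto is not publicly available, so there is no documented API, but the input flow Nvidia describes can be sketched as a simple request object. The FugattoRequest and transform_audio names below are hypothetical, invented only to illustrate how a text prompt and an optional reference clip might be combined in one request.

```python
# Hypothetical sketch only: Fugatto has no public API, so the names below
# (FugattoRequest, transform_audio) are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FugattoRequest:
    """A combined prompt: free-form text plus an optional reference clip."""
    text_prompt: str                  # e.g. "remove the drums, add a cello"
    audio_path: Optional[str] = None  # reference audio to transform, or None to generate from scratch

def transform_audio(request: FugattoRequest) -> bytes:
    """Placeholder for the model call; would return generated audio bytes."""
    raise NotImplementedError("illustrative stub only")

# A text-only request would generate new audio; adding an audio file
# asks the model to transform the supplied clip instead.
req = FugattoRequest(
    text_prompt="shift the singer's accent toward British English",
    audio_path="vocals.wav",
)
```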


Nvidia highlighted that the team that built Fugatto included researchers from around the world, including Brazil, China, India, Jordan, and South Korea. This geographic diversity also contributed to the AI model's multi-accent and multilingual capabilities, the company said.

On the AI audio model's capabilities, the tech giant highlighted that Fugatto can generate types of audio output it was not pre-trained on. As an example, Nvidia said, “Fugatto can make a trumpet bark or a saxophone meow. Whatever users can describe, the model can create.”

Additionally, Fugatto can combine specific audio capabilities using a technique called ComposableART. With it, users can ask the AI model to generate audio of a person speaking French in a sad tone, and they can control the degree of sorrow and the heaviness of the accent with specific instructions.
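Nvidia has not published how ComposableART works internally, but the idea of blending instructions with adjustable weights can be sketched as a weighted sum of conditioning vectors. Everything below, from the random vector stand-ins to the compose helper, is an illustrative assumption rather than the actual technique.

```python
# Minimal sketch of the weighted-composition idea behind ComposableART,
# assuming (hypothetically) that each instruction maps to a conditioning
# vector and that instructions combine as a weighted sum. Nvidia has not
# published the exact mechanism; all names and numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for learned conditioning vectors for two separate instructions.
french_accent = rng.normal(size=16)   # "speak with a French accent"
sad_emotion   = rng.normal(size=16)   # "sound sad"

def compose(conditions, weights):
    """Blend per-instruction conditioning vectors with user-chosen weights."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # normalise so weights act as proportions
    return sum(w * c for w, c in zip(weights, conditions))

# Dial the accent up and the sadness down, or vice versa.
heavy_accent_mild_sadness = compose([french_accent, sad_emotion], [0.8, 0.2])
mild_accent_heavy_sadness = compose([french_accent, sad_emotion], [0.3, 0.7])
```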


Further, the foundation model can also generate audio with temporal interpolation, or sounds that change over time. For instance, users can generate the sound of a rainstorm with crescendos of thunder that fade into the distance. Users can also experiment with these soundscapes, and the model can create them even if it has never processed such a sound before.
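As a rough sketch of what temporal interpolation means in practice, the snippet below interpolates a hypothetical "thunder intensity" control across a 30-second clip. The control name and keyframe values are invented for illustration and are not part of any documented Fugatto interface.

```python
# Illustrative sketch of temporal interpolation: a control value that
# rises to a crescendo and then fades, sampled over the clip's duration.
# The "thunder_intensity" control is a hypothetical stand-in; Nvidia has
# not documented Fugatto's actual control interface.
import numpy as np

duration_s = 30.0
frames_per_s = 10                       # control resolution, not the audio sample rate
t = np.linspace(0.0, duration_s, int(duration_s * frames_per_s))

# Keyframes: quiet start, thunder crescendo at 12 s, fading into the distance.
key_times     = [0.0, 12.0, 30.0]
key_intensity = [0.1, 1.0, 0.0]
thunder_intensity = np.interp(t, key_times, key_intensity)

# Each frame's intensity value would then condition that moment of the
# generated rainstorm, so the thunder swells and recedes over time.
```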


At present, the company has not shared any plans to make the AI model available to users or enterprises.

