Hi,

When training xtts_v2 following the documentation guide and using the GPT trainer, I get a model more than twice the size of the original one: 5 GB vs 2 GB.

The config parameters are also different, and if I try to use the original config the code cannot handle it, which suggests (I guess) that a different trainer was used.

So I wonder how I can train xtts_v2 using the same parameters and tools that were used to produce the original model.

Thanks

Hi @galazzo,

It is normal to get a 5 GB checkpoint during training. The released checkpoint does not include the optimizer state, but the checkpoints saved during training do. If you like, you can remove the optimizer state by loading the checkpoint and deleting the key…
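A minimal sketch of what that could look like, assuming the checkpoint is a plain dict saved with `torch.save` and that the optimizer state lives under an `"optimizer"` key (the exact key names are an assumption — inspect `checkpoint.keys()` in your TTS version first):

```python
import torch

def strip_optimizer(in_path: str, out_path: str) -> None:
    """Load a training checkpoint, drop the optimizer state, save a slim copy."""
    ckpt = torch.load(in_path, map_location="cpu")
    # "optimizer" (and possibly "scaler") key names are assumptions;
    # check ckpt.keys() for what your trainer actually saved.
    for key in ("optimizer", "scaler"):
        ckpt.pop(key, None)
    torch.save(ckpt, out_path)

# Demo with a synthetic checkpoint standing in for a real training one:
dummy = {
    "model": {"w": torch.zeros(4)},
    "optimizer": {"state": {}, "param_groups": []},
}
torch.save(dummy, "full.pth")
strip_optimizer("full.pth", "slim.pth")
print("optimizer" in torch.load("slim.pth"))  # False
```

The model weights are untouched; only the optimizer bookkeeping (which roughly doubles the file for Adam-style optimizers, since they store momentum and variance per parameter) is dropped.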

Answer selected by erogol