Training xtts_v2: getting a 5GB model vs the 2GB original #3362
-
Hi, I am training xtts_v2 following the documentation guide with the GPT trainer, and the resulting model is more than twice the size of the original one: 5GB vs 2GB. The config parameters are different, and if I try to use the original config the code cannot handle it, so I guess a different trainer was used. How can I train xtts_v2 with the same parameters and tools used to produce the original model? Thanks
Replies: 4 comments
-
If you can train the model in FP16, the checkpoint will weigh 2GB, but FP16 training might not work correctly because it doesn't display statistics. Unfortunately, I've already asked about this, and no one has responded.
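A minimal sketch of a post-hoc alternative, assuming a standard PyTorch checkpoint: casting the saved FP32 weights to FP16 after training also roughly halves the file size, without changing the training run itself. The file names and the "model" key here are assumptions and may differ for your trainer.

```python
import torch

# Assumed paths and keys; adjust to your actual checkpoint layout.
checkpoint = torch.load("model.pth", map_location="cpu")

# Some trainers nest the weights under a "model" key; fall back to the
# top-level dict otherwise.
state_dict = checkpoint.get("model", checkpoint)

# Cast every floating-point tensor to FP16 to halve its on-disk footprint.
for name, tensor in state_dict.items():
    if torch.is_tensor(tensor) and tensor.is_floating_point():
        state_dict[name] = tensor.half()

torch.save(checkpoint, "model_fp16.pth")
```

Note this casts the weights after training rather than training in FP16 itself, so it sidesteps whatever breaks the statistics display, at the usual cost of some precision.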
-
@Edresson any idea why?
-
Hi @galazzo,
It is normal to get a 5GB checkpoint during training. The released checkpoint does not include the optimizer state, but the checkpoints saved during training do. If you like, you can remove the optimizer state by loading the checkpoint and deleting the "optimizer" key, as sketched below.
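A minimal sketch of that cleanup, assuming a standard PyTorch checkpoint; the file paths are placeholders:

```python
import torch

# Load the training checkpoint (model weights plus optimizer state).
checkpoint = torch.load("checkpoint.pth", map_location="cpu")

# The optimizer state is what roughly doubles the on-disk size;
# it is only needed to resume training, not for inference.
checkpoint.pop("optimizer", None)

# Save the slimmed-down checkpoint for inference.
torch.save(checkpoint, "checkpoint_no_optimizer.pth")
```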