Hello,
We have successfully trained a multispeaker model based on your implementation using the learn_channel_contributions multispeaker option. It works properly and synthesises correctly for all speakers.
We would like to implement the same option in PyTorch (plugged into this repo: https://github.com/tugstugi/pytorch-dc-tts).
Could you provide us with some insight into how this works? Several layers call the function defined in `ophelia/modules.py` (line 78 at commit 65bb7e8):

```python
def learn_channel_contributions(input_tensor, codes, ncodes=1, reuse=None):
```

The size of the `lcc_gate` variable depends on the size of `input_tensor`. Is this embedding reinitialised at each call, or is anything shared between the layers? It looks like it is initialised separately at each layer — is that right? (If this is implemented based on an article, a link to it would be greatly appreciated as well.)
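For reference, here is a minimal sketch of how we currently understand the mechanism and would port it to PyTorch. This is our guess, not your implementation: we assume `lcc_gate` is a per-speaker embedding sized to the layer's channel count, owned separately by each layer (not shared), and applied as a multiplicative gate over channels. Please correct us if any of this is wrong.

```python
import torch
import torch.nn as nn

class LearnChannelContributions(nn.Module):
    """Hypothetical PyTorch port of learn_channel_contributions.

    Assumption: each layer owns its own speaker-code -> gate embedding
    (lcc_gate), whose width matches that layer's channel count, and the
    gate scales the layer's activations per speaker.
    """

    def __init__(self, nchannels: int, ncodes: int = 1):
        super().__init__()
        # One gate vector per speaker code; initialised per layer.
        self.lcc_gate = nn.Embedding(ncodes, nchannels)
        # Start near a neutral gate (sigmoid(0) = 0.5); the exact
        # initialisation in the original code is an open question.
        nn.init.zeros_(self.lcc_gate.weight)

    def forward(self, x: torch.Tensor, codes: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time); codes: (batch,) speaker indices
        gate = torch.sigmoid(self.lcc_gate(codes))  # (batch, channels)
        return x * gate.unsqueeze(-1)               # broadcast over time
```

If this per-layer picture is right, each conv block in the PyTorch port would instantiate its own `LearnChannelContributions(nchannels, ncodes)` rather than sharing one embedding across the network.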
Hope my question makes sense, I can try to explain it better if needed.
Thanks!