
Questions about learn channel contributions #3

@lorinczb


Hello,

We have successfully trained a multispeaker model based on your implementation, using the learn_channel_contributions multispeaker option. It works properly and synthesises correctly for all speakers.

We would like to implement the same option in PyTorch (plugged into this repo: https://github.com/tugstugi/pytorch-dc-tts).

Could you give us some insight into how this works? Several layers call the function:

def learn_channel_contributions(input_tensor, codes, ncodes=1, reuse=None):
and the size of the lcc_gate variable depends on the size of the input_tensor. Is this embedding reinitialized at each call, or is anything shared between the layers? It looks like it is initialized separately at each layer; is that correct? (If this is implemented based on an article, it would be greatly appreciated if you could post it as well.)
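For reference, here is how we currently understand the mechanism, written as a hypothetical PyTorch sketch for the port. This is only our reading of the signature above, not your implementation: we assume each layer creates its own lcc_gate embedding table (one row per speaker code, one column per channel of that layer's input_tensor), with nothing shared across layers, and that the embedding acts as a per-channel sigmoid gate. The module name, the ones-initialization, and the gating form are all our assumptions. Please correct us if any of this is wrong.

```python
import torch
import torch.nn as nn

class LearnChannelContributions(nn.Module):
    """Hypothetical sketch of learn_channel_contributions.

    Assumption: each layer instantiates its own module, so each
    lcc_gate embedding is sized to that layer's channel count and
    nothing is shared between layers.
    """

    def __init__(self, nchannels, ncodes=1):
        super().__init__()
        # one gate vector per speaker code, one entry per channel
        self.lcc_gate = nn.Embedding(ncodes, nchannels)
        # start near pass-through (assumption, not from the repo)
        nn.init.ones_(self.lcc_gate.weight)

    def forward(self, x, codes):
        # x: (batch, channels, time); codes: (batch,) speaker indices
        gate = torch.sigmoid(self.lcc_gate(codes))  # (batch, channels)
        # scale each channel's activations, broadcasting over time
        return x * gate.unsqueeze(-1)
```

If this matches your TF version, the answer to our own question would be that the embedding is indeed reinitialized per layer (one module instance per call site), with only the speaker code shared between layers.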

I hope my question makes sense; I can try to explain it further if needed.

Thanks!
