Using the Docker image "ghcr.io/coqui-ai/xtts-streaming-server", a POST request gives the following first-chunk timings:
Time to make POST: 0.18376178992912173s
Time to first chunk: 0.8716433839872479s
When I use local inference instead:
first chunk: 0.2440338134765625s
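For context, the timings above can be reproduced with a small helper that measures the time until the first chunk of a streaming response arrives. This is a minimal sketch; the `/tts_stream` endpoint path and the request payload in the comment are assumptions, not verified against the server's API.

```python
import time
from typing import Iterable, Tuple

def time_to_first_chunk(chunks: Iterable[bytes]) -> Tuple[float, bytes]:
    """Return seconds elapsed until the first chunk arrives, plus that chunk."""
    start = time.perf_counter()
    first = next(iter(chunks))
    return time.perf_counter() - start, first

# Against the streaming server it would be used roughly like this
# (endpoint path and payload are assumptions):
#
#   import requests
#   t0 = time.perf_counter()
#   resp = requests.post("http://localhost:8000/tts_stream",
#                        json={"text": "hello", "language": "en"},
#                        stream=True)
#   print(f"Time to make POST: {time.perf_counter() - t0}s")
#   latency, chunk = time_to_first_chunk(resp.iter_content(chunk_size=1024))
#   print(f"Time to first chunk: {latency}s")

# Local demonstration with a fake stream that waits 50 ms before yielding:
def fake_stream():
    time.sleep(0.05)
    yield b"\x00" * 1024

latency, chunk = time_to_first_chunk(fake_stream())
```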