We use a lot of computer-generated voices for the courses we build in Storyline at work. They are quick and efficient, the quality isn't bad, and because we change the courses weekly, they are easy to update. We don't have to book a voice actor (the same voice actor, every time) in a quiet room to re-record audio, and we don't have to splice and slice audio tracks to fit in new pieces. Still, people prefer a human voice.
I've been following Dan Ruta's xVA Synth project on NexusMods and on his GitHub. Game studios record their games with live human voice actors. Dan has taken a number of these games and turned their recordings into computer-generated voices that let you adjust the "pitch and durations of individual letters to provide control over emotion and emphasis" (NexusMods).
I wonder if the process for getting game voices into the software will ever be opened up so that we can record our own voices, add them to the database, and use them in our own projects without having to record every single line.