The Siri team has a great post about the evolution of Siri’s speech synthesis on the Apple Machine Learning Journal:
Siri is a personal assistant that communicates using speech synthesis. Starting in iOS 10 and continuing with new features in iOS 11, we base Siri voices on deep learning. The resulting voices are more natural, smoother, and allow Siri’s personality to shine through. This article presents more details about the deep learning based technology behind Siri’s voice.
Just scroll down to the bottom and listen to the progression between iOS 9, 10, and 11. It’s really impressive.
I’m surprised beta users haven’t said more about this over the course of the public betas. Until this post by Apple I hadn’t seen it mentioned even once. Personally, I consider it a fantastic improvement and thought it was one of the highlights of the WWDC keynote. When I installed the public beta on my iPad, the first thing I did was invoke Siri so I could hear her new voice. So much better!