@hosford42 As far as requirements go: in general, if it can work with both SAPI5 and the NVDA add-on API, it will also suit the requirements of Speech Dispatcher on Linux and the macOS APIs. The important thing is that most screen readers want to register indexes and callbacks. So, for example, if I press a key to stop the screen reader speaking, it needs to know exactly where the text-to-speech engine stopped so that it can put the cursor in the right place. It also wants to know what the TTS engine is currently reading, so it can decide when to advance the cursor, fetch new text from the application to send for speaking, and so on. I really wish I had a better example of how that works in NVDA than this: github.com/fastfinge/eloquence_64/blob/master/eloquence.py
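To make the index/callback idea concrete, here's a minimal sketch in Python. This is *not* the real NVDA or SAPI5 API — the class and method names (`ToySynth`, `speak`, `stop`, `on_index_reached`, `IndexMark`) are all invented for illustration. It just shows the contract a screen reader expects: text interleaved with index marks, a callback fired as each mark is reached, and a stop that reports the last mark reached so the cursor can be placed correctly.

```python
from typing import Callable, List, Optional, Union


class IndexMark:
    """A marker embedded in the speech stream at a known text position."""
    def __init__(self, index: int):
        self.index = index


class ToySynth:
    """Illustrative stand-in for a TTS driver that fires index callbacks.

    A real driver streams audio asynchronously; here we walk the
    sequence synchronously so the callback order is easy to follow.
    """
    def __init__(self, on_index_reached: Callable[[int], None]):
        self.on_index_reached = on_index_reached
        self.last_index: Optional[int] = None
        self._stopped = False

    def speak(self, sequence: List[Union[str, IndexMark]]) -> None:
        for item in sequence:
            if self._stopped:
                break
            if isinstance(item, IndexMark):
                # Tell the screen reader how far speech has progressed.
                self.last_index = item.index
                self.on_index_reached(item.index)
            # else: item is text that would be synthesized as audio

    def stop(self) -> Optional[int]:
        """Cancel speech; return the last index reached so the caller
        (the screen reader) can put the cursor in the right place."""
        self._stopped = True
        return self.last_index


# The screen reader registers a callback and sends text with marks.
reached = []
synth = ToySynth(on_index_reached=reached.append)
synth.speak([
    "Hello, ", IndexMark(0),
    "world. ", IndexMark(1),
    "Next line.", IndexMark(2),
])
print(reached)          # -> [0, 1, 2]
print(synth.last_index) # -> 2
```

In a real driver the marks arrive back asynchronously from the audio pipeline, which is exactly why the callback registration matters: the screen reader can't know on its own when the engine has actually spoken past a given position.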