🇨🇦Samuel Proulx🇨🇦 @fastfinge@interfree.ca
Admin
completely blind computer geek, lover of science fiction and fantasy (especially LitRPG). I work in accessibility, but my opinions are my own, not that of my employer. Fandoms: Harry Potter, Discworld, My Little Pony: Friendship is Magic, Buffy, Dead Like Me, Glee, and I'll read fanfic of pretty much anything that crosses over with one of those.
keyoxide: aspe:keyoxide.org:PFAQDLXSBNO7MZRNPUMWWKQ7TQ
Location
Ottawa
Birthday
1987-12-20
Pronouns
he/him (EN)
matrix @fastfinge:interfree.ca
keyoxide aspe:keyoxide.org:PFAQDLXSBNO7MZRNPUMWWKQ7TQ
🇨🇦Samuel Proulx🇨🇦 @fastfinge@interfree.ca
2mo
@jscholes @matt @tunmi13 I would consider these to not work. They're Windows only, and they have a whole bunch of strange timing and delay issues between when they fire and when they get read.
🇨🇦Samuel Proulx🇨🇦 @fastfinge@interfree.ca
2mo
@jscholes @matt @tunmi13 So would I. But this gets hard to do in a way that's cross-platform and cross-language, while also preserving ease of use. Most of these abstraction libraries just don't even support Braille at all.
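[Editor's note: Tolk is one wrapper that does expose Braille alongside speech, though it's Windows-only rather than cross-platform. A minimal Python sketch of calling it through ctypes, assuming Tolk.dll and its screen reader driver DLLs sit next to the script:]

```python
import ctypes

tolk = ctypes.cdll.LoadLibrary("Tolk.dll")
tolk.Tolk_DetectScreenReader.restype = ctypes.c_wchar_p

tolk.Tolk_Load()
print("Active screen reader:", tolk.Tolk_DetectScreenReader())

message = ctypes.c_wchar_p("Door opened")
if tolk.Tolk_HasBraille():
    tolk.Tolk_Braille(message)   # mirror the message on the Braille display
tolk.Tolk_Output(message, True)  # True = interrupt whatever is being spoken
tolk.Tolk_Unload()
```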
🇨🇦Samuel Proulx🇨🇦 @fastfinge@interfree.ca
2mo
@matt @jscholes @tunmi13 And what happens if the main window ever gets destroyed or recreated? While I can often hook into app startup, most mod frameworks don't allow detailed hooks into window creation. It's possible I'm missing things, and smarter people than me can come up with a way to make this generally viable. But based on my research and skill level, I just don't see a path to avoid screen reader libraries in the majority of cases. Live regions are only useful when you're writing your own app from scratch or modifying an open source app, and you never need to alert the user to things while the foreground window doesn't have focus. That's a vanishingly small number of cases. As far as I can see, screen reader APIs, and robust libraries to call them, are going to be useful for years to come.
🇨🇦Samuel Proulx🇨🇦 @fastfinge@interfree.ca
2mo
@matt @jscholes @tunmi13 Better, but still not going to work for 99 percent of mods. In general, you don't get to spawn a new window, or modify properties on existing ones. The only place I could make this work is adispeak; I can write a full C# DLL there and do whatever I want. But if I do that, I lose the ability to notify the user if they have the IRC client in the system tray, or even just on the taskbar. Far from ideal.
🇨🇦Samuel Proulx🇨🇦 @fastfinge@interfree.ca
2mo
@matt @jscholes @tunmi13 So would I. But the various game mods are developed by people mostly like me: hobbyists with day jobs who are just skilled enough to find solutions and get things done. Without clear documentation and an easy-to-call API we can plug in, we're stuck. So I wouldn't expect this any time soon. All of the output systems in the above poll require one, maybe as many as three, lines of code to use.
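[Editor's note: the accessible_output2 Python wrapper, one of the libraries commonly used for this, really is that short. A minimal sketch:]

```python
from accessible_output2.outputs.auto import Auto

speaker = Auto()                 # auto-detects JAWS, NVDA, SAPI, etc.
speaker.output("You got mail!")  # speaks, and brailles where supported
```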
🇨🇦Samuel Proulx🇨🇦 @fastfinge@interfree.ca
2mo
@jscholes @matt @tunmi13 It's possible my understanding is out of date. I'd love a better way to do things. However, as far as I know, live regions require the window to have focus, and require the app to be a web app. That's just not the case for any of my use cases. Sometimes I'm using an app's built-in scripting language to add accessibility, sometimes I'm patching an app to send text to the screen reader, and sometimes I'm creating an entirely separate app that runs in the background, reads log files, and outputs alerts that way. In none of these cases would live regions work.
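[Editor's note: a minimal sketch of that last pattern, a separate background app tailing a log file and speaking new lines, again using the accessible_output2 wrapper; the log file name is a placeholder:]

```python
import time
from accessible_output2.outputs.auto import Auto

speaker = Auto()
with open("game.log", encoding="utf-8") as log:
    log.seek(0, 2)  # start at the end; only announce lines written from now on
    while True:
        line = log.readline()
        if line:
            speaker.output(line.strip())
        else:
            time.sleep(0.5)  # no new output yet; poll again shortly
```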
🇨🇦Samuel Proulx🇨🇦 @fastfinge@interfree.ca
2mo
@matt @jscholes @tunmi13 My understanding is that I also need to be comfortable in Rust. I'm not. 99 percent of the time, these APIs are the only thing allowing medium-skilled programmers like myself to plug accessibility holes.
🇨🇦Samuel Proulx🇨🇦 @fastfinge@interfree.ca
2mo
@matt @jscholes @tunmi13 If only platform APIs were actually reliable in the real world. If only toolkits like Unity and others supported them. The days of the screen reader API are far from over.
🇨🇦Samuel Proulx🇨🇦 @fastfinge@interfree.ca
2mo
Another thing I enjoy: getting all the pipes and parameters for a command just exactly perfect, and thinking to myself: "I should add an alias for this in my profile!" Then opening my profile, and finding an alias I added to do that exact thing two years ago.
🇨🇦Samuel Proulx🇨🇦 @fastfinge@interfree.ca
2mo
Customizing my terminal makes me feel so productive! Of course, all of the time-saving aliases, hotkeys, and configuration changes I just made I'm going to forget within two days, and they'll never make it into my muscle memory. But hey, if I change several of the habits I've developed over a lifetime of computing, the modifications I just spent an hour making could save me as much as 0.2 seconds per month!
🇨🇦Samuel Proulx🇨🇦 @fastfinge@interfree.ca
2mo
@jscholes @andrew @klittle667 My favourite was a documentary I watched where whenever someone was speaking a different language, the describer would announce "Subtitles appear." and then...not read them! Gee, thanks.
🇨🇦Samuel Proulx🇨🇦 @fastfinge@interfree.ca
2mo
@jscholes @andrew @klittle667 Oh! Do they do the deeply silly thing that our French channels sometimes do here? Where they air the French dub of the movie, but mess up and put the English version of the movie with Audio Description on the second stream?
🇨🇦Samuel Proulx🇨🇦 @fastfinge@interfree.ca
2mo
@jscholes @andrew @klittle667 We do it rarely. It's mostly for live stuff like political debates or ceremonies or whatever. Sports, in one case. We also have language-specific channels. But at a political debate, for example, people might be frequently switching between French and English.
🇨🇦Samuel Proulx🇨🇦 @fastfinge@interfree.ca
2mo
@jscholes @andrew @klittle667 So in other words, in order to even know what the overly complicated standard is, you have to pay for the documentation. And then you actually have to implement the thing. And then, of course, the fact that there is no public open-source reference implementation means that everyone does it slightly differently, so if you want to build your own equipment to work with AD tracks, you have to account for every possible way the documentation could ever be interpreted by anyone, along with some impossible ones. And absolutely none of this infrastructure could be reused to offer multilingual dubs of programs in different audio streams. Whereas in Canada described audio is effectively just another language; you will sometimes encounter a program with four different audio streams: English, English AD, French, and French AD. And here I thought the UK was better at this than North America.
🇨🇦Samuel Proulx🇨🇦 @fastfinge@interfree.ca
2mo
@jscholes @andrew @klittle667 But in exchange, it means you can't just use the audio description that already exists when you're airing shows from the US and Canada, because we don't master our AD that way. It also explains to me why, when Canadian TV channels import Audio Description from the UK, the mix is an utter and total mess. I thought you guys were just really bad at that. One UK show I watch, for example, has all program audio on the left channel, and all audio description on the right channel. It's the worst of all possible worlds!
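[Editor's note: that particular left/right split is at least fixable after the fact with ffmpeg's pan filter, folding both channels into an ordinary stereo mix. A hypothetical repair, with placeholder file names:]

```python
import subprocess

subprocess.run([
    "ffmpeg", "-i", "episode.mkv",
    # c0/c1 are the input left/right channels; sum them onto both outputs
    # ("<" renormalizes the gains so the sum doesn't clip)
    "-af", "pan=stereo|c0<c0+c1|c1<c0+c1",
    "-c:v", "copy",
    "episode-fixed.mkv",
], check=True)
```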
🇨🇦Samuel Proulx🇨🇦 @fastfinge@interfree.ca
2mo
@jscholes @andrew @klittle667 I...I...hate everything about that system. No really, everything. Does the standard also require that you play back all audio in mono because of years-old limitations having to do with the SAP on analogue TV? That's the only way I can think of to make that worse.
🇨🇦Samuel Proulx🇨🇦 @fastfinge@interfree.ca
2mo
@jscholes @andrew @klittle667 I am utterly and completely baffled. If I'm not allowed to control the mix levels myself, what on earth is the point of not just mixing at source? This has all the disadvantages of both approaches.
🇨🇦Samuel Proulx🇨🇦 @fastfinge@interfree.ca
2mo
@jscholes @andrew @klittle667 Wait, what? The TV stream packets themselves include parameters for ducking? Why? I assumed TV receivers got to choose how that would work, either by using some kind of auto-ducking, or just playing the default track at a constant (slightly lower) volume and the AD track at a higher volume. That's what I was assuming you'd do with ffmpeg; it does allow basic modifications of track volumes.
🇨🇦Samuel Proulx🇨🇦 @fastfinge@interfree.ca
2mo
@jscholes @andrew @klittle667 Also, the idea that he'd have to make huge modifications to the player is nonsense. Channels DVR already includes ffmpeg. It can already do live transcoding of streams. Literally all he has to do is add a setting in the advanced preferences to enable mixing audio description and TV audio together, then let ffmpeg handle it all.
🇨🇦Samuel Proulx🇨🇦 @fastfinge@interfree.ca
2mo
@andrew @jscholes @klittle667 So looking into this a bit more, it looks like ffmpeg can just do everything you want itself with amix and amerge. ffmpeg.org/ffmpeg-filters.html#amerge

So use cuckoo to intercept Channels' call to comskip so you'll know when the recording is done, and run ffmpeg to modify the output recording so that audio from both streams is mixed together.
github.com/Channels-DVR-Goodies/cuckoo

Sadly I can't actually do this myself as I'm not in the UK, so I couldn't test it. But once someone does, the process could be easily documented.
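[Editor's note: a sketch of what that ffmpeg step might look like, assuming program audio is the first audio stream and AD the second (real stream indexes vary; check with ffprobe), with amix's weights option leaning the balance slightly toward the description track:]

```python
import subprocess

subprocess.run([
    "ffmpeg", "-i", "recording.ts",
    "-filter_complex",
    # mix audio streams 0 and 1, weighting the AD track a little louder
    "[0:a:0][0:a:1]amix=inputs=2:duration=first:weights=1 1.5[mixed]",
    "-map", "0:v", "-map", "[mixed]",
    "-c:v", "copy", "-c:a", "aac",
    "recording-described.ts",
], check=True)
```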