A reader service for the disabled

What would it take to add the Orca screen reader to Othernet? This could be an additional mission for the project.


Bump to @Syed

Any user client device used with the Othernet ground station could likely have an OS-specific text-to-speech reader installed.
I suppose someone could patch in espeak and some network audio streaming server, but doing the TTS locally on the user device seems easier than a non-standard implementation. Orca is a screen reader for Linux distros using the GNOME UI (I suppose it could be built for other POSIX-compliant OSes).

I know it could be done; it's just up to the powers that be to implement it into the project. It would be an additional "selling point," because I do know there are areas of the country that lack the necessary tools for the disabled without internet, and most times people wait days to weeks for snail mail to deliver them 3-4 audiobooks. Most states lack a reading service, offering only limited access via an FM SCA channel, 220 MHz, or, like my state has, PBS audio subchannels.

@Gordon_Carlile I think @biketool described the situation pretty well. Accessibility products are in the domain of end-user devices. It would be the user’s PC/tablet which would be running a screen reader. I admit that I am not at all familiar with accessibility applications, but I would be happy to learn more about this. I just don’t think there is anything for us to do here.

What I mean by this is we deliver content files, such as news articles and Wikipedia pages. The receiver, Dreamcatcher, is primarily a storage unit and access point for that content. There must be another device which actually renders/displays/reads the content. We don’t have anything to do with those end user devices.

I would be thrilled to provide a reading service. There is a content licensing issue, though. We would need to partner with an existing reading service. Do you know of any that are particularly easy to work with?

@Gordon_Carlile, what exactly would you like to see the Othernet earth station provide? (I feel like you have a specific vision we are missing, or perhaps we are using the same words with different meanings and miscommunicating.) If it helps, please make a narrative of how you see an end user benefiting from your idea of modified Dreamcatcher software or hardware.
The audio channel is usually already streaming audio (VOA for now); you can even plug a speaker into the 3.5 mm TRS phone jack built into the receiver/processor (Dreamcatcher) board and housing assembly to listen to that stream.
Maybe we are missing what you would like to see, but it helps to think of the Dreamcatcher (Othernet Earth station) like a home network access point provided by your DSL provider: you bring your own equipment to connect to that hotspot/modem/router box.
I suppose if people wanted to, they could patch in espeak, festival, or whatever other alternatives are built for ARM Linux (and the kernel in use), plus an MP3 encoder; a script could convert received 'pages' into text files and maybe RSS them to phones and tablets. It would probably require an additional flash card to hold the generated TTS audio files.
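To make the idea concrete, here is a minimal sketch of such a conversion script, not a definitive implementation. Everything in it is an assumption for illustration: the page and output directories are made-up paths, and it presumes espeak and lame are installed on the board.

```python
"""Hypothetical sketch: batch-convert received text pages to MP3 with espeak.

Assumes espeak and lame are on the PATH; PAGES_DIR and AUDIO_DIR are
made-up paths, not real Othernet/Dreamcatcher locations.
"""
import subprocess
from pathlib import Path

PAGES_DIR = Path("/mnt/downloads/pages")   # hypothetical receive directory
AUDIO_DIR = Path("/mnt/sdcard/tts-audio")  # hypothetical extra flash card

def mp3_name(page: Path) -> str:
    """Derive a filesystem-safe MP3 filename from a page's stem."""
    safe = "".join(c if c.isalnum() or c in "-_" else "_" for c in page.stem)
    return safe + ".mp3"

def convert(page: Path, out_dir: Path) -> Path:
    """Pipe espeak's WAV output straight into lame to produce an MP3."""
    out = out_dir / mp3_name(page)
    espeak = subprocess.Popen(
        ["espeak", "-f", str(page), "--stdout"], stdout=subprocess.PIPE)
    subprocess.run(["lame", "-", str(out)], stdin=espeak.stdout, check=True)
    espeak.wait()
    return out

if __name__ == "__main__":
    AUDIO_DIR.mkdir(parents=True, exist_ok=True)
    for page in sorted(PAGES_DIR.glob("*.txt")):
        print("converted:", convert(page, AUDIO_DIR))
```

A cron job could run this after each receive pass, and the resulting MP3s could be listed in a generated RSS feed for phones and tablets to pull over WiFi.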
My issue is that the free (FOSS) audio options are not great, IMO, but obviously better than no access.
Still, it seems there are better options and more flexibility on the bring-your-own device already required to take advantage of an Othernet Earth station, while also not taxing the electrical power and processing capacity of a small, lightweight embedded machine. It also runs into scaling problems: there has been talk of expanding the footprint of an Othernet rollout in an unwired community using inexpensive ESP32 dev boards (or others) for LoRa mesh networking, with each ESP32 hosting a WiFi hotspot for end users. The available long-range mesh network data rate is pretty low, so again it is better to let the user device convert a lightweight text file or 'web' page on-device rather than at the Earth station or, if implemented, at the daughter/extender hotspots.
One idea in an Android-dominated mobile world might be to include a freely sharable TTS APK (if you feel qualified, please scour F-Droid for one you think looks good), so a person anywhere on the wide spectrum of vision impairment could have a friend grab the APK once, and from then on the user's device can provide accessibility customized to that user. For example:
https://f-droid.org/packages/edu.cmu.cs.speech.tts.flite/
This ready-to-install Flite TTS app (a 2 MB APK; you need to set permissions to allow untrusted software sources) is available to users without royalty or further permission, as everything on F-Droid is free/libre-licensed software. I would guess something of similar usefulness is released under the GPL or a similar libre license and available for Windows/Mac/iOS, but I have no useful knowledge of that ecosystem.
Just so you know, almost all of these software projects take bug and feature reports/requests from the public, so if you find a FOSS project which you think needs a tweak to improve usability, let them know.

Is there any standard automagic discovery or communication API in use for special-needs accessibility that we could leverage to make life easier for both user and dev?

Not something that relies on Android; by the time it finally gets to reading a story you will have fallen asleep, due to the way Othernet's UI functions.