Audio Stream Discussion

I didn’t see a thread regarding the audio stream, but I remember comments in reply to my note about the stream being broken, so let’s discuss the audio stream provided by Othernet, our thoughts on it, and possible future ideas.

My idea:

If possible, bump the total stream bandwidth to 24,000 bps and allocate 17,000 bps of that to the audio. The current 8,000 bps stream is terrible, and lately it has been playing music, which sounds even worse than the voice-only service.
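
Rough numbers for that split, just to show what would be left for files (the figures are the ones above, not anything Othernet has published):

```python
# Back-of-the-envelope budget for the proposed 24 kbps stream.
# All figures are assumptions taken from this post, not Othernet specs.
TOTAL_BPS = 24_000
AUDIO_BPS = 17_000
DATA_BPS = TOTAL_BPS - AUDIO_BPS          # 7,000 bps left for file data

SECONDS_PER_DAY = 86_400
data_bytes_per_day = DATA_BPS / 8 * SECONDS_PER_DAY
print(f"Data share: {DATA_BPS} bps "
      f"(~{data_bytes_per_day / 1e6:.1f} MB per day)")   # ~75.6 MB/day
```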

In the past, with an SDR on HF, I picked up a DRM (Digital Radio Mondiale) stream from Kuwait. It was 16.7 kbps AAC+, and for what it was, it sounded great. It was not only stereo (AAC+ with SBR and Parametric Stereo, or whatever), but it sounded at least ten times clearer and more enjoyable than Othernet Radio.

I remember a thread saying the end goal was a 100 kbps stream, but I wonder whether that is still the goal, as the new EU beam has even LESS bandwidth than the NA beam. 100 kbps would be great because it could carry MULTIPLE 17 kbps radio channels that all sound acceptable.
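
For a rough sense of how many channels that would allow (again, just arithmetic on the numbers above):

```python
# How many 17 kbps channels fit in a hypothetical 100 kbps downlink.
TOTAL_BPS = 100_000
CHANNEL_BPS = 17_000
channels, leftover = divmod(TOTAL_BPS, CHANNEL_BPS)
print(f"{channels} channels, {leftover} bps left over for file data")  # 5 channels, 15,000 bps
```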

The data doesn’t really need much bandwidth: it trickles in, and we can’t do anything with it until the file is complete and the system has processed it. The audio, on the other hand, is a real-time stream and should be given priority.

Anyway, these are my thoughts about the audio service. What are yours?

My thought would be to ‘pause’ the data stream while broadcasting audio. This would require the audio to be scheduled (for example, only on for the first ten minutes of each hour). I would recommend creating a schedule with various languages.
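
Something like this, purely as an illustration (the languages and time slots here are made up, not anything Othernet has announced):

```python
# Hypothetical hourly schedule: audio only during the first ten minutes,
# rotating the language by hour; the rest of the hour is pure data.
SCHEDULE = {0: "English", 1: "Spanish", 2: "French", 3: "Arabic"}  # repeats every 4 hours

def current_service(hour: int, minute: int) -> str:
    """Return which service the downlink should carry at a given time."""
    if minute < 10:
        return f"audio ({SCHEDULE[hour % len(SCHEDULE)]})"
    return "data"

print(current_service(14, 5))   # audio (French)
print(current_service(14, 30))  # data
```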

The current simulcasting, bouncing between data and audio, ends up with very poor audio quality… very difficult to listen to.

It depends on what they are using to wrap the data into the LoRa transmission. If the LoRa framing can encapsulate anything, it would be plausible to send AAC+ in an Ogg/MPEG container with the data files carried as extra metadata inside that same container (think Icecast/Shoutcast pushing metadata interleaved with the audio).
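
As a rough sketch of that interleaving idea (this is not how Othernet actually frames its carousel; the frame layout below is entirely made up):

```python
import struct

# Toy multiplexer: alternate audio frames and file-data chunks on one link,
# each prefixed with a 1-byte type tag and a 2-byte length (hypothetical layout).
AUDIO, FILEDATA = 0x01, 0x02

def mux(audio_frames, file_chunks):
    """Yield tagged frames, slipping in a file chunk after every few audio frames."""
    files = iter(file_chunks)
    for i, frame in enumerate(audio_frames):
        yield struct.pack("!BH", AUDIO, len(frame)) + frame
        if i % 4 == 3:                      # sprinkle in data, Icecast-metadata style
            chunk = next(files, None)
            if chunk is not None:
                yield struct.pack("!BH", FILEDATA, len(chunk)) + chunk

# Example: 8 dummy audio frames of 400 bytes, 2 dummy file chunks of 200 bytes.
stream = list(mux([b"\x00" * 400] * 8, [b"\xff" * 200] * 2))
print(len(stream), "frames on the wire")   # 10 frames
```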

I believe the poor audio quality comes from the fact that a GSM codec is used on the satellite link, and the audio is then possibly re-encoded for playback in the web browser.

Thought: a 20 kbps “audio” stream where the actual audio is 16 kbps. Send 4 kbps of data every so often, then switch back to audio so the system can “prebuffer” enough to cover the moments when audio is not being sent. This could be used in conjunction with your idea of having some downtime on the audio stream, during which the entire 20 kbps goes to data to catch up on what couldn’t be sent during the day, because 4 kbps is painfully slow.
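
To sanity-check whether that works, here is a toy buffer model (the 20/16 kbps figures are from the idea above; the burst lengths are my own assumptions):

```python
# Toy model: the link alternates between "audio bursts" at the full 20 kbps and
# "data bursts" where no audio arrives; playback drains the buffer at 16 kbps.
LINK_BPS, AUDIO_BPS = 20_000, 16_000
AUDIO_BURST_S, DATA_BURST_S = 60, 10     # assumed duty cycle, not an Othernet spec

def buffer_after_cycle(start_bits: float) -> float:
    """Buffer level (bits) after one audio burst plus one data burst."""
    filled = start_bits + (LINK_BPS - AUDIO_BPS) * AUDIO_BURST_S  # +4 kbps while audio is on
    drained = filled - AUDIO_BPS * DATA_BURST_S                   # -16 kbps while only data is sent
    return drained

level = 0.0
for cycle in range(3):
    level = buffer_after_cycle(level)
    print(f"after cycle {cycle + 1}: {level / 1000:.0f} kbit buffered")

# 60 s at +4 kbps builds 240 kbit, which covers a 10 s gap (160 kbit) with margin,
# so the audio keeps playing while 200 kbit (~25 kB) of file data gets through each cycle.
```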