@Abhishek My audio frame service seems to stop after a couple of hours of operation (or non-operation). I can restart it by rebooting.
Two questions: Is this expected? And can I get (if needed) a menu item on Skylark to reboot via the Wi-Fi GUI?
there does seem to be a bug which causes the audio service to crash. am looking into it. so in that sense, it's "expected".
reboot: just use the power button. short press on it powers down. 2-second press on it will then power it back up again.
should kick in in about 60-90 minutes.
i can agree the audio service likes to crash at random intervals
I do hear the classical music, but now I know the audio I heard from the other source was distorted as well. It is very easily noticed. It sounds like the station is off frequency, but of course it is not. It is looping and stuttering more often. This is a better test than speech. If you think this impairment is not a high priority right now, we can understand that.
I think I mentioned before: if I listen to music from a Wikipedia or news entry, it is absolutely clean and clear with no looping, no stuttering, no impairments. I realize it is not live, but a file.
@donde Just curious. I know the images I get in wiki/news articles are not part of what is downloaded in the Outernet transmission; they are recreated on my local computer using a link to my internet connection. So I don't know whether the wiki audio files you are hearing are really part of the Outernet file or an internet link.
Some experimenting is needed.
there is some timing issue with the streaming right now - am looking into it. The looping is a known bug with the underlying format in the presence of lost or delayed packets. Stuttering is a direct result of a delayed packet.
Classical music (or any music really, except maybe pop) will always sound bad on this link - the audio is bandlimited to frequencies below 4 kHz, about the same quality as a landline (except landlines don't get packet loss or jitter). This was necessary due to the very low-rate nature of the audio stream. There's simply no space in the packets for the higher frequency components. I designed it mainly for human speech - lectures, podcasts, newscasts, etc. - but sources are hard to come by.
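To see why a 4 kHz bandlimit forces "landline quality", the arithmetic can be sketched out. The 4 kHz figure is from the post above; the codec bitrate used below is an illustrative assumption (a Codec2-class speech codec), not Outernet's actual configuration.

```python
# Back-of-the-envelope arithmetic for a bandlimited voice stream.
# The 4 kHz bandwidth is from the post; the 3.2 kbit/s codec rate
# is a hypothetical, Codec2-class assumption for illustration.

def min_sample_rate_hz(audio_bandwidth_hz):
    """Nyquist: sample at least twice the highest frequency component."""
    return 2 * audio_bandwidth_hz

def raw_pcm_bitrate_bps(sample_rate_hz, bits_per_sample):
    """Uncompressed PCM bitrate for a mono stream."""
    return sample_rate_hz * bits_per_sample

bandwidth = 4_000                            # Hz, the stated cutoff
fs = min_sample_rate_hz(bandwidth)           # 8000 samples/s
raw = raw_pcm_bitrate_bps(fs, 16)            # 128000 bit/s uncompressed

codec_bps = 3_200                            # hypothetical speech-codec rate
compression_ratio = raw / codec_bps          # ~40x squeeze

print(fs, raw, round(compression_ratio))     # 8000 128000 40
```

A vocoder achieving that kind of compression models the human vocal tract, which is exactly why speech survives and music does not.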
I now understand the issue much better for the reasons you gave. Maybe this subject and others could be included in a new FAQ folder on the Outernet main page.
I believe all content that is sent to us is from the normal stream of downloads we see from the carousel. Like a new Wikipedia article as an example. One day I looked at the Wikipedia folder and saw an embedded music track in an article. I clicked and it sounded perfect. Not just music but speech too. This experience is very different from the live background music stream being sent now with all the impairments. The Team knows.
I would think that VOA would jump on providing an audio source.
I see 5 instances of VOA under Files, Channels. It may very well be true that audio clips are there. I am mainly talking about live audio and its problems.
If this live audio channel could be a source of emergency information in the future, it should be fixed so people would not be confused as to what is happening in a disaster or whatever. Looping and stuttering should not be present.
Some of the flaws, like the stuttering, come from lost audio frames, and there are a few options for what to do when you lose a frame, since playback cannot continue until the next audio frame arrives. One option is to repeat the last frame until the next one arrives. Another option I was thinking about is to pause playback until the next frame arrives, but that could confuse people into thinking it stopped working - though the same is true of the stuttering.
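The two loss-concealment options being weighed here (repeat the last frame vs. pause/insert silence) can be sketched like this. The frame contents and sizes are made up for illustration; a real player would operate on codec frames, not raw byte strings.

```python
# Sketch of two loss-concealment strategies for a frame stream where
# a missing frame is represented by None. Frame bytes are illustrative.

SILENCE = b"\x00" * 4   # stand-in for one frame of silence

def conceal(frames, strategy="repeat"):
    """Replace missing frames (None) according to the chosen strategy."""
    out = []
    last_good = SILENCE
    for f in frames:
        if f is not None:
            last_good = f
            out.append(f)
        elif strategy == "repeat":
            out.append(last_good)   # replay last frame -> "looping" sound
        else:
            out.append(SILENCE)     # pause -> audible gap in playback
    return out

stream = [b"AAAA", None, b"BBBB", None, None]
assert conceal(stream, "repeat") == [b"AAAA", b"AAAA", b"BBBB", b"BBBB", b"BBBB"]
assert conceal(stream, "pause") == [b"AAAA", SILENCE, b"BBBB", SILENCE, SILENCE]
```

Either way the listener hears an artifact; the trade-off is between a repeated "looping" sound and dead air that can be mistaken for a dead receiver.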
I thought FEC was part of the protocol, used to eliminate lost frames (or maybe packets)? What if the live audio - speech or music - were sent to a newly opened file and continuously appended until all the frames ended, then the file closed and sent? That is what we get now from a BBC news story: a file. It winds up in the News folder. We open it, read a perfect copy, and/or hear perfect music. It is not live, however. I'm just thinking; I don't know how to accomplish the task.
This all happens in layers. FEC happens in the PHY layer. A missing audio data segment happens in the Transport layer. All of this is like taping together pieces of paper to make a page, stuffing those pages into envelopes, throwing those envelopes into a box… One deals with a missing piece of a page, the other deals with a missing box.
Here are the details:
In the PHY layer, many chips are assembled to make bits (in LoRa, each chirp symbol encodes several bits - one per unit of spreading factor). All of the bits are gathered together to make a frame. Before the frame is passed up to the MAC layer, the FEC uses all of the bits in the frame to fix the broken ones.
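LoRa's real FEC is a Hamming-style code (coding rates 4/5 through 4/8), but the idea of "using all of the bits to fix the broken ones" can be shown with a toy triple-repetition code, which repairs any single flipped bit by majority vote:

```python
# Toy FEC: triple-repetition code with majority-vote decoding.
# LoRa's actual FEC is Hamming-based; this just illustrates how
# redundancy inside a frame repairs bit errors.

def fec_encode(bits):
    """Transmit every data bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def fec_decode(coded):
    """Majority vote over each group of three fixes any single flip."""
    out = []
    for i in range(0, len(coded), 3):
        group = coded[i:i + 3]
        out.append(1 if sum(group) >= 2 else 0)
    return out

frame = [1, 0, 1, 1]
coded = fec_encode(frame)
coded[1] ^= 1                        # corrupt one bit in transit
assert fec_decode(coded) == frame    # the broken bit is repaired
```

Note the cost: three channel bits per data bit. Real codes like Hamming get similar protection with far less overhead, which is why they are used instead.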
Switching occurs in the MAC layer. In the MAC layer, the frames are concatenated to form packets. The packets are handed to the IP layer.
Routing occurs in the IP layer; routing figures out what to do with packets. In Outernet there may be no need for a separate transport layer (this is all my guess, since Outernet's upper layers are all proprietary), but essentially all of the packets have to be UDP data segments.
To stream audio, the data segments are buffered and then dropped into a D-to-A converter on a precise time cue. If a data segment does not make it in time, the previous section is replayed so that there's no interruption of the sound. It's just one sample, and 48,000 of them make up one second. No one is going to be confused by a very minor interpolation.
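That playout scheme can be sketched as a buffer indexed by sequence number: at each timing cue the player takes the expected segment, or re-plays the previous one if it has not arrived yet. The segment contents and sequence numbering below are invented for illustration.

```python
# Minimal playout-buffer sketch: segments arrive tagged with a sequence
# number; a late or lost segment is concealed by replaying the previous
# one, so the timing cues are never interrupted.

def playout(received, n_cues):
    """received: {seq_number: segment_bytes}; returns what gets played."""
    played = []
    last = b""                    # what to interpolate with on a miss
    for seq in range(n_cues):     # one iteration per precise time cue
        segment = received.get(seq)
        if segment is None:
            segment = last        # late/lost: replay previous segment
        played.append(segment)
        last = segment
    return played

# segment 2 was lost or delayed past its cue
arrived = {0: b"s0", 1: b"s1", 3: b"s3"}
print(playout(arrived, 4))        # [b's0', b's1', b's1', b's3']
```

The key design point is that the clock never waits for the network: the cue fires on time no matter what, and the buffer decides what to feed it.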
To make a data file, the data segments are assembled together in memory until enough can be moved off to flash, etc. There's no reason to overwrite a completed data file.
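File assembly is the opposite discipline from streaming: nothing is produced until every piece is present, and arrival order does not matter. A minimal sketch, with hypothetical segment numbering:

```python
# Sketch of file assembly from numbered data segments. Unlike the
# streaming case, the output waits until the file is complete.

def assemble_file(segments, total):
    """segments: {index: bytes}. Return the whole file, or None if
    any of the `total` segments is still missing."""
    if any(i not in segments for i in range(total)):
        return None                  # incomplete: keep waiting
    return b"".join(segments[i] for i in range(total))

parts = {}
parts[1] = b"world"                  # segments may arrive out of order
assert assemble_file(parts, 2) is None   # segment 0 still missing
parts[0] = b"hello "
assert assemble_file(parts, 2) == b"hello world"
```

This is why carousel files come out perfect while the live stream stutters: the file builder can wait indefinitely (or catch a retransmission on the next carousel pass), while the stream's deadline has already gone by.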
Audio files and streams really will never mix. They’re not the same.
Thanks for taking the time to explain Layers and then Segments. I’m sure the info helped lots of us, especially me. I do remember reading about layers involved in the TCP/IP protocol years ago. Do you think the live audio problems can be fixed? It doesn’t seem to be a priority at this time. Maybe, not even necessary?
Discovered Adafruit has projects about LoRa. Probably been there awhile, never noticed. Something to do while the Team re-edits the screenplay.
I think live audio can be fixed. I believe there is a solution. But it’s up to the Outernet engineers to put together the requirements and implement. Then we can crowd-test the solution.
Now, I do have an opinion as to how the audio service could work. As you may know, Outernet's implementation is based on a LoRa PHY layer and proprietary upper layers. LoRa can send multiple nearly orthogonal streams at the same time by using two (or more) spreading factors simultaneously. The exception happens when the two spreading-factor streams cross. Research papers show there are only minimal problems in doing this.
So the idea is that Outernet could interleave high-speed data with a slower service (two different spreading factors). The slower service would have more redundancy packed into it than the faster one. If the packets are marked correctly, they could "route" or "switch" into data-file applications or streaming applications - one service builds files, the other streams audio, for example.
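The "route by marking" part of this proposal amounts to a demultiplexer: each packet carries a service tag, and the receiver hands it to the file builder or the audio streamer accordingly. The tag values and packet layout below are hypothetical, just to show the split:

```python
# Sketch of marker-based demultiplexing: packets tagged with a
# hypothetical service ID are routed to the file path or the stream
# path. Tag values and payloads are invented for illustration.

FILE_SVC, STREAM_SVC = 0, 1

def demux(packets):
    """packets: list of (service_tag, payload). Split into two paths."""
    file_data, stream_data = [], []
    for service, payload in packets:
        if service == FILE_SVC:
            file_data.append(payload)    # slow, redundant carousel service
        else:
            stream_data.append(payload)  # fast, real-time audio service
    return file_data, stream_data

mixed = [(STREAM_SVC, b"a0"), (FILE_SVC, b"f0"), (STREAM_SVC, b"a1")]
files, audio = demux(mixed)
print(files, audio)                      # [b'f0'] [b'a0', b'a1']
```

In the actual proposal the two services would already be separated at the PHY by their spreading factors; the tag-based demux shown here is just the simplest way to express the same routing decision in software.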
I’m sure that Outernet is in the process of the rigors of inventing this (or something like it).
We’ll see what they do.
Thanks, Konrad, for more insight on LoRa. Think I'll order a few boards from SparkFun or Adafruit, just for fun: 2 Arduino Pro Mini 328s and 2 RFM69 breakouts (915 MHz). It's not LoRa, but close. All together about $40.
Adafruit and SparkFun have LoRaWAN boards. They work on pre-assigned frequencies shared with garage door openers, key fobs, medical alert buttons, toll-tag readers, etc. SMH.
I'm going to take some time to look at how chat works between two DreamCatchers on a ham band … with me being the only one on this side of the planet. It's all channel Z on 1240-1300 MHz.
How would the audio be transmitted through the outernet?