Duty Cycle in DownLink on RAK7258

Issue: Gateway seems to ignore any duty cycle for downlink packets

Setup: EU868 band, RAK7258, Chirpstack Stack, Nucleo L476RG board

LoRa® Server: Chirpstack

Details: The RAK7258 forwards packets as it receives them. The network server sent a sequence of 50 B packets (Class C multicast session for FUOTA) on the P-band (869.525 MHz) with SF12/125 kHz. The gateway shows all packets in a 2 s cadence, each with an airtime of 2793 ms. How is that even possible? Wouldn't they overlap? And: does the gateway enforce a duty cycle, or does it just forward every packet received from the network server to the air?
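The reported numbers can be checked against the standard LoRa time-on-air formula from the Semtech SX127x datasheets. Assuming the 50 B is application payload, the PHY payload on air is about 63 B once typical LoRaWAN framing (MHDR, DevAddr, FCtrl, FCnt, FPort, MIC) is added; that assumption reproduces the 2793 ms figure. A sketch:

```python
import math

def lora_toa(pl_bytes, sf, bw_hz=125_000, cr=1, n_preamble=8,
             explicit_header=True, crc=True, low_dr_optimize=None):
    """Time-on-air in seconds, per the Semtech SX127x datasheet formula."""
    if low_dr_optimize is None:
        # Low data rate optimization is mandated for SF11/SF12 at 125 kHz.
        low_dr_optimize = bw_hz == 125_000 and sf >= 11
    t_sym = (2 ** sf) / bw_hz
    de = 1 if low_dr_optimize else 0
    ih = 0 if explicit_header else 1
    payload_symb = 8 + max(
        math.ceil((8 * pl_bytes - 4 * sf + 28 + (16 if crc else 0) - 20 * ih)
                  / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (n_preamble + 4.25 + payload_symb) * t_sym

# 50 B FUOTA payload + ~13 B LoRaWAN overhead = 63 B PHY payload:
toa = lora_toa(63, 12)   # ~2.793 s at SF12/125 kHz, matching the gateway log

# 869.525 MHz sits in a 10% duty-cycle sub-band, so back-to-back frames
# would need to be spaced roughly toa / 0.10 apart, start to start:
min_period = toa / 0.10  # ~27.9 s, far more than the observed 2 s cadence
```

So a 2 s cadence is physically impossible for 2.79 s frames, quite apart from duty cycle, which would stretch the legal spacing to nearly half a minute.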

Enforcement is typically left to the network server, the gateway just does what it’s told to.

Your numbers do indeed suggest that something isn't right; if you look at the packet forwarder log, you will likely see errors.

Really this is a Chirpstack question; chances are that if you want to work with FUOTA, you want an up-to-date version hosted in the cloud, not running on the gateway itself.

Well, my Chirpstack stack is effectively a cloud solution, just running under laboratory conditions on a local RasPi. Chirpstack themselves stated that (in their view) it is the gateway's responsibility to enforce the DC, and thus they're just pushing the packets sequentially to the gateway, expecting the gateway to take care of delaying the packets to conform to the DC. So, in my opinion, this is not purely a Chirpstack issue; I want to find out what the exact and expected behavior of each component is.
So maybe there's currently no mutual agreement on where the DC is to be enforced?

AND: did I get you right that the packet forwarder in the RAK7258 is not delaying/buffering any packets, but just airing them as soon as they arrive at the gateway from the network server?

It’s necessary to be careful when paraphrasing what someone else has said.

It does appear that the Chirpstack developer is on record as stating the (quite unusual) viewpoint that in his belief it would be architecturally preferable to enforce duty cycle at the gateway level, and not at the network server level. Mostly he says this when people file bug reports about the fact that his network server (where the standard architecture would place that function) doesn’t do so. It’s an unusual viewpoint though, as the whole architecture of LoRaWAN tries to keep the gateway functionality to very minimal translation, with all of the real protocol as a conversation between the nodes and servers - in such an architecture the gateway is only asked to implement functionality that could not be viably implemented anywhere else.

But even if for the sake of argument one shares his belief as to what would be preferable, he is no doubt aware that this is not standard behavior and not offered by current gateways. If you look at the reference implementations for packet forwarders from Semtech, you won't find duty cycle limiting there. In fact, even if you look at Chirpstack's own "Concentratord" project, you will see that such a thing exists only as a future planned feature - not a current one!

thus they're just pushing the packets sequentially to the gateway, expecting the gateway to take care of delaying the packets to conform to the DC.

Given that they well know that gateways don't behave that way, this seems to be a mis-paraphrasing of whatever was actually said. But to be clear: such a strategy would be unworkable with standard gateways running standard software.

did I get you right that the packet forwarder in the RAK7258 is not delaying/buffering any packets, but just airing them as soon as they arrive at the gateway from the network server?

To figure out what is actually going on, you'd have to look at the content of the actual downlink requests being pushed towards the gateways. If that's in immediate mode, each would be sent as soon as the gateway is done with any current transmission; if it's in the more typical timestamp mode, it would be sent at the programmed time, or dropped if the gateway is still busy with a previous packet at that time.
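The mode is visible in the `txpk` JSON of a PULL_RESP datagram, as defined in Semtech's packet-forwarder PROTOCOL.TXT. An illustrative immediate-mode downlink (field values here are made up for the example) looks like:

```json
{"txpk": {"imme": true,
          "freq": 869.525, "rfch": 0, "powe": 27,
          "modu": "LORA", "datr": "SF12BW125", "codr": "4/5",
          "ipol": true, "size": 63, "data": "<base64 PHY payload>"}}
```

In timestamp mode, `"imme": true` is replaced by a `"tmst"` field holding the target value of the gateway's internal microsecond counter.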

It's also important to keep in mind that traditional gateways can't queue packets at all: if they get a second future downlink request while one is already pending, they simply drop it. More recent versions of the Semtech reference implementation (on which I believe the RAK builds are closely based) do implement a queue of limited size, but still no duty cycle limiting. And pushing too many packets too far ahead of real time would overflow the queue.

So, in my opinion, this is not purely a Chirpstack issue; I want to find out what the exact and expected behavior of each component is.

If the things you have paraphrased are actually the case, then this would be purely a Chirpstack issue, as it comes down to expecting non-standard behavior from standard components.

Dear Chris,

many thanks for your extensive and detailed answer.

This part

thus they're just pushing the packets sequentially to the gateway, expecting the gateway to take care of delaying the packets to conform to the DC.

was not a direct quote of anyone, but rather my own observation of what the code of Chirpstack's FUOTA server is doing.

May I ask where I can find referenceable sources stating that the standard architecture of LoRaWAN places DC enforcement inside the network server, so I can use this when arguing with people about it?
Many thanks and have a nice weekend!

But what exactly is it doing?

What are the JSON contents of the UDP messages actually being pushed to the packet forwarder? Or alternatively, the MQTT messages being pushed to the gateway bridge? An actual technical understanding has to start with documenting exactly what is being commanded.
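The UDP side is easy to inspect, since the Semtech protocol is just a small binary header followed by JSON. A minimal sketch of extracting the `txpk` object from a PULL_RESP datagram (header layout per PROTOCOL.TXT: byte 0 = protocol version, bytes 1-2 = token, byte 3 = message identifier):

```python
import json

PULL_RESP = 0x03  # identifier of a server-to-gateway downlink datagram

def parse_pull_resp(datagram: bytes):
    """Return the txpk dict from a PULL_RESP datagram, or None otherwise."""
    if len(datagram) < 5 or datagram[3] != PULL_RESP:
        return None
    return json.loads(datagram[4:].decode("utf-8")).get("txpk")
```

Datagrams can be captured with tcpdump on the packet-forwarder port (commonly 1700) or with a plain UDP socket, then fed through a parser like this to see whether `imme` or `tmst` is being used and at what cadence.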

May I ask where I can find referenceable sources stating that the standard architecture of LoRaWAN places DC enforcement inside the network server, so I can use this when arguing with people about it?

For a quick example see “The network server is responsible for”:

“Alternately, this decision can be made with respect to the availability of the gateway (is it free to transmit, or has it exhausted its Tx duty cycle?).”

Allocating downlinks to a gateway based on its available duty cycle is something that can only be done if this is tracked at the server level - there isn’t time to “propose” a downlink to one gateway, and re-assign it if a negative ack is received from the originally chosen gateway.
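The kind of server-side accounting implied here is not complicated. A minimal sketch (my own illustration, not Chirpstack code) of a per-sub-band duty-cycle budget over a sliding window:

```python
import collections
import time

class DutyCycleTracker:
    """Sliding-window duty-cycle budget for one gateway sub-band (sketch)."""

    def __init__(self, duty_cycle=0.10, window=3600.0):
        self.duty_cycle = duty_cycle      # e.g. 10% for the 869.4-869.65 MHz band
        self.window = window              # accounting window in seconds
        self.log = collections.deque()    # (timestamp, airtime) pairs

    def _used(self, now):
        # Drop transmissions that have aged out of the window.
        while self.log and self.log[0][0] < now - self.window:
            self.log.popleft()
        return sum(airtime for _, airtime in self.log)

    def try_transmit(self, airtime, now=None):
        """Record the transmission if the budget allows it; else refuse."""
        now = time.monotonic() if now is None else now
        if self._used(now) + airtime > self.duty_cycle * self.window:
            return False
        self.log.append((now, airtime))
        return True
```

With 2.793 s frames and a 10% budget over an hour, such a tracker would admit roughly 128 frames per hour per sub-band, which is exactly the sort of decision that has to live where gateway selection happens.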

Please however try to avoid making this an “appeal to authority” argument, and focus on the basic technical realities: if Chirpstack is indeed doing what you believe it to be doing, then it’s not going to work except in conjunction with a new form of gateway that does not yet seem to exist even in the Chirpstack project, and it would only be workable with standard gateways by implementing this functionality somewhere in between such as in the gateway bridge - something they seem to have thought of, but rejected. If in fact they want to go down the path you believe they are, then either they’ll drastically limit which gateways can be used, or they’re simply going to have to put an alternative implementation in the server or gateway bridge.


Just to address this one briefly:
Eventually it creates packets for immediate sending, but with an internally timed scheduler that transmits the packets to the gateway(s) in a 2-second cadence (a configurable value). However, this is actually done by the network server. The FUOTA server itself solely creates the application frame payloads and queues them, which I consider reasonable, as I don't like the idea of dealing with MAC details at the application layer.

The bottom line for me, since I’m locked into the CS stack, is that I have to modify parts of the network server’s code to make the packet scheduler configurable.
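Before patching code, it may be worth checking whether the existing configuration already exposes the needed knobs. In ChirpStack Network Server v3, chirpstack-network-server.toml has a scheduler section that looks roughly like the fragment below; the key names and defaults should be verified against the installed version:

```toml
[network_server.scheduler]
# How often the Class-B/C downlink scheduler loop runs.
scheduler_interval="1s"

  [network_server.scheduler.class_c]
  # Lock duration after a Class-C downlink; the default would match
  # the 2 s cadence observed between the multicast frames.
  downlink_lock_duration="2s"
```

Raising `downlink_lock_duration` towards the duty-cycle-compliant spacing may be a smaller change than modifying the scheduler code itself, though it paces all Class-C traffic, not just the FUOTA session.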

I’ve learned a lot from you in this thread. Thanks again for that.
