That’s better. But there’s overhead in the LoRaWAN protocol itself (about 13 bytes). And then the gateway-to-server protocol adds more on top. Last I looked, a brief node uplink turned into something like 300 bytes of binary header + JSON in the UDP packet-forwarder protocol. Call it 512 for budgeting. If you use the ChirpStack MQTT protocol, that adds its own overhead and housekeeping too, but 512 is probably still safely an overestimate.
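To make that layering concrete, here’s a rough back-of-the-envelope sketch in Python. The 13-byte MAC overhead and ~300-byte UDP/JSON figure are the estimates above; the base64 expansion reflects how the Semtech UDP forwarder carries the PHY payload inside the JSON, and the function name and numbers are illustrative guesses, not measurements:

```python
import math

def backhaul_bytes_estimate(app_payload_len: int,
                            json_wrapper_bytes: int = 300) -> int:
    """Very rough per-uplink backhaul size estimate.

    Assumptions (illustrative, not measured):
      - ~13 bytes of LoRaWAN MAC overhead (MHDR, DevAddr, FCtrl,
        FCnt, FPort, MIC) wrapped around the application payload
      - the packet forwarder base64-encodes the PHY payload inside
        roughly 300 bytes of binary header + JSON metadata
    """
    phy_payload = app_payload_len + 13
    b64_len = math.ceil(phy_payload / 3) * 4  # base64 expands ~4/3
    return json_wrapper_bytes + b64_len

# A 10-byte sensor reading stays well under the 512-byte budget:
print(backhaul_bytes_estimate(10))  # ~332 bytes
```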
Actually, it depends on the spreading factor and the applicable regulations in your area.
That simply won’t happen in LoRaWAN.
Nodes are supposed to select uplink frequencies more or less at random, but that inevitably means collisions. The busier your network, the less likely packets are to get through. It’s also not just a matter of frequency: in theory a gateway can demodulate packets on the same frequency at different spreading factors, but the chip only has so many demodulators available to assign. Additionally, a node close to the gateway may manage to drown out distant nodes on all frequencies.
People have written papers with models of capacity; in reality you probably have to try it in a realistic deployment and see how it works and scales.
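If you want a first-order feel before deploying, the usual starting point in those papers is a pure-ALOHA model. Here’s a toy sketch; the airtime figures, channel count, and interval are hypothetical inputs you’d replace with your own, and real LoRaWAN behaves better than this in some ways (quasi-orthogonal spreading factors) and worse in others (the near-far blocking mentioned above):

```python
import math

def aloha_success_prob(num_nodes: int,
                       interval_s: float,
                       airtime_s: float,
                       num_channels: int = 8) -> float:
    """Pure-ALOHA success probability for one uplink.

    Assumes every node transmits independently every `interval_s`
    seconds, picks one of `num_channels` channels at random, and a
    packet survives only if no other packet overlaps its vulnerable
    window (2x airtime). All simplifications, hence a toy model.
    """
    per_channel_rate = num_nodes / (interval_s * num_channels)
    offered_load = per_channel_rate * airtime_s
    return math.exp(-2 * offered_load)

# 1000 nodes, 10-minute interval, ~60 ms airtime (roughly SF7 with
# a small payload), 8 channels:
print(aloha_success_prob(1000, 600, 0.06))  # ~0.98
# The same fleet at ~1.5 s airtime (roughly SF12):
print(aloha_success_prob(1000, 600, 1.5))   # ~0.53
```

The steep drop between those two cases is why spreading factor matters so much for capacity, not just for range.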
But in terms of figuring out what Internet plan you need, I’d probably take the number of nodes and how often each one sends, multiply the resulting uplink count by 512 bytes (or simply divide it by 2 to get kilobytes), and then compare against cost tiers.
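As a sketch of that arithmetic (the 512-byte figure is the budgeting guess from above; the node count and interval are made-up examples):

```python
def monthly_backhaul_mb(num_nodes: int,
                        uplink_interval_s: float,
                        bytes_per_uplink: int = 512,
                        days: int = 30) -> float:
    """Backhaul budget: uplinks per month x 512 bytes, in megabytes."""
    uplinks = num_nodes * (days * 86400 / uplink_interval_s)
    return uplinks * bytes_per_uplink / 1e6

# 200 nodes reporting every 15 minutes:
print(f"{monthly_backhaul_mb(200, 15 * 60):.0f} MB/month")  # ~295 MB
```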
Remember also that someone else’s node traffic may get reported upstream, too. In theory you could add some filtering in the gateway to drop the obviously irrelevant stuff there. You may even want to build some “alarms” that fire when you start using a lot of backhaul bandwidth, so you can investigate why before you get cut off or hit with overage charges.
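A minimal sketch of such an alarm, assuming a Linux-based gateway where the backhaul leaves through a known interface (the interface name, threshold, and poll interval are all placeholders to adjust):

```python
import time

IFACE = "eth0"             # backhaul interface (placeholder)
DAILY_BUDGET = 50_000_000  # bytes/day before we want to know (placeholder)

def tx_bytes(iface: str) -> int:
    # Linux exposes per-interface byte counters in sysfs.
    with open(f"/sys/class/net/{iface}/statistics/tx_bytes") as f:
        return int(f.read())

start = tx_bytes(IFACE)
day_start = time.time()
while True:
    time.sleep(300)  # poll every 5 minutes
    used = tx_bytes(IFACE) - start
    if used > DAILY_BUDGET:
        # Replace with your real alerting (email, MQTT message, etc.).
        print(f"WARNING: {used / 1e6:.1f} MB sent today on {IFACE}")
    if time.time() - day_start > 86400:  # reset the daily window
        start, day_start = tx_bytes(IFACE), time.time()
```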