Turning Latency into Throughput: Speculative Decoding for Decentralized Inference