Federated learning (FL) enables distributed model training while preserving user privacy and reducing communication overhead. Model pruning further improves learning efficiency by removing weight connections in neural networks, which increases inference speed and reduces model storage size. However, while a larger pruning ratio shortens the latency of each communication round, more communication rounds are needed to reach convergence. In this work, we propose a training-based pruning-ratio decision policy for wireless federated learning. By jointly minimizing the average gradient norm and the training latency under a given time budget, we optimize the pruning ratio for each device and the total number of training rounds. Numerical results demonstrate that the proposed algorithm achieves faster convergence and lower latency than an existing approach.
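
The trade-off described above can be illustrated with a toy model. This is not the paper's algorithm: the latency and convergence functions and all constants below are invented for illustration, and the grid search stands in for the joint optimization over pruning ratio and round count.

```python
import numpy as np

# Hypothetical toy model (not the paper's method) of the pruning trade-off:
# a larger pruning ratio p shortens each communication round, but more
# rounds are then needed to converge. All constants are invented.

def round_latency(p, t_compute=2.0, t_fixed=0.5):
    # Per-round latency shrinks with the kept fraction (1 - p),
    # plus a fixed overhead that pruning cannot remove.
    return t_compute * (1.0 - p) + t_fixed

def rounds_needed(p, r0=100.0, alpha=0.5):
    # Assumed convergence model: required rounds grow without bound
    # as pruning becomes more aggressive (p -> 1).
    return r0 * (1.0 - p) ** (-alpha)

def best_pruning_ratio(grid=np.linspace(0.0, 0.99, 1000)):
    # Total training time = (rounds to converge) x (latency per round);
    # pick the pruning ratio minimizing it over a grid.
    totals = rounds_needed(grid) * round_latency(grid)
    i = int(np.argmin(totals))
    return grid[i], totals[i]

p_star, t_star = best_pruning_ratio()
print(f"optimal pruning ratio ~ {p_star:.2f}, total latency ~ {t_star:.1f}")
```

Under these assumed curves the minimizer sits strictly between no pruning and maximal pruning, which is the qualitative point of the abstract: neither extreme minimizes total training time.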