LiDAR has emerged as an important sensor in autonomous driving systems because it offers more accurate geometric measurements than cameras and radars. LiDAR is therefore commonly combined with cameras or radars to tackle perception problems in autonomous driving, such as object detection, semantic segmentation, and navigation. For semantic segmentation of LiDAR data, the class imbalance inherent in large-scale scenes leads to a performance gap between the majority and minority classes of large-scale datasets. The minority classes often include classes that are crucial to autonomous driving, such as 'person', 'motorcyclist', and 'traffic-sign'. To improve performance on the minority classes, we adopt U-Net++ as the network architecture, KPConv as the convolution operator, and a combination of Dice loss and cross-entropy loss as the training objective. We obtain a 5.1% mIoU improvement over all classes on SemanticKITTI and a 9.5% mIoU improvement on the minority classes. Moreover, because LiDAR sensors differ in resolution, we demonstrate the generalization capability of our model by training it on a 64-beam dataset and testing it on 32-beam and 128-beam datasets, obtaining mIoU improvements of 3.3% on the 128-beam dataset and 1.9% on the 32-beam dataset.
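
As a sketch of the training objective (the relative weighting of the two terms below is our assumption; it is not specified above), the combined loss over per-point class probabilities $p_{i,c}$ and one-hot labels $y_{i,c}$ can be written as

$$\mathcal{L} \;=\; \mathcal{L}_{\mathrm{CE}} + \lambda\,\mathcal{L}_{\mathrm{Dice}}, \qquad \mathcal{L}_{\mathrm{Dice}} \;=\; 1 - \frac{1}{C}\sum_{c=1}^{C} \frac{2\sum_{i} p_{i,c}\,y_{i,c} + \epsilon}{\sum_{i}\bigl(p_{i,c} + y_{i,c}\bigr) + \epsilon},$$

where $C$ is the number of classes, $\epsilon$ is a small smoothing constant, and $\lambda$ is a balancing weight ($\lambda = 1$ in the simplest case). Because the Dice term averages per-class overlap, rare classes contribute as much to it as frequent ones, which counteracts the majority-class dominance of the plain cross-entropy term.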