Background: Rapid on-site cytologic evaluation (ROSE) helps improve diagnostic accuracy in endobronchial ultrasound (EBUS) procedures. In many institutions, however, cytologists are seldom available to perform ROSE. Recent studies have investigated the application of deep learning to cytologic image analysis. Accordingly, the present study analyzed lung cytologic images obtained during EBUS procedures and employed deep-learning methods to distinguish between benign and malignant cells and to semantically segment malignant cells.

Methods: Ninety-seven patients who underwent 104 EBUS procedures were enrolled. Four hundred ninety-nine lung cytologic images (425 malignant, 74 benign) were obtained via ROSE; most of the malignant images were lung adenocarcinoma (64.3%). All images were used to train a 101-layer residual network (ResNet101), with suitable hyperparameters selected, to classify lung cytologic images as benign or malignant. An HRNet model was also employed to mark the areas of malignant cells. Automatic patch cropping was adopted to facilitate dataset preparation.

Results: Malignant cells were successfully classified by ResNet101, with 98.8% accuracy, 98.8% sensitivity, and 98.8% specificity in patch-based classification; 95.5% accuracy in image-based classification; and 92.9% accuracy in patient-based classification. Malignant cell areas were successfully marked by HRNet, with a mean intersection over union of 89.2%. The automatic cropping method enabled the system to complete diagnosis within 1 s.

Conclusions: This is the first study to combine deep-learning classification of lung cytologic images with semantic segmentation. The model was optimized for high accuracy, and the automatic cropping facilitates clinical application. The success of both classification and semantic segmentation of lung cytologic images on our dataset is a promising result for future clinical application.
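For readers unfamiliar with the segmentation metric reported above, the mean intersection over union (mIoU) is the overlap between predicted and ground-truth masks divided by their union, averaged over images. A minimal sketch follows, using tiny hypothetical binary masks rather than study data; the function names and example masks are illustrative assumptions, not part of the study's code.

```python
# Sketch of mean intersection over union (mIoU) for binary segmentation
# masks, the metric by which HRNet's segmentation was evaluated (89.2%
# in the study). Masks are represented as lists of 0/1 rows; the values
# below are made-up toy examples.

def iou(pred, truth):
    """IoU of two same-sized binary masks: |intersection| / |union|."""
    inter = sum(p & t for pr, tr in zip(pred, truth) for p, t in zip(pr, tr))
    union = sum(p | t for pr, tr in zip(pred, truth) for p, t in zip(pr, tr))
    return inter / union if union else 1.0  # empty masks agree perfectly

def mean_iou(pairs):
    """Average IoU over a list of (prediction, ground-truth) mask pairs."""
    return sum(iou(p, t) for p, t in pairs) / len(pairs)

pred  = [[1, 1, 0],
         [0, 1, 0]]
truth = [[1, 1, 0],
         [1, 1, 0]]
print(iou(pred, truth))  # 3 overlapping pixels / 4 union pixels = 0.75
```

In practice this is computed per class over full-resolution masks, but the definition is the same.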