Considerable resources have been devoted to developing self-driving systems in industry and academia, for which three-dimensional (3D) object detection is critical. Commonly used LiDAR-based methods, in which point clouds serve as the input representation, suffer from sparsity and inhomogeneity, which make small or distant objects difficult to detect. Accordingly, we propose a LiDAR-based road obstacle detection method assisted by RGB images, which operates as follows. First, a depth completion network transforms RGB images into dense depth maps, from which a pseudo-point cloud is created through matrix operations. Next, both the pseudo-point cloud and the real point cloud are converted into pillar form and passed through a pillar-wise feature encoder to generate a two-dimensional (2D) feature tensor. Finally, a standard 2D convolutional neural network detection architecture learns features from this tensor. By increasing the number of point features, this method remedies the sparsity and inhomogeneity of the original point cloud. In experiments, our method outperformed its LiDAR-only counterpart.
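The depth-map-to-pseudo-point-cloud step can be sketched as follows, assuming a standard pinhole camera model with known intrinsics (fx, fy, cx, cy); the function name and example values are illustrative, not taken from the paper:

```python
import numpy as np

def depth_to_pseudo_point_cloud(depth: np.ndarray, fx: float, fy: float,
                                cx: float, cy: float) -> np.ndarray:
    """Back-project a dense depth map of shape (H, W) into an (N, 3) point
    cloud using the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    This is one common way to realise the 'matrix operations' mentioned above."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # per-pixel image coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # keep only pixels with valid (positive) depth

# Toy example: a 2x2 depth map with unit focal lengths and origin-centred principal point
depth = np.array([[1.0, 2.0],
                  [0.0, 4.0]])
pts = depth_to_pseudo_point_cloud(depth, fx=1.0, fy=1.0, cx=0.0, cy=0.0)
print(pts.shape)  # (3, 3): the zero-depth pixel is dropped
```

In practice the resulting pseudo-points would be transformed from the camera frame into the LiDAR frame (via the camera extrinsics) before being merged with the real point cloud and pillarized.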