Remote photoplethysmography (rPPG) has been used to measure vital signs such as heart rate, heart rate variability, blood pressure (BP), and blood oxygen. Recent studies adapt features developed for photoplethysmography (PPG) to achieve contactless BP measurement via rPPG. These features fall into two groups: time or phase differences between multiple signals, and waveform features extracted from a single signal. Here we devise a method to extract time-difference information from an rPPG signal captured at 30 FPS, and we propose a deep learning architecture that estimates BP from the extracted features. To prevent overfitting and compensate for the lack of data, we adopt a multi-model design and generate synthetic data. We also use subject information related to BP to assist model learning. For real-world usage, the subject information is replaced with values estimated from face images, and the resulting performance still exceeds the state of the art. We attribute the improvements to: (1) model selection with estimated subject information, (2) replacing the estimated subject information with the real one, (3) InfoGAN-assisted training (synthetic data generation), and (4) the time-difference features used as model input. To evaluate the proposed method, we conduct a series of experiments, including dynamic BP measurement for individual subjects and nighttime BP measurement under infrared lighting. Our approach reduces the MAE from 15.49 to 8.78 mmHg for systolic blood pressure (SBP) and from 10.56 to 6.16 mmHg for diastolic blood pressure (DBP) on a self-constructed rPPG dataset. On the Taipei Veterans General Hospital (TVGH) dataset for nighttime applications, the MAE is reduced from 21.58 to 11.12 mmHg for SBP and from 9.74 to 7.59 mmHg for DBP, improvement ratios of 48.47% and 22.07%, respectively.
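The improvement ratios reported above follow directly from the MAE values; a minimal check (the helper function name is our own, not from the paper):

```python
def improvement_ratio(baseline_mae: float, new_mae: float) -> float:
    """Percent reduction in mean absolute error relative to the baseline."""
    return (baseline_mae - new_mae) / baseline_mae * 100

# TVGH nighttime dataset, SBP: MAE 21.58 -> 11.12 mmHg
print(round(improvement_ratio(21.58, 11.12), 2))  # 48.47
# TVGH nighttime dataset, DBP: MAE 9.74 -> 7.59 mmHg
print(round(improvement_ratio(9.74, 7.59), 2))    # 22.07
```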