One of the goals of machine learning research is to improve classification accuracy. Over the past decades, many studies have focused on developing novel algorithms, grounded in problem domains and statistical learning theory, to continuously improve classification performance. Recently, many researchers have found that a performance bottleneck often arises when only a single classification algorithm is used, since each algorithm has its own strengths and weaknesses. Ensemble learning, which combines several classifiers or hypotheses into a stronger classifier or learner, relies on the combination of diverse hypotheses rather than on any single state-of-the-art algorithm. In ensemble learning, hypothesis selection is crucial to performance, and the diversity of the selected hypotheses is an important selection criterion. This work proposes three algorithms for hypothesis selection that generate a hierarchical hypothesis structure, in which two hypotheses are combined according to a particular criterion. We conduct experiments on eight data sets, and the results indicate that the proposed method outperforms random forest, a state-of-the-art method.
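To make the idea of diversity-driven hierarchical combination concrete, here is a minimal illustrative sketch. It is not the paper's actual algorithms: the toy hypotheses (`h1`, `h2`, `h3`), the pairwise disagreement measure, and the score-averaging merge rule are all assumptions chosen for simplicity. The sketch repeatedly merges the most diverse (most disagreeing) pair of hypotheses until a single combined hypothesis remains, which mirrors the notion of building a hierarchical hypothesis structure from pairwise combinations.

```python
from itertools import combinations

# Toy probabilistic hypotheses: each maps a 2-d point x to a score in [0, 1]
# interpreted as P(y = 1). Purely illustrative stand-ins for trained classifiers.
def h1(x): return 1.0 if x[0] > 0.5 else 0.0
def h2(x): return 1.0 if x[1] > 0.5 else 0.0
def h3(x): return (x[0] + x[1]) / 2.0

def disagreement(ha, hb, X, thresh=0.5):
    """Diversity proxy: fraction of points where hard predictions differ."""
    return sum((ha(x) > thresh) != (hb(x) > thresh) for x in X) / len(X)

def merge(ha, hb):
    """Combine two hypotheses into one by averaging their scores."""
    return lambda x: 0.5 * (ha(x) + hb(x))

def hierarchical_ensemble(hyps, X):
    """Repeatedly merge the most diverse pair until one hypothesis remains."""
    hyps = list(hyps)
    while len(hyps) > 1:
        i, j = max(combinations(range(len(hyps)), 2),
                   key=lambda p: disagreement(hyps[p[0]], hyps[p[1]], X))
        merged = merge(hyps[i], hyps[j])
        hyps = [h for k, h in enumerate(hyps) if k not in (i, j)] + [merged]
    return hyps[0]

# Small unlabeled sample used only to estimate pairwise diversity.
X = [(0.2, 0.9), (0.8, 0.1), (0.7, 0.8), (0.1, 0.3)]
final = hierarchical_ensemble([h1, h2, h3], X)
pred = [int(final(x) > 0.5) for x in X]
```

Averaging scores (rather than hard majority voting) keeps the merged hypothesis well-defined for any pair; an actual implementation would instead use whichever combination criterion the proposed algorithms specify.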