The design of deep neural network (DNN) accelerators has gradually gained attention due to the increasing demand for real-time AI applications. However, because the target applications are diverse, the kernel sizes and shapes of the convolutional operations in the target DNN model are not fixed. Therefore, it is necessary to design a reconfigurable DNN accelerator that covers different kernel sizes for the convolutional operations in DNNs. Under the worst-case design policy, however, designers usually select the largest kernel size as the design parameter when implementing the DNN accelerator, which leads to low hardware utilization. The reason is that the conventional array-based DNN design method restricts the efficiency of data delivery. Besides, the complicated data flow between the neuron layers of DNN models counteracts the benefit of the adopted data reuse method. To mitigate the problem of complicated data flow in DNN accelerators, Network-on-Chip (NoC) interconnection has become an emerging technology for realizing the Deep Neural Network on Chip (DNNoC). Compared with the conventional array-based DNN accelerator design, the DNNoC design supports flexible data flow, which facilitates reconfigurable DNN accelerator implementations. In this work, we leverage the flexible NoC interconnection and propose a hybrid input/weight reuse method to reduce memory access. In addition, the proposed hybrid input/weight reuse method supports arbitrary kernel sizes for flexible convolutional operations. Compared with the related works, the proposed reconfigurable DNNoC with flexible convolutional operations improves the utilization of the computational capability in a PE by 1% to 34% and reduces memory access by 66% to 85%, which in turn improves throughput by 40% to 117%.
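To give intuition for why operand reuse reduces memory access, the sketch below counts off-chip fetches for a single-channel 2-D convolution under two idealized policies: a naive dataflow that fetches one input and one weight per multiply-accumulate, and a full-reuse dataflow in which every weight and every input element is fetched once and then reused on-chip across all overlapping windows. The feature-map sizes, kernel sizes, and the access-counting model are illustrative assumptions, not the paper's actual hybrid input/weight reuse scheme or its evaluation setup.

```python
# Illustrative off-chip memory-access counting for a single-channel
# 2-D convolution (stride 1, no padding). All sizes and the counting
# model are assumptions for illustration; they do not reproduce the
# paper's hybrid input/weight reuse method.

def naive_accesses(h, w, k):
    """Every MAC fetches one input and one weight from off-chip memory."""
    h_out, w_out = h - k + 1, w - k + 1
    macs = h_out * w_out * k * k
    return 2 * macs  # one input read + one weight read per MAC

def full_reuse_accesses(h, w, k):
    """Each weight and each input element is fetched from off-chip
    memory exactly once and reused on-chip for every window needing it."""
    return k * k + h * w

if __name__ == "__main__":
    h = w = 32           # hypothetical feature-map size
    for k in (3, 5, 7):  # different kernel sizes the accelerator must cover
        naive = naive_accesses(h, w, k)
        reuse = full_reuse_accesses(h, w, k)
        print(f"k={k}: naive={naive}, reuse={reuse}, "
              f"reduction={1 - reuse / naive:.1%}")
```

The gap between the two counts grows with the kernel size, since each input element participates in up to k*k windows; this is the kind of redundancy that a reuse-oriented NoC dataflow is intended to eliminate.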