GPGPUs have emerged as one of the most widely used throughput processors. Deep multithreading and an on-chip cache hierarchy are two effective designs for achieving high-throughput computing in modern GPGPUs. However, excessive multithreading can aggravate cache contention, while conservative multithreading can leave execution resources under-utilized. Finding a proper design point between the two has become a significant performance factor for GPGPUs. This paper investigates the correlation between caching behavior and the degree of multithreading. After demonstrating the trade-off between multithreading and cache contention, this paper proposes a multithreading decision scheme that dynamically adjusts the multithreading degree to achieve superior performance. With the proposed decision scheme, the performance of memory-intensive workloads improves by 60% on average.