Accurate segmentation of brain tumors from magnetic resonance imaging (MRI) is crucial for clinical treatment decisions and surgical planning. The task is highly challenging due to the large diversity of tumors and the complex boundary interactions between sub-regions. Besides accuracy, computational cost is another important consideration. Recently, impressive progress has been achieved on this task by deep convolutional networks. However, most state-of-the-art models rely on expensive 3D convolutions as well as model cascade/ensemble strategies, which result in high computational overhead and undesired system complexity. For clinical usage, the challenge is to pursue the best accuracy within a very limited computational budget. In this project, we segment 3D volumetric images in one pass with a hierarchical decoupled convolution network (HDC-Net), a lightweight and efficient pseudo-3D model. Specifically, we replace 3D convolutions with a novel hierarchical decoupled convolution (HDC) module, which explores multi-scale, multi-view spatial contexts with high efficiency. Extensive experiments on the BraTS 2018 and 2017 challenge datasets show that our method performs favorably against state-of-the-art approaches in accuracy, yet with greatly reduced computational complexity.
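To make the efficiency argument concrete, the sketch below compares the parameter count of a full 3D convolution with a decoupled pseudo-3D alternative. The function names and the specific decomposition (an in-plane k×k×1 convolution followed by a cross-plane 1×1×k convolution) are illustrative assumptions, not the exact definition of the HDC module; they only show why replacing full 3D kernels with decoupled ones reduces cost.

```python
# Illustrative parameter-count comparison (assumed decomposition, not the
# paper's exact HDC module): full 3D conv vs. a decoupled pseudo-3D pair.

def conv3d_params(c_in, c_out, k=3):
    """Parameters of a full k x k x k 3D convolution (bias omitted)."""
    return c_in * c_out * k ** 3

def pseudo3d_params(c_in, c_out, k=3):
    """Parameters of a decoupled pair: an in-plane k x k x 1 convolution
    followed by a cross-plane 1 x 1 x k convolution (bias omitted)."""
    return c_in * c_out * k * k + c_out * c_out * k

if __name__ == "__main__":
    full = conv3d_params(64, 64)         # 64 * 64 * 27 = 110592
    decoupled = pseudo3d_params(64, 64)  # 64 * 64 * 9 + 64 * 64 * 3 = 49152
    print(full, decoupled, round(full / decoupled, 2))  # 110592 49152 2.25
```

For a 64-channel layer with 3×3×3 kernels, this decoupling cuts parameters by a factor of 2.25, and the saving compounds across every convolutional layer of a volumetric network.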