Remote-sensing images (RSIs) generally exhibit strong spatial characteristics of surface features: ground objects such as residential areas, roads, forests, and rivers differ substantially in appearance. According to this visual-attention characteristic, regions with complicated textures require more realistic detail to describe the topography, whereas regions such as farmland should be smooth and contain little noise. However, most existing single-image super-resolution (SISR) methods fail to exploit these properties and apply a uniform reconstruction strategy to the whole image. In this project, we propose a novel saliency-driven unequal single-image reconstruction network in which the demands of different regions during super-resolution (SR) are distinguished by saliency maps. First, we design a new gradient-based saliency analysis method that produces more accurate saliency maps from image-wise annotations; it exploits multiple receptive fields to extract both high-level and low-level features. Second, we propose a novel saliency-driven gate conditional generative adversarial network in which the saliency map serves as a medium throughout training: it acts as a pixel-wise condition in the generator to enhance the network's training capability. Additionally, we design a new loss function that combines a normalized content loss, a saliency-driven perceptual loss, and a gate-control adversarial loss to further refine the details of texture-complex areas in RSIs. We evaluate our algorithm against many state-of-the-art SR methods on a remote-sensing dataset; the experimental results show that our approach achieves the best results in salient areas as well as the best global quality and visual performance. This project is implemented in MATLAB.
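
As a rough illustration of the gradient-based, multi-receptive-field idea, the following MATLAB sketch fuses the gradient magnitude at a fine and a coarse Gaussian scale into a normalized saliency map. The file name, the two scales, and the equal fusion weights are placeholders chosen for illustration, not the actual design of the network.

    % Sketch of a gradient-based saliency map (illustrative simplification).
    I = im2double(rgb2gray(imread('rsi_sample.png')));  % hypothetical input RSI
    [Gmag, ~] = imgradient(I, 'sobel');                 % low-level gradient cue
    fine   = imgaussfilt(Gmag, 1);                      % small receptive field: local detail
    coarse = imgaussfilt(Gmag, 8);                      % large receptive field: regional context
    S = mat2gray(0.5*fine + 0.5*coarse);                % fused saliency map in [0, 1]
    imshow(S);

Texture-complex regions such as residential areas respond strongly at both scales and therefore receive high saliency, while smooth farmland yields low values.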
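
One common way to realize a pixel-wise condition, assumed here for illustration rather than confirmed by the project, is to concatenate the saliency map with the low-resolution input as an extra channel before the generator's first layer; the file name below is a placeholder.

    % Assumed form of the pixel-wise conditioning (illustrative only):
    LR = im2double(imread('rsi_lr.png'));        % hypothetical low-resolution input
    Sc = imresize(S, [size(LR,1) size(LR,2)]);   % match the saliency map to the LR size
    genInput = cat(3, LR, Sc);                   % H x W x (C+1) conditioned generator input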
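
A minimal sketch of how the three loss terms might be combined, assuming the saliency map S (in [0, 1]) weights the perceptual term and gates the adversarial term. The weighting scheme, the trade-off weights, and the inputs phiSR/phiHR (feature maps from a fixed network, resized to the image resolution) are all assumptions, not the project's actual definitions.

    function L = combinedLoss(SR, HR, S, phiSR, phiHR, dScore)
    % Hypothetical combined objective: content + perceptual + adversarial.
    % dScore: discriminator probability for the SR image, in (0, 1).
    lambdaP = 1e-2;  lambdaA = 1e-3;                  % assumed trade-off weights
    Lcontent = mean((SR(:) - HR(:)).^2);              % normalized content (MSE) loss
    w = 1 + S;                                        % assumed saliency emphasis
    Lperc = mean(w(:) .* (phiSR(:) - phiHR(:)).^2);   % saliency-driven perceptual loss
    Ladv = -mean(S(:)) * log(dScore + eps);           % gate-control adversarial term
    L = Lcontent + lambdaP*Lperc + lambdaA*Ladv;
    end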