X-ray image segmentation is a crucial step in three-dimensional (3D) bone reconstruction, whose ultimate goal is to increase the effectiveness of computer-aided diagnosis, surgery and treatment planning. However, this segmentation task is challenging, particularly for complex structures of the human lower limb such as the patella, talus and pelvis. In this work, we present a multi-atlas fusion framework for the automatic segmentation of these complex bone regions from a single X-ray view. The first originality of the proposed approach lies in the use of a training dataset (multi-atlas) of co-registered, pre-segmented X-ray images of the aforementioned bone regions to estimate a collection of superpixels; this allows us to account for the nonlinear and local variability of the bone regions present in the training dataset and also simplifies the superpixel map pruning step of our strategy. The second originality is a novel entropy-based label propagation step that refines the resulting segmentation map toward the internal regions most likely to belong to the final consensus segmentation. A leave-one-out cross-validation procedure was performed on a dataset of 31 manually segmented radiographic images for each bone structure in order to rigorously evaluate the efficiency of the proposed method. The proposed method yielded more accurate segmentations than the probabilistic patch-based label fusion model (PB) and the classical patch-based majority voting fusion scheme (MV) under different registration strategies. Comparison with manual (gold-standard) segmentations showed that the classification accuracy of our unsupervised segmentation scheme is 93.79% for the patella, 88.3% for the talus and 85.02% for the pelvis, scores that fall within the range of accuracy levels of manual segmentations (owing to intra- and inter-observer variability).
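To illustrate the fusion and entropy concepts mentioned above, the following is a minimal, simplified sketch (not the paper's actual patch-based, superpixel-pruned pipeline): a per-pixel majority vote across co-registered atlas label maps, with the Shannon entropy of the per-pixel label distribution used to flag ambiguous pixels that a refinement step such as label propagation would revisit. The function name `fuse_labels` and the entropy threshold are hypothetical.

```python
import numpy as np

def fuse_labels(atlas_maps, entropy_threshold=0.5):
    """Simplified multi-atlas fusion sketch.

    atlas_maps: list of co-registered 2D integer label maps (same shape).
    Returns (consensus, ambiguous):
      consensus  - per-pixel majority-vote label map,
      ambiguous  - boolean map where the label distribution's Shannon
                   entropy exceeds the (hypothetical) threshold.
    """
    stack = np.stack(atlas_maps)                      # (n_atlases, H, W)
    labels = np.unique(stack)
    # Per-pixel vote count for each candidate label.
    votes = np.stack([(stack == l).sum(axis=0) for l in labels])  # (L, H, W)
    probs = votes / stack.shape[0]
    consensus = labels[np.argmax(votes, axis=0)]
    # Shannon entropy of the per-pixel label distribution (0 = full agreement).
    with np.errstate(divide="ignore", invalid="ignore"):
        ent = -np.sum(np.where(probs > 0, probs * np.log2(probs), 0.0), axis=0)
    ambiguous = ent > entropy_threshold
    return consensus, ambiguous
```

Pixels where all atlases agree have zero entropy and are kept as-is; high-entropy pixels mark the uncertain boundary regions where a refinement step is most useful.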