We investigated how accurately, in the absence of vision, one can return to a 2D target location previously identified by a guided movement. A robotic arm guided the participant's hand to a target (locating motion) and away from it (homing motion). The participant then pointed freely toward the remembered target position. Two experiments manipulated the kinematics of the locating and homing motions separately. Some robot motions followed a straight path with the bell-shaped velocity profile typical of natural movements; others followed curved paths or had strong acceleration and deceleration peaks. Current motor theories of perception suggest that pointing should be more accurate when the locating and homing motions mimic natural movements. This expectation was not borne out by the results: amplitude and direction errors were almost independent of the kinematics of the locating and homing phases. In both experiments, participants tended to overshoot the target positions in the lateral directions. In addition, pointing movements toward oblique targets were attracted by the closest diagonal (oblique effect). This error pattern was robust not only with respect to the manner in which participants located the target position (perceptual equivalence), but also with respect to the manner in which they executed the pointing movements (motor equivalence). Given the similarity of these results to those of previous studies on visual pointing, we argue that the observed error pattern is determined primarily by the idiosyncratic properties of the mechanisms whereby space is represented internally.