Many robust regression estimators have been proposed that have a high finite-sample breakdown point, meaning, roughly, that a large proportion of points must be altered to drive the value of the estimator to infinity. Despite this, many of them can be inordinately influenced by two properly placed outliers. With one predictor, an estimator that appears to correct this problem to a fair degree, while maintaining good efficiency when standard assumptions are met, consists of checking for outliers using a projection-type method, removing any that are found, and applying the Theil-Sen estimator to the data that remain. When dealing with multiple predictors, there are two generalizations of the Theil-Sen estimator that might be used, but nothing is known about how their small-sample properties compare. Also, there are no results on testing the hypothesis of zero slopes, and no information about the effect on efficiency when outliers are removed. In terms of hypothesis testing, the more obvious percentile bootstrap method, used in conjunction with a slight modification of Mahalanobis distance, was found to avoid Type I error probabilities above the nominal level, but in some situations the actual Type I error probability can be substantially smaller than intended when the sample size is small. An alternative method is found to be more satisfactory.
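For readers unfamiliar with the single-predictor Theil-Sen estimator referenced above, the following is a minimal illustrative sketch (not the paper's implementation, and without the projection-based outlier-removal step): the slope estimate is the median of all pairwise slopes, and the intercept is the median residual given that slope. The function name and example data are invented for illustration.

```python
from itertools import combinations
from statistics import median

def theil_sen(x, y):
    """Simple (one-predictor) Theil-Sen regression: return (intercept, slope)."""
    # Median of all pairwise slopes, skipping pairs with tied x values.
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2)
              if x[j] != x[i]]
    slope = median(slopes)
    # Intercept: median of the residuals y - slope * x.
    intercept = median(yi - slope * xi for xi, yi in zip(x, y))
    return intercept, slope

# Toy data on the line y = 1 + 2x, with the last response made an outlier.
x = [0, 1, 2, 3, 4, 5]
y = [1, 3, 5, 7, 9, 50]
b0, b1 = theil_sen(x, y)  # the single outlier does not move the fit here
```

Because the slope is a median over all pairwise slopes, a single aberrant point contributes only a minority of those pairs, which is the source of the estimator's high breakdown point; the abstract's concern is that a small number of well-placed outliers can nonetheless exert substantial influence, motivating the preliminary outlier-removal step.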