In R, there is a package called "randomForest" that you can use to fit random forests. By the way, overfitting needs to be handled in almost all prediction algorithms, such as logistic regression, neural networks, and decision trees (boosting, bagging, random forests). You need to split your data before you build your model: use the training dataset for model building and the validation dataset for model tuning. Then you can calculate fit statistics (misclassification rate, average squared error, ROC index, etc.) on the validation dataset to optimize the model complexity. Therefore, data splitting is necessary no matter which method you use.
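
A minimal sketch of that train/validation workflow with the randomForest package, assuming the built-in iris dataset and a 70/30 split (both are just illustrative choices, not part of the original answer):

    ## Illustrative sketch: split the data, fit a random forest on the
    ## training set, then compute a fit statistic on the validation set.
    library(randomForest)

    set.seed(42)
    n <- nrow(iris)
    train_idx <- sample(seq_len(n), size = round(0.7 * n))  # assumed 70/30 split
    train <- iris[train_idx, ]
    valid <- iris[-train_idx, ]

    ## Build the model on the training data only
    fit <- randomForest(Species ~ ., data = train, ntree = 500)

    ## Score the validation data and compute the misclassification rate,
    ## which you could compare across settings (ntree, mtry, etc.) for tuning
    pred <- predict(fit, newdata = valid)
    misclass_rate <- mean(pred != valid$Species)
    misclass_rate

You would repeat the last step for different model settings and keep the one with the best validation statistic.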