Using Neural Network Classification in XLMiner™: In XLMiner™, select Classification -> Neural Network. This brings up the following dialog box, where you specify the data range to be processed, the input variables, and the output variable.

Variables: This box lists all the variables present in the dataset. If the "First row contains headers" box is checked, the header row above the data is used to identify variable names.
Variables in input data: Select one or more variables as independent variables from the Variables box by clicking on the corresponding selection button. These variables constitute the predictor variables.
Output Variable: Select one variable as the dependent variable from the Variables box by clicking on the corresponding selection button. This is the variable being classified.
Click Next, and the following dialog box appears. Here you specify the architecture for the neural network.

Normalize input data: Normalizing the data (subtracting the mean and dividing by the standard deviation) is important to ensure that the distance measure accords equal weight to each variable -- without normalization, the variable with the largest scale will dominate the measure.
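The normalization described above (z-score standardization) can be sketched in a few lines. This is an illustrative Python sketch, not XLMiner's internal code; the `normalize` helper is a hypothetical name:

```python
def normalize(columns):
    """Z-score normalize each input column: subtract its mean and divide
    by its (population) standard deviation, so that every variable is on
    a comparable scale and no single variable dominates."""
    normalized = []
    for col in columns:
        mean = sum(col) / len(col)
        std = (sum((x - mean) ** 2 for x in col) / len(col)) ** 0.5
        normalized.append([(x - mean) / std for x in col])
    return normalized
```

After normalization, each column has mean 0 and standard deviation 1, regardless of its original scale.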
Number of hidden layers: Up to four hidden layers can be specified; see the introduction section for more detail on layers in a neural network (input, hidden and output).
# Nodes: Specify the number of nodes in each hidden layer. Selecting the number of hidden layers and the number of nodes is largely a matter of trial and error.
# Epochs: An epoch is one sweep through all the records in the training set.
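The relationship between epochs and records can be sketched as a pair of nested loops (an illustrative Python sketch, not XLMiner's code; `train` and `update_weights` are hypothetical names):

```python
def train(records, update_weights, n_epochs):
    # Each iteration of the outer loop is one epoch: a single sweep
    # through every record in the training set.
    for _ in range(n_epochs):
        for record in records:
            update_weights(record)
```

With 100 training records and 30 epochs, the weight-update step runs 3,000 times.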
Step size for gradient descent: This is the multiplying factor for the error correction during backpropagation; it is roughly equivalent to the learning rate for the neural network. A low value produces slow but steady learning; a high value produces rapid but erratic learning. Values for the step size typically range from 0.1 to 0.9.
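The slow-but-steady versus rapid-but-erratic trade-off can be seen on a toy one-dimensional problem. This is an illustration of gradient descent in general, not XLMiner's implementation:

```python
def descend(w, step_size, n_steps):
    # Gradient descent on f(w) = w**2, whose gradient is 2*w.
    # Each step applies the scaled correction: w -= step_size * gradient.
    for _ in range(n_steps):
        w -= step_size * 2 * w
    return w

slow = descend(1.0, 0.1, 20)  # small steps: slow, steady approach to the minimum at 0
fast = descend(1.0, 0.9, 20)  # large steps: overshoots 0 and oscillates around it
```

With step size 0.1 the weight shrinks smoothly toward 0; with 0.9 it overshoots the minimum on every step and alternates in sign.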
Weight change momentum: In each new round of error correction, some memory of the prior correction is retained, so that an outlier that crops up does not spoil the accumulated learning. The momentum value ranges from 0 to 2.
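A momentum update of this kind can be sketched as follows (illustrative Python, not XLMiner's code; `momentum_step` is a hypothetical name):

```python
def momentum_step(w, gradient, prev_delta, step_size, momentum):
    # Blend the new error correction with a fraction of the previous
    # one, so a single outlying record cannot wipe out accumulated
    # learning. Returns the updated weight and the correction applied,
    # which feeds into the next round as prev_delta.
    delta = -step_size * gradient + momentum * prev_delta
    return w + delta, delta
```

With momentum 0 this reduces to plain gradient descent; larger values give past corrections more influence over the current one.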
Error tolerance: The error in a particular iteration is back-propagated only if it is greater than the error tolerance. The error tolerance is typically a small value in the range 0 to 1; the default in XLMiner™ is 0.01.
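The gating rule amounts to a simple threshold check (an illustrative sketch; the function name is hypothetical):

```python
def should_backpropagate(error, tolerance=0.01):
    # Errors at or below the tolerance are treated as negligible and
    # are not back-propagated. 0.01 mirrors XLMiner's default.
    return abs(error) > tolerance
```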
Weight decay: To prevent over-fitting of the network on the training data, a "weight decay" is used to penalize the error in each iteration. If e is the error to be back-propagated, (e + w*e) is back-propagated instead, where w is the weight decay, which can be any value in the range 0 to 1.
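The penalized error described above is a one-line computation (illustrative sketch; the function name is hypothetical):

```python
def penalized_error(e, weight_decay):
    # Back-propagate (e + w*e) instead of the raw error e, where w is
    # the weight decay in the range 0 to 1. A decay of 0 leaves the
    # error unchanged.
    return e + weight_decay * e
```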
Cost function: XLMiner™ provides four options for the cost function -- squared error, cross entropy, maximum likelihood and perceptron convergence. The user can select the appropriate one.
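Two of these cost functions can be sketched for a single record, assuming a true class label y (0 or 1) and a predicted probability p of class 1. These are standard textbook forms, not necessarily XLMiner's exact internal formulas:

```python
import math

def squared_error(y, p):
    # Squared-error cost for one record.
    return (y - p) ** 2

def cross_entropy(y, p):
    # Cross-entropy cost for the same record; heavily penalizes
    # confident wrong predictions (p near 0 when y is 1, or vice versa).
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))
```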
Hidden layer sigmoid: The output of every hidden node passes through a sigmoid function. The standard sigmoid is the logistic function, whose range is 0 to 1; the symmetric sigmoid is the tanh function, whose range is -1 to 1.
Output layer sigmoid: The same two choices apply to the output layer: the standard (logistic) sigmoid has range 0 to 1, and the symmetric (tanh) sigmoid has range -1 to 1.
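The two sigmoid options can be written out directly (illustrative Python; the function names are hypothetical):

```python
import math

def standard_sigmoid(x):
    # Logistic function: output strictly between 0 and 1.
    return 1.0 / (1.0 + math.exp(-x))

def symmetric_sigmoid(x):
    # tanh function: output strictly between -1 and 1.
    return math.tanh(x)
```

Both squash any real-valued node input into a bounded range; the symmetric form is centered on 0 rather than 0.5.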
Click Next, and the following dialog appears.
Score training data: Select this option to show an assessment of the performance of the neural network in classifying the training data. The report is displayed according to your specifications - Detailed, Summary and Lift charts.
Score validation data: Select this option to show an assessment of the performance of the neural network in classifying the validation data. The report is displayed according to your specifications - Detailed, Summary and Lift charts.
Score test data: The options in this group let you apply the model for scoring to the test partition (if one was created earlier). The option "Score test data" is available only if the dataset contains a test partition. Select it to apply the model to the test data.
Score new data: The options in this group let you apply the model for scoring to entirely new data. Specify where the new data is located. See the Example of Discriminant Analysis for detailed instructions.
Score new data in database: See the Example of Discriminant Analysis for detailed instructions.
Click Finish, and the output will be displayed in a separate sheet.