Computing Confidence Measures and Marking Unreliable Predictions by Estimating Input Data Densities with MLPs
In this paper we present a method to compute the distribution of input vectors with a standard MLP architecture. By training the network on all data vectors with a target output of 1 and on an additional set of random vectors with a target of 0, the network learns to approximate the local density of the seen input data at any point in the input space. While densities can be computed with high precision by a number of specialized algorithms, this fast and easily implemented method makes it straightforward to evaluate the outputs of a network used for function approximation or classification. If a network is queried with data outside the training set, the result is usually unpredictable. But determining whether the current point lies within the reliable region is a classification problem comparable to the main problem itself. By using three parallel networks of the same type and structure, the precision and validity of predictions can be evaluated as well with minimal additional effort: in addition to the network used for prediction, we use one to predict the absolute error and another to determine the input density as an alert signal.
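The core idea of the density network can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 2-D Gaussian toy dataset, the network size, and the use of scikit-learn's `MLPRegressor` are all assumptions made for the example.

```python
# Sketch of the density-as-confidence idea: train an MLP with
# target 1 on real inputs and target 0 on random "background"
# vectors; its output then tracks where training data is dense.
# Dataset and architecture are illustrative choices only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# "Seen" input data: a 2-D Gaussian cloud around the origin.
X_data = rng.normal(loc=0.0, scale=0.5, size=(500, 2))

# Background: random vectors drawn uniformly over a larger box.
X_rand = rng.uniform(low=-5.0, high=5.0, size=(500, 2))

# Targets: 1 for seen data vectors, 0 for the random set.
X = np.vstack([X_data, X_rand])
y = np.concatenate([np.ones(500), np.zeros(500)])

density_net = MLPRegressor(hidden_layer_sizes=(32, 32),
                           max_iter=3000, random_state=0)
density_net.fit(X, y)

# Query a point inside the data cloud and one far outside it;
# a low output flags the query as lying in an unreliable region.
inside, outside = density_net.predict([[0.0, 0.0], [4.0, 4.0]])
print(inside > outside)
```

In the three-network scheme described above, this density network would run alongside the prediction network and an error network of the same structure, with its output used purely as an alert signal.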