Computing Confidence Measures and Marking Unreliable Predictions by Estimating Input Data Densities with MLPs


Contact
lkindermann [ at ] awi-bremerhaven.de

Abstract

In this paper we present a method to compute the distribution of input vectors with a standard MLP architecture. By training the network on all data vectors with a target output of 1 and on an additional set of random vectors with a target of 0, the network learns to approximate the local density of the seen input data at any point in the input space. While densities can be computed with high precision by a number of specialized algorithms, this fast and easily implemented method makes it simple to evaluate the outputs of a network used for function approximation or classification. If a network is queried with data outside the training set, the result is usually unpredictable. But determining whether the current point lies within the reliable area is a classification problem comparable to the main problem itself. By using three parallel networks of the same type and structure, the precision and validity of predictions can be evaluated as well with minimal additional effort: in addition to the network used for prediction, we use one to predict the absolute error and another to determine the input density as an alert signal.
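The density-approximation idea from the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the 2-D toy data, network size, learning rate, and cross-entropy training are illustrative assumptions. The essential point is only the labeling scheme — seen data vectors get target 1, uniformly drawn random vectors get target 0 — so the trained MLP outputs high values where training data was dense and low values for novel inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D toy data: the "seen" inputs cluster near the origin.
X_data = rng.normal(loc=0.0, scale=0.3, size=(200, 2))   # target 1
# Random background vectors covering the whole input space (target 0).
X_rand = rng.uniform(-3.0, 3.0, size=(200, 2))

X = np.vstack([X_data, X_rand])
y = np.concatenate([np.ones(200), np.zeros(200)])

# Minimal one-hidden-layer MLP (tanh hidden units, sigmoid output),
# trained by full-batch gradient descent on the cross-entropy loss.
H = 16
W1 = rng.normal(scale=0.5, size=(2, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 1)); b2 = np.zeros(1)

def net(X, W1, b1, W2, b2):
    h = np.tanh(X @ W1 + b1)
    out = (1.0 / (1.0 + np.exp(-(h @ W2 + b2)))).ravel()
    return out, h

lr = 1.0
for _ in range(3000):
    out, h = net(X, W1, b1, W2, b2)
    d_out = (out - y) / len(X)                  # sigmoid + cross-entropy delta
    gW2 = h.T @ d_out[:, None]
    gb2 = d_out.sum()
    d_h = d_out[:, None] * W2.T * (1.0 - h ** 2)  # backprop through tanh
    gW1 = X.T @ d_h
    gb1 = d_h.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# High output: the query lies in a region densely covered by training data.
# Low output: a novel input, so a prediction there should be distrusted.
inside, _ = net(np.array([[0.0, 0.0]]), W1, b1, W2, b2)
outside, _ = net(np.array([[2.5, 2.5]]), W1, b1, W2, b2)
print(inside[0], outside[0])
```

In the three-network scheme described above, this density network would run alongside the prediction network and an error-estimation network of the same architecture, with a low density output serving as the alert signal.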



Item Type
Conference (Conference paper)
Publication Status
Published
Event Details
Proceedings of the Sixth International Conference on Neural Information Processing (ICONIP'99), Perth.
Eprint ID
10420
Cite as
Kindermann, L., Lewandowski, A., Tagscherer, M. and Protzel, P. (1999): Computing Confidence Measures and Marking Unreliable Predictions by Estimating Input Data Densities with MLPs, Proceedings of the Sixth International Conference on Neural Information Processing (ICONIP'99), Perth.




